Richard Rushall, British businessman (d. 1953) 1870 – Oscar Straus, Viennese composer and conductor (d. 1954) 1871 – Afonso Costa, Portuguese lawyer and politician, 59th Prime Minister of Portugal (d. 1937) 1872 – Ben Harney, American pianist and composer (d. 1938) 1877 – Rose Fyleman, English writer and poet (d. 1957) 1879 – Jimmy Hunter, New Zealand rugby player (d. 1962) 1882 – F. Burrall Hoffman, American architect, co-designed Villa Vizcaya (d. 1980) 1882 – Guy Kibbee, American actor and singer (d. 1956) 1884 – Molla Mallory, Norwegian-American tennis player (d. 1959) 1885 – Ring Lardner, American journalist and author (d. 1933) 1892 – Bert Smith, English international footballer (d. 1969) 1893 – Furry Lewis, American singer-songwriter and guitarist (d. 1981) 1893 – Ella P. Stewart, pioneering Black American pharmacist (d. 1987) 1895 – Albert Tessier, Canadian priest and historian (d. 1976) 1898 – Gus Sonnenberg, American football player and wrestler (d. 1944) 1900 – Gina Cigna, French-Italian soprano and actress (d. 2001) 1900 – Lefty Grove, American baseball player (d. 1975) 1900 – Henri Jeanson, French journalist and author (d. 1970) 1901–present 1903 – Empress Kōjun of Japan (d. 2000) 1904 – José Antonio Aguirre, Spanish lawyer and politician, 1st President of the Basque Country (d. 1960) 1905 – Bob Wills, American Western swing musician, songwriter, and bandleader (d. 1975) 1906 – Lou Costello, American actor and comedian (d. 1959) 1909 – Obafemi Awolowo, Nigerian lawyer and politician (d. 1987) 1909 – Stanisław Jerzy Lec, Polish poet and author (d. 1966) 1910 – Emma Bailey, American auctioneer and author (d. 1999) 1912 – Mohammed Burhanuddin, Indian spiritual leader, 52nd Da'i al-Mutlaq (d. 2014) 1913 – Ella Logan, Scottish-American singer and actress (d. 1969) 1917 – Donald Davidson, American philosopher and academic (d. 2003) 1917 – Will Eisner, American illustrator and publisher (d. 2005) 1917 – Frankie Howerd, English comedian (d. 1992) 1918 – Howard McGhee, American trumpeter (d. 1987) 1920 – Lewis Gilbert, English director, producer, and screenwriter (d. 2018) 1921 – Leo Bretholz, Austrian-American Holocaust survivor and author (d. 2014) 1923 – Ed McMahon, American comedian, game show host, and announcer (d. 2009) 1923 – Wes Montgomery, American guitarist and songwriter (d. 1968) 1924 – Ottmar Walter, German footballer (d. 2013) 1924 – William H. Webster, American lawyer and jurist, 14th Director of Central Intelligence 1926 – Ann Curtis, American swimmer (d. 2012) 1926 – Alan Greenspan, American economist and politician 1926 – Ray O'Connor, Australian politician, 22nd Premier of Western Australia (d. 2013) 1926 – Andrzej Wajda, Polish director, producer, and screenwriter (d. 2016) 1927 – William J. Bell, American screenwriter and producer (d. 2005) 1927 – Gordon Cooper, American engineer, pilot, and astronaut (d. 2004) 1927 – Gabriel García Márquez, Colombian journalist and author, Nobel Prize laureate (d. 2014) 1929 – Tom Foley, American lawyer and politician, 57th Speaker of the United States House of Representatives (d. 2013) 1929 – David Sheppard, English cricketer and bishop (d. 2005) 1930 – Lorin Maazel, French-American violinist, composer, and conductor (d. 2014) 1932 – Marc Bazin, Haitian lawyer and politician, 49th President of Haiti (d. 2010) 1932 – Bronisław Geremek, Polish historian and politician, Polish Minister of Foreign Affairs (d. 2008) 1933 – Ted Abernathy, American baseball player (d. 2004) 1933 – William Davis, German-English journalist and economist (d. 2019) 1933 – Augusto Odone, Italian economist and inventor of Lorenzo's oil (d. 2013) 1934 – Red Simpson, American singer-songwriter (d. 2016) 1935 – Ron Delany, Irish runner and coach 1935 – Derek Kevan, English footballer (d. 2013) 1936 – Bob Akin, American race car driver and journalist (d. 2002) 1936 – Marion Barry, American lawyer and politician, 2nd Mayor of the District of Columbia (d. 2014) 1936 – Choummaly Sayasone, Laotian politician, 5th President of Laos 1937 – Ivan Boesky, American businessman 1937 – Valentina Tereshkova, Russian general, pilot, and astronaut 1938 – Keishu Tanaka, Japanese politician, 17th Japanese Minister of Justice 1939 – Kit Bond, American lawyer and politician, 47th Governor of Missouri 1939 – Adam Osborne, Thai-Indian engineer and businessman, founded the Osborne Computer Corporation (d. 2003) 1940 – Ken Danby, Canadian painter (d. 2007) 1940 – Joanna Miles, French-born American actress 1940 – R. H. Sikes, American golfer 1940 – Willie Stargell, American baseball player and coach (d. 2001) 1940 – Jeff Wooller, English accountant and banker 1941 – Peter Brötzmann, German saxophonist and clarinet player 1941 – Marilyn Strathern, Welsh anthropologist and academic 1942 – Ben Murphy, American actor 1944 – Richard Corliss, American journalist and critic (d. 2015) 1944 – Kiri Te Kanawa, New Zealand soprano and actress 1944 – Mary Wilson, American singer (d. 2021) 1945 – Angelo Castro Jr., Filipino actor and journalist (d. 2012) 1946 – David Gilmour, English singer-songwriter and guitarist 1946 – Richard Noble, Scottish race car driver and businessman 1947 – Kiki Dee, English singer-songwriter 1947 – Dick Fosbury, American high jumper 1947 – Anna Maria Horsford, American actress 1947 – Rob Reiner, American actor, director, producer, and activist 1947 – Jean Seaton, English historian and academic 1947 – John Stossel, American journalist and author 1948 – Stephen Schwartz, American composer and producer 1949 – Shaukat Aziz, Pakistani economist and politician, 15th Prime Minister of Pakistan 1949 – Martin Buchan, Scottish footballer and manager 1950 – Arthur Roche, English archbishop 1951 – Gerrie Knetemann, Dutch cyclist (d. 2004) 1952 – Denis Napthine, Australian politician, 47th Premier of Victoria 1953 – Madhav Kumar Nepal, Nepali banker and politician, 34th Prime Minister of Nepal 1953 – Carolyn Porco, American astronomer and academic 1953 – Phil Alvin, American singer-songwriter and guitarist 1954 – Jeff Greenwald, American author, photographer, and monologist 1954 – Harald Schumacher, German footballer and manager 1955 – Cyprien Ntaryamira, Burundian politician, 5th President of Burundi (d. 1994) 1955 – Alberta Watson, Canadian actress (d. 2015) 1956 – Peter Roebuck, English cricketer, journalist, and sportscaster (d. 2011) 1956 – Steve Vizard, Australian television host, actor, and producer 1960 – Sleepy Floyd, American basketball player and coach 1962 – Alison Nicholas, British golfer 1963 – D. L. Hughley, American actor, producer, and screenwriter 1964 – Linda Pearson, Scottish sport shooter 1965 – Allan Bateman, Welsh rugby player 1965 – Jim Knight, English politician 1966 – Alan Davies, English comedian, actor and screenwriter 1967 – Julio Bocca, Argentinian ballet dancer and director 1967 – Connie Britton, American actress 1967 – Glenn Greenwald, American journalist and author 1967 – Shuler Hensley, American actor and singer 1968 – Moira Kelly, American actress and director 1971 – Darrick Martin, American basketball player and coach 1972 – Shaquille O'Neal, American basketball player, actor, and rapper 1972 – Jaret Reddick, American singer-songwriter, guitarist, and actor 1973 – Michael Finley, American basketball player 1973 – Peter Lindgren, Swedish guitarist and songwriter 1973 – Greg Ostertag, American basketball player 1973 – Trent Willmon, American singer-songwriter and guitarist 1974 – Guy Garvey, English singer-songwriter and guitarist 1974 – Matthew Guy, Australian politician 1974 – Brad Schumacher, American swimmer 1974 – Beanie Sigel, American rapper 1975 – Aracely Arámbula, Mexican actress and singer 1975 – Yannick Nézet-Séguin, Canadian pianist and conductor 1976 – Ken Anderson, American wrestler and actor 1977 – Nantie Hayward, South African cricketer 1977 – Giorgos Karagounis, Greek international footballer 1977 – Shabani Nonda, DR Congolese footballer 1977 – Marcus Thames, American baseball player and coach 1978 – Sage Rosenfels, American football player 1978 – Chad Wicks, American wrestler 1979 – Clint Barmes, American baseball player 1979 – Érik Bédard, Canadian baseball player 1979 – David Flair, American wrestler 1979 – Tim Howard, American soccer player 1980 – Emílson Cribari, Brazilian footballer 1981 – Ellen Muth, American actress 1983 – Andranik Teymourian, Armenian-Iranian footballer 1984 – Daniël de Ridder, Dutch footballer 1984 – Eskil Pedersen, Norwegian politician 1984 – Chris Tomson, American drummer 1985 – Bakaye Traoré, French-Malian footballer 1986 – Jake Arrieta, American baseball player 1986 – Francisco Cervelli, Venezuelan-Italian baseball player 1986 – Ross Detwiler, American baseball player 1986 – Eli Marienthal, American actor 1986 – Charlie Mulgrew, Scottish footballer 1987 – Kevin-Prince Boateng, Ghanaian-German footballer 1987 – Chico Flores, Spanish footballer 1988 – Agnes Carlsson, Swedish singer 1988 – Marina Erakovic, New Zealand tennis player 1988 – Simon Mignolet, Belgian footballer 1989 – Agnieszka Radwańska, Polish tennis player 1990 – Derek Drouin, Canadian athlete 1991 – Lex Luger, American keyboard player and producer 1991 – Emma McDougall, English footballer (d. 2013) 1991 – Tyler Gregory Okonma, American rapper 1993 – Andrés Rentería, Colombian footballer 1994 – Marcus Smart, American basketball player 1995 – Georgi Kitanov, Bulgarian footballer 1996 – Christian Coleman, American sprinter 1996 – Tyrell Fuimaono, Australian rugby player 1996 – Timo Werner, German footballer Deaths Pre-1600 190 – Liu Bian (poisoned by Dong Zhuo) (b. 176) 653 – Li Ke, prince of the Tang Dynasty (b. 619) 766 – Chrodegang, Frankish bishop and saint 903 – Lu Guangqi, Chinese official and chancellor 903 – Su Jian, Chinese official and chancellor 1070 – Ulric I, Margrave of Carniola 1251 – Rose of Viterbo, Italian saint (b. 1235) 1353 – Roger Grey, 1st Baron Grey de Ruthyn 1466 – Alvise Loredan, Venetian admiral and statesman (b. 1393) 1490 – Ivan the Young, Ruler of Tver (b. 1458) 1491 – Richard Woodville, 3rd Earl Rivers 1531 – Pedro Arias Dávila, Spanish explorer and diplomat (b. 1440) 1601–1900 1616 – Francis Beaumont, English playwright (b. 1584) 1754 – Henry Pelham, English politician, Prime Minister of the United Kingdom (b. 1694) 1758 – Henry Vane, 1st Earl of Darlington, English politician, Lord Lieutenant of Durham (b. 1705) 1764 – Philip Yorke, 1st Earl of Hardwicke, English lawyer and politician, Lord Chancellor of the United Kingdom (b. 1690) 1796 – Guillaume Thomas François Raynal, French historian and author (b. 1713) 1836 – Deaths at the Battle of the Alamo: James Bonham, American lawyer and soldier (b. 1807) James Bowie, American colonel (b. 1796) Davy Crockett, American soldier and politician (b. 1786) William B. Travis, American lieutenant colonel and lawyer (b. 1809) 1854 – Charles Vane, 3rd Marquess of Londonderry, Irish colonel and diplomat, Under-Secretary of State for War and the Colonies (b. 1778) 1866 – William Whewell, English priest, historian, and philosopher (b. 1794) 1867 – Charles Farrar Browne, American-English author and educator (b. 1834) 1888 – Louisa May Alcott, American novelist and poet (b. 1832) 1895 – Camilla Collett, Norwegian novelist and activist (b. 1813) 1899 – Kaʻiulani of Hawaii (b. 1875) 1900 – Gottlieb Daimler, German engineer and businessman, co-founded Daimler-Motoren-Gesellschaft (b. 1834) 1901–present 1905 – John Henninger Reagan, American surveyor, judge, and politician, 3rd Confederate States of America Secretary of the Treasury (b. 1818) 1905 – Makar Yekmalyan, Armenian composer (b. 1856) 1919 – Oskars Kalpaks, Latvian colonel (b. 1882) 1920 – Ömer Seyfettin, Turkish author and educator (b. 1884) 1932 – John Philip Sousa, American conductor and composer (b. 1854) 1933 – Anton Cermak, Czech-American lawyer and politician, 44th Mayor of Chicago (b. 1873) 1935 – Oliver Wendell Holmes Jr., American colonel, lawyer, and jurist (b. 1841) 1939 – Ferdinand von Lindemann, German mathematician and academic (b. 1852) 1941 – Francis Aveling, Canadian priest, psychologist, and author (b. 1875) 1941 – Gutzon Borglum, American sculptor and academic, designed Mount Rushmore (b. 1867) 1948 – Ross Lockridge Jr., American author, poet, and academic (b. 1914) 1948 – Alice Woodby McKane, First Black woman doctor in Savannah, Georgia (b. 1865) 1950 – Albert François Lebrun, French engineer and politician, 15th President of France (b. 1871) 1951 – Ivor Novello, Welsh singer-songwriter and actor (b. 1893) 1951 – Volodymyr Vynnychenko, Ukrainian playwright and politician, Prime Minister of Ukraine (b. 1880) 1952 – Jürgen Stroop, German general (b. 1895) 1955 – Mammad Amin Rasulzade, Azerbaijani scholar and politician (b. 1884) 1961 – George Formby, English singer-songwriter and actor (b. 1904) 1964 – Paul of Greece (b. 1901) 1965 – Margaret Dumont, American actress (b. 1889) 1967 – John Haden Badley, English author and educator, founded the Bedales School (b. 1865) 1967 – Nelson Eddy, American actor and singer (b. 1901) 1967 – Zoltán Kodály, Hungarian composer, linguist, and philosopher (b. 1882) 1970 – William Hopper, American actor (b. 1915) 1973 – Pearl S. Buck, American novelist, essayist, short story writer, Nobel Prize laureate (b. 1892) 1974 – Ernest Becker, American anthropologist and author (b. 1924) 1976 – Maxie Rosenbloom, American boxer (b. 1903) 1977 – Alvin R. Dyer, American religious leader (b. 1903) 1978 – Dennis Viollet, English-American soccer player and manager (b. 1933) 1981 – George Geary, English cricketer and coach (b. 1893) 1981 – Rambhau Mhalgi, Indian politician and member of the Lok Sabha (b. 1921) 1982 – Ayn Rand, Russian-American philosopher, author, and playwright (b. 1905) 1984 – Billy Collins Jr., American boxer (b. 1961) 1984 – Martin Niemöller, German pastor and theologian (b. 1892) 1984 – Homer N. Wallin, American admiral (b. 1893) 1984 – Henry Wilcoxon, Dominican-American actor and producer (b. 1905) 1986 – Georgia O'Keeffe, American painter (b. 1887) 1988 – Mairéad Farrell, Provisional IRA volunteer (b. 1957) 1988 – Daniel McCann, Provisional IRA volunteer (b. 1957) 1988 – Seán Savage, Provisional IRA volunteer (b. 1965) 1994 – Melina Mercouri, Greek actress and politician, 9th Greek Minister of Culture (b. 1920) 1997 – Cheddi Jagan, Guyanese politician, 4th President of Guyana (b. 1918) 1997 – Michael Manley, Jamaican soldier, pilot, and politician, 4th Prime Minister of Jamaica (b. 1924) 1997 – Ursula Torday, English author (b. 1912) 1999 – Isa bin Salman Al Khalifa, Bahraini king (b. 1933) 2000 – John Colicos, Canadian actor (b. 1928) 2002 – Bryan Fogarty, Canadian ice hockey player (b. 1969) 2004 – Hercules, American wrestler (b. 1957) 2004 – Frances Dee, American actress (b. 1909) 2005 – Hans Bethe, German-American physicist and academic, Nobel Prize laureate (b. 1906) 2005 – Danny Gardella, American baseball player and trainer (b. 1920) 2005 – Tommy Vance, English radio host (b. 1943) 2005 – Teresa Wright, American actress (b. 1918) 2005 – Gladys Marín, Chilean activist and political figure (b. 1938) 2006 – Anne Braden, American journalist and activist (b. 1924) 2006 – Kirby Puckett, American baseball player and sportscaster (b. 1960) 2007 – Jean Baudrillard, French photographer and theorist (b. 1929) 2007 – Ernest Gallo, American businessman, co-founded E & J Gallo Winery (b. 1909) 2008 – Peter Poreku Dery, Ghanaian cardinal (b. 1918) 2009 – Francis Magalona, Filipino rapper, producer, and actor (b. 1964) 2010 – Endurance Idahor, Nigerian footballer (b. 1984) 2010 – Mark Linkous, American singer-songwriter, guitarist, and producer (b. 1962) 2010 – Betty Millard, American philanthropist and activist (b. 1911) 2012 – Francisco Xavier do Amaral, East Timorese politician, 1st President of East Timor (b. 1937) 2012 – Donald M. Payne, American businessman and politician (b. 1934) 2012 – Helen Walulik, American baseball player (b. 1929) 2013 – Chorão, Brazilian singer-songwriter (Charlie Brown Jr.) (b. 1970) 2013 – Stompin' Tom Connors, Canadian singer-songwriter and guitarist (b. 1936) 2013 – Alvin Lee, English singer-songwriter and guitarist (b. 1944) 2013 – W. Wallace Cleland, American biochemist and academic (b. 1930) 2014 – Alemayehu Atomsa, Ethiopian educator and politician (b. 1969) 2014 – Frank Jobe, American soldier and surgeon (b. 1925) 2014 – Sheila MacRae, English-American actress, singer, and dancer (b. 1921) 2014 – Martin Nesbitt, American lawyer and politician (b. 1946) 2014 – Manlio Sgalambro, Italian philosopher, author, and poet (b. 1924) 2015 – Fred Craddock, American minister and academic (b. 1928) 2015 – Ram Sundar Das, Indian lawyer and politician, 18th Chief Minister of Bihar (b. 1921) 2015 – Enrique "Coco" Vicéns, Puerto Rican-American basketball player and politician (b. 1926) 2016 – Nancy
of the Amazon before reaching the Pongo de Manseriche. It is formed from a multitude of water-courses which descend the slopes of the Ecuadorian Andes south of the gigantic volcano of Sangay; but it soon reaches the plain, which commences where it receives its Cusulima branch. The Morona is navigable for small craft for about 300 miles above its mouth, but it is extremely tortuous. Canoes may ascend many of its branches, especially the Cusulima and the Miazal, the latter almost to the base of Sangay. The Morona has been
led to the construction of Colossus, the world's first operational, programmable electronic computer, and he established the Royal Society Computing Machine Laboratory at the University of Manchester, which produced the world's first working, stored-program electronic computer in 1948, the Manchester Baby. Education and early life Newman was born Maxwell Herman Alexander Neumann in Chelsea, London, England, to a Jewish family, on 7 February 1897. His father was Herman Alexander Neumann, originally from the German city of Bromberg (now in Poland), who had emigrated with his family to London at the age of 15. Herman worked as a secretary in a company, and married Sarah Ann Pike, an Irish schoolteacher, in 1896. The family moved to Dulwich in 1903, and Newman attended Goodrich Road school, then City of London School from 1908. At school, he excelled in classics and in mathematics. He played chess and the piano well. Newman won a scholarship to study mathematics at St John's College, Cambridge in 1915, and in 1916 gained a First in Part I of the Cambridge Mathematical Tripos. World War I Newman's studies were interrupted by World War I. His father was interned as an enemy alien after the start of the war in 1914, and upon his release he returned to Germany. In 1916, Herman changed his name by deed poll to the anglicised "Newman" and Sarah did likewise in 1920. In January 1917, Newman took up a teaching post at Archbishop Holgate's Grammar School in York, leaving in April 1918. He spent some months in the Royal Army Pay Corps, and then taught at Chigwell School for six months in 1919 before returning to Cambridge. He was called up for military service in February 1918, but claimed conscientious objection due to his beliefs and his father's country of origin, and thereby avoided any direct role in the fighting. 
Between the wars Graduation Newman resumed his interrupted studies in October 1919, and graduated in 1921 as a Wrangler (equivalent to a First) in Part II of the Mathematical Tripos, with distinction in Schedule B (the equivalent of Part III). His dissertation considered the use of "symbolic machines" in physics, foreshadowing his later interest in computing machines. Early academic career On 5 November 1923, Newman was elected a Fellow of St John's. He worked on the foundations of combinatorial topology, and proposed that a notion of equivalence be defined using only three elementary "moves". Newman's definition avoided difficulties that had arisen from previous definitions of the concept. The more than twenty papers he published established his reputation as an "expert in modern topology". Newman wrote Elements of the topology of plane sets of points, a work on general
sounded out by Frank Adcock in connection with the Government Code and Cypher School at Bletchley Park. Newman was cautious, concerned to ensure that the work would be sufficiently interesting and useful, and there was also the possibility that his father's German nationality would rule out any involvement in top-secret work. The potential issues were resolved by the summer, and he agreed to arrive at Bletchley Park on 31 August 1942. Newman was invited by F. L. (Peter) Lucas to work on Enigma but decided to join Tiltman's group working on Tunny. Tunny Newman was assigned to the Research Section and set to work on a German teleprinter cipher known as "Tunny". He joined the "Testery" in October. Newman enjoyed the company but disliked the work and found that it was not suited to his talents. He persuaded his superiors that Tutte's method could be mechanised, and he was assigned to develop a suitable machine in December 1942. Shortly afterwards, Edward Travis (then operational head of Bletchley Park) asked Newman to lead research into mechanised codebreaking. The Newmanry When the war ended, Newman was presented with a silver tankard inscribed 'To MHAN from the Newmanry, 1943–45'. Heath Robinson Construction started in January 1943, and the first prototype was delivered in June 1943. It was operated in Newman's new section, termed the "Newmanry", which was housed initially in Hut 11 and staffed by Newman himself, Donald Michie, two engineers, and 16 Wrens. The Wrens nicknamed the machine the "Heath Robinson", after the cartoonist of the same name, who drew humorous drawings of absurd mechanical devices. Colossus The Robinson machines were limited in speed and reliability. Tommy Flowers of the Post Office Research Station, Dollis Hill, had experience of thermionic valves and built an electronic machine, the Colossus computer, which was installed in the Newmanry. This was a great success, and ten were in use by the end of the war.
Later academic career Fielden Chair, Victoria University of Manchester In September 1945, Newman was appointed head of the Mathematics Department and to the Fielden Chair of Pure Mathematics at the University of Manchester. Computing Machine Laboratory Newman lost no time in establishing the renowned Royal Society Computing Machine Laboratory at the University. In February 1946, he wrote to John von Neumann, expressing his desire to build a computing machine. The Royal Society approved Newman's grant application in July 1946. Frederic Calland Williams and Thomas Kilburn, experts in electronic circuit design, were recruited from the Telecommunications Research Establishment. Kilburn and Williams built the Baby, the world's first electronic stored-program digital computer, based on Alan Turing's and John von Neumann's ideas. After the Automatic Computing Engine suffered delays and setbacks, Turing accepted Newman's offer and joined the Computing Machine Laboratory in May 1948 as Deputy Director (there being no Director). Turing joined Kilburn and Williams to work on the Baby's successor, the Manchester Mark I. Collaboration between the University and Ferranti later produced the Ferranti Mark I, the first mass-produced computer to go on sale. Retirement Newman retired in 1964 to live in Comberton, near Cambridge. After Lyn's death in 1973, he married Margaret Penrose, widow of his friend Lionel Penrose, father of Sir Roger Penrose. He continued to do research on combinatorial topology during a period when England was a major centre of activity, notably at Cambridge under the leadership of Christopher Zeeman. Newman made important contributions that led to an invitation to present his work at the 1962 International Congress of Mathematicians in Stockholm at the age of 65, and he proved a Generalized Poincaré conjecture for topological manifolds in 1966. At the age of 85, Newman began to suffer from Alzheimer's disease. He died in Cambridge two years later.
Honours Fellow of the Royal Society, elected 1939 Royal Society Sylvester Medal, awarded 1958 London Mathematical Society, President 1949–1951 LMS De Morgan Medal, awarded 1962 D.Sc. University of Hull, awarded 1968 The Newman Building at Manchester was named in his honour. The building housed the pure mathematicians from the Victoria University of Manchester between moving out of the Mathematics Tower in 2004 and July 2007, when the School of Mathematics moved into its new Alan Turing Building, where a lecture room is named in his honour. In 1946, Newman declined the offer of an OBE, considering it derisory: Alan Turing had been appointed an OBE six months earlier, and Newman felt that this was inadequate recognition of Turing's contribution to winning the war, referring to it as the "ludicrous treatment of Turing". See also List of pioneers in computer science External links Archival materials The Max Newman Digital Archive has digital copies of materials from the library of St. John's College, Cambridge.
in computer program termination analysis Measuring coalgebra, a coalgebra constructed from two algebras Measure (Apple), an iOS augmented reality app Other uses Measure (album), by Matt Pond PA, 2000, and its title track Measure (bartending) or jigger, a bartending tool used to measure liquor Measure (journal), an international journal of formal poetry "Measures" (Justified), a 2012 episode of the TV series Justified Measure (music), or bar, in musical notation Measure (typography), line length in characters per line Coal measures, the coal-bearing part of the Upper Carboniferous System The Measure (SA), an American punk rock band Bar (Music), a time segment in musical notation See also Countermeasure, a
the Red Line. Buses equipped with bike racks at the front (including the Silver Line) may always accommodate bicycles, up to the capacity limit of the racks. The MBTA states that 95% of its buses are now equipped with bike racks; trackless trolleys still lack them. Due to congestion and tight clearances, bicycles are banned from Park Street, Downtown Crossing, and Government Center stations at all times. However, compact folding bicycles are permitted on all MBTA vehicles at all times, provided that they are kept completely folded for the duration of the trip, including passage through faregates. Gasoline-powered vehicles, bike trailers, and Segways are prohibited. No special permit is required to take a bicycle onto an MBTA vehicle, but bicyclists are expected to follow the rules and hours of operation. Cyclists under 16 years old must be accompanied by a parent or legal guardian. Detailed rules, along with an explanation of how to use front-of-bus bike racks and bike parking, are on the MBTA website. The MBTA says that over 95% of its stations are equipped with bike racks, many of them sheltered from the weather. In addition, over a dozen stations are equipped with "Pedal & Park" fully enclosed areas protected with video surveillance and controlled door access, for improved security. To obtain access, a personally registered CharlieCard must be used. Registration is done online and requires a valid email address and the serial number of the CharlieCard. All bike parking is free of charge. Parking The MBTA operates park and ride facilities at 103 locations with a total capacity of 55,000 automobiles, and owns the largest number of off-street paid parking spaces in New England. The number of spaces at stations with parking varies from a few dozen to over 2,500. The larger lots and garages are usually near a major highway exit, and most lots fill up during the morning rush hour.
There are some 22,000 spaces on the southern portion of the commuter rail system, 9,400 on the northern portion, and 14,600 at subway stations. The parking fee ranges from $4 to $7 per day, and overnight parking (maximum 7 days) is permitted at some stations. Management of a number of parking lots owned by the MBTA is handled by a private contractor. The 2012 contract with LAZ Parking (which was not its first) was terminated in 2017 after employees were discovered "skimming" revenue; the company paid $5.5 million to settle the case. A new contract with stronger performance incentives and anti-fraud penalties was then awarded to Republic Parking System of Tennessee. Customers parking in MBTA-owned and operated lots with existing cash "honor boxes" can pay for parking online or via phone while in their cars or once they board a train, bus, or commuter boat. The MBTA switched from ParkMobile to PayByPhone as its provider for mobile parking payments by smartphone. Monthly parking permits are available, offering a modest discount. Detailed parking information by station is available online, including prices, estimated vacancy rate, and number of accessible and bicycle parking slots. The MBTA has a policy for electric vehicle charging stations in its parking spaces, but does not yet have such facilities available. From time to time the MBTA has made various agreements with companies that contribute to commuting options. One company the MBTA selected was Zipcar; the MBTA provides Zipcar with a limited number of parking spaces at various subway stations throughout the system. Hours of operation Traditionally, the MBTA has stopped running around 1 am each night, even though bars and clubs in most areas of Boston are open until 2 am. Like nearly all subways worldwide, the MBTA's subway does not have parallel express and local tracks, so much rail maintenance can only be done when the trains are not running.
An MBTA spokesperson has said, "with a 109-year-old system you have to be out there every night" to do necessary maintenance. The MBTA did experiment with "Night Owl" substitute bus service from 2001 to 2005, but abandoned it because of insufficient ridership, citing a $7.53 per-rider cost to keep the service open, five times the cost per passenger of an average bus route. A modified form of the MBTA's previous "Night Owl" service was experimentally reinstated starting in the spring of 2014 – this time, all subway lines were proposed to run until 3 am on weekends, along with the 15 most heavily used bus lines and the para-transit service "The Ride". Starting March 28, 2014, the late-night service began operation on a one-year trial basis, with service continuation depending on late-night ridership and on possible corporate sponsorship. Late-night ridership was stable, and much higher than on the earlier failed experimental service. However, it remained unclear whether and on what basis the program might be extended past its first year. The extended hours program was not implemented on the MBTA commuter rail operations. In early 2016, the MBTA decided that late-night service would be canceled because of lack of funding. The last night of late-night service was March 19, 2016, with the final train leaving at 2 a.m. In 2018, the MBTA further tried "Early Morning and Late Night Bus Service Pilots". In June 2019, a year after the trials, the board voted to make some changes to the schedule which would allow further late-night service to be incorporated long term. Ridership During Fiscal Year 2013, the entire MBTA system had a typical weekday passenger ridership of 1,297,650. The MBTA's rapid transit lines (Red, Green, Orange, and Blue) accounted for 59% of all rides, buses accounted for 30%, and commuter rail accounted for 10%. The MBTA's ferries and paratransit accounted for the remaining 1% of rides.
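The FY2013 mode shares above can be turned into approximate daily boardings per mode; the total and percentages come from the text, while the breakdown itself is a quick illustrative calculation, not an official MBTA figure.

```python
# Breaking down the FY2013 typical-weekday ridership by the mode shares
# stated above. Shares and the total are from the text; the per-mode
# boardings are rounded estimates derived here, not published counts.

TOTAL_WEEKDAY_RIDERSHIP = 1_297_650
MODE_SHARES = {
    "rapid transit": 0.59,       # Red, Green, Orange, and Blue Lines
    "bus": 0.30,
    "commuter rail": 0.10,
    "ferry/paratransit": 0.01,
}

boardings = {mode: round(TOTAL_WEEKDAY_RIDERSHIP * share)
             for mode, share in MODE_SHARES.items()}
# e.g. bus ≈ 389,295 boardings; commuter rail ≈ 129,765
```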
Passenger ridership has grown steadily over the years; between 2010 and 2013, it rose 4.6%, adding 57,000 daily passengers to the system. Funding Fares and fare collection The MBTA has various fare structures for its various types of service. The CharlieCard electronic farecard is accepted on the subway and bus systems, but not on commuter rail, ferry, or paratransit services. Passengers pay for subway and bus rides at faregates in station entrances or fareboxes in the front of vehicles; MBTA employees manually check tickets on the commuter rail and ferries. Since the 1980s, the MBTA has offered discounted monthly passes on all modes for the convenience of daily commuters and other frequent riders. One-day and seven-day passes, intended primarily for tourists, are available for buses, subway, and inner harbor ferries. The MBTA has periodically raised fares to match inflation and keep the system financially solvent. A substantial increase effective July 2012 raised public ire, including an "Occupy the MBTA" protest. A transportation funding law passed in 2013 limits MBTA fare increases to 7% every two years. Subsequent fare increases took place in 2014, 2016, and 2019. Several local politicians, including Boston Mayor Michelle Wu, Representative Ayanna Pressley, and Senator Edward J. Markey, have proposed to eliminate MBTA fares. Subway and bus All subway trips (Green Line, Blue Line, Orange Line, Red Line, Ashmont-Mattapan Line, and the Waterfront section of the Silver Line) cost $2.40 for all users. Local bus and trackless trolley fares (including the Washington Street section of the Silver Line) are $1.70 for all users. All transfers between subway lines are free with all fare media. Passengers using CharlieCards can transfer free from a subway to a bus, and from a bus to a subway for the difference in price ("step-up fare").
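The CharlieCard transfer rules above can be sketched as a small function; the fare values come from the text, but the function and its encoding are an illustrative assumption, not an official MBTA fare algorithm. Working in cents avoids floating-point rounding issues.

```python
# Sketch of the CharlieCard transfer rules described above, in cents.
# Fares are from the text; the encoding itself is a hypothetical model.

SUBWAY_FARE = 240      # $2.40
LOCAL_BUS_FARE = 170   # $1.70

def charliecard_total(first, transfer_to=None):
    """Total charged in cents for a trip with an optional transfer."""
    total = SUBWAY_FARE if first == "subway" else LOCAL_BUS_FARE
    if first == "bus" and transfer_to == "subway":
        # Bus -> subway: pay the "step-up" difference to the subway fare.
        total += SUBWAY_FARE - LOCAL_BUS_FARE
    # Subway -> subway, subway -> bus, and bus -> bus transfers are free.
    return total

charliecard_total("bus", "subway")  # 170 + 70 = 240 cents
```

A bus ride followed by a subway transfer thus costs the same as a single subway fare, which matches the "step-up fare" description.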
CharlieTicket holders can transfer free between buses, but not between subway and bus, except between rapid transit and the Washington Street section of the Silver Line. Paying directly with cash is only available on buses, Green Line surface stops, and the Ashmont-Mattapan Line; the higher CharlieTicket price is charged. The MBTA operates "Inner Express" and "Outer Express" buses to suburbs outside the subway system. Inner Express bus trips cost $4.25; Outer Express trips cost $5.25. Free transfers are available to the subway and local buses with a CharlieCard, and to local buses with a CharlieTicket. CharlieTickets are available from ticket vending machines in MBTA rapid transit stations. CharlieCards are not dispensed by the machines, but are available free of charge on request at most MBTA Customer Service booths in stations, or at the CharlieCard Store at Downtown Crossing station. As given out, the CharlieCards are "empty", and must have value added at an MBTA ticket machine before they can be used. The fare system, including on-board and in-station fare vending machines, was purchased from German-based Scheidt and Bachmann, which developed the technology. The CharlieCards were developed by Gemalto and later by Giesecke & Devrient. In 2006 electronic fares replaced metal tokens, which had been used on and off on transit systems in Boston for over a century. Until 2007, not all subway fares were identical – passengers were not charged for boarding outbound Green Line trains at surface stops, while double fares were charged for the outer ends of the Green Line D branch and the Red Line Braintree branch. As part of a general fare hike effective January 1, 2007, the MBTA eliminated these inconsistent fares. Subway and bus fare history Commuter Rail Commuter rail fares are on a zone-based system, with fares dependent on the distance from downtown. 
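A zone-based schedule like this is naturally modeled as a lookup table. In the sketch below, the fares for Zones 1A, 1, and 10 and the $3.00 cash-on-board surcharge match the figures quoted for the system; the intermediate zones are deliberately left out rather than guessed, and the function itself is an illustrative assumption.

```python
# Partial sketch of the zone-based commuter rail fare schedule. Only the
# fares stated in the text are included; other zones are omitted rather
# than invented. Not an official MBTA fare table.

ZONE_FARES = {          # one-way fares to/from Zone 1A, in dollars
    "1A": 2.40,         # downtown-area stations, same as a subway fare
    "1": 5.75,          # roughly 5-10 miles from downtown
    "10": 14.50,        # roughly 60 miles (Wickford Junction, RI)
}
CASH_ON_BOARD_SURCHARGE = 3.00  # paying cash where a fare machine exists

def commuter_rail_fare(zone, paid_cash_on_board=False):
    """One-way fare for a trip to/from Zone 1A from the given zone."""
    fare = ZONE_FARES[zone]
    if paid_cash_on_board:
        fare += CASH_ON_BOARD_SURCHARGE
    return fare
```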
Rides between Zone 1A stations – South Station, Back Bay, most of the Fairmount Line, and eight other stations within several miles of downtown – cost $2.40, the same as a subway fare with a CharlieCard. Fares for other stations range from $5.75 from Zone 1 (~5–10 miles from downtown) to $14.50 from Zone 10 (~60 miles). All Massachusetts stations are Zone 8 or closer; only T.F. Green Airport and Wickford Junction in Rhode Island are Zones 9 and 10. Interzone fares – for trips that do not go to Zone 1A – are offered at a substantial discount to encourage riders to take the commuter rail for less common commuting patterns where transit is not usually taken. Discounted monthly passes are available for all trips; 10-ride passes at full price are also available for trips to Zone 1A. All monthly passes include unlimited trips on the subway and local bus; some outer-zone monthly passes also offer free use of express buses and ferries. A cash-on-board surcharge of $3.00 is added for trips originating from stations with fare vending machines. MBTA boat The Inner Harbor Ferry costs $3.25 per ride, and is covered by a Zone 1A monthly commuter rail pass. Single rides cost $8.50 from Hull or Hingham to Boston, $17.00 from Hull or Hingham to Logan Airport, and $13.75 from Boston to Logan Airport. The Ride Fares on The Ride, the MBTA's paratransit program, are structured differently from other modes. Passengers using The Ride must maintain an account with the MBTA in order to pay for service. Fares are $3.35 for "ADA trips" originating within a set distance of fixed-route bus or subway service and booked in advance, and $5.60 for "premium trips" outside the mandated area. Discounted fares Discounted fares, as well as discounted monthly local bus and subway passes, are available to seniors over 65 and permanently disabled passengers, who use a special photo CharlieCard (called "Senior ID" and "Transportation Access Pass", respectively).
Holders of these passes are also entitled to 50% off commuter rail fares. Passengers who are legally blind ride for free on all MBTA services (including express buses and the commuter rail) with a "Blind Access Card". Children under 12 ride for free with an adult (up to 2 per adult). Military personnel, state police officers, police officers and firefighters from the MBTA service area, and certain government officials (Commonwealth Department of Public Utilities employees and state elevator inspectors) ride at no charge upon presentation of proper ID, or if dressed in official work uniforms. Middle school and high school students receive the aforementioned discounts on fares. Student discounts require a "Student CharlieCard" or "S-Card" issued through the holder's school, which is valid year-round. College students are not generally eligible for reduced fares, but some colleges offer a "Semester Pass" program. A special "Youth Pass" program was introduced in 2017, allowing young adults under 25 years old who reside in participating cities or towns and are enrolled in specific low-income programs to pay reduced fares. Budget Since the "forward funding" reform in 2000, the MBTA has been funded primarily through 16% of the state sales tax excluding the meals tax (with a minimum dollar amount guarantee), which is set at 6.25% statewide, and is therefore equal to 1% of taxable non-meal purchases statewide. The authority is also funded by passenger fares and formula assessments on the cities and towns in its service area (excepting those which are assessed for the MetroWest Regional Transit Authority). Supplemental income is obtained from its paid parking lots, renting space to retail vendors in and around stations, rents from utility companies using MBTA rights of way, selling surplus land and movable property, advertising on vehicles and properties, and federal operating subsidies for special programs.
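The sales-tax arithmetic above is easy to verify: 16% of a 6.25% tax works out to 1% of the underlying purchases. The constants below are from the text; the function is just a worked check, not a model of the actual appropriation mechanics (which include a minimum dollar guarantee).

```python
# Checking the funding arithmetic stated above: the MBTA receives 16% of
# the 6.25% statewide sales tax, i.e. 1% of taxable non-meal purchases.
# The minimum-dollar guarantee mentioned in the text is not modeled here.

SALES_TAX_RATE = 0.0625   # 6.25% statewide sales tax
MBTA_SHARE = 0.16         # 16% of that revenue goes to the MBTA

def mbta_take(purchases):
    """MBTA revenue from a given amount of taxable non-meal purchases."""
    return purchases * SALES_TAX_RATE * MBTA_SHARE

mbta_take(1000.00)  # about $10.00, i.e. 1% of purchases
```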
A May 2019 report found the MBTA had a maintenance backlog of approximately $10 billion, which it hopes to clear by 2032 by increasing spending on capital projects. The Capital Investment Program is a rolling 5-year plan which programs capital expenses. The draft FY2009-2014 CIP allocates $3,795M, including $879M in projects funded from non-MBTA state sources (required for Clean Air Act compliance), and $299M in projects with one-time federal funding from the American Recovery and Reinvestment Act of 2009. Capital projects are paid for by federal grants, allocations from the general budget of the Commonwealth of Massachusetts (for legal commitments and expansion projects) and MBTA bonds (which are paid off through the operating budget). The FY2014 budget includes $1.422 billion for operating expenses and $443.8M in debt and lease payments. The FY2010 budget was supplemented by $160 million in sales tax revenue when the statewide rate was raised from 5% to 6.25%, to avoid service cuts or a fare increase in a year when deferred debt payments were coming due. Capital improvements and planning process The Boston Metropolitan Planning Organization is responsible for overall regional surface transportation planning. As required by federal law for projects to be eligible for federal funding (except earmarks), the MPO maintains a fiscally constrained 20+ year Regional Transportation Plan for surface transportation expansion, the current edition of which is called Journey to 2030. The required 4-year MPO plan is called the Transportation Improvement Plan. The MBTA maintains its own 25-year capital planning document, called the Program for Mass Transportation, which is fiscally unconstrained. The agency's 4-year plan is called the Capital Improvement Plan; it is the primary mechanism by which money is actually allocated to capital projects. 
Major capital spending projects must be approved by the MBTA Board, and except for unexpected needs, are usually included in the initial CIP. In addition to federal funds programmed through the Boston MPO, and MBTA capital funds derived from fares, sales tax, municipal assessments, and other minor internal sources, the T receives funding from the Commonwealth of Massachusetts for certain projects. The state may fund items in the State Implementation Plan (SIP) – such as the Big Dig mitigation projects – which is the plan required under the Clean Air Act to reduce air pollution. (All of Massachusetts is designated as a clean air "non-attainment" zone.) Projects underway and future plans Blue Line There is a proposal to extend the Blue Line northward to Lynn, with two potential extension routes having been identified. One proposed path would run through marshland alongside the existing Newburyport/Rockport commuter rail line, while the other would extend the line along the remainder of the BRB&L right of way. In addition, the MBTA has committed to designing an extension of the line's southern terminus westward to Charles/MGH, where it would connect with the Red Line. This was one of the mitigation measures the Commonwealth of Massachusetts agreed to in order to offset increased automobile emissions from the Big Dig, but it was later replaced in this agreement by other projects. Green Line To settle a lawsuit with the Conservation Law Foundation to mitigate increased automobile emissions from the Big Dig, the Commonwealth of Massachusetts agreed to extend the Green Line north to Somerville and Medford, two suburbs currently under-served by the MBTA. This plan starts at a relocated Lechmere station and terminates at College Avenue in Medford and Union Square in Somerville. The original settlement-imposed deadline was December 31, 2014. Expected daily ridership is 8,420.
After projected costs increased to $3 billion, the project was halted in 2015 and scaled back. The revised project broke ground in June 2018 and is expected to serve passengers beginning in late 2021. Another mitigation project in the initial settlement was restoration of service on the E branch between Heath Street and Arborway/Forest Hills. A revised settlement agreement resulted in the substitution of other projects with similar air quality benefits. The state Executive Office of Transportation promised to consider other transit enhancements in the Arborway corridor. Orange and Red Lines In October 2013, MassDOT announced plans for a $1.3 billion subway car order for the Orange and Red Lines, which would replace and expand the existing car fleets and add more frequent service. The MassDOT Board awarded a $566.6 million contract to the China-based manufacturer CNR (which became part of CRRC the following year) to build 404 replacement railcars for the Orange Line and Red Line. The other bidders were Bombardier Transportation, Kawasaki Heavy Industries, and Hyundai Rotem. CNR began building the cars at a new manufacturing plant in Springfield, Massachusetts, with initial deliveries expected in 2018 and all cars in service by 2023. The Board forwent federal funding so that the contract could specify the cars be built in Massachusetts, in order to create a local railcar manufacturing industry. In addition to the new rolling stock, the $1.3 billion allocated for the project will pay for testing, signal improvements, and expanded maintenance facilities, as well as other related expenses. Sixty percent of the cars' components are sourced from the United States. Replacement of the signal systems, which will increase reliability and allow more frequent trains, is expected to be complete by 2022, at a total cost of $218 million for both lines. Commuter rail There are several proposed extensions to current commuter rail lines.
An extension of the Stoughton Line known as South Coast Rail is proposed to serve Fall River and New Bedford. Critics argue that building the extension does not make economic sense. An extension of the Providence Line past Providence to T. F. Green Airport and Wickford Junction in Rhode Island opened in 2012. The Rhode Island Department of Transportation is also studying the feasibility of serving existing Amtrak stations in Kingston and Westerly, as well as constructing new stations in Cranston, East Greenwich, and West Davisville. Federal funding has also been provided for preliminary planning of a new station in Pawtucket. In September 2009, CSX Transportation and the Commonwealth of Massachusetts finalized a $100 million agreement to purchase CSX's Framingham to Worcester tracks, as well as some other track, to improve service on the Framingham/Worcester Line. A liability issue that had held up the agreement was resolved. There is also a project underway to upgrade the Fitchburg Line with cab signaling and to construct a second track along a stretch near Acton which is shared with freight traffic, so that the Fitchburg-to-Boston trip will take only about an hour. Completion is expected in December 2015. The state of New Hampshire created the New Hampshire Rail Transit Authority and allocated money to build platforms at Nashua and Manchester. An article in The Eagle-Tribune claimed that Massachusetts was negotiating to buy property which has the potential to extend the Haverhill Line to Plaistow, New Hampshire. Massachusetts agreed in 2005 to make improvements on the Fairmount Line part of its legally binding commitment to mitigate increased air pollution from the Big Dig. These improvements, including four new infill stations, were supposed to be complete by December 31, 2011.
found "safety is not the priority at the T, but it must be." The report said "there is a general feeling that fiscal controls over the years may have gone too far, which coupled with staff cutting has resulted in the inability to accomplish required maintenance and inspections, or has hampered work keeping legacy system assets fully functional." COVID-19 pandemic In February 2020, the COVID-19 pandemic began to impact Massachusetts. When the stay-at-home order was issued the following month, businesses closed or sent staff to work from home, and people were advised to avoid riding public transit unless necessary. At the lowest point, MBTA ridership dropped about 78% on buses, 92% on the subway, 71% on paratransit, and 97% on commuter rail. Bus and subway ran on a modified Saturday schedule; commuter rail was on a reduced schedule and ferries were shut down completely. To facilitate social distancing from drivers, buses started running fare-free with rear-door-only boarding, passengers were required to wear face masks (except small children and people with relevant medical conditions), and the agency began frequently sanitizing vehicles and stations. Driver availability was limited as some employees contracted the virus. The T received $827 million in federal aid for FY2020 and FY2021 to make up for increased costs and lost revenue. In June, the MBTA announced that commuter rail tickets and passes valid as of March 10 would be valid for 90 days, starting on June 22. It also made various fare changes to encourage riders to shift away from potentially crowded buses or subways, including discounted ten-ride tickets, half-price tickets for youth, and Zone 1A fares extended to Lynn and River Works stations. 2021 budget proposal Due to the COVID-19 pandemic, ridership on the MBTA has declined by 87%, which has forced Massachusetts legislators and the MBTA to consider implementing a plan that would eliminate weekend commuter rail services and shut commuter rail down after 9 p.m.
on weeknights, eliminate 25 bus routes, and stop subway and bus services at midnight, among other changes to scale back services. This plan, if implemented through a vote by the Fiscal and Management Control Board, would save Massachusetts more than $130 million. The loss in services could mean that up to 1,700 riders would be unable to take the bus and 733 riders would be unable to take the train. Supporters believe the plan is warranted, as ridership has dropped sharply during the pandemic and it is not feasible to continue providing services that would go unused, especially when alternatives to public transportation, such as personal vehicles, exist. By saving money through service cuts, the city plans to fund services once the pandemic has ended. Supporters claim that reduced services would still be sufficient for those who rely on public transport during the pandemic. Changes would not be implemented all at once; rather, they would be introduced gradually, with the earliest coming in January 2021 and others coming as late as summer 2021. This would allow the MBTA to adjust service levels based on ridership. On the other hand, opponents argue that reducing services would make it harder for riders, many of whom are low-income or people of color, to get to essential jobs. Riders would be forced to find other modes of transportation, which could mean using personal vehicles, leading to increased dependence on the automobile. Opponents argue that public transportation should be treated as a public good, which means asking wealthier people and corporations to pay their share for the upkeep of transportation as a way to achieve mobility justice. Services Buses The MBTA bus system is the nation's sixth largest by ridership and comprises over 150 routes across the Greater Boston area.
The area served by the MBTA's bus operations is somewhat larger than its subway and light rail service area, but is significantly smaller than that served by the MBTA's commuter rail operation. At least eight other regional transit authorities also provide bus services within that larger area, these being the Rhode Island Public Transit Authority, Brockton Area Transit Authority, Cape Ann Transportation Authority, Greater Attleboro Taunton Regional Transit Authority, Lowell Regional Transit Authority, Merrimack Valley Regional Transit Authority, Montachusett Regional Transit Authority, and Worcester Regional Transit Authority. All of these authorities have their own fare structures and some subcontract operation to private bus companies. In many cases, their buses serve as feeders to the MBTA commuter rail. Within MBTA's bus service area, transfers from the subway are free if using a CharlieCard (for local buses); transfers to the subway require paying the difference between bus and the higher subway fare (for local buses; if not using a CharlieCard, full subway fare must be paid in addition to full bus fare). Bus-to-bus transfers (for local buses) are free unless paying cash. Many of the outlying routes run express along major highways to downtown. The buses are colored yellow on maps and in station decor. The Silver Line is the MBTA's first service designated as bus rapid transit (BRT), even though it lacks many of the characteristics of bus rapid transit. The first segment began operations in 2002, replacing the 49 bus, which in turn replaced the Washington Street Elevated section of the Orange Line. A full subway fare was charged, with free transfers to the subways downtown until January 1, 2007, when the fare system was revised to categorize the service as a "bus" for fare purposes. The "Washington Street" segment runs along various downtown streets, and mostly in dedicated bus lanes on Washington Street itself. 
Two Washington Street routes start at Nubian station in Roxbury; the SL5 terminates at Downtown Crossing on Temple Place, while the SL4 terminates at South Station on Essex Street. The "Waterfront" section opened at the end of 2004, and connects South Station to Logan Airport with route SL1 via the Ted Williams Tunnel, and to South Boston (Design Center area) with route SL2. A new service to Chelsea opened on April 21, 2018, via the same tunnel that the SL1 uses, stopping at the Blue Line's Airport station. The buses that run the Waterfront section are 2004-05 dual-mode buses, operating as trackless trolleys in the Silver Line tunnel and on diesel power outside. Service to Logan Airport began in June 2005. The Waterfront segment is classified as a "subway" for fare purposes. A transfer between segments is possible at South Station. A "Phase III" tunneled segment was proposed to connect the two segments for through service, but it was controversial due to its high cost and the fact that many did not consider Phase I to be an adequate replacement for the old Elevated. All Phase III tunneling proposals have been suspended due to lack of funds, as has the Urban Ring, which was intended to expand upon existing crosstown buses. The MBTA contracts with private bus companies to provide subsidized service on certain routes outside of the usual fare structure. These are known collectively as the HI-RIDE Commuter Bus service, and are not numbered or mapped in the same way as integral bus services. Four routes connecting to Harvard Station (Red Line) still run as trackless trolleys; there was once a much larger trackless trolley system. (See Trolleybuses in Greater Boston.) In FY2005, there were on average 363,500 weekday boardings of MBTA-operated buses and trackless trolleys (not including the Silver Line), or 31.8% of the MBTA system. Another 4,400 boardings (0.38%) occurred on subsidized bus routes operated by private carriers.
In June 2020, amid the COVID-19 pandemic, the MBTA began providing real-time information on crowding. The information is available on the MBTA website, on E Ink screens, and in the Transit app. Initially, the service was only available on bus routes 1, 15, 16, 22, 23, 31, 32, 109, and 110, and it remains unclear whether further routes or transit modes will be added, or whether the feature will become permanent. The feature is not entirely new: Google Maps has provided such data since 2019. Subway The subway system has three heavy rail rapid transit lines (the Red, Orange and Blue Lines), and two light rail lines (the Green Line and the Ashmont–Mattapan High-Speed Line, the latter designated an extension of the Red Line). The system operates according to a spoke-hub distribution paradigm, with the lines running radially between central Boston and its environs. It is common usage in Boston to refer to all four of the color-coded rail lines which run underground as "the subway" or "the T", regardless of the actual railcar equipment used. All four subway lines cross downtown, forming a quadrilateral configuration, and the Orange and Green Lines (which run approximately parallel in that district) also connect directly at two stations just north of downtown. The Red Line and Blue Line are the only pair of subway lines without a direct transfer connection to each other. Because the various subway lines do not consistently run in any given compass direction, it is customary to refer to line directions as "inbound" or "outbound". Inbound trains travel towards the four downtown transfer stations, and outbound trains travel away from these hub stations. The Green Line has four branches in the west: B (Boston College), C (Cleveland Circle), D (Riverside), and E (Heath Street).
The A branch formerly went to Watertown, filling in the north-to-south letter assignment pattern, and the E branch formerly continued beyond Heath Street to Arborway. The Red Line has two branches in the south, Ashmont and Braintree, named after their terminal stations. The colors were assigned on August 26, 1965, in conjunction with design standards developed by Cambridge Seven Associates, and have served as the primary identifier for the lines since the 1964 reorganization of the MTA into the MBTA. The Orange Line is so named because it used to run along Orange Street (now lower Washington Street), as the former "Orange Street" also was the street that joined the city to the mainland through Boston Neck in colonial times; the Green Line because it runs adjacent to parts of the Emerald Necklace park system; the Blue Line because it runs under Boston Harbor; and the Red Line because its northernmost station used to be at Harvard University, whose school color is crimson. The four transit lines all use standard rail gauge, but are otherwise incompatible; trains of one line would have to be modified to run on another. Orange and Blue Line trains are similar enough that modification of some Blue Line trains for operation on the Orange Line was considered, although ultimately rejected for cost reasons. Also, some of the new Blue Line cars from Siemens Transportation were tested on the Orange Line after hours, before acceptance for revenue service on the Blue Line. There are no direct track connections between lines, except between the Red Line and Ashmont-Mattapan High Speed Line, but all except the Blue Line have little-used connections to the national rail network, which have been used for deliveries of railcars and supplies. Opened in September 1897, the four-track-wide segment of the Green Line tunnel between Park Street and Boylston stations was the first subway in the United States, and has been designated a National Historic Landmark.
The downtown portions of what are now the Green, Orange, Blue, and Red Line tunnels were all in service by 1912. Additions to the rapid transit network occurred in most decades of the 1900s, and continue in the 2000s with the addition of Silver Line bus rapid transit and planned Green Line expansion. (See History and Future plans sections.) In FY2005, there were on average 628,400 weekday boardings on the rapid transit and light rail lines (including the Silver Line Bus Rapid Transit), or 55.0% of the MBTA system. On January 29, 2014, the MBTA completed a countdown clock display system, alerting passengers to arriving trains, at all 53 heavy rail subway stations (the Red, Blue, and Orange Lines). The MBTA introduced countdown clocks in underground Green Line stations during 2015. Unlike the other countdown clocks, which count down in minutes, the Green Line clocks show how many stops away the next train is. Commuter rail The MBTA Commuter Rail system is a regional rail network that reaches from Boston into the suburbs of eastern Massachusetts. The system consists of twelve main lines, three of which have two branches. The rail network operates according to a spoke-hub distribution paradigm, with the lines running radially outward from the city of Boston. Eight of the lines converge at South Station, with four of these passing through Back Bay station. The other four converge at North Station. There is no passenger connection between the two sides; the Grand Junction Railroad is used for non-revenue equipment moves accessing the maintenance facility. The North–South Rail Link has been proposed to connect the two halves of the system; it would be constructed under the Central Artery tunnel of the Big Dig. Special MBTA trains are run over the Franklin Line and the Providence/Stoughton Line to Foxborough station for New England Patriots home games and other events at Gillette Stadium.
The CapeFLYER intercity service, operated on summer weekends, uses MBTA equipment and operates over the Middleborough/Lakeville Line. Amtrak runs regularly scheduled intercity rail service over four lines: the Lake Shore Limited over the Framingham/Worcester Line, Acela Express and Northeast Regional services over the Providence/Stoughton Line, and the Downeaster over sections of the Lowell Line and Haverhill Line. Freight trains run by Pan Am Southern, Pan Am Railways, CSX Transportation, the Providence and Worcester Railroad, and the Fore River Railroad also use parts of the network. The first commuter rail service in the United States was operated over what is now the Framingham/Worcester Line beginning in 1834. Within the next several decades, Boston was the center of a massive rail network, with eight trunk lines and dozens of branches. By 1900, ownership was consolidated under the Boston and Maine Railroad to the north, the New York Central Railroad to the west, and the New York, New Haven and Hartford Railroad to the south. Most branches and one trunk line – the former Old Colony Railroad main – had their passenger services discontinued during the middle of the 20th century. In 1964, the MBTA was formed to subsidize the failing suburban railroad operations, with an eye towards converting many to extensions of the existing rapid transit system. The first unified branding of the system was applied on October 8, 1974, with "MBTA Commuter Rail" naming and purple coloration analogous to the four subway lines. The system continued to shrink – mostly with the loss of marginal lines with one daily round trip – until 1981. The system has been expanded since, with four lines restored (Fairmount Line in 1979, Old Colony Lines in 1997, and Greenbush Line in 2007), six extended, and a number of stations added and rebuilt, especially on the Fairmount Line. Several further expansions are planned or proposed.
The South Coast Rail project, for which preliminary construction began in 2014, would extend the Stoughton section of the Providence/Stoughton Line to Taunton, with two branches to New Bedford and Fall River. Extensions of the Providence/Stoughton Line to Kingston, the Middleborough/Lakeville Line to Buzzards Bay, and the Lowell Line into New Hampshire are also proposed. Infill stations at West Station and South Salem are under construction or planned. Each commuter rail line has up to eleven fare zones, numbered 1A and 1 through 10. Riders are charged based on the number of zones they travel through. Tickets can be purchased on the train, from ticket counters or machines in some rail stations, or with a mobile app. If a local vendor or ticket machine is available, riders will pay a surcharge for paying with cash on board. Fares range from $2.25 to $12.50, with multi-ride and monthly passes available. In 2016, the system averaged 122,600 daily riders, making it the fourth-busiest commuter rail system in the nation. The MBTA commuter rail network was the first
For example, a positive pion (π+) is made of one up quark and one down antiquark; and its corresponding antiparticle, the negative pion (π−), is made of one up antiquark and one down quark. Because mesons are composed of quarks, they participate in both the weak and strong interactions. Mesons with net electric charge also participate in the electromagnetic interaction. Mesons are classified according to their quark content, total angular momentum, parity and various other properties, such as C-parity and G-parity. Although no meson is stable, those of lower mass are nonetheless more stable than the more massive, and hence are easier to observe and study in particle accelerators or in cosmic ray experiments. The lightest group of mesons is less massive than the lightest group of baryons, meaning that they are more easily produced in experiments, and thus exhibit certain higher-energy phenomena more readily than do baryons. But mesons can be quite massive: for example, the J/Psi meson (J/ψ) containing the charm quark, first seen in 1974, is about three times as massive as a proton, and the upsilon meson (ϒ) containing the bottom quark, first seen in 1977, is about ten times as massive. History From theoretical considerations, in 1934 Hideki Yukawa predicted the existence and the approximate mass of the "meson" as the carrier of the nuclear force that holds atomic nuclei together. If there were no nuclear force, all nuclei with two or more protons would fly apart due to electromagnetic repulsion. Yukawa called his carrier particle the meson, from μέσος mesos, the Greek word for "intermediate", because its predicted mass was between that of the electron and that of the proton, which has about 1,836 times the mass of the electron. Either Yukawa or Carl David Anderson, who discovered the muon, had originally named the particle the "mesotron", but the name was corrected by the physicist Werner Heisenberg (whose father was a professor of Greek at the University of Munich).
Heisenberg pointed out that there is no "tr" in the Greek word "mesos". The first candidate for Yukawa's meson, in modern terminology known as the muon, was discovered in 1936 by Carl David Anderson and others in the decay products of cosmic ray interactions. The "mu meson" had about the right mass to be Yukawa's carrier of the strong nuclear force, but over the course of the next decade, it became evident that it was not the right particle. It was eventually found that the "mu meson" did not participate in the strong nuclear interaction at all, but rather behaved like a heavy version of the electron, and was eventually classed as a lepton like the electron, rather than a meson. In making this choice, physicists decided that properties other than particle mass should control their classification. There were years of delays in subatomic particle research during World War II (1939–1945), with most physicists working on applied projects for wartime necessities. When the war ended in August 1945, many physicists gradually returned to peacetime research. The first true meson to be discovered was what would later be called the "pi meson" (or pion). This discovery was made in 1947 by Cecil Powell, Hugh Muirhead, César Lattes, and Giuseppe Occhialini, who were investigating cosmic ray products at the University of Bristol in England, based on photographic films placed in the Andes mountains. Some of those mesons had about the same mass as the already-known "mu meson", yet seemed to decay into it, leading physicist Robert Marshak to hypothesize in 1947 that it was actually a new and different meson. Over the next few years, more experiments showed that the pion was indeed involved in strong interactions. The pion (as a virtual particle) is also believed to be the primary force carrier for the nuclear force in atomic nuclei. Other mesons, such as the virtual rho mesons, are involved in mediating this force as well, but to a lesser extent.
Following the discovery of the pion, Yukawa was awarded the 1949 Nobel Prize in Physics for his predictions. In the past, the word meson was sometimes used to mean any force carrier, such as "the Z0 meson", which is involved in mediating the weak interaction. However, this use has fallen out of favor, and mesons are now defined as particles composed of pairs of quarks and antiquarks. Overview Spin, orbital angular momentum, and total angular momentum Spin (quantum number S) is a vector quantity that represents the "intrinsic" angular momentum of a particle. It comes in increments of 1/2 ħ. The ħ is often dropped because it is the "fundamental" unit of spin, and it is implied that "spin 1" means "spin 1 ħ". (In some systems of natural units, ħ is chosen to be 1, and therefore does not appear in equations.) Quarks are fermions—specifically in this case, particles having spin 1/2 (S = 1/2). Because spin projections vary in increments of 1 (that is, 1 ħ), a single quark has a spin vector of length 1/2, and has two spin projections (Sz = +1/2 and Sz = −1/2). Two quarks can have their spins aligned, in which case the two spin vectors add to make a vector of length S = 1 and three spin projections (Sz = +1, Sz = 0, and Sz = −1), called the spin-1 triplet. If two quarks have unaligned spins, the spin vectors add up to make a vector of length S = 0 and only one spin projection (Sz = 0), called the spin-0 singlet. Because mesons are made of one quark and one antiquark, they can be found in triplet and singlet spin states. The latter are called scalar mesons or pseudoscalar mesons, depending on their parity (see below). There is another quantity of quantized angular momentum, called the orbital angular momentum (quantum number L), that is the angular momentum due to quarks orbiting each other, and comes in increments of 1 ħ. The total angular momentum (quantum number J) of a particle is the combination of intrinsic angular momentum (spin) and orbital angular momentum.
It can take any value from J = |L − S| up to J = |L + S|, in increments of 1. Particle physicists are most interested in mesons with no orbital angular momentum (L = 0), therefore the two groups of mesons most studied are the S = 1; L = 0 and S = 0; L = 0, which corresponds to J = 1 and J = 0, although they are not the only ones. It is also possible to obtain J = 1 particles from S = 0 and L = 1. How to distinguish between the S = 1, L = 0 and S = 0, L = 1 mesons is an active area of research in meson spectroscopy. P-parity P-parity is left-right parity, or spatial parity, and was the first of several "parities" discovered, and so is often called just "parity". If the universe were reflected in a mirror, most laws of physics would be identical—things would behave the same way regardless of what we call "left" and what we call "right". This concept of mirror reflection is called parity (P). Gravity, the electromagnetic force, and the strong interaction all behave in the same way regardless of whether or not the universe is reflected in a mirror, and thus are said to conserve parity (P-symmetry). However, the weak interaction does distinguish "left" from "right", a phenomenon called parity violation (P-violation). Based on this, one might think that, if the wavefunction for each particle (more precisely, the quantum field for each particle type) were simultaneously mirror-reversed, then the new set of wavefunctions would perfectly satisfy the laws of physics (apart from the weak interaction). It turns out that this is not quite true: In order for the equations to be satisfied, the wavefunctions of certain types of particles have to be multiplied by −1, in addition to being mirror-reversed. Such particle types are said to have negative or odd parity (P = −1, or alternatively P = −), whereas the other particles are said to have positive or even parity (P = +1, or alternatively P = +).
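The angular momentum addition rule above, J running from |L − S| up to L + S in integer steps, can be sketched in a few lines of Python (an illustrative aside, not part of the original article; the function name allowed_j is an assumption):

```python
# Sketch: enumerate the allowed total angular momentum values J
# for a quark-antiquark system with total spin S and orbital
# angular momentum L, using the rule |L - S| <= J <= L + S.

def allowed_j(s: int, l: int) -> list[int]:
    """Return the possible integer J values for integer S and L."""
    return list(range(abs(l - s), l + s + 1))

# The two most-studied meson groups: S = 1, L = 0 gives J = 1,
# and S = 0, L = 0 gives J = 0.
assert allowed_j(1, 0) == [1]
assert allowed_j(0, 0) == [0]

# S = 0, L = 1 also yields J = 1, which is why J alone cannot
# distinguish these states in meson spectroscopy.
assert allowed_j(0, 1) == [1]
```

For aligned spins with one unit of orbital angular momentum (S = 1, L = 1), the same rule gives three possibilities, J = 0, 1, or 2.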
For mesons, parity is related to the orbital angular momentum by the relation P = (−1)^(L+1), where the (−1)^L factor is a result of the parity of the corresponding spherical harmonic of the wavefunction. The "+1" comes from the fact that, according to the Dirac equation, a quark and an antiquark have opposite intrinsic parities. Therefore, the intrinsic parity of a meson is the product of the intrinsic parities of the quark (+1) and antiquark (−1). As these are different, their product is −1, and so it contributes the "+1" that appears in the exponent. As a consequence, all mesons with no orbital angular momentum (L = 0) have odd parity (P = −1). C-parity C-parity is only defined for mesons that are their own antiparticle (i.e. neutral mesons). It represents whether or not the wavefunction of the meson remains the same under the interchange of their quark with their antiquark. If the wavefunction is unchanged under the interchange, the meson is "C even" (C = +1). On the other hand, if the wavefunction changes sign under the interchange, the meson is "C odd" (C = −1).
on Superheroes", and issue #151 "Son of the Ultimate Addenda". Karma The game's equivalent of experience points is Karma, a pool of points initially determined by the sum of a character's three mental attributes (Reason, Intuition, and Psyche). The basic system allows players to increase their chances of success at most tasks by spending points of Karma. For example, a player who wants to make sure he hits a villain in a critical situation can spend however many Karma points are necessary to raise the dice roll to the desired result. The referee distributes additional Karma points at the end of game sessions, typically as rewards for accomplishing heroic goals such as defeating villains, saving innocents, and foiling crimes. Karma can also be lost for unheroic actions such as fleeing from a villain or failing to stop a crime. In fact, in a notable departure from many RPGs, but strongly in keeping with the genre, all Karma is lost if a hero kills someone or allows someone to die. In the Advanced Game, Karma points can also be spent to permanently increase character attributes and powers. Game mechanics Two primary game mechanics drive the game: column shifts and colored results. Both influence the difficulty of an action. A column shift is used when a character is trying a hard or easy action. A column shift to the left indicates a penalty, while a shift to the right indicates a bonus. The column for each ability is divided into four colors: white, green, yellow, and red. A white result is always a failure or unfavorable outcome. In most cases, getting a green result is all that is needed to succeed at a particular action. Yellow and red results usually indicate more favorable results that could knock back, stun, or even kill an opponent. However, the GM can determine that succeeding at a hard task might require a yellow or red result. 
Additional rules in the "Campaign Book" of the Basic Set, and the subsequent Advanced Set, use the same game mechanic to resolve non-violent tasks. Official game supplements The original Marvel Super Heroes game received extensive support from TSR, covering a variety of Marvel Comics characters and settings, including a Gamer's Handbook of the Marvel Universe patterned after Marvel's Official Handbook of the Marvel Universe. MSH also got its own column in the TSR-published gaming magazine, Dragon, called "The Marvel-phile", which usually spotlighted a character or group of characters that hadn't yet appeared in a published game product. Reception In the July–August 1984 edition of Space Gamer (No. 70), Allen Varney wrote that the game was only suited to younger players and Marvel fanatics, saying, "this is a respectable effort, and an excellent introductory game for a devoted Marvel fan aged 10 to 12; older, more experienced, or less devoted buyers will probably be disappointed. 'Nuff said." Pete Tamlyn reviewed Marvel Super Heroes for Imagine magazine and stated that "this game has been produced in collaboration with Marvel and that opportunity itself is probably worth a new game release. However, Marvel Superheroes is not just another Superhero game. In many ways it is substantially different from other SHrpgs." In the January–February 1985 edition of Different Worlds (Issue #38), Troy Christensen gave it an average rating of 2.5 stars out of 4, saying, "The Marvel Super Heroes roleplaying game overall is a basic and simple system which I would recommend for beginning and novice players [...] People who enjoy a fast and uncomplicated game and like a system which is conservative and to the point will like this game." Marcus L. 
Rowland reviewed Marvel Super Heroes for White Dwarf #62, giving it an overall rating of 8 out of 10, and stated that "All in all, a useful system which is suitable for beginning players and referees, but should still suit experienced gamers." Seven years later, Varney revisited the game in the August 1991 edition of Dragon (Issue #172), reviewing the new Basic Set edition that had just been released. While Varney appreciated that the game was designed for younger players, he felt that it failed to recreate the excitement of the comics. "This is the gravest flaw of this system and support line: its apathy about recreating the spirit of Marvel stories. In this new Basic Set edition... you couldn’t find a miracle if you used microscopic vision. Look at this set’s few elementary mini-scenarios: all fight scenes. The four-color grandeur and narrative magic in the best Marvel stories are absent. Is this a good introduction to role-playing?" Varney instead suggested Toon by Steve Jackson Games or Ghostbusters by West End Games as better role-playing alternatives for new and beginning young players. In the 2007 book Hobby Games: The 100 Best, Steve Kenson commented that "it's a testament to the game's longevity that it still has enthusiastic fan support on the Internet and an active play community more than a decade after its last product was published. Even more so that it continues to set a standard by which new superhero roleplaying games are measured. Like modern comic book writers and artists following the greats of the Silver Age, modern RPG designers have a tough act to follow." Later Marvel RPGs Before losing the Marvel license back to Marvel Comics, TSR published a different game using their SAGA System game engine, called the Marvel Super Heroes Adventure Game. 
This version, written by Mike Selinker, was published in the late 1990s as a card-based version of the Marvel role-playing game (though a method of converting characters from the prior format to the SAGA System was included in the core rules).
medical background attempting to make a pill that can cure a rare disease). Resources and Popularity Characters also have two variable attributes: Resources and Popularity. These attributes use the same rank names as the character's seven attributes ("Poor," "Amazing," "Unearthly," etc.). But unlike the seven physical and mental attributes, which change slowly, if at all, Resources and Popularity can change quickly. Resources represent the character's wealth. Rather than have the player keep track of how much money the character has, the Advanced Game assumes the character has enough money to cover basic living expenses. The Resources ability is used when the character tries to buy something like a new car or house. The game books note that a character's Resources score can change after winning the lottery or having a major business transaction go bad, among other things. Popularity reflects how much the character is liked or disliked. Popularity can influence non-player characters. A superhero with a high rating, like Captain America (whose Popularity is Unearthly, the highest most characters can achieve), might use his Popularity to gain entrance to a club. If he were to try the same thing as his secret identity Steve Rogers (whose Popularity is only Typical), he would probably be unable to do it. Villains also have a Popularity score, which is usually negative (a bouncer might let Doctor Doom or Magneto into the aforementioned club out of fear). Popularity can change, too. Character creation The game is intended to use existing Marvel characters as the heroes. The Basic Set and Advanced Set both contain simple systems for creating original superheroes, based on random ability rolls (as in Dungeons & Dragons).
In addition, the Basic Set Campaign Book allows players to create original heroes by describing the desired kind of hero and working together with the GM to assign the appropriate abilities, powers, and talents. The Ultimate Powers Book, by David Edward Martin, expands and organizes the game's list of powers. Players are given a variety of body types, secret origins, weaknesses, and powers to choose from. The UPB gives a greater range to characters one could create. The book suffers from editing problems and omissions; several errata and partial revisions were released in the pages of TSR's Dragon magazine in issue #122 "The Ultimate Addenda to the Ultimate Powers Book", issue #134 "The Ultimate Addenda's Addenda", issue #150 "Death Effects on Superheroes", and issue #151 "Son of the Ultimate Addenda".
late 19th and early 20th centuries that measure theory became a branch of mathematics. The foundations of modern measure theory were laid in the works of Émile Borel, Henri Lebesgue, Nikolai Luzin, Johann Radon, Constantin Carathéodory, and Maurice Fréchet, among others. Definition Let X be a set and Σ a σ-algebra over X. A function μ from Σ to the extended real number line is called a measure if it satisfies the following properties: Non-negativity: For all E in Σ, we have μ(E) ≥ 0. Null empty set: μ(∅) = 0. Countable additivity (or σ-additivity): For all countable collections {E_k} of pairwise disjoint sets in Σ, μ(⋃ E_k) = Σ μ(E_k). If at least one set E has finite measure, then the requirement that μ(∅) = 0 is met automatically. Indeed, by countable additivity, μ(E) = μ(E ∪ ∅) = μ(E) + μ(∅), and therefore μ(∅) = 0. If the condition of non-negativity is omitted but the second and third of these conditions are met, and μ takes on at most one of the values ±∞, then μ is called a signed measure. The pair (X, Σ) is called a measurable space, the members of Σ are called measurable sets. A triple (X, Σ, μ) is called a measure space. A probability measure is a measure with total measure one – i.e. μ(X) = 1. A probability space is a measure space with a probability measure. For measure spaces that are also topological spaces various compatibility conditions can be placed for the measure and the topology. Most measures met in practice in analysis (and in many cases also in probability theory) are Radon measures. Radon measures have an alternative definition in terms of linear functionals on the locally convex space of continuous functions with compact support. This approach is taken by Bourbaki (2004) and a number of other sources. For more details, see the article on Radon measures. Instances Some important measures are listed here. The counting measure is defined by μ(S) = number of elements in S. The Lebesgue measure on ℝ is a complete translation-invariant measure on a σ-algebra containing the intervals in ℝ such that μ([0, 1]) = 1; and every other measure with these properties extends Lebesgue measure.
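As an illustrative sketch (not from the source), the counting measure restricted to finite sets is directly computable, and the defining properties, non-negativity, null empty set, and additivity over disjoint sets, can be checked on examples:

```python
# Sketch of the counting measure mu(S) = number of elements of S,
# restricted to finite sets so that it is computable.

def counting_measure(s: frozenset) -> int:
    return len(s)

a = frozenset({1, 2, 3})
b = frozenset({4, 5})          # disjoint from a

# Null empty set and non-negativity.
assert counting_measure(frozenset()) == 0
assert counting_measure(a) >= 0

# Finite additivity on disjoint sets: mu(A ∪ B) = mu(A) + mu(B).
assert counting_measure(a | b) == counting_measure(a) + counting_measure(b)

# Monotonicity: A is a subset of A ∪ B, so mu(A) <= mu(A ∪ B).
assert counting_measure(a) <= counting_measure(a | b)
```

Only finite additivity can be verified by direct computation like this; countable additivity is the defining axiom that the finite case illustrates.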
Circular angle measure is invariant under rotation, and hyperbolic angle measure is invariant under squeeze mapping. The Haar measure for a locally compact topological group is a generalization of the Lebesgue measure (and also of counting measure and circular angle measure) and has similar uniqueness properties. The Hausdorff measure is a generalization of the Lebesgue measure to sets with non-integer dimension, in particular, fractal sets. Every probability space gives rise to a measure which takes the value 1 on the whole space (and therefore takes all its values in the unit interval [0, 1]). Such a measure is called a probability measure. See probability axioms. The Dirac measure δa (cf. Dirac delta function) is given by δa(S) = χS(a), where χS is the indicator function of S. The measure of a set S is 1 if it contains the point a and 0 otherwise. Other 'named' measures used in various theories include: Borel measure, Jordan measure, ergodic measure, Gaussian measure, Baire measure, Radon measure, Young measure, and Loeb measure. In physics an example of a measure is the spatial distribution of mass (see e.g., gravity potential), or another non-negative extensive property, conserved (see conservation law for a list of these) or not. Negative values lead to signed measures; see "generalizations" below. The Liouville measure, known also as the natural volume form on a symplectic manifold, is useful in classical statistical and Hamiltonian mechanics. The Gibbs measure is widely used in statistical mechanics, often under the name canonical ensemble. Basic properties Let μ be a measure. Monotonicity If E1 and E2 are measurable sets with
to equal . If the Σ-measurable function f takes values in [−∞, ∞], then μ{x : f(x) ≥ t} = μ{x : f(x) > t} for almost all t with respect to the Lebesgue measure. This property is used in connection with the Lebesgue integral. Additivity Measures are required to be countably additive. However, the condition can be strengthened as follows. For any set I and any set of nonnegative r_i, i ∈ I, define: Σ_{i ∈ I} r_i = sup { Σ_{i ∈ J} r_i : J a finite subset of I }. That is, we define the sum of the r_i to be the supremum of all the sums of finitely many of them. A measure μ on Σ is κ-additive if for any λ < κ and any family of disjoint sets X_α, α < λ, the following hold: ⋃_{α < λ} X_α ∈ Σ and μ(⋃_{α < λ} X_α) = Σ_{α < λ} μ(X_α). Note that the second condition is equivalent to the statement that the ideal of null sets is κ-complete. Sigma-finite measures A measure space (X, Σ, μ) is called finite if μ(X) is a finite real number (rather than ∞). Nonzero finite measures are analogous to probability measures in the sense that any finite measure μ is proportional to the probability measure (1/μ(X))μ. A measure μ is called σ-finite if X can be decomposed into a countable union of measurable sets of finite measure. Analogously, a set in a measure space is said to have a σ-finite measure if it is a countable union of sets with finite measure. For example, the real numbers with the standard Lebesgue measure are σ-finite but not finite. Consider the closed intervals [k, k + 1] for all integers k; there are countably many such intervals, each has measure 1, and their union is the entire real line. Alternatively, consider the real numbers with the counting measure, which assigns to each finite set of reals the number of points in the set. This measure space is not σ-finite, because every set with finite measure contains only finitely many points, and it would take uncountably many such sets to cover the entire real line. The σ-finite measure spaces have some very convenient properties; σ-finiteness can be compared in this respect to the Lindelöf property of topological spaces. They can be also thought of as a vague generalization of the idea that a measure space may have 'uncountable measure'.
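The proportionality between nonzero finite measures and probability measures mentioned above can be sketched for a weighted measure on a finite set (an illustrative example with assumed names, not from the source):

```python
# A finite measure on the finite set X = {"a", "b", "c"}, given by
# point weights; mu(S) is the sum of the weights of the points in S.
weights = {"a": 2.0, "b": 3.0, "c": 5.0}

def mu(s: set) -> float:
    return sum(weights[x] for x in s)

X = set(weights)
total = mu(X)                      # mu(X) = 10.0, finite and nonzero

# Normalizing by mu(X) yields the probability measure nu = (1/mu(X)) mu.
def nu(s: set) -> float:
    return mu(s) / total

assert nu(X) == 1.0                # total measure one
assert nu({"a", "b"}) == 0.5       # (2 + 3) / 10
```

The same normalization works for any nonzero finite measure: dividing by the total measure rescales it without changing which sets are null.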
s-finite measures A measure is said to be s-finite if it is a countable sum of bounded measures. S-finite measures are more general than sigma-finite ones and have applications in the theory of stochastic processes. Non-measurable sets If the axiom of choice is assumed to be true, it can be proved that not all subsets of Euclidean space are Lebesgue measurable; examples of such sets include the Vitali set, and the non-measurable sets postulated by the Hausdorff paradox and the Banach–Tarski paradox. Generalizations For certain purposes, it is useful to have a "measure" whose values are not restricted to the non-negative reals or infinity. For instance, a countably additive set function with values in
Sylvester H. Roper of Roxbury, Massachusetts, demonstrated his machine at fairs and circuses in the eastern U.S. in 1867; Roper built about 10 steam cars and cycles from the 1860s until his death in 1896. Summary of early inventions First motorcycle companies In 1894, Hildebrand & Wolfmüller became the first series-production motorcycle, and the first to be called a motorcycle. Excelsior Motor Company, originally a bicycle manufacturing company based in Coventry, England, began production of their first motorcycle model in 1896. The first production motorcycle in the US was the Orient-Aster, built by Charles Metz in 1898 at his factory in Waltham, Massachusetts. In the early period of motorcycle history, many producers of bicycles adapted their designs to accommodate the new internal combustion engine. As the engines became more powerful and designs outgrew the bicycle origins, the number of motorcycle producers increased. Many of the nineteenth-century inventors who worked on early motorcycles often moved on to other inventions. Daimler and Roper, for example, both went on to develop automobiles. At the end of the 19th century the first major mass-production firms were set up. In 1898, Triumph Motorcycles in England began producing motorbikes, and by 1903 it was producing over 500 bikes. Other British firms were Royal Enfield, Norton, Douglas Motorcycles and Birmingham Small Arms Company, who began motorbike production in 1899, 1902, 1907 and 1910, respectively. Indian began production in 1901 and Harley-Davidson was established two years later. By the outbreak of World War I, the largest motorcycle manufacturer in the world was Indian, producing over 20,000 bikes per year. First World War During the First World War, motorbike production was greatly ramped up for the war effort to supply effective communications with front line troops.
Messengers on horses were replaced with despatch riders on motorcycles carrying messages, performing reconnaissance and acting as military police. The American company Harley-Davidson was devoting over 50% of its factory output toward military contracts by the end of the war. The British company Triumph Motorcycles sold more than 30,000 of its Triumph Type H model to allied forces during the war. With the rear wheel driven by a belt, the Model H was fitted with an air-cooled four-stroke single-cylinder engine. It was also the first Triumph without pedals. The Model H, in particular, is regarded by many as having been the first "modern motorcycle". Introduced in 1915, it had a 550 cc side-valve four-stroke engine with a three-speed gearbox and belt transmission. It was so popular with its users that it was nicknamed the "Trusty Triumph". Postwar By 1920, Harley-Davidson was the largest manufacturer, with its motorcycles being sold by dealers in 67 countries. Amongst the many British motorcycle manufacturers, Chater-Lea stood out with its twin-cylinder models, followed by its large singles in the 1920s. Initially using a converted Woodmann-designed ohv Blackburne engine, it became the first 350 cc machine to exceed 100 mph (160 km/h), recording 100.81 mph (162.24 km/h) over the flying kilometre in April 1924.[7] Later, Chater-Lea set a world record for the flying kilometre for 350 cc and 500 cc motorcycles at 102.9 mph (165.6 km/h). Chater-Lea produced variants of these world-beating sports models, which became popular among racers at the Isle of Man TT. Today, the firm is probably best remembered for its long-term contract to manufacture and supply AA Patrol motorcycles and sidecars. By the late 1920s or early 1930s, DKW in Germany took over as the largest manufacturer. In the 1950s, streamlining began to play an increasing part in the development of racing motorcycles, and the "dustbin fairing" held out the possibility of radical changes to motorcycle design.
NSU and Moto Guzzi were in the vanguard of this development, both producing very radical designs well ahead of their time. NSU produced the most advanced design, but after the deaths of four NSU riders in the 1954–1956 seasons, they abandoned further development and quit Grand Prix motorcycle racing. Moto Guzzi produced competitive race machines, and until the end of 1957 had a succession of victories. The following year, 1958, full enclosure fairings were banned from racing by the FIM in light of the safety concerns. From the 1960s through the 1990s, small two-stroke motorcycles were popular worldwide, partly as a result of Walter Kaaden's engine work at the East German firm MZ in the 1950s. Today In the 21st century, the motorcycle industry is mainly dominated by Indian and Japanese motorcycle companies. In addition to the large-capacity motorcycles, there is a large market in smaller-capacity (less than 300 cc) motorcycles, mostly concentrated in Asian and African countries and produced in China and India. A Japanese example is the 1958 Honda Super Cub, which went on to become the biggest-selling vehicle of all time, with its 60 millionth unit produced in April 2008. Today, this area is dominated by mostly Indian companies, with Hero MotoCorp emerging as the world's largest manufacturer of two-wheelers. Its Splendor model has sold more than 8.5 million to date. Other major producers are Bajaj and TVS Motors. Technical aspects Construction Motorcycle construction is the engineering, manufacturing, and assembly of components and systems for a motorcycle which results in the performance, cost, and aesthetics desired by the designer. With some exceptions, construction of modern mass-produced motorcycles has standardised on a steel or aluminium frame, telescopic forks holding the front wheel, and disc brakes. Some other body parts, designed for either aesthetic or performance reasons, may be added.
A petrol-powered engine, typically consisting of between one and four cylinders (and less commonly, up to eight cylinders), coupled to a manual five- or six-speed sequential transmission drives the swingarm-mounted rear wheel by a chain, driveshaft, or belt. Repairs can be carried out using a motorcycle lift. Fuel economy Motorcycle fuel economy varies greatly with engine displacement and riding style. A streamlined, fully faired Matzu Matsuzawa Honda XL125 achieved in the Craig Vetter Fuel Economy Challenge "on real highways in real conditions". Due to low engine displacements and high power-to-mass ratios, motorcycles offer good fuel economy. Under conditions of fuel scarcity, as in 1950s Britain and in modern developing nations, motorcycles claim large shares of the vehicle market. In the United States, the average motorcycle fuel economy is 44 miles per US gallon (19 km per liter). Electric motorcycles Very high fuel economy equivalents are often achieved by electric motorcycles. Electric motorcycles are nearly silent, zero-emission electric motor-driven vehicles. Operating range and top speed are limited by battery technology. Fuel cells and petroleum-electric hybrids are also under development to extend the range and improve the performance of the electric drive system. Reliability A 2013 survey of 4,424 readers of the US Consumer Reports magazine collected reliability data on 4,680 motorcycles purchased new from 2009 to 2012. The most common problem areas were accessories, brakes, electrical (including starters, charging, ignition), and fuel systems, and the types of motorcycles with the greatest problems were touring, off-road/dual sport, sport-touring, and cruisers. There were not enough sport bikes in the survey for a statistically significant conclusion, though the data hinted at reliability as good as that of cruisers.
These results may be partially explained by accessories including such equipment as fairings, luggage, and auxiliary lighting, which are frequently added to touring, adventure touring/dual sport and sport touring bikes. Trouble with fuel systems is often the result of improper winter storage, and brake problems may also be due to poor maintenance. Of the five brands with enough data to draw conclusions, Honda, Kawasaki and Yamaha were statistically tied, with 11 to 14% of those bikes in the survey experiencing major repairs. Harley-Davidsons had a rate of 24%, while BMWs did worse, with 30% of those needing major repairs. There were not enough Triumph and Suzuki motorcycles surveyed for a statistically sound conclusion, though it appeared Suzukis were as reliable as the other three Japanese brands while Triumphs were comparable to Harley-Davidson and BMW. Three-fourths of the repairs in the survey cost less than US$200 and two-thirds of the motorcycles were repaired in less than two days. In spite of their relatively worse reliability in this survey, Harley-Davidson and BMW owners showed the greatest owner satisfaction, and
A motorcycle, also called a motorbike, bike, cycle, or (if three-wheeled) trike, is a two- or three-wheeled motor vehicle. Motorcycle design varies greatly to suit a range of different purposes: long-distance travel, commuting, cruising, sport (including racing), and off-road riding. Motorcycling is riding a motorcycle and being involved in other related social activity such as joining a motorcycle club and attending motorcycle rallies. The 1885 Daimler Reitwagen made by Gottlieb Daimler and Wilhelm Maybach in Germany was the first internal combustion, petroleum-fueled motorcycle. In 1894, Hildebrand & Wolfmüller became the first series production motorcycle. In 2014, the three top motorcycle producers globally by volume were Honda (28%), Yamaha (17%) (both from Japan), and Hero MotoCorp (India). In developing countries, motorcycles are considered utilitarian due to lower prices and greater fuel economy. Of all the motorcycles in the world, 58% are in the Asia-Pacific and Southern and Eastern Asia regions, excluding car-centric Japan. According to the US Department of Transportation, the number of fatalities per vehicle mile traveled was 37 times higher for motorcycles than for cars. Types The term motorcycle has different legal definitions depending on jurisdiction (see ). There are three major types of motorcycle: street, off-road, and dual purpose. Within these types, there are many sub-types of motorcycles for different purposes. There is often a racing counterpart to each type, such as road racing and street bikes, or motocross including dirt bikes. Street bikes include cruisers, sportbikes, scooters and mopeds, and many other types. Off-road motorcycles include many types designed for dirt-oriented racing classes such as motocross and are not street legal in most areas. Dual purpose machines like the dual-sport style are made to go off-road but include features to make them legal and comfortable on the street as well.
Each configuration offers either specialised advantage or broad capability, and each design creates a different riding posture. In some countries the use of pillions (rear seats) is restricted. History Experimentation and invention The first internal combustion, petroleum-fueled motorcycle was the Daimler Reitwagen. It was designed and built by the German inventors Gottlieb Daimler and Wilhelm Maybach in Bad Cannstatt, Germany, in 1885. This vehicle was unlike either the safety bicycles or the boneshaker bicycles of the era in that it had zero degrees of steering axis angle and no fork offset, and thus did not use the principles of bicycle and motorcycle dynamics developed nearly 70 years earlier. Instead, it relied on two outrigger wheels to remain upright while turning. The inventors called their invention the Reitwagen ("riding car"). It was designed as an expedient testbed for their new engine, rather than a true prototype vehicle. The first commercial design for a self-propelled cycle was a three-wheel design called the Butler Petrol Cycle, conceived by Edward Butler in England in 1884. He exhibited his plans for the vehicle at the Stanley Cycle Show in London in 1884. The vehicle was built by the Merryweather Fire Engine company in Greenwich, in 1888. The Butler Petrol Cycle was a three-wheeled vehicle, with the rear wheel directly driven by a flat twin four-stroke engine (with magneto ignition replaced by coil and battery) equipped with rotary valves and a float-fed carburettor (five years before Maybach) and Ackermann steering, all of which were state of the art at the time. Starting was by compressed air. The engine was liquid-cooled, with a radiator over the rear driving wheel. Speed was controlled by means of a throttle valve lever.
No braking system was fitted; the vehicle was stopped by raising and lowering the rear driving wheel using a foot-operated lever, with the weight of the machine then borne by two small castor wheels. The driver was seated between the front wheels. It was not, however, a success, as Butler failed to find sufficient financial backing. Many authorities have excluded steam-powered, electric, or diesel-powered two-wheelers from the definition of a 'motorcycle', and credit the Daimler Reitwagen as the world's first motorcycle. Given the rapid rise in use of electric motorcycles worldwide, defining only internal-combustion powered two-wheelers as 'motorcycles' is increasingly problematic. The first (petroleum-fueled) internal-combustion motorcycles, like the German Reitwagen, were, however, also the first practical motorcycles. If a two-wheeled vehicle with steam propulsion is considered a motorcycle, then the first motorcycles built seem to be the French Michaux-Perreaux steam velocipede, for which a patent application was filed in December 1868, constructed around the same time as the American Roper steam velocipede, built by Sylvester H. Roper of Roxbury, Massachusetts, who demonstrated his machine at fairs and circuses in the eastern U.S. in 1867. Roper built about 10 steam cars and cycles from the 1860s until his death in 1896.
On maps compiled from the observations of ground meteorological stations, atmospheric pressure is converted to sea level. Air temperature maps are compiled both from the actual values observed on the surface of the earth and from values converted to sea level. The pressure field in the free atmosphere is represented either by maps of the distribution of pressure at different standard altitudes—for example, at every kilometer above sea level—or by maps of baric topography on which altitudes (more precisely geopotentials) of the main isobaric surfaces (for example, 900, 800, and 700 millibars) counted off from sea level are plotted. The temperature, humidity, and wind on aeroclimatic maps may apply either to standard altitudes or to the main isobaric surfaces. Isolines are drawn on maps of such climatic features as the long-term mean values (of atmospheric pressure, temperature, humidity, total precipitation, and so forth) to connect points with equal values of the feature in question—for example, isobars for pressure, isotherms for temperature, and isohyets for precipitation. Isoamplitudes are drawn on maps of amplitudes (for example, annual amplitudes of air temperature—that is, the differences between the mean temperatures of the warmest and coldest month). Isanomals are drawn on maps of anomalies (for example, deviations of the mean temperature of each place from the mean temperature of the entire latitudinal zone). Isolines of frequency are drawn on maps showing the frequency of a particular phenomenon (for example, the annual number of days with a thunderstorm or snow cover). Isochrones are drawn on maps showing the dates of onset of a given phenomenon (for example, the first frost and appearance or disappearance of the snow cover) or the date of a particular value of a meteorological element in the course of a year (for example, passing of the mean daily air temperature through zero).
Isolines of the mean numerical value of wind velocity or isotachs are drawn on wind maps (charts); the wind resultants and directions of prevailing winds are indicated by arrows of different lengths or arrows with different plumes; lines of flow are often drawn. Maps of the zonal and meridional components of wind are frequently compiled for the free atmosphere. Atmospheric pressure and wind are usually combined on climatic maps. Wind roses, curves showing the distribution of other meteorological elements, diagrams of the annual course of elements at individual stations, and the like are also plotted on climatic maps. Maps of climatic regionalization, that is, division of the earth's surface into climatic zones and regions according to some classification of climates, are a special kind of climatic map. Climatic maps are often incorporated into climatic atlases of varying geographic ranges (globe, hemispheres, continents, countries, oceans) or included in comprehensive atlases. Besides general climatic maps, applied climatic maps and atlases have great practical value. Aeroclimatic maps, aeroclimatic atlases, and agroclimatic maps are the most numerous. Extraterrestrial Maps exist of the Solar System, and other cosmological features such as star maps. In addition maps of other bodies such as the Moon and other planets are technically not geographical maps. Floor maps are also spatial but not necessarily geospatial. Topological Diagrams such as schematic diagrams and Gantt charts and treemaps display logical relationships between items, rather than geographical relationships. Topological in nature, only the connectivity is significant. The London Underground map and similar subway maps around the world are a common example of these maps. General General-purpose maps provide many types of information on one map. Most atlas maps, wall maps, and road maps fall into this category. 
The following are some features that might be shown on general-purpose maps: bodies of water, roads, railway lines, parks, elevations, towns and cities, political boundaries, latitude and longitude, national and provincial parks. These maps give a broad understanding of the location and features of an area. The reader may gain an understanding of the type of landscape, the location of urban places, and the location of major transportation routes all at once. List Aeronautical chart Atlas Cadastral map Climatic map Geologic map Historical map Linguistic map Nautical map Physical map Political map Relief map Resource map Road map Star map Street map Thematic map Topographic map Train track map Transit map Weather map World map Legal regulation Some countries require that all published maps represent their national claims regarding border disputes. For example: Within Russia, Google Maps shows Crimea as part of Russia. Both the Republic of India and the People's Republic of China require that all maps show areas subject to the Sino-Indian border dispute in their own favor. In 2010, the People's Republic of China began requiring that all online maps served from within China be hosted there, making them subject to Chinese laws.
See also General Counter-mapping Map–territory relation Censorship of maps List of online map services Map collection Map designing and types Automatic label placement City map Compass rose Contour map Estate map Fantasy map Floor plan Geologic map Hypsometric tints Map design Orthophotomap—A map created from orthophotography Pictorial maps Plat Road atlas Transit map Page layout (cartography) Map history Early world maps History of cartography List of cartographers Related topics Aerial landscape art Digital geologic mapping Economic geography Geographic coordinate system Index map Global Map List of online map services Map database management
There are many map projections; which projection to use depends on the purpose of the map. Symbology The various features shown on a map are represented by conventional signs or symbols. For example, colors can be used to indicate a classification of roads. Those signs are usually explained in the margin of the map, or on a separately published characteristic sheet. Some cartographers prefer to make the map cover practically the entire screen or sheet of paper, leaving no room "outside" the map for information about the map as a whole. These cartographers typically place such information in an otherwise "blank" region "inside" the map: cartouche, map legend, title, compass rose, bar scale, etc. In particular, some maps contain smaller "sub-maps" in otherwise blank regions—often one at a much smaller scale showing the whole globe and where the whole map fits on that globe, and a few showing "regions of interest" at a larger scale to show details that wouldn't otherwise fit. Occasionally sub-maps use the same scale as the large map—a few maps of the contiguous United States include a sub-map to the same scale for each of the two non-contiguous states. Design The design and production of maps is a craft that has developed over thousands of years, from clay tablets to geographic information systems. As a form of design, particularly closely related to graphic design, map making incorporates scientific knowledge about how maps are used, integrated with principles of artistic expression, to create an aesthetically attractive product that carries an aura of authority and functionally serves a particular purpose for an intended audience. Designing a map involves bringing together a number of elements and making a large number of decisions. The elements of design fall into several broad topics, each of which has its own theory, its own research agenda, and its own best practices.
That said, there are synergistic effects between these elements, meaning that the overall design process is not just working on each element one at a time, but an iterative feedback process of adjusting each to achieve the desired gestalt. Map projections: The foundation of the map is the plane on which it rests (whether paper or screen), but projections are required to flatten the surface of the earth. All projections distort this surface, but the cartographer can be strategic about how and where distortion occurs. Generalization: All maps must be drawn at a smaller scale than reality, requiring that the information included on a map be a very small sample of the wealth of information about a place. Generalization is the process of adjusting the level of detail in geographic information to be appropriate for the scale and purpose of a map, through procedures such as selection, simplification, and classification. Symbology: Any map visually represents the location and properties of geographic phenomena using map symbols, graphical depictions composed of several visual variables, such as size, shape, color, and pattern. Composition: As all of the symbols are brought together, their interactions have major effects on map reading, such as grouping and visual hierarchy. Typography or labeling: Text serves a number of purposes on the map, especially aiding the recognition of features, but labels must be designed and positioned well to be effective. Layout: The map image must be placed on the page (whether paper, web, or other media), along with related elements, such as the title, legend, additional maps, text, images, and so on. Each of these elements has its own design considerations, as does their integration, which largely follows the principles of graphic design. Map type-specific design: Different kinds of maps, especially thematic maps, have their own design needs and best practices.
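The distortion that every projection introduces, noted above, can be illustrated in a few lines of code. This sketch is our own illustration (not from the text); it implements the forward spherical Mercator projection and shows one of its characteristic distortions:

```python
import math

def mercator(lat_deg, lon_deg, radius=6371.0):
    """Forward spherical Mercator projection, in kilometres.
    Undefined at the poles (lat = ±90°), where y diverges."""
    lam = math.radians(lon_deg)
    phi = math.radians(lat_deg)
    x = radius * lam
    y = radius * math.log(math.tan(math.pi / 4 + phi / 2))
    return x, y

# One degree of longitude maps to the same x-extent at every latitude,
# even though its true ground length shrinks toward the poles: this is
# the familiar area inflation of Mercator maps at high latitudes.
x_eq, _ = mercator(0, 1)
x_60, _ = mercator(60, 1)
assert x_eq == x_60
```

Mercator preserves angles (useful for navigation) at the cost of area; an equal-area projection makes the opposite trade, which is exactly the strategic choice about "how and where distortion occurs" described above.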
Types Maps of the world or large areas are often either 'political' or 'physical'. The most important purpose of the political map is to show territorial borders; the purpose of the physical is to show features of geography such as mountains, soil type, or land use including infrastructures such as roads, railroads, and buildings. Topographic maps show elevations and relief with contour lines or shading. Geological maps show not only the physical surface, but characteristics of the underlying rock, fault lines, and subsurface structures. Electronic From the last quarter of the 20th century, the indispensable tool of the cartographer has been the computer. Much of cartography, especially at the data-gathering survey level, has been subsumed by geographic information systems (GIS). The functionality of maps has been greatly advanced by technology simplifying the superimposition of spatially located variables onto existing geographical maps. Having local information such as rainfall level, distribution of wildlife, or demographic data integrated within the map allows more efficient analysis and better decision making. In the pre-electronic age such superimposition of data led Dr. John Snow to identify the location of an outbreak of cholera. Today, it is used by agencies as diverse as wildlife conservationists and militaries around the world. Even when GIS is not involved, most cartographers now use a variety of computer graphics programs to generate new maps. Interactive, computerized maps are commercially available, allowing users to zoom in or zoom out (respectively meaning to increase or decrease the scale), sometimes by replacing one map with another of different scale, centered where possible on the same point. In-car global navigation satellite systems are computerized maps with route planning and advice facilities that monitor the user's position with the help of satellites.
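Superimposing point observations such as the rainfall readings mentioned above onto a continuous map layer typically requires spatial interpolation between measurement sites. A minimal inverse-distance-weighting sketch, our own illustration with hypothetical station data:

```python
import math

def idw(stations, x, y, power=2):
    """Inverse-distance-weighted estimate at (x, y).
    stations: iterable of (sx, sy, value) tuples."""
    num = den = 0.0
    for sx, sy, value in stations:
        d = math.hypot(x - sx, y - sy)
        if d == 0.0:
            return value  # query point coincides with a station
        w = 1.0 / d ** power
        num += w * value
        den += w
    return num / den

# Hypothetical rainfall stations: (x-km, y-km, rainfall in mm).
rain = [(0, 0, 10.0), (10, 0, 20.0), (0, 10, 30.0)]

estimate = idw(rain, 2, 2)
# The interpolated value always lies between the station extremes.
assert 10.0 <= estimate <= 30.0
# At a station the estimate reproduces the reading exactly.
assert idw(rain, 10, 0) == 20.0
```

Weighting by inverse distance encodes the assumption that conditions change smoothly between stations; GIS packages offer this alongside more sophisticated methods such as kriging.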
From the computer scientist's point of view, zooming in entails one or a combination of: (1) replacing the map by a more detailed one; (2) enlarging the same map without enlarging the pixels, hence showing more detail by removing less information compared to the less detailed version; (3) enlarging the same map with the pixels enlarged (replaced by rectangles of pixels); no additional detail is shown, but, depending on the quality of one's vision, possibly more detail can be seen; if a computer display does not show adjacent pixels really separate, but overlapping instead (this does not apply for an LCD, but may apply for a cathode ray tube), then replacing a pixel by a rectangle of pixels does show more detail. A variation of this method is interpolation. Typically, (2) applies to a Portable Document Format (PDF) file or other format based on vector graphics. The increase in detail is limited to the information contained in the file: enlargement of a curve may eventually result in a series of standard geometric figures such as straight lines, arcs of circles, or splines. (2) may apply to text and (3) to the outline of a map feature such as a forest or building. (1) may apply to the text as needed (displaying labels for more features), while (2) applies to the rest of the image. Text is not necessarily enlarged when zooming in. Similarly, a road represented by a double line may or may not become wider when one zooms in. The map may also have layers that are partly raster graphics and partly vector graphics. For a single raster graphics image, (2) applies until the pixels in the image file correspond to the pixels of the display; thereafter (3) applies. Climatic The maps that reflect the territorial distribution of climatic conditions based on the results of long-term observations are called climatic maps.
These maps can be compiled both for individual climatic features (temperature, precipitation, humidity) and for combinations of them at the earth's surface and in the upper layers of the atmosphere. Climatic maps show climatic features across a large region and permit values of climatic features to be compared in different parts of the region. When generating the map, spatial interpolation can be used to synthesize values where there are no measurements, under the assumption that conditions change smoothly. Climatic maps generally apply to individual months and the year as a whole, sometimes to the four seasons, to the growing period, and so forth. On maps compiled from the observations of ground meteorological stations, atmospheric pressure is converted to sea level. Air temperature maps are compiled both from the actual values observed on the surface of the earth and from values converted to sea level. The pressure field in the free atmosphere is represented either by maps of the distribution of pressure at different standard altitudes—for example, at every kilometer above sea level—or by maps of baric topography on which altitudes (more precisely geopotentials) of the main isobaric surfaces (for example, 900, 800, and 700 millibars) counted off from sea level are plotted. The temperature, humidity, and wind on aeroclimatic maps may apply either to standard altitudes or to the main isobaric surfaces. Isolines are drawn on maps of such climatic features as the long-term mean values (of atmospheric pressure, temperature, humidity, total precipitation, and so forth) to connect
management has been defined by a market embracing diversity and a rising service industry. Managers are currently being trained to encourage greater equality for minorities and women in the workplace, by offering increased flexibility in working hours, better retraining, and innovative (and usually industry-specific) performance markers. Managers destined for the service sector are being trained to use unique measurement techniques, better worker support and more charismatic leadership styles. Human resources finds itself increasingly working with management in a training capacity to help collect management data on the success (or failure) of management actions with employees.

Evidence-based management

Evidence-based management is an emerging movement to use the current best evidence in management and decision-making. It is part of the larger movement towards evidence-based practices. Evidence-based management entails managerial decisions and organizational practices informed by the best available evidence. As with other evidence-based practice, this is based on three principles: 1) published peer-reviewed (often in management or social science journals) research evidence that bears on whether and why a particular management practice works; 2) judgement and experience from contextual management practice, to understand the organization and interpersonal dynamics in a situation and determine the risks and benefits of available actions; and 3) the preferences and values of those affected.

History

Some see management as a late-modern (in the sense of late modernity) conceptualization. On those terms it cannot have a pre-modern history – only harbingers (such as stewards). Others, however, detect management-like thought among ancient Sumerian traders and the builders of the pyramids of ancient Egypt.
Slave-owners through the centuries faced the problems of exploiting/motivating a dependent but sometimes unenthusiastic or recalcitrant workforce, but many pre-industrial enterprises, given their small scale, did not feel compelled to face the issues of management systematically. However, innovations such as the spread of Arabic numerals (5th to 15th centuries) and the codification of double-entry book-keeping (1494) provided tools for management assessment, planning and control. An organisation is more stable if members have the right to express their differences and solve their conflicts within it. While one person can begin an organisation, "it is lasting when it is left in the care of many and when many desire to maintain it". A weak manager can follow a strong one, but not another weak one, and maintain authority. A manager seeking to change an established organization "should retain at least a shadow of the ancient customs". With the changing workplaces of the industrial revolutions of the 18th and 19th centuries, military theory and practice contributed approaches to managing the newly popular factories. Given the scale of most commercial operations and the lack of mechanized record-keeping before the industrial revolution, it made sense for most owners of enterprises in those times to carry out management functions by and for themselves. But with the growing size and complexity of organizations, a distinction between owners (individuals, industrial dynasties or groups of shareholders) and day-to-day managers (independent specialists in planning and control) gradually became more common.

Early writing

The field of management originated in ancient China, including possibly the first highly centralized bureaucratic state and the earliest (by the second century BC) example of an administration based on merit through testing. Some theorists have cited ancient military texts as providing lessons for civilian managers.
For example, Chinese general Sun Tzu in his 6th-century BC work The Art of War recommends (when re-phrased in modern terminology) being aware of and acting on the strengths and weaknesses of both a manager's organization and a foe's. The writings of influential Chinese Legalist philosopher Shen Buhai may be considered to embody a rare premodern example of an abstract theory of administration. American philosopher Herrlee G. Creel and other scholars find the influence of Chinese administration in Europe by the 12th century. Thomas Taylor Meadows, Britain's consul in Guangzhou, argued in his Desultory Notes on the Government and People of China (1847) that "the long duration of the Chinese empire is solely and altogether owing to the good government which consists in the advancement of men of talent and merit only," and that the British must reform their civil service by making the institution meritocratic. Influenced by the ancient Chinese imperial examination, the Northcote–Trevelyan Report of 1854 recommended that recruitment should be on the basis of merit determined through competitive examination, candidates should have a solid general education to enable inter-departmental transfers, and promotion should be through achievement rather than "preferment, patronage, or purchase". This led to the implementation of Her Majesty's Civil Service as a systematic, meritocratic civil service bureaucracy. Like its British counterpart, the French bureaucracy was influenced by the Chinese system. Voltaire claimed that the Chinese had "perfected moral science" and François Quesnay advocated an economic and political system modeled after that of the Chinese. French civil service examinations adopted in the late 19th century were also heavily based on general cultural studies. These features have been likened to the earlier Chinese model. Various ancient and medieval civilizations produced "mirrors for princes" books, which aimed to advise new monarchs on how to govern.
Plato described job specialization in 350 BC, and Alfarabi listed several leadership traits in AD 900. Other examples include the Indian Arthashastra by Chanakya (written around 300 BC) and The Prince by Italian author Niccolò Machiavelli (c. 1515). Written in 1776 by Adam Smith, a Scottish moral philosopher, The Wealth of Nations discussed the efficient organization of work through the division of labour. Smith described how changes in processes could boost productivity in the manufacture of pins: while individuals working alone could produce 200 pins per day each, dividing the work among 10 specialists enabled production of 48,000 pins per day.

19th century

Classical economists such as Adam Smith (1723–1790) and John Stuart Mill (1806–1873) provided a theoretical background to resource allocation, production (economics), and pricing issues. About the same time, innovators like Eli Whitney (1765–1825), James Watt (1736–1819), and Matthew Boulton (1728–1809) developed elements of technical production such as standardization, quality-control procedures, cost-accounting, interchangeability of parts, and work-planning. Many of these aspects of management existed in the pre-1861 slave-based sector of the US economy. That environment saw 4 million people, as the contemporary usages had it, "managed" in profitable quasi-mass production before wage slavery eclipsed chattel slavery. Salaried managers as an identifiable group first became prominent in the late 19th century. As large corporations began to overshadow small family businesses, the need for personnel management positions grew. Businesses grew into large corporations and the need for clerks, bookkeepers, secretaries and managers expanded. The demand for trained managers led college and university administrators to consider and move forward with plans to create the first schools of business on their campuses.
20th century

At the turn of the twentieth century the need for skilled and trained managers had become increasingly apparent. The demand occurred as personnel departments began to expand rapidly. In 1915, less than one in twenty manufacturing firms had a dedicated personnel department. By 1929 that number had grown to over one-third. Formal management education became standardized at colleges and universities. Colleges and universities capitalized on the needs of corporations by forming business schools and corporate placement departments. This shift toward formal business education marked the creation of a corporate elite in the US. By about 1900 one finds managers trying to place their theories on what they regarded as a thoroughly scientific basis (see scientism for perceived limitations of this belief). Examples include Henry R. Towne's Science of management in the 1890s, Frederick Winslow Taylor's The Principles of Scientific Management (1911), Lillian Gilbreth's Psychology of Management (1914), Frank and Lillian Gilbreth's Applied motion study (1917), and Henry L. Gantt's charts (1910s). J. Duncan wrote the first college management textbook in 1911. In 1912 Yoichi Ueno introduced Taylorism to Japan and became the first management consultant of the "Japanese management style". His son Ichiro Ueno pioneered Japanese quality assurance. The first comprehensive theories of management appeared around 1920. The Harvard Business School offered the first Master of Business Administration degree (MBA) in 1921. People like Henri Fayol (1841–1925) and Alexander Church (1866–1936) described the various branches of management and their inter-relationships. In the early 20th century, people like Ordway Tead (1891–1973), Walter Scott (1869–1955) and J. Mooney applied the principles of psychology to management.
Other writers, such as Elton Mayo (1880–1949), Mary Parker Follett (1868–1933), Chester Barnard (1886–1961), Max Weber (1864–1920), who saw what he called the "administrator" as bureaucrat, Rensis Likert (1903–1981), and Chris Argyris (1923–2013) approached the phenomenon of management from a sociological perspective. The 1930s and 1940s saw the development of a militarization trend in management in parts of Eurasia – both the NKVD (in the Soviet Union) and the SS (in the Greater Germanic Reich), for example, managed labor camps as industrial enterprises using slave labor supervised by uniformed cadres. Military habits persisted in some management circles. Peter Drucker (1909–2005) wrote one of the earliest books on applied management: Concept of the Corporation (published in 1946). It resulted from Alfred Sloan (chairman of General Motors until 1956) commissioning a study of the organisation. Drucker went on to write 39 books, many in the same vein. H. Dodge, Ronald Fisher (1890–1962), and Thornton C. Fry introduced statistical techniques into management studies. In the 1940s, Patrick Blackett worked on the development of the applied-mathematics science of operations research, initially for military operations. Operations research, sometimes known as "management science" (but distinct from Taylor's scientific management), attempts to take a scientific approach to solving decision problems, and can apply directly to multiple management problems, particularly in the areas of logistics and operations.
Some of the later 20th-century developments include the theory of constraints (introduced in 1984), management by objectives (systematised in 1954), re-engineering (early 1990s), Six Sigma (1986), management by walking around (1970s), the Viable system model (1972), and various information-technology-driven theories such as agile software development (so named from 2001), as well as group-management theories such as Cog's Ladder (1972) and the notion of "thriving on chaos" (1987). As the general recognition of managers as a class solidified during the 20th century and gave perceived practitioners of the art/science of management a certain amount of prestige, so the way opened for popularised systems of management ideas to peddle their wares. In this context many management fads may have had more to do with pop psychology than with scientific theories of management.

Business management includes the following branches:

- financial management
- human resource management
- management cybernetics
- information technology management (responsible for management information systems)
- marketing management
- operations management and production management
- strategic management

21st century

In the 21st century observers find it increasingly difficult to subdivide management into functional categories in this way. More and more processes simultaneously involve several categories. Instead, one tends to think in terms of the various processes, tasks, and objects subject to management. Branches of management theory also exist relating to nonprofits and to government, such as public administration, public management, and educational management. Further, management programs related to civil-society organizations have also spawned programs in nonprofit management and social entrepreneurship. Note that many of the assumptions made by management have come under attack from business-ethics viewpoints, critical management studies, and anti-corporate activism.
As one consequence, workplace democracy (sometimes referred to as workers' self-management) has become both more common and more advocated, in some places distributing all management functions among workers, each of whom takes on a portion of the work. However, these models predate any current political issue, and may occur more naturally than does a command hierarchy. All management embraces to some degree a democratic principle: in the long term, the majority of workers must support management. Otherwise, they leave to find other work or go on strike. Despite the move toward workplace democracy, command-and-control organization structures remain commonplace as de facto organization structures. Indeed, the entrenched nature of command-and-control is evident in the way that recent layoffs have been conducted, with management ranks affected far less than employees at the lower levels. In some cases, management has even rewarded itself with bonuses after laying off lower-level workers. According to leadership academic Manfred F. R. Kets de Vries, a contemporary senior-management team will almost inevitably have some members with personality disorders.

Nature of work

In for-profit organizations, management's primary function is the satisfaction of a range of stakeholders. This typically involves making a profit (for the shareholders), creating valued products at a reasonable cost (for customers), and providing great employment opportunities for employees. In nonprofit management, one of the main functions is keeping the faith of donors. In most models of management and governance, shareholders vote for the board of directors, and the board then hires senior management. Some organizations have experimented with other methods (such as employee-voting models) of selecting or reviewing managers, but this is rare.

Topics

Basics

According to Fayol, management operates through five basic functions: planning, organizing, coordinating, commanding, and controlling.
- Planning: deciding what needs to happen in the future and generating plans for action (deciding in advance).
- Organizing (or staffing): making sure the human and nonhuman resources are put into place.
- Commanding (or leading): determining what must be done in a situation and
the success of the enterprise. Scholars have focused on the management of individual, organizational, and inter-organizational relationships. This implies effective communication: an enterprise environment (as opposed to a physical or mechanical mechanism) implies human motivation and implies some sort of successful progress or system outcome. As such, management is not the manipulation of a mechanism (machine or automated program), not the herding of animals, and can occur either in a legal or in an illegal enterprise or environment. From an individual's perspective, management does not need to be seen solely from an enterprise point of view, because management is an essential function in improving one's life and relationships. Management is therefore everywhere, and it has a wide range of application. Communication and a positive endeavor are two main aspects of it, whether through enterprise or through independent pursuit. Plans, measurements, motivational psychological tools, goals, and economic measures (profit, etc.) may or may not be necessary components for there to be management. At first, one views management functionally, such as measuring quantity, adjusting plans, and meeting goals, but this applies even in situations where planning does not take place. From this perspective, Henri Fayol (1841–1925) considers management to consist of five functions: planning (forecasting), organizing, commanding, coordinating, and controlling. In another way of thinking, Mary Parker Follett (1868–1933) allegedly defined management as "the art of getting things done through people". She described management as a philosophy. Critics, however, find this definition useful but far too narrow. The phrase "management is what managers do" occurs widely, suggesting the difficulty of defining management without circularity, the shifting nature of definitions, and the connection of managerial practices with the existence of a managerial cadre or class.
One habit of thought regards management as equivalent to "business administration" and thus excludes management in places outside commerce, as for example in charities and in the public sector. More broadly, every organization must "manage" its work, people, processes, technology, etc. to maximize effectiveness. Nonetheless, many people refer to university departments that teach management as "business schools". Some such institutions (such as the Harvard Business School) use that name, while others (such as the Yale School of Management) employ the broader term "management". English-speakers may also use the term "management" or "the management" as a collective word describing the managers of an organization, for example of a corporation. Historically this use of the term often contrasted with the term "labor", referring to those being managed. But in the present era the concept of management has broadened: non-profit organizations, and not only businesses, apply management concepts, and the scope of the term keeps widening. Management as a whole is the process of planning, organizing, directing, leading and controlling.

Levels

Most organizations have three management levels: first-level, middle-level, and top-level managers. First-line managers are the lowest level of management and manage the work of non-managerial individuals who are directly involved with the production or creation of the organization's products. First-line managers are often called supervisors, but may also be called line managers, office managers, or even foremen. Middle managers include all levels of management between the first-line level and the top level of the organization. These managers manage the work of first-line managers and may have titles such as department head, project leader, plant manager, or division manager.
Top managers are responsible for making organization-wide decisions and establishing the plans and goals that affect the entire organization. These individuals typically have titles such as executive vice president, president, managing director, chief operating officer, chief executive officer, or chairman of the board. These managers are classified in a hierarchy of authority and perform different tasks. In many organizations, the number of managers at each level gives the hierarchy the shape of a pyramid. Each level is described below, along with its typical responsibilities and likely job titles.

Top

The top or senior layer of management consists of the board of directors (including non-executive directors, executive directors and independent directors), the president, vice-presidents, the CEO and other members of the C-level executives. Different organizations have various members in their C-suite, which may include a chief financial officer, chief technology officer, and so on. They are responsible for controlling and overseeing the operations of the entire organization. They set a "tone at the top", develop strategic plans and company policies, and make decisions on the overall direction of the organization. In addition, top-level managers play a significant role in the mobilization of outside resources. Senior managers are accountable to the shareholders, the general public, and to public bodies that oversee corporations and similar organizations. Some members of the senior management may serve as the public face of the organization, and they may make speeches to introduce new strategies or appear in marketing. The board of directors is typically primarily composed of non-executives who owe a fiduciary duty to shareholders and are not closely involved in the day-to-day activities of the organization, although this varies depending on the type (e.g., public versus private), size and culture of the organization.
These directors are theoretically liable for breaches of that duty and are typically insured under directors and officers liability insurance. Fortune 500 directors are estimated to spend 4.4 hours per week on board duties, and median compensation was $212,512 in 2010. The board sets corporate strategy, makes major decisions such as major acquisitions, and hires, evaluates, and fires the top-level manager (chief executive officer or CEO). The CEO typically then hires the other positions. However, board involvement in the hiring of other positions such as the chief financial officer (CFO) has increased. In 2013, a survey of over 160 CEOs and directors of public and private companies found that the top weaknesses of CEOs were "mentoring skills" and "board engagement", and that 10% of companies never evaluated the CEO. The board may also have certain employees (e.g., internal auditors) report to them or directly hire independent contractors; for example, the board (through the audit committee) typically selects the auditor. Helpful skills of top management vary by the type of organization but typically include a broad understanding of competition, world economies, and politics. In addition, the CEO is responsible for implementing and determining (within the board's framework) the broad policies of the organization. Executive management accomplishes the day-to-day details, including: instructions for the preparation of department budgets, procedures, and schedules; appointment of middle-level executives such as department managers; coordination of departments; media and governmental relations; and shareholder communication.

Middle

Middle management consists of general managers, branch managers and department managers. They are accountable to top management for their department's function. They devote more time to organizational and directional functions.
Their roles can be summarized as executing organizational plans in conformance with the company's policies and the objectives of top management; defining and discussing information and policies from top management to lower management; and, most importantly, inspiring and providing guidance to lower-level managers towards better performance. Middle management is the intermediate layer of a hierarchical organization, subordinate to senior management but above the deepest levels of operational staff. An operational manager may be considered middle management, or may be categorized as non-management, depending on the policy of the particular organization. The efficiency of the middle level is vital in any organization, since it bridges the gap between top-level and bottom-level staff. Their functions include:

- designing and implementing effective group and inter-group work and information systems;
- defining and monitoring group-level performance indicators;
- diagnosing and resolving problems within and among workgroups;
- designing and implementing reward systems that support cooperative behavior.

They also make decisions and share ideas with top managers.

Lower

Lower managers include supervisors, section leaders, forepersons and team leaders. They focus on controlling and directing regular employees. They are usually responsible for assigning employees' tasks, guiding and supervising employees on day-to-day activities, ensuring the quality and quantity of production and/or service, making recommendations and suggestions to employees on their work, and channeling employee concerns that they cannot resolve to mid-level managers or other administrators. First-level or "front line" managers also act as role models for their employees. In some types of work, front line managers may also do some of the same tasks that employees do, at least some of the time.
For example, in some restaurants the front line managers will also serve customers during a very busy period of the day. Front-line managers typically provide:

- training for new employees
- basic supervision
- motivation
- performance feedback and guidance

Some front-line managers may also provide career planning for employees who aim to rise within the organization.

Training

Colleges and universities around the world offer bachelor's degrees, graduate degrees, diplomas and certificates in management, generally within their colleges of business, business schools or faculties of management, but also in other related departments. In the 2010s, there was an increase in online management education and training in the form of electronic educational technology (also called e-learning). Online education has increased the accessibility of management training to people who do not live near a college or university, or who cannot afford to travel to a city where such training is available.

Requirement

While some professions require academic credentials in order to work in the profession (e.g., law, medicine, engineering, which require, respectively, the Bachelor of Law, Doctor of Medicine and Bachelor of Engineering degrees), management and administration positions do not necessarily require the completion of academic degrees. Some well-known senior executives in the US who did not complete a degree include Steve Jobs, Bill Gates and Mark Zuckerberg. However, many managers and executives have completed some type of business or management training, such as a Bachelor of Commerce or a Master of Business Administration degree. Some major organizations, including companies, non-profit organizations and governments, require applicants to managerial or executive positions to hold at minimum a bachelor's degree in a field related to administration or management, or in the
occur, fracture is a less orderly form that may be conchoidal (having smooth curves resembling the interior of a shell), fibrous, splintery, hackly (jagged with sharp edges), or uneven. If the mineral is well crystallized, it will also have a distinctive crystal habit (for example, hexagonal, columnar, botryoidal) that reflects the crystal structure or internal arrangement of atoms. Crystal habit is also affected by crystal defects and twinning. Many crystals are polymorphic, having more than one possible crystal structure depending on factors such as pressure and temperature.

Crystal structure

The crystal structure is the arrangement of atoms in a crystal. It is represented by a lattice of points which repeats a basic pattern, called a unit cell, in three dimensions. The lattice can be characterized by its symmetries and by the dimensions of the unit cell. Directions and planes within the lattice are denoted by triples of integers called Miller indices. The lattice remains unchanged by certain symmetry operations about any given point in the lattice: reflection, rotation, inversion, and rotary inversion, a combination of rotation and reflection. Together, they make up a mathematical object called a crystallographic point group or crystal class. There are 32 possible crystal classes. In addition, there are operations that displace all the points: translation, screw axis, and glide plane. In combination with the point symmetries, they form 230 possible space groups. Most geology departments have X-ray powder diffraction equipment to analyze the crystal structures of minerals. X-rays have wavelengths that are of the same order of magnitude as the distances between atoms. Diffraction, the constructive and destructive interference between waves scattered at different atoms, leads to distinctive patterns of high and low intensity that depend on the geometry of the crystal. In a sample that is ground to a powder, the X-rays sample a random distribution of all crystal orientations.
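The geometry behind powder diffraction follows Bragg's law, n*lambda = 2*d*sin(theta). A small sketch of the first-order peak positions (the function name and the d-spacings below are illustrative assumptions; the wavelength is the common Cu K-alpha line):

```python
import math

def bragg_angles(d_spacings, wavelength=1.5406):
    """Bragg's law, n*lambda = 2*d*sin(theta): predict the diffraction
    angles (2-theta, in degrees) at which lattice planes with the given
    spacings d (in angstroms) reflect X-rays, for first order (n = 1)."""
    angles = []
    for d in d_spacings:
        s = wavelength / (2.0 * d)   # sin(theta)
        if s <= 1.0:                 # reflection only if geometrically possible
            angles.append(2 * math.degrees(math.asin(s)))
    return angles

# Hypothetical d-spacings (angstroms) for a cubic mineral's first few planes
print([round(a, 2) for a in bragg_angles([3.26, 2.82, 1.99])])
```

Matching a measured list of 2-theta peak positions against tabulated patterns is, in essence, how a powder diffractogram identifies a mineral.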
Powder diffraction can distinguish between minerals that may appear the same in a hand sample, for example quartz and its polymorphs tridymite and cristobalite. Isomorphous minerals of different compositions have similar powder diffraction patterns, the main difference being in the spacing and intensity of lines. For example, the halite (NaCl) crystal structure is space group Fm3m; this structure is shared by sylvite (KCl), periclase (MgO), bunsenite (NiO), galena (PbS), alabandite (MnS), chlorargyrite (AgCl), and osbornite (TiN).

Chemical elements

A few minerals are chemical elements, including sulfur, copper, silver, and gold, but the vast majority are compounds. The classical method for identifying composition is wet chemical analysis, which involves dissolving a mineral in an acid such as hydrochloric acid (HCl). The elements in solution are then identified using colorimetry, volumetric analysis or gravimetric analysis. Since 1960, most chemical analysis has been done using instruments. One of these, atomic absorption spectroscopy, is similar to wet chemistry in that the sample must still be dissolved, but it is much faster and cheaper. The solution is vaporized and its absorption spectrum is measured in the visible and ultraviolet range. Other techniques are X-ray fluorescence, electron microprobe analysis, atom probe tomography, and optical emission spectrography.

Optical

In addition to macroscopic properties such as colour or lustre, minerals have properties that require a polarizing microscope to observe.

Transmitted light

When light passes from air or a vacuum into a transparent crystal, some of it is reflected at the surface and some refracted. The latter is a bending of the light path that occurs because the speed of light changes as it goes into the crystal; Snell's law relates the bending angle to the refractive index, the ratio of the speed of light in a vacuum to its speed in the crystal. Crystals whose point symmetry group falls in the cubic system are isotropic: the index does not depend on direction.
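Snell's law is simple to state in code. A minimal sketch (the function name is hypothetical; quartz's ordinary refractive index of about 1.544 is used for illustration, with n = 1.0 for air):

```python
import math

def refraction_angle(incidence_deg, n1=1.0, n2=1.544):
    """Snell's law, n1*sin(i) = n2*sin(r): angle (degrees) of the
    refracted ray inside a crystal of index n2, for light arriving
    from a medium of index n1 at the given angle of incidence."""
    s = n1 * math.sin(math.radians(incidence_deg)) / n2
    return math.degrees(math.asin(s))

print(round(refraction_angle(30.0), 2))   # light entering quartz at 30 degrees
```

Because the ray bends toward the normal in the denser medium, the refracted angle comes out smaller than the angle of incidence; for anisotropic crystals, discussed next, n2 itself depends on direction and polarization.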
All other crystals are anisotropic: light passing through them is broken up into two plane polarized rays that travel at different speeds and refract at different angles. A polarizing microscope is similar to an ordinary microscope, but it has two plane-polarized filters, a (polarizer) below the sample and an analyzer above it, polarized perpendicular to each other. Light passes successively through the polarizer, the sample and the analyzer. If there is no sample, the analyzer blocks all the light from the polarizer. However, an anisotropic sample will generally change the polarization so some of the light can pass through. Thin sections and powders can be used as samples. When an isotropic crystal is viewed, it appears dark because it does not change the polarization of the light. However, when it is immersed in a calibrated liquid with a lower index of refraction and the microscope is thrown out of focus, a bright line called a Becke line appears around the perimeter of the crystal. By observing the presence or absence of such lines in liquids with different indices, the index of the crystal can be estimated, usually to within . Systematic Systematic mineralogy is the identification and classification of minerals by their properties. Historically, mineralogy was heavily concerned with taxonomy of the rock-forming minerals. In 1959, the International Mineralogical Association formed the Commission of New Minerals and Mineral Names to rationalize the nomenclature and regulate the introduction of new names. In July 2006, it was merged with the Commission on Classification of Minerals to form the Commission on New Minerals, Nomenclature, and Classification. There are over 6,000 named and unnamed minerals, and about 100 are discovered each year. 
The Manual of Mineralogy places minerals in the following classes: native elements, sulfides, sulfosalts, oxides and hydroxides, halides, carbonates, nitrates and borates, sulfates, chromates, molybdates and tungstates, phosphates, arsenates and vanadates, and silicates. Formation environments The environments of mineral formation and growth are highly varied, ranging from slow crystallization at the high temperatures and pressures of
in three dimensions. The lattice can be characterized by its symmetries and by the dimensions of the unit cell. These dimensions are described by the lattice parameters: the lengths of the unit cell's edges and the angles between them. The lattice remains unchanged by certain symmetry operations about any given point in the lattice: reflection, rotation, inversion, and rotary inversion, a combination of rotation and reflection. Together, they make up a mathematical object called a crystallographic point group or crystal class. There are 32 possible crystal classes. In addition, there are operations that displace all the points: translation, screw axis, and glide plane. In combination with the point symmetries, they form 230 possible space groups. Most geology departments have X-ray powder diffraction equipment to analyze the crystal structures of minerals. X-rays have wavelengths that are the same order of magnitude as the distances between atoms. Diffraction, the constructive and destructive interference between waves scattered at different atoms, leads to distinctive patterns of high and low intensity that depend on the geometry of the crystal. In a sample that is ground to a powder, the X-rays sample a random distribution of all crystal orientations. Powder diffraction can distinguish between minerals that may appear the same in a hand sample, for example quartz and its polymorphs tridymite and cristobalite. Isomorphous minerals of different compositions have similar powder diffraction patterns, the main difference being in spacing and intensity of lines. For example, the NaCl (halite) crystal structure is space group Fm3m; this structure is shared by sylvite (KCl), periclase (MgO), bunsenite (NiO), galena (PbS), alabandite (MnS), chlorargyrite (AgCl), and osbornite (TiN). Chemical elements A few minerals are chemical elements, including sulfur, copper, silver, and gold, but the vast majority are compounds.
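The peak positions in a powder pattern follow from Bragg's law, nλ = 2d sin θ. The sketch below is illustrative only: the halite lattice constant (≈5.640 Å) and the Cu Kα wavelength (1.5406 Å) are standard reference values, not figures from this text.

```python
import math

def two_theta_deg(d_spacing_angstrom: float, wavelength_angstrom: float = 1.5406) -> float:
    """Diffraction angle 2-theta (degrees) from Bragg's law, n*lambda = 2*d*sin(theta), with n = 1."""
    return 2 * math.degrees(math.asin(wavelength_angstrom / (2 * d_spacing_angstrom)))

def d_cubic(a: float, h: int, k: int, l: int) -> float:
    """Interplanar spacing for a cubic lattice: d(hkl) = a / sqrt(h^2 + k^2 + l^2)."""
    return a / math.sqrt(h * h + k * k + l * l)

# Halite (NaCl), a ~ 5.640 Angstrom, Cu K-alpha radiation: the strong (200) reflection
d200 = d_cubic(5.640, 2, 0, 0)
print(round(two_theta_deg(d200), 1))  # about 31.7 degrees, halite's strongest powder line
```

Because every mineral's cell dimensions and symmetry differ, the full set of such angles acts as a fingerprint, which is how powder diffraction separates look-alike phases such as quartz, tridymite, and cristobalite.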
The classical method for identifying composition is wet chemical analysis, which involves dissolving a mineral in an acid such as hydrochloric acid (HCl). The elements in solution are then identified using colorimetry, volumetric analysis, or gravimetric analysis. Since 1960, most chemical analysis has been done using instruments. One of these, atomic absorption spectroscopy, is similar to wet chemistry in that the sample must still be dissolved, but it is much faster and cheaper. The solution is vaporized and its absorption spectrum is measured in the visible and ultraviolet range. Other techniques are X-ray fluorescence, electron microprobe analysis, atom probe tomography, and optical emission spectrography. Optical In addition to macroscopic properties such as colour or lustre, minerals have properties that require a polarizing microscope to observe. Transmitted light When light passes from air or a vacuum into a transparent crystal, some of it is reflected at the surface and some refracted. The latter is a bending of the light path that occurs because the speed of light changes as it goes into the crystal; Snell's law relates the bending angle to the refractive index, the ratio of speed in a vacuum to speed in the crystal. Crystals whose point symmetry group falls in the cubic system are isotropic: the index does not depend on direction. All other crystals are anisotropic: light passing through them is broken up into two plane-polarized rays that travel at different speeds and refract at different angles. A polarizing microscope is similar to an ordinary microscope, but it has two polarizing filters, a polarizer below the sample and an analyzer above it, polarized perpendicular to each other. Light passes successively through the polarizer, the sample and the analyzer. If there is no sample, the analyzer blocks all the light from the polarizer. However, an anisotropic sample will generally change the polarization so some of the light can pass through.
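The refraction described by Snell's law can be sketched numerically. The quartz index used below (n ≈ 1.544 for the ordinary ray) is a typical published value chosen purely for illustration:

```python
import math

def refraction_angle_deg(incidence_deg: float, n_crystal: float, n_outside: float = 1.0) -> float:
    """Snell's law, n1*sin(theta1) = n2*sin(theta2): angle of the refracted ray inside the crystal."""
    return math.degrees(math.asin(n_outside * math.sin(math.radians(incidence_deg)) / n_crystal))

# Light entering quartz (ordinary ray, n ~ 1.544) from air at 30 degrees incidence:
print(round(refraction_angle_deg(30.0, 1.544), 1))  # about 18.9 degrees
```

In an anisotropic crystal there are two such indices rather than one, which is why the two plane-polarized rays refract at different angles.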
Thin sections and powders can be used as samples. When an isotropic crystal is viewed, it appears dark because it does not change the polarization of the light. However, when it is immersed in a calibrated liquid with a lower index of refraction and the microscope is thrown out of focus, a bright line called a Becke line appears around the perimeter of the crystal. By observing the presence or absence of such lines in liquids with different indices, the index of the crystal can be estimated, usually to within . Systematic Systematic mineralogy is the identification and classification of minerals by their properties. Historically, mineralogy was heavily concerned with taxonomy of the rock-forming minerals. In 1959, the International Mineralogical Association formed the Commission on New Minerals and Mineral Names to rationalize the nomenclature and regulate the introduction of new names. In July 2006, it was merged with the Commission on Classification of Minerals to form the Commission on New Minerals, Nomenclature, and Classification. There are over 6,000 named and unnamed minerals, and about 100 are discovered each year. The Manual of Mineralogy places minerals in the following classes: native elements, sulfides, sulfosalts, oxides and hydroxides, halides, carbonates, nitrates and borates, sulfates, chromates, molybdates and tungstates, phosphates, arsenates and vanadates, and silicates. Formation environments The environments of mineral formation and growth are highly varied, ranging from slow crystallization at the high temperatures and pressures of igneous melts deep within the Earth's crust to low-temperature precipitation from a saline brine at the Earth's surface.
Various possible methods of formation include sublimation from volcanic gases; deposition from aqueous solutions and hydrothermal brines; crystallization from an igneous magma or lava; recrystallization due to metamorphic processes and metasomatism; crystallization during diagenesis of sediments; and formation by oxidation and weathering of rocks exposed to the atmosphere or within the soil environment. Biomineralogy Biomineralogy is a cross-over field between mineralogy, paleontology and biology. It is the study of how plants and animals stabilize minerals under biological control, and the sequencing of mineral replacement of those minerals after deposition. It uses techniques from chemical mineralogy, especially isotopic studies, to determine such things as growth forms in living plants and animals as well as things like the original mineral content of fossils. A new approach to mineralogy called mineral evolution explores the co-evolution of the geosphere and biosphere, including the role of minerals in the origin of life and processes such as mineral-catalyzed organic synthesis and the selective adsorption of organic molecules on mineral surfaces. Mineral ecology In 2011, several researchers began to develop a Mineral Evolution Database. This database integrates the crowd-sourced site Mindat.org, which has over 690,000 mineral-locality pairs, with the official IMA list of approved minerals and age data from geological publications. This database makes it possible to apply statistics to answer new questions, an approach that has been called mineral ecology. One such question is how much of mineral evolution is deterministic and how much the result of chance. Some factors are deterministic, such as the chemical nature of a mineral and conditions for its stability; but mineralogy can also be affected by the processes that determine a
rounded iron kettles, because of a greater surface area for evaporation. Around this time, cane sugar replaced maple sugar as the dominant sweetener in the US; as a result, producers focused marketing efforts on maple syrup. The first evaporator, used to heat and concentrate sap, was patented in 1858. In 1872, an evaporator was developed that featured two pans and a metal arch or firebox, which greatly decreased boiling time. Around 1900, producers bent the tin that formed the bottom of a pan into a series of flues, which increased the heated surface area of the pan and again decreased boiling time. Some producers also added a finishing pan, a separate batch evaporator, as a final stage in the evaporation process. Buckets began to be replaced with plastic bags, which allowed people to see at a distance how much sap had been collected. Syrup producers also began using tractors to haul vats of sap from the trees being tapped (the sugarbush) to the evaporator. Some producers adopted motor-powered tappers and metal tubing systems to convey sap from the tree to a central collection container, but these techniques were not widely used. Heating methods also diversified: modern producers use wood, oil, natural gas, propane, or steam to evaporate sap. Modern filtration methods were perfected to prevent contamination of the syrup. A large number of technological changes took place during the 1970s. Plastic tubing systems that had been experimental since the early part of the century were perfected, and the sap came directly from the tree to the evaporator house. Vacuum pumps were added to the tubing systems, and preheaters were developed to recycle heat lost in the steam. Producers developed reverse-osmosis machines to take a portion of water out of the sap before it was boiled, increasing processing efficiency. Improvements in tubing and vacuum pumps, new filtering techniques, "supercharged" preheaters, and better storage containers have since been developed. 
Research continues on pest control and improved woodlot management. In 2009, researchers at the University of Vermont unveiled a new type of tap that prevents backflow of sap into the tree, reducing bacterial contamination and preventing the tree from attempting to heal the bore hole. Experiments show that it may be possible to use saplings in a plantation instead of mature trees, dramatically boosting productivity per acre. As a result of the smaller tree diameter, milder diurnal temperature swings are needed for the tree to freeze and thaw, which enables sap production in milder climatic conditions outside of northeastern North America. Processing Open pan evaporation methods have been streamlined since colonial days, but remain basically unchanged. Sap must first be collected and boiled down to obtain syrup. Maple syrup is made by boiling between 20 and 50 volumes of sap (depending on its concentration) over an open fire until 1 volume of syrup is obtained, usually at a temperature over the boiling point of water. As the boiling point of water varies with changes in air pressure, the correct value for pure water is determined at the place where the syrup is being produced, each time evaporation is begun and periodically throughout the day. Syrup can be boiled entirely over one heat source or can be drawn off into smaller batches and boiled at a more controlled temperature. Defoamers are often added during boiling. Boiling the syrup is a tightly controlled process, which ensures appropriate sugar content. Syrup boiled too long will eventually crystallize, whereas under-boiled syrup will be watery, and will quickly spoil. The finished syrup has a density of 66° on the Brix scale (a hydrometric scale used to measure sugar solutions). The syrup is then filtered to remove precipitated "sugar sand", crystals made up largely of sugar and calcium malate. These crystals are not toxic, but create a "gritty" texture in the syrup if not filtered out.
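The pressure dependence of water's boiling point noted above can be approximated with the Antoine equation for water. This is an illustrative calculation, not a sugarmaker's procedure; the constants are standard reference values valid roughly between 1 and 100 °C:

```python
import math

# Antoine constants for water (T in deg C, P in mmHg), valid roughly 1-100 deg C
A, B, C = 8.07131, 1730.63, 233.426

def boiling_point_c(pressure_mmhg: float) -> float:
    """Temperature at which water's vapour pressure equals the ambient pressure."""
    return B / (A - math.log10(pressure_mmhg)) - C

print(round(boiling_point_c(760.0), 1))  # 100.0 deg C at standard pressure
print(round(boiling_point_c(700.0), 1))  # about 97.7 deg C on a low-pressure day or at altitude
```

A swing of a few degrees in the reference boiling point is why producers re-measure it at the sugarhouse each time evaporation begins.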
In addition to open pan evaporation methods, many large producers use the more fuel-efficient reverse osmosis procedure to separate the water from the sap. Smaller producers can also use batchwise recirculating reverse osmosis, with the most energy-efficient operation taking the sugar concentration to 25% prior to boiling. The higher the sugar content of the sap, the smaller the volume of sap needed to obtain the same amount of syrup. 57 units of sap with 1.5 percent sugar content will yield 1 unit of syrup, but only 25 units of sap with a 3.5 percent sugar content are needed to obtain one unit of syrup. The sap's sugar content is highly variable and will fluctuate even within the same tree. The filtered syrup is graded and packaged while still hot, usually at a temperature of or greater. The containers are turned over after being sealed to sterilize the cap with the hot syrup. Packages can be made of metal, glass, or coated plastic, depending on volume and target market.
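The 57:1 and 25:1 figures above are consistent with the sugarmakers' "Rule of 86", which estimates the units of sap needed per unit of finished syrup by dividing 86 by the sap's sugar percentage. A quick check of the numbers in the text:

```python
def sap_per_unit_syrup(sap_sugar_percent: float, rule_constant: float = 86.0) -> float:
    """Sugarmakers' 'Rule of 86': units of sap needed per unit of finished syrup."""
    return rule_constant / sap_sugar_percent

print(round(sap_per_unit_syrup(1.5)))  # 57, matching the 1.5 percent figure in the text
print(round(sap_per_unit_syrup(3.5)))  # 25, matching the 3.5 percent figure
```

The inverse relationship holds because the amount of sugar, not water, is conserved: finished syrup has a fixed sugar density, so halving the sap's concentration doubles the volume that must be evaporated.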
Sap is often boiled in a "sugar house" (also known as a "sugar shack", "sugar shanty", or cabane à sucre), a building louvered at the top to vent the steam from the boiling sap. Maples are usually tapped beginning at 30 to 40 years of age. Each tree can support between one and three taps, depending on its trunk diameter. The average maple tree will produce of sap per season, up to per day. This is roughly equal to seven percent of its total sap. Tap seasons typically happen during late winter and spring and usually last for four to eight weeks, though the exact dates depend on the weather, location, and climate. The timing of the season and the region of maximum sap flow are both expected to be significantly altered by climate change by 2100. During the day, starch stored in the roots for the winter rises through the trunk as sugary sap, allowing it to be tapped. Sap is not tapped at night because the temperature drop inhibits sap flow, although taps are typically left in place overnight. Some producers also tap in autumn, though this practice is less common than spring tapping. Maples can continue to be tapped for sap until they are over 100 years old. Commerce Until the 1930s, the United States produced most of the world's maple syrup. Today, after rapid growth in the 1990s, Canada produces more than 80 percent of the world's maple syrup, producing about in 2016. The vast majority of this comes from the province of Quebec, which is the world's largest producer, with about 70 percent of global production. Canada exported more than C$362 million of maple syrup in 2016. In 2015, 64 percent of Canadian maple syrup exports went to the United States (a value of C$229 million), 8 percent to Germany (C$31 million), 6 percent to Japan (C$26 million), and 5 percent to the United Kingdom (C$16 million).
In 2015, Quebec accounted for 90.83 percent of maple syrup produced in Canada, followed by New Brunswick at 4.83 percent, Ontario at 4.14 percent, and Nova Scotia at 0.2 percent. However, 94.28 percent of exported Canadian maple syrup originated from Quebec, whereas 4.91 percent of exported syrup originated from New Brunswick, and the remaining 0.81 percent from all other provinces. Ontario holds the most maple syrup farms in Canada outside of Quebec, with 2,240 maple syrup producers in 2011. This is followed by New Brunswick, with 191 maple syrup producers; and Nova Scotia, with 152 maple syrup producers. As of 2016, Quebec had some 7,300 producers working with 13,500 farmers, collectively making over of syrup. Production in Quebec is controlled through a supply management system, with producers receiving quota allotments from the government-sanctioned Federation of Quebec Maple Syrup Producers (Fédération des producteurs acéricoles du Québec, FPAQ), which also maintains reserves of syrup, although there is a black-market trade in Quebec product. In 2017, the FPAQ mandated increased output of maple syrup production, attempting to establish Quebec's dominance in the world market. The Canadian provinces of Manitoba and Saskatchewan produce maple syrup using the sap of the box elder or Manitoba maple (Acer negundo). In 2011, there were 67 maple syrup producers in Manitoba, and 24 in Saskatchewan. A Manitoba maple tree's yield is usually less than half that of a similar sugar maple tree. Manitoba maple syrup has a slightly different flavour from sugar-maple syrup, because it contains less sugar and the tree's sap flows more slowly. British Columbia is home to a growing maple sugar industry using sap from the bigleaf maple, which is native to the West Coast of the United States and Canada. In 2011, there were 82 maple syrup producers in British Columbia. Vermont is the biggest US producer, with over during the 2019 season, followed by New York with and Maine with .
Wisconsin, Ohio, New Hampshire, Michigan, Pennsylvania, Massachusetts and Connecticut all produced marketable quantities of maple syrup. Maple syrup has been produced on a small scale in some other countries, notably Japan and South Korea. However, in South Korea in particular, it is traditional to consume maple sap, called gorosoe, instead of processing it into syrup. Markings Under Canadian Maple Product Regulations, containers of maple syrup must include the words "maple syrup", its grade name and net quantity in litres or millilitres, on the main display panel with a minimum font size of 1.6 mm. If the maple syrup is of Canada Grade A level, the name of the colour class must appear on the label in both English and French. Also, the lot number or production code, and either: (1) the name and address of the sugar bush establishment, packing or shipper establishment, or (2) the first dealer and the registration number of the packing establishment, must be labeled on any display panel other than the bottom. Grades Following an effort from the International Maple Syrup Institute (IMSI) and many maple syrup producer associations, both Canada and the United States have altered their laws regarding the classification of maple syrup to be uniform. Whereas in the past each state or province had its own laws on the classification of maple syrup, now those laws define a unified grading system. This had been a work in progress for several years, and the new grading system was largely finalized in 2014. The Canadian Food Inspection Agency (CFIA) announced in the Canada Gazette on 28 June 2014 that rules for the sale of maple syrup would be amended to include new descriptors, at the request of the IMSI. As of 31 December 2014, the CFIA and as of 2 March 2015, the
the box elder or Manitoba maple (Acer negundo), the silver maple (A. saccharinum), and the bigleaf maple (A. macrophyllum). In the Southeastern United States, Florida sugar maple (Acer floridanum) is occasionally used for maple syrup production. Similar syrups may also be produced from walnut, birch, or palm trees, among other sources. History Indigenous peoples Indigenous peoples living in northeastern North America were the first groups known to have produced maple syrup and maple sugar. According to Indigenous oral traditions, as well as archaeological evidence, maple tree sap was being processed into syrup long before Europeans arrived in the region. There are no authenticated accounts of how maple syrup production and consumption began, but various legends exist; one of the most popular involves maple sap being used in place of water to cook venison served to a chief. Indigenous tribes developed rituals around sugar-making, celebrating the Sugar Moon (the first full moon of spring) with a Maple Dance. Many aboriginal dishes replaced the salt traditional in European cuisine with maple sugar or syrup. The Algonquians recognized maple sap as a source of energy and nutrition. At the beginning of the spring thaw, they made V-shaped incisions in tree trunks; they then inserted reeds or concave pieces of bark to run the sap into clay buckets or tightly woven birch-bark baskets. The maple sap was concentrated first by leaving it exposed to the cold temperatures overnight and disposing of the layer of ice that formed on top. Following that, the sap was transported by sled to large fires where it was boiled in clay pots to produce maple syrup. Often, multiple pots were used in conjunction, with the liquid being transferred between them as it grew more concentrated. Contrary to popular belief, syrup was not produced by dropping heated stones into wooden bowls. 
European colonists In the early stages of European colonization in northeastern North America, local Indigenous peoples showed the arriving colonists how to tap the trunks of certain types of maples during the spring thaw to harvest the sap. André Thevet, the "Royal Cosmographer of France", wrote about Jacques Cartier drinking maple sap during his Canadian voyages. By 1680, European settlers and fur traders were involved in harvesting maple products. However, rather than making incisions in the bark, the Europeans used the method of drilling tapholes in the trunks with augers. Prior to the 19th century, processed maple sap was used primarily as a source of concentrated sugar, in both liquid and crystallized-solid form, as cane sugar had to be imported from the West Indies. Maple sugaring parties typically began to operate at the start of the spring thaw in regions of woodland with sufficiently large numbers of maples. Syrup makers first bored holes in the trunks, usually more than one hole per large tree; they then inserted wooden spouts into the holes and hung a wooden bucket from the protruding end of each spout to collect the sap. The buckets were commonly made by cutting cylindrical segments from a large tree trunk and then hollowing out each segment's core from one end of the cylinder, creating a seamless, watertight container. Sap filled the buckets, and was then either transferred to larger holding vessels (barrels, large pots, or hollowed-out wooden logs), often mounted on sledges or wagons pulled by draft animals, or carried in buckets or other convenient containers. The sap-collection buckets were returned to the spouts mounted on the trees, and the process was repeated for as long as the flow of sap remained "sweet". The specific weather conditions of the thaw period were, and still are, critical in determining the length of the sugaring season. 
As the weather continues to warm, a maple tree's normal early spring biological process eventually alters the taste of the sap, making it unpalatable, perhaps due to an increase in amino acids. The boiling process was very time-consuming. The harvested sap was transported back to the party's base camp, where it was then poured into large vessels (usually made from metal) and boiled to achieve the desired consistency. The sap was usually transported using large barrels pulled by horses or oxen to a central collection point, where it was processed either over a fire built out in the open or inside a shelter built for that purpose (the "sugar shack"). Since 1850 Around the time of the American Civil War (1861–1865), syrup makers started using large, flat sheet metal pans as they were more efficient for boiling than heavy, rounded iron kettles, because of a greater surface area for evaporation.
(album), a 2000 album by rapper Kool Keith Matthew (elm cultivar), a cultivar of the Chinese Elm Ulmus parvifolia Hurricane Matthew, a former hurricane in the Atlantic Ocean. Christianity Matthew the Apostle, one of the apostles of Jesus Gospel of Matthew, a book of
Boy, a young male human, usually a child or adolescent Gentleman, any man of good, courteous conduct Male connector, in hardware and electronics Masculine gender, in languages with grammatical gender Male as norm, the perception that the corresponding female category is a derivation Art and entertainment Male (film), a 2015 Indian film Male (Foetus album), a 1992 live album by Foetus Male (Natalie Imbruglia album), a 2015 studio album by Natalie Imbruglia , a German band Il Male, an Italian satirical magazine published between 1978 and 1982 Places Malé, the capital of the Maldives Malé Island, the
in France Male, Belgium, a quarter in Bruges Male, Vikramgad, a village in Maharashtra, India Male (woreda), a woreda in Ethiopia Males, Crete, a village in Greece Maleš (mountain), a mountain in Bulgaria and Northern Macedonia Male, Mauritania, a town in Mauritania Other uses Male language, several languages Maale people, an ethnic group of Ethiopia Male (surname) Medium-altitude long-endurance unmanned aerial vehicle, an unmanned aerial vehicle malE, a bacterial gene encoding maltose-binding protein People with the name Male (surname) (including a list of people with the name) Male Rao Holkar (1745–1767), Maharaja of Indore The Malês, as in the Malê revolt Male Sa'u (born 1987), Japanese professional rugby union footballer See also Female (disambiguation) Male and
ā, ǟ, ē, ī, ō, ȱ, ȭ and ū are separate letters that sort in alphabetical order immediately after a, ä, e, i, o, ȯ, õ, and u, respectively. Samogitian. ā, ē, ė̄, ī, ū and ō are separate letters that sort in alphabetical order immediately after a, e, ė, i, u and o respectively. Transcriptions of Nahuatl, the Aztecs' language, spoken in Mexico. When the Spanish conquistadors arrived, they wrote the language in their own alphabet without distinguishing long vowels. Over a century later, in 1645, Horacio Carochi defined macrons to mark long vowels ā, ē, ī and ō, and short vowels with grave (`) accents. This is rare nowadays since many people write Nahuatl without any orthographic sign and with the letters k, s and w, not present in the original alphabet. Modern transcriptions of Old English, for long vowels. Latin transliteration of Pali and Sanskrit, and in the IAST and ISO 15919 transcriptions of Indo-Aryan and Dravidian languages. Polynesian languages: Cook Islands Māori. In Cook Islands Māori, the macron or mākarōna is not commonly used in writing, but is used in references and teaching materials for those learning the language. Hawaiian. The macron is called kahakō, and it indicates vowel length, which changes meaning and the placement of stress. Māori. In modern written Māori, the macron is used to designate long vowels, with the trema mark sometimes used if the macron is unavailable (e.g. "Mäori"). The Māori word for macron is tohutō. The term pōtae ("hat") is also used. In the past, writing in Māori either did not distinguish vowel length, or doubled long vowels (e.g. "Maaori"), as some iwi dialects still do. Niuean. In Niuean, "popular spelling" does not worry too much about vowel quantity (length), so the macron is primarily used in scholarly study of the language. Tahitian. The use of the macron is comparatively recent in Tahitian. 
The Fare Vānaa or Académie Tahitienne (Tahitian Academy) recommends using the macron, called the tārava, to represent long vowels in written text, especially for scientific or teaching texts, and it has widespread acceptance. (In the past, written Tahitian either did not distinguish vowel length, or used multiple other ways). Tongan and Samoan. The macron is called the toloi/fakamamafa or fa'amamafa, respectively. Its usage is similar to that in Māori, including its substitution by a trema. Its usage is not universal in Samoan, but recent academic publications and advanced study textbooks promote its use. The macron is used in Fijian language dictionaries, in instructional materials for non-Fijian speakers, and in books and papers on Fijian linguistics. It is not typically used in Fijian publications intended for fluent speakers, where context is usually sufficient for a reader to distinguish between heteronyms. Both Cyrillic and Latin transcriptions of Udege. The Latin and Cyrillic alphabet transcriptions of the Tsebari dialect of Tsez. In western Cree, Sauk, and Saulteaux, the Algonquianist Standard Roman Orthography (SRO) indicates long vowels either with a circumflex ⟨â ê î ô⟩ or with a macron ⟨ā ē ī ō⟩. Tone The following languages or alphabets use the macron to mark tones: In the International Phonetic Alphabet, a macron over a vowel indicates a mid-level tone. In Pinyin, the official Romanization of Mandarin Chinese, macrons over a, e, i, o, u, ü (ā, ē, ī, ō, ū, ǖ) indicate the high level tone of Mandarin Chinese. The alternative to the macron is the number 1 after the syllable (for example, tā = ta1). Similarly in the Yale romanization of Cantonese, macrons over a, e, i, o, u, m, n (ā, ē, ī, ō, ū, m̄, n̄) indicate the high level tone of Cantonese. Like Mandarin, the alternative to the macron is the number 1 after the syllable (for example, tā = ta1).
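The macron-to-number correspondence described above can be mechanized with Unicode decomposition: NFD normalization splits a precomposed vowel such as ā (U+0101) into a plus U+0304 (combining macron), and it equally handles letters like the Yale Cantonese m̄, which exists only as a base letter plus combining mark. A minimal sketch covering only tone 1, the tone the macron marks; the function name is illustrative:

```python
import unicodedata

MACRON = "\u0304"  # COMBINING MACRON

def to_tone_number(syllable: str) -> str:
    """Rewrite a first-tone Pinyin/Yale syllable from macron notation to
    number notation (e.g. "tā" -> "ta1").  Only the macron (tone 1) is
    handled in this sketch; other tone marks would need similar cases."""
    decomposed = unicodedata.normalize("NFD", syllable)
    if MACRON not in decomposed:
        return syllable  # no macron: leave the syllable untouched
    stripped = decomposed.replace(MACRON, "")
    # Recompose so diacritics that remain (e.g. the diaeresis of ǖ)
    # come back as precomposed characters where possible.
    return unicodedata.normalize("NFC", stripped) + "1"

print(to_tone_number("tā"))  # -> ta1
print(to_tone_number("m̄"))   # -> m1 (m̄ has no precomposed codepoint)
```

The same decomposition trick works in reverse: appending U+0304 to the nucleus vowel and normalizing to NFC yields the macron form.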
In Pe̍h-ōe-jī romanization of Hokkien, macrons over a, e, i, m, n, o, o͘, u, (ā, ē, ī, m̄, n̄, ō, ō͘, ū) indicate the mid level tone ("light departing" or 7th tone) of Hokkien. Omission Sometimes the macron marks an omitted n or m, like the tilde: In Old English texts a macron above a letter indicates the omission of an m or n that would normally follow that letter. In older handwriting such as the German Kurrentschrift, the macron over an a-e-i-o-u or ä-ö-ü stood for an n, or over an m or an n meant that the letter was doubled. This continued into print in English in the sixteenth century, and to some extent in German. Over a u at the end of a word, the macron indicated um as a form of scribal abbreviation. Letter extension In romanizations of Hebrew, the macron below is typically used to mark the begadkefat consonant lenition. However, for typographical reasons a regular macron is used on p and g instead: p̄, ḡ. The macron is used in the orthography of a number of vernacular languages of the Solomon Islands and Vanuatu, particularly those first transcribed by Anglican missionaries. The macron has no unique value, and is simply used to distinguish between two different phonemes. Thus, in several languages of the Banks Islands, including Mwotlap, the simple m stands for , but an m with a macron (m̄) is a rounded labial-velar nasal ; while the simple n stands for the common alveolar nasal , an n with macron (n̄) represents the velar nasal ; the vowel ē stands for a (short) higher by contrast with plain e ; likewise ō contrasts with plain o . In Hiw orthography, the consonant r̄ stands
n, nj, and r. Languages with this feature include standard and dialect varieties of Serbo-Croatian, Slovene, and Bulgarian. Transcriptions of Arabic typically use macrons to indicate long vowels – (alif when pronounced ), (waw, when pronounced or ), and (ya', when pronounced or ). Thus the Arabic word (three) is transliterated thalāthah. Transcriptions of Sanskrit typically use a macron over ā, ī, ū, ṝ, and ḹ in order to mark a long vowel (e and o are always long and consequently do not need any macron). In Latin, many of the more recent dictionaries and learning materials use the macron as the modern equivalent of the ancient Roman apex to mark long vowels. Any of the six vowel letters (ā, ē, ī, ō, ū, ȳ) can bear it. It is sometimes used in conjunction with the breve, especially to distinguish the short vowels and from their semi-vowel counterparts and , originally, and often to this day, spelt with the same letters. However, the older of these editions are not always explicit on whether they mark long vowels or heavy syllables – a confusion that is even found in some modern learning materials. In addition, most of the newest academic publications use both the macron and the breve sparingly, mainly when vowel length is relevant to the discussion. In romanization of classical Greek, the letters η (eta) and ω (omega) are transliterated, respectively, as ē and ō, representing the long vowels of classical Greek, whereas the short vowels ε (epsilon) and ο (omicron) are always transliterated as plain e and o. The other long vowel phonemes don't have dedicated letters in the Greek alphabet, being indicated by digraphs (transliterated likewise as digraphs) or by the letters α, ι, υ – represented as ā, ī, ū. The same three letters are transliterated as plain a, i, u when representing short vowels. The Hepburn romanization system of Japanese, for example, kōtsū (, ) "traffic" as opposed to kotsu (, ) "bone" or "knack".
The Syriac language uses macrons to indicate long vowels in its romanized transliteration: ā for , ē for , ū for and ō for . Baltic languages and Baltic-Finnic languages: Latvian. ā, ē, ī, ū are separate letters but are given the same position in collation as a, e, i, u respectively. Ō was also used in Latvian, but it was discarded as of 1946. Some usage remains in Latgalian. Lithuanian. ū is a separate letter but is given the same position in collation as the unaccented u. It marks a long vowel; other long vowels are indicated with an ogonek (which used to indicate nasalization, but it no longer does): ą, ę, į, ų and o being always long in Lithuanian except for some recent loanwords. For the long counterpart of i, y is used.
the Greater Middle East. The ones found in Europe and North America appear to have various styles, but most are built on Western architectural designs; some are former churches or other buildings that were used by non-Muslims. In Africa, most mosques are old but the new ones are built in imitation of those of the Middle East. This can be seen in the Abuja National Mosque in Nigeria and others. Islam forbids figurative art, on the grounds that the artist must not imitate God's creation. Mosques are, therefore, decorated with abstract patterns and beautiful inscriptions. Decoration is often concentrated around doorways and the miḥrāb. Tiles are used widely in mosques. They lend themselves to pattern-making, can be made with beautiful subtle colors, and can create a cool atmosphere, an advantage in the hot Arab countries. Quotations from the Quran often adorn mosque interiors. These texts are meant to inspire people by their beauty, while also reminding them of the words of Allah. Prayer hall The prayer hall, also known as the muṣallá (), rarely has furniture; chairs and pews are generally absent from the prayer hall so as to allow as many worshipers as possible to line the room. Some mosques have Islamic calligraphy and Quranic verses on the walls to assist worshippers in focusing on the beauty of Islam and its holiest book, the Quran, as well as for decoration. Often, a limited part of the prayer hall is sanctified formally as a masjid in the sharia sense (although the term masjid is also used for the larger mosque complex). Once designated, there are onerous limitations on the use of this masjid, and it may not be used for any purpose other than worship; these restrictions do not necessarily apply to the rest of the prayer area or to the rest of the mosque complex (although such uses may be restricted by the conditions of the waqf that owns the mosque).
In many mosques, especially the early congregational mosques, the prayer hall is in the hypostyle form (the roof held up by a multitude of columns). One of the finest examples of the hypostyle-plan mosques is the Great Mosque of Kairouan (also known as the Mosque of Uqba) in Tunisia. Usually opposite the entrance to the prayer hall is the qiblah wall, the visually emphasized area inside the prayer hall. The qiblah wall should, in a properly oriented mosque, be set perpendicular to a line leading to Mecca, the location of the Kaaba. Congregants pray in rows parallel to the qiblah wall and thus arrange themselves so they face Mecca. In the qiblah wall, usually at its center, is the mihrab, a niche or depression indicating the direction of Mecca. Usually the mihrab is not occupied by furniture either. A raised minbar or pulpit is located to the right side of the mihrab for a Khaṭīb, or some other speaker, to offer a Khuṭbah (Sermon) during Friday prayers. The mihrab serves as the location where the imam leads the five daily prayers on a regular basis. To the left of the mihrab, in the front left corner of the mosque, there is sometimes a kursu (Turkish , Bosnian ), a small elevated platform (rarely with a chair or other type of seat) used for less formal preaching and speeches. Makhphil Women who pray in mosques are separated from men there. Their part for prayer is called the makhphil or maqfil (Bosnian ). It is located above the main prayer hall, elevated toward the back as a stair-accessed gallery or platform. It usually has a perforated fence at the front, through which the imam (and male worshippers in the main hall) can be partially seen. The makhphil is used entirely by men when Jumu'ah is practised, due to lack of space. Mihrab A miḥrāb, also spelled mehrab, is a semicircular niche in the wall of a mosque that faces the qiblah (i.e., the "front" of the mosque); the imam stands in this niche and leads prayer.
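The orientation requirement above can be made concrete: the qibla at any location is the initial great-circle bearing toward the Kaaba, and the qiblah wall is set perpendicular to it. A minimal sketch using the standard forward-azimuth formula; the coordinates are approximate, the function name is illustrative, and historical mosques were often oriented by traditional methods rather than this computation:

```python
import math

KAABA_LAT, KAABA_LON = 21.4225, 39.8262  # approximate coordinates, Mecca

def qibla_bearing(lat: float, lon: float) -> float:
    """Initial great-circle bearing, in degrees clockwise from true
    north, from (lat, lon) toward the Kaaba (forward-azimuth formula)."""
    p1, p2 = math.radians(lat), math.radians(KAABA_LAT)
    dlon = math.radians(KAABA_LON - lon)
    y = math.sin(dlon) * math.cos(p2)
    x = (math.cos(p1) * math.sin(p2)
         - math.sin(p1) * math.cos(p2) * math.cos(dlon))
    return math.degrees(math.atan2(y, x)) % 360.0

# From a point due west of Mecca at the same latitude, the great circle
# bows slightly poleward, so the bearing comes out just under due east:
print(round(qibla_bearing(21.4225, 30.0), 1))
```

Note that the great-circle bearing can differ noticeably from the straight rhumb-line direction on a map, which is one historical source of disagreement over mosque orientation.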
Given that the imam typically stands alone in the frontmost row, this niche's practical effect is to save unused space. The minbar is a pulpit from which the Friday sermon is delivered. While the minbar of Muhammad was a simple chair, later it became larger and attracted artistic attention. Some remained made of wood, albeit exquisitely carved, while others were made of marble and featured friezes. Minarets A common feature in mosques is the minaret, the tall, slender tower that is usually situated at one of the corners of the mosque structure. The top of the minaret is always the highest point in mosques that have one, and often the highest point in the immediate area. The tallest minaret in the world is located at the Hassan II Mosque in Casablanca, Morocco. It has a height of and, completed in 1993, was designed by Michel Pinseau. The first minaret was constructed in 665 in Basra during the reign of the Umayyad caliph Muawiyah I. Muawiyah encouraged the construction of minarets, as they were supposed to bring mosques on par with Christian churches with their bell towers. Consequently, mosque architects borrowed the shape of the bell tower for their minarets, which were used for essentially the same purpose—calling the faithful to prayer. The oldest standing minaret in the world is the minaret of the Great Mosque of Kairouan in Tunisia; built between the 8th and the 9th century, it is a massive square tower consisting of three superimposed tiers of gradual size and decor. Before the five required daily prayers, a Mu’adhdhin () calls the worshippers to prayer from the minaret. In countries such as Singapore, where Muslims are not in the majority, mosques are prohibited from loudly broadcasting the Adhān (, Call to Prayer), although it is supposed to be said loudly to the surrounding community. The adhan is required before every prayer.
However, nearly every mosque assigns a muezzin for each prayer to say the adhan, as it is a recommended practice or Sunnah () of the Islamic prophet Muhammad. At mosques that do not have minarets, the adhan is called instead from inside the mosque or somewhere else on the ground. The Iqâmah (), which is similar to the adhan and proclaimed right before the commencement of prayers, is usually not proclaimed from the minaret even if a mosque has one. Domes The domes, often placed directly above the main prayer hall, may signify the vaults of the heaven and sky. As time progressed, domes grew, from occupying a small part of the roof near the mihrab to encompassing the whole roof above the prayer hall. Although domes normally took on the shape of a hemisphere, the Mughals in India popularized onion-shaped domes in South Asia, which have gone on to become characteristic of the Arabic architectural style of dome. Some mosques have multiple, often smaller, domes in addition to the main large dome that resides at the center. Ablution facilities As ritual purification precedes all prayers, mosques often have ablution fountains or other facilities for washing in their entryways or courtyards. However, worshippers at much smaller mosques often have to use restrooms to perform their ablutions. In traditional mosques, this function is often elaborated into a freestanding building in the center of a courtyard. This desire for cleanliness extends to the prayer halls, where shoes may not be worn anywhere other than the cloakroom. Thus, foyers with shelves for shoes and racks to hold coats are commonplace among mosques. Contemporary features Modern mosques have a variety of amenities available to their congregants. As mosques are supposed to appeal to the community, they may also have additional facilities, from health clinics and libraries to gymnasiums, to serve the community.
Symbols Certain symbols are represented in a mosque's architecture to allude to different aspects of the Islamic religion. One such symbol is the spiral. The "cosmic spiral" found in designs and on minarets is a reference to heaven as it has "no beginning and no end". Mosques also often have floral patterns or images of fruit and vegetables. These are allusions to the paradise after death. Rules and etiquette Mosques, in accordance with Islamic practices, institute a number of rules intended to keep Muslims focused on worshiping God. While there are several rules, such as those regarding not allowing shoes in the prayer hall, that are universal, there are many other rules that are dealt with and enforced in a variety of ways from mosque to mosque. Prayer leader (Imam) Appointment of a prayer leader is considered desirable, but not always obligatory. The permanent prayer leader (imam) must be a free, honest individual who is authoritative in religious matters. In mosques constructed and maintained by the government, the prayer leader is appointed by the ruler; in private mosques, however, the appointment is made by members of the congregation through majority voting. According to the Hanafi school of Islamic jurisprudence, the individual who built the mosque has a stronger claim to the title of imam, but this view is not shared by the other schools. Leadership at prayer falls into three categories, depending on the type of prayer: five daily prayers, Friday prayer, or optional prayers. According to the Hanafi and Maliki schools of Islamic jurisprudence, appointment of a prayer leader for Friday service is mandatory because otherwise the prayer is invalid. The Shafi'i and Hanbali schools, however, argue that the appointment is not necessary and the prayer is valid as long as it is performed in a congregation. A slave may lead a Friday prayer, but Muslim authorities disagree over whether the job can be done by a minor.
An imam appointed to lead Friday prayers may also lead at the five daily prayers; Muslim scholars agree that the leader appointed for five daily services may lead the Friday service as well. All Muslim authorities hold the consensus opinion that only men may lead prayer for men. Nevertheless, women prayer leaders are allowed to lead prayer in front of all-female congregations. Cleanliness All mosques have rules regarding cleanliness, as it is an essential part of the worshippers' experience. Muslims before prayer are required to cleanse themselves in an ablution process known as wudu. However, rules still apply even to those who enter the prayer hall of a mosque without the intention of praying. Shoes must not be worn inside the carpeted prayer hall. Some mosques will also extend that rule to include other parts of the facility even if those other locations are not devoted to prayer. Congregants and visitors to mosques are themselves supposed to be clean. It is also undesirable to come to the mosque after eating something that smells, such as garlic. Dress Islam requires that its adherents wear clothes that portray modesty. Men are supposed to come to the mosque wearing loose and clean clothes that do not reveal the shape of the body. Likewise, it is recommended that women at a mosque wear loose clothing that covers to the wrists and ankles, and cover their heads with a Ḥijāb (), or other covering. Many Muslims, regardless of their ethnic background, wear Middle Eastern clothing associated with Arabic Islam to special occasions and prayers at mosques. Concentration As mosques are places of worship, those within the mosque are required to remain respectful to those in prayer. Loud talking within the mosque, as well as discussion of topics deemed disrespectful, is forbidden in areas where people are praying. In addition, it is disrespectful to walk in front of or otherwise disturb Muslims in prayer.
The walls within the mosque have few items, except for possibly Islamic calligraphy, so Muslims in prayer are not distracted. Muslims are also discouraged from wearing clothing with distracting images and symbols so as not to divert the attention of those standing behind them during prayer. In many mosques, even the carpeted prayer area has no designs, its plainness helping worshippers to focus. Gender separation There is nothing written in the Qur'an about the issue of space in mosques and gender separation. However, traditional rules have segregated women and men. By traditional rules, women are most often told to occupy the rows behind the men. In part, this was a practical matter as the traditional posture for prayer (kneeling on the floor, head to the ground) made mixed-gender prayer uncomfortably revealing for many women and distracting for some men. Traditionalists try to argue that Muhammad preferred women to pray at home rather than at a mosque, and they cite a ḥadīth in which Muhammad supposedly said: "The best mosques for women are the inner parts of their houses," although women were active participants in the mosque started by Muhammad. Muhammad told Muslims not to forbid women from entering mosques. They are allowed to go in. The second Sunni caliph 'Umar at one time prohibited women from attending mosques especially at night because he feared they might be sexually harassed or assaulted by men, so he required them to pray at home. Sometimes a special part of the mosque was railed off for women; for example, the governor of Mecca in 870 had ropes tied between the columns to make a separate place for women. Many mosques today will put the women behind a barrier or partition or in another room. Mosques in South and Southeast Asia put men and women in separate rooms, as the divisions were built into them centuries ago.
In nearly two-thirds of American mosques, women pray behind partitions or in separate areas, not in the main prayer hall; some mosques do not admit women at all due to the lack of space and the fact that some prayers, such as the Friday Jumuʻah, are mandatory for men but optional for women. Although there are sections exclusively for women and children, the Grand Mosque in Mecca is desegregated. Non-Muslims Under most interpretations of sharia, non-Muslims are permitted to enter mosques provided that they respect the place and the people inside it. A dissenting opinion and minority view is presented by followers of the Maliki school of Islamic jurisprudence, who argue that non-Muslims may not be allowed into mosques under any circumstances. The Quran addresses the subject of non-Muslims, and particularly polytheists, in mosques in two verses in its ninth chapter, Sura At-Tawba. The seventeenth verse of the chapter prohibits those who join gods with Allah—polytheists—from maintaining mosques: The twenty-eighth verse of the same chapter is more specific as it only considers polytheists in the Masjid al-Haram in Mecca: According to Ahmad ibn Hanbal, these verses were followed to the letter at the times of Muhammad, when Jews and Christians, considered monotheists, were still allowed into Al-Masjid Al-Haram. However, the Umayyad caliph Umar II later forbade non-Muslims from entering mosques, and his ruling remains in practice in present-day Saudi Arabia. Today, the decision on whether non-Muslims should be allowed to enter mosques varies. With few exceptions, mosques in the Arabian Peninsula as well as Morocco do not allow entry to non-Muslims. For example, the Hassan II Mosque in Casablanca is one of only two mosques in Morocco currently open to non-Muslims. However, there are
in mosque attendance by gender or age. Architecture Styles Arab-plan or hypostyle mosques are the earliest type of mosques, pioneered under the Umayyad Dynasty. These mosques have square or rectangular plans with an enclosed courtyard (sahn) and covered prayer hall. Historically, in the warm Middle Eastern and Mediterranean climates, the courtyard served to accommodate the large number of worshippers during Friday prayers. Most early hypostyle mosques had flat roofs on prayer halls, which required the use of numerous columns and supports. One of the most notable hypostyle mosques is the Great Mosque of Cordoba in Spain, the building being supported by over 850 columns. Frequently, hypostyle mosques have outer arcades (riwaq) so that visitors can enjoy the shade. Arab-plan mosques were constructed mostly under the Umayyad and Abbasid dynasties; subsequently, however, the simplicity of the Arab plan limited the opportunities for further development, the mosques consequently losing popularity. The first departure within mosque design started in Persia (Iran). The Persians had inherited a rich architectural legacy from the earlier Persian dynasties, and they began incorporating elements from earlier Parthian and Sassanid designs into their mosques, influenced by buildings such as the Palace of Ardashir and the Sarvestan Palace. Thus, Islamic architecture witnessed the introduction of such structures as domes and large, arched entrances, referred to as iwans. During Seljuq rule, as Islamic mysticism was on the rise, the four-iwan arrangement took form. The four-iwan format, finalized by the Seljuqs, and later inherited by the Safavids, firmly established the courtyard façade of such mosques, with the towering gateways at every side, as more important than the actual buildings themselves. They typically took the form of a square-shaped central courtyard with large entrances at each side, giving the impression of gateways to the spiritual world. 
The Persians also introduced Persian gardens into mosque designs. Soon, a distinctly Persian style of mosques started appearing that would significantly influence the designs of later Timurid, and also Mughal, mosque designs. The Ottomans introduced central dome mosques in the 15th century. These mosques have a large dome centered over the prayer hall. In addition to having a large central dome, a common feature is smaller domes that exist off-center over the prayer hall or throughout the rest of the mosque, where prayer is not performed. This style was heavily influenced by Byzantine architecture with its use of large central domes. Mosques built in Southeast Asia often represent the Indonesian-Javanese style architecture, which are different from the ones found throughout the Greater Middle East. The ones found in Europe and North America appear to have various styles but most are built on Western architectural designs, some are former churches or other buildings that were used by non-Muslims. In Africa, most mosques are old but the new ones are built in imitation of those of the Middle East. This can be seen in the Abuja National Mosque in Nigeria and others. Islam forbids figurative art, on the grounds that the artist must not imitate God's creation. Mosques are, therefore, decorated with abstract patterns and beautiful inscriptions. Decoration is often concentrated around doorways and the miḥrāb. Tiles are used widely in mosques. They lend themselves to pattern-making, can be made with beautiful subtle colors, and can create a cool atmosphere, an advantage in the hot Arab countries. Quotations from the Quran often adorn mosque interiors. These texts are meant to inspire people by their beauty, while also reminding them of the words of Allah. Prayer hall The prayer hall, also known as the muṣallá (), rarely has furniture; chairs and pews are generally absent from the prayer hall so as to allow as many worshipers as possible to line the room. 
Some mosques have Islamic calligraphy and Quranic verses on the walls to assist worshippers in focusing on the beauty of Islam and its holiest book, the Quran, as well as for decoration. Often, a limited part of the prayer hall is sanctified formally as a masjid in the sharia sense (although the term masjid is also used for the larger mosque complex as well). Once designated, there are onerous limitations on the use of this formally designated masjid, and it may not be used for any purpose other than worship; restrictions that do not necessarily apply to the rest of the prayer area, and to the rest of the mosque complex (although such uses may be restricted by the conditions of the waqf that owns the mosque). In many mosques, especially the early congregational mosques, the prayer hall is in the hypostyle form (the roof held up by a multitude of columns). One of the finest examples of the hypostyle-plan mosques is the Great Mosque of Kairouan (also known as the Mosque of Uqba) in Tunisia. Usually opposite the entrance to the prayer hall is the qiblah wall, the visually emphasized area inside the prayer hall. The qiblah wall should, in a properly oriented mosque, be set perpendicular to a line leading to Mecca, the location of the Kaaba. Congregants pray in rows parallel to the qiblah wall and thus arrange themselves so they face Mecca. In the qiblah wall, usually at its center, is the mihrab, a niche or depression indicating the direction of Mecca. Usually the mihrab is not occupied by furniture either. A raised minbar or pulpit is located to the right side of the mihrab for a Khaṭīb, or some other speaker, to offer a Khuṭbah (Sermon) during Friday prayers. The mihrab serves as the location where the imam leads the five daily prayers on a regular basis. 
Left to the mihrab, in the front left corner of the mosque, sometimes there is a kursu (Turkish , Bosnian ), a small elevated plateau (rarely with a chair or other type of seat) used for less formal preaching and speeches. Makhphil Women who pray in mosques are separated from men there. Their part for prayer is called makhphil or maqfil (Bosnian ). It is located above the main prayer hall, elevated in the background as stairs-separated gallery or plateau (surface-shortened to the back relative to the bottom main part). It usually has a perforated fence at the front, through which imam (and male prayers in the main hall) can be partially seen. Makhphil is completely used by men when Jumu'ah is practised (due to lack of space). Mihrab A miḥrāb, also spelled as mehrab is a semicircular niche in the wall of a mosque that faces the qiblah (i.e the "front" of the mosque); the imam stands in this niche and leads prayer. Given that the imam typically stands alone in the frontmost row, this niche's practical effect is to save unused space. The minbar is a pulpit from which the Friday sermon is delivered. While the minbar of Muhammad was a simple chair, later it became larger and attracted artistic attention. Some remained made of wood, albeit exquisitely carved, while others were made of marble and featured friezes. Minarets A common feature in mosques is the minaret, the tall, slender tower that usually is situated at one of the corners of the mosque structure. The top of the minaret is always the highest point in mosques that have one, and often the highest point in the immediate area. The tallest minaret in the world is located at the Hassan II Mosque in Casablanca, Morocco. It has a height of and completed in 1993, it was designed by Michel Pinseau. The first minaret was constructed in 665 in Basra during the reign of the Umayyad caliph Muawiyah I. 
Muawiyah encouraged the construction of minarets, as they were supposed to bring mosques on par with Christian churches and their bell towers. Consequently, mosque architects borrowed the shape of the bell tower for their minarets, which were used for essentially the same purpose: calling the faithful to prayer. The oldest standing minaret in the world is the minaret of the Great Mosque of Kairouan in Tunisia; built between the 8th and the 9th century, it is a massive square tower consisting of three superimposed tiers of gradual size and decor. Before the five required daily prayers, a mu’adhdhin calls the worshippers to prayer from the minaret. In many countries where Muslims are not the majority, such as Singapore, mosques are prohibited from loudly broadcasting the adhān (call to prayer), although it is supposed to be said loudly to the surrounding community. The adhan is required before every prayer. However, nearly every mosque assigns a muezzin for each prayer to say the adhan, as it is a recommended practice or sunnah of the Islamic prophet Muhammad. At mosques that do not have minarets, the adhan is called instead from inside the mosque or somewhere else on the ground. The iqâmah, which is similar to the adhan and proclaimed right before the commencement of prayers, is usually not proclaimed from the minaret even if a mosque has one.

Domes

The domes, often placed directly above the main prayer hall, may signify the vaults of heaven and sky. As time progressed, domes grew, from occupying a small part of the roof near the mihrab to encompassing the whole roof above the prayer hall. Although domes normally took on the shape of a hemisphere, the Mughals in India popularized onion-shaped domes in South Asia, which have gone on to become characteristic of the Arabic architectural style of dome. Some mosques have multiple, often smaller, domes in addition to the main large dome at the center.
Ablution facilities

As ritual purification precedes all prayers, mosques often have ablution fountains or other facilities for washing in their entryways or courtyards. However, worshippers at much smaller mosques often have to use restrooms to perform their ablutions. In traditional mosques, this function is often elaborated into a freestanding building in the center of a courtyard. This desire for cleanliness extends to the prayer halls, where shoes may not be worn anywhere other than the cloakroom. Thus, foyers with shelves for shoes and racks for coats are commonplace among mosques.

Contemporary features

Modern mosques have a variety of amenities available to their congregants. As mosques are supposed to appeal to the community, they may also have additional facilities, from health clinics and libraries to gymnasiums, to serve the community.

Symbols

Certain symbols are represented in a mosque's architecture to allude to different aspects of the Islamic religion. One such symbol is the spiral: the "cosmic spiral" found in designs and on minarets is a reference to heaven, as it has "no beginning and no end". Mosques also often have floral patterns or images of fruit and vegetables. These are allusions to the paradise after death.

Rules and etiquette

Mosques, in accordance with Islamic practices, institute a number of rules intended to keep Muslims focused on worshiping God. While several rules, such as the prohibition on shoes in the prayer hall, are universal, many others are handled and enforced in a variety of ways from mosque to mosque.

Prayer leader (Imam)

Appointment of a prayer leader is considered desirable, but not always obligatory. The permanent prayer leader (imam) must be a free, honest individual who is authoritative in religious matters.
In mosques constructed and maintained by the government, the prayer leader is appointed by the ruler; in private mosques, however, the appointment is made by members of the congregation through majority voting. According to the Hanafi school of Islamic jurisprudence, the individual who built the mosque has a stronger claim to the title of imam, but this view is not shared by the other schools. Leadership at prayer falls into three categories, depending on the type of prayer: the five daily prayers, Friday prayer, or optional prayers. According to the Hanafi and Maliki schools of Islamic jurisprudence, appointment of a prayer leader for Friday service is mandatory, because otherwise the prayer is invalid. The Shafi'i and Hanbali schools, however, argue that the appointment is not necessary and the prayer is valid as long as it is performed in a congregation. A slave may lead a Friday prayer, but Muslim authorities disagree over whether the job can be done by a minor. An imam appointed to lead Friday prayers may also lead the five daily prayers; Muslim scholars agree that the leader appointed for the five daily services may lead the Friday service as well. All Muslim authorities hold the consensus opinion that only men may lead prayer for men. Nevertheless, women prayer leaders are allowed to lead prayer in front of all-female congregations.

Cleanliness

All mosques have rules regarding cleanliness, as it is an essential part of the worshippers' experience. Muslims are required to cleanse themselves before prayer in an ablution process known as wudu. However, even for those who enter the prayer hall of a mosque without the intention of praying, there are still rules that apply. Shoes must not be worn inside the carpeted prayer hall. Some mosques extend that rule to other parts of the facility, even if those locations are not devoted to prayer. Congregants and visitors to mosques are themselves expected to be clean.
It is also undesirable to come to the mosque after eating something that smells, such as garlic.

Dress

Islam requires that its adherents wear clothes that portray modesty. Men are supposed to come to the mosque wearing loose and clean clothes that do not reveal the shape of the body. Likewise, it is recommended that women at a mosque wear loose clothing that covers to the wrists and ankles, and cover their heads with a ḥijāb or other covering. Many Muslims, regardless of their ethnic background, wear Middle Eastern clothing associated with Arabic Islam on special occasions and at prayers at mosques.

Concentration

As mosques are places of worship, those within the mosque are required to remain respectful to those in prayer. Loud talking within the mosque, as well as discussion of topics deemed disrespectful, is forbidden in areas where people are praying. In addition, it is disrespectful to walk in front of or otherwise disturb Muslims in prayer. The walls within the mosque have few items, except possibly Islamic calligraphy, so that Muslims in prayer are not distracted. Muslims are also discouraged from wearing clothing with distracting images and symbols so as not to divert the attention of those standing behind them during prayer. In many mosques, even the carpeted prayer area has no designs, its plainness helping worshippers to focus.

Gender separation

There is nothing written in the Qur'an about the issue of space in mosques and gender separation. However, traditional rules have segregated women and men; by traditional rules, women are most often told to occupy the rows behind the men. In part, this was a practical matter: the traditional posture for prayer, kneeling on the floor with the head to the ground, made mixed-gender prayer uncomfortably revealing for many women and distracting for some men.
Traditionalists argue that Muhammad preferred women to pray at home rather than at a mosque, citing a ḥadīth in which Muhammad supposedly said: "The best mosques for women are the inner parts of their houses," although women were active participants in the mosque started by Muhammad. Muhammad told Muslims not to forbid women from entering mosques; they are allowed to go in. The second Sunni caliph, 'Umar, at one time prohibited women from attending mosques, especially at night, because he feared they might be sexually harassed or assaulted by men, so he required them to pray at home. Sometimes a special part of the mosque was railed off for women; for example, the governor of Mecca in 870 had ropes tied between the columns to make a separate place for women. Many mosques today place the women behind a barrier or partition or in another room. Mosques in South and Southeast Asia put men and women in separate rooms, as the divisions were built into them centuries ago. In nearly two-thirds of American mosques, women pray behind partitions or in separate areas, not in the main prayer hall; some mosques do not admit women at all, owing to lack of space and to the fact that some prayers, such as the Friday Jumuʻah, are mandatory for men but optional for women. Although there are sections exclusively for women and children, the Grand Mosque in Mecca is desegregated.

Non-Muslims

Under most interpretations of sharia, non-Muslims are permitted to enter mosques provided that they respect the place and the people inside it. A dissenting, minority view is held by followers of the Maliki school of Islamic jurisprudence, who argue that non-Muslims may not be allowed into mosques under any circumstances. The Quran addresses the subject of non-Muslims, and particularly polytheists, in mosques in two verses of its ninth chapter, Sura At-Tawba.
The seventeenth verse of the chapter prohibits those who join gods with Allah—polytheists—from maintaining mosques, while the twenty-eighth verse is more specific, as it considers only polytheists in the Masjid al-Haram in Mecca. According to Ahmad ibn Hanbal, these verses were followed to the letter in the time of Muhammad, when Jews and Christians, considered monotheists, were still allowed into Al-Masjid Al-Haram. However, the Umayyad caliph Umar II later forbade non-Muslims from entering mosques, and his ruling remains in practice in present-day Saudi Arabia. Today, the decision on whether non-Muslims should be allowed to enter mosques varies. With few exceptions, mosques in the Arabian Peninsula, as well as in Morocco, do not allow entry to non-Muslims; the Hassan II Mosque in Casablanca, for example, is one of only two mosques in Morocco currently open to non-Muslims. However, there are also many other places, in the West as well as the Islamic world, where non-Muslims are welcome to enter mosques. Most mosques in the United States, for example, report receiving non-Muslim visitors every month. Many mosques throughout the United States welcome non-Muslims as a sign of openness to the rest of the community as well
of the galaxy show that the position of this gas correlates with the spiral arms of the galaxy. That molecular gas occurs predominantly in the spiral arms suggests that molecular clouds must form and dissociate on a timescale shorter than 10 million years—the time it takes for material to pass through the arm region. Perpendicular to the plane of the galaxy, the molecular gas inhabits the narrow midplane of the galactic disc with a characteristic scale height, Z, of approximately 50 to 75 parsecs, much thinner than the warm atomic (Z from 130 to 400 parsecs) and warm ionized (Z around 1,000 parsecs) gaseous components of the ISM. The exceptions to the ionized-gas distribution are H II regions, bubbles of hot ionized gas created in molecular clouds by the intense radiation given off by young massive stars; as such, they have approximately the same vertical distribution as the molecular gas. This distribution of molecular gas is averaged out over large distances; however, the small-scale distribution of the gas is highly irregular, with most of it concentrated in discrete clouds and cloud complexes.

Types of molecular cloud

Giant molecular clouds

A vast assemblage of molecular gas with more than 10 thousand times the mass of the Sun is called a giant molecular cloud (GMC). GMCs are around 15 to 600 light-years (5 to 200 parsecs) in diameter, with typical masses of 10 thousand to 10 million solar masses. Whereas the average density in the solar vicinity is one particle per cubic centimetre, the average density of a GMC is a hundred to a thousand times as great. Although the Sun is much denser than a GMC, the volume of a GMC is so great that it contains much more mass than the Sun. The substructure of a GMC is a complex pattern of filaments, sheets, bubbles, and irregular clumps. Filaments are truly ubiquitous in the molecular cloud. Dense molecular filaments will fragment into gravitationally bound cores, most of which will evolve into stars.
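The claim that a GMC far outweighs the Sun despite its much lower density can be checked with a back-of-envelope estimate from the figures quoted above. This is a rough sketch: the chosen 50 pc diameter and pure-H2 composition are illustrative assumptions, not values from the source.

```python
import math

PC_IN_CM = 3.086e18   # one parsec in centimetres
M_H2 = 3.34e-24       # mass of an H2 molecule in grams (assumed composition)
M_SUN = 1.989e33      # solar mass in grams

n = 100.0                      # particles per cm^3 (100x the solar vicinity)
radius = 25.0 * PC_IN_CM       # a mid-range 50 pc diameter
volume = (4.0 / 3.0) * math.pi * radius**3
mass_solar = n * M_H2 * volume / M_SUN
print(f"{mass_solar:.1e} solar masses")  # a few times 10^5 solar masses
```

The result falls squarely in the 10 thousand to 10 million solar-mass range cited for GMCs: even at a density a trillion-trillion times lower than water, a cloud tens of parsecs across contains hundreds of thousands of solar masses.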
Continuous accretion of gas, geometrical bending, and magnetic fields may control the detailed fragmentation of the filaments. In supercritical filaments, observations have revealed quasi-periodic chains of dense cores with a spacing of 0.15 parsec, comparable to the filament inner width. The densest parts of the filaments and clumps are called "molecular cores", while the densest molecular cores
are called "dense molecular cores" and have densities in excess of 10⁴ to 10⁶ particles per cubic centimetre. Observationally, typical molecular cores are traced with CO and dense molecular cores are traced with ammonia. The concentration of dust within molecular cores is normally sufficient to block light from background stars so that they appear in silhouette as dark nebulae. GMCs are so large that "local" ones can cover a significant fraction of a constellation; thus they are often referred to by the name of that constellation, e.g. the Orion Molecular Cloud (OMC) or the Taurus Molecular Cloud (TMC). These local GMCs are arrayed in a ring in the neighborhood of the Sun coinciding with the Gould Belt. The most massive collection of molecular clouds in the galaxy forms an asymmetrical ring about the galactic center at a radius of 120 parsecs; the largest component of this ring is the Sagittarius B2 complex. The Sagittarius region is chemically rich and is often used as an exemplar by astronomers searching for new molecules in interstellar space.

Small molecular clouds

Isolated gravitationally bound small molecular clouds with masses less than a few hundred times that of the Sun are called Bok globules.
The densest parts of small molecular clouds are equivalent to the molecular cores found in GMCs and are often included in the same
he was strongly encouraged by faculty member Lionel Pries. He earned money for his tuition by working five summers at an Alaskan salmon cannery, earning $50 a month plus 25 cents an hour in overtime pay. In part to escape anti-Japanese prejudice, he moved to Manhattan in 1934, with $40 and no job prospects. He wrapped dishes for an importing company until he found work as a draftsman and engineer. He enrolled at New York University for a master's degree in architecture and got a job with the architecture firm Shreve, Lamb & Harmon, designers of the Empire State Building. The firm helped Yamasaki avoid internment as a Japanese-American during World War II, and he himself sheltered his parents in New York City. After leaving Shreve, Lamb & Harmon, Yamasaki worked briefly for Harrison & Abramovitz and Raymond Loewy. In 1945, Yamasaki moved to Detroit, where he secured a position with Smith, Hinchman & Grylls. Yamasaki left the firm in 1949 and started his own partnership, working from Birmingham and Troy, Michigan. One of the first projects he designed at his own firm was Ruhl's Bakery at 7 Mile Road and Monica Street in Detroit.

Career

Yamasaki's first major project was the Pruitt–Igoe public housing project in St. Louis in 1955. Despite his love of traditional Japanese design and ornamentation, this was a stark, modernist concrete structure, severely constricted by a tight budget. The housing project experienced so many problems that it was demolished starting in 1972, less than twenty years after its completion. Its destruction would be considered by architectural historian Charles Jencks to be the symbolic end of modernist architecture. In 1955, he also designed the "sleek" terminal at Lambert–St. Louis International Airport, which led to his 1959 commission to design the Dhahran International Airport in Saudi Arabia.
In the 1950s, Yamasaki was commissioned by the Reynolds Company to design an aluminum-wrapped building in Southfield, Michigan, which would "symbolize the auto industry's past and future progress with aluminum." The three-story glass building wrapped in aluminum, known as the Reynolds Metals Company's Great Lakes Sales Headquarters Building, was also meant to reinforce the company's main product and showcase its qualities of strength and beauty. Yamasaki's first widely acclaimed design was the Pacific Science Center, with its iconic lacy and airy decorative arches, constructed by the City of Seattle for the 1962 Seattle World's Fair. The building raised his public profile so much that he was featured on the cover of Time magazine. In the post-war period, he created a number of office buildings which led to his innovative design of the towers of the World Trade Center in 1964; construction began on March 21, 1966, and the first of the towers was finished in 1970. Many of his buildings feature superficial details inspired by the pointed arches of Gothic architecture, and make use of extremely narrow vertical windows. This narrow-windowed style arose from his own personal fear of heights. One particular design challenge related to the efficacy of the World Trade Center's elevator system, which was unique in the world when it first opened for service. Yamasaki employed the fastest elevators of the time, running at per minute. Instead of placing a traditional large cluster of full-height elevator shafts in the core of each tower, Yamasaki created the Twin Towers' "Skylobby" system. The Skylobby design created three separate, connected elevator systems which served different zones of the building, depending on which floor was chosen, saving approximately 70% of the space which would have been required for traditional shafts. The space saved was then used for additional office space.
Internally, each office floor was a vast open space unimpeded by support columns, ready to be subdivided as the tenants might choose. In 1978, Yamasaki designed the Federal Reserve Bank tower in Richmond, Virginia. The work was designed with an external appearance similar to that of the World Trade Center complex, with its narrow fenestration, and now stands at . Yamasaki was a member of the Pennsylvania Avenue Commission, created in 1961 to restore the grand avenue in Washington, DC, but he resigned after disagreements and disillusionment with the design-by-committee approach. After partnering with Emery Roth and Sons on the design of the World Trade Center, the collaboration continued with other projects, including new buildings at Bolling Air Force Base in Washington, DC. The campus for the University of Regina was designed in tandem with Yamasaki's plan for Wascana Centre, a park built around Wascana Lake in Regina, Saskatchewan. The original campus design was approved in 1962. Yamasaki was awarded contracts to design the first three buildings: the Classroom Building, the Laboratory Building, and the Dr. John Archer Library, built between 1963 and 1967. Yamasaki designed two notable synagogues, North Shore Congregation Israel in Glencoe, Illinois (1964), and Temple Beth El in Bloomfield Hills, Michigan (1973). He also designed a number of buildings on the campus of Carleton College in Northfield, Minnesota between 1958 and 1968. After criticism of his dramatically cantilevered Rainier Tower (1977) in Seattle, Yamasaki became less adventurous in his designs during the last decade of his career.

Legacy

Despite the many buildings he completed, Yamasaki's reputation faded along with the overall decline of modernism towards the end of the 20th century.
Two of his major projects, the Pruitt–Igoe public housing complex and the original World Trade Center, shared the dubious symbolic distinction of being destroyed while recorded by live TV broadcasts. In many ways, these best-known works ran counter to Yamasaki's own design principles, and he later regretted his reluctant acceptance of architectural compromises dictated by the clients of these projects. Several others of his buildings have also been demolished. Yamasaki collaborated closely with structural engineers, including John Skilling, Leslie Robertson, and Jack Christiansen, to produce some of his innovative architectural designs. He strove to achieve "serenity, surprise, and delight" in his humanistic modernist buildings and their surroundings. Decades after his death, Yamasaki's buildings and legacy would be re-assessed more sympathetically by some architectural critics. Several of his buildings have now been restored in accordance with his original designs, and his McGregor Memorial Conference Center was awarded National Historic Landmark status in 2015.

Personal life

Yamasaki was first married in 1941 to Teruko "Teri" Hirashiki. They had three children together: Carol, Taro, and Kim. They divorced in 1961 and Yamasaki married Peggy Watty. He and Watty divorced two years later, and Yamasaki married a
the kingdom. It was from the Algarve that some of the early settlers set out. Many came charged with the important task of implementing the landlord system: servants, squires, knights, and noblemen are identified as the ones who secured the beginning of the settlement. Later on, settlers came from the north of Portugal, namely from the region of Entre Douro e Minho, who intervened specifically in the organization of the agricultural area. The majority of settlers were fishermen and peasant farmers who willingly left Portugal for a new life on the islands, a better one, they hoped, than was possible in a Portugal ravaged by the Black Death, where the best farmlands were strictly controlled by the nobility. To create minimum conditions for the development of agriculture on the island, the settlers had to chop down part of the dense forest and build a large number of water channels, called "levadas", to carry the abundant waters of the north coast to the south coast of the island. Initially, the settlers produced wheat for their own sustenance, but later began to export wheat to mainland Portugal. In earlier times, fish and vegetables were the settlers' main means of subsistence. Grain production began to fall, and the ensuing crisis forced Henry the Navigator to order other commercial crops to be planted so that the islands could be profitable. These specialised plants, and their associated industrial technology, created one of the major revolutions on the islands and fuelled Portuguese industry. Following the introduction of the first water-driven sugar mill on Madeira, sugar production increased to over 6,000 arrobas (an arroba was equal to 11 to 12 kilograms) by 1455, using advisers from Sicily and financed by Genoese capital. (Genoa acted as an integral part of the island economy until the 17th century.) The accessibility of Madeira attracted Genoese and Flemish traders, who were keen to bypass Venetian monopolies.
Sugarcane production was the primary engine of the island's economy, and it quickly brought the Funchal metropolis considerable economic prosperity. The production of sugar cane attracted adventurers and merchants from all parts of Europe, especially Italians, Basques, Catalans, and Flemish. By the second half of the fifteenth century, the city of Funchal had become a mandatory port of call for European trade routes. Slaves were used during the island's period of sugar trade to cultivate sugar cane alongside paid workers, though slave owners were only a small minority of the Madeiran population, and those who did own slaves owned only a few. Slaves consisted of Guanches from the nearby Canary Islands, Berbers captured in the conquest of Ceuta, and West Africans after further exploration of the African coast. Until the first half of the sixteenth century, Madeira was one of the major sugar markets of the Atlantic. Madeira appears to be the first place where, in the context of sugar production, slave labour was applied. The colonial system of sugar production was put into practice on the island of Madeira on a much smaller scale, and later transferred, on a large scale, to other overseas production areas. In time, this small scale of production was completely outmatched by Brazilian and São Tomean plantations. Madeiran sugar production declined to the point that it could not meet domestic needs, so that sugar was imported to the island from other Portuguese colonies. The sugar mills were gradually abandoned, with few remaining, and gave way to other markets in Madeira. In the 17th century, as Portuguese sugar production shifted to Brazil, São Tomé and Príncipe, and elsewhere, Madeira's most important commodity became its wine. Sugar plantations were replaced by vineyards, giving rise to the so-called "Wine Culture", which acquired international fame and fuelled the rise of a new social class, the bourgeoisie.
With the increase of commercial treaties with England, important English merchants settled on the island and ultimately controlled the increasingly important island wine trade. English traders settled in Funchal from the seventeenth century onward, consolidating the markets of North America, the West Indies, and England itself. Madeira wine became very popular in these markets, and it is said to have been used by the Founding Fathers in a toast to the Declaration of Independence. In the eighteenth and nineteenth centuries, Madeira stood out for its climate and its therapeutic effects. In the nineteenth century, visitors to the island fell into four major groups: patients, travellers, tourists, and scientists. Most visitors belonged to the moneyed aristocracy. High seasonal demand created a need for visitor guides; the first tourist guide to Madeira appeared in 1850 and covered elements of the history, geology, flora, fauna, and customs of the island. As for hotel infrastructure, the British and the Germans were the first to build up the Madeiran hotel trade. The historic Belmond Reid's Palace, opened in 1891, remains open to this day. Barbary corsairs from North Africa, who enslaved Europeans from ships and coastal communities throughout the Mediterranean region, captured 1,200 people in Porto Santo in 1617. The British first amicably occupied the island in 1801, whereafter Colonel William Henry Clinton became governor. A detachment of the 85th Regiment of Foot under Lieutenant-Colonel James Willoughby Gordon garrisoned the island. After the Peace of Amiens, British troops withdrew in 1802, only to reoccupy Madeira in 1807 and remain until the end of the Peninsular War in 1814. In 1846, James Julius Wood drew a series of seven sketches of the island. In 1856, British troops recovering from cholera, and widows and orphans of soldiers fallen in the Crimean War, were stationed in Funchal, Madeira.
World War I

On 31 December 1916, during the Great War, a German U-boat, U-38, captained by Max Valentiner, entered Funchal harbour on Madeira. U-38 torpedoed and sank three ships, bringing the war to Portugal by extension. The ships sunk were: CS Dacia (1,856 tons), a British cable-laying vessel which had previously undertaken war work off the coasts of Casablanca and Dakar and was in the process of diverting the German South American cable into Brest, France; SS Kanguroo (2,493 tons), a French specialized "heavy-lift" transport; and Surprise (680 tons), a French gunboat whose commander and 34 crewmen (including 7 Portuguese) were killed. After attacking the ships, U-38 bombarded Funchal for two hours from a range of about . Batteries on Madeira returned fire and eventually forced U-38 to withdraw. On 12 December 1917, two German U-boats, SM U-156 and SM U-157 (captained by Max Valentiner), again bombarded Funchal. This time the attack lasted around 30 minutes; the U-boats fired 40 shells. There were three fatalities and 17 wounded, and a number of houses and the Santa Clara church were hit. Charles I (Karl I), the last Emperor of the Austro-Hungarian Empire, was exiled to Madeira after the war. Determined to prevent an attempt to restore Charles to the throne, the Council of Allied Powers agreed he could go into exile on Madeira because it was isolated in the Atlantic and easily guarded. He died there on 1 April 1922, and his coffin lies in a chapel of the Church of Our Lady of Monte.

Geography

The archipelago of Madeira is located from the African coast and from the European continent (approximately a one-and-a-half-hour flight from the Portuguese capital, Lisbon). Madeira is on the same parallel as Bermuda, a few time zones further west in the Atlantic; the two archipelagos are the only land in the Atlantic on the 32nd parallel north.
Madeira is found at the extreme south of the Tore-Madeira Ridge, a bathymetric structure of great dimensions oriented along a north-northeast to south-southwest axis that extends for . This submarine structure consists of a long geomorphological relief that extends from the abyssal plain to 3,500 metres; its highest submerged point is at a depth of about 150 metres (around latitude 36ºN). The origins of the Tore-Madeira Ridge are not clearly established, but it may have resulted from a morphological buckling of the lithosphere.

Islands and islets

Madeira (740.7 km2), including Ilhéu de Agostinho, Ilhéu de São Lourenço, Ilhéu Mole (northwest); total population 262,456 (2011 Census).
Porto Santo (42.5 km2), including Ilhéu de Baixo ou da Cal, Ilhéu de Ferro, Ilhéu das Cenouras, Ilhéu de Fora, Ilhéu de Cima; total population 5,483 (2011 Census).
Desertas Islands (14.2 km2), comprising the three uninhabited islands Deserta Grande Island, Bugio Island and Ilhéu de Chão.
Savage Islands (3.6 km2), an archipelago 280 km south-southeast of Madeira Island comprising three main islands and 16 uninhabited islets in two groups: the Northwest Group (Selvagem Grande Island, Ilhéu de Palheiro da Terra, Ilhéu de Palheiro do Mar) and the Southeast Group (Selvagem Pequena Island, Ilhéu Grande, Ilhéu Sul, Ilhéu Pequeno, Ilhéu Fora, Ilhéu Alto, Ilhéu Comprido, Ilhéu Redondo, Ilhéu Norte).

Madeira Island

The island of Madeira is at the top of a massive shield volcano that rises about from the floor of the Atlantic Ocean, on the Tore underwater mountain range. The volcano formed atop an east–west rift in the oceanic crust along the African Plate, beginning during the Miocene epoch over 5 million years ago and continuing into the Pleistocene until about 700,000 years ago. This was followed by extensive erosion, producing two large amphitheatres open to the south in the central part of the island. Volcanic activity later resumed, producing scoria cones and lava flows atop the older eroded shield.
The most recent volcanic eruptions were on the west-central part of the island only 6,500 years ago, creating more cinder cones and lava flows. It is the largest island of the group with an area of , a length of (from Ponta de São Lourenço to Ponta do Pargo), while approximately at its widest point (from Ponta da Cruz to Ponta de São Jorge), with a coastline of . It has a mountain ridge that extends along the centre of the island, reaching at its highest point (Pico Ruivo), while much lower (below 200 metres) along its eastern extent. The primitive volcanic foci responsible for the central mountainous area consisted of the peaks: Ruivo (1,862 m), Torres (1,851 m), Arieiro (1,818 m), Cidrão (1,802 m), Cedro (1,759 m), Casado (1,725 m), Grande (1,657 m), Ferreiro (1,582 m). At the end of this eruptive phase, an island circled by reefs was formed; its marine vestiges are evident in a calcareous layer in the area of Lameiros, in São Vicente (which was later exploited for calcium oxide production). Sea cliffs, such as Cabo Girão, valleys and ravines extend from this central spine, making the interior generally inaccessible. Daily life is concentrated in the many villages at the mouths of the ravines, through which the heavy rains of autumn and winter usually travel to the sea. Climate Madeira has many different bioclimates. Based on differences in sun exposure, humidity, and annual mean temperature, there are clear variations between north- and south-facing regions, as well as between some islands. The islands are strongly influenced by the Gulf Stream and Canary Current, giving them mild to warm year-round temperatures; according to the Instituto de Meteorologia (IPMA), the average annual temperature at the Funchal weather station is for the 1981–2010 period.
Relief is a determining factor in precipitation levels: areas such as the Madeira Natural Park can get as much as of precipitation a year, hosting lush green laurel forests, while Porto Santo, being a much flatter island, has a semiarid climate (BSh). In most winters snowfall occurs in the mountains of Madeira. The main Madeira island has areas with an annual average temperature exceeding along the coast (according to the Portuguese Meteorological Institute). Flora and fauna Madeira island is home to several endemic plant and animal species. In the south, there is very little left of the indigenous subtropical rainforest that once covered the whole island (the original settlers set fire to the island to clear the land for farming) and gave it the name it now bears (Madeira means "wood" in Portuguese). However, in the north, the valleys contain native trees of fine growth. These "laurisilva" forests, called lauraceas madeirense, notably the forests on the northern slopes of Madeira Island, are designated as a World Heritage Site by UNESCO. The paleobotanical record of Madeira reveals that laurisilva forest has existed on this island for at least 1.8 million years. Critically endangered species such as the vine Jasminum azoricum and the rowan Sorbus maderensis are endemic to Madeira. The Madeiran large white butterfly was an endemic subspecies of the large white which inhabited the laurisilva forests, but it has not been seen since 1977 and so may now be extinct. Madeiran wall lizard The Madeiran wall lizard (Teira dugesii) is a species of lizard in the family Lacertidae. The species is endemic to the island, where it is very common and is the only small lizard, ranging from sea coasts to altitudes of . It is usually found in rocky places or among scrub and may climb into trees. It is also found in gardens and on the walls of buildings. It feeds on small invertebrates such as ants and also eats some vegetable matter. The tail is easily shed and the stump regenerates slowly.
The colouring is variable and tends to match the colour of the animal's surroundings, being some shade of brown or grey, occasionally with a greenish tinge. Most animals are finely flecked with darker markings. The underparts are white or cream, sometimes with dark spots; some males have orange or red underparts and blue throats, but these bright colours may fade if the animal is disturbed. The Madeiran wall lizard grows to a snout-to-vent length of about with a tail about 1.7 times the length of its body. Females lay two to three clutches of eggs in a year, with the juveniles being about when they hatch. Endemic birds Two species of birds are endemic to Madeira, the Trocaz pigeon and the Madeira firecrest. In addition to these are several extinct species which may have died out soon after the islands were settled: the Madeiran scops owl; two rail species, Rallus adolfocaesaris and R. lowei; two quail species, Coturnix lignorum and C. alabrevis; and the Madeiran wood pigeon, a subspecies of the widespread common wood pigeon, which was last seen in the early 20th century. Levadas The island of Madeira is wet in the northwest but dry in the southeast. In the 16th century the Portuguese started building levadas, or aqueducts, to carry water to the agricultural regions in the south. Madeira is very mountainous, and building the levadas was difficult; convicts or slaves were often used. Many are cut into the sides of mountains, and it was also necessary to dig of tunnels, some of which are still accessible. Today the levadas not only supply water to the southern parts of the island but also provide hydro-electric power. There are over of levadas, and they provide a network of walking paths. Some provide easy and relaxing walks through the countryside, but others are narrow, crumbling ledges where a slip could result in serious injury or death. Since 2011, some improvements have been made to these pathways, after the 2010 Madeira floods
and mudslides on the island, to clean and reconstruct some critical parts of the island, including the levadas.
Such improvements involved the continuous maintenance of the water streams, cementing the trails, and positioning safety fences on dangerous paths. Two of the most popular levadas to hike are the Levada do Caldeirão Verde and the Levada do Caldeirão do Inferno, which should not be attempted by hikers prone to vertigo or without torches and helmets. The Levada do Caniçal is a much easier walk, running from Maroços to the Caniçal Tunnel. It is known as the mimosa levada, because "mimosa" trees (the colloquial name for invasive acacia) are found all along the route. Politics Political autonomy Due to its distinct geography, economy, social and cultural situation, as well as the historical autonomist aspirations of the Madeiran island population, the Autonomous Region of Madeira was established in 1976. Although it is an autonomous politico-administrative region, the Portuguese constitution specifies both a regional and national connection, obliging its administration to maintain democratic principles and promote regional interests, while still reinforcing national unity. As defined by the Portuguese constitution and other laws, Madeira possesses its own political and administrative statute and has its own government. The branches of government are the Regional Government and the Legislative Assembly, the latter being elected by universal suffrage, using the D'Hondt method of proportional representation. The president of the Regional Government is appointed by the Representative of the Republic according to the results of the election to the legislative assemblies. The sovereignty of the Portuguese Republic was represented in Madeira by the Minister of the Republic, proposed by the Government of the Republic and appointed by the President of the Republic.
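The D'Hondt method used for the Legislative Assembly elections allocates seats by repeatedly awarding the next seat to the party with the highest quotient, votes divided by (seats already won + 1). A minimal sketch in Python, using hypothetical vote counts purely for illustration (no real Madeiran election data):

```python
def dhondt(votes, seats):
    """Allocate `seats` among parties using the D'Hondt highest-quotient method.

    votes: dict mapping party name -> vote count
    """
    alloc = {party: 0 for party in votes}
    for _ in range(seats):
        # Each party's current quotient is votes / (seats won so far + 1);
        # the next seat goes to the party with the highest quotient.
        winner = max(votes, key=lambda p: votes[p] / (alloc[p] + 1))
        alloc[winner] += 1
    return alloc

# Hypothetical example: three parties contesting 10 seats.
print(dhondt({"A": 48000, "B": 31000, "C": 21000}, 10))  # → {'A': 5, 'B': 3, 'C': 2}
```

Because the divisor grows with each seat won, the method slightly favours larger parties relative to pure proportionality, which is a known property of D'Hondt.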
However, after the sixth amendment to the Portuguese Constitution was passed in 2006, the Minister of the Republic was replaced by a less powerful Representative of the Republic, who is appointed by the President after listening to the Government, but whose appointment is otherwise a presidential prerogative. The other tasks of the Representative of the Republic are to sign and order the publication of regional legislative decrees and regional regulatory decrees, or to exercise the right of veto over regional laws, should these laws be unconstitutional. Status within the European Union Madeira is also an Outermost Region (OMR) of the European Union, meaning that due to its geographical situation, it is entitled to derogation from some EU policies despite being part of the European Union. According to the Treaty on the Functioning of the European Union, both primary and secondary European Union law applies automatically to Madeira, with possible derogations to take account of its "structural social and economic situation (...) which is compounded by their remoteness, insularity, small size, difficult topography and climate, economic dependence on a few products, the permanence and combination of which severely restrain their development". An example of such derogation is seen in the approval of the International Business Centre of Madeira and other state aid policies to help the rum industry. It forms part of the European Union customs area, the Schengen Area and the European Union Value Added Tax Area. Administrative divisions Administratively, Madeira (with a population of 251,060 inhabitants in 2021), covering an area of , is organised into eleven municipalities: Funchal Funchal is the capital and principal city of the Autonomous Region of Madeira, located along the southern coast of the island of Madeira. It is a modern city, located within a natural geological "amphitheatre" shaped by volcanic structure and fluvial hydrological forces.
Beginning at the harbour (Porto de Funchal), the neighbourhoods and streets rise almost , along gentle slopes that helped to provide a natural shelter to the early settlers. Population Demographics The island was settled by Portuguese people, especially farmers from the Minho region,<ref>{{cite web |url=http://www.ceha-madeira.net/livros/infante.html|title=Alberto Vieira, O Infante e a Madeira: dúvidas e certezas, Centro Estudos História Atlântico |publisher=Ceha-madeira.net |access-date=30 July 2010 |url-status=dead |archive-url=https://web.archive.org/web/20100531222502/http://www.ceha-madeira.net/livros/infante.html |archive-date=31 May 2010}}</ref> meaning that Madeirans (), as they are called, are ethnically Portuguese, though they have developed their own distinct regional identity and cultural traits. The region of Madeira and Porto Santo has a total population of just under 256,060, the majority of whom (251,060) live on the main island of Madeira, where the population density is ; meanwhile only around 5,000 live on Porto Santo island, where the population density is . About 247,000 (96%) of the population are Catholic, and Funchal is the location of the Catholic cathedral. Diaspora Madeirans migrated to the United States, Venezuela, Brazil, Guyana, Saint Vincent and the Grenadines and Trinidad and Tobago."Madeira and Emigration " Madeiran immigrants in North America mostly clustered in the New England and mid-Atlantic states, Toronto, Northern California, and Hawaii. The city of New Bedford is especially rich in Madeirans, hosting the Museum of Madeira Heritage, as well as the annual Madeiran and Luso-American celebration, the Feast of the Blessed Sacrament, the world's largest celebration of Madeiran heritage, regularly drawing crowds of tens of thousands to the city's Madeira Field. In 1846, when a famine struck Madeira, over 6,000 of the inhabitants migrated to British Guiana. In 1891 they numbered 4.3% of the population.
In 1902 in Honolulu, Hawaii, there were 5,000 Portuguese people, mostly Madeirans. In 1910 this grew to 21,000. 1849 saw an emigration of Protestant religious exiles from Madeira to the United States, by way of Trinidad and other locations in the West Indies. Most of them settled in Illinois with the financial and physical aid of the American Protestant Society, headquartered in New York City. In the late 1830s the Reverend Robert Reid Kalley, from Scotland, a Presbyterian minister as well as a physician, made a stop at Funchal, Madeira, on his way to a mission in China with his wife, so that she could recover from an illness. The Rev. Kalley and his wife stayed on Madeira, where he began preaching the Protestant gospel and converting islanders from Catholicism. Eventually, the Rev. Kalley was arrested for his religious conversion activities and imprisoned. Another missionary from Scotland, William Hepburn Hewitson, took on Protestant ministerial activities in Madeira. By 1846, about 1,000 Protestant Madeirans, who were discriminated against and the subjects of mob violence because of their religious conversions, chose to emigrate to Trinidad and other locations in the West Indies in answer to a call for sugar plantation workers. The Madeiran exiles did not fare well in the West Indies. The tropical climate was unfamiliar and they found themselves in serious economic difficulties. By 1848, the American Protestant Society had raised money and sent the Rev. Manuel J. Gonsalves, a Baptist minister and a naturalized U.S. citizen from Madeira, to work with the Rev. Arsénio da Silva, who had emigrated with the exiles from Madeira, to arrange to resettle those who wanted to come to the United States. The Rev. da Silva died in early 1849. Later in 1849, the Rev. Gonsalves was then charged with escorting the exiles from Trinidad to be settled in Sangamon and Morgan counties in Illinois, on land purchased with funds raised by the American Protestant Society.
Accounts state that anywhere from 700 to 1,000 exiles came to the United States at this time. There are several large Madeiran communities around the world, such as those in the UK, including Jersey, where the Portuguese British community, mostly made up of Madeirans, celebrates Madeira Day. Immigration Madeira is part of the Schengen Area. The Venezuelan (14.4%), British (14.2%), Brazilian (12.1%) and German (7%) nationalities constituted the largest foreign communities residing in Madeira in 2017. The Venezuelan community dramatically increased in number (38%) in 2017 due to migration fuelled by the socioeconomic crisis in Venezuela. In terms of geographical distribution, the foreign population mainly concentrates in Funchal (59.2% of the total of the region), followed by Santa Cruz (13.8%), Calheta (7.3%) and Porto Santo (4%). The foreign population with resident status in the Autonomous Region of Madeira totalled 6,720 (up by 10% from 2016), distributed between residence permits (6,692) and long-stay visas (28). Economy The gross domestic product (GDP) of the region was 4.9 billion euros in 2018, accounting for 2.4% of Portugal's economic output. GDP per capita adjusted for purchasing power was 22,500 euros, or 75% of the EU27 average, in the same year. The GDP per employee was 71% of the EU average. Madeira International Business Center The setting-up of a free trade zone, also known as the Madeira International Business Center (MIBC), has led to the installation, under more favorable conditions, of infrastructure, production shops and essential services for small and medium-sized industrial enterprises. The International Business Centre of Madeira presently comprises three sectors of investment: the Industrial Free Trade Zone, the International Shipping Register – MAR and the International Services. Madeira's tax regime has been approved by the European Commission as legal State Aid and its deadline has recently been extended by the E.C. until the end of 2027.
The International Business Center of Madeira, also known as Madeira Free Trade Zone, was created formally in the 1980s as a tool of regional economic policy. It consists of a set of incentives, mainly of a tax nature, granted with the objective of attracting inward investment into Madeira, recognized as the most efficient mechanism to modernize, diversify and internationalize the regional economy. The decision to create the International Business Center of Madeira was the result of a thorough process of analysis and study. Other small island economies, with similar geographical and economic restraints, had successfully implemented projects of attraction of foreign direct investment based on international services activities, becoming therefore examples of successful economic policies. Since the beginning, favorable operational and fiscal conditions have been offered in the context of a preferential tax regime, fully recognized and approved by the European Commission in the framework of State aid for regional purposes and under the terms for the Ultra-peripheral Regions set in the Treaties, namely Article 299 of the Treaty on European Union. The IBC of Madeira has therefore been fully integrated in the Portuguese and EU legal systems and, as a consequence, it is regulated and supervised by the competent Portuguese and EU authorities in a transparent and stable business environment, marking a clear difference from the so-called "tax havens" and "offshore jurisdictions", since its inception. In 2015, the European Commission authorized the new state aid regime for new companies incorporated between 2015 and 2020 and the extension of the deadline of the tax reductions until the end of 2027. The present tax regime is outlined in Article 36°-A of the Portuguese Tax Incentives Statute. 
Available data clearly demonstrate the contribution that this development programme has brought to the local economy over its 20 years of existence: impact on the local labor market, through the creation of qualified jobs for the young population but also for Madeiran professionals who have returned to Madeira thanks to the opportunities now created; an increase in productivity due to the transfer of know-how and the implementation of new business practices and technologies; indirect influence on other sectors of activity: business tourism benefits from the visits of investors and their clients and suppliers, and other sectors such as real estate, telecommunications and other services benefit from the growth of their client base; impact on direct sources of revenue: the companies attracted by the IBC of Madeira represent over 40% of the revenue in terms of corporate income tax for the Government of Madeira and nearly 3,000 jobs, most of which are qualified, among other benefits. The companies in the IBC of Madeira also pay above-average salaries in comparison with the wages paid in the other sectors of activity in Madeira. Regional government Madeira has been a significant recipient of European Union funding, totaling up to €2 billion. In 2012, it was reported that despite a population of just 250,000, the local administration owes some €6 billion. Furthermore, the Portuguese treasury (IGCP) assumed
recoil spring is located in the stock directly behind the action, and serves the dual function of operating spring and recoil buffer. The stock being in line with the bore also reduces muzzle rise, especially during automatic fire. Because recoil does not significantly shift the point of aim, faster follow-up shots are possible and user fatigue is reduced. In addition, current model M16 flash suppressors also act as compensators to reduce recoil further. Notes: Free recoil is calculated by using the rifle weight, bullet weight, muzzle velocity, and charge weight. It is that which would be measured if the rifle were fired suspended from strings, free to recoil. A rifle's perceived recoil is also dependent on many other factors which are not readily quantified. Sights The M16's most distinctive ergonomic feature is the carrying handle and rear sight assembly on top of the receiver. This is a by-product of the original AR-10 design, where the carrying handle contained a rear sight that could be dialed in with an elevation wheel for specific range settings and also served to protect the charging handle. The M16 carry handle also provided mounting groove interfaces and a hole at the bottom of the handle groove for mounting a Colt 3×20 telescopic sight featuring a Bullet Drop Compensation elevation adjustment knob for ranges from . This concurs with the pre-M16A2 maximum effective range of . The Colt 3×20 telescopic sight was factory adjusted to be parallax-free at . In Delft, the Netherlands, Artillerie-Inrichtingen produced a roughly similar 3×25 telescopic sight for the carrying handle mounting interfaces. The M16 elevated iron sight line has a sight radius. As the M16 series' rear sight, front sight and zeroing target designs were modified over time, and as non-iron sight (optical) aiming devices and new service ammunition were introduced, zeroing procedures changed.
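The free-recoil calculation described in the notes above amounts to conservation of momentum: the momentum of the ejecta (bullet plus propellant gas) equals the rifle's recoil momentum, from which the recoil energy follows. A minimal sketch under assumed illustrative values (a 4.0 kg loaded rifle, 55-grain / 3.56 g bullet at 975 m/s, 1.62 g charge, and the common rule of thumb that gas exits at about 1.75× muzzle velocity); none of these numbers come from the source:

```python
def free_recoil_energy(rifle_kg, bullet_kg, muzzle_v, charge_kg, gas_factor=1.75):
    # Momentum of the ejecta (bullet + propellant gas) equals the rifle's
    # recoil momentum; gas velocity is approximated as a multiple of the
    # bullet's muzzle velocity (a widely used rule of thumb, not exact).
    ejecta_momentum = bullet_kg * muzzle_v + charge_kg * gas_factor * muzzle_v
    recoil_v = ejecta_momentum / rifle_kg          # rifle recoil velocity, m/s
    return 0.5 * rifle_kg * recoil_v ** 2          # free recoil energy, joules

# Illustrative (assumed) M16-like values:
e = free_recoil_energy(rifle_kg=4.0, bullet_kg=0.00356, muzzle_v=975.0, charge_kg=0.00162)
print(round(e, 1))  # a few joules -- mild recoil, consistent with the text
```

The small result (single-digit joules) illustrates why the in-line stock and light cartridge make follow-up shots easy, as the surrounding text notes.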
The standard pre-M16A2 "Daylight Sight System" uses an AR-15-style L-type flip, two-aperture rear sight featuring two combat settings: short-range and long-range, the latter marked 'L'. The rear sight features a windage drum that can be adjusted during zeroing in about 1 MOA increments. The front sight is a tapered round post of approximately diameter, adjustable during zeroing in about 1 MOA increments. A cartridge or tool is required to (re)zero the sight line. An alternative pre-M16A2 "Low Light Level Sight System" includes a front sight post with a weak light source provided by tritium radioluminescence in an embedded small glass vial and a two-aperture rear sight consisting of a diameter aperture marked 'L' intended for normal use out to and a diameter large aperture for night firing. Regulation stipulates the radioluminescent front sight post must be replaced if more than 144 months (12 years) have elapsed since manufacture. The "Low Light Level Sight System" elevation and windage adjustment increments are somewhat coarser compared to the "Daylight Sight System". With the advent of the M16A2, a more elaborate, fully adjustable rear sight was added, allowing the rear sight to be dialed in with an elevation wheel for specific range settings between in 100 m increments and to allow windage adjustments with a windage knob without the need of a cartridge or tool. The unmarked approximately diameter aperture rear sight is for normal firing situations, zeroing, and, with the elevation knob, for target distances up to 800 meters. The downsides of relatively small rear sight apertures are less light transmission through the aperture and a reduced field of view. A new larger approximately diameter aperture, marked '0-2' and featuring a windage setting index mark, offers a larger field of view during battle conditions and is used as a ghost ring for quick target engagement and during limited visibility.
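The ~1 MOA adjustment increments mentioned above translate into point-of-impact shifts that grow linearly with range: one minute of angle subtends roughly 2.9 cm at 100 m. A short sketch of that small-angle geometry, for illustration only:

```python
import math

def moa_shift_cm(moa, range_m):
    # Point-of-impact shift (in cm) produced by an angular adjustment of
    # `moa` minutes of angle at the given range, using plain trigonometry.
    return range_m * math.tan(math.radians(moa / 60.0)) * 100.0

# One ~1 MOA click moves the impact roughly 2.9 cm at 100 m,
# and three times that at a 300 m zeroing distance:
print(round(moa_shift_cm(1, 100), 2))   # → 2.91
print(round(moa_shift_cm(1, 300), 2))   # → 8.73
```

This is why a sight zeroed at a short distance still holds its windage at longer ranges: the angular correction is constant, only the linear displacement scales.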
When flipped down, the engraved windage mark on top of the '0-2' aperture ring shows the dialed-in windage setting on a windage scale at the rear of the rear sight assembly. When the normal-use rear aperture sight is zeroed at 300 m with SS109/M855 ammunition, first used in the M16A2, the '0-2' rear sight will be zeroed for 200 m. The front sight post was widened to approximately diameter and became square and adjustable during zeroing in about 1.2 MOA increments. The M16A4 omitted the carrying handle and rear sight assembly on top of the receiver. Instead, it features a MIL-STD-1913 Picatinny railed flat-top upper receiver for mounting various optical sighting devices or a new detachable carrying handle and M16A2-style rear sight assembly. The current U.S. Army and Air Force issue M4(A1) Carbine comes with the M68 Close Combat Optic and Back-up Iron Sight. The U.S. Marine Corps uses the 4×32 ACOG Rifle Combat Optic and the U.S. Navy uses the EOTech Holographic Weapon Sight. Range and accuracy The M16 rifle is considered to be very accurate for a service rifle. Its light recoil, high velocity and flat trajectory allow shooters to take head shots out to 300 meters. Newer M16s use the newer M855 cartridge, increasing their effective range to 600 meters. They are more accurate than their predecessors and are capable of shooting 1–3-inch groups at 100 yards. "In Fallujah, Iraq Marines with ACOG-equipped M16A4s created a stir by taking so many head shots that until the wounds were closely examined, some observers thought the insurgents had been executed." The newest M855A1 EPR cartridge is even more accurate and during testing "...has shown that, on average, 95 percent of the rounds will hit within an 8 × 8-inch (20.3 × 20.3 cm) target at 600 meters." Note *: The effective range of a firearm is the maximum distance at which a weapon may be expected to be accurate and achieve the desired effect.
Note **: The horizontal range is the distance traveled by a bullet, fired from the rifle at a height of 1.6 meters and 0° elevation, until the bullet hits the ground. Note ***: The lethal range is the maximum range of a small-arms projectile while still maintaining the minimum energy required to put a man out of action, which is generally believed to be 15 kilogram-meters (108 ft-lb). This is the equivalent of the muzzle energy of a .22LR handgun. Note ****: The maximum range of a small-arms projectile is attained at about 30° elevation. This maximum range is only of safety interest, not for combat firing. Terminal ballistics The 5.56×45 mm cartridge had several advantages over the 7.62×51 mm NATO round used in the M14 rifle. It enabled each soldier to carry more ammunition and was easier to control during automatic or burst fire. The 5.56×45 mm NATO cartridge can also produce massive wounding effects when the bullet impacts at high speed and yaws ("tumbles") in tissue, leading to fragmentation and rapid transfer of energy. The original ammunition for the M16 was the 55-grain M193 cartridge. When fired from a barrel at ranges of up to , the thin-jacketed lead-cored round traveled fast enough (above ) that the force of striking a human body would cause the round to yaw (or tumble) and fragment into about a dozen pieces of various sizes, thus creating wounds that were out of proportion to its caliber. These wounds were so devastating that many considered the M16 to be an inhumane weapon. As the 5.56 mm round's velocity decreases, so does the number of fragments that it produces. The 5.56 mm round does not normally fragment at distances beyond 200 meters or at velocities below 2500 ft/s, and its lethality becomes largely dependent on shot placement. With the development of the M16A2, the new 62-grain M855 cartridge was adopted in 1983. The heavier bullet had more energy and was made with a steel core to penetrate Soviet body armor.
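The equivalence in Note *** between 15 kilogram-meters and 108 ft-lb can be cross-checked with a quick unit conversion (1 kgf·m = 9.80665 J; 1 J ≈ 0.7376 ft·lb):

```python
G = 9.80665              # standard gravity, m/s^2 (so 1 kgf*m = 9.80665 J)
FT_LB_PER_J = 0.737562   # foot-pounds per joule

kgm = 15.0               # the "15 kilogram-meters" figure from Note ***
joules = kgm * G
ft_lb = joules * FT_LB_PER_J
print(round(joules, 1), round(ft_lb, 1))  # → 147.1 108.5, matching the quoted 108 ft-lb
```

So the threshold is about 147 J, which is indeed in the range of a .22LR handgun's muzzle energy as the note states.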
However, this caused less fragmentation on impact and reduced effects against targets without armor, both of which lessened kinetic energy transfer and wounding ability. Some soldiers and Marines coped with this through training, with requirements to shoot vital areas three times to guarantee killing the target. However, there have been repeated and consistent reports of the M855's inability to wound effectively (i.e., fragment) when fired from the short barreled M4 carbine (even at close ranges). The M4's 14.5-in. barrel length reduces muzzle velocity to about 2900 ft/s. This reduced wounding ability is one reason that, despite the Army's transition to short-barrel M4s, the Marine Corps has decided to continue using the M16A4 with its 20-inch barrel as the 5.56×45 mm M855 is largely dependent upon high velocity in order to wound effectively. In 2003, the U.S. Army contended that the lack of lethality of the 5.56×45 mm was more a matter of perception than fact. With good shot placement to the head and chest, the target was usually defeated without issue. The majority of failures were the result of hitting the target in non-vital areas such as extremities. However, a minority of failures occurred in spite of multiple hits to the chest. In 2006, a study found that 20% of soldiers using the M4 Carbine wanted more lethality or stopping power. In June 2010, the U.S. Army announced it began shipping its new 5.56 mm, lead-free, M855A1 Enhanced Performance Round to active combat zones. This upgrade is designed to maximize performance of the 5.56×45 mm round, to extend range, improve accuracy, increase penetration and to consistently fragment in soft-tissue when fired from not only standard length M16s, but also the short-barreled M4 carbines. The U.S. Army has been impressed with the new M855A1 EPR round. A 7.62 NATO M80A1 EPR variant was also developed. Magazines The M16's magazine was meant to be a lightweight, disposable item. 
As such, it is made of pressed/stamped aluminum and was not designed to be durable. The M16 originally used a 20-round magazine, which was later replaced by a curved 30-round design. Because of the curved geometry, the magazine follower tends to rock or tilt, causing malfunctions. Many non-U.S. and commercial magazines have been developed to effectively mitigate these shortcomings (e.g., H&K's all-stainless-steel magazine, Magpul's polymer P-MAG, etc.). Production of the 30-round magazine started in late 1967 but did not fully replace the 20-round magazine until the mid-1970s. Standard USGI aluminum 30-round M16 magazines weigh empty and are long. The newer plastic magazines are about a half-inch longer. The newer steel magazines are about a half-inch longer and four ounces heavier. The M16's magazine has become the unofficial NATO STANAG magazine and is currently used by many Western nations, in numerous weapon systems. In 2009, the U.S. Military began fielding an "improved magazine" identified by a tan-colored follower. "The new follower incorporates an extended rear leg and modified bullet protrusion for improved round stacking and orientation. The self-leveling/anti-tilt follower minimizes jamming while a wider spring coil profile creates even force distribution. The performance gains have not added weight or cost to the magazines." In July 2016, the U.S. Army introduced another improvement, the new Enhanced Performance Magazine, which it says will result in a 300% increase in reliability in the M4 Carbine. Developed by the United States Army Armament Research, Development and Engineering Center and the Army Research Laboratory in 2013, it is tan colored with a blue follower to distinguish it from earlier, incompatible magazines.
The initial flash suppressor design had three tines or prongs and was designed to preserve the shooter's night vision by disrupting the flash. However, it was prone to breakage and to becoming entangled in vegetation. The design was later changed to close the end to avoid this and became known as the "A1" or "bird cage" flash suppressor on the M16A1. Eventually, on the M16A2 version of the rifle, the bottom port was closed to reduce muzzle climb and prevent dust from rising when the rifle was fired in the prone position. For these reasons, the U.S. military declared the A2 flash suppressor to be a compensator or muzzle brake; but it is more commonly known as the "GI" or "A2" flash suppressor. The M16's Vortex Flash Hider weighs 3 ounces, is 2.25 inches long, and does not require a lock washer to attach to the barrel. It was developed in 1984, and is one of the earliest privately designed muzzle devices. The U.S. military uses the Vortex Flash Hider on M4 carbines and M16 rifles. A version of the Vortex has been adopted by the Canadian Military for the Colt Canada C8 CQB rifle. Other flash suppressors developed for the M16 include the Phantom Flash Suppressor by Yankee Hill Machine (YHM) and the KX-3 by Noveske Rifleworks. The threaded barrel allows sound suppressors with the same thread pattern to be installed directly to the barrel; however, this can result in complications such as being unable to remove the suppressor from the barrel due to repeated firing on full auto or three-round burst. A number of suppressor manufacturers have designed "direct-connect" sound suppressors which can be installed over an existing M16's flash suppressor as opposed to using the barrel's threads. Grenade launchers and shotguns All current M16 type rifles can mount under-barrel 40 mm grenade-launchers, such as the M203 and M320. Both use the same 40 mm grenades as the older, stand-alone M79 grenade launcher.
The M16 can also mount under-barrel 12 gauge shotguns such as the KAC Masterkey or the M26 Modular Accessory Shotgun System. Riot Control Launcher The M234 Riot Control Launcher is an M16-series rifle attachment firing an M755 blank round. The M234 mounts on the muzzle, bayonet lug, and front sight post of the M16. It fires either the M734 64 mm Kinetic Riot Control or the M742 64 mm CSI Riot Control Ring Airfoil Projectiles. The latter produces a 4 to 5-foot tear gas cloud on impact. The main advantage to using Ring Airfoil Projectiles is that their design does not allow them to be thrown back by rioters with any real effect. The M234 is no longer used by U.S. forces. It has been replaced by the M203 40 mm grenade launcher and nonlethal ammunition. Bayonet The M16 is 44.25 inches (1124 mm) long with an M7 bayonet attached. The M7 bayonet is based on earlier designs such as the M4, M5, & M6 bayonets, all of which are direct descendants of the M3 Fighting Knife and have a spear-point blade with a half-sharpened secondary edge. The newer M9 bayonet has a clip-point blade with saw teeth along the spine, and can be used as a multi-purpose knife and wire-cutter when combined with its scabbard. The current USMC OKC-3S bayonet bears a resemblance to the Marines' iconic Ka-Bar fighting knife with serrations near the handle. Bipod For use as an ad-hoc automatic rifle, the M16 and M16A1 could be equipped with the XM3 bipod, later standardized as the Bipod, M3 (1966) and Rifle Bipod M3 (1983). Weighing only 0.6 lb, the simple and non-adjustable bipod clamps to the barrel of the rifle to allow for supported fire. The M3 bipod continues to be referenced in at least one official manual as late as 1985, where it is stated that one of the most stable firing positions is "the prone biped [sic] supported for automatic fire." NATO standards In March 1970, the U.S. recommended that all NATO forces adopt the 5.56×45 mm cartridge.
This shift represented a change in the philosophy of the military's long-held position about caliber size. By the mid-1970s, other armies were looking at M16-style weapons. A NATO standardization effort soon started and tests of various rounds were carried out starting in 1977. The U.S. offered the 5.56×45 mm M193 round, but there were concerns about its penetration in the face of the wider introduction of body armor. In the end, the Belgian 5.56×45 mm SS109 round was chosen (STANAG 4172) in October 1980. The SS109 round was based on the U.S. cartridge but included a new stronger, heavier, 62 grain bullet design, with better long range performance and improved penetration (specifically, to consistently penetrate the side of a steel helmet at 600 meters). Due to its design and lower muzzle velocity (about 3110 ft/s), the Belgian SS109 round is considered more humane because it is less likely to fragment than the U.S. M193 round. The NATO 5.56×45 mm standard ammunition produced for U.S. forces is designated M855. In October 1980, shortly after NATO accepted the 5.56×45 mm NATO rifle cartridge, Draft Standardization Agreement 4179 (STANAG 4179) was proposed to allow NATO members to easily share rifle ammunition and magazines down to the individual soldier level. The magazine chosen to become the STANAG magazine was originally designed for the U.S. M16 rifle. Many NATO member nations, but not all, subsequently developed or purchased rifles with the ability to accept this type of magazine. However, the standard was never ratified and remains a 'Draft STANAG'. All current M16 type rifles are designed to fire STANAG 22 mm rifle grenades from their integral flash hiders without the use of an adapter. These 22 mm grenade types range from anti-tank rounds to simple finned tubes with a fragmentation hand grenade attached to the end. They come in the "standard" type, which is propelled by a blank cartridge inserted into the chamber of the rifle.
They also come in "bullet trap" and "shoot through" types, which, as their names imply, use live ammunition. The U.S. military does not generally use rifle grenades; however, they are used by other nations. The NATO Accessory Rail STANAG 4694, or Picatinny rail STANAG 2324, or a "Tactical Rail" is a bracket used on M16 type rifles to provide a standardized mounting platform. The rail comprises a series of ridges with a T-shaped cross-section interspersed with flat "spacing slots". Scopes are mounted either by sliding them on from one end or the other; by means of a "rail-grabber" which is clamped to the rail with bolts, thumbscrews or levers; or onto the slots between the raised sections. The rail was originally for scopes. However, once established, the use of the system was expanded to other accessories, such as tactical lights, laser aiming modules, night vision devices, reflex sights, foregrips, bipods, and bayonets. Currently, the M16 is in use by 15 NATO countries and more than 80 countries worldwide. Variants M16 This was the first M16 variant adopted operationally, originally by the U.S. Air Force. It was equipped with triangular handguards, butt stocks without a compartment for the storage of a cleaning kit, a three-pronged flash suppressor, fully automatic fire, and no forward assist. Bolt carriers were originally chrome plated and slick-sided, lacking forward assist notches. Later, the chrome plated carriers were dropped in favor of Army-issued notched and parkerized carriers, though the interior portion of the bolt carrier is still chrome-lined. The barrel rifling had a 1:12 (305 mm) twist rate to adequately stabilize M193 ball and M196 tracer ammunition. The Air Force continued to operate these weapons until around 2001, at which time the Air Force converted all of its M16s to the M16A2 configuration. The M16 was also adopted by the British SAS, who used it during the Falklands War. XM16E1 and M16A1 (Colt Model 603) The U.S.
Army XM16E1 was essentially the same weapon as the M16 with the addition of a forward assist and corresponding notches in the bolt carrier. The M16A1 was the finalized production model in 1967 and was produced until 1982. To address issues raised by the XM16E1's testing cycle, a closed, bird-cage flash suppressor replaced the XM16E1's three-pronged flash suppressor, which caught on twigs and leaves. Various other changes were made after numerous problems in the field. Cleaning kits were developed and issued while barrels with chrome-plated chambers and later fully lined bores were introduced. With these and other changes, the malfunction rate slowly declined and new soldiers were generally unfamiliar with early problems. A rib was built into the side of the receiver on the XM16E1 to help prevent accidentally pressing the magazine release button while closing the ejection port cover. This rib was later extended on production M16A1s to help in preventing the magazine release from inadvertently being pressed. The hole in the bolt that accepts the cam pin was crimped inward on one side, in such a way that the cam pin may not be inserted with the bolt installed backwards, which would cause failures to eject until corrected. The M16A1 saw limited use in training capacities until the early 2000s, but is no longer in active service with the U.S., although it is still standard issue in many world armies. M16A2 The development of the M16A2 rifle was originally requested by the United States Marine Corps as a result of combat experience in Vietnam with the XM16E1 and M16A1. It was officially adopted by the Department of Defense as the "US Rifle, 5.56 mm, M16A2" in 1982. The Marines were the first branch of the U.S. Armed Forces to adopt it, in the early/mid-1980s, with the United States Army following suit in the late 1980s. The weapon's reliability allowed it to be widely used across the Marine Corps' special operations divisions as well.
Modifications to the M16A2 were extensive. In addition to the then new STANAG 4172 5.56×45mm NATO chambering and its accompanying rifling, the barrel was made with a greater thickness in front of the front sight post, to resist bending in the field and to allow a longer period of sustained fire without overheating. The rest of the barrel was maintained at the original thickness to enable the M203 grenade launcher to be attached. The barrel rifling was revised to a faster 1:7 (178 mm) twist rate to adequately stabilize the new 5.56×45 mm NATO SS109/M855 ball and L110/M856 tracer ammunition. The heavier longer SS109/M855 bullet reduced muzzle velocity from , to about . A new adjustable rear sight was added, allowing the rear sight to be dialed in for specific range settings between 300 and 800 meters to take full advantage of the ballistic characteristics of the SS109/M855 rounds and to allow windage adjustments without the need of a tool or cartridge. The flash suppressor was again modified, this time to be closed on the bottom so it would not kick up dirt or snow when being fired from the prone position, and acting as a recoil compensator. A spent case deflector was incorporated into the upper receiver immediately behind the ejection port to prevent (hot) cartridge cases from striking left-handed users. The action was also modified, replacing the fully automatic setting with a three-round burst setting. When using a fully automatic weapon, inexperienced troops often hold down the trigger and "spray" when under fire. The U.S. Army concluded that three-shot groups provide an optimum combination of ammunition conservation, accuracy, and firepower. The USMC has retired the M16A2 in favor of the newer M16A4; a few M16A2s remain in service with the U.S. Army Reserve and National Guard, Air Force, Navy and Coast Guard. 
The handguard was modified from the original triangular shape to a round one, which better fit smaller hands and could be fitted to older models of the M16. The new handguards were also symmetrical so armories need not separate left- and right-hand spares. The handguard retention ring was tapered to make it easier to install and uninstall the handguards. A notch for the middle finger was added to the pistol grip, as well as more texture to enhance the grip. The buttstock was lengthened by . The new buttstock became ten times stronger than the original due to advances in polymer technology since the early 1960s. Original M16 stocks were made from cellulose-impregnated phenolic resin; the newer stocks were engineered from DuPont Zytel glass-filled thermoset polymers. The new stock included a fully textured polymer buttplate for better grip on the shoulder, and retained a panel for accessing a small compartment inside the stock, often used for storing a basic cleaning kit. M16A3 The M16A3 is a modified version of the M16A2 adopted in small numbers by the U.S. Navy SEAL, Seabee, and Security units. It features the M16A1 trigger group, providing "safe", "semi-automatic", and "fully automatic" modes of fire.
In the late 1950s, designer Eugene Stoner was completing his work on the AR-15. The AR-15 used .22-caliber bullets, which destabilized when they hit a human body, as opposed to the .30 round, which typically passed through in a straight line. The smaller caliber meant that it could be controlled in autofire due to the reduced bolt thrust and free recoil impulse. Being almost one-third the weight of the .30 meant that the soldier could sustain fire for longer with the same load. Due to design innovations, the AR-15 could fire 600 to 700 rounds a minute with an extremely low jamming rate. Parts were stamped out, not hand-machined, and so could be mass-produced, and the stock was plastic to reduce weight. In 1958, the Army's Combat Developments Experimentation Command ran experiments with small squads in combat situations using the M14, AR-15, and another rifle designed by Winchester. The resulting study recommended adopting a lightweight rifle like the AR-15. In response, the Army declared that all rifles and machine guns should use the same ammunition, and ordered full production of the M14. However, advocates for the AR-15 gained the attention of Air Force Chief of Staff General Curtis LeMay. After testing the AR-15 with the ammunition manufactured by Remington that Armalite and Colt recommended, the Air Force declared that the AR-15 was its 'standard model' and ordered 8,500 rifles and 8.5 million rounds. Advocates for the AR-15 in the Defense Advanced Research Projects Agency acquired 1,000 Air Force AR-15s and shipped them to be tested by the Army of the Republic of Vietnam (ARVN). South Vietnamese soldiers issued glowing reports of the weapon's reliability, recording zero broken parts while firing 80,000 rounds in one stage of testing, and requiring only two replacement parts for the 1,000 weapons over the entire course of testing. The report of the experiment recommended that the U.S.
provide the AR-15 as the standard rifle of the ARVN, but Admiral Harry Felt, then Commander in Chief, Pacific Forces, rejected the recommendations on the advice of the U.S. Army. Throughout 1962 and 1963, the U.S. military extensively tested the AR-15. Positive evaluations emphasized its lightness, "lethality", and reliability. However, the Army Materiel Command criticized its inaccuracy and lack of penetrating power at longer ranges. In early 1963, the U.S. Special Forces asked, and was given permission, to make the AR-15 its standard weapon. Other users included Army Airborne units in Vietnam and some units affiliated with the Central Intelligence Agency. As more units adopted the AR-15, Secretary of the Army Cyrus Vance ordered an investigation into why the weapon had been rejected by the Army. The resulting report found that Army Materiel Command had rigged the previous tests, selecting tests that would favor the M14 and choosing match-grade M14s to compete against AR-15s straight out of the box. At this point, the bureaucratic battle lines were well-defined, with the Army ordnance agencies opposed to the AR-15 and the Air Force and civilian leadership of the Defense Department in favor. In January 1963, Secretary of Defense Robert McNamara concluded that the AR-15 was the superior weapon system and ordered a halt to M14 production. In late 1963, the Defense Department began mass procurement of rifles for the Air Force and special Army units. Secretary McNamara designated the Army as the procurer for the weapon within the Department, which allowed the Army ordnance establishment to modify the weapon as they wished. The first modification was the addition of a "manual bolt closure," allowing a soldier to ram in a round if it failed to seat properly.
The Air Force, which was buying the rifle, and the Marine Corps, which had tested it, both objected to this addition, with the Air Force noting, "During three years of testing and operation of the AR-15 rifle under all types of conditions the Air Force has no record of malfunctions that could have been corrected by a manual bolt closing device." They also noted that the closure added weight and complexity, reducing the reliability of the weapon. Colonel Howard Yount, who managed the Army procurement, would later state the bolt closure was added after direction from senior leadership, rather than as a result of any complaint or test result, and testified about the reasons: "the M-1, the M-14, and the carbine had always had something for the soldier to push on; that maybe this would be a comforting feeling to him, or something." After modifications, the new redesigned rifle was subsequently adopted as the M16 Rifle. Despite its early failures, the M16 proved to be a revolutionary design and stands as the longest continuously serving rifle in US military history. It has been adopted by many US allies and the 5.56×45 mm NATO cartridge has become not only the NATO standard, but "the standard assault-rifle cartridge in much of the world." It also led to the development of small-caliber high-velocity service rifles by every major army in the world. It is a benchmark against which other assault rifles are judged. M16s were produced by Colt until the late 1980s, when FN Herstal began to manufacture them. Adoption In July 1960, General Curtis LeMay was impressed by a demonstration of the ArmaLite AR-15. In the summer of 1961, General LeMay was promoted to U.S. Air Force chief of staff, and requested 80,000 AR-15s. However, General Maxwell D. Taylor, chairman of the Joint Chiefs of Staff, advised President John F. Kennedy that having two different calibers within the military system at the same time would be problematic and the request was rejected.
In October 1961, William Godel, a senior man at the Advanced Research Projects Agency, sent 10 AR-15s to South Vietnam. The reception was enthusiastic, and in 1962 another 1,000 AR-15s were sent. United States Army Special Forces personnel filed battlefield reports lavishly praising the AR-15 and the stopping-power of the 5.56 mm cartridge, and pressed for its adoption. The damage caused by the 5.56 mm bullet was originally believed to be caused by "tumbling" due to the slow 1 turn in rifling twist rate. However, any pointed lead-core bullet will "tumble" after penetration in flesh, because the center of gravity is towards the rear of the bullet. The large wounds observed by soldiers in Vietnam were actually caused by bullet fragmentation created by a combination of the bullet's velocity and construction. These wounds were so devastating that the photographs remained classified into the 1980s. However, despite overwhelming evidence that the AR-15 could bring more firepower to bear than the M14, the Army opposed the adoption of the new rifle. U.S. Secretary of Defense Robert McNamara now had two conflicting views: the ARPA report favoring the AR-15 and the Army's position favoring the M14. Even President Kennedy expressed concern, so McNamara ordered Secretary of the Army Cyrus Vance to test the M14, the AR-15 and the AK-47. The Army reported that only the M14 was suitable for service, but Vance wondered about the impartiality of those conducting the tests. He ordered the Army inspector general to investigate the testing methods used; the inspector general confirmed that the testers were biased towards the M14. In January 1963, Secretary McNamara received reports that M14 production was insufficient to meet the needs of the armed forces and ordered a halt to M14 production. At the time, the AR-15 was the only rifle that could fulfill a requirement of a "universal" infantry weapon for issue to all services.
McNamara ordered its adoption, despite receiving reports of several deficiencies, most notably the lack of a chrome-plated chamber. After modifications (most notably, the charging handle was re-located from under the carrying handle, as on the AR-10, to the rear of the receiver), the new redesigned rifle was renamed the Rifle, Caliber 5.56 mm, M16. Inexplicably, the modification to the new M16 did not include a chrome-plated barrel. Meanwhile, the Army relented and recommended the adoption of the M16 for jungle warfare operations. However, the Army insisted on the inclusion of a forward assist to help push the bolt into battery in the event that a cartridge failed to seat into the chamber. The Air Force, Colt and Eugene Stoner believed that the addition of a forward assist was an unjustified expense. As a result, the design was split into two variants: the Air Force's M16 without the forward assist, and the XM16E1 with the forward assist for the other service branches. In November 1963, McNamara approved the U.S. Army's order of 85,000 XM16E1s; and to appease General LeMay, the Air Force was granted an order for another 19,000 M16s. In March 1964, the M16 rifle went into production and the Army accepted delivery of the first batch of 2,129 rifles later that year, and an additional 57,240 rifles the following year. In 1964, the Army was informed that DuPont could not mass-produce the IMR 4475 stick powder to the specifications demanded by the M16. Therefore, Olin Mathieson Company provided a high-performance ball propellant. While the Olin WC 846 powder achieved the desired muzzle velocity, it produced much more fouling that quickly jammed the M16's action (unless the rifle was cleaned well and often). In March 1965, the Army began to issue the XM16E1 to infantry units.
However, the rifle was initially delivered without adequate cleaning kits or instructions because advertising from Colt asserted that the M16's materials made the weapon require little maintenance, which was interpreted by some as meaning the rifle was self-cleaning. Furthermore, cleaning was often conducted with improper equipment, such as insect repellent, water, and aircraft fuel, which induced further wear on the weapon. As a result, reports of stoppages in combat began to surface. The most severe problem was known as "failure to extract"—the spent cartridge case remained lodged in the chamber after the rifle was fired. Documented accounts of dead U.S. troops found next to disassembled rifles eventually led to a Congressional investigation. In February 1967, the improved XM16E1 was standardized as the M16A1. The new rifle had a chrome-plated chamber and bore to eliminate corrosion and stuck cartridges, and other minor modifications. New cleaning kits, powder solvents, and lubricants were also issued. Intensive training programs in weapons cleaning were instituted including a comic book-style operations manual. As a result, reliability problems greatly diminished and the M16A1 rifle achieved widespread acceptance by U.S. troops in Vietnam. In 1969, the M16A1 officially replaced the M14 rifle to become the U.S. military's standard service rifle. In 1970, the new WC 844 powder was introduced to reduce fouling. Reliability During the early part of its service, the M16 had a reputation for poor reliability and a malfunction rate of two per 1000 rounds fired. The M16's action works by passing high-pressure propellant gasses tapped from the barrel down a tube and into the carrier group within the upper receiver, and is commonly referred to as a "direct impingement gas system". The gas goes from the gas tube, through the bolt carrier key, and into the inside of the carrier where it expands in a donut-shaped gas cylinder. 
Because the bolt is prevented from moving forward by the barrel, the carrier is driven to the rear by the expanding gases and thus converts the energy of the gas to movement of the rifle's parts. The back part of the bolt forms a piston head and the cavity in the bolt carrier is the piston sleeve. It is more correct to call it an internal piston system. This design is much lighter and more compact than a gas-piston design. However, this design requires that combustion byproducts from the discharged cartridge be blown into the receiver as well. This accumulating carbon and vaporized metal build-up within the receiver and bolt-carrier negatively affects reliability and necessitates more intensive maintenance on the part of the individual soldier. The channeling of gases into the bolt carrier during operation increases the amount of heat that is deposited in the receiver while firing the M16 and causes essential lubricant to be "burned off". This requires frequent and generous applications of appropriate lubricant. Lack of proper lubrication is the most common source of weapon stoppages or jams. The original M16 fared poorly in the jungles of Vietnam and was infamous for reliability problems in the harsh environment. Max Hastings was very critical of the M16's general field issue in Vietnam just as grievous design flaws were becoming apparent. He further states that Shooting Times experienced repeated malfunctions with a test M16 and assumed these would be corrected before military use, but they were not. Many Marines and soldiers were so angry about the reliability problems that they began writing home, and on 26 March 1967 the Washington Daily News broke the story. Eventually the M16 became the target of a Congressional investigation. The investigation found that: The M16 was issued to troops without cleaning kits or instruction on how to clean the rifle.
The M16 and its 5.56×45 mm cartridge were tested and approved with the use of DuPont IMR8208M extruded powder, which was switched to Olin Mathieson WC846 ball powder that produced much more fouling and quickly jammed the action of the M16 (unless the gun was cleaned well and often). The M16 lacked a forward assist (rendering the rifle inoperable when the bolt failed to go fully forward). The M16 lacked a chrome-plated chamber, which allowed corrosion problems and contributed to case extraction failures (which was considered the most severe problem and required extreme measures to clear, such as inserting the cleaning-rod down the barrel and knocking the spent cartridge out). When these issues were addressed and corrected by the M16A1, the reliability problems decreased greatly. According to a 1968 Department of Army report, the M16A1 rifle achieved widespread acceptance by U.S. troops in Vietnam. "Most men armed with the M16 in Vietnam rated this rifle's performance high, however, many men entertained some misgivings about the M16's reliability. When asked what weapon they preferred to carry in combat, 85 percent indicated that they wanted either the M16 or its [smaller] carbine-length version, the XM177E2." Also "the M14 was preferred by 15 percent, while less than one percent wished to carry either the Stoner rifle, the AK-47, the carbine or a pistol." In March 1970, the "President's Blue Ribbon Defense Panel" concluded that the issuance of the M16 saved the lives of 20,000 U.S. servicemen during the Vietnam War, who would have otherwise died had the M14 remained in service. However, the M16 rifle's reputation continues to suffer. Another underlying cause of the M16's jamming problem was identified by ordnance staff, who discovered that Stoner and ammunition manufacturers had initially tested the AR-15 using DuPont IMR8208M extruded (stick) powder. Later, ammunition manufacturers adopted the more readily available Olin Mathieson WC846 ball powder.
The ball powder produced a longer peak chamber pressure with undesired timing effects. Upon firing, the cartridge case expands and seals the chamber (obturation). When the peak pressure starts to drop, the cartridge case contracts and then can be extracted. With ball powder, the cartridge case was not contracted enough during extraction due to the longer peak pressure period. The extractor would then fail to extract the cartridge case, tearing through the case rim and leaving an obturated case behind. After the introduction of the M4 Carbine, it was found that the shorter barrel length of 14.5 inches also has a negative effect on reliability, as the gas port is located closer to the chamber than the gas port of the standard length M16 rifle: 7.5 inches instead of 13 inches. This affects the M4's timing and increases the amount of stress and heat on the critical components, thereby reducing reliability. In a 2002 assessment, the USMC found that the M4 malfunctioned three times more often than the M16A4 (the M4 failed 186 times for 69,000 rounds fired, while the M16A4 failed 61 times). Thereafter, the Army and Colt worked to make modifications to the M4s and M16A4s in order to address the problems found. In tests conducted in 2005 and 2006 the Army found that on average, the new M4s and M16s fired approximately 5,000 rounds between stoppages. In December 2006, the Center for Naval Analyses (CNA) released a report on U.S. small arms in combat. The CNA conducted surveys on 2,608 troops returning from combat in Iraq and Afghanistan over the previous 12 months. Only troops who had fired their weapons at enemy targets were allowed to participate. 1,188 troops were armed with M16A2 or A4 rifles, making up 46 percent of the survey. 75 percent of M16 users (891 troops) reported they were satisfied with the weapon. 60 percent (713 troops) were satisfied with handling qualities such as handguards, size, and weight. Of the 40 percent who were dissatisfied, most complaints concerned its size.
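The 2002 USMC figures above translate directly into mean rounds between stoppages, which is where the "three times more often" comparison comes from; a quick arithmetic check:

```python
# Mean rounds between stoppages (MRBS) from the 2002 USMC assessment:
# 69,000 rounds fired per weapon type, 186 M4 failures vs. 61 M16A4 failures.
rounds_fired = 69_000
m4_failures = 186
m16a4_failures = 61

m4_mrbs = rounds_fired / m4_failures        # ~371 rounds per stoppage
m16a4_mrbs = rounds_fired / m16a4_failures  # ~1,131 rounds per stoppage
ratio = m4_failures / m16a4_failures        # ~3.05, i.e. "three times more often"

print(round(m4_mrbs), round(m16a4_mrbs), round(ratio, 2))  # prints: 371 1131 3.05
```

Note how these figures compare with the post-2005/2006 modifications, after which the Army reported roughly 5,000 rounds between stoppages.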
Only 19 percent of M16 users (226 troops) reported a stoppage, while 80 percent of those who experienced a stoppage said it had little impact on their ability to clear the stoppage and re-engage their target. Half of the M16 users experienced failures of their magazines to feed. 83 percent (986 troops) did not need their rifles repaired while in theater. 71 percent (843 troops) were confident in the M16's reliability, defined as level of soldier confidence their weapon will fire without malfunction, and 72 percent (855 troops) were confident in its durability, defined as level of soldier confidence their weapon will not break or need repair. Both factors were attributed to high levels of soldiers performing their own maintenance. 60 percent of M16 users offered recommendations for improvements. Requests included greater bullet lethality, new-built instead of rebuilt rifles, better quality magazines, decreased weight, and a collapsible stock. Some users recommended shorter and lighter weapons such as the M4 carbine. Some issues have been addressed with the issuing of the Improved STANAG magazine in March 2009, and the M855A1 Enhanced Performance Round in June 2010. In early 2010, two journalists from The New York Times spent three months with soldiers and Marines in Afghanistan. While there, they questioned around 100 infantry troops about the reliability of their M16 rifles, as well as the M4 carbine. The troops did not report reliability problems with their rifles. While only 100 troops were asked, they engaged in daily fighting in Marja, including at least a dozen intense engagements in Helmand Province, where the ground is covered in fine powdered sand (called "moon dust" by troops) that can stick to firearms. Weapons were often dusty, wet, and covered in mud. Intense firefights lasted hours with several magazines being expended. Only one soldier reported a jam when his M16 was covered in mud after climbing out of a canal.
The weapon was cleared and resumed firing with the next chambered round. Furthermore, the Marine Chief Warrant Officer responsible for weapons training and performance of the Third Battalion, Sixth Marines, reported that "We've had nil in the way of problems; we've had no issues", with his battalion's 350 M16s and 700 M4s. Design The M16 is a lightweight, 5.56 mm, air-cooled, gas-operated, magazine-fed assault rifle, with a rotating bolt. The M16's receivers are made of 7075 aluminum alloy, its barrel, bolt, and bolt carrier of steel, and its handguards, pistol grip, and buttstock of plastics. The M16 internal piston action was derived from the original ArmaLite AR-10 and ArmaLite AR-15 actions. This internal piston action system designed by Eugene Stoner is commonly called a direct impingement system, but it does not use a conventional direct impingement system. In , the designer states: "This invention is a true expanding gas system instead of the conventional impinging gas system." The gas system, bolt carrier, and bolt-locking design were novel for the time. The M16A1 was especially lightweight at with a loaded 30-round magazine. This was significantly less than the M14 that it replaced at with a loaded 20-round magazine. It is also lighter when compared to the AKM's with a loaded 30-round magazine. The M16A2 weighs loaded with a 30-round magazine, because of the adoption of a thicker barrel profile. The thicker barrel is more resistant to damage when handled roughly and is also slower to overheat during sustained fire. Unlike a traditional "bull" barrel that is thick along its entire length, the M16A2's barrel is only thick forward of the handguards. The barrel profile under the handguards remained the same as the M16A1 for compatibility with the M203 grenade launcher.
Barrel Early model M16 barrels had a rifling twist of four grooves, right-hand twist, one turn in 14 inches (1:355.6 mm or 64 calibers) bore, as it was the same rifling used by the .222 Remington sporting round. This was shown to make the light .223 Remington bullet yaw in flight at long ranges, and it was soon replaced. Later models had an improved rifling with six grooves, right-hand twist, one turn in 12 inches (1:304.8 mm or 54.8 calibers) for increased accuracy, optimized for firing the M193 ball and M196 tracer bullets. Current models are optimized for firing the heavier NATO SS109 ball and long L110 tracer bullets and have six grooves, right-hand twist, one turn in 7 inches (1:177.8 mm or 32 calibers). Weapons designed to accept both the M193 and SS109 rounds (like civilian market clones) usually have a six-groove, right-hand twist, one turn in 9 inches (1:228.6 mm or 41.1 calibers) bore, although 1:8 inches and 1:7 inches twist rates are available as well. Recoil The M16 uses a "straight-line" recoil design, where the recoil spring is located in the stock directly behind the action and serves the dual function of operating spring and recoil buffer. The stock being in line with the bore also reduces muzzle rise, especially during automatic fire. Because recoil does not significantly shift the point of aim, faster follow-up shots are possible and user fatigue is reduced. In addition, current model M16 flash suppressors also act as compensators to reduce recoil further. Note: Free recoil is calculated by using the rifle weight, bullet weight, muzzle velocity, and charge weight. It is that which would be measured if the rifle were fired suspended from strings, free to recoil. A rifle's perceived recoil is also dependent on many other factors which are not readily quantified. Sights The M16's most distinctive ergonomic feature is the carrying handle and rear sight assembly on top of the receiver.
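The twist-rate conversions quoted in the Barrel section (inches per turn to millimeters and to bore calibers per turn) follow from simple arithmetic; a quick sketch, assuming the nominal 5.56 mm bore diameter:

```python
# Convert rifling twist (inches per turn) to mm per turn and to
# calibers per turn for a nominal 5.56 mm bore, reproducing the
# figures quoted for the various M16 barrel versions.
MM_PER_INCH = 25.4
BORE_MM = 5.56  # assumed nominal bore diameter

def twist_mm(inches_per_turn: float) -> float:
    return inches_per_turn * MM_PER_INCH

def twist_calibers(inches_per_turn: float) -> float:
    return twist_mm(inches_per_turn) / BORE_MM

for twist in (14, 12, 9, 7):
    print(f"1:{twist} in = 1:{twist_mm(twist):.1f} mm "
          f"= {twist_calibers(twist):.1f} calibers")
```

Dividing the millimeters-per-turn figure by the 5.56 mm bore reproduces the quoted values: 1:14 in gives 355.6 mm and about 64 calibers, 1:12 in gives 304.8 mm and 54.8 calibers, 1:9 in gives 228.6 mm and 41.1 calibers, and 1:7 in gives 177.8 mm and about 32 calibers.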
The carrying handle is a by-product of the original AR-10 design, where it contained a rear sight that could be dialed in with an elevation wheel for specific range settings and also served to protect the charging handle. The M16 carry handle also provided mounting-groove interfaces and a hole at the bottom of the handle groove for mounting a Colt 3×20 telescopic sight featuring a Bullet Drop Compensation elevation adjustment knob for ranges from . This concurs with the pre-M16A2 maximum effective range of . The Colt 3×20 telescopic sight was factory-adjusted to be parallax-free at . In Delft, the Netherlands, Artillerie-Inrichtingen produced a roughly similar 3×25 telescopic sight for the carrying handle mounting interfaces. The M16 elevated iron sight line has a sight radius. As the M16 series' rear-sight, front-sight, and zeroing-target designs were modified over time, and as optical aiming devices and new service ammunition were introduced, zeroing procedures changed. The standard
called "pretty terrible." In the 1965 documentary Meet Marlon Brando, he revealed that the final product heard in the movie was a result of countless singing takes being cut into one and later joked, "I couldn't hit a note with a baseball bat; some notes I missed by extraordinary margins ... They sewed my words together on one song so tightly that when I mouthed it in front of the camera, I nearly asphyxiated myself". Relations between Brando and costar Frank Sinatra were also frosty, with Stefan Kanfer observing: "The two men were diametrical opposites: Marlon required multiple takes; Frank detested repeating himself." Upon their first meeting Sinatra reportedly scoffed, "Don't give me any of that Actors Studio shit." Brando later quipped, "Frank is the kind of guy, when he dies, he's going to heaven and give God a hard time for making him bald." Frank Sinatra called Brando "the world's most overrated actor", and referred to him as "mumbles". The film was commercially though not critically successful, costing $5.5 million to make and grossing $13 million. Brando played Sakini, a Japanese interpreter for the U.S. Army in postwar Japan, in The Teahouse of the August Moon (1956). Pauline Kael was not particularly impressed by the movie, but noted "Marlon Brando starved himself to play the pixie interpreter Sakini, and he looks as if he's enjoying the stunt—talking with a mad accent, grinning boyishly, bending forward, and doing tricky movements with his legs. He's harmlessly genial (and he is certainly missed when he's offscreen), though the fey, roguish role doesn't allow him to do what he's great at and it's possible that he's less effective in it than a lesser actor might have been." In Sayonara (1957) he appeared as a United States Air Force officer. Newsweek found the film a "dull tale of the meeting of the twain", but it was nevertheless a box-office success. 
According to Stefan Kanfer's biography of the actor, Brando's manager Jay Kanter negotiated a profitable contract with ten percent of the gross going to Brando, which put him in the millionaire category. The movie was controversial for openly discussing interracial marriage, but proved a great success, earning 10 Academy Award nominations, with Brando being nominated for Best Actor. The film went on to win four Academy Awards. Teahouse and Sayonara were the first in a string of films Brando would strive to make over the next decade which contained socially relevant messages, and he formed a partnership with Paramount to establish his own production company called Pennebaker, whose declared purpose was to develop films that contained "social value that would improve the world." The name was a tribute to his mother, who had died in 1954. By all accounts, Brando was devastated by her death, with biographer Peter Manso telling A&E's Biography, "She was the one who could give him approval like no one else could and, after his mother died, it seems that Marlon stops caring." Brando appointed his father to run Pennebaker. In the same A&E special, George Englund claims that Brando gave his father the job because "it gave Marlon a chance to take shots at him, to demean and diminish him". In 1958, Brando appeared in The Young Lions, dyeing his hair blonde and assuming a German accent for the role, which he later admitted was not convincing. The film is based on the novel by Irwin Shaw, and Brando's portrayal of the character Christian Diestl was controversial for its time. He later wrote, "The original script closely followed the book, in which Shaw painted all Germans as evil caricatures, especially Christian, whom he portrayed as a symbol of everything that was bad about Nazism; he was mean, nasty, vicious, a cliché of evil ... I thought the story should demonstrate that there are no inherently 'bad' people in the world, but they can easily be misled."
Shaw and Brando even appeared together for a televised interview with CBS correspondent David Schoenbrun and, during a bombastic exchange, Shaw charged that, like most actors, Brando was incapable of playing flat-out villainy; Brando responded by stating "Nobody creates a character but an actor. I play the role; now he exists. He is my creation." The Young Lions also features Brando's only appearance in a film with friend and rival Montgomery Clift (although they shared no scenes together). Brando closed out the decade by appearing in The Fugitive Kind (1960) opposite Anna Magnani. The film was based on another play by Tennessee Williams but was hardly the success A Streetcar Named Desire had been, with the Los Angeles Times labeling Williams's personae "psychologically sick or just plain ugly" and The New Yorker calling it a "cornpone melodrama". One-Eyed Jacks and Mutiny on the Bounty In 1961, Brando made his directorial debut in the western One-Eyed Jacks. The picture was originally directed by Stanley Kubrick, but he was fired early in the production. Paramount then made Brando the director. Brando portrays the lead character Rio, and Karl Malden plays his partner "Dad" Longworth. The supporting cast features Katy Jurado, Ben Johnson, and Slim Pickens. Brando's penchant for multiple retakes and character exploration as an actor carried over into his directing, however, and the film soon went over budget; Paramount expected the film to take three months to complete but shooting stretched to six and the cost doubled to more than six million dollars. Brando's inexperience as an editor also delayed postproduction and Paramount eventually took control of the film. Brando later wrote, "Paramount said it didn't like my version of the story; I'd had everyone lie except Karl Malden. The studio cut the movie to pieces and made him a liar, too. By then, I was bored with the whole project and walked away from it." One-Eyed Jacks was poorly reviewed by critics. 
While the film did solid business, it ran so far over budget that it lost money. Brando's revulsion toward the film industry reportedly boiled over on the set of his next film, Metro-Goldwyn-Mayer's remake of Mutiny on the Bounty, which was filmed in Tahiti. The actor was accused of deliberately sabotaging nearly every aspect of the production. On June 16, 1962, The Saturday Evening Post ran an article by Bill Davidson with the headline "Six million dollars down the drain: the mutiny of Marlon Brando". Mutiny director Lewis Milestone claimed that the executives "deserve what they get when they give a ham actor, a petulant child, complete control over an expensive picture." Mutiny on the Bounty nearly capsized MGM and, while the project had indeed been hampered by delays other than Brando's behavior, the accusations would dog the actor for years as studios began to fear Brando's difficult reputation. Critics also began taking note of his fluctuating weight. Box office decline: 1963–1971 Distracted by his personal life and becoming disillusioned with his career, Brando began to view acting as a means to a financial end. Critics protested when he started accepting roles in films many perceived as being beneath his talent, or criticized him for failing to live up to the better roles. Previously only signing short-term deals with film studios, in 1961 Brando uncharacteristically signed a five-picture deal with Universal Studios that would haunt him for the rest of the decade. The Ugly American (1963) was the first of these films. Based on the 1958 novel of the same title that Pennebaker had optioned, the film, which featured Brando's sister Jocelyn, was rated fairly positively but died at the box office. Brando was nominated for a Golden Globe for his performance.
All of Brando's other Universal films during this period, including Bedtime Story (1964), The Appaloosa (1966), A Countess from Hong Kong (1967) and The Night of the Following Day (1969), were also critical and commercial flops. Countess in particular was a disappointment for Brando, who had looked forward to working with one of his heroes, director Charlie Chaplin. The experience turned out to be an unhappy one; Brando was horrified at Chaplin's didactic style of direction and his authoritarian approach. Brando had also appeared in the spy thriller Morituri in 1965; that, too, failed to attract an audience. Brando acknowledged his professional decline, writing later, "Some of the films I made during the sixties were successful; some weren't. Some, like The Night of the Following Day, I made only for the money; others, like Candy, I did because a friend asked me to and I didn't want to turn him down ... In some ways I think of my middle age as the Fuck You Years." Candy was especially appalling for many: a 1968 sex farce directed by Christian Marquand and based on the 1958 novel by Terry Southern, it satirizes pornographic stories through the adventures of its naive heroine, Candy, played by Ewa Aulin. It is generally regarded as the nadir of Brando's career. The Washington Post observed: "Brando's self-indulgence over a dozen years is costing him and his public his talents." In the March 1966 issue of The Atlantic, Pauline Kael wrote that in his rebellious days, Brando "was antisocial because he knew society was crap; he was a hero to youth because he was strong enough not to take the crap", but now Brando and others like him had become "buffoons, shamelessly, pathetically mocking their public reputations." In an earlier review of The Appaloosa in 1966, Kael wrote that the actor was "trapped in another dog of a movie ... Not for the first time, Mr. Brando gives us a heavy-lidded, adenoidally openmouthed caricature of the inarticulate, stalwart loner."
Although he feigned indifference, Brando was hurt by the critical mauling, admitting in the 2015 film Listen to Me Marlon, "They can hit you every day and you have no way of fighting back. I was very convincing in my pose of indifference, but I was very sensitive and it hurt a lot." Brando portrayed a repressed gay army officer in Reflections in a Golden Eye, directed by John Huston and co-starring Elizabeth Taylor. The role turned out to be one of his most acclaimed in years, with Stanley Crouch marveling, "Brando's main achievement was to portray the taciturn but stoic gloom of those pulverized by circumstances." The film overall received mixed reviews. Another notable film was The Chase (1966), which paired the actor with director Arthur Penn and co-stars Robert Duvall, Jane Fonda and Robert Redford. The film deals with themes of racism, sexual revolution, small-town corruption, and vigilantism. The film was received mostly positively. Brando cited Burn! (1969) as his personal favorite of the films he had made, writing in his autobiography, "I think I did some of the best acting I've ever done in that picture, but few people came to see it." Brando dedicated a full chapter to the film in his memoir, stating that the director, Gillo Pontecorvo, was the best director he had ever worked with next to Kazan and Bernardo Bertolucci. Brando also detailed his clashes with Pontecorvo on the set and how "we nearly killed each other." Loosely based on events in the history of Guadeloupe, the film got a hostile reception from critics. In 1971, Michael Winner directed him in the British horror film The Nightcomers with Stephanie Beacham, Thora Hird, Harry Andrews and Anna Palk. It is a prequel to The Turn of the Screw, which later became the 1961 film The Innocents. Brando's performance earned him a nomination for a Best Actor BAFTA, but the film bombed at the box office. The Godfather and Last Tango in Paris During the 1970s, Brando was considered "unbankable".
Critics were becoming increasingly dismissive of his work and he had not appeared in a box office hit since The Young Lions in 1958, the last year he had ranked as one of the Top Ten Box Office Stars and the year of his last Academy Award nomination, for Sayonara. Brando's performance as Vito Corleone, the "Don," in The Godfather (1972), Francis Ford Coppola's adaptation of Mario Puzo's 1969 bestselling novel of the same name, was a career turning point, putting him back in the Top Ten and winning him his second Best Actor Oscar. Paramount production chief Robert Evans, who had given Puzo an advance to write The Godfather so that Paramount would own the film rights, hired Coppola after many major directors had turned the film down. Evans wanted an Italian-American director who could provide the film with cultural authenticity. Coppola also came cheap. Evans was conscious of the fact that Paramount's last Mafia film, The Brotherhood (1968), had been a box office bomb, and he believed it was partly due to the fact that the director, Martin Ritt, and the star, Kirk Douglas, were Jews and the film lacked an authentic Italian flavor. The studio originally intended the film to be a low-budget production set in contemporary times without any major actors, but the phenomenal success of the novel gave Evans the clout to turn The Godfather into a prestige picture. Coppola had developed a list of actors for all the roles, and his list of potential Dons included the Oscar-winning Italian-American Ernest Borgnine, the Italian-American Frank de Kova (best known for playing Chief Wild Eagle on the TV sitcom F-Troop), John Marley (a Best Supporting Actor Oscar nominee for Paramount's 1970 hit film Love Story, who was cast as film producer Jack Woltz in the picture), the Italian-American Richard Conte (who was cast as Don Corleone's deadly rival Don Emilio Barzini), and Italian film producer Carlo Ponti.
Coppola admitted in a 1975 interview, "We finally figured we had to lure the best actor in the world. It was that simple. That boiled down to Laurence Olivier or Marlon Brando, who are the greatest actors in the world." The holographic copy of Coppola's cast list shows Brando's name underlined. Evans told Coppola that he had been thinking of Brando for the part two years earlier, and Puzo had imagined Brando in the part when he wrote the novel and had actually written to him about the part, so Coppola and Evans narrowed it down to Brando. (Ironically, Olivier would compete with Brando for the Best Actor Oscar for his part in Sleuth. He bested Brando at the 1972 New York Film Critics Circle Awards.) Albert S. Ruddy, whom Paramount assigned to produce the film, agreed with the choice of Brando. However, Paramount studio executives were opposed to casting Brando due to his reputation for difficulty and his long string of box office flops. Brando also had One-Eyed Jacks working against him, a troubled production that lost money for Paramount when it was released in 1961. Paramount Pictures President Stanley Jaffe told an exasperated Coppola, "As long as I'm president of this studio, Marlon Brando will not be in this picture, and I will no longer allow you to discuss it." Jaffe eventually set three conditions for the casting of Brando: that he would have to take a fee far below what he typically received; that he would have to agree to accept financial responsibility for any production delays caused by his behavior; and that he would have to submit to a screen test. Coppola convinced Brando to do a videotaped "make-up" test, in which Brando did his own makeup (he used cotton balls to simulate the character's puffed cheeks). Coppola had feared Brando might be too young to play the Don, but was electrified by the actor's characterization as the head of a crime family. Even so, he had to fight the studio in order to cast the temperamental actor.
Brando had doubts himself, stating in his autobiography, "I had never played an Italian before, and I didn't think I could do it successfully." Eventually, Charles Bluhdorn, the president of Paramount parent Gulf+Western, was won over to letting Brando have the role; when he saw the screen test, he asked in amazement, "What are we watching? Who is this old guinea?" Brando was signed for a low fee of $50,000, but in his contract, he was given a percentage of the gross on a sliding scale: 1% of the gross for each $10 million over a $10 million threshold, up to 5% if the picture exceeded $60 million. According to Evans, Brando sold back his points in the picture for $100,000, as he was in dire need of funds. "That $100,000 cost him $11 million," Evans claimed. In a 1994 interview that can be found on the Academy of Achievement website, Coppola insisted, "The Godfather was a very unappreciated movie when we were making it. They were very unhappy with it. They didn't like the cast. They didn't like the way I was shooting it. I was always on the verge of getting fired." When word of this reached Brando, he threatened to walk off the picture, writing in his memoir, "I strongly believe that directors are entitled to independence and freedom to realize their vision, though Francis left the characterizations in our hands and we had to figure out what to do." In a 2010 television interview with Larry King, Al Pacino also talked about how Brando's support helped him keep the role of Michael Corleone in the movie—despite the fact Coppola wanted to fire him. (Pacino also explained in the Larry King interview that while Coppola expressed disappointment in Pacino's early scenes he did not specifically threaten to fire him; Coppola himself was feeling pressure from studio executives who were puzzled by Pacino's performance. 
In the same interview, Pacino credits Coppola with getting him the part.) Brando was on his best behavior during filming, buoyed by a cast that included Pacino, Robert Duvall, James Caan, and Diane Keaton. In the Vanity Fair article "The Godfather Wars", Mark Seal writes, "With the actors, as in the movie, Brando served as the head of the family. He broke the ice by toasting the group with a glass of wine. 'When we were young, Brando was like the godfather of actors,' says Robert Duvall. 'I used to meet with Dustin Hoffman in Cromwell's Drugstore, and if we mentioned his name once, we mentioned it 25 times in a day.' Caan adds, 'The first day we met Brando everybody was in awe.'" Brando's performance was glowingly reviewed by critics. "I thought it would be interesting to play a gangster, maybe for the first time in the movies, who wasn't like those bad guys Edward G. Robinson played, but who is kind of a hero, a man to be respected," Brando recalled in his autobiography. "Also, because he had so much power and unquestioned authority, I thought it would be an interesting contrast to play him as a gentle man, unlike Al Capone, who beat up people with baseball bats." Duvall later marveled to A&E's Biography, "He minimized the sense of beginning. In other words he, like, deemphasized the word action. He would go in front of that camera just like he was before. Cut! It was all the same. There was really no beginning. I learned a lot from watching that." Brando won the Academy Award for Best Actor for his performance, but he declined it, becoming the second actor to refuse a Best Actor award (after George C. Scott for Patton). He boycotted the award ceremony, instead sending indigenous American rights activist Sacheen Littlefeather, who appeared in full Apache attire, to state Brando's reasons, which were based on his objection to the depiction of indigenous Americans by Hollywood and television.
The actor followed The Godfather with Bernardo Bertolucci's 1972 film Last Tango in Paris, playing opposite Maria Schneider, but Brando's highly noted performance threatened to be overshadowed by an uproar over the sexual content of the film. Brando portrays a recent American widower named Paul, who begins an anonymous sexual relationship with a young, betrothed Parisian woman named Jeanne. As with previous films, Brando refused to memorize his lines for many scenes; instead, he wrote his lines on cue cards and posted them around the set for easy reference, leaving Bertolucci with the problem of keeping them out of the picture frame. The film features several intense, graphic scenes involving Brando, including Paul anally raping Jeanne using butter as a lubricant, which it was alleged was not consensual, and Paul's angry, emotionally charged final confrontation with the corpse of his dead wife. The controversial movie was a hit, however, and Brando made the list of Top Ten Box Office Stars for the last time. His gross participation deal earned him $3 million. The voting membership of the Academy of Motion Picture Arts & Sciences again nominated Brando for Best Actor, his seventh nomination. Although Brando won the 1973 New York Film Critics Circle Award, he did not attend the ceremony or send a representative to pick up the award. Pauline Kael, in The New Yorker review, wrote "The movie breakthrough has finally come. Bertolucci and Brando have altered the face of an art form." Brando confessed in his autobiography, "To this day I can't say what Last Tango in Paris was about", and added the film "required me to do a lot of emotional arm wrestling with myself, and when it was finished, I decided that I wasn't ever again going to destroy myself emotionally to make a movie". In 1973, Brando was devastated by the death of his childhood best friend Wally Cox. Brando slept in Cox's pajamas and wrenched his ashes from his widow.
She was going to sue for their return, but finally said "I think Marlon needs the ashes more than I do." Late 1970s In 1976, Brando appeared in The Missouri Breaks with his friend Jack Nicholson. The movie also reunited the actor with director Arthur Penn. As biographer Stefan Kanfer describes, Penn had difficulty controlling Brando, who seemed intent on going over the top with his border-ruffian-turned-contract-killer Robert E. Lee Clayton: "Marlon made him a cross-dressing psychopath. Absent for the first hour of the movie, Clayton enters on horseback, dangling upside down, caparisoned in white buckskin, Littlefeather-style. He speaks in an Irish accent for no apparent reason. Over the next hour, also for no apparent reason, Clayton assumes the intonation of a British upper-class twit and an elderly frontier woman, complete with a granny dress and matching bonnet. Penn, who believed in letting actors do their thing, indulged Marlon all the way." Critics were unkind, with The Observer calling Brando's performance "one of the most extravagant displays of grandedamerie since Sarah Bernhardt", while The Sun complained, "Marlon Brando at fifty-two has the sloppy belly of a sixty-two-year-old, the white hair of a seventy-two-year-old, and the lack of discipline of a precocious twelve-year-old." However, Kanfer noted: "Even though his late work was met with disapproval, a re-examination shows that often, in the middle of the most pedestrian scene, there would be a sudden, luminous occurrence, a flash of the old Marlon that showed how capable he remained." In 1978, Brando narrated the English version of Raoni, a French-Belgian documentary film directed by Jean-Pierre Dutilleux and Luiz Carlos Saldanha that focused on the life of Raoni Metuktire and issues surrounding the survival of the indigenous Indian tribes of north central Brazil. Brando portrayed Superman's father Jor-El in the 1978 film Superman. 
He agreed to the role only on assurance that he would be paid a large sum for what amounted to a small part, that he would not have to read the script beforehand, and that his lines would be displayed somewhere off-camera. It was revealed in a documentary contained in the 2001 DVD release of Superman that he was paid $3.7 million for two weeks of work. Brando also filmed scenes for the movie's sequel, Superman II, but after producers refused to pay him the same percentage he received for the first movie, he denied them permission to use the footage. "I asked for my usual percentage," he recollected in his memoir, "but they refused, and so did I." However, after Brando's death, the footage was reincorporated into the 2006 recut of the film, Superman II: The Richard Donner Cut, and in the 2006 "loose sequel" Superman Returns, in which both used and unused archive footage of him as Jor-El from the first two Superman films was remastered for a scene in the Fortress of Solitude, and Brando's voice-overs were used throughout the film. In 1979, he made a rare television appearance in the miniseries Roots: The Next Generations, portraying George Lincoln Rockwell; he won a Primetime Emmy Award for Outstanding Supporting Actor in a Miniseries or a Movie for his performance. Brando starred as Colonel Walter E. Kurtz in Francis Ford Coppola's Vietnam epic Apocalypse Now (1979). He plays a highly decorated U.S. Army Special Forces officer who goes renegade, running his own operation based in Cambodia, and who is feared by the U.S. military as much as by the Vietnamese. Brando was paid $1 million a week for three weeks' work. The film drew attention for its lengthy and troubled production, as Eleanor Coppola's documentary Hearts of Darkness: A Filmmaker's Apocalypse documents: Brando showed up on the set overweight, Martin Sheen suffered a heart attack, and severe weather destroyed several expensive sets.
The film's release was also postponed several times while Coppola edited millions of feet of footage. In the documentary, Coppola
beat up people with baseball bats." Duvall later marveled to A&E's Biography, "He minimized the sense of beginning. In other words he, like, deemphasized the word action. He would go in front of that camera just like he was before. Cut! It was all the same. There was really no beginning. I learned a lot from watching that." Brando won the Academy Award for Best Actor for his performance, but he declined it, becoming the second actor to refuse a Best Actor award (after George C. Scott for Patton). He boycotted the award ceremony, instead sending indigenous American rights activist Sacheen Littlefeather, who appeared in full Apache attire, to state Brando's reasons, which were based on his objection to the depiction of indigenous Americans by Hollywood and television. The actor followed The Godfather with Bernardo Bertolucci's 1972 film Last Tango in Paris, playing opposite Maria Schneider, but Brando's highly noted performance threatened to be overshadowed by an uproar over the sexual content of the film. Brando portrays a recent American widower named Paul, who begins an anonymous sexual relationship with a young, betrothed Parisian woman named Jeanne. As with previous films, Brando refused to memorize his lines for many scenes; instead, he wrote his lines on cue cards and posted them around the set for easy reference, leaving Bertolucci with the problem of keeping them out of the picture frame. The film features several intense, graphic scenes involving Brando, including Paul anally raping Jeanne using butter as a lubricant, which it was alleged was not consensual, and Paul's angry, emotionally charged final confrontation with the corpse of his dead wife. The controversial movie was a hit, however, and Brando made the list of Top Ten Box Office Stars for the last time. His gross participation deal earned him $3 million. The voting membership of the Academy of Motion Picture Arts & Sciences again nominated Brando for Best Actor, his seventh nomination. 
Although Brando won the 1973 New York Film Critics Circle Awards, he did not attend the ceremony or send a representative to pick up the award if he won. Pauline Kael, in The New Yorker review, wrote "The movie breakthrough has finally come. Bertolucci and Brando have altered the face of an art form." Brando confessed in his autobiography, "To this day I can't say what Last Tango in Paris was about", and added the film "required me to do a lot of emotional arm wrestling with myself, and when it was finished, I decided that I wasn't ever again going to destroy myself emotionally to make a movie". In 1973, Brando was devastated by the death of his childhood best friend Wally Cox. Brando slept in Cox's pajamas and wrenched his ashes from his widow. She was going to sue for their return, but finally said "I think Marlon needs the ashes more than I do." Late 1970s In 1976, Brando appeared in The Missouri Breaks with his friend Jack Nicholson. The movie also reunited the actor with director Arthur Penn. As biographer Stefan Kanfer describes, Penn had difficulty controlling Brando, who seemed intent on going over the top with his border-ruffian-turned-contract-killer Robert E. Lee Clayton: "Marlon made him a cross-dressing psychopath. Absent for the first hour of the movie, Clayton enters on horseback, dangling upside down, caparisoned in white buckskin, Littlefeather-style. He speaks in an Irish accent for no apparent reason. Over the next hour, also for no apparent reason, Clayton assumes the intonation of a British upper-class twit and an elderly frontier woman, complete with a granny dress and matching bonnet. Penn, who believed in letting actors do their thing, indulged Marlon all the way." 
Critics were unkind, with The Observer calling Brando's performance "one of the most extravagant displays of grande damerie since Sarah Bernhardt", while The Sun complained, "Marlon Brando at fifty-two has the sloppy belly of a sixty-two-year-old, the white hair of a seventy-two-year-old, and the lack of discipline of a precocious twelve-year-old." However, Kanfer noted: "Even though his late work was met with disapproval, a re-examination shows that often, in the middle of the most pedestrian scene, there would be a sudden, luminous occurrence, a flash of the old Marlon that showed how capable he remained." In 1978, Brando narrated the English version of Raoni, a French-Belgian documentary film directed by Jean-Pierre Dutilleux and Luiz Carlos Saldanha that focused on the life of Raoni Metuktire and issues surrounding the survival of the indigenous tribes of north-central Brazil. Brando portrayed Superman's father Jor-El in the 1978 film Superman. He agreed to the role only on the assurance that he would be paid a large sum for what amounted to a small part, that he would not have to read the script beforehand, and that his lines would be displayed somewhere off-camera. It was revealed in a documentary contained in the 2001 DVD release of Superman that he was paid $3.7 million for two weeks of work. Brando also filmed scenes for the movie's sequel, Superman II, but after producers refused to pay him the same percentage he received for the first movie, he denied them permission to use the footage. "I asked for my usual percentage," he recollected in his memoir, "but they refused, and so did I."
However, after Brando's death, the footage was reincorporated into the 2006 recut of the film, Superman II: The Richard Donner Cut, and in the 2006 "loose sequel" Superman Returns, in which both used and unused archive footage of him as Jor-El from the first two Superman films was remastered for a scene in the Fortress of Solitude, and Brando's voice-overs were used throughout the film. In 1979, he made a rare television appearance in the miniseries Roots: The Next Generations, portraying George Lincoln Rockwell; he won a Primetime Emmy Award for Outstanding Supporting Actor in a Miniseries or a Movie for his performance. Brando starred as Colonel Walter E. Kurtz in Francis Ford Coppola's Vietnam epic Apocalypse Now (1979). He plays a highly decorated U.S. Army Special Forces officer who goes renegade, running his own operation based in Cambodia, feared by the U.S. military as much as by the Vietnamese. Brando was paid $1 million a week for three weeks' work. The film drew attention for its lengthy and troubled production, as Eleanor Coppola's documentary Hearts of Darkness: A Filmmaker's Apocalypse documents: Brando showed up on the set overweight, Martin Sheen suffered a heart attack, and severe weather destroyed several expensive sets. The film's release was also postponed several times while Coppola edited millions of feet of footage. In the documentary, Coppola talks about how astonished he was when an overweight Brando turned up for his scenes and, feeling desperate, decided to portray Kurtz, who appears emaciated in the original story, as a man who had indulged every aspect of himself. Coppola: "He was already heavy when I hired him and he promised me that he was going to get in shape and I imagined that I would, if he were heavy, I could use that. But he was so fat, he was very, very shy about it ... He was very, very adamant about how he didn't want to portray himself that way."
Brando admitted to Coppola that he had not read the book, Heart of Darkness, as the director had asked him to, and the pair spent days exploring the story and the character of Kurtz, much to the actor's financial benefit, according to producer Fred Roos: "The clock was ticking on this deal he had and we had to finish him within three weeks or we'd go into this very expensive overage ... And Francis and Marlon would be talking about the character and whole days would go by. And this is at Marlon's urging—and yet he's getting paid for it." Upon release, Apocalypse Now earned critical acclaim, as did Brando's performance. His whispering of Kurtz's final words "The horror! The horror!", has become particularly famous. Roger Ebert, writing in the Chicago Sun-Times, defended the movie's controversial denouement, opining that the ending, "with Brando's fuzzy, brooding monologues and the final violence, feels much more satisfactory than any conventional ending possibly could." Brando received a fee of $2 million plus 10% of the gross theatrical rental and 10% of the TV sale rights, earning him around $9 million. Later work After appearing as oil tycoon Adam Steiffel in 1980's The Formula, which was poorly received critically, Brando announced his retirement from acting. However, he returned in 1989 in A Dry White Season, based on André Brink's 1979 anti-apartheid novel. Brando agreed to do the film for free, but fell out with director Euzhan Palcy over how the film was edited; he even made a rare television appearance in an interview with Connie Chung to voice his disapproval. In his memoir, he maintained that Palcy "had cut the picture so poorly, I thought, that the inherent drama of this conflict was vague at best." Brando received praise for his performance, earning an Academy Award nomination for Best Supporting Actor and winning the Best Actor Award at the Tokyo Film Festival. 
Brando scored enthusiastic reviews for his caricature of his Vito Corleone role as Carmine Sabatini in 1990's The Freshman. In his original review, Roger Ebert wrote, "There have been a lot of movies where stars have repeated the triumphs of their parts—but has any star ever done it more triumphantly than Marlon Brando does in The Freshman?" Variety also praised Brando's performance as Sabatini and noted, "Marlon Brando's sublime comedy performance elevates The Freshman from screwball comedy to a quirky niche in film history." Brando also starred alongside his friend Johnny Depp in the box office hit Don Juan DeMarco (1995) and in Depp's controversial The Brave (1997), which was never released in the United States. Later performances, such as his appearance in Christopher Columbus: The Discovery (1992), for which he was nominated for a Raspberry as "Worst Supporting Actor", The Island of Dr. Moreau (1996), for which he won the "Worst Supporting Actor" Raspberry, and his barely recognizable appearance in Free Money (1998), resulted in some of the worst reviews of his career. The Island of Dr. Moreau screenwriter Ron Hutchinson would later say in his memoir, Clinging to the Iceberg: Writing for a Living on the Stage and in Hollywood (2017), that Brando sabotaged the film's production by feuding and refusing to cooperate with his colleagues and the film crew. Unlike its immediate predecessors, Brando's last completed film, The Score (2001), was received generally positively. In the film, he starred opposite Robert De Niro, playing a fence. After Brando's death, the novel Fan-Tan was released. Brando conceived the novel with director Donald Cammell in 1979, but it was not released until 2005. Final years and death Brando's notoriety, his troubled family life, and his obesity attracted more attention than his late acting career. He gained a great deal of weight in the 1970s; by the early-to-mid-1990s he was severely overweight and suffered from Type 2 diabetes.
He had a history of weight fluctuation throughout his career that, by and large, he attributed to his years of stress-related overeating followed by compensatory dieting. He also earned a reputation for being difficult on the set, often unwilling or unable to memorize his lines and less interested in taking direction than in confronting the film director with odd demands. He also dabbled in invention in his last years. Between June 2002 and November 2004, he had several patents issued in his name from the U.S. Patent and Trademark Office, all of which involve a method of tensioning drumheads. In 2004, Brando recorded voice tracks for the character Mrs. Sour in the unreleased animated film Big Bug Man. This was his last role and his only role as a female character. A longtime close friend of entertainer Michael Jackson, Brando paid regular visits to his Neverland Ranch, resting there for weeks at a time. Brando also participated in the singer's two-day solo career 30th-anniversary celebration concerts in 2001, and starred in his 13-minute-long music video "You Rock My World" in the same year. The actor's son, Miko, was Jackson's bodyguard and assistant for several years, and was a friend of the singer. "The last time my father left his house to go anywhere, to spend any kind of time, it was with Michael Jackson", Miko stated. "He loved it ... He had a 24-hour chef, 24-hour security, 24-hour help, 24-hour kitchen, 24-hour maid service. Just carte blanche." "Michael was instrumental in helping my father through the last few years of his life. For that I will always be indebted to him. Dad had a hard time breathing in his final days, and he was on oxygen much of the time. He loved the outdoors, so Michael would invite him over to Neverland. Dad could name all the trees there, and the flowers, but being on oxygen it was hard for him to get around and see them all, it's such a big place.
So Michael got Dad a golf cart with a portable oxygen tank so he could go around and enjoy Neverland. They'd just drive around—Michael Jackson, Marlon Brando, with an oxygen tank in a golf cart." In April 2001, Brando was hospitalized with pneumonia. In 2004, Brando signed with Tunisian film director Ridha Behi and began preproduction on a project to be titled Brando and Brando. Up to a week before his death, he was working on the script in anticipation of a July/August 2004 start date. Production was suspended in July 2004 following Brando's death, at which time Behi stated that he would continue the film as an homage to Brando, with a new title of Citizen Brando. On July 1, 2004, Brando died of respiratory failure from pulmonary fibrosis with congestive heart failure at the UCLA Medical Center. The cause of death was initially withheld, with his lawyer citing privacy concerns. He also suffered from diabetes and liver cancer. Shortly before his death and despite needing an oxygen mask to breathe, he recorded his voice to appear in The Godfather: The Game, once again as Don Vito Corleone. However, Brando recorded only one line due to his health, and an impersonator was hired to finish his lines. His single recorded line was included within the final game as a tribute to the actor. Some additional lines from his character were directly lifted from the film. Karl Malden—Brando's co-star in three films, A Streetcar Named Desire, On the Waterfront, and One-Eyed Jacks—spoke in a documentary accompanying the DVD of A Streetcar Named Desire about a phone call he received from Brando shortly before Brando's death. A distressed Brando told Malden he kept falling over. Malden wanted to come over, but Brando put him off, telling him there was no point. Three weeks later, Brando was dead. Shortly before his death, he had apparently refused permission for tubes carrying oxygen to be inserted into his lungs, which, he was told, was the only way to prolong his life. 
Brando was cremated, and his ashes were put in with those of his good friend Wally Cox and another longtime friend, Sam Gilman. They were then scattered partly in Tahiti and partly in Death Valley. In 2007, Brando: The Documentary, a 165-minute documentary about Brando produced for Turner Classic Movies by Mike Medavoy (the executor of Brando's will), was released. Personal life Brando was known for his tumultuous personal life and his large number of partners and children. He was the father of at least 11 children, three of whom were adopted. In 1976, he told a French journalist, "Homosexuality is so much in fashion, it no longer makes news. Like a large number of men, I, too, have had homosexual experiences, and I am not ashamed. I have never paid much attention to what people think about me. But if there is someone who is convinced that Jack Nicholson and I are lovers, may they continue to do so. I find it amusing." In Songs My Mother Taught Me, Brando wrote that he met Marilyn Monroe at a party where she played piano, unnoticed by anybody else there, that they had an affair and maintained an intermittent relationship for many years, and that he received a telephone call from her several days before she died. He also claimed numerous other romances, although he did not discuss his marriages, his wives, or his children in his autobiography. He met nisei actress and dancer Reiko Sato in the early 1950s. Though their relationship cooled, they remained friends for the rest of Sato's life, with her dividing her time between Los Angeles and Tetiaroa in her later years. In 1954, Dorothy Kilgallen reported they were an item. Brando was smitten with the Mexican actress Katy Jurado after seeing her in High Noon. They met when Brando was filming Viva Zapata! in Mexico. Brando told Joseph L. Mankiewicz that he was attracted to "her enigmatic eyes, black as hell, pointing at you like fiery arrows".
Their first date became the beginning of an extended affair that lasted many years and peaked at the time they worked together on One-Eyed Jacks (1960), a film directed by Brando. Brando met actress Rita Moreno in 1954, and they began a love affair. Moreno later revealed in her memoir that when she became pregnant by Brando he arranged for an abortion. After the abortion was botched and Brando fell in love with Tarita Teriipaia, Moreno attempted suicide by overdosing on Brando's sleeping pills. Years after they broke up, Moreno played his love interest in the film The Night of the Following Day. Brando married actress Anna Kashfi in 1957. Kashfi was born in Calcutta and moved to Wales from India in 1947. She was the daughter of a Welsh steel worker of Irish descent, William O'Callaghan, who had been a superintendent on the Indian State Railways, and his Welsh wife Phoebe. However, in her book Brando for Breakfast, Kashfi claimed that she was half Indian and that O'Callaghan was her stepfather. She claimed that her biological father was Indian and that she was the result of an "unregistered alliance" between her parents. Brando and Kashfi had a son, Christian Brando, on May 11, 1958; they divorced in 1959. In 1960, Brando married Movita Castaneda, a Mexican-American actress; the marriage was annulled in 1968 after it was discovered that her previous marriage was still in effect. Castaneda had appeared in the first Mutiny on the Bounty film in 1935, some 27 years before the 1962 remake with Brando as Fletcher Christian. They had two children together: Miko Castaneda Brando (born 1961) and Rebecca Brando (born 1966). French actress Tarita Teriipaia, who played Brando's love interest in Mutiny on the Bounty, became his third wife on August 10, 1962. She was 20 years old, 18 years younger than Brando, who was reportedly delighted by her naïveté. Because Teriipaia was a native French speaker, Brando became fluent in the language and gave numerous interviews in French.
Brando and Teriipaia had two children together: Simon Teihotu Brando (born 1963) and Tarita Cheyenne Brando (1970–1995). Brando also adopted Teriipaia's daughter, Maimiti Brando (born 1977) and niece, Raiatua Brando (born 1982). Brando and Teriipaia divorced in July 1972. After Brando's death, the daughter of actress Cynthia Lynn claimed that Brando had had a short-lived affair with her mother, who appeared with Brando in Bedtime Story, and that this affair resulted in her birth in 1964. Throughout the late 1960s and into the early 1980s, he had a tempestuous, long-term relationship with actress Jill Banner. Brando had a long-term relationship with his housekeeper Maria Cristina Ruiz, with whom he had three children: Ninna Priscilla Brando (born May 13, 1989), Myles Jonathan Brando (born January 16, 1992), and Timothy Gahan Brando (born January 6, 1994). Brando also adopted Petra Brando-Corval (born 1972), the daughter of his assistant Caroline Barrett and novelist James Clavell. Brando's close friendship with Wally Cox was the subject of rumors. Brando told a journalist: "If Wally had been a woman, I would have married him and we would have lived happily ever after." Two of Cox's wives, however, dismissed the suggestion that the love was more than platonic. Brando's grandson Tuki Brando (born 1990), son of Cheyenne Brando, is a fashion model. His numerous grandchildren also include Prudence Brando and Shane Brando, children of Miko C. Brando; the children of Rebecca Brando; and the three children of Teihotu Brando among others. Stephen Blackehart has been reported to be the son of Brando, but Blackehart disputes this claim. In 2018, Quincy Jones and Jennifer Lee claimed that Brando had had a sexual relationship with comedian and Superman III actor Richard Pryor. Pryor's daughter Rain Pryor later disputed the claim. Lifestyle Brando earned a reputation as a 'bad boy' for his public outbursts and antics. 
According to Los Angeles magazine, "Brando was rock and roll before anybody knew what rock and roll was." His behavior during the filming of Mutiny on the Bounty (1962) seemed to bolster his reputation as a difficult star. He was blamed for a change in director and a runaway budget, though he disclaimed responsibility for either. On June 12, 1973, Brando broke paparazzo Ron Galella's jaw. Galella had followed Brando, who was accompanied by talk show host Dick Cavett, after a taping of The Dick Cavett Show in New York City. Brando paid a $40,000 out-of-court settlement and suffered an infected hand as a result of the punch. Galella wore a football helmet the next time he photographed Brando, at a gala benefiting the American Indians Development Association in 1974. The filming of Mutiny on the Bounty affected Brando's life in a profound way, as he fell in love with Tahiti and its people. He bought a 12-island atoll, Tetiaroa, and in 1970 hired an award-winning young Los Angeles architect, Bernard Judge, to build his home and natural village there without despoiling the environment. An environmental laboratory protecting sea birds and turtles was established, and for many years student groups visited. A 1983 hurricane destroyed many of the structures, including his resort. A hotel using Brando's name, The Brando, opened in 2014. Brando was an active ham radio operator, with the call signs KE6PZH and FO5GJ (the latter from his island). He was listed in the Federal Communications Commission (FCC) records as Martin Brandeaux to preserve his privacy. In the A&E Biography episode on Brando, biographer Peter Manso comments, "On the one hand, being a celebrity allowed Marlon to take his revenge on the world that had so deeply hurt him, so deeply scarred him. On the other hand he hated it because he knew it was false and ephemeral."
In the same program another biographer, David Thomson, relates, "Many, many people who worked with him, and came to work with him with the best intentions, went away in despair saying he's a spoiled kid. It has to be done his way or he goes away with some vast story about how he was wronged, he was offended, and I think that fits with the psychological pattern that he was a wronged kid." Politics In 1946, Brando performed in Ben Hecht's Zionist play A Flag is Born. He attended some fundraisers for John F. Kennedy in the 1960 presidential election. In August 1963, he participated in the March on Washington along with fellow celebrities Harry Belafonte, James Garner, Charlton Heston, Burt Lancaster and Sidney Poitier. Along with Paul Newman, Brando also participated in the freedom rides. In the autumn of 1967, Brando visited Helsinki, Finland, for a charity party organized by UNICEF at the Helsinki City Theatre. The gala was televised in thirteen countries. Brando's visit was prompted by the famine he had seen in Bihar, India, and he presented the film he had shot there to the press and invited guests. He spoke in favor of children's rights and development aid in developing countries. In the aftermath of the 1968 assassination of Martin Luther King Jr., Brando made one of the strongest commitments to furthering King's work. Shortly after King's death, he announced that he was bowing out of the lead role in a major film, The Arrangement (1969), which was about to begin production, in order to devote himself to the civil rights movement. "I felt I'd better go find out where it is; what it is to be black in this country; what this rage is all about," Brando said on the late-night ABC-TV talk show The Joey Bishop Show. In A&E's Biography episode on Brando, actor and co-star Martin Sheen states, "I'll never forget the night that Reverend King was shot and I turned on the news and Marlon was walking through Harlem with Mayor Lindsay.
And there were snipers and there was a lot of unrest and he kept walking and talking through those neighborhoods with Mayor Lindsay. It was one of the most incredible acts of courage I ever saw, and it meant a lot and did a lot." Brando's participation in the civil rights movement actually began well before King's death. In the early 1960s, he contributed thousands of dollars to both the Southern Christian Leadership Conference (S.C.L.C.) and to a scholarship fund established for the children of slain Mississippi N.A.A.C.P. leader Medgar Evers. In 1964 Brando was arrested at a "fish-in" held to protest a broken treaty that had promised Native Americans fishing rights in Puget Sound. By this time, Brando was already involved in films that carried messages about human rights: Sayonara, which addressed interracial romance, and The Ugly American, depicting the conduct of U.S. officials abroad and the deleterious effect on the citizens of foreign countries. For a time, he was also donating money to the Black Panther Party and considered himself a friend of founder Bobby Seale. Brando ended his financial support for the group over his perception of its increasing radicalization, specifically a passage in a Panther pamphlet put out by Eldridge Cleaver advocating indiscriminate violence, "for the Revolution." Brando was also a supporter of the American Indian Movement. At the 1973 Academy Awards ceremony, Brando refused to accept the Oscar for his career-reviving performance in The Godfather. Sacheen Littlefeather represented him at the ceremony. She appeared in full Apache attire and stated that owing to the "poor treatment of Native Americans in the film industry", Brando would not accept the award. This occurred while the standoff at Wounded Knee was ongoing. The event grabbed the attention of the US and the world media. This was considered a major event and victory for the movement by its supporters and participants. 
Outside of his film work, Brando appeared before the California Assembly in support of a fair housing law and personally joined picket lines in demonstrations protesting discrimination in housing developments in 1963. He was also an activist against apartheid. In 1964, he supported a boycott of his films in South Africa to prevent them from being shown to segregated audiences. He took part in a 1975 protest rally against American investments in South Africa and for the release of Nelson Mandela. In 1989, Brando also starred in the film A Dry White Season, based upon André Brink's novel of the same name. Comments on Jews and Hollywood In an interview in Playboy magazine in January 1979, Brando said: "You've seen every single race besmirched, but you never saw an image of the kike because the Jews were ever so watchful for that—and rightly so. They never allowed it to be shown on screen. The Jews have done so much for the world that, I suppose, you get extra disappointed because they didn't pay attention to that." Brando made a similar comment on Larry King Live in April 1996. Larry King, who was Jewish, replied: "When you say—when you say something like that, you are playing right in, though, to anti-Semitic people who say the Jews are—" Brando interrupted: "No, no, because I will be the first one who will appraise the Jews honestly and say 'Thank God for the Jews'." Jay Kanter, Brando's agent, producer, and friend, defended him in Daily Variety: "Marlon has spoken to me for hours about his fondness for the Jewish people, and he is a well-known supporter of Israel." Similarly, Louie Kemp, in his article for Jewish Journal, wrote: "You might remember him as Don Vito Corleone, Stanley Kowalski or the eerie Col. Walter E. Kurtz in 'Apocalypse Now', but I remember Marlon Brando as a mensch and a personal friend of the Jewish people when they needed it most." Legacy Brando was one of the most respected actors of the post-war era.
He is listed by the American Film Institute as the fourth-greatest male star whose screen debut occurred in or before 1950 (Brando's debut was in 1950). He earned respect among critics for his memorable performances and charismatic screen presence. He helped popularize 'method acting'. He is regarded as one of the greatest cinema actors of the 20th century. Encyclopedia Britannica describes him as "the most celebrated of the method actors, and his slurred, mumbling delivery marked his rejection of classical dramatic training. His true and passionate performances proved him one of the greatest actors of his generation". It also notes the apparent paradox of his talent: "He is regarded as the most influential actor of his generation, yet his open disdain for the acting profession ... often manifested itself in the form of questionable choices and uninspired performances. Nevertheless, he remains a riveting screen presence with a vast emotional range and an endless array of compulsively watchable idiosyncrasies." Cultural influence Marlon Brando is a cultural icon with enduring popularity. His rise to national attention in the 1950s had a profound effect on American culture. According to film critic Pauline Kael, "Brando represented a reaction against the post-war mania for security. As a protagonist, the Brando of the early fifties had no code, only his instincts. He was a development from the gangster leader and the outlaw. He was antisocial because he knew society was crap; he was a hero to youth because he was strong enough not to take the crap ... Brando represented a contemporary version of the free American ... Brando is still the most exciting American actor on the screen." Sociologist Dr. Suzanne McDonald-Walker states: "Marlon Brando, sporting leather jacket, jeans, and moody glare, became a cultural icon summing up 'the road' in all its maverick glory."
His portrayal of the gang leader Johnny Strabler in The Wild One has become an iconic image, used both as a symbol of rebelliousness and a fashion accessory that includes a Perfecto-style motorcycle jacket, a tilted cap, jeans and sunglasses. Johnny's haircut inspired a craze for sideburns, taken up by James Dean and Elvis Presley, among others. Dean copied Brando's acting style extensively, and Presley used Brando's image as a model for his role in Jailhouse Rock. The "I coulda been a contender" scene from On the Waterfront, according to the author of Brooklyn Boomer, Martin H. Levinson, is "one of the most famous scenes in motion picture history, and the line itself has become part of America's cultural lexicon." An example of the endurance of Brando's popular "Wild One" image was the 2009 release of replicas of the leather jacket worn by Brando's Johnny Strabler character. The jackets were marketed by Triumph, the manufacturer of the Triumph Thunderbird motorcycles featured in The Wild One, and were officially licensed by Brando's estate. Brando was also considered a male sex symbol. Linda Williams writes: "Marlon Brando [was] the quintessential American male sex symbol of the late fifties and early sixties". Brando was an early lesbian icon who, along with James Dean, influenced the butch look and self-image in the 1950s and after. Brando has also been immortalized in music, most notably in the lyrics of "It's Hard to Be a Saint in the City" by Bruce Springsteen, in which one of the opening lines reads "I could walk like Brando right into the sun", and in Neil Young's "Pocahontas", a tribute to his lifetime of support for Native Americans, in which he is depicted sitting by a fire with Neil and Pocahontas.
He was also mentioned in "Vogue" by Madonna, in "Is This What You Wanted" by Leonard Cohen on the album New Skin for the Old Ceremony, in "Eyeless" by Slipknot on their self-titled album, and in the song simply titled "Marlon Brando" from Australian singer Alex Cameron's 2017 album Forced Witness. Bob Dylan's 2020 song "My Own Version of You" references one of his most famous performances in the line, "I'll take the Scarface Pacino and the Godfather Brando / Mix 'em up in a tank and get a robot commando". He is also one of the many faces on the cover of The Beatles' album Sgt. Pepper's Lonely Hearts Club Band, directly above the wax model of Ringo Starr. Brando's films, along with those of James Dean, prompted Honda to launch its "You Meet the Nicest People on a Honda" ads, intended to counter the association motorcycles had acquired with rebels and outlaws. Views on acting In his autobiography Songs My Mother Taught Me, Brando confessed that, while having great
by "magnetic effluvia" moving along the Earth's magnetic field lines. Instruments and classification scales In 1441, King Sejong's son, Prince Munjong of Korea, invented the first standardized rain gauge. These were sent throughout the Joseon dynasty of Korea as an official tool to assess land taxes based upon a farmer's potential harvest. In 1450, Leone Battista Alberti developed a swinging-plate anemometer, the first known anemometer. In 1607, Galileo Galilei constructed a thermoscope. In 1611, Johannes Kepler wrote the first scientific treatise on snow crystals: "Strena Seu de Nive Sexangula (A New Year's Gift of Hexagonal Snow)." In 1643, Evangelista Torricelli invented the mercury barometer. In 1662, Sir Christopher Wren invented the mechanical, self-emptying, tipping-bucket rain gauge. In 1714, Gabriel Fahrenheit created a reliable scale for measuring temperature with a mercury-type thermometer. In 1742, Anders Celsius, a Swedish astronomer, proposed the "centigrade" temperature scale, the predecessor of the current Celsius scale. In 1783, the first hair hygrometer was demonstrated by Horace-Bénédict de Saussure. In 1802–1803, Luke Howard wrote On the Modification of Clouds, in which he assigned cloud types Latin names. In 1806, Francis Beaufort introduced his system for classifying wind speeds. Near the end of the 19th century the first cloud atlases were published, including the International Cloud Atlas, which has remained in print ever since. The April 1960 launch of the first successful weather satellite, TIROS-1, marked the beginning of the age in which weather information became available globally. Atmospheric composition research In 1648, Blaise Pascal rediscovered that atmospheric pressure decreases with height, and deduced that there is a vacuum above the atmosphere. In 1738, Daniel Bernoulli published Hydrodynamics, initiating the kinetic theory of gases and establishing the basic laws for the theory of gases.
In 1761, Joseph Black discovered that ice absorbs heat without changing its temperature when melting. In 1772, Black's student Daniel Rutherford discovered nitrogen, which he called phlogisticated air, and together they developed the phlogiston theory. In 1777, Antoine Lavoisier discovered oxygen and developed an explanation for combustion. In his 1783 essay "Reflexions sur le phlogistique," Lavoisier deprecated the phlogiston theory and proposed a caloric theory. In 1804, Sir John Leslie observed that a matte black surface radiates heat more effectively than a polished surface, suggesting the importance of black-body radiation. In 1808, John Dalton defended caloric theory in A New System of Chemistry and described how it combines with matter, especially gases; he proposed that the heat capacity of gases varies inversely with atomic weight. In 1824, Sadi Carnot analyzed the efficiency of steam engines using caloric theory; he developed the notion of a reversible process and, in postulating that no such thing exists in nature, laid the foundation for the second law of thermodynamics. Research into cyclones and air flow In 1494, Christopher Columbus experienced a tropical cyclone, which led to the first written European account of a hurricane. In 1686, Edmond Halley presented a systematic study of the trade winds and monsoons and identified solar heating as the cause of atmospheric motions. In 1735, George Hadley wrote an idealized explanation of global circulation based on a study of the trade winds. In 1743, when Benjamin Franklin was prevented from seeing a lunar eclipse by a hurricane, he decided that cyclones move in a contrary manner to the winds at their periphery. Understanding of the kinematics of how exactly the rotation of the Earth affects airflow was partial at first. Gaspard-Gustave Coriolis published a paper in 1835 on the energy yield of machines with rotating parts, such as waterwheels.
In 1856, William Ferrel proposed the existence of a circulation cell in the mid-latitudes, with the air within it deflected by the Coriolis force, resulting in the prevailing westerly winds. Late in the 19th century, the motion of air masses along isobars was understood to be the result of the large-scale interaction of the pressure gradient force and the deflecting force. By 1912, this deflecting force was named the Coriolis effect. Just after World War I, a group of meteorologists in Norway led by Vilhelm Bjerknes developed the Norwegian cyclone model that explains the generation, intensification and ultimate decay (the life cycle) of mid-latitude cyclones, and introduced the idea of fronts, that is, sharply defined boundaries between air masses. The group included Carl-Gustaf Rossby (who was the first to explain the large-scale atmospheric flow in terms of fluid dynamics), Tor Bergeron (who first determined how rain forms) and Jacob Bjerknes. Observation networks and weather forecasting In the late 16th century and first half of the 17th century a range of meteorological instruments were invented – the thermometer, barometer, hygrometer, as well as wind and rain gauges. In the 1650s natural philosophers started using these instruments to systematically record weather observations. Scientific academies established weather diaries and organised observational networks. In 1654, Ferdinando II de' Medici established the first weather observing network, which consisted of meteorological stations in Florence, Cutigliano, Vallombrosa, Bologna, Parma, Milan, Innsbruck, Osnabrück, Paris and Warsaw. The collected data were sent to Florence at regular time intervals. In the 1660s Robert Hooke of the Royal Society of London sponsored networks of weather observers. Hippocrates' treatise Airs, Waters, and Places had linked weather to disease. Thus early meteorologists attempted to correlate weather patterns with epidemic outbreaks, and the climate with public health.
During the Age of Enlightenment meteorology tried to rationalise traditional weather lore, including astrological meteorology. But there were also attempts to establish a theoretical understanding of weather phenomena. Edmond Halley and George Hadley tried to explain trade winds. They reasoned that the rising mass of heated equatorial air is replaced by an inflow of cooler air from high latitudes. A flow of warm air at high altitude from equator to poles in turn completed an early picture of circulation. Frustration with the lack of discipline among weather observers, and the poor quality of the instruments, led the early modern nation states to organise large observation networks. Thus by the end of the 18th century, meteorologists had access to large quantities of reliable weather data. In 1832, an electromagnetic telegraph was created by Baron Schilling. The arrival of the electrical telegraph in 1837 afforded, for the first time, a practical method for quickly gathering surface weather observations from a wide area. This data could be used to produce maps of the state of the atmosphere for a region near the Earth's surface and to study how these states evolved through time. To make frequent weather forecasts based on these data required a reliable network of observations, but it was not until 1849 that the Smithsonian Institution began to establish an observation network across the United States under the leadership of Joseph Henry. Similar observation networks were established in Europe at this time. The Reverend William Clement Ley was key to early understanding of cirrus clouds and jet streams. Charles Kenneth Mackinnon Douglas, known as 'CKM' Douglas, read Ley's papers after his death and carried on the early study of weather systems. Nineteenth-century researchers in meteorology were drawn from military or medical backgrounds rather than trained as dedicated scientists.
In 1854, the United Kingdom government appointed Robert FitzRoy to the new office of Meteorological Statist to the Board of Trade with the task of gathering weather observations at sea. FitzRoy's office became the United Kingdom Meteorological Office in 1854, the second oldest national meteorological service in the world (the Central Institution for Meteorology and Geodynamics (ZAMG) in Austria was founded in 1851 and is the oldest weather service in the world). The first daily weather forecasts made by FitzRoy's Office were published in The Times newspaper in 1860. The following year a system was introduced of hoisting storm warning cones at principal ports when a gale was expected. Over the next 50 years, many countries established national meteorological services. The India Meteorological Department (1875) was established to follow tropical cyclones and the monsoon. The Finnish Meteorological Central Office (1881) was formed from part of the Magnetic Observatory of Helsinki University. Japan's Tokyo Meteorological Observatory, the forerunner of the Japan Meteorological Agency, began constructing surface weather maps in 1883. The United States Weather Bureau (1890) was established under the United States Department of Agriculture. The Australian Bureau of Meteorology (1906) was established by a Meteorology Act to unify existing state meteorological services. Numerical weather prediction In 1904, Norwegian scientist Vilhelm Bjerknes first argued in his paper Weather Forecasting as a Problem in Mechanics and Physics that it should be possible to forecast weather from calculations based upon natural laws. It was not until later in the 20th century that advances in the understanding of atmospheric physics led to the foundation of modern numerical weather prediction. In 1922, Lewis Fry Richardson published "Weather Prediction By Numerical Process," after finding notes and derivations he worked on as an ambulance driver in World War I.
He described how small terms in the prognostic fluid dynamics equations that govern atmospheric flow could be neglected, and how a numerical calculation scheme could be devised to allow predictions. Richardson envisioned a large auditorium of thousands of people performing the calculations. However, the sheer number of calculations required was too large to complete without electronic computers, and the size of the grid and time steps used in the calculations led to unrealistic results; numerical analysis later showed that this was due to numerical instability. Starting in the 1950s, numerical forecasts with computers became feasible. The first weather forecasts derived this way used barotropic (single-vertical-level) models, and could successfully predict the large-scale movement of midlatitude Rossby waves, that is, the pattern of atmospheric lows and highs. In 1959, the UK Meteorological Office received its first computer, a Ferranti Mercury. In the 1960s, the chaotic nature of the atmosphere was first observed and mathematically described by Edward Lorenz, founding the field of chaos theory. These advances have led to the current use of ensemble forecasting in most major forecasting centers, to take into account the uncertainty arising from the chaotic nature of the atmosphere. Mathematical models used to predict the long-term weather of the Earth (climate models) have been developed; their resolution today is as coarse as that of the older weather prediction models. These climate models are used to investigate long-term climate shifts, such as what effects might be caused by human emission of greenhouse gases. Meteorologists Meteorologists are scientists who study and work in the field of meteorology. The American Meteorological Society publishes and continually updates an authoritative electronic Meteorology Glossary.
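The sensitive dependence on initial conditions that Lorenz identified, and that motivates ensemble forecasting, can be sketched in a few lines of Python. This is an illustration added here, not part of the text: it integrates Lorenz's 1963 three-variable convection model with a simple forward-Euler step (standard textbook coefficients) and shows that two trajectories starting one part in a million apart end up macroscopically different.

```python
# Sketch of Lorenz's sensitive dependence on initial conditions, using his
# 1963 three-variable convection model with the standard coefficients.
def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    # One forward-Euler step of the Lorenz-63 equations.
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 1.0)
b = (1.0, 1.0, 1.0 + 1e-6)   # perturbed by one part in a million
for _ in range(3000):         # integrate 30 model time units
    a, b = lorenz_step(a), lorenz_step(b)

# The tiny initial difference has grown to the scale of the attractor.
separation = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
print(separation)
```

An ensemble forecast applies exactly this observation in reverse: many model runs from slightly perturbed initial states map out the spread of outcomes that the chaotic dynamics permit.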
Meteorologists work in government agencies, private consulting and research services, industrial enterprises, utilities, radio and television stations, and in education. In the United States, meteorologists held about 10,000 jobs in 2018. Although weather forecasts and warnings are the best known products of meteorologists for the public, weather presenters on radio and television are not necessarily professional meteorologists. They are most often reporters with little formal meteorological training, using unregulated titles such as weather specialist or weatherman. The American Meteorological Society and National Weather Association issue "Seals of Approval" to weather broadcasters who meet certain requirements, but this is not mandatory for being hired by the media. Equipment Each science has its own unique sets of laboratory equipment. In the atmosphere, there are many qualities that can be measured. Rain, which can be observed anywhere and at any time, was one of the first atmospheric qualities measured historically. Two other accurately measured qualities are wind and humidity; neither can be seen, but both can be felt. The devices to measure these three sprang up in the mid-15th century and were, respectively, the rain gauge, the anemometer, and the hygrometer. Many attempts had been made prior to the 15th century to construct adequate equipment to measure the many atmospheric variables. Many were faulty in some way or were simply not reliable. Even Aristotle noted in some of his work the difficulty of measuring the air. Sets of surface measurements are important data to meteorologists. They give a snapshot of a variety of weather conditions at one single location and are usually taken at a weather station, a ship or a weather buoy. The measurements taken at a weather station can include any number of atmospheric observables.
Usually, temperature, pressure, wind measurements, and humidity are the variables that are measured by a thermometer, barometer, anemometer, and hygrometer, respectively. Professional stations may also include air quality sensors (carbon monoxide, carbon dioxide, methane, ozone, dust, and smoke), ceilometer (cloud ceiling), falling precipitation sensor, flood sensor, lightning sensor, microphone (explosions, sonic booms, thunder), pyranometer/pyrheliometer/spectroradiometer (IR/Vis/UV photodiodes), rain gauge/snow gauge, scintillation counter (background radiation, fallout, radon), seismometer (earthquakes and tremors), transmissometer (visibility), and a GPS clock for data logging. Upper-air data are of crucial importance for weather forecasting. The most widely used technique is the launching of radiosondes. Supplementing the radiosondes, a network of aircraft data collection is organized by the World Meteorological Organization. Remote sensing, as used in meteorology, is the concept of collecting data from remote weather events and subsequently producing weather information. The common types of remote sensing are radar, lidar, and satellites (or photogrammetry). Each collects data about the atmosphere from a remote location and, usually, stores the data where the instrument is located. Radar and lidar are not passive; both use EM radiation to illuminate a specific portion of the atmosphere. Weather satellites, along with more general-purpose Earth-observing satellites circling the Earth at various altitudes, have become an indispensable tool for studying a wide range of phenomena, from forest fires to El Niño. Spatial scales The study of the atmosphere can be divided into distinct areas that depend on both time and spatial scales. At one extreme of this scale is climatology. In the timescales of hours to days, meteorology separates into micro-, meso-, and synoptic scale meteorology.
The geospatial size of each of these three scales relates directly to the appropriate timescale. Other subclassifications are used to describe the unique, local, or broad effects within those subclasses. Microscale Microscale meteorology is the study of atmospheric phenomena on a scale of about 1 km or less. Individual thunderstorms, clouds, and local turbulence caused by buildings and other obstacles (such as individual hills) are modeled on this scale. Mesoscale Mesoscale meteorology is the study of atmospheric phenomena that have horizontal scales ranging from 1 km to 1000 km and a vertical scale that starts at the Earth's surface and includes the atmospheric boundary
Mount (computing), the process of making a file system accessible
Mount (Unix), the utility in Unix-like operating systems which mounts file systems
Displays and equipment
Mount, a fixed point for attaching equipment, such as a hardpoint on an airframe
Mount, a hanging scroll for mounting paintings
Mount, to display an item on a heavy backing such as foamcore, e.g.:
To attach a picture or a painting to a support, followed by framing it
To pin a biological specimen on a heavy backing in a stretched stable position for ease of dissection or display
To prepare dead animals for display in taxidermy
Lens mount, an interface used to fix a lens to a camera
Mounting, placing a cover slip on a specimen on a microscope slide
Telescope mount, a device used to support a telescope
Weapon mount, equipment used to secure an armament
Picture mount
Sports
Mount (grappling), a grappling position
Mount, to board an apparatus used for gymnastics, such as a balance beam
Other uses
Mount, in copulation, the union of the sex organs in mating
Mount, a riding animal
Mount, or
half-lives of 0.45 and 0.44 seconds respectively. The remaining five isotopes have half-lives between 1 and 20 milliseconds. The isotope 277Mt, created as the final decay product of 293Ts for the first time in 2012, was observed to undergo spontaneous fission with a half-life of 5 milliseconds. Preliminary data analysis considered the possibility of this fission event instead originating from 277Hs, for it also has a half-life of a few milliseconds, and could be populated following undetected electron capture somewhere along the decay chain. This possibility was later deemed very unlikely based on observed decay energies of 281Ds and 281Rg and the short half-life of 277Mt, although there is still some uncertainty of the assignment. Regardless, the rapid fission of 277Mt and 277Hs is strongly suggestive of a region of instability for superheavy nuclei with N = 168–170. The existence of this region, characterized by a decrease in fission barrier height between the deformed shell closure at N = 162 and the spherical shell closure at N = 184, is consistent with theoretical models. Predicted properties Other than nuclear properties, no properties of meitnerium or its compounds have been measured; this is due to its extremely limited and expensive production and the fact that meitnerium and its parents decay very quickly. Properties of meitnerium metal remain unknown and only predictions are available. Chemical Meitnerium is the seventh member of the 6d series of transition metals, and should be much like the platinum group metals. Calculated ionization potentials and atomic and ionic radii are similar to those of its lighter homologue iridium, thus implying that meitnerium's basic properties will resemble those of the other group 9 elements, cobalt, rhodium, and iridium. Prediction of the probable chemical properties of meitnerium has not received much attention recently. Meitnerium is expected to be a noble metal.
The standard electrode potential for the Mt3+/Mt couple is expected to be 0.8 V. Based on the most stable oxidation states of the lighter group 9 elements, the most stable oxidation states of meitnerium are predicted to be the +6, +3, and +1 states, with the +3 state being the most stable in aqueous solutions. In comparison, rhodium and iridium show a maximum oxidation state of +6, while the most stable states are +4 and +3 for iridium, and +3 for rhodium. The oxidation state +9, represented only by iridium in [IrO4]+, might be possible for its congener meitnerium in the nonafluoride (MtF9) and the [MtO4]+ cation, although [IrO4]+ is expected to be more stable than these meitnerium compounds. The tetrahalides of meitnerium have also been predicted to have similar stabilities to those of iridium, thus also allowing a stable +4 state. It is further expected that the maximum oxidation states of elements from bohrium (element 107) to darmstadtium (element 110) may be stable in the gas phase but not in aqueous solution. Physical and atomic Meitnerium is expected to be a solid under normal conditions and to assume a face-centered cubic crystal structure, similarly to its lighter congener iridium. It should be a very heavy metal with a density of around 27–28 g/cm3, which would be among the highest of any of the 118 known elements. Meitnerium is also predicted to be paramagnetic. Theoreticians have predicted the covalent radius of meitnerium to be 6 to 10 pm larger than that of iridium. The atomic radius of meitnerium is expected to be around 128 pm. Experimental chemistry Meitnerium is the first element on the periodic table whose chemistry has not yet been investigated. Unambiguous determination of the chemical characteristics of meitnerium has yet to be established due to the short half-lives of meitnerium isotopes and the limited number of likely volatile compounds that could be studied on a very small scale.
One of the few meitnerium compounds that are likely to be sufficiently volatile is meitnerium hexafluoride (), as its lighter homologue iridium hexafluoride () is volatile above 60 °C and therefore the analogous compound of meitnerium might also be sufficiently volatile; a volatile octafluoride () might also be possible. For chemical studies to be carried out on a transactinide, at least four atoms must be produced, the half-life of the isotope used must be at least 1 second, and the rate of production must be at least one atom per week. Even though the half-life of 278Mt, the most stable confirmed meitnerium isotope, is 4.5 seconds, long enough to perform chemical studies, another obstacle is the need to increase the rate of production of meitnerium isotopes and allow experiments to carry on for weeks or months so that statistically significant results can be obtained. Separation and detection must be carried out continuously to separate out the meitnerium isotopes and have automated systems experiment on
of meitnerium), IUPAC published recommendations according to which the element was to be called unnilennium (with the corresponding symbol of Une), a systematic element name as a placeholder, until the element was discovered (and the discovery then confirmed) and a permanent name was decided on. Although widely used in the chemical community on all levels, from chemistry classrooms to advanced textbooks, the recommendations were mostly ignored among scientists in the field, who either called it "element 109", with the symbol of E109, (109) or even simply 109, or used the proposed name "meitnerium". The naming of meitnerium was discussed in the element naming controversy regarding the names of elements 104 to 109, but meitnerium was the only proposal and thus was never disputed. The name meitnerium (Mt) was suggested by the GSI team in September 1992 in honor of the Austrian physicist Lise Meitner, a co-discoverer of protactinium (with Otto Hahn), and one of the discoverers of nuclear fission. In 1994 the name was recommended by IUPAC, and was officially adopted in 1997. It is thus the only element named specifically after a non-mythological woman (curium being named for both Pierre and Marie Curie). Isotopes Meitnerium has no stable or naturally occurring isotopes. Several radioactive isotopes have been synthesized in the laboratory, either by fusing two atoms or by observing the decay of heavier elements. Eight different isotopes of meitnerium have been reported with atomic masses 266, 268, 270, and 274–278, two of which, meitnerium-268 and meitnerium-270, have known but unconfirmed metastable states. A ninth isotope with atomic mass 282 is unconfirmed. Most of these decay predominantly through alpha decay, although some undergo spontaneous fission. Stability and half-lives All meitnerium isotopes are extremely unstable and radioactive; in general, heavier isotopes are more stable than the lighter. 
The most stable known meitnerium isotope, 278Mt, is also the heaviest known; it has a half-life of 4.5 seconds. The unconfirmed 282Mt is even heavier and appears to have a longer half-life of 67 seconds. The isotopes 276Mt and 274Mt have half-lives of 0.45 and 0.44 seconds respectively.
of using base 1024 originated as technical jargon for the byte multiples that needed to be expressed by the powers of 2 but lacked a convenient name. As 1024 (2¹⁰) approximates 1000 (10³), roughly corresponding to the SI prefix kilo-, it was a convenient term to denote the binary multiple. In 1998 the International Electrotechnical Commission (IEC) proposed standards for binary prefixes requiring the use of megabyte to strictly denote 1000² bytes and mebibyte to denote 1024² bytes. By the end of 2009, the IEC Standard had been adopted by the IEEE, EU, ISO and NIST. Nevertheless, the term megabyte continues to be widely used with different meanings:
Base 10: 1 MB = 1,000,000 bytes (= 1000² B = 10⁶ B) is the definition recommended for the International System of Units (SI) and by the IEC. This definition is used in networking contexts and most storage media, particularly hard drives, flash-based storage, and DVDs, and is also consistent with the other uses of the SI prefix in computing, such as CPU clock speeds or measures of performance. The Mac OS X 10.6 file manager is a notable example of this usage in software. Since Snow Leopard, file sizes are reported in decimal units. In this convention, one thousand megabytes (1000 MB) is equal to one gigabyte (1 GB), where 1 GB is one billion bytes.
Base 2: 1 MB = 1,048,576 bytes (= 1024² B = 2²⁰ B) is the definition used by Microsoft Windows in reference to computer memory, such as RAM. This definition is synonymous with the unambiguous binary prefix mebibyte. In this convention, one thousand and twenty-four megabytes (1024 MB) is equal to one gigabyte (1 GB), where 1 GB is 1024³ bytes (i.e., 1 GiB).
Mixed: 1 MB = 1,024,000 bytes (= 1000×1024 B) is the definition used to describe the formatted capacity of the 1.44 MB HD floppy disk, which actually has a capacity of 1,474,560 bytes.
Randomly addressable semiconductor memory doubles in size for each address lane added to an integrated circuit package, which favors counts that are powers of two. The capacity of a disk drive is the product of the sector size, number of sectors per track, number of tracks per side, and the number of disk platters in
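The three conventions can be made concrete with a short Python sketch; the numbers follow directly from the definitions above, and show why the "1.44 MB" floppy is neither 1.44 decimal megabytes nor 1.44 mebibytes.

```python
# The three competing "megabyte" definitions.
MB_SI = 1000 ** 2        # SI / IEC decimal megabyte: 1,000,000 bytes
MIB = 1024 ** 2          # binary megabyte (mebibyte): 1,048,576 bytes
MB_MIXED = 1000 * 1024   # mixed "floppy" megabyte: 1,024,000 bytes

# The "1.44 MB" HD floppy holds 1440 mixed-definition kilobytes of 1024 B.
floppy = 1440 * 1024     # 1,474,560 bytes = 1.44 mixed megabytes
print(floppy / MB_SI)    # about 1.47 decimal megabytes
print(floppy / MIB)      # about 1.41 mebibytes
```

The same ambiguity scales up: a "500 GB" hard drive (decimal) shows up as roughly 465 "GB" (really GiB) in an operating system that reports binary units.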
of −OH and −H groups at the asymmetric or chiral carbon atoms (this does not apply to those carbons having the carbonyl functional group). Configuration of monosaccharides Like many chiral molecules, the two stereoisomers of glyceraldehyde will gradually rotate the polarization direction of linearly polarized light as it passes through it, even in solution. The two stereoisomers are identified with the prefixes D- and L-, according to the sense of rotation: D-glyceraldehyde is dextrorotatory (rotates the polarization axis clockwise), while L-glyceraldehyde is levorotatory (rotates it counterclockwise). The D- and L- prefixes are also used with other monosaccharides, to distinguish two particular stereoisomers that are mirror-images of each other. For this purpose, one considers the chiral carbon that is furthest removed from the C=O group. Its four bonds must connect to −H, −OH, −C(OH)H, and the rest of the molecule. If the molecule can be rotated in space so that the directions of those four groups match those of the analog groups in D-glyceraldehyde's C2, then the isomer receives the D- prefix. Otherwise, it receives the L- prefix. In the Fischer projection, the D- and L- prefixes specify the configuration at the carbon atom that is second from bottom: D- if the hydroxyl is on the right side, and L- if it is on the left side. Note that the D- and L- prefixes do not indicate the direction of rotation of polarized light, which is a combined effect of the arrangement at all chiral centers. However, the two enantiomers will always rotate the light in opposite directions, by the same amount. See also the D/L system. Cyclisation of monosaccharides A monosaccharide often switches from the acyclic (open-chain) form to a cyclic form, through a nucleophilic addition reaction between the carbonyl group and one of the hydroxyls of the same molecule. The reaction creates a ring of carbon atoms closed by one bridging oxygen atom.
The resulting molecule has a hemiacetal or hemiketal group, depending on whether the linear form was an aldose or a ketose. The reaction is easily reversed, yielding the original open-chain form. In these cyclic forms, the ring usually has five or six atoms. These forms are called furanoses and pyranoses, respectively, by analogy with furan and pyran, the simplest compounds with the same carbon-oxygen ring (although they lack the double bonds of these two molecules). For example, the aldohexose glucose may form a hemiacetal linkage between the aldehyde group on carbon 1 and the hydroxyl on carbon 4, yielding a molecule with a 5-membered ring, called glucofuranose. The same reaction can take place between carbons 1 and 5 to form a molecule with a 6-membered ring, called glucopyranose. Cyclic forms with a seven-atom ring (the same as in oxepane), rarely encountered, are called septanoses. For many monosaccharides (including glucose), the cyclic forms predominate, in the solid state and in solutions, and therefore the same name commonly is
H(CHOH)(C=O)(CHOH)3H is pentulose, and H(CHOH)2(C=O)(CHOH)2H is pent-3-ulose. Open-chain stereoisomers Two monosaccharides with equivalent molecular graphs (same chain length and same carbonyl position) may still be distinct stereoisomers, whose molecules differ in spatial orientation. This happens only if the molecule contains a stereogenic center, specifically a carbon atom that is chiral (connected to four distinct molecular sub-structures). Those four bonds can have either of two configurations in space, distinguished by their handedness. In a simple open-chain monosaccharide, every carbon is chiral except the first and the last atoms of the chain, and (in ketoses) the carbon with the keto group. For example, the ketotriose H(CHOH)(C=O)(CHOH)H (glycerone, dihydroxyacetone) has no stereogenic center, and therefore exists as a single stereoisomer. The other triose, the aldose H(C=O)(CHOH)2H (glyceraldehyde), has one chiral carbon, the central one, number 2, which is bonded to the groups −H, −OH, −C(OH)H2, and −(C=O)H. Therefore, it exists as two stereoisomers whose molecules are mirror images of each other (like a left and a right glove). Monosaccharides with four or more carbons may contain multiple chiral carbons, so they typically have more than two stereoisomers. The number of distinct stereoisomers with the same diagram is bounded by 2^c, where c is the total number of chiral carbons. The Fischer projection is a systematic way of drawing the skeletal formula of an acyclic monosaccharide so that the handedness of each chiral carbon is well specified. Each stereoisomer of a simple open-chain monosaccharide can be identified by the positions (right or left) in the Fischer diagram of the chiral hydroxyls (the hydroxyls attached to the chiral carbons). Most stereoisomers are themselves chiral (distinct from their mirror images). In the Fischer projection, two mirror-image isomers differ by having the positions of all chiral hydroxyls reversed right-to-left.
Mirror-image isomers are chemically identical in non-chiral environments, but usually have very different biochemical properties and occurrences in nature. While most stereoisomers can be arranged in pairs of mirror-image forms, there are some non-chiral stereoisomers that are identical to their mirror images, in spite of having chiral centers. This happens whenever the molecular graph is symmetrical, as in the 3-ketopentoses H(CHOH)2(CO)(CHOH)2H, where the two halves are mirror images of each other. In that case, mirroring is equivalent to a half-turn rotation. For this reason, there are only three distinct 3-ketopentose stereoisomers, even though the molecule has two chiral carbons. Distinct stereoisomers that are not mirror-images of each other usually have different chemical properties, even in non-chiral environments. Therefore, each mirror pair and each non-chiral stereoisomer may be given a specific monosaccharide name. For example, there are 16 distinct aldohexose stereoisomers, but the name "glucose" means a specific pair of mirror-image aldohexoses. In the Fischer projection, one of the two glucose isomers has the hydroxyl at left on C3, and at right on C4 and C5; while the other isomer has the reversed pattern. These specific monosaccharide names have conventional three-letter abbreviations, like "Glc" for glucose and "Thr" for threose. Generally, a monosaccharide with n
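The counting argument for the 3-ketopentoses can be checked with a short Python sketch (an illustration added here, not part of the text). Each Fischer diagram is encoded by the side, right ('R') or left ('L'), of the hydroxyl on the two chiral carbons; a half-turn of the diagram reverses the carbon order and swaps sides but depicts the same molecule, while a mirror image swaps sides only.

```python
from itertools import product

def flip(d):
    # Mirror image in the Fischer projection: every hydroxyl swaps sides.
    return tuple('L' if s == 'R' else 'R' for s in d)

def rotate(d):
    # Half-turn of the diagram: reverse carbon order and swap sides.
    # Because the 3-ketopentose graph is symmetric about the keto carbon,
    # this yields another diagram of the same molecule.
    return tuple(reversed(flip(d)))

def canonical(d):
    # Pick one representative per molecule (diagram vs. its half-turn).
    return min(d, rotate(d))

diagrams = list(product('RL', repeat=2))   # 2^c = 4 raw Fischer diagrams
molecules = {canonical(d) for d in diagrams}
print(len(molecules))                      # 3 distinct stereoisomers

# The meso form is the stereoisomer that equals its own mirror image.
meso = [m for m in molecules if canonical(flip(m)) == m]
print(len(meso))                           # 1 achiral (meso) form
```

The four raw diagrams collapse to three molecules: one achiral meso form (both hydroxyls on the same side, where mirroring equals the half-turn) plus a pair of mirror-image enantiomers.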
twelve created in the 18th century by French astronomer Nicolas-Louis de Lacaille and one of several depicting scientific instruments. The name is a Latinised form of the Greek word for microscope. Its stars are faint and hardly visible from most of the non-tropical Northern Hemisphere. The constellation's brightest star is Gamma Microscopii of apparent magnitude 4.68, a yellow giant 2.5 times the Sun's mass located 223 ± 8 light-years distant. It passed within 1.14 and 3.45 light-years of the Sun some 3.9 million years ago, possibly disturbing the outer Solar System. Two star systems—WASP-7 and HD 205739—have been determined to have planets, while two others—the young red dwarf star AU Microscopii and the sunlike HD 202628—have debris disks. AU Microscopii and the binary red dwarf system AT Microscopii are probably a wide triple system and members of the Beta Pictoris moving group. Nicknamed "Speedy Mic", BO Microscopii is a star with an extremely fast rotation period of 9 hours, 7 minutes. Characteristics Microscopium is a small constellation bordered by Capricornus to the north, Piscis Austrinus and Grus to the east, Sagittarius to the west, and Indus to the south, touching on Telescopium to the southwest. The recommended three-letter abbreviation for the constellation, as adopted by the International Astronomical Union in 1922, is "Mic". The official constellation boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are defined by a polygon of four segments (illustrated in infobox). In the equatorial coordinate system, the right ascension coordinates of these borders lie between and , while the declination coordinates are between −27.45° and −45.09°. The whole constellation is visible to observers south of latitude 45°N. Given that its brightest stars are of fifth magnitude, the constellation is invisible to the naked eye in areas with light polluted skies. 
Features Stars French astronomer Nicolas-Louis de Lacaille charted and designated ten stars with the Bayer designations Alpha through to Iota in 1756. A star in neighbouring Indus that Lacaille had labelled Nu Indi turned out to be in Microscopium, so Gould renamed it Nu Microscopii. Francis Baily considered Gamma and Epsilon Microscopii to belong to the neighbouring constellation Piscis
of the Sun. Nicknamed "Speedy Mic", it has a rotation period of 9 hours 7 minutes. An active star, it has prominent stellar flares that are on average 100 times stronger than those of the Sun and emit energy mainly in the X-ray and ultraviolet bands of the spectrum. It lies 218 ± 4 light-years away from the Sun. AT Microscopii is a binary star system, both members of which are flare star red dwarfs. The system lies close to and may form a very wide triple system with AU Microscopii, a young star which appears to be a planetary system in the making with a debris disk. The three stars are candidate members of the Beta Pictoris moving group, one of the nearest associations of stars that share a common motion through space. The Astronomical Society of Southern Africa in 2003 reported that observations of four of the Mira variables in Microscopium were very urgently needed as data on their light curves was incomplete. Two of them—R and S Microscopii—are challenging stars for novice amateur astronomers, and the other two, U and RY Microscopii, are more difficult still. Another red giant, T Microscopii, is a semiregular variable that ranges between magnitudes 7.7 and 9.6 over 344 days. Of apparent magnitude 11, DD Microscopii is a symbiotic star system composed of an orange giant of spectral type K2III and a white dwarf in close orbit, with the smaller star ionizing the stellar wind of the larger star. The system has a low metallicity. Combined with its high galactic latitude, this indicates that the star system has its origin in the galactic halo of the Milky Way. HD 205739 is a yellow-white main sequence star of spectral type F7V that is around 1.22 times as massive and 2.3 times as luminous as the Sun. It has a Jupiter-sized planet with an orbital period of 280 days that was discovered by the radial velocity method. WASP-7 is a star of spectral type F5V with an apparent magnitude of 9.54, about 1.28 times as massive as the Sun.
Its hot Jupiter planet—WASP-7b—was discovered by the transit method and found to orbit the star every 4.95 days. HD 202628 is a sunlike star of spectral type G2V with a debris disk that ranges from 158 to 220 AU distant. Its inner edge is sharply defined, indicating a probable planet orbiting between 86 and 158 AU from the star. Deep sky objects Describing Microscopium as "totally unremarkable", astronomer Patrick Moore concluded there was nothing of interest for amateur observers. NGC 6925 is a barred spiral galaxy of apparent magnitude 11.3 which is lens-shaped, as it lies almost edge-on to observers on Earth, 3.7 degrees west-northwest of Alpha Microscopii. SN 2011ei, a Type II Supernova in NGC 6925, was discovered by Stu Parker in New Zealand in July 2011. NGC 6923 lies nearby and is a magnitude fainter still. The Microscopium Void is a roughly rectangular region of relatively empty space, bounded by incomplete sheets of galaxies from other voids. The Microscopium Supercluster is an overdensity of galaxy clusters that was first noticed in the early 1990s. The component Abell clusters 3695 and 3696 are likely to be gravitationally bound, while the relations of Abell clusters 3693 and 3705 in the same field are unclear. Meteor showers The Microscopids are a minor meteor shower that appear from June to mid-July. History The stars that comprise Microscopium are in a region previously considered the hind feet of Sagittarius, a neighbouring constellation. John Ellard Gore wrote that al-Sufi seems to have reported that Ptolemy had seen the stars but he (Al Sufi) did not pinpoint their positions. Microscopium itself was introduced in 1751–52 by Lacaille with the French name le Microscope, after he had observed and catalogued 10,000 southern stars during a two-year stay at the Cape of Good Hope. He devised fourteen new constellations in uncharted regions of the Southern Celestial Hemisphere not visible from Europe.
All but one honoured instruments that symbolised the Age of Enlightenment. Commemorating the compound microscope, the Microscope's name had been Latinised by Lacaille to Microscopium by 1763. See also Microscopium (Chinese astronomy)
Group. The group can be described as a binary group; the member galaxies are mostly concentrated around either IC 342 or Maffei 1, both of which are the brightest galaxies within the group. The group is part of the Virgo Supercluster. Members The table below lists galaxies that have been identified as associated with the IC342/Maffei 1 Group by I. D. Karachentsev. Note that Karachentsev divides this group into two subgroups centered around IC 342 and Maffei 1. Additionally, KKH 37 is listed as possibly being a member of the IC 342 Subgroup, and KKH 6 is listed as possibly being a member of the Maffei 1 Subgroup. Foreground dust obscuration As seen from Earth, the group lies near the plane of the Milky Way (a region sometimes called the Zone of Avoidance). Consequently, the light from many of the galaxies is severely affected by dust obscuration within the Milky Way. This complicates observational studies of the group, as uncertainties in the dust obscuration also affect measurements of the galaxies' luminosities and distances as well as other related quantities. Moreover, the galaxies within the group have historically been difficult to identify. Many galaxies have only been discovered using late 20th century astronomical instrumentation. For example, while many fainter, more distant galaxies, such as the galaxies in the New General Catalogue, were already identified visually by the end of the nineteenth century, Maffei 1 and Maffei 2 were only discovered in 1968 using infrared photographic images of the region.
Furthermore, it is difficult to determine whether some objects near IC 342 or Maffei 1 are galaxies associated with the IC 342/Maffei Group or diffuse foreground objects within the Milky Way that merely look like galaxies. For example, the objects MB 2 and Camelopardalis C were once thought to be dwarf galaxies in the IC 342/Maffei Group but are now known to be objects within the Milky Way. Group formation and possible interactions with the Local Group Since the IC 342/Maffei Group and the Local Group are located physically close to each other, the two groups may have influenced each other's evolution during the early stages of galaxy formation. An analysis of the velocities and distances to the IC 342/Maffei Group as measured by M. J. Valtonen and collaborators suggested that IC 342 and Maffei 1 were moving faster than could be accounted for by the expansion of the universe. They therefore suggested that IC 342 and Maffei 1 were ejected from the Local Group after a violent gravitational interaction with the Andromeda Galaxy during the early stages of the formation of the two groups. However, this interpretation is dependent on the distances measured to the galaxies in the group, which in turn are dependent on accurately measuring the degree to which interstellar dust in the Milky Way obscures the group. More recent observations have demonstrated that the dust obscuration may have been previously overestimated, so the distances may have been underestimated. If these new distance measurements are correct, then the galaxies in the IC 342/Maffei Group appear to be moving at the rate expected from the expansion of the universe, and the scenario of a collision between the IC 342/Maffei Group and
the names used by Karachentsev. NGC, IC, UGC, and PGC numbers have been used in many cases to allow for easier referencing. Interactions within the group Messier 81, Messier 82, and NGC 3077 are all strongly interacting with each other. Observations of the 21-centimeter hydrogen line indicate how the galaxies are connected. The gravitational interactions have stripped some hydrogen gas away from all three galaxies, leading to the formation of filamentary gas structures within the group. Bridges of neutral hydrogen have been shown to connect M81 with M82 and NGC 3077. Moreover, the interactions have also caused some interstellar gas to fall into the centers of Messier 82 and NGC 3077, which has led to strong starburst activity (or the formation of many stars) within the centers of these two galaxies. Computer simulations of tidal interactions have been used to show how the current structure of
estimated to have a total mass of (1.03 ± 0.17). The M81 Group, the Local Group, and other nearby groups all lie within the Virgo Supercluster (i.e. the Local Supercluster).
surname Mensa (constellation), a constellation in the southern sky Mensa (ecclesiastical), a portion of church property that is appropriated to defray the expenses of
languages such as Old Norse and Old English was radically different, but was still based on stress patterns. Some classical languages, in contrast, used a different scheme known as quantitative metre, where patterns were based on syllable weight rather than stress. In the dactylic hexameters of Classical Latin and Classical Greek, for example, each of the six feet making up the line was either a dactyl (long-short-short) or a spondee (long-long): a "long syllable" was literally one that took longer to pronounce than a short syllable: specifically, a syllable containing either a long vowel or diphthong, or a short vowel followed by two consonants. The stress pattern of the words made no difference to the metre. A number of other ancient languages also used quantitative metre, such as Sanskrit, Persian and Classical Arabic (but not Biblical Hebrew). Finally, non-stressed languages that have little or no differentiation of syllable length, such as French or Chinese, base their verses on the number of syllables only. The most common form in French is the alexandrin, with twelve syllables a verse, and in classical Chinese five characters, and thus five syllables. But since each Chinese character is pronounced using one syllable in a certain tone, classical Chinese poetry also had more strictly defined rules, such as thematic parallelism or tonal antithesis between lines. Feet In many Western classical poetic traditions, the metre of a verse can be described as a sequence of feet, each foot being a specific sequence of syllable types – such as relatively unstressed/stressed (the norm for English poetry) or long/short (as in most classical Latin and Greek poetry).
Iambic pentameter, a common metre in English poetry, is based on a sequence of five iambic feet or iambs, each consisting of a relatively unstressed syllable (here represented with "˘" above the syllable) followed by a relatively stressed one (here represented with "/" above the syllable) – ˘ / ˘ / ˘ / ˘ / ˘ / So long as men can breathe, or eyes can see, ˘ / ˘ / ˘ / ˘ / ˘ / So long lives this, and this gives life to thee. This approach to analyzing and classifying metres originates from Ancient Greek tragedians and poets such as Homer, Pindar, Hesiod, and Sappho. However, some metres have an overall rhythmic pattern to the line that cannot easily be described using feet. This occurs in Sanskrit poetry; see Vedic metre and Sanskrit metre. (Although this poetry is in fact specified using feet, each "foot" is more or less equivalent to an entire line.) It also occurs in some Western metres, such as the hendecasyllable favoured by Catullus and Martial, which can be described as: x x — ∪ ∪ — ∪ — ∪ — — (where "—" = long, "∪" = short, and "x x" can be realized as "— ∪" or "— —" or "∪ —") Macron and breve notation: ¯ = stressed/long syllable, ˘ = unstressed/short syllable. If the line has only one foot, it is called a monometer; two feet, dimeter; three is trimeter; four is tetrameter; five is pentameter; six is hexameter, seven is heptameter and eight is octameter. For example, if the feet are iambs, and if there are five feet to a line, then it is called an iambic pentameter. If the feet are primarily dactyls and there are six to a line, then it is a dactylic hexameter. Caesura Sometimes a natural pause occurs in the middle of a line rather than at a line-break. This is a caesura (cut). A good example is from The Winter's Tale by William Shakespeare; the caesurae are indicated by '/': It is for you we speak, / not for ourselves: You are abused / and by some putter-on That will be damn'd for't; / would I knew the villain, I would land-damn him.
/ Be she honour-flaw'd, I have three daughters; / the eldest is eleven In Latin and Greek poetry, a caesura is a break within a foot caused by the end of a word. Each line of traditional Germanic alliterative verse is divided into two half-lines by a caesura. This can be seen in Piers Plowman: A fair feeld ful of folk / fond I ther bitwene— Of alle manere of men / the meene and the riche, Werchynge and wandrynge / as the world asketh. Somme putten hem to the plough / pleiden ful selde, In settynge and sowynge / swonken ful harde, And wonnen that thise wastours / with glotonye destruyeth. Enjambment By contrast with caesura, enjambment is incomplete syntax at the end of a line; the meaning runs over from one poetic line to the next, without terminal punctuation. Also from Shakespeare's The Winter's Tale: I am not prone to weeping, as our sex Commonly are; the want of which vain dew Perchance shall dry your pities; but I have That honourable grief lodged here which burns Worse than tears drown. Metric variations Poems with a well-defined overall metric pattern often have a few lines that violate that pattern. A common variation is the inversion of a foot, which turns an iamb ("da-DUM") into a trochee ("DUM-da"). A second variation is a headless verse, which lacks the first syllable of the first foot. A third variation is catalexis, where the end of a line is shortened by a foot, or two or part thereof – an example of this is at the end of each verse in Keats' "La Belle Dame sans Merci": And on thy cheeks a fading rose (4 feet) Fast withereth too (2 feet) Modern English Most English metre is classified according to the same system as Classical metre with an important difference. English is an accentual language, and therefore beats and offbeats (stressed and unstressed syllables) take the place of the long and short syllables of classical systems. 
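Because English metre is accentual, a line's scansion can be written as a string of stress marks and checked mechanically. The following is a minimal sketch, not from the source; the 'u'/'/' encoding and the function name are assumptions. It tests only the strict iambic-pentameter norm:

```python
def is_iambic_pentameter(scansion):
    # scansion: 'u' for an unstressed syllable, '/' for a stressed one.
    # Checks only the strict norm of five iambs; real verse admits
    # substitutions (inversion, headless lines) that this ignores.
    return scansion == "u/" * 5

# "So long as men can breathe, or eyes can see"
print(is_iambic_pentameter("u/u/u/u/u/"))  # True
print(is_iambic_pentameter("/uu/u/u/u/"))  # False: first foot inverted
```

A fuller scanner would allow the metrical variations described below (inversion, headless verse, catalexis) rather than demanding the exact pattern.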
In most English verse, the metre can be considered as a sort of back beat, against which natural speech rhythms vary expressively. The most common characteristic feet of English verse are the iamb in two syllables and the anapest in three. (See Foot (prosody) for a complete list of the metrical feet and their names.) Metrical systems The number of metrical systems in English is not agreed upon. The four major types are: accentual verse, accentual-syllabic verse, syllabic verse and quantitative verse. The alliterative verse of Old English could also be added to this list, or included as a special type of accentual verse. Accentual verse focuses on the number of stresses in a line, while ignoring the number of offbeats and syllables; accentual-syllabic verse focuses on regulating both the number of stresses and the total number of syllables in a line; syllabic verse only counts the number of syllables in a line; quantitative verse regulates the patterns of long and short syllables (this sort of verse is often considered alien to English). The use of foreign metres in English is all but exceptional. Frequently used metres The most frequently encountered metre of English verse is the iambic pentameter, in which the metrical norm is five iambic feet per line, though metrical substitution is common and rhythmic variations are practically inexhaustible. John Milton's Paradise Lost, most sonnets, and much else besides in English are written in iambic pentameter. Lines of unrhymed iambic pentameter are commonly known as blank verse. Blank verse in the English language is most famously represented in the plays of William Shakespeare and the great works of Milton, though Tennyson (Ulysses, The Princess) and Wordsworth (The Prelude) also make notable use of it. A rhymed pair of lines of iambic pentameter make a heroic couplet, a verse form which was used so often in the 18th century that it is now used mostly for humorous effect (although see Pale Fire for a non-trivial case). 
The most famous writers of heroic couplets are Dryden and Pope. Another important metre in English is the ballad metre, also called the "common metre", which is a four-line stanza, with two pairs of a line of iambic tetrameter followed by a line of iambic trimeter; the rhymes usually fall on the lines of trimeter, although in many instances the tetrameter also rhymes. This is the metre of most of the Border and Scots or English ballads. In hymnody it is called the "common metre", as it is the most common of the named hymn metres used to pair many hymn lyrics with melodies, such as Amazing Grace: Amazing Grace! how sweet the sound That saved a wretch like me; I once was lost, but now am found; Was blind, but now I see. Emily Dickinson is famous for her frequent use of ballad metre: Great streets of silence led away To neighborhoods of pause — Here was no notice — no dissent — No universe — no laws. Other languages Sanskrit Versification in Classical Sanskrit poetry is of three kinds. Syllabic metres depend on the number of syllables in a verse, with relative freedom in the distribution of light and heavy syllables. This style is derived from older Vedic forms. An example is the Anuṣṭubh metre found in the great epics, the Mahabharata and the Ramayana, which has exactly eight syllables in each line, of which only some are specified as to length. Syllabo-quantitative metres depend on syllable count, but the light-heavy patterns are fixed. An example is the Mandākrāntā metre, in which each line has 17 syllables in a fixed pattern. Quantitative metres depend on duration, where each line has a fixed number of morae, grouped in feet with usually 4 morae in each foot. An example is the Arya metre, in which each verse has four lines of 12, 18, 12, and 15 morae respectively. In each 4-mora foot there can be two long syllables, four short syllables, or one long and two short in any order. Standard traditional works on metre are those of Pingala and Kedāra.
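The 4-mora foot of the Arya metre described above admits only a handful of shapes (long = 2 morae, short = 1). A small sketch, not from the source, that enumerates them by recursion:

```python
def mora_feet(total=4, lengths=(1, 2)):
    # Enumerate syllable sequences whose morae sum to `total`
    # (1 = short syllable, 2 = long syllable).
    if total == 0:
        return [()]
    return [(first,) + rest
            for first in lengths if first <= total
            for rest in mora_feet(total - first, lengths)]

for foot in mora_feet():
    print(foot)
```

This yields five shapes: the all-short foot, the two-long foot, and the three orderings of one long with two shorts, exactly as the text describes.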
The most exhaustive compilations, such as the modern ones by Patwardhan and Velankar, contain over 600 metres. This is a substantially larger repertoire than in any other metrical tradition. Greek and Latin The metrical "feet" in the classical languages were based on the length of time taken to pronounce each syllable; syllables were categorized according to their weight as either "long" or "short" (indicated as dum and di below). These are also called "heavy" and "light" syllables, respectively, to distinguish from long and short vowels. The foot is often compared to a musical measure and the long and short syllables to whole notes and half notes. In English poetry, feet are determined by emphasis rather than length, with stressed and unstressed syllables serving the same function as long and short syllables in classical metre. The basic unit in Greek and Latin prosody is a mora, which is defined as a single short syllable. A long syllable is equivalent to two morae. A long syllable contains either a long vowel, a diphthong, or a short vowel followed by two or more consonants. Various rules of elision sometimes prevent a grammatical syllable from making a full syllable, and certain other lengthening and shortening rules (such as correption) can create long or short syllables in contexts where one would expect the opposite. The most important Classical metre is the dactylic hexameter, the metre of Homer and Virgil. This form uses verses of six feet. The word dactyl comes from the Greek word daktylos meaning finger, since there is one long part followed by two short stretches. The first four feet are dactyls (daa-duh-duh), but can be spondees (daa-daa). The fifth foot is almost always a dactyl. The sixth foot is either a spondee or a trochee (daa-duh). The initial syllable of either foot is called the ictus, the basic "beat" of the verse. There is usually a caesura after the ictus of the third foot.
The opening line of the Aeneid is a typical line of dactylic hexameter: Armă vĭ | rumquĕ că | nō, Troi | ae quī | prīmŭs ăb | ōrīs ("I sing of arms and the man, who first from the shores of Troy...") In this example, the first and second feet are dactyls; their first syllables, "Ar" and "rum" respectively, contain short vowels, but count as long because the vowels are both followed by two consonants. The third and fourth feet are spondees, the first of which is divided by the main caesura of the verse. The fifth foot is a dactyl, as is nearly always the case. The final foot is a spondee. The dactylic hexameter was imitated in English by Henry Wadsworth Longfellow in his poem Evangeline: This is the forest primeval. The murmuring pines and the hemlocks, Bearded with moss, and in garments green, indistinct in the twilight, Stand like Druids of old, with voices sad and prophetic, Stand like harpers hoar, with beards that rest on their bosoms. Notice how the first line: This is the | for-est pri | me-val. The | mur-muring | pines and the | hem-locks Follows this pattern: dum diddy | dum diddy | dum diddy | dum diddy | dum diddy | dum dum Also important in Greek and Latin poetry is the dactylic pentameter. This was a line of verse, made up of two equal parts, each of which contains two dactyls followed by a long syllable, which counts as a half foot. In this way, the number of feet amounts to five in total. Spondees can take the place of the dactyls in the first half, but never in the second. The long syllable at the close of the first half of the verse always ends a word, giving rise to a caesura. Dactylic pentameter is never used in isolation. Rather, a line of dactylic pentameter follows a line of dactylic hexameter in the elegiac distich or elegiac couplet, a form of verse that was used for the composition of elegies and other tragic and solemn verse in the Greek and Latin world, as well as love poetry that was sometimes light and cheerful. 
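The hexameter scheme described above (feet one to four dactyl or spondee, the fifth almost always a dactyl, the sixth a spondee or trochee) can be stated as a simple mechanical check. The following is a sketch, not from the source; the 'L'/'S' encoding of long and short syllables and the function name are assumptions:

```python
def is_hexameter(feet):
    # feet: a list of six strings of 'L' (long) and 'S' (short) syllables.
    # Encodes the usual scheme: feet 1-4 dactyl or spondee,
    # foot 5 a dactyl, foot 6 a spondee or trochee.
    dactyl, spondee, trochee = "LSS", "LL", "LS"
    return (len(feet) == 6
            and all(f in (dactyl, spondee) for f in feet[:4])
            and feet[4] == dactyl
            and feet[5] in (spondee, trochee))

# The Aeneid's opening line scans dactyl, dactyl, spondee, spondee,
# dactyl, spondee:
print(is_hexameter(["LSS", "LSS", "LL", "LL", "LSS", "LL"]))  # True
```

The sketch treats the fifth-foot dactyl as mandatory; as the text notes, it is only "almost always" the case, so a rare spondaic fifth foot would be rejected here.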
An example from Ovid's Tristia: Vergĭlĭ | um vī | dī tan | tum, nĕc ă | māră Tĭ | bullō Tempŭs ă | mīcĭtĭ | ae || fātă dĕ | dērĕ mĕ | ae. ("Virgil I merely saw, and the harsh Fates gave Tibullus no time for my friendship.") The Greeks and Romans also used a number of lyric metres, which were typically used for shorter poems than elegiacs or hexameter. In Aeolic verse, one important line was called the hendecasyllabic, a line of eleven syllables. This metre was used most often in the Sapphic stanza, named after the Greek poet Sappho, who wrote many of her poems in the form. A hendecasyllabic is a line with a never-varying structure: two trochees, followed by a dactyl, then two more trochees. In the Sapphic stanza, three hendecasyllabics are followed by an "Adonic" line, made up of a dactyl and a trochee. This is the form of Catullus 51 (itself an homage to Sappho 31): Illĕ mī pār essĕ dĕō vĭdētur; illĕ, sī fās est, sŭpĕrārĕ dīvōs, quī sĕdēns adversŭs ĭdentĭdem tē spectăt ĕt audit ("He seems to me to be like a god; if it is permitted, he seems above the gods, who sitting across from you gazes at you and hears you again and again.") The Sapphic stanza was imitated in English by Algernon Charles Swinburne in a poem he simply called Sapphics: Saw the white implacable Aphrodite, Saw the hair unbound and the feet unsandalled Shine as fire of sunset on western waters; Saw the reluctant... Classical Arabic The metrical system of Classical Arabic poetry, like those of classical Greek and Latin, is based on the weight of syllables classified as either "long" or "short". The basic principles of Arabic poetic metre, Arūḍ or Arud, the Science of Poetry, were put forward by Al-Farahidi (718–786 CE), who did so after noticing that poems consisted of repeated syllables in each verse. In his first book, Al-Ard, he described 15 types of verse. Al-Akhfash described one extra, the 16th. A short syllable contains a short vowel with no following consonants.
For example, the word kataba, which syllabifies as ka-ta-ba, contains three short vowels and is made up of three short syllables. A long syllable contains either a long vowel or a short vowel followed by a consonant as is the case in the word maktūbun which syllabifies as mak-tū-bun. These are the only syllable types possible in Classical Arabic phonology which, by and large, does not allow a syllable to end in more than one consonant or a consonant to occur in the same syllable after a long vowel. In other words, syllables of the type -āk- or -akr- are not
them attempted to introduce a new approach or to simplify the rules. … Is it not time for a new, simple presentation which avoids contrivance, displays close affinity to [the art of] poetry, and perhaps renders the science of prosody palatable as well as manageable?” In the 20th and the 21st centuries, numerous scholars have endeavored to supplement al-Khalīl's contribution. The Arabic metres Classical Arabic has sixteen established metres. Though each of them allows for a certain amount of variation, their basic patterns are as follows, using: "–" for 1 long syllable; "⏑" for 1 short syllable; "x" for a position that can contain 1 long or 1 short; "o" for a position that can contain 1 long or 2 shorts; "S" for a position that can contain 1 long, 2 shorts, or 1 long + 1 short. Classical Persian The terminology for the metrical system used in classical and classical-style Persian poetry is the same as that of Classical Arabic, even though these are quite different in both origin and structure. This has led to serious confusion among prosodists, both ancient and modern, as to the true source and nature of the Persian metres, the most obvious error being the assumption that they were copied from Arabic. Persian poetry is quantitative, and the metrical patterns are made of long and short syllables, much as in Classical Greek, Latin and Arabic. Anceps positions in the line, however, that is, places where either a long or short syllable can be used (marked "x" in the schemes below), are not found in Persian verse except in some metres at the beginning of a line. Persian poetry is written in couplets, with each half-line (hemistich) being 10-14 syllables long. Except in the ruba'i (quatrain), where either of two very similar metres may be used, the same metre is used for every line in the poem. Rhyme is always used, sometimes with double rhyme or internal rhymes in addition.
In some poems, known as masnavi, the two halves of each couplet rhyme, with a scheme aa, bb, cc and so on. In lyric poetry, the same rhyme is used throughout the poem at the end of each couplet, but except in the opening couplet, the two halves of each couplet do not rhyme; hence the scheme is aa, ba, ca, da. A ruba'i (quatrain) also usually has the rhyme aa, ba. A particular feature of classical Persian prosody, not found in Latin, Greek or Arabic, is that instead of two lengths of syllables (long and short), there are three lengths (short, long, and overlong). Overlong syllables can be used anywhere in the line in place of a long + a short, or in the final position in a line or half line. When a metre has a pair of short syllables (⏑ ⏑), it is common for a long syllable to be substituted, especially at the end of a line or half-line. About 30 different metres are commonly used in Persian. 70% of lyric poems are written in one of the following seven metres: ⏑ – ⏑ – ⏑ ⏑ – – ⏑ – ⏑ – ⏑ ⏑ – – – ⏑ – ⏑ – ⏑ ⏑ – – ⏑ – ⏑ – – ⏑ – – – ⏑ – – – ⏑ – – – ⏑ – x ⏑ – – ⏑ ⏑ – – ⏑ ⏑ – – ⏑ ⏑ – x ⏑ – – ⏑ – ⏑ – ⏑ ⏑ – ⏑ – – – ⏑ – – – ⏑ – – – ⏑ – – – – – ⏑ ⏑ – – ⏑ ⏑ – – ⏑ ⏑ – – Masnavi poems (that is, long poems in rhyming couplets) are always written in one of the shorter 11 or 10-syllable metres (traditionally seven in number) such as the following: ⏑ – – ⏑ – – ⏑ – – ⏑ – (e.g. Ferdowsi's Shahnameh) ⏑ – – – ⏑ – – – ⏑ – – (e.g. Gorgani's Vis o Ramin) – ⏑ – – – ⏑ – – – ⏑ – (e.g. Rumi's Masnavi-e Ma'navi) – – ⏑ ⏑ – ⏑ – ⏑ – – (e.g. 
Nezami's Leyli o Majnun) The two metres used for ruba'iyat (quatrains), which are only used for this, are the following, of which the second is a variant of the first: – – ⏑ ⏑ – – ⏑ ⏑ – – ⏑ ⏑ – – – ⏑ ⏑ – ⏑ – ⏑ – – ⏑ ⏑ – Classical Chinese Classical Chinese poetic metric may be divided into fixed and variable length line types, although the actual scansion of the metre is complicated by various factors, including linguistic changes and variations encountered in dealing with a tradition extending over a geographically extensive regional area for a continuous time period of over some two-and-a-half millennia. Beginning with the earlier recorded forms: the Classic of Poetry tends toward couplets of four-character lines, grouped in rhymed quatrains; and, the Chuci follows this to some extent, but moves toward variations in line length. Han Dynasty poetry tended towards the variable line-length forms of the folk ballads and the Music Bureau yuefu. Jian'an poetry, Six Dynasties poetry, and Tang Dynasty poetry tend towards a poetic metre based on fixed-length lines of five, seven, (or, more rarely six) characters/verbal units tended to predominate, generally in couplet/quatrain-based forms, of various total verse lengths. The Song poetry is specially known for its use of the ci, using variable line lengths which follow the specific pattern of a certain musical song's lyrics, thus ci are sometimes referred to as "fixed-rhythm" forms. Yuan poetry metres continued this practice with their qu forms, similarly fixed-rhythm forms based on now obscure or perhaps completely lost original examples (or, ur-types). Not that Classical Chinese poetry ever lost the use of the shi forms, with their metrical patterns found in the "old style poetry" (gushi) and the regulated verse forms of (lüshi or jintishi). The regulated verse forms also prescribed patterns based upon linguistic tonality. 
The use of caesura is important in regard to the metrical analysis of Classical Chinese poetry forms. Old English The metric system of Old English poetry was different from that of modern English, and related more to the verse forms of most of the older Germanic languages such as Old Norse. It used alliterative verse, a metrical pattern involving varied numbers of syllables but a fixed number (usually four) of strong stresses in each line. The unstressed syllables were relatively unimportant, but the caesurae (breaks between the half-lines) played a major role in Old English poetry. In place of using feet, alliterative verse divided each line into two half-lines. Each half-line had to follow one of five or so patterns, each of which defined a sequence of stressed and unstressed syllables, typically with two stressed syllables per half line. Unlike typical Western poetry, however, the number of unstressed syllables could vary somewhat. For example, the common pattern "DUM-da-DUM-da" could allow between one and five unstressed syllables between the two stresses. The following is a famous example, taken from The Battle of Maldon, a poem written shortly after the date of that battle (AD 991): Hige sceal þe heardra, || heorte þe cēnre, mōd sceal þe māre, || swā ūre mægen lȳtlað ("Will must be the harder, courage the bolder, spirit must be the more, as our might lessens.") In the quoted section, the stressed syllables have been underlined. (Normally, the stressed syllable must be long if followed by another syllable in a word. However, by a rule known as syllable resolution, two short syllables in a single word are considered equal to a single long syllable. Hence, sometimes two syllables have been underlined, as in hige and mægen.) The German philologist Eduard Sievers (died 1932) identified five different patterns of half-line in Anglo-Saxon alliterative poetry. 
The first three half-lines have the type A pattern "DUM-da-(da-)DUM-da", while the last one has the type C pattern "da-(da-da-)DUM-DUM-da", with parentheses indicating optional unstressed syllables that have been inserted. Note also the pervasive pattern of alliteration, in which the first and/or second stressed syllables alliterate with the third, but not with the fourth.

French

In French poetry, metre is determined solely by the number of syllables in a line. A silent 'e' counts as a syllable before a consonant, but is elided before a vowel (where h aspiré counts as a consonant). At the end of a line, the "e" remains unelided but is hypermetrical (outside the count of syllables, like a feminine ending in English verse); in that case, the rhyme is also called "feminine", whereas it is called "masculine" in the other cases. The most frequently encountered metre in Classical French poetry is the alexandrine, composed of two hemistiches of six syllables each. Two famous alexandrines are:

La fille de Minos et de Pasiphaë (Jean Racine)
(the daughter of Minos and of Pasiphaë), and

Waterloo ! Waterloo ! Waterloo ! Morne plaine! (Victor Hugo)
(Waterloo! Waterloo! Waterloo! Gloomy plain!)

Classical French poetry also had a complex set of rules for rhymes that goes beyond how words merely sound. These are usually taken into account when describing the metre of a poem.

Spanish

In Spanish poetry the metre is determined by the number of syllables the verse has. Still, it is the phonetic accent in the last word of the verse that decides the final count of the line. If the accent of the final word is on the last syllable, then the poetic rule states that one syllable shall be added to the actual count of syllables in the said line, thus yielding a higher number of poetic syllables than grammatical syllables.
If the accent lies on the second-to-last syllable of the last word in the verse, then the final count of poetic syllables will be the same as the grammatical number of syllables. Furthermore, if the accent lies on the third-to-last syllable, then one syllable is subtracted from the actual count, yielding fewer poetic syllables than grammatical syllables. Spanish poetry uses poetic licenses, unique to Romance languages, to change the number of syllables by manipulating mainly the vowels in the line. Regarding these poetic licenses one must consider three kinds of phenomena: (1) syneresis, (2) dieresis and (3) hiatus. There are many types of licenses, used either to add or subtract syllables, that may be applied when needed after taking into consideration the poetic rules of the last word. Yet all have in common that they only manipulate vowels that are close to each other and not interrupted by consonants. Some common metres in Spanish verse are:

Septenary: A line with seven poetic syllables.
Octosyllable: A line with eight poetic syllables. This metre is commonly used in romances, narrative poems similar to English ballads, and in most proverbs.
Hendecasyllable: A line with eleven poetic syllables. This metre plays a similar role to pentameter in English verse. It is commonly used in sonnets, among other things.
Alexandrine: A line consisting of fourteen syllables, commonly separated into two hemistichs of seven syllables each. (In most languages, this term denotes a line of twelve or sometimes thirteen syllables, but not in Spanish.)

Italian

In Italian poetry, metre is determined solely by the position of the last accent in a line, the position of the other accents being however important for verse equilibrium. Syllables are enumerated with respect to a verse which ends with a paroxytone, so that a Septenary (having seven syllables) is defined as a verse whose last accent falls on the sixth syllable: it may thus contain eight syllables (Ei fu.
Siccome immobile) or just six (la terra al nunzio sta). Moreover, when a word ends with a vowel and the next one starts with a vowel, they are considered to be in the same syllable (synalepha): so Gli anni e i giorni consists of only four syllables ("Gli an" "ni e i" "gior" "ni"). Even-syllabic verses have a fixed stress pattern. Because of the mostly trochaic nature of the Italian language, verses with an even number of syllables are far easier to compose, and the Novenary is usually regarded as the most difficult verse. Some common metres in Italian verse are:

Sexenary: A line whose last stressed syllable is on the fifth, with a fixed stress on the second one as well (Al Re Travicello / Piovuto ai ranocchi, Giusti).
Septenary: A line whose last stressed syllable is the sixth one.
Octosyllable: A line whose last accent falls on the seventh syllable. More often than not, the secondary accents fall
claimed that Moqed and Hanjour had both purchased tickets there. They claimed that Hani Hanjour spoke very little English, and Moqed did most of the speaking. Hanjour requested a seat in the front row of the airplane. Their credit card failed to authorize, and after being told the agency did not accept personal cheques, the pair left to withdraw cash. They returned shortly afterwards and paid a total of $1,842.25 in cash. During this time, Moqed was staying in Room 343 of the Valencia Motel. On September 2, Moqed paid cash for a $30 weekly membership at Gold's Gym in Greenbelt, Maryland. Three days later he was seen on an ATM camera with Hani Hanjour. After the attacks, employees at an adult video store, Adult Lingerie Center, in Beltsville claimed that Moqed had been in the store three times, although there were no transaction slips that confirmed this.

Attacks

On September 11, 2001, Moqed arrived at Washington Dulles International Airport. According to the 9/11 Commission Report, Moqed set off the metal detector at the airport and was screened with a hand-wand. He passed the cursory inspection and was able to board his flight at 7:50. He was seated in 12A, adjacent to Mihdhar, who was in 12B. Moqed helped to hijack the plane and assisted Hani Hanjour in crashing the plane into the Pentagon at 9:37 a.m., killing 189 people (64 on the plane and 125 on the ground). The flight
Arabia before joining Al-Qaeda in 1999 and being chosen to participate in the 9/11 attacks. He arrived in the United States in May 2001 and helped with the planning of how the attacks would be carried out. On September 11, 2001, Moqed boarded American Airlines Flight 77 and assisted in the hijacking of the plane so that it could be crashed into the Pentagon.

Early life and activities

Moqed was a law student from the small town of Al-Nakhil, Saudi Arabia (west of Medina), studying at King Fahd University's Faculty of Administration and Economics. Before he dropped out, he was apparently recruited into al-Qaeda in 1999 along with his friend Satam al-Suqami, with whom he had earlier shared a college room. The two trained at Khalden, a large training facility near Kabul that was run by Ibn al-Shaykh al-Libi. A friend in Saudi Arabia claimed he was last seen there in 2000, before leaving to study English in the United States. In November 2000, Moqed and Suqami flew into Iran from Bahrain together. Some time late in 2000, Moqed traveled to the United Arab Emirates, where he purchased traveler's cheques presumed to have been paid for by 9/11 financier Mustafa Ahmed al-Hawsawi. Five other hijackers also passed through the UAE and purchased traveler's cheques, including Wail al-Shehri, Saeed al-Ghamdi, Hamza al-Ghamdi, Ahmed al-Haznawi and Ahmed al-Nami. Known as al-Ahlaf during the preparations, Moqed then moved in with hijackers Salem al-Hazmi, Abdulaziz al-Omari and Khalid al-Mihdhar in an apartment in Paterson, New Jersey.

2001

According to the FBI, Moqed first arrived in the United States on May 2, 2001. In March 2001 (this contradicts the previous paragraph), Moqed, Hani Hanjour, Hazmi and Ahmed al-Ghamdi rented a minivan and travelled to Fairfield, Connecticut. There they met a contact in the parking lot of a local convenience store who provided them with false IDs. (This was possibly Eyad Alrababah, a Jordanian charged with document fraud.)
Moqed was one of the five hijackers who asked for a state identity card on August 2, 2001. On August 24, both Mihdhar and Moqed tried to purchase flight tickets from the American Airlines online ticket-merchant, but had
Perry may also refer to:
Matthew C. Perry (1794–1858), American naval officer who forcibly opened Japan to trade with the West
Matthew Perry Monument (Newport, Rhode Island)
USNS Matthew Perry (T-AKE-9)
Matthew J. Perry (1921–2011), South Carolina's first African American U.S.
which lifts the ribbon between the type element and the paper is disabled so that the bare, sharp type element strikes the stencil directly. The impact of the type element displaces the coating, making the tissue paper permeable to the oil-based ink. This is called "cutting a stencil". A variety of specialized styluses were used on the stencil to render lettering, illustrations, or other artistic features by hand against a textured plastic backing plate. Mistakes were corrected by brushing them out with a specially formulated correction fluid and retyping once it had dried. ("Obliterine" was a popular brand of correction fluid in Australia and the United Kingdom.) Stencils were also made with a thermal process, an infrared method similar to that used by early photocopiers. The most common machine was the Thermofax. Another device, called an electrostencil machine, was sometimes used to make mimeo stencils from a typed or printed original. It worked by scanning the original on a rotating drum with a moving optical head and burning through the blank stencil with an electric spark in the places where the optical head detected ink. It was slow and produced ozone. Text from electrostencils had lower resolution than that from typed stencils, although the process was good for reproducing illustrations. A skilled mimeo operator using an electrostencil and a very coarse halftone screen could make acceptable printed copies of a photograph. During the declining years of the mimeograph, some people made stencils with early computers and dot-matrix impact printers.

Limitations

Unlike spirit duplicators (where the only ink available is depleted from the master image), mimeograph technology works by forcing a replenishable supply of ink through the stencil master. In theory, the mimeography process could be continued indefinitely, especially if a durable stencil master were used (e.g. a thin metal foil).
In practice, most low-cost mimeo stencils gradually wear out over the course of producing several hundred copies. Typically the stencil deteriorates gradually, producing a characteristic degraded image quality until the stencil tears, abruptly ending the print run. If further copies are desired at this point, another stencil must be made. Often, the stencil material covering the interiors of closed letterforms (e.g. "a", "b", "d", "e", "g", etc.) would fall away during continued printing, causing ink-filled letters in the copies. The stencil would gradually stretch, starting near the top where the mechanical forces were greatest, causing a characteristic "mid-line sag" in the textual lines of the copies, that would progress until the stencil failed completely. The Gestetner Company (and others) devised various methods to make mimeo stencils more durable. Compared to spirit duplication, mimeography produced a darker, more legible image. Spirit duplicated images were usually tinted a light purple or lavender, which gradually became lighter over the course of some dozens of copies. Mimeography was often considered "the next step up" in quality, capable of producing hundreds of copies. Print runs beyond that level were usually produced by professional printers or, as the technology became available, xerographic copiers. Durability Mimeographed images generally have much better durability than spirit-duplicated images, since the inks are more resistant to ultraviolet light. The primary preservation challenge is the
low-quality paper often used, which would yellow and degrade due to residual acid in the treated pulp from which the paper was made. In the worst case, old copies can crumble into small particles when handled. Mimeographed copies have moderate durability when acid-free paper is used.

Contemporary use

Gestetner, Risograph, and other companies still make and sell highly automated mimeograph-like machines that are externally similar to photocopiers. The modern version of a mimeograph, called a digital duplicator, or copyprinter, contains a scanner, a thermal head for stencil cutting, and a large roll of stencil material entirely inside the unit. The stencil material consists of a very thin polymer film laminated to a long-fibre non-woven tissue.
It makes the stencils and mounts and unmounts them from the print drum automatically, making it almost as easy to operate as a photocopier. The Risograph is the best known of these machines. Although mimeographs remain more economical and energy-efficient in mid-range quantities, easier-to-use photocopying and offset printing have replaced mimeography almost entirely in developed countries. Mimeography continues to be used in developing countries because it is a simple, cheap, and robust technology. Many mimeographs can be hand-cranked, requiring no electricity.

Uses and art

Mimeographs and the closely related but distinctly different spirit duplicator process were both used extensively in schools to copy homework assignments and tests. They were also commonly used for low-budget amateur publishing, including club newsletters and church bulletins. They were especially popular with science fiction fans, who used them extensively in the production of fanzines in the middle 20th century, before photocopying became inexpensive. Letters and typographical symbols were sometimes used to create illustrations, in a precursor to ASCII art. Because changing ink color in a mimeograph could be a laborious process, involving extensively cleaning the machine or, on newer models, replacing the drum or rollers, and then running the paper through the machine a second time, some fanzine publishers experimented with techniques for painting several colors on the pad. In addition,
on display in the Griffith Observatory in Los Angeles, and at UCLA's Meteorite Gallery.

Antarctica

A few meteorites were found in Antarctica between 1912 and 1964. In 1969, the 10th Japanese Antarctic Research Expedition found nine meteorites on a blue ice field near the Yamato Mountains. With this discovery came the realization that movement of ice sheets might act to concentrate meteorites in certain areas. After a dozen other specimens were found in the same place in 1973, a Japanese expedition was launched in 1974 dedicated to the search for meteorites. This team recovered nearly 700 meteorites. Shortly thereafter, the United States began its own program to search for Antarctic meteorites, operating along the Transantarctic Mountains on the other side of the continent: the Antarctic Search for Meteorites (ANSMET) program. European teams, starting with a consortium called "EUROMET" in the 1990/91 season, and continuing with a program by the Italian Programma Nazionale di Ricerche in Antartide, have also conducted systematic searches for Antarctic meteorites. The Antarctic Scientific Exploration of China has conducted successful meteorite searches since 2000. A Korean program (KOREAMET) was launched in 2007 and has collected a few meteorites. The combined efforts of all of these expeditions have produced more than 23,000 classified meteorite specimens since 1974, with thousands more that have not yet been classified. For more information see the article by Harvey (2003).

Australia

At about the same time as meteorite concentrations were being discovered in the cold desert of Antarctica, collectors discovered that many meteorites could also be found in the hot deserts of Australia. Several dozen meteorites had already been found in the Nullarbor region of Western and South Australia. Systematic searches between about 1971 and the present have recovered more than 500 others, ~300 of which are currently well characterized.
The meteorites can be found in this region because the land presents a flat, featureless plain covered by limestone. In the extremely arid climate, there has been relatively little weathering or sedimentation on the surface for tens of thousands of years, allowing meteorites to accumulate without being buried or destroyed. The dark-colored meteorites can then be recognized among the very different-looking limestone pebbles and rocks.

The Sahara

In 1986–87, a German team installing a network of seismic stations while prospecting for oil discovered about 65 meteorites on a flat desert plain about southeast of Dirj (Daraj), Libya. A few years later, a desert enthusiast saw photographs of meteorites being recovered by scientists in Antarctica, and thought that he had seen similar occurrences in northern Africa. In 1989, he recovered about 100 meteorites from several distinct locations in Libya and Algeria. Over the next several years, he and others who followed found at least 400 more meteorites. The find locations were generally in regions known as regs or hamadas: flat, featureless areas covered only by small pebbles and minor amounts of sand. Dark-colored meteorites can be easily spotted in these places. In the case of several meteorite fields, such as Dar al Gani, Dhofar, and others, favorable light-colored geology consisting of basic rocks (clays, dolomites, and limestones) makes meteorites particularly easy to identify. Although meteorites had been sold commercially and collected by hobbyists for many decades, up to the time of the Saharan finds of the late 1980s and early 1990s, most meteorites were deposited in or purchased by museums and similar institutions, where they were exhibited and made available for scientific research. The sudden availability of large numbers of meteorites that could be found with relative ease in places that were readily accessible (especially compared to Antarctica) led to a rapid rise in commercial collection of meteorites.
This process was accelerated when, in 1997, meteorites coming from both the Moon and Mars were found in Libya. By the late 1990s, private meteorite-collecting expeditions had been launched throughout the Sahara. Specimens of the meteorites recovered in this way are still deposited in research collections, but most of the material is sold to private collectors. These expeditions have now brought the total number of well-described meteorites found in Algeria and Libya to more than 500.

Northwest Africa

Meteorite markets came into existence in the late 1990s, especially in Morocco. This trade was driven by Western commercialization and an increasing number of collectors. The meteorites were supplied by nomads and local people who combed the deserts looking for specimens to sell. Many thousands of meteorites have been distributed in this way, most of which lack any information about how, when, or where they were discovered. These are the so-called "Northwest Africa" meteorites. When they are classified, they are named "Northwest Africa" (abbreviated NWA) followed by a number. It is generally accepted that NWA meteorites originate in Morocco, Algeria, Western Sahara, Mali, and possibly even further afield. Nearly all of these meteorites leave Africa through Morocco. Scores of important meteorites, including lunar and Martian ones, have been discovered and made available to science via this route. A few of the more notable meteorites recovered include Tissint and Northwest Africa 7034. Tissint was the first witnessed Martian meteorite fall in over fifty years; NWA 7034 is the oldest meteorite known to come from Mars, and is a unique water-bearing regolith breccia.

Arabian Peninsula

In 1999, meteorite hunters discovered that the deserts of southern and central Oman were also favorable for the collection of many specimens.
The gravel plains in the Dhofar and Al Wusta regions of Oman, south of the sandy deserts of the Rub' al Khali, had yielded about 5,000 meteorites as of mid-2009. Included among these are a large number of lunar and Martian meteorites, making Oman a particularly important area both for scientists and collectors. Early expeditions to Oman were mainly conducted by commercial meteorite dealers; however, international teams of Omani and European scientists have now also collected specimens. The recovery of meteorites from Oman is currently prohibited by national law, but a number of international hunters continue to remove specimens now deemed national treasures. This new law provoked a small international incident, as its implementation preceded any public notification of such a law, resulting in the prolonged imprisonment of a large group of meteorite hunters, primarily from Russia, but whose party also included members from the US and several European countries.

In human affairs

Meteorites have figured into human culture since their earliest discovery, as ceremonial or religious objects, as the subject of writing about events occurring in the sky, and as a source of peril. The oldest known iron artifacts are nine small beads hammered from meteoritic iron. They were found in northern Egypt and have been securely dated to 3200 BC.

Ceremonial or religious use

Although the use of the metal found in meteorites is also recorded in myths of many countries and cultures, where the celestial source was often acknowledged, scientific documentation only began in the last few centuries. Meteorite falls may have been the source of cultish worship. The cult in the Temple of Artemis at Ephesus, one of the Seven Wonders of the Ancient World, possibly originated with the observation and recovery of a meteorite that was understood by contemporaries to have fallen to the earth from Jupiter, the principal Roman deity.
There are reports that a sacred stone was enshrined at the temple that may have been a meteorite. The Black Stone set into the wall of the Kaaba has often been presumed to be a meteorite, but the little available evidence for this is inconclusive. Some Native Americans treated meteorites as ceremonial objects. In 1915, an iron meteorite was found in a Sinagua (c. 1100–1200 AD) burial cist near Camp Verde, Arizona, respectfully wrapped in a feather cloth. A small pallasite was found in a pottery jar in an old burial found at Pojoaque Pueblo, New Mexico. Nininger reports several other such instances, in the Southwest US and elsewhere, such as the discovery of Native American beads of meteoric iron found in Hopewell burial mounds, and the discovery of the Winona meteorite in a Native American stone-walled crypt.

Historical writings

In medieval China during the Song dynasty, a meteorite strike was recorded by Shen Kuo in 1064 AD near Changzhou. He reported that "a loud noise that sounded like thunder was heard in the sky; a giant star, almost like the moon, appeared in the southeast", and that the crater and the still-hot meteorite within were later found nearby. Two of the oldest recorded meteorite falls in Europe are the Elbogen (1400) and Ensisheim (1492) meteorites. The German physicist Ernst Florens Chladni was the first to publish (in 1794) the idea that meteorites might be rocks that originated not from Earth, but from space. His booklet was "On the Origin of the Iron Masses Found by Pallas and Others Similar to it, and on Some Associated Natural Phenomena". In this he compiled all available data on several meteorite finds and falls and concluded that they must have their origins in outer space. The scientific community of the time responded with resistance and mockery. It took nearly ten years before a general acceptance of the origin of meteorites was achieved through the work of the French scientist Jean-Baptiste Biot and the British chemist Edward Howard.
Biot's study, initiated by the French Academy of Sciences, was prompted by a fall of thousands of meteorites on 26 April 1803 from the skies of L'Aigle, France.

Striking people or property

Throughout history, many first- and second-hand reports speak of meteorites killing humans and other animals. One example, from 1490 AD in China, purportedly killed thousands of people. John Lewis has compiled some of these reports, and summarizes, "No one in recorded history has ever been killed by a meteorite in the presence of a meteoriticist and a medical doctor" and "reviewers who make sweeping negative conclusions usually do not cite any of the primary publications in which the eyewitnesses describe their experiences, and give no evidence of having read them". Modern reports of meteorite strikes include:

In 1954, in Sylacauga, Alabama, a stone chondrite, the Hodges meteorite or Sylacauga meteorite, crashed through a roof and injured an occupant.
A fragment of the Mbale meteorite fall from Uganda struck a youth, causing no injury.
In October 2021, a meteorite penetrated the roof of a house in Golden, British Columbia, landing on an occupant's bed.

Notable examples

Naming

Meteorites are always named for the places they were found, where practical, usually a nearby town or geographic feature. In cases where many meteorites were found in one place, the name may be followed by a number or letter (e.g., Allan Hills 84001 or Dimmitt (b)). The name designated by the Meteoritical Society is used by scientists, catalogers, and most collectors.

Terrestrial

Allende – largest known carbonaceous chondrite (Chihuahua, Mexico, 1969).
Allan Hills A81005 – First meteorite determined to be of lunar origin.
Allan Hills 84001 – Mars meteorite that was claimed to prove the existence of life on Mars.
The Bacubirito Meteorite (Meteorito de Bacubirito) – A meteorite estimated to weigh .
Campo del Cielo – A group of iron meteorites associated with a crater field (of the same name) of at least 26 craters in West Chaco Province, Argentina. The total weight of meteorites recovered exceeds 100 tonnes.
Canyon Diablo – Associated with Meteor Crater in Arizona.
Cape York – One of the largest meteorites in the world. A 34-ton fragment called "Ahnighito" is exhibited at the American Museum of Natural History; the largest meteorite on exhibit in any museum.
Gibeon – A large iron meteorite in Namibia, which created the largest known strewn field.
Hoba – The largest known intact meteorite.
Kaidun – An unusual carbonaceous chondrite.
Mbozi meteorite – A 16-metric-ton ungrouped iron meteorite in Tanzania.
Murchison – A carbonaceous chondrite found to contain nucleobases – the building blocks of life.
Nōgata – The oldest meteorite whose fall can be dated precisely (to 19 May 861, at Nōgata).
Orgueil – A famous meteorite due to its especially primitive nature and high presolar grain content.
Sikhote-Alin – Massive iron meteorite impact event that occurred on 12 February 1947.
Tucson Ring – Ring-shaped meteorite, used by a blacksmith as an anvil in Tucson, Arizona. Currently at the Smithsonian.
Willamette – The largest meteorite ever found in the United States.
2007 Carancas impact event – On 15 September 2007, a stony meteorite that may have weighed as much as 4,000 kilograms created a crater 13 meters in diameter near the village of Carancas, Peru.
2013 Russian meteor event – A 17-metre-diameter, 10,000-ton asteroid hit the atmosphere above Chelyabinsk, Russia, at 18 km/s around 09:20 local time (03:20 UTC) on 15 February 2013, producing a very bright fireball in the morning sky. A number of small meteorite fragments have since been found nearby.

Extraterrestrial

Bench Crater meteorite (Apollo 12, 1969) and the Hadley Rille meteorite (Apollo 15, 1971) – Fragments of asteroids found among the samples collected on the Moon.
Block Island meteorite and Heat Shield Rock – Discovered on Mars by the Opportunity rover among four other iron meteorites. Two nickel-iron meteorites were identified by the Spirit rover. (See also: Mars rocks)

Large impact craters

Acraman crater in South Australia ( diameter)
Ames crater in Major County, Oklahoma ( diameter)
Brent crater in northern Ontario ( diameter)
Chesapeake Bay impact crater ( diameter)
Chicxulub Crater off the coast of the Yucatán Peninsula ( diameter)
Clearwater Lakes, a double-crater impact in Québec, Canada ( in diameter)
Lonar crater in India ( diameter)
Lumparn in Åland, in the Baltic Sea ( diameter)
Manicouagan Reservoir in Québec, Canada ( diameter)
Manson crater in Iowa ( crater is buried)
Meteor Crater in Arizona, also known as "Barringer Crater", the first confirmed terrestrial impact crater ( diameter)
Mjølnir impact crater in the Barents Sea ( diameter)
Nördlinger Ries crater in Bavaria, Germany ( diameter)
Popigai crater in Russia ( diameter)
Siljan (lake) in Sweden, largest crater in Europe ( diameter)
Sudbury Basin in Ontario, Canada ( diameter)
Ungava Bay in Québec, Canada ()
Vredefort Crater in South Africa, the largest known impact crater on Earth ( diameter, from an estimated wide meteorite)

Disintegrating meteoroids

Tunguska event in Siberia, 1908 (no crater)
Vitim event in Siberia, 2002 (no crater)
Chelyabinsk event in Russia, 2013 (no known crater)

See also

Atmospheric focusing
Glossary of meteoritics
List of
preserved in sedimentary deposits sufficiently well that they can be recognized through mineralogical and geochemical studies. One limestone quarry in Sweden has produced an anomalously large number — exceeding one hundred — of fossil meteorites from the Ordovician, nearly all of which are highly weathered L-chondrites that still resemble the original meteorite under a petrographic microscope, but which have had their original material almost entirely replaced by terrestrial secondary mineralization. The extraterrestrial provenance was demonstrated in part through isotopic analysis of relict spinel grains, a mineral that is common in meteorites, is insoluble in water, and is able to persist chemically unchanged in the terrestrial weathering environment. One of these fossil meteorites, dubbed Österplana 065, appears to represent a distinct type of meteorite that is "extinct" in the sense that it is no longer falling to Earth, the parent body having already been completely depleted from the reservoir of near-Earth objects.

Collection

A "meteorite fall", also called an "observed fall", is a meteorite collected after its arrival was observed by people or automated devices. Any other meteorite is called a "meteorite find". There are more than 1,100 documented falls listed in widely used databases, most of which have specimens in modern collections. , the Meteoritical Bulletin Database had 1,180 confirmed falls.

Falls

Most meteorite falls are collected on the basis of eyewitness accounts of the fireball or the impact of the object on the ground, or both. Therefore, although meteorites fall with virtually equal probability everywhere on Earth, verified meteorite falls tend to be concentrated in areas with higher human population densities, such as Europe, Japan, and northern India. A small number of meteorite falls have been observed with automated cameras and recovered following calculation of the impact point.
The first of these was the Příbram meteorite, which fell in Czechoslovakia (now the Czech Republic) in 1959. In this case, two cameras used to photograph meteors captured images of the fireball. The images were used both to determine the location of the stones on the ground and, more significantly, to calculate for the first time an accurate orbit for a recovered meteorite. Following the Příbram fall, other nations established automated observing programs aimed at studying infalling meteorites. One of these was the Prairie Network, operated by the Smithsonian Astrophysical Observatory from 1963 to 1975 in the midwestern US. This program also observed a meteorite fall, the Lost City chondrite, allowing its recovery and a calculation of its orbit. Another program in Canada, the Meteorite Observation and Recovery Project, ran from 1971 to 1985. It too recovered a single meteorite, Innisfree, in 1977. Finally, observations by the European Fireball Network, a descendant of the original Czech program that recovered Příbram, led to the discovery and orbit calculation for the Neuschwanstein meteorite in 2002. NASA operates an automated system over the southeastern US that detects meteors and calculates the orbit, magnitude, ground track, and other parameters for each event, often recording a number of events each night. Finds Until the twentieth century, only a few hundred meteorite finds had ever been discovered. More than 80% of these were iron and stony-iron meteorites, which are easily distinguished from local rocks. To this day, only a few stony meteorites reported each year can be considered "accidental" finds. The reason there are now more than 30,000 meteorite finds in the world's collections can be traced to Harvey H. Nininger's discovery that meteorites are much more common on the surface of the Earth than was previously thought.
United States Nininger's strategy was to search for meteorites in the Great Plains of the United States, where the land was largely cultivated and the soil contained few rocks. Between the late 1920s and the 1950s, he traveled across the region, educating local people about what meteorites looked like and what to do if they thought they had found one, for example in the course of clearing a field. The result was the discovery of over 200 new meteorites, mostly stony types. In the late 1960s, Roosevelt County, New Mexico was found to be a particularly good place to find meteorites. After the discovery of a few meteorites in 1967, a public awareness campaign resulted in the finding of nearly 100 new specimens over the next few years, many of them by a single person, Ivan Wilson. In total, nearly 140 meteorites have been found in the region since 1967. In the area of the finds, the ground was originally covered by shallow, loose soil sitting atop a hardpan layer. During the Dust Bowl era, the loose soil was blown off, leaving any rocks and meteorites that were present stranded on the exposed surface. Beginning in the mid-1960s, amateur meteorite hunters began scouring the arid areas of the southwestern United States. To date, thousands of meteorites have been recovered from the Mojave, Sonoran, Great Basin, and Chihuahuan Deserts, with many recovered on dry lake beds. Significant finds include the three-tonne Old Woman meteorite, currently on display at the Desert Discovery Center in Barstow, California, and the Franconia and Gold Basin meteorite strewn fields, from each of which hundreds of kilograms of meteorites have been recovered. A number of finds from the American Southwest have been submitted with false find locations, as many finders think it unwise to share that information publicly, for fear of confiscation by the federal government and of competition with other hunters at published find sites.
Several of the meteorites found recently are currently on display in the Griffith Observatory in Los Angeles and in UCLA's Meteorite Gallery. Antarctica A few meteorites were found in Antarctica between 1912 and 1964. In 1969, the 10th Japanese Antarctic Research Expedition found nine meteorites on a blue ice field near the Yamato Mountains. With this discovery came the realization that movement of ice sheets might act to concentrate meteorites in certain areas. After a dozen other specimens were found in the same place in 1973, a Japanese expedition dedicated to the search for meteorites was launched in 1974. This team recovered nearly 700 meteorites. Shortly thereafter, the United States began its own program to search for Antarctic meteorites, operating along the Transantarctic Mountains on the other side of the continent: the Antarctic Search for Meteorites (ANSMET) program. European teams, starting with a consortium called "EUROMET" in the 1990/91 season and continuing with a program by the Italian Programma Nazionale di Ricerche in Antartide, have also conducted systematic searches for Antarctic meteorites. The Antarctic Scientific Exploration of China has conducted successful meteorite searches since 2000. A Korean program (KOREAMET) was launched in 2007 and has collected a few meteorites. The combined efforts of all of these expeditions have produced more than 23,000 classified meteorite specimens since 1974, with thousands more that have not yet been classified. For more information see the article by Harvey (2003). Australia At about the same time as meteorite concentrations were being discovered in the cold desert of Antarctica, collectors discovered that many meteorites could also be found in the hot deserts of Australia. Several dozen meteorites had already been found in the Nullarbor region of Western and South Australia.
Systematic searches between about 1971 and the present have recovered more than 500 others, roughly 300 of which are currently well characterized. The meteorites can be found in this region because the land presents a flat, featureless plain covered by limestone. In the extremely arid climate, there has been relatively little weathering or sedimentation on the surface for tens of thousands of years, allowing meteorites to accumulate without being buried or destroyed. The dark-colored meteorites can then be recognized among the very different-looking limestone pebbles and rocks. The Sahara In 1986–87, a German team installing a network of seismic stations while prospecting for oil discovered about 65 meteorites on a flat desert plain southeast of Dirj (Daraj), Libya. A few years later, a desert enthusiast saw photographs of meteorites being recovered by scientists in Antarctica and thought that he had seen similar occurrences in northern Africa. In 1989, he recovered about 100 meteorites from several distinct locations in Libya and Algeria. Over the next several years, he and others who followed found at least 400 more meteorites. The find locations were generally in regions known as regs or hamadas: flat, featureless areas covered only by small pebbles and minor amounts of sand. Dark-colored meteorites can be easily spotted in these places. In the case of several meteorite fields, such as Dar al Gani, Dhofar, and others, favorable light-colored geology consisting of basic rocks (clays, dolomites, and limestones) makes meteorites particularly easy to identify. Although meteorites had been sold commercially and collected by hobbyists for many decades, up to the time of the Saharan finds of the late 1980s and early 1990s most meteorites were deposited in or purchased by museums and similar institutions, where they were exhibited and made available for scientific research.
The sudden availability of large numbers of meteorites that could be found with relative ease in places that were readily accessible (especially compared to Antarctica) led to a rapid rise in the commercial collection of meteorites. This process was accelerated when, in 1997, meteorites coming from both the Moon and Mars were found in Libya. By the late 1990s, private meteorite-collecting expeditions had been launched throughout the Sahara. Specimens of the meteorites recovered in this way are still deposited in research collections, but most of the material is sold to private collectors. These expeditions have now brought the total number of well-described meteorites found in Algeria and Libya to more than 500. Northwest Africa Meteorite markets came into existence in the late 1990s, especially in Morocco. This trade was driven by Western commercialization and an increasing number of collectors. The meteorites were supplied by nomads and local people who combed the deserts looking for specimens to sell. Many thousands of meteorites have been distributed in this way, most of which lack any information about how, when, or where they were discovered. These are the so-called "Northwest Africa" meteorites. When classified, they are named "Northwest Africa" (abbreviated NWA) followed by a number. It is generally accepted that NWA meteorites originate in Morocco, Algeria, Western Sahara, Mali, and possibly even farther afield. Nearly all of these meteorites leave Africa through Morocco. Scores of important meteorites, including lunar and Martian ones, have been discovered and made available to science via this route. A few of the more notable meteorites recovered include Tissint and Northwest Africa 7034. Tissint was the first witnessed Martian meteorite fall in over fifty years; NWA 7034 is the oldest meteorite known to come from Mars and is a unique water-bearing regolith breccia.
Arabian Peninsula In 1999, meteorite hunters discovered that the deserts of southern and central Oman were also favorable for the collection of many specimens. The gravel plains in the Dhofar and Al Wusta regions of Oman, south of the sandy deserts of the Rub' al Khali, had yielded about 5,000 meteorites as of mid-2009. Included among these are a large number of lunar and Martian meteorites, making Oman a particularly important area for both scientists and collectors. Early expeditions to Oman were mainly conducted by commercial meteorite dealers; however, international teams of Omani and European scientists have now also collected specimens. The recovery of meteorites from Oman is currently prohibited by national law, but a number of international hunters continue to remove specimens now deemed national treasures. This new law provoked a small international incident, as its implementation preceded any public notification of the law, resulting in the prolonged imprisonment of a large group of meteorite hunters, primarily from Russia but also including members from the US and several other European countries. In human affairs Meteorites have figured in human culture since their earliest discovery, as ceremonial or religious objects, as the subject of writing about events occurring in the sky, and as a source of peril. The oldest known iron artifacts are nine small beads hammered from meteoritic iron. They were found in northern Egypt and have been securely dated to 3200 BC. Ceremonial or religious use Although the use of the metal found in meteorites is also recorded in the myths of many countries and cultures, where the celestial source was often acknowledged, scientific documentation only began in the last few centuries. Meteorite falls may have been the source of cultish worship.
The cult in the Temple of Artemis at Ephesus, one of the Seven Wonders of the Ancient World, possibly originated with the observation and recovery of a meteorite that was understood by contemporaries to have fallen to the earth from Jupiter, the principal Roman deity. There are reports that a sacred stone was enshrined at the temple that may have been a meteorite. The Black Stone set into the wall of the Kaaba has often been presumed to be a meteorite, but the little available evidence for this is inconclusive. Some Native Americans treated meteorites as ceremonial objects. In 1915, an iron meteorite was found in a Sinagua (c. 1100–1200 AD) burial cyst near Camp Verde, Arizona,
million human deaths, usually used in reference to the projected number of deaths from a nuclear explosion. The term was used by scientists and thinkers who strategized likely outcomes of all-out nuclear warfare. Exponentiation When units occur in exponentiation, such as in square and cubic forms, any multiples-prefix is considered part of the unit, and thus included in the exponentiation. 1 Mm² means one square megametre, the area of a square 1000 km × 1000 km (10¹² m²), and not 10⁶ m² (one million square metres). 1 Mm³ means one cubic megametre, the volume of a cube 1000 km on each side (10¹⁸ m³), and not 10⁶ m³ (one million cubic metres). Computing In some fields of computing, mega may sometimes denote 1,048,576 (2²⁰) information units, for example in a megabyte or a megaword, but it denotes 1,000,000 (10⁶) units of other quantities, for example transfer rates. The prefix mebi- has
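The rule that the prefix binds to the unit before exponentiation, and the decimal-mega versus binary-mebi distinction, can both be checked numerically. This is an illustrative sketch, not taken from the source:

```python
# The SI prefix is part of the unit, so it is raised to the power too:
# 1 Mm^2 = (10**6 m)**2 = 10**12 m^2, not 10**6 m^2.
MEGA = 10**6   # SI mega: 1,000,000
MEBI = 2**20   # binary mebi: 1,048,576

one_square_megametre_in_m2 = MEGA ** 2   # (10^6 m)^2 = 10^12 m^2
one_cubic_megametre_in_m3 = MEGA ** 3    # (10^6 m)^3 = 10^18 m^3

assert one_square_megametre_in_m2 == 10**12  # not 10**6
assert one_cubic_megametre_in_m3 == 10**18

# A "megabyte" can mean either 10^6 bytes (SI) or 2^20 bytes (a mebibyte):
print(MEGA)         # 1000000
print(MEBI)         # 1048576
print(MEBI / MEGA)  # 1.048576 — the ~4.9% gap between the two meanings
```

The assertions fail if the prefix were (incorrectly) applied after exponentiation, which is exactly the mistake the text warns against.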
of the Third Republic of Poland. In January 2001, he founded the Civic Platform political party with Donald Tusk and Andrzej Olechowski. He left Civic Platform for personal reasons and at the time of his death was an independent MP. He was a member of the Kashubian-Pomeranian Association and was later chosen as chairman of the Association "Polish Community". Maciej Płażyński was married to Elżbieta Płażyńska, and together they had three children: Jakub, Katarzyna, and Kacper. He was killed on 10 April 2010 when the Tupolev Tu-154 of the 36th Special Aviation Regiment, which was also carrying the President of Poland, Lech Kaczyński, crashed while landing at Smolensk-North airport near Smolensk, Russia, killing all aboard, including Płażyński and the President. Honours and awards In 2000, Płażyński was awarded the Order of Merit of the Italian Republic, First Class. He received the titles of honorary citizen of Młynary, Puck, Pionki and Lidzbark Warmiński. On 16 April 2010 he was posthumously awarded the Grand Cross of the Order of Polonia Restituta. He was also awarded a Gold Medal of Gloria Artis.
Lisa Jefferson and the FBI, related that he too was part of this group. They were joined by other passengers, including Lou Nacke, Rich Guadagno, Alan Beaven, Honor Elizabeth Wainio, Linda Gronlund, and William Cashman, along with flight attendants Sandra Bradshaw and Cee Cee Ross-Lyles, in discussing their options and voting on a course of action, ultimately deciding to storm the cockpit and take over the plane. According to the 9/11 Commission Report, after the plane's cockpit voice recorder was recovered, it revealed pounding and crashing sounds against the cockpit door and shouts and screams in English. "Let's get them!" a passenger cries. A hijacker shouts, "Allah akbar!" ("God is great"). Jarrah repeatedly pitched the plane to knock passengers off their feet, but the passengers apparently managed to invade the cockpit, where one was heard shouting, "In the cockpit. If we don't, we'll die." At 10:02 am, a hijacker ordered, "Pull it down! Pull it down!" The 9/11 Commission later reported that the plane's control wheel was turned hard to the right, causing it to roll onto its back and plow into an empty field in Shanksville, Pennsylvania, killing everyone on board. The plane was 20 minutes of flying time from its suspected target, the White House or the U.S. Capitol Building in Washington, D.C. According to Vice President Dick Cheney, President George W. Bush gave the order to shoot the plane down. Legacy Bingham is survived by his parents and the Hoagland family members who played a part in his upbringing, by his stepmother and various stepsiblings, and by his partner of six years, Paul Holm. Holm described Bingham as a brave, competitive man, saying, "He hated to lose—at anything." He was known to proudly display a scar he received after being gored at the Running of the Bulls in Pamplona, Spain. He is buried at Madronia Cemetery, Saratoga, California. U.S.
Senators John McCain and Barbara Boxer honored Bingham on September 17, 2001, in a ceremony for San Francisco Bay Area victims of the attacks, presenting a folded American flag to Paul Holm. The Mark Kendall Bingham Memorial Tournament (referred to as the Bingham Cup), a biennial international rugby union competition predominantly for gay and bisexual men, was established in 2002 in his memory. Bingham, along with the other passengers on Flight 93, was posthumously awarded the Arthur Ashe Courage Award in 2002. The Eureka Valley Recreation Center's Gymnasium in San Francisco was renamed the Mark Bingham Gymnasium in August 2002. Singer Melissa Etheridge dedicated the song "Tuesday Morning" in 2004 to his memory. Beginning in 2005, the Mark Bingham Award for Excellence in Achievement has been awarded by the California Alumni Association of the University of California, Berkeley to a young alumnus or alumna at its annual Charter Gala. At the National 9/11 Memorial, Bingham and other passengers from Flight 93 are memorialized at the South Pool, on Panel S-67. At the Flight 93 National Memorial in Pennsylvania, Bingham's name is located on one of the 40 panels of polished,
fraternity brother Joseph Salama's wedding. He arrived at Terminal A at Newark International Airport at 7:40 am, ran to Gate 17, and was the last passenger to board United Airlines Flight 93, taking seat 4D, next to passenger Tom Burnett. United Flight 93 was scheduled to depart at 8:00 am, but the Boeing 757 did not depart until 42 minutes later due to runway traffic delays. Four minutes later, American Airlines Flight 11 crashed into the World Trade Center's North Tower. Seventeen minutes after that, at 9:03 am, as United Flight 175 crashed into the South Tower, United 93 climbed to cruising altitude, heading west over New Jersey and into Pennsylvania. At 9:25 am, Flight 93 was above eastern Ohio, and pilots Jason Dahl and LeRoy Homer received an alert, "Beware of cockpit intrusion," on the cockpit computer device ACARS (Aircraft Communications Addressing and Reporting System). Three minutes later, Cleveland controllers could hear screams over the cockpit's open microphone. Moments later, the hijackers, led by the Lebanese hijacker Ziad Samir Jarrah, took over the plane's controls and told passengers, "Keep remaining sitting. We have a bomb on board." Bingham and the other passengers were herded into the back of the plane. Within six minutes, the plane changed course and headed for Washington, D.C. Several of the passengers made phone calls to loved ones, who informed them about the two planes that had crashed into the World Trade Center. After the hijackers veered the plane sharply south, the passengers decided to act. Bingham, along with Todd Beamer, Tom Burnett, and Jeremy Glick, formed a plan to take the plane back from the hijackers. They relayed this plan to their loved ones and the authorities via telephone. Bingham got through to his aunt's home in California. Bingham stated, "This is Mark. I want to let you guys know that I love you, in case I don't see you again...I'm on United Airlines, Flight 93. It's being hijacked."
According to The Week, Hoagland formed the impression that her son spoke "confidentially" with a fellow passenger, to form a plan to retake the plane. According to ABC News, the call cut off after about three minutes. Hoagland, after seeing news reports of the plane's hijacking, called him back and left two messages for him, calmly saying, "Mark, this is your mom. The news is that it's been hijacked by terrorists. They are planning to probably use the plane as a target to hit some site on the ground. I would say go ahead and do everything you can
the place of articulation and the degree of phonation of voicing are considered separately from manner, as being independent parameters. Homorganic consonants, which have the same place of articulation, may have different manners of articulation. Often nasality and laterality are included in manner, but some phoneticians, such as Peter Ladefoged, consider them to be independent. Broad classifications Manners of articulation with substantial obstruction of the airflow (stops, fricatives, affricates) are called obstruents. These are prototypically voiceless, but voiced obstruents are extremely common as well. Manners without such obstruction (nasals, liquids, approximants, and also vowels) are called sonorants because they are nearly always voiced. Voiceless sonorants are uncommon, but are found in Welsh and Classical Greek (the spelling "rh"), in Standard Tibetan (the "lh" of Lhasa), and the "wh" in those dialects of English that distinguish "which" from "witch". Sonorants may also be called resonants, and some linguists prefer that term, restricting the word 'sonorant' to non-vocoid resonants (that is, nasals and liquids, but not vowels or semi-vowels). Another common distinction is between occlusives (stops, nasals and affricates) and continuants (all else). Stricture From greatest to least stricture, speech sounds may be classified along a cline as stop consonants (with occlusion, or blocked airflow), fricative consonants (with partially blocked and therefore strongly turbulent airflow), approximants (with only slight turbulence), tense vowels, and finally lax vowels (with full unimpeded airflow). Affricates often behave as if they were intermediate between stops and fricatives, but phonetically they are sequences of a stop and fricative. Over time, sounds in a language may move along the cline toward less stricture in a process called lenition or towards more stricture in a process called fortition. 
Other parameters Sibilants are distinguished from other fricatives by the shape of the tongue and how the airflow is directed over the teeth. Fricatives at coronal places of articulation may be sibilant or non-sibilant, sibilants being the more common. Flaps (also called taps) are similar to very brief stops. However, their articulation and behavior are distinct enough to be considered a separate manner, rather than just length. The main articulatory difference between flaps and stops is that, due to the greater length of stops compared to flaps, a build-up of air pressure occurs behind a stop which does not occur behind a flap. This means that when the stop is released, there is a burst of air as the pressure is relieved, while for flaps there is no such burst. Trills involve the vibration of one of the speech organs. Since trilling is a separate parameter from stricture, the two may be combined. Increasing the stricture of a typical trill results in a trilled fricative. Trilled affricates are also known. Nasal airflow may be added as an independent parameter to any speech sound. It is most commonly found in nasal occlusives and nasal vowels, but nasalized fricatives, taps, and approximants are also found. When a sound is not nasal, it is called oral. Laterality is the release of airflow at the side of the tongue. This can be combined with other manners, resulting in lateral approximants (such as the pronunciation of the letter L in the English word "let"), lateral flaps, and lateral fricatives and affricates. Individual manners Stop, often called a plosive, is an oral occlusive, where there is occlusion (blocking) of the oral vocal tract, and no nasal air flow, so the air flow stops completely. Examples include English (voiceless) and (voiced). If the consonant is voiced, the voicing is the only sound made during occlusion; if it is voiceless, a stop is completely silent. 
What we hear as a /p/ or /k/ is the effect that the onset of the occlusion has on the preceding vowel, as well as the release burst and its effect on the following vowel. The shape and position of the tongue (the place of articulation) determine the resonant cavity that gives different stops their characteristic sounds. All languages have stops. Nasal, a nasal occlusive, where there is occlusion of the oral tract, but air passes through the nose. The shape and position of the tongue determine the resonant cavity that gives different nasals their characteristic sounds. Examples include English . Nearly all languages have nasals, the only exceptions being in the area of Puget Sound and a single language on Bougainville Island. Fricative, sometimes called spirant, where there is continuous frication (turbulent and noisy airflow) at the place of articulation. Examples include English (voiceless), (voiced), etc. Most languages have fricatives, though many have only an . However, the Indigenous Australian languages are almost completely devoid of fricatives of any kind. Sibilants are a type of fricative where the airflow is guided by a groove in the tongue toward the teeth,
the r-like sounds (taps and trills), and the sibilancy of fricatives. The concept of manner is mainly used in the discussion of consonants, although the movement of the articulators will also greatly alter the resonant properties of the vocal tract, thereby changing the formant structure of speech sounds that is crucial for the identification of vowels.
to the north by the Chelif River. It receives 350 mm of rainfall per year. During French colonization, viticulture was introduced on the plateau. After the country's independence, it was replaced by irrigated market gardening and the culture of citrus fruits and cereals. However, in certain sectors east of Mostaganem, the replacement of the vineyards caused the appearance of small dunes as a consequence of the resumption of soil movement. History In 1984 Relizane Province was carved out of its territory. Administrative divisions The province is divided into 10 districts (daïras), which are further divided into 32 communes or municipalities. Districts Achacha Aïn Nouïssy Aïn Tédelès Bouguirat Hassi Mamèche Kheïr Eddine Mesra Mostaganem Sidi Ali Sidi Lakhdar Communes Achacha (Achaacha)
more than two monitors without special hardware, or use special graphics technologies called SLI (for Nvidia) and CrossFire (for AMD). These allow two to four graphics cards to be linked together to provide better performance in graphically intensive computing tasks, such as gaming and video editing. In newer motherboards, the M.2 slots are for SSDs and/or wireless network interface controllers. Temperature and reliability Motherboards are generally air cooled, with heat sinks often mounted on larger chips in modern motherboards. Insufficient or improper cooling can cause damage to the internal components of the computer or cause it to crash. Passive cooling, or a single fan mounted on the power supply, was sufficient for many desktop computer CPUs until the late 1990s; since then, most have required CPU fans mounted on heat sinks, due to rising clock speeds and power consumption. Most motherboards have connectors for additional computer fans, integrated temperature sensors to detect motherboard and CPU temperatures, and controllable fan connectors that the BIOS or operating system can use to regulate fan speed. Alternatively, computers can use a water cooling system instead of many fans. Some small form factor computers and home theater PCs designed for quiet and energy-efficient operation boast fan-less designs. This typically requires the use of a low-power CPU, as well as a careful layout of the motherboard and other components to allow for heat sink placement. A 2003 study found that some spurious computer crashes and general reliability issues, ranging from screen image distortions to I/O read/write errors, could be attributed not to software or peripheral hardware but to aging capacitors on PC motherboards. Ultimately this was shown to be the result of a faulty electrolyte formulation, an issue termed capacitor plague. Modern motherboards use electrolytic capacitors to filter the DC power distributed around the board.
These capacitors age at a temperature-dependent rate, as their water-based electrolytes slowly evaporate. This can lead to loss of capacitance and subsequent motherboard malfunctions due to voltage instabilities. While most capacitors are rated for 2000 hours of operation at 105 °C, their expected design life roughly doubles for every 10 °C below this. At 65 °C a lifetime of 3 to 4 years can be expected. However, many manufacturers deliver substandard capacitors, which significantly reduce life expectancy. Inadequate case cooling and elevated temperatures around the CPU socket exacerbate this problem. With top blowers, the motherboard components can be kept under 95 °C, effectively doubling the motherboard lifetime. Mid-range and high-end motherboards, on the other hand, use solid capacitors exclusively. For every 10 °C less, their average lifespan is multiplied approximately by three, resulting in a 6-times higher lifetime expectancy at 65 °C. These capacitors may be rated for 5000, 10000 or 12000 hours of operation at 105 °C, extending the projected lifetime in comparison with standard solid capacitors. In desktop PCs and notebook computers, motherboard cooling and monitoring solutions are usually based on a Super I/O chip or an embedded controller. Bootstrapping using the Basic Input/Output System Motherboards contain a ROM (and later EPROM, EEPROM, or NOR flash) used to initialize hardware devices and load an operating system from a peripheral device. Microcomputers such as the Apple II and IBM PC used ROM chips mounted in sockets on the motherboard. At power-up, the central processing unit would load its program counter with the address of the boot ROM and start executing instructions from it. These instructions initialized and tested the system hardware, displayed system information on the screen, performed RAM checks, and then loaded an operating system from a peripheral device.
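The final step of that firmware sequence, scanning peripherals for a bootable image, can be sketched as below. This is a minimal illustration of the idea, not the logic of any particular BIOS; the function and device names are invented:

```python
def select_boot_device(devices):
    """Walk the firmware's boot order and return the name of the first
    device that reports a bootable image. None means the firmware must
    fall back to another ROM routine or display an error message."""
    for name, has_bootable_image in devices:
        if has_bootable_image:
            return name
    return None

# Typical early-PC boot order: floppy first, then fixed disk.
boot_from = select_boot_device([("floppy", False), ("hard disk", True)])
# -> "hard disk"; with no bootable device the result is None
```
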
If none was available, then the computer would perform tasks from other ROM stores or display an error message, depending on the model and design of the computer. For example, both the
type of backplane system. The most popular computers of the 1980s such as the Apple II and IBM PC had published schematic diagrams and other documentation which permitted rapid reverse-engineering and third-party replacement motherboards. Usually intended for building new computers compatible with the exemplars, many motherboards offered additional performance or other features and were used to upgrade the manufacturer's original equipment. During the late 1980s and early 1990s, it became economical to move an increasing number of peripheral functions onto the motherboard. In the late 1980s, personal computer motherboards began to include single ICs (also called Super I/O chips) capable of supporting a set of low-speed peripherals: PS/2 keyboard and mouse, floppy disk drive, serial ports, and parallel ports. By the late 1990s, many personal computer motherboards included consumer-grade embedded audio, video, storage, and networking functions without the need for any expansion cards at all; higher-end systems for 3D gaming and computer graphics typically retained only the graphics card as a separate component. Business PCs, workstations, and servers were more likely to need expansion cards, either for more robust functions, or for higher speeds; those systems often had fewer embedded components. Laptop and notebook computers that were developed in the 1990s integrated the most common peripherals. This even included motherboards with no upgradeable components, a trend that would continue as smaller systems were introduced after the turn of the century (like the tablet computer and the netbook). Memory, processors, network controllers, power source, and storage would be integrated into some systems. Design A motherboard provides the electrical connections by which the other components of the system communicate. Unlike a backplane, it also contains the central processing unit and hosts other subsystems and devices. 
A typical desktop computer has its microprocessor, main memory, and other essential components connected to the motherboard. Other components such as external storage, controllers for video display and sound, and peripheral devices may be attached to the motherboard as plug-in cards or via cables; in modern microcomputers, it is increasingly common to integrate some of these peripherals into the motherboard itself. An important component of a motherboard is the microprocessor's supporting chipset, which provides the supporting interfaces between the CPU and the various buses and external components. This chipset determines, to an extent, the features and capabilities of the motherboard. Modern motherboards include:
- CPU sockets (or CPU slots), in which one or more microprocessors may be installed. In the case of CPUs in ball grid array packages, such as the VIA Nano and the Goldmont Plus, the CPU is directly soldered to the motherboard.
- Memory slots, into which the system's main memory is installed, typically in the form of DIMM modules containing DRAM chips, which may be DDR3, DDR4, or DDR5.
- The chipset, which forms an interface between the CPU, main memory, and peripheral buses.
- Non-volatile memory chips (usually Flash ROM in modern motherboards) containing the system's firmware or BIOS.
- The clock generator, which produces the system clock signal to synchronize the various components.
- Slots for expansion cards (the interface to the system via the buses supported by the chipset).
- Power connectors, which receive electrical power from the computer power supply and distribute it to the CPU, chipset, main memory, and expansion cards. Some graphics cards (e.g. GeForce 8 and Radeon R600) require more power than the motherboard can provide, and thus dedicated connectors have been introduced to attach them directly to the power supply.
- Connectors for hard disk drives, optical disc drives, or solid-state drives, typically SATA and NVMe now.
Additionally, nearly all motherboards include logic and connectors to support commonly used input devices, such as USB for mice and keyboards. Early personal computers such as the Apple II or IBM PC included only this minimal peripheral support on the motherboard. Occasionally video interface hardware was also integrated into the motherboard; for example, on the Apple II and rarely on IBM-compatible computers such as the IBM PCjr. Additional peripherals such as disk controllers and serial ports were provided as expansion cards. Given the high thermal design power of high-speed computer CPUs and components, modern motherboards nearly always include heat sinks and mounting points for fans to dissipate excess heat. Form factor Motherboards are produced in a variety of sizes and shapes called form factors, some of which are specific to individual computer manufacturers. However, the motherboards used in IBM-compatible systems are designed to fit various case sizes. Most desktop computer motherboards use the ATX standard form factor — even those found in Macintosh and Sun computers, which have not been built from commodity components. The case, motherboard, and power supply unit (PSU) must all have matching form factors, though some smaller form factor motherboards of the same family will fit larger cases. For example, an ATX case will usually accommodate a microATX motherboard. Laptop computers generally use highly integrated, miniaturized, and customized motherboards. This is one of the reasons that laptop computers are difficult to upgrade and expensive to repair. Often the failure of one laptop component requires the replacement of the entire motherboard, which is usually more expensive than a desktop motherboard. CPU sockets A CPU (central processing unit) socket or slot is an electrical component that attaches to a printed circuit board (PCB) and is designed to house a CPU (also called a microprocessor).
It is a special type of integrated circuit socket designed for very high pin counts. A CPU socket provides many functions, including a physical structure to support the CPU, support for a heat sink, facilitating replacement (as well as reducing cost), and most importantly, forming an electrical interface both with the CPU and the PCB. CPU sockets are found in most desktop and server computers (laptops typically use surface-mount CPUs), particularly those based on the Intel x86 architecture. The CPU socket type and motherboard chipset must support the CPU series and speed. Integrated peripherals With the steadily declining cost and size of integrated circuits, it is now possible to include support for many peripherals on the motherboard. By combining many functions on one PCB, the physical size and total cost of the system may be reduced; highly integrated motherboards are thus especially popular in small form factor and budget computers. Integrated peripherals commonly include:
- Disk controllers for SATA drives, and historically PATA drives
- Historically, a floppy-disk controller
- An integrated graphics controller supporting 2D and 3D graphics, with VGA, DVI, HDMI, DisplayPort, and TV output
- An integrated sound card supporting 8-channel (7.1) audio and S/PDIF output
- An Ethernet network controller for connection to a LAN and the Internet
- A USB controller
- A wireless network interface controller
- A Bluetooth controller
- Temperature, voltage, and fan-speed sensors that allow software to monitor the health of
height of artifice is the Maniera painter's penchant for deliberately misappropriating a quotation. Agnolo Bronzino and Giorgio Vasari exemplify this strain of Maniera that lasted from about 1530 to 1580. Based largely at courts and in intellectual circles around Europe, Maniera art couples exaggerated elegance with exquisite attention to surface and detail: porcelain-skinned figures recline in an even, tempered light, acknowledging the viewer with a cool glance, if they make eye contact at all. The Maniera subject rarely displays much emotion, and for this reason works exemplifying this trend are often called 'cold' or 'aloof.' This is typical of the so-called "stylish style" or Maniera in its maturity. Spread of Mannerism The cities of Rome, Florence, and Mantua were Mannerist centers in Italy. Venetian painting pursued a different course, represented by Titian in his long career. A number of the earliest Mannerist artists who had been working in Rome during the 1520s fled the city after the Sack of Rome in 1527. As they spread out across the continent in search of employment, their style was disseminated throughout Italy and Northern Europe. The result was the first international artistic style since the Gothic. Other parts of Northern Europe did not have the advantage of such direct contact with Italian artists, but the Mannerist style made its presence felt through prints and illustrated books. European rulers, among others, purchased Italian works, while northern European artists continued to travel to Italy, helping to spread the Mannerist style. Individual Italian artists working in the North gave birth to a movement known as Northern Mannerism. Francis I of France, for example, was presented with Bronzino's Venus, Cupid, Folly and Time. The style waned in Italy after 1580, as a new generation of artists, including the Carracci brothers, Caravaggio and Cigoli, revived naturalism.
Walter Friedlaender identified this period as "anti-mannerism", just as the early Mannerists were "anti-classical" in their reaction away from the aesthetic values of the High Renaissance; today the Carracci brothers and Caravaggio are agreed to have begun the transition to Baroque-style painting, which was dominant by 1600. Outside of Italy, however, Mannerism continued into the 17th century. In France, where Rosso traveled to work for the court at Fontainebleau, it is known as the "Henry II style" and had a particular impact on architecture. Other important continental centers of Northern Mannerism include the court of Rudolf II in Prague, as well as Haarlem and Antwerp. Mannerism as a stylistic category is less frequently applied to English visual and decorative arts, where native labels such as "Elizabethan" and "Jacobean" are more commonly applied. Seventeenth-century Artisan Mannerism is one exception, applied to architecture that relies on pattern books rather than on existing precedents in Continental Europe. Of particular note is the Flemish influence at Fontainebleau that combined the eroticism of the French style with an early version of the vanitas tradition that would dominate seventeenth-century Dutch and Flemish painting. Prevalent at this time was the pittore vago, a description of painters from the north who entered the workshops in France and Italy to create a truly international style. Sculpture As in painting, early Italian Mannerist sculpture was very largely an attempt to find an original style that would top the achievement of the High Renaissance, which in sculpture essentially meant Michelangelo, and much of the struggle to achieve this was played out in commissions to fill other places in the Piazza della Signoria in Florence, next to Michelangelo's David.
Baccio Bandinelli took over the project of Hercules and Cacus from the master himself, but it was little more popular then than it is now, and maliciously compared by Benvenuto Cellini to "a sack of melons", though it had a long-lasting effect in apparently introducing relief panels on the pedestal of statues. Like other works of his and other Mannerists, it removes far more of the original block than Michelangelo would have done. Cellini's bronze Perseus with the head of Medusa is certainly a masterpiece, designed with eight angles of view, another Mannerist characteristic, and artificially stylized in comparison with the Davids of Michelangelo and Donatello. Originally a goldsmith, his famous gold and enamel Salt Cellar (1543) was his first sculpture, and shows his talent at its best. Small bronze figures for collector's cabinets, often mythological subjects with nudes, were a popular Renaissance form at which Giambologna, originally Flemish but based in Florence, excelled in the later part of the century. He also created life-size sculptures, of which two entered the collection in the Piazza della Signoria. He and his followers devised elegant elongated examples of the figura serpentinata, often of two intertwined figures, that were interesting from all angles. Early theorists Giorgio Vasari Giorgio Vasari's opinions about the art of painting emerge in the praise he bestows on fellow artists in his multi-volume Lives of the Artists: he believed that excellence in painting demanded refinement, richness of invention (invenzione), expressed through virtuoso technique (maniera), and wit and study that appeared in the finished work, all criteria that emphasized the artist's intellect and the patron's sensibility. The artist was now no longer just a trained member of a local Guild of St Luke. Now he took his place at court alongside scholars, poets, and humanists, in a climate that fostered an appreciation for elegance and complexity. 
The coat-of-arms of Vasari's Medici patrons appears at the top of his portrait, quite as if it were the artist's own. The framing of the woodcut image of Vasari's Lives would be called "Jacobean" in an English-speaking milieu. In it, Michelangelo's Medici tombs inspire the anti-architectural "architectural" features at the top, the papery pierced frame, the satyr nudes at the base. As a mere frame it is extravagant: Mannerist, in short. Gian Paolo Lomazzo Another literary figure from the period is Gian Paolo Lomazzo, who produced two works—one practical and one metaphysical—that helped define the Mannerist artist's self-conscious relation to his art. His Trattato dell'arte della pittura, scoltura et architettura (Milan, 1584) is in part a guide to contemporary concepts of decorum, which the Renaissance inherited in part from Antiquity but Mannerism elaborated upon. Lomazzo's systematic codification of aesthetics, which typifies the more formalized and academic approaches typical of the later 16th century, emphasized a consonance between the functions of interiors and the kinds of painted and sculpted decors that would be suitable. Iconography, often convoluted and abstruse, is a more prominent element in the Mannerist styles. His less practical and more metaphysical Idea del tempio della pittura (The ideal temple of painting, Milan, 1590) offers a description along the lines of the "four temperaments" theory of human nature and personality, defining the role of individuality in judgment and artistic invention. Characteristics of artworks created during the Mannerist period Mannerism was an anti-classical movement which differed greatly from the aesthetic ideologies of the Renaissance. Though Mannerism was initially received positively based on the writings of Vasari, it was later regarded in a negative light because it was viewed solely as "an alteration of natural truth and a trite repetition of natural formulas."
As an artistic movement, Mannerism involves many characteristics that are unique and specific to experimentation with how art is perceived. Below is a list of many specific characteristics that Mannerist artists would employ in their artworks.
- Elongation of figures: often Mannerist work featured the elongation of the human figure; occasionally this contributed to the bizarre imagery of some Mannerist art.
- Distortion of perspective: in paintings, the distortion of perspective explored the ideals for creating a perfect space. However, the pursuit of perfection sometimes led to the creation of unique imagery. One way in which distortion was explored was through the technique of foreshortening. At times, when extreme distortion was utilized, it would render the image nearly impossible to decipher.
- Black backgrounds: Mannerist artists often utilized flat black backgrounds to present a full contrast of contours in order to create dramatic scenes. Black backgrounds also contributed to creating a sense of fantasy within the subject matter.
- Use of darkness and light: many Mannerists were interested in capturing the essence of the night sky through intentional illumination, often creating fantastical scenes. Notably, special attention was paid to torchlight and moonlight to create dramatic scenes.
- Sculptural forms: Mannerism was greatly influenced by sculpture, which gained popularity in the sixteenth century. As a result, Mannerist artists often based their depictions of human bodies on sculptures and prints. This allowed Mannerist artists to focus on creating dimension.
- Clarity of line: the attention paid to clean outlines of figures was prominent within Mannerism and differed largely from the Baroque and High Renaissance. The outlines of figures often allowed for more attention to detail.
- Composition and space: Mannerist artists rejected the ideals of the Renaissance, notably the technique of one-point perspective.
Instead, there was an emphasis on atmospheric effects and distortion of perspective. The use of space in Mannerist works instead privileged crowded compositions with various forms and figures, or scant compositions with emphasis on black backgrounds.
- Mannerist movement: the interest in the study of human movement often led Mannerist artists to render a unique type of movement linked to serpentine positions. These positions often seem to anticipate a figure's next movement because of their unstable, in-motion quality. In addition, this technique reflects the artist's experimentation with form.
- Painted frames: in some Mannerist works, painted frames were utilized to blend in with the background of paintings and, at times, contribute to the overall composition of the artwork. This is at times prevalent when there is special attention paid to ornate detailing.
- Atmospheric effects: many Mannerists utilized the technique of sfumato, known as "the rendering of soft and hazy contours or surfaces," in their paintings for rendering the streaming of light.
- Mannerist colour: a unique aspect of Mannerism was that, in addition to the experimentation with form, composition, and light, much of the same curiosity was applied to color. Many artworks toyed with pure and intense hues of blues, greens, pinks, and yellows, which at times detract from the overall design of artworks, and at other times complement it. Additionally, when rendering skin tones, artists would often concentrate on creating overly creamy, light complexions, often utilizing undertones of blue.
Mannerist artists and examples of their works Jacopo da Pontormo Jacopo da Pontormo's work is one of the most important contributions to Mannerism. He often drew his subject matter from religious narratives; heavily influenced by the works of Michelangelo, he frequently alludes to or uses sculptural forms as models for his compositions.
A well-known element of his work is the rendering of gazes by various figures, which often pierce out at the viewer in various directions. Dedicated to his work, Pontormo often expressed anxiety about its quality and was known to work slowly and methodically. His legacy is highly regarded, as he influenced artists such as Agnolo Bronzino and the aesthetic ideals of late Mannerism. Pontormo's Joseph in Egypt, painted in 1517, portrays a running narrative of four Biblical scenes in which Joseph reconnects with his family. On the left side of the composition, Pontormo depicts a scene of Joseph introducing his family to the Pharaoh of Egypt. On the right, Joseph rides on a rolling bench, as cherubs fill the composition around him in addition to other figures and large rocks on a path in the distance. Above these scenes is a spiral staircase, up which Joseph guides one of his sons to his mother at the top. The final scene, on the right, depicts the dying Jacob as his sons watch nearby. Pontormo's Joseph in Egypt features many Mannerist elements. One element is the utilization of incongruous colors, such as the various shades of pinks and blues which make up a majority of the canvas. An additional element of Mannerism is the incoherent handling of time in the story of Joseph through its various scenes and use of space. Through the inclusion of the four different narratives, Pontormo creates a cluttered composition and an overall sense of busyness. Rosso Fiorentino and the School of Fontainebleau Rosso Fiorentino, who had been a fellow pupil of Pontormo in the studio of Andrea del Sarto, in 1530 brought Florentine Mannerism to Fontainebleau, where he became one of the founders of French 16th-century Mannerism, popularly known as the School of Fontainebleau. The examples of a rich and hectic decorative style at Fontainebleau further disseminated the Italian style through the medium of engravings to Antwerp, and from there throughout Northern Europe, from London to Poland.
Mannerist design was extended to luxury goods like silver and carved furniture. A sense of tense, controlled emotion expressed in elaborate symbolism and allegory, and an ideal of female beauty characterized by elongated proportions, are features of this style. Agnolo Bronzino Agnolo Bronzino was a pupil of Pontormo, whose style he absorbed so thoroughly that the attribution of many artworks between the two remains uncertain. During his career, Bronzino also collaborated with Vasari as a set designer for the production "Comedy of Magicians", where he painted many portraits. Bronzino's work was sought after, and he enjoyed great success when he became a court painter for the Medici family in 1539. A unique Mannerist characteristic of Bronzino's work was the rendering of milky complexions. In the painting Venus, Cupid, Folly and Time, Bronzino portrays an erotic scene that leaves the viewer with more questions than answers. In the foreground, Cupid and Venus are nearly engaged in a kiss, but pause as if caught in the act. Above the pair are mythological figures: Father Time on the right, who pulls a curtain to reveal the pair, and a representation of the goddess of the night on the left. The composition also involves a grouping of masks, a hybrid creature composed of the features of a girl and a serpent, and a man depicted in agonizing pain. Many theories have been advanced for the painting, such as that it conveys the dangers of syphilis, or that it functioned as a court game. Mannerist portraits by Bronzino are distinguished by a serene elegance and meticulous attention to detail. As a result, Bronzino's sitters have been said to project an aloofness and marked emotional distance from the viewer. There is also a virtuosic concentration on capturing the precise pattern and sheen of rich textiles. Specifically, within Venus, Cupid, Folly and Time, Bronzino utilizes the tactics of Mannerist movement, attention to detail, color, and sculptural forms.
Evidence of Mannerist movement is apparent in the awkward movements of Cupid and Venus, as they contort their bodies to partly embrace. In particular, Bronzino paints the figures' complexions a perfect porcelain white, with a smooth effacement of their musculature that references the smoothness of sculpture. Alessandro Allori Alessandro Allori's (1535–1607) Susanna and the Elders (below) is distinguished by latent eroticism and consciously brilliant still life detail, in a crowded, contorted composition. Jacopo Tintoretto Jacopo Tintoretto has been known for his vastly different contributions to Venetian painting after the legacy of Titian. His work, which differed greatly from that of his predecessors, was criticized by Vasari for its "fantastical, extravagant, bizarre style." Within his work, Tintoretto adopted Mannerist elements that distanced him from the classical notion of Venetian painting, as he often created artworks which contained elements of fantasy while retaining naturalism. Other unique elements of Tintoretto's work include his attention to color through the regular utilization of rough brushstrokes and experimentation with pigment to create illusion. An artwork that is associated with Mannerist characteristics is the Last Supper, commissioned by Michele Alabardi for San Giorgio Maggiore in 1591. In Tintoretto's Last Supper, the scene is portrayed from the angle of a group of people along the right side of the composition. On the left side of the painting, Christ and the Apostles occupy one side of the table, singling out Judas. Within the dark space there are few sources of light; light is emitted by Christ's halo and by the torch hanging above the table. In its distinct composition, the Last Supper displays Mannerist characteristics. One characteristic that Tintoretto utilizes is a black background.
Though the painting gives some indication of an interior space through the use of perspective, the edges of the composition are mostly shrouded in shadow, which provides drama for the central scene of the Last Supper. Additionally, Tintoretto utilizes spotlight effects with light, especially with the halo of Christ and the hanging torch above the table. A third Mannerist characteristic that Tintoretto employs is the atmospheric effect of figures shaped from smoke that float about the composition. El Greco El Greco attempted to express religious emotion with exaggerated traits. After the realistic depiction of the human form and the mastery of perspective achieved in the High Renaissance, some artists started to deliberately distort proportions in disjointed, irrational space for emotional and artistic effect. El Greco remains a deeply original artist. He has been characterized by modern scholars as an artist so individual that he belongs to no conventional school. Key aspects of Mannerism in El Greco include the jarring "acid" palette, elongated and tortured anatomy, irrational perspective and light, and obscure and troubling iconography. El Greco's style was a culmination of unique developments based on his Greek heritage and travels to Spain and Italy. El Greco's work reflects a multitude of styles, including Byzantine elements as well as the influence of Caravaggio and Parmigianino, in addition to Venetian coloring. An important element is his attention to color, which he regarded as one of the most important aspects of his painting. Over the course of his career, El Greco's work remained in high demand as he completed important commissions in locations such as the Colegio de la Encarnación de Madrid. El Greco's unique painting style and connection to Mannerist characteristics are especially prevalent in the work Laocoön.
Painted in 1610, it depicts the mythological tale of Laocoön, who warned the Trojans about the danger of the wooden horse presented by the Greeks as a peace offering to the goddess Minerva. In revenge, Minerva summoned serpents to kill Laocoön and his two sons. Instead of being set against the backdrop of Troy, El Greco situated the scene near Toledo, Spain, in order to "universalize the story by drawing out its relevance for the contemporary world." El Greco's unique style in Laocoön exemplifies many Mannerist characteristics. Prevalent is the elongation of many of the human forms throughout the composition, in conjunction with their serpentine movement, which provides a sense of elegance. An additional element of Mannerist style is the atmospheric effects, in which El Greco creates a hazy sky and a blurring of the landscape in the background. Benvenuto Cellini Benvenuto Cellini created the Cellini Salt Cellar of gold and enamel in 1540, featuring Poseidon and Amphitrite (water and earth) placed in uncomfortable positions and with elongated proportions. It is considered a masterpiece of Mannerist sculpture. Lavinia Fontana Lavinia Fontana (1552–1614) was a Mannerist portraitist often acknowledged to be the first female career artist in Western Europe. She was appointed Portraitist in Ordinary at the Vatican. Her style was influenced by the Carracci family of painters and by the colors of the Venetian School. She is known for her portraits of
his book Lives of the Most Eminent Painters, Sculptors, and Architects, Giorgio Vasari noted that Michelangelo once stated: "Those who are followers can never pass by whom they follow". The competitive spirit The competitive spirit was cultivated by patrons, who encouraged the artists they sponsored to emphasize virtuosic technique and to compete with one another for commissions. It drove artists to look for new approaches: dramatically illuminated scenes, elaborate clothes and compositions, elongated proportions, highly stylized poses, and a lack of clear perspective. Leonardo da Vinci and Michelangelo were each given a commission by Gonfaloniere Piero Soderini to decorate a wall in the Hall of Five Hundred in Florence. These two artists were set to paint side by side and compete against each other, fueling the incentive to be as innovative as possible. Early mannerism The early Mannerists in Florence—especially the students of Andrea del Sarto such as Jacopo da Pontormo and Rosso Fiorentino—are notable for elongated forms, precariously balanced poses, a collapsed perspective, irrational settings, and theatrical lighting. Parmigianino (a student of Correggio) and Giulio Romano (Raphael's head assistant) were moving in similarly stylized aesthetic directions in Rome. These artists had matured under the influence of the High Renaissance, and their style has been characterized as a reaction to or exaggerated extension of it. Instead of studying nature directly, younger artists began studying Hellenistic sculpture and paintings of masters past. Therefore, this style is often identified as "anti-classical", yet at the time it was considered a natural progression from the High Renaissance. The earliest experimental phase of Mannerism, known for its "anti-classical" forms, lasted until about 1540 or 1550. Marcia B. Hall, professor of art history at Temple University, notes in her book After Raphael that Raphael's premature death marked the beginning of Mannerism in Rome.
In past analyses, it has been noted that Mannerism arose in the early 16th century contemporaneously with a number of other social, scientific, religious and political movements, such as Copernican heliocentrism, the Sack of Rome in 1527, and the Protestant Reformation's increasing challenge to the power of the Catholic Church. Because of this, the style's elongated and distorted forms were once interpreted as a reaction to the idealized compositions prevalent in High Renaissance art. This explanation for the radical stylistic shift c. 1520 has fallen out of scholarly favor, though early Mannerist art is still sharply contrasted with High Renaissance conventions; the accessibility and balance achieved by Raphael's School of Athens no longer seemed to interest young artists. High maniera The second period of Mannerism is commonly differentiated from the earlier, so-called "anti-classical" phase. Subsequent Mannerists stressed intellectual conceits and artistic virtuosity, features that have led later critics to accuse them of working in an unnatural and affected "manner" (maniera). Maniera artists looked to their older contemporary Michelangelo as their principal model; theirs was an art imitating art, rather than an art imitating nature. Art historian Sydney Joseph Freedberg argues that the intellectualizing aspect of maniera art involves expecting its audience to notice and appreciate this visual reference: a familiar figure in an unfamiliar setting enclosed between "unseen, but felt, quotation marks". The height of artifice is the Maniera painter's penchant for deliberately misappropriating a quotation. Agnolo Bronzino and Giorgio Vasari exemplify this strain of Maniera, which lasted from about 1530 to 1580.
Based largely at courts and in intellectual circles around Europe, Maniera art couples exaggerated elegance with exquisite attention to surface and detail: porcelain-skinned figures recline in an even, tempered light, acknowledging the viewer with a cool glance, if they make eye contact at all. The Maniera subject rarely displays much emotion, and for this reason works exemplifying this trend are often called 'cold' or 'aloof.' This is typical of the so-called "stylish style" or Maniera in its maturity. Spread of Mannerism The cities Rome, Florence, and Mantua were Mannerist centers in Italy. Venetian painting pursued a different course, represented by Titian in his long career. A number of the earliest Mannerist artists who had been working in Rome during the 1520s fled the city after the Sack of Rome in 1527. As they spread out across the continent in search of employment, their style was disseminated throughout Italy and Northern Europe. The result was the first international artistic style since the Gothic. Other parts of Northern Europe did not have the advantage of such direct contact with Italian artists, but the Mannerist style made its presence felt through prints and illustrated books. European rulers, among others, purchased Italian works, while northern European artists continued to travel to Italy, helping to spread the Mannerist style. Individual Italian artists working in the North gave birth to a movement known as the Northern Mannerism. Francis I of France, for example, was presented with Bronzino's Venus, Cupid, Folly and Time. The style waned in Italy after 1580, as a new generation of artists, including the Carracci brothers, Caravaggio and Cigoli, revived naturalism. 
Walter Friedlaender identified this period as "anti-mannerism", just as the early Mannerists were "anti-classical" in their reaction against the aesthetic values of the High Renaissance; today the Carracci brothers and Caravaggio are agreed to have begun the transition to Baroque-style painting, which was dominant by 1600. Outside of Italy, however, Mannerism continued into the 17th century. In France, where Rosso traveled to work for the court at Fontainebleau, it is known as the "Henry II style" and had a particular impact on architecture. Other important continental centers of Northern Mannerism include the court of Rudolf II in Prague, as well as Haarlem and Antwerp. Mannerism as a stylistic category is less frequently applied to English visual and decorative arts, where native labels such as "Elizabethan" and "Jacobean" are more commonly applied. One exception is seventeenth-century Artisan Mannerism, applied to architecture that relies on pattern books rather than on existing precedents in Continental Europe. Of particular note is the Flemish influence at Fontainebleau, which combined the eroticism of the French style with an early version of the vanitas tradition that would dominate seventeenth-century Dutch and Flemish painting. Prevalent at this time was the pittore vago, a description of painters from the north who entered the workshops in France and Italy to create a truly international style. Sculpture As in painting, early Italian Mannerist sculpture was very largely an attempt to find an original style that would top the achievement of the High Renaissance, which in sculpture essentially meant Michelangelo; much of the struggle to achieve this was played out in commissions to fill other places in the Piazza della Signoria in Florence, next to Michelangelo's David.
Baccio Bandinelli took over the project of Hercules and Cacus from the master himself, but it was little more popular then than it is now, and was maliciously compared by Benvenuto Cellini to "a sack of melons", though it had a long-lasting effect in apparently introducing relief panels on the pedestals of statues. Like other works of his and of other Mannerists, it removes far more of the original block than Michelangelo would have done. Cellini's bronze Perseus with the head of Medusa is certainly a masterpiece, designed with eight angles of view, another Mannerist characteristic, and artificially stylized in comparison with the Davids of Michelangelo and Donatello. Originally a goldsmith, his famous gold and enamel Salt Cellar (1543) was his first sculpture, and shows his talent at its best. Small bronze figures for collectors' cabinets, often of mythological subjects with nudes, were a popular Renaissance form at which Giambologna, originally Flemish but based in Florence, excelled in the later part of the century. He also created life-size sculptures, of which two entered the collection in the Piazza della Signoria. He and his followers devised elegant elongated examples of the figura serpentinata, often of two intertwined figures, that were interesting from all angles. Early theorists Giorgio Vasari Giorgio Vasari's opinions about the art of painting emerge in the praise he bestows on fellow artists in his multi-volume Lives of the Artists: he believed that excellence in painting demanded refinement, richness of invention (invenzione) expressed through virtuoso technique (maniera), and the wit and study that appeared in the finished work, all criteria that emphasized the artist's intellect and the patron's sensibility. The artist was no longer just a trained member of a local Guild of St Luke; now he took his place at court alongside scholars, poets, and humanists, in a climate that fostered an appreciation for elegance and complexity.
The coat-of-arms of Vasari's Medici patrons appears at the top of his portrait, quite as if it were the artist's own. The framing of the woodcut image of Vasari's Lives would be called "Jacobean" in an English-speaking milieu. In it, Michelangelo's Medici tombs inspire the anti-architectural "architectural" features at the top, the papery pierced frame, and the satyr nudes at the base. As a mere frame it is extravagant: Mannerist, in short. Gian Paolo Lomazzo Another literary figure from the period is Gian Paolo Lomazzo, who produced two works, one practical and one metaphysical, that helped define the Mannerist artist's self-conscious relation to his art. His Trattato dell'arte della pittura, scoltura et architettura (Milan, 1584) is in part a guide to contemporary concepts of decorum, which the Renaissance inherited in part from Antiquity but which Mannerism elaborated upon. Lomazzo's systematic codification of aesthetics, which typifies the more formalized and academic approaches of the later 16th century, emphasized a consonance between the functions of interiors and the kinds of painted and sculpted decoration that would be suitable. Iconography, often convoluted and abstruse, is a more prominent element in the Mannerist styles. His less practical and more metaphysical Idea del tempio della pittura (The Ideal Temple of Painting, Milan, 1590) offers a description along the lines of the "four temperaments" theory of human nature and personality, defining the role of individuality in judgment and artistic invention. Characteristics of artworks created during the Mannerist period Mannerism was an anti-classical movement that differed greatly from the aesthetic ideologies of the Renaissance. Though Mannerism was initially received positively on the basis of Vasari's writings, it was later regarded in a negative light because it was viewed solely as "an alteration of natural truth and a trite repetition of natural formulas."
As an artistic movement, Mannerism involves many characteristics that are unique and specific to its experimentation with how art is perceived. Below is a list of specific characteristics that Mannerist artists would employ in their artworks. Elongation of figures: Mannerist work often featured the elongation of the human figure; occasionally this contributed to the bizarre imagery of some Mannerist art. Distortion of perspective: in paintings, the distortion of perspective explored the ideals for creating a perfect space. However, the pursuit of perfection sometimes led to the creation of unusual imagery. One way in which distortion was explored was through the technique of foreshortening; at times, when extreme distortion was utilized, it would render the image nearly impossible to decipher. Black backgrounds: Mannerist artists often utilized flat black backgrounds to present a full contrast of contours and create dramatic scenes. Black backgrounds also contributed to creating a sense of fantasy within the subject matter. Use of darkness and light: many Mannerists were interested in capturing the essence of the night sky through intentional illumination, often creating fantastical scenes; notably, special attention was paid to torchlight and moonlight to create dramatic effects. Sculptural forms: Mannerism was greatly influenced by sculpture, which gained popularity in the sixteenth century. As a result, Mannerist artists often based their depictions of human bodies on sculptures and prints, which allowed them to focus on creating dimension. Clarity of line: the attention paid to clean outlines of figures was prominent within Mannerism and differed largely from the Baroque and High Renaissance; the outlines of figures often allowed for more attention to detail. Composition and space: Mannerist artists rejected the ideals of the Renaissance, notably the technique of one-point perspective.
Instead, there was an emphasis on atmospheric effects and the distortion of perspective. The use of space in Mannerist works instead privileged either crowded compositions with various forms and figures, or scant compositions with an emphasis on black backgrounds. Mannerist movement: the interest in the study of human movement often led Mannerist artists to render a unique type of movement linked to serpentine positions. These poses often seem to anticipate future movement because of the figures' unstable poise; the technique also attests to the artists' experimentation with form. Painted frames: in some Mannerist works, painted frames were utilized to blend in with the background of paintings and, at times, to contribute to the overall composition of the artwork. This is often apparent where special attention is paid to ornate detailing. Atmospheric effects: many Mannerists utilized the technique of sfumato, known as "the rendering of soft and hazy contours or surfaces", in their paintings to render the streaming of light. Mannerist colour: a unique aspect of Mannerism was that, in addition to the experimentation with form, composition, and light, much of the same curiosity was applied to color. Many artworks toyed with pure and intense hues of blues, greens, pinks, and yellows, which at times detract from the overall design of artworks and at other times complement it. Additionally, when rendering skin tones, artists would often concentrate on creating overly creamy, light complexions, often with undertones of blue. Mannerist artists and examples of their works Jacopo da Pontormo Jacopo da Pontormo's work is one of the most important contributions to Mannerism. He often drew his subject matter from religious narratives; heavily influenced by the works of Michelangelo, he frequently alludes to or uses sculptural forms as models for his compositions.
A well-known element of his work is the rendering of gazes by various figures, which often pierce out at the viewer in various directions. Dedicated to his work, Pontormo often expressed anxiety about its quality and was known to work slowly and methodically. His legacy is highly regarded, as he influenced artists such as Agnolo Bronzino and the aesthetic ideals of late Mannerism. Pontormo's Joseph in Egypt, painted in 1517, portrays a running narrative of four Biblical scenes in which Joseph reconnects with his family. On the left side of the composition, Pontormo depicts a scene of Joseph introducing his family to the Pharaoh of Egypt. On the right, Joseph is riding on a rolling bench, as cherubs fill the composition around him, in addition to other figures and large rocks on a path in the distance. Above these scenes is a spiral staircase, up which Joseph guides one of his sons to their mother at the top. The final scene, on the right, depicts the death of Jacob as his sons watch nearby. Pontormo's Joseph in Egypt features many Mannerist elements. One is the use of incongruous colors, such as the various shades of pinks and blues that make up a majority of the canvas. Another is the incoherent handling of time in the story of Joseph through the various scenes and the use of space. Through the inclusion of the four different narratives, Pontormo creates a cluttered composition and an overall sense of busyness. Rosso Fiorentino and the School of Fontainebleau Rosso Fiorentino, who had been a fellow pupil of Pontormo in the studio of Andrea del Sarto, in 1530 brought Florentine Mannerism to Fontainebleau, where he became one of the founders of French 16th-century Mannerism, popularly known as the School of Fontainebleau. The examples of a rich and hectic decorative style at Fontainebleau further disseminated the Italian style through the medium of engravings to Antwerp, and from there throughout Northern Europe, from London to Poland.
Mannerist design was extended to luxury goods like silver and carved furniture. A sense of tense, controlled emotion expressed in elaborate symbolism and allegory, and an ideal of female beauty characterized by elongated proportions are features of this style. Agnolo Bronzino
Last Great Decade. The series looked at various events of the 1990s, including the scandal that brought Lewinsky into the national spotlight. This was Lewinsky's first such interview in more than ten years. In October 2014, she took a public stand against cyberbullying, calling herself "patient zero" of online harassment. Speaking at a Forbes magazine "30 Under 30" summit about her experiences in the aftermath of the scandal, she said, "Having survived myself, what I want to do now is help other victims of the shame game survive, too." She said she was influenced by reading about the suicide of Tyler Clementi, a Rutgers University freshman whose death involved cyberbullying, and joined Twitter to facilitate her efforts. In March 2015, Lewinsky continued to speak out publicly against cyberbullying, delivering a TED talk calling for a more compassionate Internet. In June 2015, she became an ambassador and strategic advisor for the anti-bullying organization Bystander Revolution. The same month, she gave an anti-cyberbullying speech at the Cannes Lions International Festival of Creativity. In September 2015, Lewinsky was interviewed by Amy Robach on Good Morning America about Bystander Revolution's Month of Action campaign for National Bullying Prevention Month. Lewinsky wrote the foreword to an October 2017 book by Sue Scheff and Melissa Schorr, Shame Nation: The Global Epidemic of Online Hate. In October 2017, Lewinsky tweeted the #MeToo hashtag to indicate that she was a victim of sexual harassment and/or sexual assault, but did not provide details. She wrote an essay in the March 2018 issue of Vanity Fair in which she did not directly explain why she used the #MeToo hashtag in October. She did write that, looking back at her relationship with Bill Clinton, although it was consensual, because he was 27 years older than she was and in a position of far greater power, in her opinion the relationship constituted an "abuse of power" on Clinton's part.
She added that she had been diagnosed with post-traumatic stress disorder due to the experiences involved after the relationship was disclosed. In May 2018, Lewinsky was disinvited from an event hosted by Town & Country when Bill Clinton accepted an invitation to the event. In September 2018, Lewinsky spoke at a conference in Jerusalem. Following her speech, she sat for a Q&A session with the host, journalist Yonit Levi. The first question Levi asked was whether Lewinsky thinks that Clinton owes her a private apology. Lewinsky refused to answer the question, and walked off the stage. She later tweeted that the question was posed in a pre-event meeting with Levi, and Lewinsky told her that such a question was off limits. A spokesman for the Israel Television News Company, which hosted the conference and is Levi's employer, responded that Levi had kept all the agreements she made with Lewinsky and honored her requests. In 2019, she was interviewed by John Oliver on his HBO show Last Week Tonight with John Oliver, where they discussed the importance of solving the problem of public shaming and how her situation may have been different if social media had existed at the time that the scandal broke in the late 1990s. More recently, she started Alt Ending Productions with a first look deal at 20th Television. On August 6, 2019, it was announced that the Clinton–Lewinsky scandal would be the focus of the third season of the television series American Crime Story with the title Impeachment. The season began production in October 2020. Lewinsky was a co-producer. It consists of 10 episodes and premiered on September 7, 2021. The season portrays the Clinton–Lewinsky scandal and is based on the book A Vast Conspiracy: The Real Story of the Sex Scandal That Nearly Brought Down a President by Jeffrey Toobin. The 28-year-old actress Beanie Feldstein plays Monica Lewinsky. 
In discussing the series and her observations on social media and cancel culture today in an interview with Kara Swisher for the New York Times Opinion podcast Sway, Lewinsky noted that
that she had an affair with opera star Plácido Domingo to her daughter's sexual relationship with Clinton. Monica's maternal grandfather, Samuel M. Vilensky, was a Lithuanian Jew, and Monica's maternal grandmother, Bronia Poleshuk, was born in the British Concession of Tianjin, China, to a Russian Jewish family. Monica's parents' acrimonious separation and divorce during 1987 and 1988 had a significant impact on her. Her father later married his current wife, Barbara; her mother later married R. Peter Straus, a media executive and former director of the Voice of America under President Jimmy Carter. The family attended Sinai Temple in Los Angeles, and Monica attended Sinai Akiba Academy, its religious school. For her primary education, she attended the John Thomas Dye School in Bel-Air. She attended Beverly Hills High School for her first three years of high school, before transferring to Bel Air Prep (later known as Pacific Hills School) for her senior year, graduating in 1991. Following her high school graduation, Lewinsky attended Santa Monica College, a two-year community college, and worked for the drama department at Beverly Hills High School and at a tie shop. In 1992, she allegedly began a five-year affair with Andy Bleiler, her married former high school drama instructor. In 1993, she enrolled at Lewis & Clark College in Portland, Oregon, graduating with a bachelor's degree in psychology in 1995. In an appearance on Larry King Live in 2000, she revealed that she started an affair with a 40-year-old married man in Los Angeles when she was 18 years old, and that the affair continued while she was attending Lewis & Clark College in the early 90s. She did not reveal the man's identity. With the assistance of a family connection, Lewinsky secured an unpaid summer White House internship in the office of White House Chief of Staff Leon Panetta. Lewinsky moved to Washington, D.C. and took up the position in July 1995. 
She moved to a paid position in the White House Office of Legislative Affairs in December 1995. Scandal Lewinsky stated that she had nine sexual encounters in the Oval Office with President Bill Clinton between November 1995 and March 1997. According to her testimony, these involved fellatio and other sexual acts, but not sexual intercourse. Clinton had previously been confronted with allegations of sexual misconduct during his time as Governor of Arkansas. Former Arkansas state employee Paula Jones filed a civil lawsuit against him alleging that he had sexually harassed her. Lewinsky's name surfaced during the discovery phase of Jones' case, when Jones' lawyers sought to show a pattern of behavior by Clinton which involved inappropriate sexual relationships with other government employees. In April 1996, Lewinsky's superiors transferred her from the White House to the Pentagon because they felt that she was spending too much time around Clinton. At the Pentagon, she worked as an assistant to chief Pentagon spokesman Kenneth Bacon. Lewinsky told co-worker Linda Tripp about her relationship with Clinton, and Tripp began secretly recording their telephone conversations beginning in September 1997. Lewinsky left the Pentagon position in December 1997. Lewinsky submitted an affidavit in the Paula Jones case in January 1998 denying any physical relationship with Clinton, and she attempted to persuade Tripp to lie under oath in that case. Tripp gave the tapes to Independent Counsel Kenneth Starr, adding to his on-going investigation into the Whitewater controversy. Starr then broadened his investigation beyond the Arkansas land use deal to include Lewinsky, Clinton, and others for possible perjury and subornation of perjury in the Jones case. Tripp reported the taped conversations to literary agent Lucianne Goldberg. 
She also convinced Lewinsky to save the gifts that Clinton had given her during their relationship and not to dry clean a blue dress that was stained with Clinton's semen. Under oath, Clinton denied having had "a sexual affair", "sexual relations", or "a sexual relationship" with Lewinsky. News of the Clinton–Lewinsky relationship broke in January 1998. On January 26, 1998, Clinton stated, "I did not have sexual relations with that woman, Miss Lewinsky" in a nationally televised White House news conference. The matter instantly occupied the news media, and Lewinsky spent the next weeks hiding from public attention in her mother's residence at the Watergate complex. News of Lewinsky's affair with Andy Bleiler, her former high school drama instructor, also came to light, and he turned over to Starr various souvenirs, photographs, and documents that Lewinsky had sent him and his wife during the time that she was in the White House. Clinton had also said, "There is not a sexual relationship, an improper sexual relationship or any other kind of improper relationship" which he defended as truthful on August 17, 1998, because of his use of the present tense, arguing "it depends on what the meaning of the word 'is' is". Starr obtained a blue dress from Lewinsky with Clinton's semen stained on it, as well as testimony from her that the President had inserted a cigar into her vagina. Clinton stated, "I did have a relationship with Miss Lewinsky that was not appropriate", but he denied committing perjury because, according to Clinton, the legal definition of oral sex was not encompassed by "sex" per se. In addition, he relied on the definition of "sexual relations" as proposed by the prosecution and agreed by the defense and by Judge Susan Webber Wright, who was hearing the Paula Jones case. Clinton claimed that certain acts were performed on him, not by him, and therefore he did not engage in sexual relations. 
Lewinsky's testimony to the Starr Commission, however, contradicted Clinton's claim of being totally passive in their encounters. Clinton and Lewinsky were both called before a grand jury; he testified via closed-circuit television, she in person. She was granted transactional immunity by the Office of the Independent Counsel in exchange for her testimony. Life after the scandal The affair led to pop culture celebrity for Lewinsky, as she had become the focus of a political storm. Her immunity agreement restricted what she could talk about publicly, but she was able to cooperate with Andrew Morton in his writing of Monica's Story, her biography which included her side of the Clinton affair. The book was published in March 1999; it was also excerpted as a cover story in Time magazine. On March 3, 1999, Barbara Walters interviewed Lewinsky on ABC's 20/20. The program was watched by 70 million Americans, which ABC said was a record for a news show. Lewinsky made about $500,000 from her participation in the book and another $1 million from international rights to the Walters interview, but was still beset by high legal bills and living costs. In June 1999, Ms. magazine published a series of articles by writer Susan Jane Gilman, sexologist Susie Bright, and author-host Abiola Abrams arguing from three generations of women whether Lewinsky's behavior had any meaning for feminism. Also in 1999, Lewinsky declined to sign an autograph in an airport, saying, "I'm kind of known for something that's not so great to be known for." She made a cameo appearance as herself in two sketches during the May 8, 1999, episode of NBC's Saturday Night Live, a program that had lampooned her relationship with Clinton over the prior 16 months. By her own account, Lewinsky had survived the intense media attention during the scandal period by knitting. 
In September 1999, she took this interest further by beginning to sell a line of handbags bearing her name, under the company name The Real Monica, Inc. They were sold online as well as at Henri Bendel in New York, Fred Segal in California, and The Cross in London. Lewinsky designed the bags—described by New York magazine as "hippie-ish, reversible totes"—and traveled frequently to supervise their manufacture in Louisiana. At the start of 2000, Lewinsky began appearing in television commercials for the diet company Jenny Craig, Inc. The $1 million endorsement deal, which required Lewinsky to lose 40 or more pounds in six months, gained considerable publicity at the time. Lewinsky said that despite her desire to return to a more private life, she needed the money to pay off legal fees, and she believed in the product. A Jenny Craig spokesperson said of Lewinsky, "She represents a busy active woman of today with a hectic lifestyle. And she has had weight issues and weight struggles for a long time. That represents a lot of women in America." The choice of Lewinsky as a role model proved controversial for Jenny Craig, and some of its private franchises switched to an older advertising campaign. The company stopped running the Lewinsky ads in February 2000, concluded her campaign entirely in April 2000, and paid her only $300,000 of the $1 million contracted for her involvement. Also at the start of 2000, Lewinsky moved to New York City, lived in the West Village, and became an A-list guest in the Manhattan social scene. In February 2000, she appeared on MTV's The Tom Green Show, in an episode in which the host took her to his parents' home in Ottawa in search of fabric for her new handbag business. Later in 2000, Lewinsky worked as a correspondent for Channel 5 in the UK, on the show Monica's Postcards, reporting on U.S. culture and trends from a variety of locations. 
In March 2002, Lewinsky, no longer bound by the terms of her immunity agreement, appeared in the HBO special, "Monica in Black and White", part of the America Undercover series. In it she answered a studio
pressure of gases like air was ignored, denied, or taken for granted, but as early as the 6th century BC, the Greek philosopher Anaximenes of Miletus claimed that all things are made of air that is simply changed by varying levels of pressure. He could observe water evaporating, changing to a gas, and felt that this applied even to solid matter. More condensed air made colder, heavier objects, and expanded air made lighter, hotter objects. This was akin to how gases really do become less dense when warmer and more dense when cooler. In the 17th century, Evangelista Torricelli conducted experiments with mercury that allowed him to measure the presence of air. He would dip a glass tube, closed at one end, into a bowl of mercury and raise the closed end up out of it, keeping the open end submerged. The weight of the mercury would pull it down, leaving a partial vacuum at the far end. This validated his belief that air/gas has mass, creating pressure on things around it. Previously, the more popular conclusion, even for Galileo, was that air was weightless and that it is vacuum that provides force, as in a siphon. This test, known as Torricelli's experiment, was essentially the first documented pressure gauge. Blaise Pascal went further, having his brother-in-law try the experiment at different altitudes on a mountain, and finding indeed that the farther down in the ocean of atmosphere one goes, the higher the pressure. Units The SI unit for pressure is the pascal (Pa), equal to one newton per square metre (N·m−2 or kg·m−1·s−2). This special name for the unit was added in 1971; before that, pressure in SI was expressed in units such as N·m−2. When indicated, the zero reference is stated in parentheses following the unit, for example 101 kPa (abs). The pound per square inch (psi) is still in widespread use in the US and Canada, for measuring, for instance, tire pressure.
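Torricelli's column and Pascal's altitude experiment both follow from the hydrostatic relation p = ρgh. As an illustrative sketch (not from the text; standard reference values are assumed for densities and gravity), the height of a column that balances one standard atmosphere can be computed directly:

```python
# Height of a liquid column supported by a given pressure: h = p / (rho * g).
# Standard reference values assumed (sea-level atmosphere, mercury at 0 degC).

P_ATM = 101_325.0       # Pa, standard atmosphere
G = 9.80665             # m/s^2, standard gravity
RHO_MERCURY = 13_595.1  # kg/m^3, mercury at 0 degC
RHO_WATER = 1_000.0     # kg/m^3

def column_height(pressure_pa: float, density: float, g: float = G) -> float:
    """Height of a fluid column whose base pressure equals pressure_pa."""
    return pressure_pa / (density * g)

h_hg = column_height(P_ATM, RHO_MERCURY)   # ~0.76 m: Torricelli's barometer
h_h2o = column_height(P_ATM, RHO_WATER)    # ~10.3 m: a water barometer
print(f"mercury column: {h_hg:.3f} m, water column: {h_h2o:.2f} m")
```

The roughly 0.76 m mercury figure matches Torricelli's observation, while the roughly 10 m water figure shows why mercury's high density makes for a compact, practical instrument.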
A letter is often appended to the psi unit to indicate the measurement's zero reference; psia for absolute, psig for gauge, psid for differential, although this practice is discouraged by the NIST. Because pressure was once commonly measured by its ability to displace a column of liquid in a manometer, pressures are often expressed as a depth of a particular fluid (e.g., inches of water). Manometric measurement is the subject of pressure head calculations. The most common choices for a manometer's fluid are mercury (Hg) and water; water is nontoxic and readily available, while mercury's density allows for a shorter column (and so a smaller manometer) to measure a given pressure. The abbreviation "W.C." or the words "water column" are often printed on gauges and measurements that use water for the manometer. Fluid density and local gravity can vary from one reading to another depending on local factors, so the height of a fluid column does not define pressure precisely. So measurements in "millimetres of mercury" or "inches of mercury" can be converted to SI units as long as attention is paid to the local factors of fluid density and gravity. Temperature fluctuations change the value of fluid density, while location can affect gravity. Although no longer preferred, these manometric units are still encountered in many fields. Blood pressure is measured in millimetres of mercury (see torr) in most of the world, central venous pressure and lung pressures in centimeters of water are still common, as in settings for CPAP machines. Natural gas pipeline pressures are measured in inches of water, expressed as "inches W.C." Underwater divers use manometric units: the ambient pressure is measured in units of metres sea water (msw) which is defined as equal to one tenth of a bar. The unit used in the US is the foot sea water (fsw), based on standard gravity and a sea-water density of 64 lb/ft3. 
According to the US Navy Diving Manual, one fsw equals 0.30643 msw, , or , though elsewhere it states that 33 fsw is (one atmosphere), which gives one fsw equal to about 0.445 psi. The msw and fsw are the conventional units for measurement of diver pressure exposure used in decompression tables and the unit of calibration for pneumofathometers and hyperbaric chamber pressure gauges. Both msw and fsw are measured relative to normal atmospheric pressure. In vacuum systems, the units torr (millimeter of mercury), micron (micrometer of mercury), and inch of mercury (inHg) are most commonly used. Torr and micron usually indicates an absolute pressure, while inHg usually indicates a gauge pressure. Atmospheric pressures are usually stated using hectopascal (hPa), kilopascal (kPa), millibar (mbar) or atmospheres (atm). In American and Canadian engineering, stress is often measured in kip. Note that stress is not a true pressure since it is not scalar. In the cgs system the unit of pressure was the barye (ba), equal to 1 dyn·cm−2. In the mts system, the unit of pressure was the pieze, equal to 1 sthene per square metre. Many other hybrid units are used such as mmHg/cm2 or grams-force/cm2 (sometimes as [[kg/cm2]] without properly identifying the force units). Using the names kilogram, gram, kilogram-force, or gram-force (or their symbols) as a unit of force is prohibited in SI; the unit of force in SI is the newton (N). Static and dynamic pressure Static pressure is uniform in all directions, so pressure measurements are independent of direction in an immovable (static) fluid. Flow, however, applies additional pressure on surfaces perpendicular to the flow direction, while having little impact on surfaces parallel to the flow direction. This directional component of pressure in a moving (dynamic) fluid is called dynamic pressure. 
An instrument facing the flow direction measures the sum of the static and dynamic pressures; this measurement is called the total pressure or stagnation pressure. Since dynamic pressure is referenced to static pressure, it is neither gauge nor absolute; it is a differential pressure. While static gauge pressure is of primary importance to determining net loads on pipe walls, dynamic pressure is used to measure flow rates and airspeed. Dynamic pressure can be measured by taking the differential pressure between instruments parallel and perpendicular to the flow. Pitot-static tubes, for example perform this measurement on airplanes to determine airspeed. The presence of the measuring instrument inevitably acts to divert flow and create turbulence, so its shape is critical to accuracy and the calibration curves are often non-linear. Applications Altimeter Barometer Depth gauge MAP sensor Pitot tube Sphygmomanometer Instruments Many instruments have been invented to measure pressure, with different advantages and disadvantages. Pressure range, sensitivity, dynamic response and cost all vary by several orders of magnitude from one instrument design to the next. The oldest type is the liquid column (a vertical tube filled with mercury) manometer invented by Evangelista Torricelli in 1643. The U-Tube was invented by Christiaan Huygens in 1661. Hydrostatic Hydrostatic gauges (such as the mercury column manometer) compare pressure to the hydrostatic force per unit area at the base of a column of fluid. Hydrostatic gauge measurements are independent of the type of gas being measured, and can be designed to have a very linear calibration. They have poor dynamic response. Piston Piston-type gauges counterbalance the pressure of a fluid with a spring (for example tire-pressure gauges of comparatively low accuracy) or a solid weight, in which case it is known as a deadweight tester and may be used for calibration of other gauges. 
Liquid column (manometer) Liquid-column gauges consist of a column of liquid in a tube whose ends are exposed to different pressures. The column will rise or fall until its weight (a force applied due to gravity) is in equilibrium with the pressure differential between the two ends of the tube (a force applied due to fluid pressure). A very simple version is a U-shaped tube half-full of liquid, one side of which is connected to the region of interest while the reference pressure (which might be the atmospheric pressure or a vacuum) is applied to the other. The difference in liquid levels represents the applied pressure. The pressure exerted by a column of fluid of height h and density ρ is given by the hydrostatic pressure equation, P = hgρ. Therefore, the pressure difference between the applied pressure Pa and the reference pressure P0 in a U-tube manometer can be found by solving . In other words, the pressure on either end of the liquid (shown in blue in the figure) must be balanced (since the liquid is static), and so . In most liquid-column measurements, the result of the measurement is the height h, expressed typically in mm, cm, or inches. The h is also known as the pressure head. When expressed as a pressure head, pressure is specified in units of length and the measurement fluid must be specified. When accuracy is critical, the temperature of the measurement fluid must likewise be specified, because liquid density is a function of temperature. So, for example, pressure head might be written "742.2 mmHg" or "4.2 inH2O at 59 °F" for measurements taken with mercury or water as the manometric fluid respectively. The word "gauge" or "vacuum" may be added to such a measurement to distinguish between a pressure above or below the atmospheric pressure. Both mm of mercury and inches of water are common pressure heads, which can be converted to S.I. units of pressure using unit conversion and the above formulas. 
If the fluid being measured is significantly dense, hydrostatic corrections may have to be made for the height between the moving surface of the manometer working fluid and the location where the pressure measurement is desired, except when measuring differential pressure of a fluid (for example, across an orifice plate or venturi), in which case the density ρ should be corrected by subtracting the density of the fluid being measured. Although any fluid can be used, mercury is preferred for its high density (13.534 g/cm3) and low vapour pressure. Its convex meniscus is advantageous since this means there will be no pressure errors from wetting the glass, though under exceptionally clean circumstances, the mercury will stick to glass and the barometer may become stuck (the mercury can sustain a negative absolute pressure) even under a strong vacuum. For low pressure differences, light oil or water are commonly used (the latter giving rise to units of measurement such as inches water gauge and millimetres H2O). Liquid-column pressure gauges have a highly linear calibration. They have poor dynamic response because the fluid in the column may react slowly to a pressure change. When measuring vacuum, the working liquid may evaporate and contaminate the vacuum if its vapor pressure is too high. When measuring liquid pressure, a loop filled with gas or a light fluid can isolate the liquids to prevent them from mixing, but this can be unnecessary, for example, when mercury is used as the manometer fluid to measure differential pressure of a fluid such as water. Simple hydrostatic gauges can measure pressures ranging from a few torrs (a few 100 Pa) to a few atmospheres (approximately ). A single-limb liquid-column manometer has a larger reservoir instead of one side of the U-tube and has a scale beside the narrower column. The column may be inclined to further amplify the liquid movement. 
Based on the use and structure, following types of manometers are used Simple manometer Micromanometer Differential manometer Inverted differential manometer McLeod gauge A McLeod gauge isolates a sample of gas and compresses it in a modified mercury manometer until the pressure is a few millimetres of mercury. The technique is very slow and unsuited to continual monitoring, but is capable of good accuracy. Unlike other manometer gauges, the McLeod gauge reading is dependent on the composition of the gas, since the interpretation relies on the sample compressing as an ideal gas. Due
pressure is always changing and the reference in this case is fixed at 1 bar. To produce an absolute pressure sensor, the manufacturer seals a high vacuum behind the sensing diaphragm. If the process-pressure connection of an absolute-pressure transmitter is open to the air, it will read the actual barometric pressure. History For much of human history, the pressure of gases like air was ignored, denied, or taken for granted, but as early as the 6th century BC, Greek philosopher Anaximenes of Miletus claimed that all things are made of air that is simply changed by varying levels of pressure. He could observe water evaporating, changing to a gas, and felt that this applied even to solid matter. More condensed air made colder, heavier objects, and expanded air made lighter, hotter objects. This was akin to how gases really do become less dense when warmer and more dense when cooler. In the 17th century, Evangelista Torricelli conducted experiments with mercury that allowed him to measure the presence of air. He would dip a glass tube, closed at one end, into a bowl of mercury and raise the closed end up out of it, keeping the open end submerged. The weight of the mercury would pull it down, leaving a partial vacuum at the far end. This validated his belief that air/gas has mass, creating pressure on things around it. Previously, the more popular conclusion, even for Galileo, was that air was weightless and that it was the vacuum that provided force, as in a siphon. The discovery helped lead Torricelli to his conclusions about atmospheric pressure. This test, known as Torricelli's experiment, was essentially the first documented pressure gauge. Blaise Pascal went further, having his brother-in-law try the experiment at different altitudes on a mountain, and finding indeed that the farther down in the ocean of atmosphere, the higher the pressure. Units The SI unit for pressure is the pascal (Pa), equal to one newton per square metre (N·m−2 or kg·m−1·s−2).
This special name for the unit was added in 1971; before that, pressure in SI was expressed in units such as N·m−2. When indicated, the zero reference is stated in parentheses following the unit, for example 101 kPa (abs). The pound per square inch (psi) is still in widespread use in the US and Canada, for measuring, for instance, tire pressure. A letter is often appended to the psi unit to indicate the measurement's zero reference: psia for absolute, psig for gauge, and psid for differential, although this practice is discouraged by NIST. Because pressure was once commonly measured by its ability to displace a column of liquid in a manometer, pressures are often expressed as a depth of a particular fluid (e.g., inches of water). Manometric measurement is the subject of pressure head calculations. The most common choices for a manometer's fluid are mercury (Hg) and water; water is nontoxic and readily available, while mercury's density allows for a shorter column (and so a smaller manometer) to measure a given pressure. The abbreviation "W.C." or the words "water column" are often printed on gauges and measurements that use water for the manometer. Fluid density and local gravity can vary from one reading to another depending on local factors, so the height of a fluid column does not define pressure precisely. Measurements in "millimetres of mercury" or "inches of mercury" can therefore be converted to SI units only if attention is paid to the local factors of fluid density and gravity: temperature fluctuations change the fluid's density, while location affects gravity. Although no longer preferred, these manometric units are still encountered in many fields. Blood pressure is measured in millimetres of mercury (see torr) in most of the world, while central venous pressure and lung pressures are still commonly measured in centimetres of water, as in settings for CPAP machines. Natural gas pipeline pressures are measured in inches of water, expressed as "inches W.C."
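The unit conversions above all reduce to either a fixed factor (psi to Pa) or the hydrostatic relation P = ρgh evaluated at standard gravity and a reference fluid density. A minimal sketch, not from the article (the function names and the particular reference densities are my own choices):

```python
# Sketch: converting common manometric and customary pressure units to
# pascals. Conventional reference values are assumed: standard gravity,
# mercury density at 0 degrees C, water density near 4 degrees C.
G = 9.80665            # m/s^2, standard gravity
RHO_MERCURY = 13595.1  # kg/m^3
RHO_WATER = 999.97     # kg/m^3

def mmhg_to_pa(mm: float) -> float:
    """Millimetres of mercury -> pascals, via P = rho * g * h."""
    return RHO_MERCURY * G * (mm / 1000.0)

def inh2o_to_pa(inches: float) -> float:
    """Inches of water column ("inches W.C.") -> pascals."""
    return RHO_WATER * G * (inches * 0.0254)

def psi_to_pa(psi: float) -> float:
    """Pounds per square inch -> pascals (1 psi = 6894.757 Pa)."""
    return psi * 6894.757

# A typical systolic blood pressure of 120 mmHg is about 16 kPa:
print(round(mmhg_to_pa(120) / 1000, 1))  # -> 16.0
```

As a sanity check, 760 mmHg evaluated this way comes out at one standard atmosphere (101 325 Pa), which is why the mercury barometer column stands at about 760 mm.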
Underwater divers use manometric units: the ambient pressure is measured in units of metres sea water (msw), which is defined as equal to one tenth of a bar. The unit used in the US is the foot sea water (fsw), based on standard gravity and a sea-water density of 64 lb/ft3. According to the US Navy Diving Manual, one fsw equals 0.30643 msw, though elsewhere it states that 33 fsw equals one atmosphere, which gives one fsw equal to about 0.445 psi. The msw and fsw are the conventional units for measurement of diver pressure exposure used in decompression tables and the unit of calibration for pneumofathometers and hyperbaric chamber pressure gauges. Both msw and fsw are measured relative to normal atmospheric pressure. In vacuum systems, the units torr (millimetre of mercury), micron (micrometre of mercury), and inch of mercury (inHg) are most commonly used. Torr and micron usually indicate an absolute pressure, while inHg usually indicates a gauge pressure. Atmospheric pressures are usually stated using hectopascals (hPa), kilopascals (kPa), millibars (mbar) or atmospheres (atm). In American and Canadian engineering, stress is often measured in kips per square inch (ksi); note that stress is not a true pressure, since it is not scalar. In the cgs system the unit of pressure was the barye (ba), equal to 1 dyn·cm−2. In the mts system, the unit of pressure was the pieze, equal to 1 sthene per square metre. Many other hybrid units are used, such as mmHg/cm2 or grams-force/cm2 (sometimes as kg/cm2 without properly identifying the force units). Using the names kilogram, gram, kilogram-force, or gram-force (or their symbols) as a unit of force is prohibited in SI; the unit of force in SI is the newton (N). Static and dynamic pressure Static pressure is uniform in all directions, so pressure measurements are independent of direction in an immovable (static) fluid.
Flow, however, applies additional pressure on surfaces perpendicular to the flow direction, while having little impact on surfaces parallel to the flow direction. This directional component of pressure in a moving (dynamic) fluid is called dynamic pressure. An instrument facing the flow direction measures the sum of the static and dynamic pressures; this measurement is called the total pressure or stagnation pressure. Since dynamic pressure is referenced to static pressure, it is neither gauge nor absolute; it is a differential pressure. While static gauge pressure is of primary importance in determining net loads on pipe walls, dynamic pressure is used to measure flow rates and airspeed. Dynamic pressure can be measured by taking the differential pressure between instruments parallel and perpendicular to the flow. Pitot-static tubes, for example, perform this measurement on airplanes to determine airspeed. The presence of the measuring instrument inevitably acts to divert flow and create turbulence, so its shape is critical to accuracy and the calibration curves are often non-linear. Applications Altimeter Barometer Depth gauge MAP sensor Pitot tube Sphygmomanometer Instruments Many instruments have been invented to measure pressure, with different advantages and disadvantages. Pressure range, sensitivity, dynamic response and cost all vary by several orders of magnitude from one instrument design to the next. The oldest type is the liquid-column manometer (a vertical tube filled with mercury), invented by Evangelista Torricelli in 1643. The U-tube was invented by Christiaan Huygens in 1661. Hydrostatic Hydrostatic gauges (such as the mercury column manometer) compare pressure to the hydrostatic force per unit area at the base of a column of fluid. Hydrostatic gauge measurements are independent of the type of gas being measured, and can be designed to have a very linear calibration. They have poor dynamic response.
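The Pitot-static measurement described above can be sketched numerically: for incompressible flow, the dynamic (differential) pressure is q = p_total − p_static = ½ρv², so airspeed follows from the differential reading. A minimal sketch, not from the article (the function name and the sea-level density value are my own assumptions):

```python
import math

# Sketch: recovering airspeed from a Pitot-static differential-pressure
# reading, assuming incompressible flow so that
#   q = p_total - p_static = 0.5 * rho * v**2.

def airspeed_from_pitot(p_total: float, p_static: float, rho: float) -> float:
    """Airspeed (m/s) from total and static pressures (Pa) and air density (kg/m^3)."""
    q = p_total - p_static  # dynamic pressure, a differential measurement
    if q < 0:
        raise ValueError("total pressure must be at least the static pressure")
    return math.sqrt(2.0 * q / rho)

# A 1 kPa differential at sea-level air density (about 1.225 kg/m^3)
# corresponds to roughly 40 m/s:
print(round(airspeed_from_pitot(102325.0, 101325.0, 1.225), 1))  # -> 40.4
```

Real airspeed indicators apply calibration curves on top of this relation, since, as noted, the probe itself disturbs the flow.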
Piston Piston-type gauges counterbalance the pressure of a fluid with a spring (for example, tire-pressure gauges of comparatively low accuracy) or a solid weight, in which case the instrument is known as a deadweight tester and may be used for calibration of other gauges. Liquid column (manometer) Liquid-column gauges consist of a column of liquid in a tube whose ends are exposed to different pressures. The column will rise or fall until its weight (a force applied due to gravity) is in equilibrium with the pressure differential between the two ends of the tube (a force applied due to fluid pressure). A very simple version is a U-shaped tube half-full of liquid, one side of which is connected to the region of interest while the reference pressure (which might be the atmospheric pressure or a vacuum) is applied to the other. The difference in liquid levels represents the applied pressure. The pressure exerted by a column of fluid of height h and density ρ is given by the hydrostatic pressure equation, P = hgρ. Therefore, the pressure difference between the applied pressure Pa and the reference pressure P0 in a U-tube manometer can be found by solving Pa − P0 = hgρ. In other words, the pressure on either end of the liquid (shown in blue in the figure) must be balanced (since the liquid is static), and so Pa = P0 + hgρ. In most liquid-column measurements, the result of the measurement is the height h, expressed typically in mm, cm, or inches. The h is also known as the pressure head. When expressed as a pressure head, pressure is specified in units of length and the measurement fluid must be specified. When accuracy is critical, the temperature of the measurement fluid must likewise be specified, because liquid density is a function of temperature. So, for example, pressure head might be written "742.2 mmHg" or "4.2 inH2O at 59 °F" for measurements taken with mercury or water as the manometric fluid, respectively.
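The U-tube relation Pa − P0 = hgρ can be solved in either direction: from an observed column height to a pressure difference, or from a pressure difference to the expected head. A minimal sketch, not from the article (the function names are my own; the mercury density of 13 534 kg/m³ is the value quoted for mercury later in this section):

```python
# Sketch: the U-tube manometer relation Pa - P0 = rho * g * h,
# solved both ways.
G = 9.80665  # m/s^2, standard gravity

def pressure_difference(h_m: float, rho: float, g: float = G) -> float:
    """Pressure difference (Pa) balanced by a liquid column of height h_m (m)."""
    return rho * g * h_m

def pressure_head(delta_p: float, rho: float, g: float = G) -> float:
    """Column height (m) that balances a pressure difference delta_p (Pa)."""
    return delta_p / (rho * g)

# The example head of 742.2 mmHg (rho = 13534 kg/m^3) is about 98.5 kPa:
print(round(pressure_difference(0.7422, 13534.0) / 1000, 1))  # -> 98.5
```

This also makes the temperature caveat concrete: since ρ enters the formula directly, a density error from an unstated fluid temperature propagates proportionally into the inferred pressure.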
The word "gauge" or "vacuum" may be added to such a measurement to distinguish between a pressure above or below the atmospheric pressure. Both mm of mercury and inches of water are common pressure heads, which can be converted to SI units of pressure using unit conversion and the above formulas. If the fluid being measured is significantly dense, hydrostatic corrections may have to be made for the height between the moving surface of the manometer working fluid and the location where the pressure measurement is desired, except when measuring differential pressure of a fluid (for example, across an orifice plate or venturi), in which case the density ρ should be corrected by subtracting the density of the fluid being measured. Although any fluid can be used, mercury is preferred for its high density (13.534 g/cm3) and low vapour pressure. Its convex meniscus is advantageous, since this means there will be no pressure errors from wetting the glass, though under exceptionally clean circumstances the mercury will stick to the glass and the barometer may become stuck (the mercury can sustain a negative absolute pressure) even under a strong vacuum. For low pressure differences, light oil or water are commonly used (the latter giving rise to units of measurement such as inches water gauge and millimetres H2O). Liquid-column pressure gauges have a highly linear calibration. They have poor dynamic response because the fluid in the column may react slowly to a pressure change. When measuring vacuum, the working liquid may evaporate and contaminate the vacuum if its vapor pressure is too high. When measuring liquid pressure, a loop filled with gas or a light fluid can isolate the liquids to prevent them from mixing, but this can be unnecessary, for example, when mercury is used as the manometer fluid to measure differential pressure of a fluid such as water.
Simple hydrostatic gauges can measure pressures ranging from a few torr (a few hundred Pa) to a few atmospheres. A single-limb liquid-column manometer has a larger reservoir instead of one side of the U-tube and has a scale beside the narrower column. The column may be inclined to further amplify the liquid movement. Based on use and structure, the following types of manometers are used: the simple manometer, the micromanometer, the differential manometer, and the inverted differential manometer. McLeod gauge A McLeod gauge isolates a sample of gas and compresses it in a modified mercury manometer until the pressure is a few millimetres of mercury. The technique is very slow and unsuited to continual monitoring, but is capable of good accuracy. Unlike other manometer gauges, the McLeod gauge reading is dependent on the composition of the gas, since the interpretation relies on the sample compressing as an ideal gas. Due to the compression process, the McLeod gauge completely ignores partial pressures from non-ideal vapors that condense, such as pump oils, mercury, and even water if compressed enough. Useful range: from around 10−4 Torr (roughly 10−2 Pa) to vacuums as high as 10−6 Torr (0.1 mPa); 0.1 mPa is the lowest direct measurement of pressure that is possible with current technology. Other vacuum gauges can measure lower pressures, but only indirectly, by measurement of other pressure-dependent properties. These indirect measurements must be calibrated to SI units by a direct measurement, most commonly a McLeod gauge. Aneroid Aneroid gauges are based on a metallic pressure-sensing element that flexes elastically under the effect of a pressure difference across the element. "Aneroid" means "without fluid", and the term originally distinguished these gauges from the hydrostatic gauges described above. However, aneroid gauges can be used to measure the pressure of a liquid as well as a gas, and they are not the only type of gauge that can operate without fluid.
For this reason, they are often called mechanical gauges in modern language. Aneroid gauges are not dependent on the type of gas being measured, unlike thermal and ionization gauges, and are less likely to contaminate the system than hydrostatic gauges. The pressure sensing element may be a Bourdon tube, a diaphragm, a capsule, or a set of bellows, which will change shape in response to the pressure of the region in question. The deflection of the pressure sensing element may be read by a linkage connected to a needle, or it may be read by a secondary transducer. The most common secondary transducers in modern vacuum gauges measure a change in capacitance due to the mechanical deflection. Gauges that rely on a change in capacitance are often referred to as capacitance manometers. Bourdon gauge The Bourdon pressure gauge uses the principle that a flattened tube tends to straighten or regain its circular form in cross-section when pressurized. (A party horn illustrates this principle.) This change in cross-section may be hardly noticeable, involving moderate stresses within the elastic range of easily workable materials. The strain of the material of the tube is magnified by forming the tube into a C shape or even a helix, such that the entire tube tends to straighten out or uncoil elastically as it is pressurized. Eugène Bourdon patented his gauge in France in 1849, and it was widely adopted because of its superior sensitivity, linearity, and accuracy; Edward Ashcroft purchased Bourdon's American patent rights in 1852 and became a major manufacturer of gauges. Also in 1849, Bernard Schaeffer in Magdeburg, Germany patented a successful
step to the left with the left foot followed by a step on the right foot closing to the left foot. France Chrétien de Troyes Some of the earliest mentions of the carol occur in the works of the French poet Chrétien de Troyes in his series of Arthurian romances. In the wedding scene in Erec and Enide (about 1170): Puceles carolent et dancent, Trestuit de joie feire tancent (lines 2047–2048) "Maidens performed rounds and other dances, each trying to outdo the other in showing their joy" In The Knight of the Cart (probably late 1170s), at a meadow where there are knights and ladies, various games are played while: (lines 1656–1659) "[S]ome others were playing at childhood games – rounds, dances and reels, singing, tumbling, and leaping" In what is probably Chrétien's last work, Perceval, the Story of the Grail, probably written 1181–1191, we find: "Men and women danced rounds through every street and square" and later at a court setting: "The queen ... had all her maidens join hands together to dance and begin the merry-making. In his honour they began their singing, dances, and rounds" Italy Dante (1265–1321) has a few minor references to dance in his works, but a more substantive description of the round dance with song from Bologna comes from Giovanni del Virgilio (floruit 1319–1327). Later in the 14th century, Giovanni Boccaccio (1313–1375) shows us the "carola" in Florence in the Decameron (about 1350–1353), which has several passages describing men and women dancing to their own singing or accompanied by musicians. Boccaccio also uses two other terms for contemporary dances, ridda and ballonchio, both of which refer to round dances with singing. Approximately contemporary with the Decameron is a series of frescos in Siena by Ambrogio Lorenzetti, painted about 1338–40, one of which shows a group of women doing a "bridge" figure while accompanied by another woman playing the tambourine.
England In a life of Saint Dunstan composed about 1000, the author tells how Dunstan, going into a church, found maidens dancing in a ring and singing a hymn. According to the Oxford English Dictionary (1933) the term "carol" was first used in England for this type of circle dance accompanied by singing in manuscripts dating to as early as 1300. The word was used as both a noun and a verb and the usage of carol for a dance form persisted well into the 16th century. One of the earliest references is in Robert of Brunne's early 14th century Handlyng Synne (Handling Sin) where it occurs as a verb. Other chain dances Circle or line dances also existed in other parts of Europe outside England, France and Italy where the term carol was best known. These dances were of the same type with dancers hand-in-hand and a leader who sang the ballad. Scandinavia In Denmark, old ballads mention a closed Ring dance which can open into a Chain dance. A fresco in Ørslev church in Zealand from about 1400 shows nine people, men and women, dancing in a line. The leader and some others in the chain carry bouquets of flowers. Dances could be for men and women, or for men alone, or women alone. In the case of women's dances, however, there may have been a man who acted as the leader. Two dances specifically named in the Danish ballads which appear to be line dances of this type are The Beggar Dance, and The Lucky Dance which may have been a dance for women. A modern version of these medieval chains is seen in the Faroese chain dance, the earliest account of which goes back only to the 17th century. In Sweden too, medieval songs often mentioned dancing. A long chain was formed, with the leader singing the verses and setting the time while the other dancers joined in the chorus. These "Long Dances" have lasted into modern times in Sweden. A similar type of song dance may have existed in Norway in the Middle Ages as well, but no historical accounts have been found. 
Central Europe The same dance in Germany was called "Reigen" and may have originated from devotional dances at early Christian festivals. Dancing around the church or a fire was frequently denounced by church authorities, which only underscores how popular it was. There are records of church and civic officials in various German towns forbidding dancing and singing from the 8th to the 10th centuries. Once again, in singing processions, the leader provided the verse and the other dancers supplied the chorus. The minnesinger Neidhart von Reuental, who lived in the first half of the 13th century, wrote several songs for dancing, some of which use the term "reigen". In southern Tyrol, at Runkelstein Castle, a series of frescos was executed in the last years of the 14th century. One of the frescos depicts Elisabeth of Poland, Queen of Hungary, leading a chain dance. Circle dances were also found in the area that is today the Czech Republic. Descriptions and illustrations of dancing can be found in church registers, chronicles and the 15th century writings of Bohuslav Hasištejnský z Lobkovic. Dancing was primarily done around trees on the village green, but special houses for dancing appear from the 14th century. In Poland as well, the earliest village dances were in circles or lines accompanied by the singing or clapping of the participants. The Balkans The present-day folk dances in the Balkans consist of dancers linked together in a hand or shoulder hold in an open or closed circle or a line. The basic round dance goes by many names in the various countries of the region: choros, kolo, oro, horo or hora. The modern couple dance so common in western and northern Europe has only made a few inroads into the Balkan dance repertory. Chain dances of a similar type to these modern dance forms have been documented from the medieval Balkans.
Tens of thousands of medieval tombstones called "Stećci" are found in Bosnia and Hercegovina and neighboring areas in Montenegro, Serbia and Croatia. They date from the end of the 12th century to the 16th century. Many of the stones bear inscriptions and
14th century and a series of murals were painted. One of these shows a group of young men linking arms in a round dance. They are accompanied by two musicians, one playing the kanun while the other beats on a long drum. There is also some documentary evidence from the Dalmatian coast area of what is now Croatia. An anonymous chronicle from 1344 exhorts the people of the city of Zadar to sing and dance circle dances for a festival, while in the 14th and 15th centuries authorities in Dubrovnik forbade circle dances and secular songs on the cathedral grounds. Another early reference comes from the area of present-day Bulgaria, in a manuscript of a 14th-century sermon which calls chain dances "devilish and damned." At a later period there are the accounts of two western European travelers to Constantinople, the capital of the Ottoman Empire. Salomon Schweigger (1551–1622) was a German preacher who traveled in the entourage of Jochim von Sinzendorf, Ambassador to Constantinople for Rudolf II in 1577. He describes the events at a Greek wedding: da schrencken sie die Arm uebereinander / machen ein Ring / gehen also im Ring herumb / mit dem Fuessen hart tredent und stampffend / einer singt vor / welchem die andern alle nachfolgen. "then they joined arms one upon the other, made a circle, went round the circle, with their feet stepping hard and stamping; one sang first, with the others all following after." Another traveler, the German pharmacist Reinhold Lubenau, was in Constantinople in November 1588 and reports on a Greek wedding in these terms: eine Companei, oft von zehen oder mehr Perschonen, Grichen herfuhr auf den Platz, fasten einander bei den Henden, machten einen runden Kreis und traten balde hinder sich, balde fur sich, balde gingen sie herumb, sungen grichisch drein, balde trampelden sie starck mit den Fussen auf die Erde.
"a company of Greeks, often of ten or more persons, stepped forth to the open place, took each other by the hand, made a round circle, and now stepped backward, now forward, sometimes went around, singing in Greek the while, sometimes stamped strongly on the ground with their feet." Estampie If the story is true that troubadour Raimbaut de Vaqueiras (about 1150–1207) wrote the famous Provençal song Kalenda Maya to fit the tune of an estampie that he heard two jongleurs play, then the history of the estampie extends back to the 12th century. The only musical examples actually identified as "estampie" or "istanpita" occur in two 14th-century manuscripts. The same manuscripts also contain other pieces named "danse real" or other dance names. These are similar in musical structure to the estampies but consensus is divided as to whether these should be considered the same. In addition to these instrumental music compositions, there are also mentions of the estampie in various literary sources from the 13th and 14th centuries. One of these as "stampenie" is found in Gottfried von Strassburg's Tristan from 1210 in a catalog of Tristan's accomplishments: (lines 2293–2295) "he also sang most excellently subtle airs, 'chansons', 'refloits', and 'estampies'" Later, in a description of Isolde: (lines 8058–8062) "She fiddled her 'estampie', her lays, and her strange tunes in the French style, about Sanze and St Denis" A century and a half later in the poem La Prison amoreuse (1372–73) by French chronicler and poet Jean Froissart (c. 1337–1405), we find: La estoient li menestrel Qui s'acquittoient bien et bel A piper et tout de novel Unes danses teles qu'il sorent, Et si trestot que cessé orent Les estampies qu'il batoient, Cil et celes qui s'esbatoient Au danser sans gueres atendre Commencierent leurs mains a tendre Pour caroler. "Here are all the minstrels rare Who now acquit themselves so fair In playing on their pipes whate'er The dances be that one may do. 
So soon as they have glided through The estampies of this sort Youths and maidens who disport Themselves in dancing now begin With scarce
into actively defending her, threatens the fanboy crowd, and collects all of their memory cards with the photos. On the way back from the restaurant, Kimiko is suffering from the aftermath of the scene and lashes out at Piro on the subway, which causes him to walk off. Meanwhile, Largo develops a relationship with Hayasaka Erika, Piro's coworker at Megagamers. She and Kimiko share a house. As with Piro and Kimiko, Largo and Erika meet by coincidence early in the story. Later, it is revealed that Erika is a former pop idol who caused a big scene and then disappeared from the public eye after her fiancé left her. When she is rediscovered by her fans, Largo helps thwart a fanboy horde, but not well enough to escape being dismissed by the TPCD for it. He then offers to help Erika deal with her "vulnerabilities in the digital plane". Erika insists on protecting herself, so Largo instructs her in computer-building. This develops into more of a relationship than Largo can handle, partly because he insists all computer building be done in the nude or as close to it as possible, to avoid static electrical discharge ruining components, and partly because his behavior, crude though it may appear, impresses Erika in many ways. The enigmatic Tohya Miho frequently meddles in the lives of the protagonists. Miho knows Piro and Largo from the Endgames MMORPG prior to Megatokyo's plot. She abused a hidden statistic in the game to gain control of nearly all of the game's player characters, but was ultimately defeated by Piro and Largo. In the comic, Miho becomes close friends with Ping, influencing Ping's relationship with Piro and pitting Ping against Largo in video game battles. Miho is also involved in Erika's backstory; Miho manipulated Erika's fans after Erika's disappearance. This effort ended badly, leaving Miho hospitalized and the TPCD cleaning up the aftermath.
Most of the exact details of what happened are left to the readers' imagination, as are her current motivations and ultimate goal. Miho and many of the events surrounding her involve a club in Harajuku, the Cave of Evil (CoE). After getting yelled at for retaining her waitress job, Kimiko quits her voice acting job and goes home to find Erika assembling a new computer in her undergarments. Not long after Erika tells Kimiko to strip, Piro comes by, whom she tells to get undressed as well. While Erika and Piro talk about her, Kimiko, who hid when Piro showed up, runs out of the apartment. Kimiko runs into Ping, who wanted to talk to Piro about why, after an explosion at school, she had started to cry uncontrollably. They encounter Largo at the store, who explains what went wrong, although no one knows what he means until Piro comes in and translates. Ping is relieved to know that she won't shut down, and Kimiko hugs Piro and apologizes for her actions. Largo leaves for Erika's apartment after she calls looking for help. That night, while Piro and Kimiko fall asleep watching TV, Erika, who finished the computer with Largo's help, tries to seduce Largo, but it freaks him out and he runs out for home. The next morning, after Kimiko departs, Piro finds out she quit her voice acting job and tries to find her. Kimiko and Miho are in the same diner, to which Ed has sent an attack robot (Kill-Bot) against Miho, since she has disrupted his attempts to destroy Ping. Miho is in the diner trying to contact Piro, while Kimiko is talking with Erika. Dom is also there to talk with Kimiko. After Miho rescues both herself and Kimiko from the Kill-Bot and the chaos at the diner, the two talk about things. Miho talks to Piro on her phone, argues with him, and then Piro and Kimiko have a conversation about that as the two women are leaving the area. Dom follows and tries to coerce Kimiko into joining SEGA for protection from fans, but she refuses.
Drained, she has Miho finish talking to Piro on the phone. Piro then encounters a group who found Kimiko's cell phone and other belongings after she and Miho escaped the diner. The group wants to help Piro get together with Kimiko, partially due to feeling bad for trying to snap a picture up Kimiko's skirt. Piro and the group set out for a press conference Kimiko is going to for the voice acting project, Sight. Besides all of the other fans going to the event, a planned zombie outbreak occurs in the area. Miho, who helped Kimiko get ready for the event and accompanied her to it, later calls the zombies off for unexplained reasons through an unexplained mechanism. Largo and Yuki, who has along the way been revealed to be a magical girl like her mother Meimi (likewise revealed), steal a Rent-a-Zilla to fight the zombie outbreak. Largo leaves Yuki to help Piro get to Kimiko. Unfortunately, the Rent-a-Zilla gets bitten by zombies and turns into one itself, resulting in the TPCD capturing it. Yuki protects it from the TPCD, teleports it out of the area, and adopts it as a pet in a miniaturized form, all much to her father's chagrin. After the event, Erika, Largo, Kimiko and Piro are reunited, and they talk a bit with Miho, who has shown up again after storming out following an argument with Kenji earlier. Miho declines an offer to eat with the group and wanders off thinking about games and Largo and Piro. She is shown walking amongst the zombies, then in Ed's gun-sights, and in the center of an attack by a number of Ed's Kill-Bots. Over the next nine days, Piro and Kimiko make up and Kimiko returns to both of her jobs, though the two see little of each other. Largo and Erika are shown to be similarly involved, but see each other more often, including going to dinner with the Sonoda family, as the inspector's brother was Erika's fiancé. Kimiko is attempting to get Piro working as an artist on Sight, which unbeknownst to them is now being funded by Dom.
Ping is concerned about the whereabouts of Miho, who hasn't been seen during this time, but Piro is still upset about all that has happened and somewhat evasively refuses direct assistance. Ping and Junko, another of Largo's students who used to be a friend of Miho, work towards finding Miho. Yuki and Kobayashi Yutaka then also become involved with the attempt because of this. That night, Piro and Kimiko discuss Miho and Endgames, which Yuki overhears, the two unaware she is there. This leads Yuki to appropriate Piro's powerless laptop and leave, believing him to still be in love with Miho and that the device might hold clues to finding her. Kimiko and Piro work on his portfolio for Sight and then say goodnight and leave. He returns to his apartment, but Kimiko goes to the CoE club using a pass Miho gave her long ago. Once at the club, Dom mockingly advises her, Yuki unknowingly whisks past her, and she unexpectedly meets up with an old friend, Komugiko. During all this, Piro has left his apartment after looking at his sketchbook and a drawing of Miho. His current location is unknown. Aside from Kimiko, concurrent overlapping events have led almost every main character to converge upon the club for various reasons involving Miho, or in support of others involved. Ed, attempting to destroy Ping, fights with Largo, as the staff of the club have maneuvered Ed and Ping into the protective radius of ex-idol Erika. Yuki and Yutaka get Piro's laptop powered on; she reads the old chat logs between Piro and Miho and follows instructions from her to him. Going to a "hidden-in-plain-sight" hospital room, she finds Miho alive and well, although seemingly in a weakened state. After a heated argument and Miho's goading, Yuki forcibly moves Miho to the club. Shortly after the two arrive in the center of everyone, the bulk of the denizens go into trance-like states while others are fighting or confused about what to do next.
Miho appears to be collapsing. Upon instructions from Erika, Largo finds and then uses his Largo-Phone and the club's sound system to knock out power in the immediate area of the club. During this event, Piro has gone to visit Miho at the "hospital" room, where he discovers that she is missing. Following the blackout, Largo, Erika, and Miho board a train, where Miho decides to return home. However, a large crowd has blocked her path home, apparently waiting for someone's return. The next morning, Piro has been brought to jail, where he has been interrogated by police about Miho's disappearance. He is able to leave jail by paying a suspiciously low bail of about US$100, obtained through a 10,000 yen bill that has been folded into an origami 'zilla and left in the cell. Piro walks back home, where he finds Miho sleeping on a beanbag in the apartment. Piro and Miho then work out some of the confusion between them, which reveals several background events. She explains the Analogue Support Facility as a sort of safehouse, where she was able to come and go when she wanted. Since Ping, in her extreme attempt to find Miho, had posted tons of pictures, videos, and information on the internet, people are now using that to "build a 'real' me", as Miho explains it. During the process, at one point Kimiko calls from the studio, updating Piro on his artwork and telling him some of how she and others found Miho last night and how crazy it was. Largo and Erika, who are riding on the roof of a train in the Miyagi prefecture, also call during the conversation. After a short conversation with both Largo and Erika on the phone, and a bit more conversation with Miho, Piro instructs her to stay in the apartment until they can figure out what to do. Junko and Ping are shown leaving for school, with Junko seemingly taking Ed's shotguns from the previous night with her.
After receiving a phone call from Yutaka, whom Masamichi initially disapproves of, Yuki, who has not changed clothing since the events of the previous chapter, leaves her house, grabs him, and takes him to a rooftop, where they try to explain things after Yutaka was questioned by Asako and Mami. She goes over everything, even why she referred to herself as a "monster", which Yuki's friends previously overheard and misunderstood. Realizing that Miho is the cause of this mess, Yutaka indirectly vows revenge, but Yuki stops him. Yutaka goes anyway and meets his brother in front of Megagamers, who has tracked Miho to the store since the previous night. Yutaka's brother is a member of a group of Nanasawa fans who plan to intervene and remind Piro who his true love is, to get rid of Miho. However, Dom's van is blocking the store's entrance. Though Yuki protests against intervention to the group, Dom, who is unknown to them, performs his own method of intervention anyway and forces Piro to choose between Nanasawa and Miho. It is currently unknown if Dom knows who Miho is, but Miho, in a disguise, overhears the conversation and forces Piro to briefly wear a hat. At the same time, Yuki, deciding that she can wait no longer, steals Dom's van and guns, and rushes into the store with Yutaka in tow. Seeing this, Miho grabs Piro and rushes upstairs, discarding the hat in the process. Yuki subsequently collides with the hat and a presumed explosion occurs, stalling Yuki and Yutaka. Miho and Piro don cosplay outfits as a disguise, escape, and make their way to the local bath house. Just before Yuki grabs Yutaka again, Dom, now trapped under a pile of rubble, expresses his condolences to Yutaka, which he does not understand. The pair quickly follow Miho and Piro and wait for them to leave the bath house.

Books

Megatokyo was first published in print by Studio Ironcat, a partnership announced in September 2002.
Following this, the first book, a compilation of Megatokyo strips under the title Megatokyo Volume One: Chapter Zero, was released by Studio Ironcat in January 2003. According to Gallagher, Studio Ironcat was unable to meet demand for the book, due to problems the company was facing at the time. On July 7, 2003, Gallagher announced that Ironcat would not continue to publish Megatokyo in book form. This was followed by an announcement on August 27, 2003 that Dark Horse Comics would publish Megatokyo Volume 2 and future collected volumes, including a revised edition of Megatokyo Volume 1. The comic once more changed publishers in February 2006, moving from Dark Horse Comics to the CMX Manga imprint of DC Comics. The comic then transferred to CMX's parent Wildstorm, with its last volume published in July 2010. CMX, along with Wildstorm, closed down in 2010. Former publisher Dark Horse regained the rights to the series and planned to release it in omnibus format in January 2013, but the release did not materialize. Six volumes are available for purchase: volumes 1 through 3 from Dark Horse, volumes 4 and 5 by CMX/DC, and volume 6 by Wildstorm. The books have also been translated into German, Italian, French, and Polish. In July 2004, Megatokyo was the tenth best-selling manga property in the United States, and during the week ending February 20, 2005, volume 3 ranked third in the Nielsen BookScan figures, which was not only its highest ranking to date, but also gave it the highest monthly rank for an original English-language manga title. In July 2007, Kodansha announced that it intended to publish Megatokyo in a Japanese-language edition in 2008, in a silver slipcased box as part of Kodansha Box, a new manga line started in November 2006. Depending on reader response, Kodansha hoped to subsequently publish the entire Megatokyo book series. The first volume was released in Japan on May 7, 2009.
Reception

The artwork and characterizations of Megatokyo have received praise from such publications as The New York Times and Comics Bulletin. Many critics praise Megatokyo's character designs and pencil work, rendered entirely in grayscale; conversely, it has been criticized for perceived uniformity
and simplicity in the designs of its peripheral characters, which have been regarded as confusing and difficult to tell apart due to their similar appearances.
Eric Burns of Websnark found the comic to suffer from "incredibly slow pacing" (only about two months of in-universe time have elapsed), unclear direction or resolutions for plot threads, a lack of official character profiles and plot summaries for the uninitiated, and an erratic update schedule. Burns also harshly criticized the often non-canonical filler material Gallagher employs to prevent the comic's front page content from becoming stagnant, such as Shirt Guy Dom, a punchline-driven stick figure comic strip written and illustrated by Megatokyo editor Dominic Nguyen. Following Gallagher taking on Megatokyo as a full-time occupation, some critics have complained that updates should be more frequent than when he worked on the comic part time. Update schedule issues prompted Gallagher to install an update progress bar for readers awaiting the next installment of the comic; however, it has since been removed as it itself often wasn't updated. IGN called Megatokyo's fans "some of the most patient and forgiving in the webcomic world." During an interview, Gallagher stated that Megatokyo fans "always [tell him] they are patient and find that the final comics are always worth the wait," but he feels as though he "[has] a commitment to [his] readers and to [himself] to deliver the best comics [he] can, and to do it on schedule," finally saying that nothing would make him happier than "[getting] a better handle on the time it takes to create each page." Upon missing deadlines, Gallagher often makes self-disparaging comments. Poking fun at this, Jerry "Tycho" Holkins of Penny Arcade has claimed to have "gotten on famously" with Gallagher, ever since he "figured out that [Gallagher] legitimately detests himself and is not hoisting some kind of glamour." While Megatokyo was originally presented as a slapstick comedy, it began focusing more on the romantic relationships between its characters after Caston's departure from the project.
As a result, some fans, preferring the comic's gag-a-day format, have claimed its quality was superior when Caston was writing it. Additionally, it has been said that, without Caston's input, Largo's antics appear contrived. Comics Bulletin regards Megatokyo's characters as convincingly portrayed, commenting that "the reader truly feels connected to the characters, their romantic hijinks, and their wacky misadventures with the personal touches supplied by the author." Likewise, Anime News Network has
set out by Greek theorists. Rather, most of the terminology seems to be a misappropriation on the part of the medieval theorists. Although the church modes have no relation to the ancient Greek modes, the overabundance of Greek terminology does point to an interesting possible origin in the liturgical melodies of the Byzantine tradition. This system is called octoechos and is also divided into eight categories, called echoi. For specific medieval music theorists, see also: Isidore of Seville, Aurelian of Réôme, Odo of Cluny, Guido of Arezzo, Hermannus Contractus, Johannes Cotto (Johannes Afflighemensis), Johannes de Muris, Franco of Cologne, Johannes de Garlandia (Johannes Gallicus), Anonymous IV, Marchetto da Padova (Marchettus of Padua), Jacques of Liège, Johannes de Grocheo, Petrus de Cruce (Pierre de la Croix), and Philippe de Vitry.

Early medieval music (500–1000)

Early chant traditions

Chant (or plainsong) is a monophonic sacred form, a single unaccompanied melody, which represents the earliest known music of the Christian church. Chant developed separately in several European centres. Although the most important were Rome, Hispania, Gaul, Milan, and Ireland, there were others as well. These styles were all developed to support the regional liturgies used when celebrating the Mass there. Each area developed its own chant and rules for celebration. In Spain and Portugal, Mozarabic chant was used and shows the influence of North African music. The Mozarabic liturgy even survived through Muslim rule, though this was an isolated strand and this music was later suppressed in an attempt to enforce conformity on the entire liturgy. In Milan, Ambrosian chant, named after St. Ambrose, was the standard, while Beneventan chant developed around Benevento, another Italian liturgical centre. Gallican chant was used in Gaul, and Celtic chant in Ireland and Great Britain. Around AD 1011, the Roman Catholic Church wanted to standardize the Mass and chant across its empire.
At this time, Rome was the religious centre of western Europe, and Paris was the political centre. The standardization effort consisted mainly of combining these two (Roman and Gallican) regional liturgies. Pope Gregory I (540–604) and Charlemagne (742–814) sent trained singers throughout the Holy Roman Empire (800/962–1806) to teach this new form of chant. This body of chant became known as Gregorian Chant, named after Pope Gregory I. By the 12th and 13th centuries, Gregorian chant had superseded all the other Western chant traditions, with the exception of the Ambrosian chant in Milan and the Mozarabic chant in a few specially designated Spanish chapels. Hildegard von Bingen (1098–1179) was one of the earliest known female composers. She wrote many monophonic works for the Catholic Church, almost all of them for female voices. Early polyphony: organum Around the end of the 9th century, singers in monasteries such as St. Gall in Switzerland began experimenting with adding another part to the chant, generally a voice in parallel motion, singing mostly in perfect fourths or fifths above the original tune (see interval). This development is called organum and represents the beginnings of counterpoint and, ultimately, harmony. Over the next several centuries, organum developed in several ways. The most significant of these developments was the creation of "florid organum" around 1100, sometimes known as the school of St. Martial (named after a monastery in south-central France, which contains the best-preserved manuscript of this repertory). In "florid organum" the original tune would be sung in long notes while an accompanying voice would sing many notes to each one of the original, often in a highly elaborate fashion, all the while emphasizing the perfect consonances (fourths, fifths and octaves), as in the earlier organa.
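The strict parallel organum described above is simple enough to express in code. The following is a minimal sketch, not drawn from any historical source: the chant fragment, the MIDI-number pitch encoding, and the function name are all assumptions made for illustration. The organal voice simply duplicates the chant at a fixed perfect interval.

```python
# Sketch of strict parallel organum: the added voice (vox organalis)
# doubles the chant at a fixed perfect interval.
# Pitches are MIDI note numbers; the chant fragment is hypothetical.

PERFECT_FOURTH = 5  # semitones
PERFECT_FIFTH = 7   # semitones

def parallel_organum(chant, interval=PERFECT_FIFTH):
    """Transpose every chant pitch by the same perfect interval."""
    return [pitch + interval for pitch in chant]

chant = [62, 64, 65, 64, 62]       # D-E-F-E-D, an invented fragment
organal = parallel_organum(chant)  # a fifth above throughout
assert organal == [69, 71, 72, 71, 69]
```

Florid organum breaks exactly this one-to-one pairing: the chant voice holds long notes while the upper voice moves freely against each of them, so a fixed transposition no longer describes the texture.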
Later developments of organum occurred in England, where the interval of the third was particularly favoured, and where organa were likely improvised against an existing chant melody, and at Notre Dame in Paris, which was to be the centre of musical creative activity throughout the thirteenth century. Much of the music from the early medieval period is anonymous. Some of the names may have been poets and lyric writers, and the tunes for which they wrote words may have been composed by others. Attribution of monophonic music of the medieval period is not always reliable. Surviving manuscripts from this period include the Musica Enchiriadis, Codex Calixtinus of Santiago de Compostela, the Magnus Liber, and the Winchester Troper. For information about specific composers or poets writing during the early medieval period, see Pope Gregory I, St. Godric, Hildegard of Bingen, Hucbald, Notker Balbulus, Odo of Arezzo, Odo of Cluny, and Tutilo. Liturgical drama Another musical tradition of Europe originating during the early Middle Ages was the liturgical drama. Liturgical drama developed possibly in the 10th century from the tropes—poetic embellishments of the liturgical texts. One of the tropes, the so-called Quem Quaeritis, belonging to the liturgy of Easter morning, developed into a short play around the year 950. The oldest surviving written source is the Winchester Troper. Around the year 1000 it was sung widely in Northern Europe. Soon afterwards, a similar Christmas play was developed, musically and textually following the Easter one, and other plays followed. There is a controversy among musicologists as to the instrumental accompaniment of such plays, given that the stage directions, very elaborate and precise in other respects, do not request any participation of instruments. These dramas were performed by monks, nuns and priests. In contrast to secular plays, which were spoken, the liturgical drama was always sung.
Many have been preserved sufficiently to allow modern reconstruction and performance (for example the Play of Daniel, which has been recently recorded at least ten times). High medieval music (1000–1300) Goliards The Goliards were itinerant poet-musicians of Europe from the tenth to the middle of the thirteenth century. Most were scholars or ecclesiastics, and they wrote and sang in Latin. Although many of the poems have survived, very little of the music has. They were possibly influential—even decisively so—on the troubadour-trouvère tradition which was to follow. Most of their poetry is secular and, while some of the songs celebrate religious ideals, others are frankly profane, dealing with drunkenness, debauchery and lechery. One of the most important extant sources of Goliards chansons is the Carmina Burana. Ars antiqua The flowering of the Notre Dame school of polyphony from around 1150 to 1250 corresponded to the equally impressive achievements in Gothic architecture: indeed the centre of activity was at the cathedral of Notre Dame itself. Sometimes the music of this period is called the Parisian school, or Parisian organum, and represents the beginning of what is conventionally known as Ars antiqua. This was the period in which rhythmic notation first appeared in western music, mainly a context-based method of rhythmic notation known as the rhythmic modes. This was also the period in which concepts of formal structure developed which were attentive to proportion, texture, and architectural effect. 
Composers of the period alternated florid and discant organum (more note-against-note, as opposed to the succession of many-note melismas against long-held notes found in the florid type), and created several new musical forms: clausulae, which were melismatic sections of organa extracted and fitted with new words and further musical elaboration; conductus, which were songs for one or more voices to be sung rhythmically, most likely in a procession of some sort; and tropes, which were additions of new words and sometimes new music to sections of older chant. All of these genres save one were based upon chant; that is, one of the voices (there were usually three, though sometimes four), nearly always the lowest (the tenor at this point), sang a chant melody, though with freely composed note-lengths, over which the other voices sang organum. The exception to this method was the conductus, a two-voice composition that was freely composed in its entirety. The motet, one of the most important musical forms of the high Middle Ages and Renaissance, developed initially during the Notre Dame period out of the clausula, especially the form using multiple voices as elaborated by Pérotin, who paved the way for this particularly by replacing many of his predecessor (as canon of the cathedral) Léonin's lengthy florid clausulae with substitutes in a discant style. Gradually, there came to be entire books of these substitutes, available to be fitted in and out of the various chants. Since, in fact, there were more than can possibly have been used in context, it is probable that the clausulae came to be performed independently, either in other parts of the mass, or in private devotions. The clausula, thus practised, became the motet when troped with non-liturgical words, and this further developed into a form of great elaboration, sophistication and subtlety in the fourteenth century, the period of Ars nova.
Surviving manuscripts from this era include the Montpellier Codex, Bamberg Codex, and Las Huelgas Codex. Composers of this time include Léonin, Pérotin, W. de Wycombe, Adam de St. Victor, and Petrus de Cruce (Pierre de la Croix). Petrus is credited with the innovation of writing more than three semibreves to fit the length of a breve. Coming before the innovation of imperfect tempus, this practice inaugurated the era of what are now called "Petronian" motets. These late 13th-century works are in three to four parts and have multiple texts sung simultaneously. Originally, the tenor line (from the Latin tenere, "to hold") held a preexisting liturgical chant line in the original Latin, while the text of the one, two, or even three voices above, called the voces organales, provided commentary on the liturgical subject either in Latin or in the vernacular French. The rhythmic values of the voces organales decreased as the parts multiplied, with the duplum (the part above the tenor) having smaller rhythmic values than the tenor, the triplum (the line above the duplum) having smaller rhythmic values than the duplum, and so on. As time went by, the texts of the voces organales became increasingly secular in nature and had less and less overt connection to the liturgical text in the tenor line. The Petronian motet is a highly complex genre, given its mixture of several semibreves per breve with rhythmic modes and sometimes (with increasing frequency) substitution of secular songs for chant in the tenor. Indeed, ever-increasing rhythmic complexity would be a fundamental characteristic of the 14th century, though music in France, Italy, and England would take quite different paths during that time. Cantigas de Santa Maria The Cantigas de Santa Maria ("Canticles of St. Mary") are 420 poems with musical notation, written in Galician-Portuguese during the reign of Alfonso X the Wise (1221–1284).
The manuscript was probably compiled from 1270–1280, and is highly decorated, with an illumination every 10 poems. The illuminations often depict musicians, making the manuscript a particularly important source of medieval music iconography. Though the Cantigas are often attributed to Alfonso, it remains unclear as to whether he was a composer himself, or perhaps a compiler; Alfonso is known to have regularly invited musicians and poets to court, who were undoubtedly involved in the production of the Cantigas. It is one of the largest collections of monophonic (solo) songs from the Middle Ages and is characterized by the mention of the Virgin Mary in every song, while every tenth song is a hymn. The manuscripts have survived in four codices: two at El Escorial, one at Madrid's National Library, and one in Florence, Italy. Some have colored miniatures showing pairs of musicians playing a wide variety of instruments. Troubadours, trouvères and Minnesänger The music of the troubadours and trouvères was a vernacular tradition of monophonic secular song, probably accompanied by instruments, sung by professional, occasionally itinerant, musicians who were as skilled poets as they were singers and instrumentalists. The language of the troubadours was Occitan (also known as the langue d'oc, or Provençal); the language of the trouvères was Old French (also known as langue d'oil). The period of the troubadours corresponded to the flowering of cultural life in Provence which lasted through the twelfth century and into the first decade of the thirteenth. Typical subjects of troubadour song were war, chivalry and courtly love—the love of an idealized woman from afar. The period of the troubadours wound down after the Albigensian Crusade, the fierce campaign by Pope Innocent III to eliminate the Cathar heresy (and northern barons' desire to appropriate the wealth of the south).
Surviving troubadours went to Portugal, Spain, northern Italy or northern France (where the trouvère tradition lived on), where their skills and techniques contributed to the later developments of secular musical culture in those places. The trouvères and troubadours shared similar musical styles, but the trouvères were generally noblemen. The music of the trouvères was similar to that of the troubadours, but was able to survive into the thirteenth century unaffected by the Albigensian Crusade. Most of the more than two thousand surviving trouvère songs include music, and show a sophistication as great as that of the poetry it accompanies. The Minnesänger tradition was the Germanic counterpart to the activity of the troubadours and trouvères to the west. Unfortunately, few sources survive from the time; the sources of Minnesang are mostly from two or three centuries after the peak of the movement, leading to some controversy over the accuracy of these sources. Among the Minnesängers with surviving music are Wolfram von Eschenbach, Walther von der Vogelweide, and Niedhart von Reuenthal. Trovadorismo In the Middle Ages, Galician-Portuguese was the language used in nearly all of Iberia for lyric poetry. From this language derive both modern Galician and Portuguese. The Galician-Portuguese school, which was influenced to some extent (mainly in certain formal aspects) by the Occitan troubadours, is first documented at the end of the twelfth century and lasted until the middle of the fourteenth. The earliest extant composition in this school is usually agreed to be Ora faz ost' o senhor de Navarra by the Portuguese João Soares de Paiva, usually dated just before or after 1200. The troubadours of the movement, not to be confused with the Occitan troubadours (who frequented courts in nearby León and Castile), wrote almost entirely cantigas.
Beginning probably around the middle of the thirteenth century, these songs, known also as cantares or trovas, began to be compiled in collections known as cancioneiros (songbooks). Three such anthologies are known: the Cancioneiro da Ajuda, the Cancioneiro Colocci-Brancuti (or Cancioneiro da Biblioteca Nacional de Lisboa), and the Cancioneiro da Vaticana. In addition to these there is the priceless collection of over 400 Galician-Portuguese cantigas in the Cantigas de Santa Maria, which tradition attributes to Alfonso X. The Galician-Portuguese cantigas can be divided into three basic genres: male-voiced love poetry, called cantigas de amor (or cantigas d'amor); female-voiced love poetry, called cantigas de amigo (cantigas d'amigo); and poetry of insult and mockery, called cantigas d'escarnho e de mal dizer. All three are lyric genres in the technical sense that they were strophic songs with either musical accompaniment or introduction on a stringed instrument. But all three genres also have dramatic elements, leading early scholars to characterize them as lyric-dramatic. The origins of the cantigas d'amor are usually traced to Provençal and Old French lyric poetry, but formally and rhetorically they are quite different. The cantigas d'amigo are probably rooted in a native song tradition, though this view has been contested. The cantigas d'escarnho e maldizer may also (according to Lang) have deep local roots. The latter two genres (totalling around 900 texts) make the Galician-Portuguese lyric unique in the entire panorama of medieval Romance poetry.
Troubadours with surviving melodies Aimeric de Belenoi Aimeric de Peguilhan Airas Nunes Albertet de Sestaro Arnaut Daniel Arnaut de Maruoill Beatritz de Dia Berenguier de Palazol Bernart de Ventadorn Bertran de Born Blacasset Cadenet Daude de Pradas Denis of Portugal Folquet de Marselha Gaucelm Faidit Gui d'Ussel Guilhem Ademar Guilhem Augier Novella Guilhem Magret Guilhem de Saint Leidier Guiraut de Bornelh Guiraut d'Espanha Guiraut Riquier Jaufre Rudel João Soares de Paiva João Zorro Jordan Bonel Marcabru Martín Codax Monge de Montaudon Peire d'Alvernhe Peire Cardenal Peire Raimon de Tolosa Peire Vidal Peirol Perdigon Pistoleta Pons d'Ortaffa Pons de Capduoill Raimbaut d'Aurenga Raimbaut de Vaqueiras Raimon Jordan Raimon de Miraval Rigaut de Berbezilh Uc Brunet Uc de Saint Circ William IX of Aquitaine Composers of the high and late medieval era Late medieval music (1300–1400) France: Ars nova The beginning of the Ars nova is one of the few clear chronological divisions in medieval music, since it corresponds to the publication of the Roman de Fauvel, a huge compilation of poetry and music, in 1310 and 1314. The Roman de Fauvel is a satire on abuses in the medieval church, and is filled with medieval motets, lais, rondeaux and other new secular forms. While most of the music is anonymous, it contains several pieces by Philippe de Vitry, one of the first composers of the isorhythmic motet, a development which distinguishes the fourteenth century. The isorhythmic motet was perfected by Guillaume de Machaut, the finest composer of the time. 
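The isorhythmic principle just mentioned can be sketched briefly in code. This is a hedged illustration, not a transcription of any piece by Vitry or Machaut: the pitch and duration patterns below are invented. An isorhythmic tenor cycles a rhythmic pattern (the talea) against a pitch series (the color) of a different length, so the pairings shift on each pass until the two cycles realign.

```python
from math import lcm  # Python 3.9+

def isorhythmic_tenor(color, talea):
    """Pair pitches (color) with durations (talea), each cycling
    independently, until both patterns realign (lcm of their lengths)."""
    n = lcm(len(color), len(talea))
    return [(color[i % len(color)], talea[i % len(talea)]) for i in range(n)]

color = ["D", "F", "E", "C"]  # hypothetical 4-note pitch series
talea = [3, 1, 2]             # hypothetical 3-value rhythm (relative durations)
tenor = isorhythmic_tenor(color, talea)
assert len(tenor) == 12                               # 4 and 3 realign after lcm(4, 3) = 12
assert tenor[0] == ("D", 3) and tenor[4] == ("D", 1)  # same pitch, new duration
```

The mismatch in cycle lengths is the point of the technique: each return of the color carries a different rhythmic profile, which is what the "pervading isorhythmic texture" of the late Ars subtilior pushed to its extreme.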
During the Ars nova era, secular music acquired a polyphonic sophistication formerly found only in sacred music, a development not surprising considering the secular character of the early Renaissance (while this music is typically considered "medieval", the social forces that produced it were responsible for the beginning of the literary and artistic Renaissance in Italy—the distinction between Middle Ages and Renaissance is a blurry one, especially considering arts as different as music and painting). The term "Ars nova" (new art, or new technique) was coined by Philippe de Vitry in his treatise of that name (probably written in 1322), in order to distinguish the practice from the music of the immediately preceding age. The dominant secular genre of the Ars Nova was the chanson, as it would continue to be in France for another two centuries. These chansons were composed in musical forms corresponding to the poetry they set, which were in the so-called formes fixes of rondeau, ballade, and virelai. These forms significantly affected the development of musical structure in ways that are felt even today; for example, the ouvert-clos rhyme-scheme shared by all three demanded a musical realization which contributed directly to the modern notion of antecedent and consequent phrases. It was in this period, too, that the long tradition of setting the mass ordinary began. This tradition started around mid-century with isolated or paired settings of Kyries, Glorias, etc., but Machaut composed what is thought to be the first complete mass conceived as one composition. The sound world of Ars Nova music is very much one of linear primacy and rhythmic complexity. "Resting" intervals are the fifth and octave, with thirds and sixths considered dissonances. Leaps of more than a sixth in individual voices are not uncommon, leading to speculation of instrumental participation at least in secular performance.
Surviving French manuscripts include the Ivrea Codex and the Apt Codex. For information about specific French composers writing in the late medieval era, see Jehan de Lescurel, Philippe de Vitry, Guillaume de Machaut, Borlet, Solage, and François Andrieu. Italy: Trecento Most of the music of Ars nova was French in origin; however, the term is often loosely applied to all of the music of the fourteenth century, especially to include the secular music in Italy. There this period was often referred to as the Trecento. Italian music has always been known for its lyrical or melodic character, and this goes back to the 14th century in many respects. Italian secular music of this time (what little surviving liturgical music there is, is similar to the French except for somewhat different notation) featured what has been called the cantilena style, with a florid top voice supported by two (or even one; a fair amount of Italian Trecento music is for only two voices) that are more regular and slower moving. This type of texture remained a feature of Italian music in the popular 15th and 16th century secular genres as well, and was an important influence on the eventual development of the trio texture that revolutionized music in the 17th century. There were three main forms for secular works in the Trecento. One was the madrigal, not the same as that of 150–250 years later, but with a verse/refrain-like form. Three-line stanzas, each with different words, alternated with a two-line ritornello, with the same text at each appearance. Perhaps we can see the seeds of the subsequent late-Renaissance and Baroque ritornello in this device; it too returns again and again, recognizable each time, in contrast with its surrounding disparate sections. Another form, the caccia ("chase"), was written for two voices in a canon at the unison. Sometimes, this form also featured a ritornello, which was occasionally also in a canonic style.
Usually, the name of this genre provided a double meaning, since the texts of caccia were primarily about hunts and related outdoor activities, or at least action-filled scenes. The third main form was the ballata, which was roughly equivalent to the French virelai. Surviving Italian manuscripts include the Squarcialupi Codex and the Rossi Codex. For information about specific Italian composers writing in the late medieval era, see Francesco Landini, Gherardello da Firenze, Andrea da Firenze, Lorenzo da Firenze, Giovanni da Firenze (aka Giovanni da Cascia), Bartolino da Padova, Jacopo da Bologna, Donato da Cascia, Lorenzo Masini, Niccolò da Perugia, and Maestro Piero. Germany: Geisslerlieder The Geisslerlieder were the songs of wandering bands of flagellants, who sought to appease the wrath of an angry God by penitential music accompanied by mortification of their bodies. There were two separate periods of activity of Geisslerlied: one around the middle of the thirteenth century, from which, unfortunately, no music survives (although numerous lyrics do); and another from 1349, for which both words and music survive intact due to the attention of a single priest who wrote about the movement and recorded its music. This second period corresponds to the spread of the Black Death in Europe, and documents one of the most terrible events in European history. Both periods of Geisslerlied activity were mainly in Germany. Ars subtilior As often seen at the end of any musical era, the end of the medieval era is marked by a highly manneristic style known as Ars subtilior. In some ways, this was an attempt to meld the French and Italian styles. This music was highly stylized, with a rhythmic complexity that was not matched until the 20th century. 
In fact, not only was the rhythmic complexity of this repertoire largely unmatched for five and a half centuries, with extreme syncopations, mensural trickery, and even examples of Augenmusik (such as a chanson by Baude Cordier written out in manuscript in the shape of a heart), but also its melodic material was quite complex as well, particularly in its interaction with the rhythmic structures. The practice of isorhythm, already discussed under Ars nova, continued to develop through the late century and in fact did not achieve its highest degree of sophistication until early in the 15th century. Instead of using isorhythmic techniques in one or two voices, or trading them among voices, some works came to feature a pervading isorhythmic texture which rivals the integral serialism of the 20th century in its systematic ordering of rhythmic and tonal elements. The term "mannerism" was applied by later scholars, as it often is, in response to an impression of sophistication being practised for its own sake, a malady which some authors have felt infected the Ars subtilior. One of the most important extant sources of Ars Subtilior chansons is the Chantilly Codex. For information about specific composers writing music in the Ars subtilior style, see Anthonello de Caserta, Philippus de Caserta (aka Philipoctus de Caserta), Johannes Ciconia, Matteo da Perugia, Lorenzo da Firenze, Grimace, Jacob Senleches, and Baude Cordier. Transitioning to the Renaissance Demarcating the end of the medieval era and the beginning of the Renaissance era, with regard to the composition of music, is difficult.
While the music of the fourteenth century is fairly obviously medieval in conception, the music of the early fifteenth century is often conceived as belonging to a transitional period, not only retaining some of the ideals of the end of the Middle Ages (such as a type of polyphonic writing in which the parts differ widely from each other in character, as each has its specific textural function), but also showing some of the characteristic traits of the Renaissance (such as the increasingly international style developing through the diffusion of Franco-Flemish musicians throughout Europe, and in terms of texture an increasing equality of parts). Music historians do not agree on when the Renaissance era began, but most historians agree that England was still a medieval society in the early fifteenth century (see periodization issues of the Middle Ages).
the Middle Ages and is characterized by the mention of the Virgin Mary in every song, while every tenth song is a hymn. The manuscripts have survived in four codices: two at El Escorial, one at Madrid's National Library, and one in Florence, Italy. Some have colored miniatures showing pairs of musicians playing a wide variety of instruments. Troubadours, trouvères and Minnesänger The music of the troubadours and trouvères was a vernacular tradition of monophonic secular song, probably accompanied by instruments, sung by professional, occasionally itinerant, musicians who were as skilled as poets as they were singers and instrumentalists. The language of the troubadours was Occitan (also known as the langue d'oc, or Provençal); the language of the trouvères was Old French (also known as langue d'oil). The period of the troubadours corresponded to the flowering of cultural life in Provence which lasted through the twelfth century and into the first decade of the thirteenth. Typical subjects of troubadour song were war, chivalry and courtly love—the love of an idealized woman from afar. The period of the troubadours wound down after the Albigensian Crusade, the fierce campaign by Pope Innocent III to eliminate the Cathar heresy (and northern barons' desire to appropriate the wealth of the south). Surviving troubadours went either to Portugal, Spain, northern Italy or northern France (where the trouvère tradition lived on), where their skills and techniques contributed to the later developments of secular musical culture in those places. The trouvères and troubadours shared similar musical styes, but the trouvères were generally noblemen. The music of the trouvères was similar to that of the troubadours, but was able to survive into the thirteenth century unaffected by the Albigensian Crusade. Most of the more than two thousand surviving trouvère songs include music, and show a sophistication as great as that of the poetry it accompanies. 
The Minnesänger tradition was the Germanic counterpart to the activity of the troubadours and trouvères to the west. Unfortunately, few sources survive from the time; the sources of Minnesang are mostly from two or three centuries after the peak of the movement, leading to some controversy over the accuracy of these sources. Among the Minnesängers with surviving music are Wolfram von Eschenbach, Walther von der Vogelweide, and Niedhart von Reuenthal. Trovadorismo In the Middle Ages, Galician-Portuguese was the language used in nearly all of Iberia for lyric poetry. From this language derive both modern Galician and Portuguese. The Galician-Portuguese school, which was influenced to some extent (mainly in certain formal aspects) by the Occitan troubadours, is first documented at the end of the twelfth century and lasted until the middle of the fourteenth. The earliest extant composition in this school is usually agreed to be Ora faz ost' o senhor de Navarra by the Portuguese João Soares de Paiva, usually dated just before or after 1200. The troubadours of the movement, not to be confused with the Occitan troubadours (who frequented courts in nearby León and Castile), wrote almost entirely cantigas. Beginning probably around the middle of the thirteenth century, these songs, known also as cantares or trovas, began to be compiled in collections known as cancioneiros (songbooks). Three such anthologies are known: the Cancioneiro da Ajuda, the Cancioneiro Colocci-Brancuti (or Cancioneiro da Biblioteca Nacional de Lisboa), and the Cancioneiro da Vaticana. In addition to these there is the priceless collection of over 400 Galician-Portuguese cantigas in the Cantigas de Santa Maria, which tradition attributes to Alfonso X. 
The Galician-Portuguese cantigas can be divided into three basic genres: male-voiced love poetry, called cantigas de amor (or cantigas d'amor) female-voiced love poetry, called cantigas de amigo (cantigas d'amigo); and poetry of insult and mockery called cantigas d'escarnho e de mal dizer. All three are lyric genres in the technical sense that they were strophic songs with either musical accompaniment or introduction on a stringed instrument. But all three genres also have dramatic elements, leading early scholars to characterize them as lyric-dramatic. The origins of the cantigas d'amor are usually traced to Provençal and Old French lyric poetry, but formally and rhetorically they are quite different. The cantigas d'amigo are probably rooted in a native song tradition, though this view has been contested. The cantigas d'escarnho e maldizer may also (according to Lang) have deep local roots. The latter two genres (totalling around 900 texts) make the Galician-Portuguese lyric unique in the entire panorama of medieval Romance poetry. 
Troubadours with surviving melodies Aimeric de Belenoi Aimeric de Peguilhan Airas Nunes Albertet de Sestaro Arnaut Daniel Arnaut de Maruoill Beatritz de Dia Berenguier de Palazol Bernart de Ventadorn Bertran de Born Blacasset Cadenet Daude de Pradas Denis of Portugal Folquet de Marselha Gaucelm Faidit Gui d'Ussel Guilhem Ademar Guilhem Augier Novella Guilhem Magret Guilhem de Saint Leidier Guiraut de Bornelh Guiraut d'Espanha Guiraut Riquier Jaufre Rudel João Soares de Paiva João Zorro Jordan Bonel Marcabru Martín Codax Monge de Montaudon Peire d'Alvernhe Peire Cardenal Peire Raimon de Tolosa Peire Vidal Peirol Perdigon Pistoleta Pons d'Ortaffa Pons de Capduoill Raimbaut d'Aurenga Raimbaut de Vaqueiras Raimon Jordan Raimon de Miraval Rigaut de Berbezilh Uc Brunet Uc de Saint Circ William IX of Aquitaine Composers of the high and late medieval era Late medieval music (1300–1400) France: Ars nova The beginning of the Ars nova is one of the few clear chronological divisions in medieval music, since it corresponds to the publication of the Roman de Fauvel, a huge compilation of poetry and music, in 1310 and 1314. The Roman de Fauvel is a satire on abuses in the medieval church, and is filled with medieval motets, lais, rondeaux and other new secular forms. While most of the music is anonymous, it contains several pieces by Philippe de Vitry, one of the first composers of the isorhythmic motet, a development which distinguishes the fourteenth century. The isorhythmic motet was perfected by Guillaume de Machaut, the finest composer of the time. 
During the Ars nova era, secular music acquired a polyphonic sophistication formerly found only in sacred music, a development not surprising considering the secular character of the early Renaissance (while this music is typically considered "medieval", the social forces that produced it were responsible for the beginning of the literary and artistic Renaissance in Italy—the distinction between Middle Ages and Renaissance is a blurry one, especially considering arts as different as music and painting). The term "Ars nova" (new art, or new technique) was coined by Philippe de Vitry in his treatise of that name (probably written in 1322), in order to distinguish the practice from the music of the immediately preceding age. The dominant secular genre of the Ars Nova was the chanson, as it would continue to be in France for another two centuries. These chansons were composed in musical forms corresponding to the poetry they set, which were in the so-called formes fixes of rondeau, ballade, and virelai. These forms significantly affected the development of musical structure in ways that are felt even today; for example, the ouvert-clos rhyme-scheme shared by all three demanded a musical realization which contributed directly to the modern notion of antecedent and consequent phrases. It was in this period, too, in which began the long tradition of setting the mass ordinary. This tradition started around mid-century with isolated or paired settings of Kyries, Glorias, etc., but Machaut composed what is thought to be the first complete mass conceived as one composition. The sound world of Ars Nova music is very much one of linear primacy and rhythmic complexity. "Resting" intervals are the fifth and octave, with thirds and sixths considered dissonances. Leaps of more than a sixth in individual voices are not uncommon, leading to speculation of instrumental participation at least in secular performance. 
Surviving French manuscripts include the Ivrea Codex and the Apt Codex. For information about specific French composers writing in the late medieval era, see Jehan de Lescurel, Philippe de Vitry, Guillaume de Machaut, Borlet, Solage, and François Andrieu. Italy: Trecento Most of the music of Ars nova was French in origin; however, the term is often loosely applied to all of the music of the fourteenth century, especially to include the secular music in Italy. There this period was often referred to as the Trecento. Italian music has always been known for its lyrical or melodic character, and this goes back to the 14th century in many respects. Italian secular music of this time (what little surviving liturgical music there is, is similar to the French except for somewhat different notation) featured what has been called the cantilena style, with a florid top voice supported by two voices (or even one; a fair amount of Italian Trecento music is for only two voices) that are more regular and slower moving. This type of texture remained a feature of Italian music in the popular 15th and 16th century secular genres as well, and was an important influence on the eventual development of the trio texture that revolutionized music in the 17th. There were three main forms for secular works in the Trecento. One was the madrigal, not the same as that of 150–250 years later, but with a verse/refrain-like form. Three-line stanzas, each with different words, alternated with a two-line ritornello, with the same text at each appearance. Perhaps we can see the seeds of the subsequent late-Renaissance and Baroque ritornello in this device; it too returns again and again, recognizable each time, in contrast with its surrounding disparate sections. Another form, the caccia ("chase"), was written for two voices in a canon at the unison. Sometimes, this form also featured a ritornello, which was occasionally also in a canonic style.
Usually, the name of this genre provided a double meaning, since the texts of caccia were primarily about hunts and related outdoor activities, or at least action-filled scenes. The third main form was the ballata, which was roughly equivalent to the French virelai. Surviving Italian manuscripts include the Squarcialupi Codex and the Rossi Codex. For information about specific Italian composers writing in the late medieval era, see Francesco Landini, Gherardello da Firenze, Andrea da Firenze, Lorenzo da Firenze, Giovanni da Firenze (aka Giovanni da Cascia), Bartolino da Padova, Jacopo da Bologna, Donato da Cascia, Lorenzo Masini, Niccolò da Perugia, and Maestro Piero. Germany: Geisslerlieder The Geisslerlieder were the songs of wandering bands of flagellants, who sought to appease the wrath of an angry God by penitential music accompanied by mortification of their bodies. There were two separate periods of activity of Geisslerlied: one around the middle of the thirteenth century, from which, unfortunately, no music survives (although numerous lyrics do); and another from 1349, for which both words and music survive intact due to the attention of a single priest who wrote about the movement and recorded its music. This second period corresponds to the spread of the Black Death in Europe, and documents one of the most terrible events in European history. Both periods of Geisslerlied activity were mainly in Germany. Ars subtilior As often seen at the end of any musical era, the end of the medieval era is marked by a highly manneristic style known as Ars subtilior. In some ways, this was an attempt to meld the French and Italian styles. This music was highly stylized, with a rhythmic complexity that was not matched until the 20th century. 
In fact, not only was the rhythmic complexity of this repertoire largely unmatched for five and a half centuries, with extreme syncopations, mensural trickery, and even examples of Augenmusik (such as a chanson by Baude Cordier written out in manuscript in the shape of a heart), but its melodic material was quite complex as well, particularly in its interaction with the rhythmic structures. The practice of isorhythm, already discussed under Ars nova, continued to develop through the late century and in fact did not achieve its highest degree of sophistication until early in the 15th century. Instead of using isorhythmic techniques in one or two voices, or trading them among voices, some works came to feature a pervading isorhythmic texture which rivals the integral serialism of the 20th century in its systematic ordering of rhythmic and tonal elements. The term "mannerism" was applied by later scholars, as it often is, in response to an impression of sophistication being practised for its own sake, a malady which some authors have felt infected the Ars subtilior. One of the most important extant sources of Ars subtilior chansons is the Chantilly Codex. For information about specific composers writing music in the Ars subtilior style, see Anthonello de Caserta, Philippus de Caserta (aka Philipoctus de Caserta), Johannes Ciconia, Matteo da Perugia, Lorenzo da Firenze, Grimace, Jacob Senleches, and Baude Cordier. Transitioning to the Renaissance Demarcating the end of the medieval era and the beginning of the Renaissance era, with regard to the composition of music, is difficult.
While the music of the fourteenth century is fairly obviously medieval in conception, the music of the early fifteenth century is often conceived as belonging to a transitional period, not only retaining some of the ideals of the end of the Middle Ages (such as a type of polyphonic writing in which the parts differ widely from each other in character, as each has its specific textural function), but also showing some of the characteristic traits of the Renaissance (such as the increasingly international style developing through the diffusion of Franco-Flemish musicians throughout Europe, and in terms of texture an increasing equality of parts). Music historians do not agree on when the Renaissance era began, but most historians agree that England was still a medieval society in the early fifteenth century (see periodization issues of the Middle Ages). While there is no consensus, 1400 is a useful marker, because it was around that time that the Renaissance came into full swing in Italy. The increasing reliance on the interval of the third as a consonance is one of the most pronounced features of transition into the Renaissance. Polyphony, in use since the 12th century, became increasingly elaborate with highly independent voices throughout the 14th century. With John Dunstaple and other English composers, partly through the local technique of faburden (an improvisatory process in which a chant melody and a written part predominantly in parallel sixths above it are ornamented by one sung in perfect fourths below the latter, and which later took hold on the continent as "fauxbordon"), the interval of the third emerges as an important musical development; because of this Contenance Angloise ("English countenance"), English composers' music is often regarded as the first to sound less truly bizarre to 2000s-era audiences who are not trained in music history. 
English stylistic tendencies in this regard had come to fruition and began to influence continental composers as early as the 1420s, as can be seen in works of the young Dufay, among others. While the Hundred Years' War continued, English nobles, armies, their chapels and retinues, and therefore some of their composers, travelled in France and performed their music there; it must also of course be remembered that the English controlled portions of northern France at this time. English manuscripts include the Worcester Fragments, the Old St. Andrews Music Book, the Old Hall Manuscript, and the Egerton Manuscript. For information about specific composers who are considered transitional between the medieval and the Renaissance, see Zacara da Teramo, Paolo da Firenze, Giovanni Mazzuoli, Antonio da Cividale, Antonius Romanus, Bartolomeo da Bologna, Roy Henry, Arnold de Lantins, Leonel Power, and John Dunstaple. An early composer from the Franco-Flemish School of the Renaissance was Johannes Ockeghem (1410/1425–1497). He was the most famous member of the Franco-Flemish School in the last half of the 15th century, and is often considered the most influential composer between Dufay and Josquin des Prez. Ockeghem probably studied with Gilles Binchois, and at least was closely associated with him at the Burgundian court. Antoine Busnois wrote a motet in honor of Ockeghem. Ockeghem is a direct link from the Burgundian style to the next generation of Netherlanders, such as Obrecht and Josquin. A strong influence on Josquin des Prez and the subsequent generation of Netherlanders, Ockeghem was famous throughout Europe for his expressive music, although he was equally renowned for his technical prowess. Influence The musical styles of Pérotin influenced 20th-century composers such as John Luther Adams and minimalist composer Steve Reich. Bardcore, which involves remixing famous pop songs to have a medieval instrumentation, became a popular meme in 2020.
as low-noise microwave amplifiers in radio telescopes, though these have largely been replaced by amplifiers based on FETs. During the early 1960s, the Jet Propulsion Laboratory developed a maser to provide ultra-low-noise amplification of S-band microwave signals received from deep space probes. This maser used deeply refrigerated helium to chill the amplifier down to a temperature of 4 kelvin. Amplification was achieved by exciting a ruby comb with a 12.0 gigahertz klystron. In the early years, it took days to chill and remove the impurities from the hydrogen lines. Refrigeration was a two-stage process, with a large Linde unit on the ground and a crosshead compressor within the antenna. The final injection was made through a micrometer-adjustable entry to the chamber. The whole system noise temperature looking at cold sky (2.7 kelvin in the microwave band) was 17 kelvin. This gave such a low noise figure that the Mariner IV space probe could send still pictures from Mars back to Earth even though the output power of its radio transmitter was only 15 watts, and hence the total signal power received was only −169 decibels with respect to a milliwatt (dBm). Hydrogen maser The hydrogen maser is used as an atomic frequency standard. Together with other kinds of atomic clocks, these help make up the International Atomic Time standard ("Temps Atomique International" or "TAI" in French). This is the international time scale coordinated by the International Bureau of Weights and Measures. Norman Ramsey and his colleagues first conceived of the maser as a timing standard. More recent masers are practically identical to their original design. Maser oscillations rely on the stimulated emission between two hyperfine energy levels of atomic hydrogen. Here is a brief description of how they work: First, a beam of atomic hydrogen is produced. This is done by subjecting the gas at low pressure to a high-frequency radio wave discharge.
The next step is "state selection"—in order to get some stimulated emission, it is necessary to create a population inversion of the atoms. This is done in a way that is very similar to the Stern–Gerlach experiment. After passing through an aperture and a magnetic field, many of the atoms in the beam are left in the upper energy level of the lasing transition. From this state, the atoms can decay to the lower state and emit some microwave radiation. A high Q factor (quality factor) microwave cavity confines the microwaves and reinjects them repeatedly into the atom beam. The stimulated emission amplifies the microwaves on each pass through the beam. This combination of amplification and feedback is what defines all oscillators. The resonant frequency of the microwave cavity is tuned to the frequency of the hyperfine energy transition of hydrogen: 1,420,405,752 hertz. A small fraction of the signal in the microwave cavity is coupled into a coaxial cable and then sent to a coherent radio receiver. The microwave signal coming out of the maser is very weak (a few picowatts). The frequency of the signal is fixed and extremely stable. The coherent receiver is used to amplify the signal and change the frequency. This is done using a series of phase-locked loops and a high performance quartz oscillator. Astrophysical masers Maser-like stimulated emission has also been observed in nature from interstellar space, and it is frequently called "superradiant emission" to distinguish it from laboratory masers. Such emission is observed from molecules such as water (H2O), hydroxyl radicals (•OH), methanol (CH3OH), formaldehyde (HCHO), and silicon monoxide
(SiO). Water molecules in star-forming regions can undergo a population inversion and emit radiation at about 22.0 GHz, creating the brightest spectral line in the radio universe.
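The specific figures quoted in this article are easy to sanity-check numerically. The following sketch (the helper functions are illustrative, not from any maser literature) converts the hydrogen hyperfine frequency and the water maser line to their wavelengths, and the Mariner IV received power from dBm to watts:

```python
# Speed of light in vacuum, m/s (exact by SI definition)
C = 299_792_458.0

def wavelength_m(freq_hz: float) -> float:
    """Free-space wavelength corresponding to a given frequency."""
    return C / freq_hz

def dbm_to_watts(dbm: float) -> float:
    """Convert a power level in dBm to watts (0 dBm = 1 milliwatt)."""
    return 1e-3 * 10 ** (dbm / 10)

# Hydrogen hyperfine transition, 1,420,405,752 Hz -> the famous 21 cm line
print(f"{wavelength_m(1_420_405_752) * 100:.1f} cm")  # 21.1 cm

# Water maser line near 22.0 GHz -> about 1.4 cm
print(f"{wavelength_m(22.0e9) * 100:.2f} cm")         # 1.36 cm

# Mariner IV downlink power of -169 dBm -> about 1.3e-20 W
print(f"{dbm_to_watts(-169.0):.2e} W")                # 1.26e-20 W
```

The last conversion makes concrete just how faint the received deep-space signal was: tens of zeptowatts, which is why a 17 kelvin system noise temperature mattered.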
Some water masers also emit radiation from a rotational transition at a frequency of 96 GHz. Extremely powerful masers, associated with active galactic nuclei, are known as megamasers and are up to a million times more powerful than stellar masers. Terminology The meaning of the term maser has changed slightly since its introduction. Initially the acronym was universally given as "microwave amplification by stimulated emission of radiation", which described devices which emitted in the microwave region of the electromagnetic spectrum. The principle and concept of stimulated emission has since been extended to more devices and frequencies. Thus, the original acronym is sometimes modified, as suggested by Charles H. Townes, to "molecular amplification by stimulated emission of radiation." Some have asserted that Townes's efforts to extend the acronym in this way were primarily motivated by the desire to increase the importance of his invention, and his reputation in the scientific community. When the laser was developed, Townes and Schawlow and their colleagues at Bell Labs pushed the use of the term optical maser, but this was largely abandoned in favor of laser, coined by their rival Gordon Gould. In modern usage, devices that emit in the X-ray through infrared portions of the spectrum are typically called lasers, and devices that emit in the microwave region and below are commonly called masers, regardless of whether they emit microwaves or other frequencies. Gould originally proposed distinct names for devices that emit in each portion of the spectrum, including grasers (gamma ray lasers), xasers (x-ray lasers), uvasers (ultraviolet lasers), lasers (visible lasers), irasers (infrared lasers), masers (microwave masers), and rasers (RF masers). Most of these terms never caught on, however, and all have now become (apart from in science fiction) obsolete except for maser and laser. 
In popular culture In the Godzilla franchise, the Japanese Self-Defense Forces (JSDF) often use fictional maser tanks in a futile effort to defend Japan from Godzilla and other Kaiju. See also Spaser List of laser types
style. His designs tend to include a strong sense of geometry, often being based on very simple shapes, yet creating unique volumes of space. His buildings are often made of brick, yet his use of material is wide, varied, and often unique. His trademark style can be seen widely in Switzerland, particularly the Ticino region, and also in the Mediatheque in Villeurbanne (1988), a cathedral in Évry (1995), and the San Francisco Museum of Modern Art or SFMOMA (1994). He also designed the Europa-Park Dome, which houses many major events at the Europa-Park theme park resort in Germany. Religious works by Botta, including the Cymbalista Synagogue and Jewish Heritage Center, were shown in London at the Royal Institute of British Architects in an exhibition entitled Architetture del Sacro: Prayers in Stone. “A church is the place, par excellence, of architecture,” he said in an interview with architectural historian Judith Dupré. “When you enter a church, you already are part of what has transpired and will transpire there. The church is a house that puts a believer in a dimension where he or she is the protagonist. The sacred directly lives in the collective. Man becomes a participant in a church, even if he never says anything.” In 1998, he designed the new bus station for Vimercate (near Milan), a red brick building linked to many facilities, underlining the city's recent development. He worked on La Scala's theatre renovation, which proved controversial as preservationists feared that historic details would be lost. In 2004, he designed Museum One of the Leeum, Samsung Museum of Art in Seoul, South Korea.
the people an end to the Triumvirate in favor of Antony's sole rule. However, when Octavian returned to the city with his army, the pair were forced to retreat to Perusia in Etruria. Octavian placed the city under siege while Lucius waited for Antony's legions in Gaul to come to his aid. Away in the East and embarrassed by Fulvia's actions, Antony gave no instructions to his legions. Without reinforcements, Lucius and Fulvia were forced to surrender in February 40 BC. While Octavian pardoned Lucius for his role in the war and even granted him command in Spain as his chief lieutenant there, Fulvia was forced to flee to Greece with her children. With the war over, Octavian was left in sole control over Italy. When Antony's governor of Gaul died, Octavian took over his legions there, further strengthening his control over the West. Despite the Parthian Empire's invasion of Rome's eastern territories, Fulvia's civil war forced Antony to leave the East and return to Rome in order to secure his position. Meeting her in Athens, Antony rebuked Fulvia for her actions before sailing on to Italy with his army to face Octavian, laying siege to Brundisium. This new conflict proved untenable for both Octavian and Antony, however. Their centurions, who had become important figures politically, refused to fight due to their shared service under Caesar. The legions under their command followed suit. Meanwhile, in Sicyon, Fulvia died of a sudden and unknown illness. Fulvia's death and the mutiny of their soldiers allowed the triumvirs to effect a reconciliation through a new power sharing agreement in September 40 BC. The Roman world was redivided, with Antony receiving the Eastern provinces, Octavian the Western provinces, and Lepidus relegated to a clearly junior position as governor of Africa. This agreement, known as the Treaty of Brundisium, reinforced the Triumvirate and allowed Antony to begin preparing for Caesar's long-awaited campaign against the Parthian Empire. 
As a symbol of their renewed alliance, Antony married Octavia, Octavian's sister, in October 40 BC. Antony's Parthian War Roman–Parthian relations The rise of the Parthian Empire in the 3rd century BC and Rome's expansion into the Eastern Mediterranean during the 2nd century BC brought the two powers into direct contact, causing centuries of tumultuous and strained relations. Though periods of peace developed cultural and commercial exchanges, war was a constant threat. Influence over the buffer state of the Kingdom of Armenia, located to the north-east of Roman Syria, was often a central issue in the Roman-Parthian conflict. In 95 BC, Tigranes the Great, a Parthian ally, became king. Tigranes would later aid Mithradates of Pontus against Rome before being decisively defeated by Pompey in 66 BC. Thereafter, with his son Artavasdes in Rome as a hostage, Tigranes would rule Armenia as an ally of Rome until his death in 55 BC. Rome then released Artavasdes, who succeeded his father as king. In 53 BC, Rome's governor of Syria, Marcus Licinius Crassus, led an expedition across the Euphrates River into Parthian territory to confront the Parthian Shah Orodes II. Artavasdes II offered Crassus the aid of nearly forty thousand troops to assist his Parthian expedition on the condition that Crassus invade through Armenia as the safer route. Crassus refused, choosing instead the more direct route by crossing the Euphrates directly into desert Parthian territory. Crassus' actions proved disastrous as his army was defeated at the Battle of Carrhae by a numerically inferior Parthian force. Crassus' defeat forced Armenia to shift its loyalty to Parthia, with Artavasdes II's sister marrying Orodes' son and heir Pacorus. In early 44 BC, Julius Caesar announced his intentions to invade Parthia and restore Roman power in the East. 
His reasons were to punish the Parthians for assisting Pompey in the recent civil war, to avenge Crassus' defeat at Carrhae, and especially to match the glory of Alexander the Great for himself. Before Caesar could launch his campaign, however, he was assassinated. As part of the compromise between Antony and the Republicans to restore order following Caesar's murder, Publius Cornelius Dolabella was assigned the governorship of Syria and command over Caesar's planned Parthian campaign. The compromise did not hold, however, and the Republicans were forced to flee to the East. The Republicans directed Quintus Labienus to attract the Parthians to their side in the resulting war against Antony and Octavian. After the Republicans were defeated at the Battle of Philippi, Labienus joined the Parthians. Despite Rome's internal turmoil during the time, the Parthians did not immediately benefit from the power vacuum in the East due to Orodes II's reluctance despite Labienus' urgings to the contrary. In the summer of 41 BC, Antony, to reassert Roman power in the East, conquered Palmyra on the Roman-Parthian border. Antony then spent the winter of 41 BC in Alexandria with Cleopatra, leaving only two legions to defend the Syrian border against Parthian incursions. The legions, however, were composed of former Republican troops and Labienus convinced Orodes II to invade. Parthian Invasion A Parthian army, led by Orodes II's eldest son Pacorus, invaded Syria in early 40 BC. Labienus, the Republican ally of Brutus and Cassius, accompanied him to advise him and to rally the former Republican soldiers stationed in Syria to the Parthian cause. Labienus recruited many of the former Republican soldiers to the Parthian campaign in opposition to Antony. The joint Parthian–Roman force, after initial success in Syria, separated to lead their offensive in two directions: Pacorus marched south toward Hasmonean Judea while Labienus crossed the Taurus Mountains to the north into Cilicia. 
Labienus conquered southern Anatolia with little resistance. The Roman governor of Asia, Lucius Munatius Plancus, a partisan of Antony, was forced to flee his province, allowing Labienus to recruit the Roman soldiers stationed there. For his part, Pacorus advanced south to Phoenicia and Palestine. In Hasmonean Judea, the exiled prince Antigonus allied himself with the Parthians. When his brother, Rome's client king Hyrcanus II, refused to accept Parthian domination, he was deposed in favor of Antigonus as Parthia's client king in Judea. Pacorus' conquest had captured much of the Syrian and Palestinian interior, with much of the Phoenician coast occupied as well. The city of Tyre remained the last major Roman outpost in the region. Antony, then in Egypt with Cleopatra, did not respond immediately to the Parthian invasion. Though he left Alexandria for Tyre in early 40 BC, when he learned of the civil war between his wife and Octavian, he was forced to return to Italy with his army to secure his position in Rome rather than defeat the Parthians. Instead, Antony dispatched Publius Ventidius Bassus to check the Parthian advance. Arriving in the East in spring 39 BC, Ventidius surprised Labienus near the Taurus Mountains, claiming victory at the Cilician Gates. Ventidius ordered Labienus executed as a traitor and the formerly rebellious Roman soldiers under his command were reincorporated under Antony's control. He then met a Parthian army at the border between Cilicia and Syria, defeating it and killing a large portion of the Parthian soldiers at the Amanus Pass. Ventidius' actions temporarily halted the Parthian advance and restored Roman authority in the East, forcing Pacorus to abandon his conquests and return to Parthia. In the spring of 38 BC, the Parthians resumed their offensive with Pacorus leading an army across the Euphrates. 
Ventidius, in order to gain time, leaked disinformation to Pacorus implying that he should cross the Euphrates River at their usual ford. Pacorus did not trust this information and decided to cross the river much farther downstream; this was what Ventidius hoped would occur and gave him time to get his forces ready. The Parthians faced no opposition and proceeded to the town of Gindarus in Cyrrhestica where Ventidius' army was waiting. At the Battle of Cyrrhestica, Ventidius inflicted an overwhelming defeat against the Parthians which resulted in the death of Pacorus. Overall, the Roman army had achieved a complete victory with Ventidius' three successive victories forcing the Parthians back across the Euphrates. Pacorus' death threw the Parthian Empire into chaos. Shah Orodes II, overwhelmed by the grief of his son's death, appointed his younger son Phraates IV as his successor. However, Phraates IV assassinated Orodes II in late 38 BC, succeeding him on the throne. Ventidius feared Antony's wrath if he invaded Parthian territory, thereby stealing his glory; so instead he attacked and subdued the eastern kingdoms, which had revolted against Roman control following the disastrous defeat of Crassus at Carrhae. One such rebel was King Antiochus of Commagene, whom he besieged in Samosata. Antiochus tried to make peace with Ventidius, but Ventidius told him to approach Antony directly. After peace was concluded, Antony sent Ventidius back to Rome where he celebrated a triumph, the first Roman to triumph over the Parthians. Conflict with Sextus Pompey While Antony and the other Triumvirs ratified the Treaty of Brundisium to redivide the Roman world among themselves, the rebel Sextus Pompey, the son of Caesar's rival Pompey the Great, was largely ignored. From his stronghold on Sicily, he continued his piratical activities across Italy and blocked the shipment of grain to Rome. 
The lack of food in Rome caused the public to blame the Triumvirate and shift its sympathies towards Pompey. This pressure forced the Triumvirs to meet with Sextus in early 39 BC. While Octavian wanted an end to the ongoing blockade of Italy, Antony sought peace in the West in order to make the Triumvirate's legions available for his service in his planned campaign against the Parthians. Though the Triumvirs rejected Sextus' initial request to replace Lepidus as the third man within the Triumvirate, they did grant other concessions. Under the terms of the Treaty of Misenum, Sextus was allowed to retain control over Sicily and Sardinia, with the provinces of Corsica and Greece being added to his territory. He was also promised a future position with the Priestly College of Augurs and the consulship for 35 BC. In exchange, Sextus agreed to end his naval blockade of Italy, supply Rome with grain, and halt his piracy of Roman merchant ships. However, the most important provision of the Treaty was the end of the proscription the Triumvirate had begun in late 43 BC. Many of the proscribed senators, rather than face death, fled to Sicily seeking Sextus' protection. With the exception of those responsible for Caesar's assassination, all those proscribed were allowed to return to Rome and promised compensation. This caused Sextus to lose many valuable allies as the formerly exiled senators gradually aligned themselves with either Octavian or Antony. To secure the peace, Octavian betrothed his three-year-old nephew and Antony's stepson Marcus Claudius Marcellus to Sextus' daughter Pompeia. With peace in the West secured, Antony planned to retaliate against Parthia by invading their territory. Under an agreement with Octavian, Antony would be supplied with extra troops for his campaign. With this military purpose on his mind, Antony sailed to Greece with Octavia, where he behaved in a most extravagant manner, assuming the attributes of the Greek god Dionysus in 39 BC.
The peace with Sextus was short-lived, however. When Sextus demanded control over Greece as the agreement provided, Antony demanded the province's tax revenues to fund the Parthian campaign. Sextus refused. Meanwhile, Sextus' admiral Menas betrayed him, shifting his loyalty to Octavian and thereby granting him control of Corsica, Sardinia, three of Sextus' legions, and a larger naval force. These actions prompted Sextus to renew his blockade of Italy, preventing Octavian from sending the promised troops to Antony for the Parthian campaign. This new delay caused Antony to quarrel with Octavian, forcing Octavia to mediate a truce between them. Under the Treaty of Tarentum, Antony provided a large naval force for Octavian's use against Sextus while Octavian promised to raise new legions for Antony to support his invasion of Parthia. As the term of the Triumvirate was set to expire at the end of 38 BC, the two unilaterally extended their term of office another five years until 33 BC without seeking approval of the senate or the popular assemblies. To seal the Treaty, Antony's elder son Marcus Antonius Antyllus, then only 6 years old, was betrothed to Octavian's only daughter Julia, then only an infant. With the Treaty signed, Antony returned to the East, leaving Octavia in Italy.
Years before, in 40 BC, the Roman senate had proclaimed Herod "King of the Jews" because Herod had been a loyal supporter of Hyrcanus II, Rome's previous client king before the Parthian invasion, and came from a family with long-standing connections to Rome. The Romans hoped to use Herod as a bulwark against the Parthians in the coming campaign. Advancing south, Sosius captured the island-city of Aradus on the coast of Phoenicia by the end of 38 BC. The following year, the Romans besieged Jerusalem. After a forty-day siege, the Roman soldiers stormed the city and, despite Herod's pleas for restraint, acted without mercy, pillaging and killing all in their path, prompting Herod to complain to Antony. Herod finally resorted to bribing Sosius and his troops so that they would not leave him "king of a desert". Antigonus was forced to surrender to Sosius and was sent to Antony for the triumphal procession in Rome. Herod, however, fearing that Antigonus would win backing in Rome, bribed Antony to execute Antigonus. Antony, who recognized that Antigonus would remain a permanent threat to Herod, ordered him beheaded in Antioch. Now secure on his throne, Herod would rule the Herodian Kingdom until his death in 4 BC, remaining an ever-faithful client king of Rome.

Parthian Campaign

With the Triumvirate renewed in 38 BC, Antony returned to Athens for the winter with his new wife Octavia, the sister of Octavian. When the Parthian king Orodes II was assassinated in late 38 BC by his son Phraates IV, who then seized the Parthian throne, Antony prepared to invade Parthia himself. However, Antony realized Octavian had no intention of sending him the additional legions he had promised under the Treaty of Tarentum. To supplement his own armies, Antony instead looked to Rome's principal vassal in the East: his lover Cleopatra.
In addition to significant financial resources, Cleopatra's backing of his Parthian campaign allowed Antony to amass the largest army Rome had ever assembled in the East. Wintering in Antioch during 37 BC, Antony's combined Roman–Egyptian army numbered some 200,000, including sixteen legions (approximately 160,000 soldiers) plus an additional 40,000 auxiliaries. Such a force was twice the size of Marcus Licinius Crassus's army in his failed Parthian invasion of 53 BC and three times the size of the armies of Lucius Licinius Lucullus and Lucius Cornelius Sulla during the Mithridatic Wars. The size of his army indicated Antony's intention to conquer Parthia, or at least to receive its submission by capturing the Parthian capital of Ecbatana. Antony's rear was protected by Rome's client kingdoms in Anatolia, Syria, and Judea, while the client kingdoms of Cappadocia, Pontus, and Commagene would provide supplies along the march. Antony's first target for his invasion was the Kingdom of Armenia. Ruled by King Artavasdes II, Armenia had been an ally of Rome since the defeat of Tigranes the Great by Pompey the Great in 66 BC during the Third Mithridatic War. However, following Marcus Licinius Crassus's defeat at the Battle of Carrhae in 53 BC, Armenia had been forced into an alliance with Parthia due to Rome's weakened position in the East. Antony dispatched Publius Canidius Crassus to Armenia, and Canidius received Artavasdes II's surrender without opposition. Canidius then led an invasion into the South Caucasus, subduing Iberia. There, Canidius forced the Iberian King Pharnavaz II into an alliance against Zober, king of neighboring Albania, subduing that kingdom as well and reducing it to a Roman protectorate. With Armenia and the Caucasus secured, Antony marched south, crossing into the Parthian province of Media Atropatene. Though Antony desired a pitched battle, the Parthians would not engage, allowing Antony to march deep into Parthian territory by mid-August of 36 BC.
The speed of this advance forced Antony to leave his logistics train in the care of two legions (approximately 10,000 soldiers), which was then attacked and completely destroyed by the Parthian army before Antony could rescue it. Though the Armenian King Artavasdes II and his cavalry were present during the massacre, they did not intervene. Despite the ambush, Antony continued the campaign. However, he was soon forced to retreat in mid-October after a failed two-month siege of the provincial capital. The retreat soon proved a disaster, as Antony's demoralized army faced increasing supply difficulties in the mountainous terrain during winter while being constantly harassed by the Parthian army. According to the Greek historian Plutarch, eighteen battles were fought between the retreating Romans and the Parthians during the month-long march back to Armenia, with approximately 20,000 infantry and 4,000 cavalry dying during the retreat alone. Once in Armenia, Antony quickly marched back to Syria to protect his interests there by late 36 BC, losing an additional 8,000 soldiers along the way. In all, two-fifths of his original army (some 80,000 men) had died during his failed campaign.

Antony and Cleopatra

Meanwhile, in Rome, the triumvirate was no more. Octavian forced Lepidus to resign after the older triumvir attempted to take control of Sicily following the defeat of Sextus. Now in sole power, Octavian was occupied in wooing the traditional Republican aristocracy to his side. He married Livia and began to attack Antony in order to raise himself to power. He argued that Antony was a man of low morals for having abandoned his faithful wife in Rome with the children to be with the promiscuous queen of Egypt. Antony was accused of everything, but most of all of "going native", an unforgivable crime to the proud Romans. Several times Antony was summoned to Rome, but he remained in Alexandria with Cleopatra. Again with Egyptian money, Antony invaded Armenia, this time successfully.
On his return, a mock Roman triumph was celebrated in the streets of Alexandria. The parade through the city was a pastiche of Rome's most important military celebration. For the finale, the whole city was summoned to hear a very important political statement. Surrounded by Cleopatra and her children, Antony ended his alliance with Octavian. He distributed kingdoms among his children: Alexander Helios was named king of Armenia, Media and Parthia (territories which were for the most part not under the control of Rome), his twin Cleopatra Selene got Cyrenaica and Libya, and the young Ptolemy Philadelphus was awarded Syria and Cilicia. As for Cleopatra, she was proclaimed Queen of Kings and Queen of Egypt, to rule with Caesarion (Ptolemy XV Caesar, son of Cleopatra by Julius Caesar), King of Kings and King of Egypt. Most important of all, Caesarion was declared the legitimate son and heir of Caesar. These proclamations were known as the Donations of Alexandria and caused a fatal breach in Antony's relations with Rome. While the distribution of nations among Cleopatra's children was hardly a conciliatory gesture, it did not pose an immediate threat to Octavian's political position. Far more dangerous was the acknowledgment of Caesarion as legitimate son and heir to Caesar's name. Octavian's base of power was his link with Caesar through adoption, which granted him much-needed popularity and the loyalty of the legions. To see this convenient situation attacked by a child borne by the richest woman in the world was something Octavian could not accept. The triumvirate expired on the last day of 33 BC and was not renewed. Another civil war was beginning. During 33 and 32 BC, a propaganda war was fought in the political arena of Rome, with accusations flying between the sides. Antony (in Egypt) divorced Octavia and accused Octavian of being a social upstart, of usurping power, and of forging the papers of his adoption by Caesar.
Octavian responded with treason charges: of illegally keeping provinces that should have been allotted to other men by lot, as was Rome's tradition, and of starting wars against foreign nations (Armenia and Parthia) without the consent of the senate. Antony was also held responsible for Sextus Pompey's execution without a trial. In 32 BC, the senate deprived him of his powers and declared war against Cleopatra – not Antony, because Octavian had no wish to advertise his role in perpetuating Rome's internecine bloodshed. Both consuls, Gnaeus Domitius Ahenobarbus and Gaius Sosius, and a third of the senate abandoned Rome to meet Antony and Cleopatra in Greece. In 31 BC, the war started. Octavian's general Marcus Vipsanius Agrippa captured the Greek city and naval port of Methone, loyal to Antony. The enormous popularity of Octavian with the legions secured the defection of the provinces of Cyrenaica and Greece to his side. On 2 September, the naval Battle of Actium took place. Antony and Cleopatra's navy was overwhelmed, and they were forced to escape to Egypt with 60 ships.

Death

Octavian, now close to absolute power, invaded Egypt in August 30 BC, assisted by Agrippa. With no other refuge to escape to, Antony stabbed himself with his sword in the mistaken belief that Cleopatra had already done so. When he found out that Cleopatra was still alive, his friends brought him to Cleopatra's monument in which she was hiding, and he died in her arms. Cleopatra was allowed to conduct Antony's burial rites after she had been captured by Octavian. Realising that she was destined for Octavian's triumph in Rome, she made several attempts to take her own life and finally succeeded in mid-August. Octavian had Caesarion and Antyllus killed, but he spared Iullus as well as Antony's children by Cleopatra, who were paraded through the streets of Rome.

Aftermath and legacy

Cicero's son, Cicero Minor, announced Antony's death to the senate.
Antony's honours were revoked and his statues removed, but he was not subject to a complete damnatio memoriae. Cicero Minor also secured a decree that no member of the Antonii would ever bear the name Marcus again. "In this way Heaven entrusted the family of Cicero the final acts in the punishment of Antony." When Antony died, Octavian became the uncontested ruler of Rome. In the following years, Octavian, who was known as Augustus after 27 BC, managed to accumulate in his person all administrative, political, and military offices. When Augustus died in AD 14, his political powers passed to his adopted son Tiberius; the Roman Empire had begun. The rise of Caesar and the subsequent civil war between his two most powerful adherents effectively ended the credibility of the Roman oligarchy as a governing power and ensured that all future power struggles would centre upon which one individual would achieve supreme control of the government, eliminating the senate and the former magisterial structure as important foci of power in these conflicts. Thus, in history, Antony appears as one of Caesar's main adherents, as one of the two men – with Octavian Augustus – around whom power coalesced following the assassination of Caesar, and finally as one of the three men chiefly responsible for the demise of the Roman Republic.

Marriages and issue

Antony was known for his obsession with women and sex. He had many mistresses (including Cytheris) and was married in succession to Fadia, Antonia, Fulvia, Octavia and Cleopatra. He left a number of children; through his daughters by Octavia, he would be ancestor to the Roman emperors Caligula, Claudius and Nero.

Marriage to Fadia, a daughter of a freedman. According to Cicero, Fadia bore Antony several children. Nothing is known about Fadia or their children; Cicero is the only Roman source that mentions Antony's first wife.

Marriage to his first paternal cousin Antonia Hybrida Minor.
According to Plutarch, Antony threw her out of his house in Rome because she slept with his friend, the tribune Publius Cornelius Dolabella, and he divorced her by 47 BC. By Antonia, he had a daughter:
Antonia, who married the wealthy Greek Pythodoros of Tralles.

Marriage to Fulvia, by whom he had two sons:
Marcus Antonius Antyllus, murdered by Octavian in 30 BC.
Iullus Antonius, who married Claudia Marcella the Elder, daughter of Octavia.

Marriage to Octavia the Younger, sister of Octavian, later the emperor Augustus; they had two daughters:
Antonia the Elder, who married Lucius Domitius Ahenobarbus (consul 16 BC); maternal grandmother of the Empress Valeria Messalina and paternal grandmother of the emperor Nero.
Antonia the Younger, who married Nero Claudius Drusus, the younger son of the Empress Livia Drusilla and brother of the emperor Tiberius; mother of the emperor Claudius, paternal grandmother of the emperor Caligula and the empress Agrippina the Younger, and maternal great-grandmother of the emperor Nero.

Children with Queen Cleopatra VII of Egypt, the former lover of Julius Caesar:
Alexander Helios.
Cleopatra Selene II, who married King Juba II of Numidia and later Mauretania; the queen of Palmyra, Zenobia, was reportedly descended from Selene and Juba II.
Ptolemy Philadelphus.

Descendants

Through his daughters by Octavia, he was the paternal great-grandfather of the Roman emperor Caligula, the maternal grandfather of the emperor Claudius, and both maternal great-great-grandfather and paternal great-great-uncle of the emperor Nero of the Julio-Claudian dynasty. Through his eldest daughter, he was ancestor to the long line of kings and co-rulers of the Bosporan Kingdom, the longest-lived Roman client kingdom, as well as to the rulers and royalty of several other Roman client states.
Through his daughter by Cleopatra, Antony was ancestor to the royal family of Mauretania, another Roman client kingdom, while through his sole surviving son Iullus, he was ancestor to several famous Roman statesmen.

1. Antonia, born 50 BC, had 1 child
   A. Pythodorida of Pontus, 30 BC or 29 BC – 38 AD, had 3 children
      I. Artaxias III, King of Armenia, 13 BC – 35 AD, died without issue
      II. Polemon II, King of Pontus, 12 BC or 11 BC – 74 AD, died without issue
      III. Antonia Tryphaena, Queen of Thrace, 10 BC – 55 AD, had 4 children
         a. Rhoemetalces II, King of Thrace, died 38 AD, died without issue
         b. Gepaepyris, Queen of the Bosporan Kingdom, had 2 children
            i. Tiberius Julius Mithridates, King of the Bosporan Kingdom, died 68 AD, died without issue
            ii. Tiberius Julius Cotys I, King of the Bosporan Kingdom, had 1 child
               i. Tiberius Julius Rhescuporis I, King of the Bosporan Kingdom, died 90 AD, had 1 child
                  i. Tiberius Julius Sauromates I, King of the Bosporan Kingdom, had 1 child
                     i. Tiberius Julius Cotys II, King of the Bosporan Kingdom, had 1 child
                        i. Rhoemetalces, King of the Bosporan Kingdom, died 153 AD, had 1 child
                           i. Eupator, King of the Bosporan Kingdom, died 174 AD, had 1 child
                              i. Tiberius Julius Sauromates II, King of the Bosporan Kingdom, died 210 AD or 211 AD, had 2 children
                                 i. Tiberius Julius Rhescuporis II, King of the Bosporan Kingdom, died 227 AD, had 1 child
                                    i. Tiberius Julius Rhescuporis III, King of the Bosporan Kingdom, died 227 AD
                                 ii. Tiberius Julius Cotys III, King of the Bosporan Kingdom, died 235 AD, had 3 children
                                    i. Tiberius Julius Sauromates III, King of the Bosporan Kingdom, died 232 AD
                                    ii. Tiberius Julius Rhescuporis IV, King of the Bosporan Kingdom, died 235 AD
                                    iii. Tiberius Julius Ininthimeus, King of the Bosporan Kingdom, died 240 AD, had 1 child
                                       i. Tiberius Julius Rhescuporis V, King of the Bosporan Kingdom, died 276 AD, had 3 children
                                          i. Tiberius Julius Pharsanzes, King of the Bosporan Kingdom, died 254 AD
                                          ii. Synges, King of the Bosporan Kingdom, died 276 AD
                                          iii. Tiberius Julius Teiranes, King of the Bosporan Kingdom, died 279 AD, had 2 children
                                             i. Tiberius Julius Sauromates IV, King of the Bosporan Kingdom, died 276 AD
                                             ii. Theothorses, King of the Bosporan Kingdom, died 309 AD, had 3 children
                                                i. Tiberius Julius Rhescuporis VI, King of the Bosporan Kingdom, died 342 AD
                                                ii. Rhadamsades, King of the Bosporan Kingdom, died 323 AD
                                                iii. Nana, Queen of Caucasian Iberia, died 363 AD
                                                   i. Rev II of Iberia
                                                      i. Sauromaces II of Iberia
                                                      ii. Trdat of Iberia
                                                   ii. Aspacures II of Iberia
         c. Cotys IX, King of Lesser Armenia
         d. Pythodoris II of Thrace, died without issue
2. Marcus Antonius Antyllus, 47–30 BC, died without issue
3. Iullus Antonius, 43–2 BC, had 3 children
   A. Antonius, died young, no issue
   B. Lucius Antonius, 20 BC – 25 AD, issue unknown
   C. Iulla Antonia, born after 19 BC, issue unknown
4. Prince Alexander Helios of Egypt, born 40 BC, died without issue (presumably)
5. Cleopatra Selene, Queen of Mauretania, 40 BC – 6 AD, had 2 children
   A. Ptolemy, King of Mauretania, 1 BC – 40 AD, had 1 child
      I. Drusilla, Queen of Emesa, 38–79 AD, had 1 child
         a. Gaius Julius Alexio, King of Emesa, had 1 child
   B. Princess Drusilla of Mauretania, born 5 AD or 8 BC
6. Antonia Major, 39 BC – before 25 AD, had 3 children
   A. Domitia Lepida the Elder, c. 19 BC – 59 AD, had 1 child
      I. Quintus Haterius Antoninus
   B. Gnaeus Domitius Ahenobarbus, 17 BC – 40 AD, had 1 child
      I. Nero (Lucius Domitius Ahenobarbus) (see line of Antonia Minor below)
   C. Domitia Lepida the Younger, 10 BC – 54 AD, had 3 children
      I. Marcus Valerius Messala Corvinus
      II. Valeria Messalina, 17 or 20 – 48 AD, had 2 children
         a. (Messalina was the mother of the two youngest children of the Roman emperor Claudius listed below)
      III. Faustus Cornelius Sulla Felix, 22–62 AD, had 1 child
         a. a son (this child and the only child of the Claudia Antonia listed below are the same person)
7. Antonia Minor, 36 BC – 37 AD, had 3 children
   A. Germanicus Julius Caesar, 15 BC – 19 AD, had 6 children
      I. Nero Julius Caesar Germanicus, 6–30 AD, died without issue
      II. Drusus Julius Caesar Germanicus, 8–33 AD, died without issue
      III. Gaius Julius Caesar Augustus Germanicus (Caligula), 12–41 AD, had 1 child
         a. Julia Drusilla, 39–41 AD, died young
      IV. Julia Agrippina (Agrippina the Younger), 15–59 AD, had 1 child
         a. Nero Claudius Caesar Augustus Germanicus, 37–68 AD, had 1 child
            i. Claudia Augusta, January 63 AD – April 63 AD, died young
      V. Julia Drusilla, 16–38 AD, died without issue
      VI. Julia Livilla, 18–42 AD, died without issue
   B. Claudia Livia Julia (Livilla), 13 BC – 31 AD, had 3 children
      I. Julia Livia, 7–43 AD, had 4 children
         a. Gaius Rubellius Plautus, 33–62 AD, had several children
         b. Rubellia Bassa, born between 33 AD and 38 AD, had at least 1 child
            i. Octavius Laenas, had at least 1 child
               i. Sergius Octavius Laenas Pontianus
         c. Gaius Rubellius Blandus
         d. Rubellius Drusus
      II. Tiberius Julius Caesar Nero Gemellus, 19–37 or 38 AD, died without issue
      III. Tiberius Claudius Caesar Germanicus II Gemellus, 19–23 AD, died young
   C. Tiberius Claudius Caesar Augustus Germanicus, 10 BC – 54 AD, had 4 children
      I. Tiberius Claudius Drusus, died young
      II. Claudia Antonia, c. 30–66 AD, had 1 child
         a. a son, died young
      III. Claudia Octavia, 39 or 40 – 62 AD, died without issue
      IV. Tiberius Claudius Caesar Britannicus, 41–55 AD, died without issue
8. Prince Ptolemy Philadelphus of Egypt, 36–29 BC, died without issue (presumably)

Artistic portrayals

Works in which the character of Mark Antony plays a central role:

William Shakespeare's Julius Caesar
Julius Caesar (1950 film), based on the play (played by Charlton Heston)
Julius Caesar (1953 film), based on the play (played by Marlon Brando)
Julius Caesar (1970 film), based on the play (played by Charlton Heston again)
Antony and Cleopatra, several works with that title
John Dryden's 1677 play All for Love
Jules Massenet's 1914 opera Cléopâtre
The 1934 film Cleopatra (played by Henry Wilcoxon)
Orson Welles' innovative 1937 adaptation of William Shakespeare at the Mercury Theatre, with George Coulouris as Marcus Antonius
The 1953 film Serpent of the Nile (played by Raymond Burr)
The 1963 film Cleopatra (played by Richard Burton)
The 1964 film Carry On Cleo (played by Sid James)
The 1983 miniseries The Cleopatras (played by Christopher Neame)
The TV series Xena: Warrior Princess (played by Manu Bennett)
The video game Age of Empires: The Rise of Rome, in which Mark Antony features as a short swordsman
The 1999 film Cleopatra (played by Billy Zane)
The Capcom video game Shadow of Rome, in which he is depicted as the main antagonist
The 2003 TV movie Imperium: Augustus (played by Massimo Ghini)
The 2005 TV miniseries Empire (played by Vincent Regan)
The 2005–2007 HBO/BBC TV series Rome (played by James Purefoy)
The 2009–2013 TV series Horrible Histories (played by Mathew Baynton), and the 2015 reboot series of the same name (played by Tom Stourton in 2019)
The 2006 BBC One docudrama Ancient Rome: The Rise and Fall of an Empire (played by Alex Ferns)
As Cleopatra's guardian and level boss (of Lust) in the Xbox 360 game Dante's Inferno released by Visceral Games
Lepidus, and Antony met near Bononia. After two days of discussions, the group agreed to establish a three-man dictatorship to govern the Republic for five years, known as the "Three Men for the Restoration of the Republic" (Latin: "Triumviri Rei publicae Constituendae"), and known to modern historians as the Second Triumvirate. They shared military command of the Republic's armies and provinces among themselves: Antony received Gaul, Lepidus Spain, and Octavian (as the junior partner) Africa. They jointly governed Italy. The Triumvirate would still have to conquer the rest of Rome's holdings; Brutus and Cassius held the Eastern Mediterranean, and Sextus Pompey held the Mediterranean islands. On 27 November 43 BC, the Triumvirate was formally established by a new law, the lex Titia. Octavian and Antony reinforced their alliance through Octavian's marriage to Antony's stepdaughter, Claudia. The primary objective of the Triumvirate was to avenge Caesar's death and to make war upon his murderers. Before marching against Brutus and Cassius in the East, the Triumvirs issued proscriptions against their enemies in Rome. The Dictator Lucius Cornelius Sulla had taken similar action to purge Rome of his opponents in 82 BC. The proscribed were named on public lists, stripped of citizenship, and outlawed. Their wealth and property were confiscated by the state, and rewards were offered to anyone who secured their arrest or death. With such encouragements, the proscription produced deadly results: two thousand Roman knights were executed, along with one third of the senate, among them Cicero, who was killed on 7 December. The confiscations helped replenish the State Treasury, which had been depleted by Caesar's civil war the decade before; when this seemed insufficient to fund the imminent war against Brutus and Cassius, the Triumvirs imposed new taxes, especially on the wealthy.
By January 42 BC the proscription had ended; it had lasted two months, and though less bloody than Sulla's, it traumatized Roman society. A number of those named and outlawed had fled either to Sextus Pompey in Sicily or to the Liberators in the East. Senators who swore loyalty to the Triumvirate were allowed to keep their positions; on 1 January 42 BC, the senate officially deified Caesar as "The Divine Julius" and confirmed Antony's position as his high priest.

War against the Liberators

Due to the infighting within the Triumvirate during 43 BC, Brutus and Cassius had assumed control of much of Rome's eastern territories and amassed a large army. Before the Triumvirate could cross the Adriatic Sea into Greece, where the Liberators had stationed their army, it had to address the threat posed by Sextus Pompey and his fleet. From his base in Sicily, Sextus raided the Italian coast and blockaded the Triumvirs. Octavian's friend and admiral Quintus Salvidienus Rufus thwarted an attack by Sextus against the southern Italian mainland at Rhegium, but Salvidienus was then defeated in the resulting naval battle because of the inexperience of his crews. Only when Antony arrived with his fleet was the blockade broken. Though the blockade was lifted, control of Sicily remained in Sextus' hands, but the defeat of the Liberators was the Triumvirate's first priority. In the summer of 42 BC, Octavian and Antony sailed for Macedonia to face the Liberators with nineteen legions, the vast majority of their army (approximately 100,000 regular infantry plus supporting cavalry and irregular auxiliary units), leaving Rome under the administration of Lepidus. The Liberators likewise commanded nineteen legions; their legions, however, were not at full strength, while those of Antony and Octavian were. While the Triumvirs commanded a larger number of infantry, the Liberators commanded a larger cavalry contingent.
The Liberators, who controlled Macedonia, did not wish to engage in a decisive battle, but rather to attain a good defensive position and then use their naval superiority to block the Triumvirs' communications with their supply base in Italy. They had spent the previous months plundering Greek cities to swell their war-chest and had gathered in Thrace with the Roman legions from the Eastern provinces and levies from Rome's client kingdoms. Brutus and Cassius held a position on the high ground along both sides of the via Egnatia west of the city of Philippi. The southern position was anchored to a supposedly impassable marsh, while the northern was bordered by impassable hills. They had plenty of time to fortify their position with a rampart and a ditch. Brutus put his camp on the north of the via Egnatia while Cassius occupied the south. Antony arrived shortly afterwards and positioned his army on the south of the via Egnatia, while Octavian put his legions north of the road. Antony offered battle several times, but the Liberators could not be lured from their defensive stand. Antony therefore tried to secretly outflank the Liberators' position through the marshes in the south. This provoked a pitched battle on 3 October 42 BC. Antony commanded the Triumvirate's army due to Octavian's sickness on the day, with Antony directly controlling the right flank opposite Cassius. Because of his health, Octavian remained in camp while his lieutenants assumed a position on the left flank opposite Brutus. In the resulting first battle of Philippi, Antony defeated Cassius and captured his camp, while Brutus overran Octavian's troops and penetrated into the Triumvirs' camp but was unable to capture the sick Octavian. The battle was a tactical draw, but due to poor communications Cassius believed the battle was a complete defeat and committed suicide to avoid capture. Brutus assumed sole command of the Liberator army and preferred a war of attrition over open conflict.
His officers, however, were dissatisfied with these defensive tactics and his Caesarian veterans threatened to defect, forcing Brutus to give battle at the second battle of Philippi on 23 October. While the battle was initially evenly matched, Antony's leadership routed Brutus' forces. Brutus committed suicide the day after the defeat, and the remainder of his army swore allegiance to the Triumvirate. Over fifty thousand Romans died in the two battles. While Antony treated the losers mildly, Octavian dealt cruelly with his prisoners and even beheaded Brutus' corpse. The battles of Philippi ended the civil war in favor of the Caesarian faction. With the defeat of the Liberators, only Sextus Pompey and his fleet remained to challenge the Triumvirate's control over the Republic.

Master of the Roman East

Division of the Republic

The victory at Philippi left the members of the Triumvirate as masters of the Republic, save Sextus Pompey in Sicily. Upon returning to Rome, the Triumvirate repartitioned rule of Rome's provinces among themselves, with Antony as the clear senior partner. He received the largest distribution, governing all of the Eastern provinces while retaining Gaul in the West. Octavian's position improved, as he received Spain, which was taken from Lepidus. Lepidus was then reduced to holding only Africa, and he assumed a clearly tertiary role in the Triumvirate. Rule over Italy remained undivided, but Octavian was assigned the difficult and unpopular task of demobilizing their veterans and providing them with land distributions in Italy. Antony assumed direct control of the East while he installed one of his lieutenants as the ruler of Gaul. During his absence, several of his supporters held key positions in Rome to protect his interests there. The East was in need of reorganization after the rule of the Liberators in the previous years. In addition, Rome contended with the Parthian Empire for dominance of the Near East.
The Parthian threat to the Triumvirate's rule was urgent because the Parthians had supported the Liberators in the recent civil war, aid which included the supply of troops at Philippi. As ruler of the East, Antony also assumed responsibility for overseeing Caesar's planned invasion of Parthia to avenge the defeat of Marcus Licinius Crassus at the Battle of Carrhae in 53 BC. In 42 BC, the Roman East was composed of several directly controlled provinces and client kingdoms. The provinces included Macedonia, Asia, Bithynia, Cilicia, Cyprus, Syria, and Cyrenaica. Approximately half of the eastern territory was controlled by Rome's client kingdoms, nominally independent kingdoms subject to Roman direction. These kingdoms included:

Odrysian Thrace in Eastern Europe
The Bosporan Kingdom along the northern coast of the Black Sea
Galatia, Pontus, Cappadocia, Armenia, and several smaller kingdoms in Asia Minor
Judea, Commagene, and the Nabataean kingdom in the Middle East
Ptolemaic Egypt in Africa

Activities in the East

Antony spent the winter of 42 BC in Athens, where he ruled generously towards the Greek cities. A proclaimed philhellene ("friend of all things Greek"), Antony supported Greek culture to win the loyalty of the inhabitants of the Greek East. He attended religious festivals and ceremonies, including initiation into the Eleusinian Mysteries, a secret cult dedicated to the worship of the goddesses Demeter and Persephone. Beginning in 41 BC, he traveled across the Aegean Sea to Anatolia, leaving his friend Lucius Marcius Censorinus as governor of Macedonia and Achaea. Upon his arrival in Ephesus in Asia, Antony was worshiped as the god Dionysus born anew. He demanded heavy taxes from the Hellenic cities in return for his pro-Greek culture policies, but exempted those cities which had remained loyal to Caesar during the civil war and compensated those which had suffered under Caesar's assassins, including Rhodes, Lycia, and Tarsus.
He granted pardons to all Roman nobles living in the East who had supported the Optimate cause, except for Caesar's assassins. Ruling from Ephesus, Antony consolidated Rome's hegemony in the East, receiving envoys from Rome's client kingdoms and intervening in their dynastic affairs, extracting enormous financial "gifts" from them in the process. Though King Deiotarus of Galatia had supported Brutus and Cassius following Caesar's assassination, Antony allowed him to retain his position. He also confirmed Ariarathes X as king of Cappadocia after the execution of his brother Ariobarzanes III of Cappadocia by Cassius before the Battle of Philippi. In Hasmonean Judea, several Jewish delegations complained to Antony of the harsh rule of Phasael and Herod, the sons of Rome's assassinated chief Jewish minister Antipater the Idumaean. After Herod offered him a large financial gift, Antony confirmed the brothers in their positions. Subsequently, influenced by the beauty and charms of Glaphyra, the widow of Archelaüs (formerly the high priest of Comana), Antony deposed Ariarathes and appointed Glaphyra's son, Archelaüs, to rule Cappadocia. In October 41 BC, Antony requested that Rome's chief eastern vassal, Cleopatra, the queen of Ptolemaic Egypt, meet him at Tarsus in Cilicia. Antony had first met a young Cleopatra while campaigning in Egypt in 55 BC and again in 48 BC, when Caesar had backed her as queen of Egypt over the claims of her half-sister Arsinoe. Cleopatra bore Caesar a son, Caesarion, in 47 BC, and the two lived in Rome as Caesar's guests until his assassination in 44 BC. After Caesar's assassination, Cleopatra and Caesarion returned to Egypt, where she named the child as her co-ruler. In 42 BC, the Triumvirate, in recognition of Cleopatra's help towards Publius Cornelius Dolabella in opposition to the Liberators, granted official recognition to Caesarion's position as king of Egypt.
Arriving in Tarsus aboard her magnificent ship, Cleopatra invited Antony to a grand banquet to solidify their alliance. As the most powerful of Rome's eastern vassals, Egypt was indispensable in Rome's planned military invasion of the Parthian Empire. At Cleopatra's request, Antony ordered the execution of Arsinoe, who, though she had been marched in Caesar's triumphal parade in 46 BC, had been granted sanctuary at the temple of Artemis in Ephesus. Antony and Cleopatra then spent the winter of 41 BC together in Alexandria. Cleopatra bore Antony twin children, Alexander Helios and Cleopatra Selene II, in 40 BC, and a third, Ptolemy Philadelphus, in 36 BC. In 40 BC, as a gift for her loyalty to Rome, Antony also granted Cleopatra formal control over Cyprus, which had been under Egyptian control since 47 BC during the turmoil of Caesar's civil war. In his first months in the East, Antony raised money, reorganized his troops, and secured the alliance of Rome's client kingdoms. He also promoted himself as a Hellenistic ruler, which won him the affection of the Greek peoples of the East but also made him the target of Octavian's propaganda in Rome. According to some ancient authors, Antony led a carefree life of luxury in Alexandria. Upon learning that the Parthian Empire had invaded Rome's territory in early 40 BC, Antony left Egypt for Syria to confront the invasion. However, after a short stay in Tyre, he was forced to sail with his army to Italy to confront Octavian due to Octavian's war against Antony's wife and brother.

Fulvia's Civil War

Following the defeat of Brutus and Cassius, while Antony was stationed in the East, Octavian had authority over the West. Octavian's chief responsibility was distributing land to tens of thousands of Caesar's veterans who had fought for the Triumvirate. Additionally, tens of thousands of veterans who had fought for the Republican cause in the war also required land grants.
This was necessary to ensure they would not support a political opponent of the Triumvirate. However, the Triumvirs did not possess sufficient state-controlled land to allot to the veterans. This left Octavian with two choices: alienating many Roman citizens by confiscating their land, or alienating many Roman soldiers who might back a military rebellion against the Triumvirate's rule. Octavian chose the former. As many as eighteen Roman towns throughout Italy were affected by the confiscations of 41 BC, with entire populations driven out. Led by Fulvia, the wife of Antony, the senators grew hostile towards Octavian over the issue of the land confiscations. According to the ancient historian Cassius Dio, Fulvia was the most powerful woman in Rome at the time. According to Dio, while Publius Servilius Vatia and Lucius Antonius were the consuls for the year 41 BC, real power was vested in Fulvia. Because she was the mother-in-law of Octavian and the wife of Antony, no action was taken by the senate without her support. Fearing Octavian's land grants would cause the loyalty of the Caesarian veterans to shift away from Antony, Fulvia traveled constantly with her children to the new veteran settlements in order to remind the veterans of their debt to Antony. Fulvia also attempted to delay the land settlements until Antony returned to Rome, so that he could share credit for the settlements. With the help of Antony's brother, the consul of 41 BC Lucius Antonius, Fulvia encouraged the senate to oppose Octavian's land policies. The conflict between Octavian and Fulvia caused great political and social unrest throughout Italy. Tensions escalated into open war, however, when Octavian divorced Claudia, Fulvia's daughter from her first husband Publius Clodius Pulcher. Outraged, Fulvia, supported by Lucius, raised an army to fight for Antony's rights against Octavian.
According to the ancient historian Appian, Fulvia's chief reason for the war was her jealousy of Antony's affairs with Cleopatra in Egypt and her desire to draw Antony back to Rome. Lucius and Fulvia took a political and martial gamble in opposing Octavian and Lepidus, however, as the Roman army still depended on the Triumvirs for their salaries. Lucius and Fulvia, supported by their army, marched on Rome and promised the people an end to the Triumvirate in favor of Antony's sole rule. However, when Octavian returned to the city with his army, the pair were forced to retreat to Perusia in Etruria. Octavian placed the city under siege while Lucius waited for Antony's legions in Gaul to come to his aid. Away in the East and embarrassed by Fulvia's actions, Antony gave no instructions to his legions. Without reinforcements, Lucius and Fulvia were forced to surrender in February 40 BC. While Octavian pardoned Lucius for his role in the war and even granted him command in Spain as his chief lieutenant, Fulvia was forced to flee to Greece with her children. With the war over, Octavian was left in sole control over Italy. When Antony's governor of Gaul died, Octavian took over his legions there, further strengthening his control over the West. Despite the Parthian Empire's invasion of Rome's eastern territories, Fulvia's civil war forced Antony to leave the East and return to Rome in order to secure his position. Meeting Fulvia in Athens, Antony rebuked her for her actions before sailing on to Italy with his army to face Octavian, laying siege to Brundisium. This new conflict proved untenable for both Octavian and Antony, however. Their centurions, who had become politically important figures, refused to fight due to their shared service under Caesar. The legions under their command followed suit. Meanwhile, in Sicyon, Fulvia died of a sudden and unknown illness.
Fulvia's death and the mutiny of their soldiers allowed the triumvirs to effect a reconciliation through a new power-sharing agreement in September 40 BC. The Roman world was redivided, with Antony receiving the Eastern provinces, Octavian the Western provinces, and Lepidus relegated to a clearly junior position as governor of Africa. This agreement, known as the Treaty of Brundisium, reinforced the Triumvirate and allowed Antony to begin preparing for Caesar's long-awaited campaign against the Parthian Empire. As a symbol of their renewed alliance, Antony married Octavia, Octavian's sister, in October 40 BC.

Antony's Parthian War

Roman–Parthian relations

The rise of the Parthian Empire in the 3rd century BC and Rome's expansion into the Eastern Mediterranean during the 2nd century BC brought the two powers into direct contact, causing centuries of tumultuous and strained relations. Though periods of peace developed cultural and commercial exchanges, war was a constant threat. Influence over the buffer state of the Kingdom of Armenia, located to the north-east of Roman Syria, was often a central issue in the Roman–Parthian conflict. In 95 BC, Tigranes the Great, a Parthian ally, became king of Armenia. Tigranes would later aid Mithridates of Pontus against Rome before being decisively defeated by Pompey in 66 BC. Thereafter, with his son Artavasdes in Rome as a hostage, Tigranes would rule Armenia as an ally of Rome until his death in 55 BC. Rome then released Artavasdes, who succeeded his father as king. In 53 BC, Rome's governor of Syria, Marcus Licinius Crassus, led an expedition across the Euphrates River into Parthian territory to confront the Parthian shah Orodes II. Artavasdes II offered Crassus the aid of nearly forty thousand troops for his Parthian expedition on the condition that Crassus invade through Armenia as the safer route. Crassus refused, choosing instead the more direct route by crossing the Euphrates directly into desert Parthian territory.
Crassus' actions proved disastrous, as his army was defeated at the Battle of Carrhae by a numerically inferior Parthian force. Crassus' defeat forced Armenia to shift its loyalty to Parthia, with Artavasdes II's sister marrying Orodes' son and heir Pacorus. In early 44 BC, Julius Caesar announced his intention to invade Parthia and restore Roman power in the East. His reasons were to punish the Parthians for assisting Pompey in the recent civil war, to avenge Crassus' defeat at Carrhae, and especially to match the glory of Alexander the Great for himself. Before Caesar could launch his campaign, however, he was assassinated. As part of the compromise between Antony and the Republicans to restore order following Caesar's murder, Publius Cornelius Dolabella was assigned the governorship of Syria and command over Caesar's planned Parthian campaign. The compromise did not hold, however, and the Republicans were forced to flee to the East. The Republicans directed Quintus Labienus to attract the Parthians to their side in the resulting war against Antony and Octavian. After the Republicans were defeated at the Battle of Philippi, Labienus joined the Parthians. Despite Rome's internal turmoil at the time, the Parthians did not immediately exploit the resulting power vacuum in the East, as Orodes II was reluctant to act despite Labienus' urgings. In the summer of 41 BC, Antony, to reassert Roman power in the East, conquered Palmyra on the Roman–Parthian border. Antony then spent the winter of 41 BC in Alexandria with Cleopatra, leaving only two legions to defend the Syrian border against Parthian incursions. The legions, however, were composed of former Republican troops, and Labienus convinced Orodes II to invade.

Parthian Invasion

A Parthian army, led by Orodes II's eldest son Pacorus, invaded Syria in early 40 BC.
Labienus, the Republican ally of Brutus and Cassius, accompanied him as an adviser and to rally the former Republican soldiers stationed in Syria to the Parthian cause. Labienus recruited many of the former Republican soldiers to the Parthian campaign in opposition to Antony. The joint Parthian–Roman force, after initial success in Syria, separated to lead their offensive in two directions: Pacorus marched south toward Hasmonean Judea while Labienus crossed the Taurus Mountains to the north into Cilicia. Labienus conquered southern Anatolia with little resistance. The Roman governor of Asia, Lucius Munatius Plancus, a partisan of Antony, was forced to flee his province, allowing Labienus to recruit the Roman soldiers stationed there. For his part, Pacorus advanced south to Phoenicia and Palestine. In Hasmonean Judea, the exiled prince Antigonus allied himself with the Parthians. When his brother, Rome's client king Hyrcanus II, refused to accept Parthian domination, he was deposed in favor of Antigonus as Parthia's client king in Judea. Pacorus' campaign had captured much of the Syrian and Palestinian interior, and much of the Phoenician coast was occupied as well. The city of Tyre remained the last major Roman outpost in the region. Antony, then in Egypt with Cleopatra, did not respond immediately to the Parthian invasion. Though he left Alexandria for Tyre in early 40 BC, when he learned of the civil war between his wife and Octavian, he was forced to return to Italy with his army to secure his position in Rome rather than defeat the Parthians. Instead, Antony dispatched Publius Ventidius Bassus to check the Parthian advance. Arriving in the East in spring 39 BC, Ventidius surprised Labienus near the Taurus Mountains, claiming victory at the Cilician Gates. Ventidius ordered Labienus executed as a traitor, and the formerly rebellious Roman soldiers under his command were reincorporated under Antony's control.
He then met a Parthian army at the border between Cilicia and Syria, defeating it and killing a large portion of the Parthian soldiers at the Amanus Pass. Ventidius' actions temporarily halted the Parthian advance and restored Roman authority in the East, forcing Pacorus to abandon his conquests and return to Parthia. In the spring of 38 BC, the Parthians resumed their offensive, with Pacorus leading an army across the Euphrates. To gain time, Ventidius leaked disinformation to Pacorus implying that he should cross the Euphrates River at the usual ford. Pacorus did not trust this information and decided to cross the river much farther downstream; this was exactly what Ventidius had hoped, and it gave him time to ready his forces. The Parthians faced no opposition and proceeded to the town of Gindarus in Cyrrhestica, where Ventidius' army was waiting. At the Battle of Cyrrhestica, Ventidius inflicted an overwhelming defeat on the Parthians which resulted in the death of Pacorus. Overall, the Roman army had achieved a complete victory, with Ventidius' three successive victories forcing the Parthians back across the Euphrates. Pacorus' death threw the Parthian Empire into chaos. Shah Orodes II, overwhelmed by grief at his son's death, appointed his younger son Phraates IV as his successor. However, Phraates IV assassinated Orodes II in late 38 BC, succeeding him on the throne. Ventidius feared Antony's wrath if he invaded Parthian territory, thereby stealing his glory; instead he attacked and subdued the eastern kingdoms which had revolted against Roman control following the disastrous defeat of Crassus at Carrhae. One such rebel was King Antiochus of Commagene, whom he besieged in Samosata. Antiochus tried to make peace with Ventidius, but Ventidius told him to approach Antony directly. After peace was concluded, Antony sent Ventidius back to Rome, where he celebrated a triumph, the first Roman to triumph over the Parthians.
Conflict with Sextus Pompey

While Antony and the other Triumvirs ratified the Treaty of Brundisium to redivide the Roman world among themselves, the rebel Sextus Pompey, the son of Caesar's rival Pompey the Great, was largely ignored. From his stronghold on Sicily, he continued his piratical activities across Italy and blocked the shipment of grain to Rome. The lack of food in Rome caused the public to blame the Triumvirate and shift its sympathies towards Pompey. This pressure forced the Triumvirs to meet with Sextus in early 39 BC. While Octavian wanted an end to the ongoing blockade of Italy, Antony sought peace in the West in order to make the Triumvirate's legions available for his planned campaign against the Parthians. Though the Triumvirs rejected Sextus' initial request to replace Lepidus as the third man within the Triumvirate, they did grant other concessions. Under the terms of the Treaty of Misenum, Sextus was allowed to retain control over Sicily and Sardinia, with the provinces of Corsica and Greece being added to his territory. He was also promised a future position with the Priestly College of Augurs and the consulship for 35 BC. In exchange, Sextus agreed to end his naval blockade of Italy, supply Rome with grain, and halt his piracy of Roman merchant ships. However, the most important provision of the Treaty was the end of the proscription the Triumvirate had begun in late 43 BC. Many of the proscribed senators, rather than face death, had fled to Sicily seeking Sextus' protection. With the exception of those responsible for Caesar's assassination, all those proscribed were allowed to return to Rome and promised compensation. This caused Sextus to lose many valuable allies as the formerly exiled senators gradually aligned themselves with either Octavian or Antony. To secure the peace, Octavian betrothed his three-year-old nephew and Antony's stepson Marcus Claudius Marcellus to Sextus' daughter Pompeia.
With peace in the West secured, Antony planned to retaliate against Parthia by invading its territory. Under an agreement with Octavian, Antony would be supplied with extra troops for his campaign. With this military purpose in mind, Antony sailed to Greece with Octavia, where he behaved in a most extravagant manner, assuming the attributes of the Greek god Dionysus in 39 BC. The peace with Sextus was short-lived, however. When Sextus demanded control over Greece as the agreement provided, Antony demanded that the province's tax revenues be used to fund the Parthian campaign. Sextus refused. Meanwhile, Sextus' admiral Menas betrayed him, shifting his loyalty to Octavian and thereby granting him control of Corsica, Sardinia, three of Sextus' legions, and a larger naval force. These events led to the renewal of Sextus' blockade of Italy, preventing Octavian from sending the promised troops to Antony for the Parthian campaign. This new delay caused Antony to quarrel with Octavian, forcing Octavia to mediate a truce between them. Under the Treaty of Tarentum, Antony provided a large naval force for Octavian's use against Sextus while Octavian promised to raise new legions for Antony to support his invasion of Parthia. As the term of the Triumvirate was set to expire at the end of 38 BC, the two unilaterally extended their term of office another five years until 33 BC without seeking approval of the senate or the popular assemblies. To seal the Treaty, Antony's elder son Marcus Antonius Antyllus, then only six years old, was betrothed to Octavian's only daughter Julia, then only an infant. With the Treaty signed, Antony returned to the East, leaving Octavia in Italy.

Reconquest of Judea

With Publius Ventidius Bassus having returned to Rome in triumph for his defensive campaign against the Parthians, Antony appointed Gaius Sosius as the new governor of Syria and Cilicia in early 38 BC.
Antony, still in the West negotiating with Octavian, ordered Sosius to depose Antigonus, who had been installed in the recent Parthian invasion as the ruler of Hasmonean Judea, and to make Herod the new Roman client king in the region. Years before, in 40 BC, the Roman senate had proclaimed Herod "King of the Jews" because Herod had been a loyal supporter of Hyrcanus II, Rome's previous client king before the Parthian invasion, and was from a family with long-standing connections to Rome. The Romans hoped to use Herod as a bulwark against the Parthians in the coming campaign. Advancing south, Sosius captured the island-city of Aradus on the coast of Phoenicia by the end of 38 BC. The following year, the Romans besieged Jerusalem. After a forty-day siege, the Roman soldiers stormed the city and, despite Herod's pleas for restraint, acted without mercy, pillaging and killing all in their path, prompting Herod to complain to Antony. Herod finally resorted to bribing Sosius and his troops so that they would not leave him "king of a desert". Antigonus was forced to surrender to Sosius and was sent to Antony for the triumphal procession in Rome. Herod, however, fearing that Antigonus would win backing in Rome, bribed Antony to execute Antigonus. Antony, who recognized that Antigonus would remain a permanent threat to Herod, ordered him beheaded in Antioch. Now secure on his throne, Herod would rule the Herodian Kingdom until his death in 4 BC, an ever-faithful client king of Rome.

Parthian Campaign

With the Triumvirate renewed in 38 BC, Antony returned to Athens for the winter with his new wife Octavia, the sister of Octavian. With the assassination of the Parthian king Orodes II in late 38 BC by his son Phraates IV, who then seized the Parthian throne, Antony prepared to invade Parthia himself. Antony, however, realized Octavian had no intention of sending him the additional legions he had promised under the Treaty of Tarentum.
To supplement his own armies, Antony instead looked to Rome's principal vassal in the East: his lover Cleopatra. In addition to significant financial resources, Cleopatra's backing of his Parthian campaign allowed Antony to amass the largest army Rome had ever assembled in the East. Wintering in Antioch during 37 BC, Antony's combined Roman–Egyptian army numbered some 200,000, including sixteen legions (approximately 160,000 soldiers) plus an additional 40,000 auxiliaries. Such a force was twice the size of Marcus Licinius Crassus's army from his failed Parthian invasion of 53 BC and three times the size of the armies of Lucius Licinius Lucullus and Lucius Cornelius Sulla during the Mithridatic Wars. The size of his army indicated Antony's intention to conquer Parthia, or at least to receive its submission by capturing the Parthian capital of Ecbatana. Antony's rear was protected by Rome's client kingdoms in Anatolia, Syria, and Judea, while the client kingdoms of Cappadocia, Pontus, and Commagene would provide supplies along the march. The first target of Antony's invasion was the Kingdom of Armenia. Ruled by King Artavasdes II, Armenia had been an ally of Rome since the defeat of Tigranes the Great by Pompey the Great in 66 BC during the Third Mithridatic War. However, following Marcus Licinius Crassus's defeat at the Battle of Carrhae in 53 BC, Armenia had been forced into an alliance with Parthia due to Rome's weakened position in the East. Antony dispatched Publius Canidius Crassus to Armenia, receiving Artavasdes II's surrender without opposition. Canidius then led an invasion into the South Caucasus, subduing Iberia. There, Canidius forced the Iberian king Pharnavaz II into an alliance against Zober, king of neighboring Albania, subduing that kingdom and reducing it to a Roman protectorate. With Armenia and the Caucasus secured, Antony marched south, crossing into the Parthian province of Media Atropatene.
Though Antony desired a pitched battle, the Parthians would not engage, allowing Antony to march deep into Parthian territory by mid-August of 36 BC. To maintain this pace, Antony was forced to leave his logistics train in the care of two legions (approximately 10,000 soldiers), which was then attacked and completely destroyed by the Parthian army before Antony could rescue it. Though the Armenian king Artavasdes II and his cavalry were present during the massacre, they did not intervene. Despite the ambush, Antony continued the campaign. However, Antony was soon forced to retreat in mid-October after a failed two-month siege of the provincial capital. The retreat soon proved a disaster, as Antony's demoralized army faced increasing supply difficulties in the mountainous terrain during winter while constantly being harassed by the Parthian army. According to the Greek historian Plutarch, eighteen battles were fought between the retreating Romans and the Parthians during the month-long march back to Armenia, with approximately 20,000 infantry and 4,000 cavalry dying during the retreat alone. Once in Armenia, Antony quickly marched back to Syria to protect his interests there by late 36 BC, losing an additional 8,000 soldiers along the way. In all, two-fifths of his original army (some 80,000 men) had died during his failed campaign.

Antony and Cleopatra

Meanwhile, in Rome, the triumvirate was no more. Octavian had forced Lepidus to resign after the older triumvir attempted to take control of Sicily following the defeat of Sextus. Now in sole power, Octavian was occupied in wooing the traditional Republican aristocracy to his side. He married Livia and started to attack Antony in order to raise himself to power. He argued that Antony was a man of low morals to have left his faithful wife abandoned in Rome with the children to be with the promiscuous queen of Egypt. Antony was accused of everything, but most of all, of "going native", an unforgivable crime to the proud Romans.
Several times Antony was summoned to Rome, but he remained in Alexandria with Cleopatra. Again with Egyptian money, Antony invaded Armenia, this time successfully. On his return, a mock Roman triumph was celebrated in the streets of Alexandria. The parade through the city was a pastiche of Rome's most important military celebration. For the finale, the whole city was summoned to hear a very important political statement. Surrounded by Cleopatra and her children, Antony ended his alliance with Octavian. He distributed kingdoms among his children: Alexander Helios was named king of Armenia, Media and Parthia (territories which were for the most part not under the control of Rome), his twin Cleopatra Selene received Cyrenaica and Libya, and the young Ptolemy Philadelphus was awarded Syria and Cilicia. As for Cleopatra, she was proclaimed Queen of Kings and Queen of Egypt, to rule with Caesarion (Ptolemy XV Caesar, son of Cleopatra by Julius Caesar), King of Kings and King of Egypt. Most important of all, Caesarion was declared legitimate son and heir of Caesar. These proclamations were known as the Donations of Alexandria and caused a fatal breach in Antony's relations with Rome. While the distribution of nations among Cleopatra's children was hardly a conciliatory gesture, it did not pose an immediate threat to Octavian's political position. Far more dangerous was the acknowledgment of Caesarion as legitimate and heir to Caesar's name. Octavian's base of power was his link with Caesar through adoption, which granted him much-needed popularity and the loyalty of the legions. To see this convenient situation attacked by a child borne by the richest woman in the world was something Octavian could not accept. The triumvirate expired on the last day of 33 BC and was not renewed. Another civil war was beginning. During 33 and 32 BC, a propaganda war was fought in the political arena of Rome, with accusations flying between sides.
Antony (in Egypt) divorced Octavia and accused Octavian of being a social upstart, of usurping power, and of forging the papers of his adoption by Caesar. Octavian responded with treason charges: of illegally keeping provinces that should have been given to other men by lots, as was Rome's tradition, and of starting wars against foreign nations (Armenia and Parthia) without the consent of the senate. Antony was also held responsible for Sextus Pompey's execution without a trial. In 32 BC, the senate deprived him of his powers and declared war against Cleopatra – not Antony, because Octavian had no wish to advertise his role in perpetuating Rome's internecine bloodshed. Both consuls, Gnaeus Domitius Ahenobarbus and Gaius Sosius, and a third of the senate abandoned Rome to meet Antony and Cleopatra in Greece. In 31 BC, the war started. Octavian's general Marcus Vipsanius Agrippa captured the Greek city and naval port of Methone, loyal to Antony. The enormous popularity of Octavian with the legions secured the defection of the provinces of Cyrenaica and Greece to his side. On 2 September, the naval Battle of Actium took place. Antony and Cleopatra's navy was overwhelmed, and they were forced to escape to Egypt with 60 ships.

Death

Octavian, now close to absolute power, invaded Egypt in August 30 BC, assisted by Agrippa. With no other refuge to escape to, Antony stabbed himself with his sword in the mistaken belief that Cleopatra had already done so. When he found out that Cleopatra was still alive, his friends brought him to Cleopatra's monument in which she was hiding, and he died in her arms. Cleopatra was allowed to conduct Antony's burial rites after she had been captured by Octavian. Realising that she was destined for Octavian's triumph in Rome, she made several attempts to take her life and finally succeeded in mid-August. Octavian had Caesarion and Antyllus killed, but he spared Iullus as well as Antony's children by Cleopatra, who were paraded through the streets of Rome.
Aftermath and legacy

Cicero's son, Cicero Minor, announced Antony's death to the senate. Antony's honours were revoked and his statues removed, but he was not subject to a complete damnatio memoriae. Cicero Minor also made a decree that no member of the Antonii would ever bear the name Marcus again. "In this way Heaven entrusted to the family of Cicero the final acts in the punishment of Antony." When Antony died, Octavian became uncontested ruler of Rome. In the following years, Octavian, who was known as Augustus after 27 BC, managed to accumulate in his person all administrative, political, and military offices. When Augustus died in AD 14, his political powers passed to his adopted son Tiberius; the Roman Empire had begun. The rise of Caesar and the subsequent civil war between his two most powerful adherents effectively ended the credibility of the Roman oligarchy as a governing power and ensured that all future power struggles would centre upon which one individual would achieve supreme control of the government, eliminating the senate and the former magisterial structure as important foci of power in these conflicts. Thus, in history, Antony appears as one of Caesar's main adherents, as one of the two men around whom power coalesced following the assassination of Caesar (the other being Octavian Augustus), and finally as one of the three men chiefly responsible for the demise of the Roman Republic.

Marriages and issue

Antony was known to have an obsession with women and sex. He had many mistresses (including Cytheris) and was married in succession to Fadia, Antonia, Fulvia, Octavia and Cleopatra. He left a number of children. Through his daughters by Octavia, he would be ancestor to the Roman emperors Caligula, Claudius and Nero.

Marriage to Fadia, a daughter of a freedman. According to Cicero, Fadia bore Antony several children. Nothing is known about Fadia or their children. Cicero is the only Roman source that mentions Antony's first wife.
Marriage to his first paternal cousin Antonia Hybrida Minor. According to Plutarch, Antony threw her out of his house in Rome because she slept with his friend, the tribune Publius Cornelius Dolabella. This occurred by 47 BC and Antony divorced her. By Antonia, he had a daughter:
Antonia, married the wealthy Greek Pythodoros of Tralles.
Marriage to Fulvia, by whom he had two sons:
Marcus Antonius Antyllus, murdered by Octavian in 30 BC.
Iullus Antonius, married Claudia Marcella the Elder, daughter of Octavia.
Marriage to Octavia the Younger, sister of Octavian, later emperor Augustus; they had two daughters:
Antonia the Elder, married Lucius Domitius Ahenobarbus (consul 16 BC); maternal grandmother of the Empress Valeria Messalina and paternal grandmother of the emperor Nero.
Antonia the Younger, married Nero Claudius Drusus, the younger son of the Empress Livia Drusilla and brother of the emperor Tiberius; mother of the emperor Claudius, paternal grandmother of the emperor Caligula and the empress Agrippina the Younger, and maternal great-grandmother of the emperor Nero.
Children with the Queen Cleopatra VII of Egypt, the former lover of Julius Caesar:
Alexander Helios
Cleopatra Selene II, married King Juba II of Numidia and later Mauretania; the queen of Syria, Zenobia of Palmyra, was reportedly descended from Selene and Juba II.
Ptolemy Philadelphus.

Descendants

Through his daughters by Octavia, he was the paternal great-grandfather of the Roman emperor Caligula, the maternal grandfather of the emperor Claudius, and both maternal great-great-grandfather and paternal great-great-uncle of the emperor Nero of the Julio-Claudian dynasty. Through his eldest daughter, he was ancestor to the long line of kings and co-rulers of the Bosporan Kingdom, the longest-living Roman client kingdom, as well as the rulers and royalty of several other Roman client states.
Through his daughter by Cleopatra, Antony was ancestor to the royal family of Mauretania, another Roman client kingdom, while through his sole surviving son Iullus, he was ancestor to several famous Roman statesmen.

1. Antonia, born 50 BC, had 1 child
  A. Pythodorida of Pontus, 30 BC or 29 BC – 38 AD, had 3 children
    I. Artaxias III, King of Armenia, 13 BC – 35 AD, died without issue
    II. Polemon II, King of Pontus, 12 BC or 11 BC – 74 AD, died without issue
    III. Antonia Tryphaena, Queen of Thrace, 10 BC – 55 AD, had 4 children
      a. Rhoemetalces II, King of Thrace, died 38 AD, died without issue
restrictive for Davies' ambition; in February 1909, six weeks before the club's first FA Cup title, Old Trafford was named as the home of Manchester United, following the purchase of land for around £60,000. Architect Archibald Leitch was given a budget of £30,000 for construction; original plans called for seating capacity of 100,000, though budget constraints forced a revision to 77,000. The building was constructed by Messrs Brameld and Smith of Manchester. The stadium's record attendance was registered on 25 March 1939, when an FA Cup semi-final between Wolverhampton Wanderers and Grimsby Town drew 76,962 spectators. Bombing in the Second World War destroyed much of the stadium; the central tunnel in the South Stand was all that remained of that quarter. After the war, the club received compensation from the War Damage Commission in the amount of £22,278. While reconstruction took place, the team played its "home" games at Manchester City's Maine Road ground; Manchester United was charged £5,000 per year, plus a nominal percentage of gate receipts. Later improvements included the addition of roofs, first to the Stretford End and then to the North and East Stands. The roofs were supported by pillars that obstructed many fans' views, and they were eventually replaced with a cantilevered structure. The Stretford End was the last stand to receive a cantilevered roof, completed in time for the 1993–94 season. Floodlighting, first used on 25 March 1957 and costing £40,000, was provided by four pylons, each housing 54 individual floodlights. These pylons were dismantled in 1987 and replaced by a lighting system embedded in the roof of each stand, which remains in use today. The Taylor Report's requirement for an all-seater stadium lowered capacity at Old Trafford to around 44,000 by 1993. In 1995, the North Stand was redeveloped into three tiers, restoring capacity to approximately 55,000.
At the end of the 1998–99 season, second tiers were added to the East and West Stands, raising capacity to around 67,000, and between July 2005 and May 2006, 8,000 more seats were added via second tiers in the north-west and north-east quadrants. Part of the new seating was used for the first time on 26 March 2006, when an attendance of 69,070 became a new Premier League record. The record was pushed steadily upwards before reaching its peak on 31 March 2007, when 76,098 spectators saw Manchester United beat Blackburn Rovers 4–1, with just 114 seats (0.15 per cent of the total capacity of 76,212) unoccupied. In 2009, reorganisation of the seating resulted in a reduction of capacity by 255 to 75,957. Manchester United has the second-highest average attendance among European football clubs, behind only Borussia Dortmund. In 2021, United co-chairman Joel Glazer said that "early-stage planning work" for the redevelopment of Old Trafford was underway. This followed "increasing criticism" over the lack of development of the ground since 2006. Support Manchester United is one of the most popular football clubs in the world, with one of the highest average home attendances in Europe. The club states that its worldwide fan base includes more than 200 officially recognised branches of the Manchester United Supporters Club (MUSC), in at least 24 countries. The club takes advantage of this support through its worldwide summer tours. Accountancy firm and sports industry consultants Deloitte estimate that Manchester United has 75 million fans worldwide. The club has the third-highest social media following in the world among sports teams (after Barcelona and Real Madrid), with over 72 million Facebook followers as of July 2020. A 2014 study showed that Manchester United had the loudest fans in the Premier League. 
Supporters are represented by two independent bodies: the Independent Manchester United Supporters' Association (IMUSA), which maintains close links to the club through the MUFC Fans Forum, and the Manchester United Supporters' Trust (MUST). After the Glazer family's takeover in 2005, a group of fans formed a splinter club, F.C. United of Manchester. The West Stand of Old Trafford – the "Stretford End" – is the home end and the traditional source of the club's most vocal support. Rivalries Manchester United has rivalries with Arsenal, Leeds United, Liverpool, and Manchester City, against whom they contest the Manchester derby. The rivalry with Liverpool is rooted in competition between the cities during the Industrial Revolution, when Manchester was famous for its textile industry while Liverpool was a major port. The two clubs are the most successful English teams in both domestic and international competitions; between them they have won 39 league titles, 9 European Cups, 4 UEFA Cups, 5 UEFA Super Cups, 19 FA Cups, 13 League Cups, 2 FIFA Club World Cups, 1 Intercontinental Cup and 36 FA Community Shields. It is considered one of the biggest rivalries in world football and the most famous fixture in English football. Former Manchester United manager Alex Ferguson said in 2002, "My greatest challenge was knocking Liverpool right off their fucking perch". The "Roses Rivalry" with Leeds stems from the Wars of the Roses, fought between the House of Lancaster and the House of York, with Manchester United representing Lancashire and Leeds representing Yorkshire. The rivalry with Arsenal arises from the numerous times the two teams, as well as managers Alex Ferguson and Arsène Wenger, have battled for the Premier League title. With 33 titles between them (20 for Manchester United, 13 for Arsenal), this fixture has become known as one of the finest Premier League match-ups in history. 
Global brand Manchester United has been described as a global brand; a 2011 report by Brand Finance valued the club's trademarks and associated intellectual property at £412 million – an increase of £39 million on the previous year, and £11 million more than the second-best brand, Real Madrid – and gave the brand a strength rating of AAA (Extremely Strong). In July 2012, Manchester United was ranked first by Forbes magazine in its list of the ten most valuable sports team brands, valuing the Manchester United brand at $2.23 billion. The club is ranked third in the Deloitte Football Money League (behind Real Madrid and Barcelona). In January 2013, the club became the first sports team in the world to be valued at $3 billion. Forbes magazine valued the club at $3.3 billion – $1.2 billion higher than the next most valuable sports team. They were overtaken by Real Madrid for the next four years, but Manchester United returned to the top of the Forbes list in June 2017, with a valuation of $3.689 billion. The core strength of Manchester United's global brand is often attributed to Matt Busby's rebuilding of the team and its subsequent success following the Munich air disaster, which drew worldwide acclaim. The "iconic" team included Bobby Charlton and Nobby Stiles (members of England's World Cup-winning team), Denis Law and George Best. The attacking style of play adopted by this team (in contrast to the defensive-minded "catenaccio" approach favoured by the leading Italian teams of the era) "captured the imagination of the English footballing public". Busby's team also became associated with the liberalisation of Western society during the 1960s; George Best, known as the "Fifth Beatle" for his iconic haircut, was the first footballer to significantly develop an off-the-field media profile. 
As the second English football club to float on the London Stock Exchange in 1991, the club raised significant capital, with which it further developed its commercial strategy. The club's focus on commercial and sporting success brought significant profits in an industry often characterised by chronic losses. The strength of the Manchester United brand was bolstered by intense off-the-field media attention to individual players, most notably David Beckham (who quickly developed his own global brand). This attention often generates greater interest in on-the-field activities, and hence generates sponsorship opportunities – the value of which is driven by television exposure. During his time with the club, Beckham's popularity across Asia was integral to the club's commercial success in that part of the world. Because higher league placement results in a greater share of television rights, success on the field generates greater income for the club. Since the inception of the Premier League, Manchester United has received the largest share of the revenue generated from the BSkyB broadcasting deal. Manchester United has also consistently enjoyed the highest commercial income of any English club; in 2005–06, the club's commercial arm generated £51 million, compared to £42.5 million at Chelsea, £39.3 million at Liverpool, £34 million at Arsenal and £27.9 million at Newcastle United. A key sponsorship relationship was with sportswear company Nike, who managed the club's merchandising operation as part of a £303 million 13-year partnership between 2002 and 2015. Through Manchester United Finance and the club's membership scheme, One United, those with an affinity for the club can purchase a range of branded goods and services. Additionally, Manchester United-branded media services – such as the club's dedicated television channel, MUTV – have allowed the club to expand its fan base to those beyond the reach of its Old Trafford stadium. 
Sponsorship In an initial five-year deal worth £500,000, Sharp Electronics became the club's first shirt sponsor at the beginning of the 1982–83 season, a relationship that lasted until the end of the 1999–2000 season, when Vodafone agreed a four-year, £30 million deal. Vodafone agreed to pay £36 million to extend the deal by four years, but after two seasons triggered a break clause in order to concentrate on its sponsorship of the Champions League. To commence at the start of the 2006–07 season, American insurance corporation AIG agreed a four-year £56.5 million deal which in September 2006 became the most valuable in the world. At the beginning of the 2010–11 season, American reinsurance company Aon became the club's principal sponsor in a four-year deal reputed to be worth approximately £80 million, making it the most lucrative shirt sponsorship deal in football history. Manchester United announced their first training kit sponsor in August 2011, agreeing a four-year deal with DHL reported to be worth £40 million; it is believed to be the first instance of training kit sponsorship in English football. The DHL contract lasted for over a year before the club bought back the contract in October 2012, although they remained the club's official logistics partner. The contract for the training kit sponsorship was then sold to Aon in April 2013 for a deal worth £180 million over eight years, which also included purchasing the naming rights for the Trafford Training Centre. The club's first kit manufacturer was Umbro, until a five-year deal was agreed with Admiral Sportswear in 1975. Adidas received the contract in 1980, before Umbro started a second spell in 1992. Umbro's sponsorship lasted for ten years, followed by Nike's record-breaking £302.9 million deal that lasted until 2015; 3.8 million replica shirts were sold in the first 22 months with the company. 
In addition to Nike and Chevrolet, the club also has several lower-level "platinum" sponsors, including Aon and Budweiser. On 30 July 2012, United signed a seven-year deal with American automotive corporation General Motors, which replaced Aon as the shirt sponsor from the 2014–15 season. The new $80m-a-year shirt deal is worth $559m over seven years and features the logo of General Motors brand Chevrolet. Nike announced that they would not renew their kit supply deal with Manchester United after the 2014–15 season, citing rising costs. Since the start of the 2015–16 season, Adidas has manufactured Manchester United's kit as part of a world-record 10-year deal worth a minimum of £750 million. Plumbing products manufacturer Kohler became the club's first sleeve sponsor ahead of the 2018–19 season. Manchester United and General Motors did not renew their sponsorship deal, and the club subsequently signed a five-year, £235m sponsorship deal with TeamViewer ahead of the 2021–22 season. Ownership and finances Originally funded by the Lancashire and Yorkshire Railway Company, the club became a limited company in 1892 and sold shares to local supporters for £1 via an application form. In 1902, majority ownership passed to four local businessmen, including future club president John Henry Davies, each of whom invested £500 to save the club from bankruptcy. After his death in 1927, the club faced bankruptcy yet again, but was saved in December 1931 by James W. Gibson, who assumed control of the club after an investment of £2,000. Gibson promoted his son, Alan, to the board in 1948, but died three years later; the Gibson family retained ownership of the club through James' wife, Lillian, but the position of chairman passed to former player Harold Hardman. 
Promoted to the board a few days after the Munich air disaster, Louis Edwards, a friend of Matt Busby, began acquiring shares in the club; for an investment of approximately £40,000, he accumulated a 54 per cent shareholding and took control in January 1964. When Lillian Gibson died in January 1971, her shares passed to Alan Gibson who sold a percentage of his shares to Louis Edwards' son, Martin, in 1978; Martin Edwards went on to become chairman upon his father's death in 1980. Media tycoon Robert Maxwell attempted to buy the club in 1984, but did not meet Edwards' asking price. In 1989, chairman Martin Edwards attempted to sell the club to Michael Knighton for £20 million, but the sale fell through and Knighton joined the board of directors instead. Manchester United was floated on the stock market in June 1991 (raising £6.7 million), and received yet another takeover bid in 1998, this time from Rupert Murdoch's British Sky Broadcasting Corporation. This resulted in the formation of Shareholders United Against Murdoch – now the Manchester United Supporters' Trust – who encouraged supporters to buy shares in the club in an attempt to block any hostile takeover. The Manchester United board accepted a £623 million offer, but the takeover was blocked by the Monopolies and Mergers Commission at the final hurdle in April 1999. A few years later, a power struggle emerged between the club's manager, Alex Ferguson, and his horse-racing partners, John Magnier and J. P. McManus, who had gradually become the majority shareholders. In a dispute that stemmed from contested ownership of the horse Rock of Gibraltar, Magnier and McManus attempted to have Ferguson removed from his position as manager, and the board responded by approaching investors to attempt to reduce the Irishmen's majority. 
Glazer ownership In May 2005, Malcolm Glazer purchased the 28.7 per cent stake held by McManus and Magnier, thus acquiring a controlling interest through his investment vehicle Red Football Ltd in a highly leveraged takeover valuing the club at approximately £800 million (then approx. $1.5 billion). Once the purchase was complete, the club was taken off the stock exchange. Much of the takeover money was borrowed by the Glazers, and the debts were transferred to the club; as a result, the club went from being debt-free to being saddled with debts of £540 million, at interest rates of between 7% and 20%. In July 2006, the club announced a £660 million debt refinancing package, resulting in a 30 per cent reduction in annual interest payments to £62 million a year. In January 2010, with debts of £716.5 million ($1.17 billion), Manchester United further refinanced through a bond issue worth £504 million, enabling them to pay off most of the £509 million owed to international banks. The annual interest payable on the bonds – which were to mature on 1 February 2017 – was approximately £45 million. Despite the restructuring, the club's debt prompted protests from fans on 23 January 2010, at Old Trafford and the club's Trafford Training Centre. Supporter groups encouraged match-going fans to wear green and gold, the colours of Newton Heath. On 30 January, reports emerged that the Manchester United Supporters' Trust had held meetings with a group of wealthy fans, dubbed the "Red Knights", with plans to buy out the Glazers' controlling interest. The club's debts had reached a high of £777 million in June 2007. In August 2011, the Glazers were believed to have approached Credit Suisse in preparation for a $1 billion (approx. £600 million) initial public offering (IPO) on the Singapore stock exchange that would value the club at more than £2 billion; however, in July 2012, the club announced plans to list its IPO on the New York Stock Exchange instead. 
Shares were originally set to go on sale for between $16 and $20 each, but the price was cut to $14 by the launch of the IPO on 10 August, following negative comments from Wall Street analysts and Facebook's disappointing stock market debut in May. Even after the cut, Manchester United was valued at $2.3 billion, making it the most valuable football club in the world. The New York Stock Exchange allows different classes of shareholder to hold different voting rights over the club: shares offered to the public ("Class A") carried one-tenth of the voting rights of the shares retained by the Glazers ("Class B"). Initially, in 2012, only 10% of shares were offered to the public. As of 2019, the Glazers retain ultimate control over the club, with over 70% of shares and an even higher share of the voting power. In 2012, The Guardian estimated that the club had paid a total of over £500 million in debt interest and other fees on behalf of the Glazers, and in 2019, reported that the total sum paid by the club for such fees had risen to £1 billion. At the end of 2019, the club had a net debt of nearly £400 million. Players First-team squad On loan Reserves and academy List of under-23s and academy players with articles On loan Player of the Year Coaching staff Managerial history Management Owner: Glazer family via Red Football Shareholder Limited Manchester United Limited Manchester United Football Club Honours Manchester United is one of the most successful clubs in Europe in terms of trophies won. The club's first trophy was the Manchester Cup, which they won as Newton Heath LYR in 1886. In 1908, the club won their first league title, and won the FA Cup for the first time the following year. Since then, they have gone on to win a record 20 top-division titles – including a record 13 Premier League titles
Cup Winners' Cup). Manchester United is one of the most widely supported football clubs in the world, and has rivalries with Liverpool, Manchester City, Arsenal and Leeds United. Manchester United was the highest-earning football club in the world for 2016–17, with an annual revenue of €676.3 million, and the world's third most valuable football club in 2019, valued at £3.15 billion ($3.81 billion). After being floated on the London Stock Exchange in 1991, the club was taken private in 2005 after a purchase by Malcolm Glazer valued at almost £800 million, of which over £500 million of borrowed money became the club's debt. From 2012, some shares of the club were listed on the New York Stock Exchange, although the Glazer family retains overall ownership and control of the club. History Early years (1878–1945) Manchester United was formed in 1878 as Newton Heath LYR Football Club by the Carriage and Wagon department of the Lancashire and Yorkshire Railway (LYR) depot at Newton Heath. The team initially played games against other departments and railway companies, but on 20 November 1880, they competed in their first recorded match; wearing the colours of the railway company – green and gold – they were defeated 6–0 by Bolton Wanderers' reserve team. By 1888, the club had become a founding member of The Combination, a regional football league. Following the league's dissolution after only one season, Newton Heath joined the newly formed Football Alliance, which ran for three seasons before being merged with The Football League. This resulted in the club starting the 1892–93 season in the First Division, by which time it had become independent of the railway company and dropped the "LYR" from its name. After two seasons, the club was relegated to the Second Division. In January 1902, with debts of £2,670, the club was served with a winding-up order. 
Captain Harry Stafford found four local businessmen, including John Henry Davies (who became club president), each willing to invest £500 in return for a direct interest in running the club and who subsequently changed the name; on 24 April 1902, Manchester United was officially born. Under Ernest Mangnall, who assumed managerial duties in 1903, the team finished as Second Division runners-up in 1906 and secured promotion to the First Division, which they won in 1908 – the club's first league title. The following season began with victory in the first ever Charity Shield and ended with the club's first FA Cup title. Manchester United won the First Division for the second time in 1911, but at the end of the following season, Mangnall left the club to join Manchester City. In 1922, three years after the resumption of football following the First World War, the club was relegated to the Second Division, where it remained until regaining promotion in 1925. Relegated again in 1931, Manchester United became a yo-yo club, achieving its all-time lowest position of 20th place in the Second Division in 1934. Following the death of principal benefactor John Henry Davies in October 1927, the club's finances deteriorated to the extent that Manchester United would likely have gone bankrupt had it not been for James W. Gibson, who, in December 1931, invested £2,000 and assumed control of the club. In the 1938–39 season, the last year of football before the Second World War, the club finished 14th in the First Division. Busby years (1945–1969) In October 1945, the impending resumption of football after the war led to the managerial appointment of Matt Busby, who demanded an unprecedented level of control over team selection, player transfers and training sessions. Busby led the team to second-place league finishes in 1947, 1948 and 1949, and to FA Cup victory in 1948. In 1952, the club won the First Division, its first league title for 41 years. 
They then won back-to-back league titles in 1956 and 1957; the squad, who had an average age of 22, were nicknamed "the Busby Babes" by the media, a testament to Busby's faith in his youth players. In 1957, Manchester United became the first English team to compete in the European Cup, despite objections from The Football League, who had denied Chelsea the same opportunity the previous season. En route to the semi-final, which they lost to Real Madrid, the team recorded a 10–0 victory over Belgian champions Anderlecht, which remains the club's biggest victory on record. The following season, on the way home from a European Cup quarter-final victory against Red Star Belgrade, the aircraft carrying the Manchester United players, officials and journalists crashed while attempting to take off after refuelling in Munich, Germany. The Munich air disaster of 6 February 1958 claimed 23 lives, including those of eight players – Geoff Bent, Roger Byrne, Eddie Colman, Duncan Edwards, Mark Jones, David Pegg, Tommy Taylor and Billy Whelan – and injured several more. Assistant manager Jimmy Murphy took over as manager while Busby recovered from his injuries and the club's makeshift side reached the FA Cup final, which they lost to Bolton Wanderers. In recognition of the team's tragedy, UEFA invited the club to compete in the 1958–59 European Cup alongside eventual League champions Wolverhampton Wanderers. Despite approval from The Football Association, The Football League determined that the club should not enter the competition, since it had not qualified. Busby rebuilt the team through the 1960s by signing players such as Denis Law and Pat Crerand, who combined with the next generation of youth players – including George Best – to win the FA Cup in 1963. The following season, they finished second in the league, then won the title in 1965 and 1967. 
In 1968, Manchester United became the first English club to win the European Cup, beating Benfica 4–1 in the final with a team that contained three European Footballers of the Year: Bobby Charlton, Denis Law and George Best. They then represented Europe in the 1968 Intercontinental Cup against Estudiantes of Argentina, but lost the tie after losing the first leg in Buenos Aires, before a 1–1 draw at Old Trafford three weeks later. Busby resigned as manager in 1969 before being replaced by the reserve team coach, former Manchester United player Wilf McGuinness. 1969–1986 Following an eighth-place finish in the 1969–70 season and a poor start to the 1970–71 season, Busby was persuaded to temporarily resume managerial duties, and McGuinness returned to his position as reserve team coach. In June 1971, Frank O'Farrell was appointed as manager, but lasted less than 18 months before being replaced by Tommy Docherty in December 1972. Docherty saved Manchester United from relegation that season, only to see them relegated in 1974; by that time the trio of Best, Law, and Charlton had left the club. The team won promotion at the first attempt and reached the FA Cup final in 1976, but were beaten by Southampton. They reached the final again in 1977, beating Liverpool 2–1. Docherty was dismissed shortly afterwards, following the revelation of his affair with the club physiotherapist's wife. Dave Sexton replaced Docherty as manager in the summer of 1977. Despite major signings, including Joe Jordan, Gordon McQueen, Gary Bailey, and Ray Wilkins, the team failed to win any trophies; they finished second in 1979–80 and lost to Arsenal in the 1979 FA Cup Final. Sexton was dismissed in 1981, even though the team won the last seven games under his direction. He was replaced by Ron Atkinson, who immediately broke the British record transfer fee to sign Bryan Robson from his former club West Bromwich Albion. 
Under Atkinson, Manchester United won the FA Cup in 1983 and 1985 and beat rivals Liverpool to win the 1983 Charity Shield. In 1985–86, after 13 wins and two draws in its first 15 matches, the club was favourite to win the league but finished in fourth place. The following season, with the club in danger of relegation by November, Atkinson was dismissed. Ferguson years (1986–2013) Alex Ferguson and his assistant Archie Knox arrived from Aberdeen on the day of Atkinson's dismissal, and guided the club to an 11th-place finish in the league. Despite a second-place finish in 1987–88, the club was back in 11th place the following season. With Ferguson reportedly on the verge of dismissal, victory over Crystal Palace in the 1990 FA Cup Final saved his job. The following season, Manchester United claimed their first UEFA Cup Winners' Cup title. That triumph allowed the club to compete in the European Super Cup for the first time, where United beat European Cup holders Red Star Belgrade 1–0 at Old Trafford. The club appeared in two consecutive League Cup finals, in 1991 and 1992, beating Nottingham Forest 1–0 in the second to claim that competition for the first time as well. In 1993, the club won its first league title since 1967, and a year later, for the first time since 1957, it won a second consecutive title – alongside the FA Cup – to complete the first "Double" in the club's history. United then became the first English club to do the Double twice when they won both competitions again in 1995–96, before retaining the league title once more in 1996–97 with a game to spare. In the 1998–99 season, Manchester United became the first team to win the Premier League, FA Cup and UEFA Champions League – "The Treble" – in the same season. 
Trailing 1–0 going into injury time in the 1999 UEFA Champions League Final, Manchester United scored late goals through Teddy Sheringham and Ole Gunnar Solskjær to claim a dramatic victory over Bayern Munich, in what is considered one of the greatest comebacks of all time. That summer, Ferguson received a knighthood for his services to football. In November 1999, the club became the only British team ever to win the Intercontinental Cup, with a 1–0 victory over Palmeiras in Tokyo. Manchester United won the league again in the 1999–2000 and 2000–01 seasons, becoming only the fourth club to win the English title three times in a row. The team finished third in 2001–02, before regaining the title in 2002–03. They won the 2003–04 FA Cup, beating Millwall 3–0 in the final at the Millennium Stadium in Cardiff to lift the trophy for a record 11th time. In the 2005–06 season, Manchester United failed to qualify for the knockout phase of the UEFA Champions League for the first time in over a decade, but recovered to secure a second-place league finish and victory over Wigan Athletic in the 2006 Football League Cup Final. The club regained the Premier League title in the 2006–07 season, before completing the European double in 2007–08 with a 6–5 penalty shoot-out victory over Chelsea in the 2008 UEFA Champions League Final in Moscow to go with their 17th English league title. Ryan Giggs made a record 759th appearance for the club in that game, overtaking previous record holder Bobby Charlton. In December 2008, the club became the first British team to win the FIFA Club World Cup and followed this with the 2008–09 Football League Cup and its third successive Premier League title. That summer, forward Cristiano Ronaldo was sold to Real Madrid for a world record £80 million. In 2010, Manchester United defeated Aston Villa 2–1 at Wembley to retain the League Cup, its first successful defence of a knockout cup competition. 
After finishing as runners-up to Chelsea in the 2009–10 season, United achieved a record 19th league title in 2010–11, securing the championship with a 1–1 away draw against Blackburn Rovers on 14 May 2011. This was extended to 20 league titles in 2012–13, securing the championship with a 3–0 home win against Aston Villa on 22 April 2013. 2013–present On 8 May 2013, Ferguson announced that he was to retire as manager at the end of the football season, but would remain at the club as a director and club ambassador. He retired as the most decorated manager in football history. The club announced the next day that Everton manager David Moyes would replace him from 1 July, having signed a six-year contract. Ryan Giggs took over as interim player-manager 10 months later, on 22 April 2014, when Moyes was sacked after a poor season in which the club failed to defend their Premier League title and failed to qualify for the UEFA Champions League for the first time since 1995–96. They also failed to qualify for the Europa League, meaning that it was the first time Manchester United had not qualified for a European competition since 1990. On 19 May 2014, it was confirmed that Louis
Alto, California, United States. The language name was a pun on the programming-language catchphrases of the time: Mesa is a "high-level" programming language, and a mesa is an elevated landform. Mesa is an ALGOL-like language with strong support for modular programming. Every library module has at least two source files: a definitions file specifying the library's interface, plus one or more program files specifying the implementation of the procedures in the interface. To use a library, a program or higher-level library must "import" the definitions. The Mesa compiler type-checks all uses of imported entities; this combination of separate compilation with type-checking was unusual at the time. Mesa introduced several other innovations in language design and implementation, notably in the handling of software exceptions, thread synchronization, and incremental compilation. Mesa was developed on the Xerox Alto, one of the first personal computers with a graphical user interface; however, most of the Alto's system software was written in BCPL. Mesa was the system programming language of the later Xerox Star workstations, and for the GlobalView desktop environment. Xerox PARC later developed Cedar, a superset of Mesa. Mesa and Cedar had a major influence on the design of other important languages, such as Modula-2 and Java, and were an important vehicle for the development and dissemination of the fundamentals of GUIs, networked environments, and the other advances Xerox contributed to the field of computer science. History Mesa was originally designed in the Computer Systems Laboratory (CSL), a branch of the Xerox Palo Alto Research Center, for the Alto, an experimental micro-coded workstation. Initially, its spread was confined to PARC and a few universities to which Xerox had donated some Altos. 
Mesa was later adopted as the systems programming language for Xerox's commercial workstations such as the Xerox 8010 (Xerox Star, Dandelion) and Xerox 6085 (Daybreak), in particular for the Pilot operating system. A secondary development environment, called the Xerox Development Environment (XDE), allowed developers to debug both the Pilot operating system and ViewPoint GUI applications using a world-swap mechanism. This allowed the entire "state" of the world to be swapped out, so that even low-level system crashes which paralyzed the whole system could be debugged. This technique did not scale well to large application images (several megabytes), and so the Pilot/Mesa world in later releases moved away from the world-swap view when the micro-coded machines were phased out in favor of SPARC workstations and Intel PCs running a Mesa PrincOps emulator for the basic hardware instruction set. Mesa was compiled into a stack-machine language, purportedly with the highest code density ever achieved (roughly 4 bytes per high-level language statement). This was touted in a 1981 paper in which implementors from the Xerox Systems Development Department (then the development arm of PARC) described how they tuned the instruction set and measured the resulting code density. Mesa was taught via the Mesa Programming Course, which took people through the wide range of technology Xerox had available at the time and ended with the programmer writing a "hack", a workable program designed to be useful. An actual example of such a hack is the BWSMagnifier, written in 1988, which allowed people to magnify sections of the workstation screen as defined by a resizable window and a changeable magnification factor. Trained Mesa programmers from Xerox were well versed in the fundamentals of GUIs, networking, exceptions, and multi-threaded programming, almost a decade before they became standard tools of the trade. 
Within Xerox, Mesa was eventually superseded by the Cedar programming language. Many Mesa programmers and developers left
commentaries on the works of Plato: vol. I, 2008, Phaedrus and Ion, tr. by Michael J. B. Allen; vol. II, 2012, Parmenides, part I, tr. by Maude Vanhaelen; vol. III, 2012, Parmenides, part II, tr. by Maude Vanhaelen. Icastes: Marsilio Ficino's Interpretation of Plato's Sophist, edited and translated by Michael J. B. Allen, Berkeley: University of California Press, 1989. The Book of Life, translated with an introduction by Charles Boer, Dallas: Spring Publications, 1980. ISBN 0-88214-212-7. De vita libri tres (Three Books on Life, 1489), translated by Carol V. Kaske and John R. Clarke, Tempe, Arizona: The Renaissance Society of America, 2002. With notes, commentaries, and Latin text on facing pages. De religione Christiana et fidei pietate (1475–6), dedicated to Lorenzo de' Medici. In Epistolas Pauli commentaria, Marsilii Ficini Epistolae (Venice, 1491; Florence, 1497). Meditations on the Soul: Selected letters of Marsilio Ficino, tr. by the Language Department of the School of Economic Science, London. Rochester, Vermont: Inner Traditions International, 1996. Note for instance letter 31: A man is not rightly formed who does not delight in harmony, pp. 5–60; letter 9: One can have patience without religion, pp. 16–18; Medicine heals the body, music the spirit, theology the soul, pp. 63–64; letter 77: The good will rule over the stars, p. 166. Commentary on Plato's Symposium on Love, translated with an introduction and notes by Sears Jayne. Woodstock, Conn.: Spring Publications (1985), 2nd edition, 2000. Collected works: Opera (Florence, 1491; Venice, 1516; Basel, 1561).
Italian Renaissance and the development of European philosophy. Early life Ficino was born at Figline Valdarno. His father, Diotifeci d'Agnolo, was a physician under the patronage of Cosimo de' Medici, who took the young man into his household, became the lifelong patron of Marsilio, and made him tutor to his grandson, Lorenzo de' Medici. Giovanni Pico della Mirandola, the Italian humanist philosopher and scholar, was another of his students. Career and thought Platonic Academy During the sessions at Florence of the Council of Ferrara-Florence in 1438–1445, amid the failed attempts to heal the schism of the Eastern (Orthodox) and Western (Catholic) churches, Cosimo de' Medici and his intellectual circle had made acquaintance with the Neoplatonic philosopher George Gemistos Plethon, whose discourses upon Plato and the Alexandrian mystics so fascinated the humanists of Florence that they named him the second Plato. In 1459 John Argyropoulos was lecturing on Greek language and literature at Florence, and Ficino became his pupil. When Cosimo decided to refound Plato's Academy at Florence, he chose Ficino as its head. In 1462, Cosimo supplied Ficino with Greek manuscripts of Plato's work, whereupon Ficino started translating the entire corpus into Latin (draft translation of the dialogues finished 1468–9; published 1484). Ficino also produced a translation of a collection of Hellenistic Greek documents found by Leonardo da Pistoia, later called the Hermetica, and the writings of many of the Neoplatonists, including Porphyry, Iamblichus, and Plotinus. Among his many students was Francesco Cattani da Diacceto, who was considered by Ficino to be his successor as the head of the Florentine Platonic Academy. Diacceto's student, Giovanni di Bardo Corsi, produced a short biography of Ficino in 1506. Theology, astrology, and the soul Though trained as a physician, Ficino became a priest in 1473.
In 1474 Ficino completed his treatise on the immortality of the soul, Theologia Platonica de immortalitate animae (Platonic Theology). In the rush of enthusiasm for every rediscovery from Antiquity, he exhibited a great interest in the arts of astrology, which landed him in trouble with the Catholic Church. In 1489 he was accused of heresy before Pope Innocent VIII and was acquitted. Writing in 1492, Ficino proclaimed: Ficino's letters, extending over the years 1474–1494, survive and have been published. He wrote De amore (Of Love) in 1484. De vita libri tres (Three books on life), or De triplici vita (The Book of Life), published in 1489, provides a great deal of medical and astrological advice for maintaining health and vigor, as well as espousing the Neoplatonist view of the world's ensoulment and its integration with the human soul: One metaphor for this integrated "aliveness" is Ficino's astrology. In the Book of Life, he details the interlinks between behavior and consequence, listing the things that hold sway over a man's destiny. Medical works Probably due to early influences from his father, Diotifeci, who was a doctor to Cosimo de' Medici, Ficino published Latin and Italian treatises on medical subjects such as Consiglio contro la pestilenza (Recommendations for the treatment of the plague) and De vita libri tres (Three books on life). His medical works exerted considerable influence on Renaissance physicians such as Paracelsus, with whom he shared the perception of the unity of the microcosmos and macrocosmos, and of their interactions through somatic and psychological manifestations, with the aim of investigating their signatures to cure diseases. Those works, which were very popular at the time, dealt with astrological and alchemical
can diffuse and carry signals that control cell differentiation via concentration gradients. Morphogens typically act through binding to specific protein receptors. An important class of molecules involved in morphogenesis is transcription factor proteins that determine the fate of cells by interacting with DNA. These can be coded for by master regulatory genes, and either activate or deactivate the transcription of other genes; in turn, these secondary gene products can regulate the expression of still other genes in a regulatory cascade of gene regulatory networks. At the end of this cascade are classes of molecules that control cellular behaviors such as cell migration, or, more generally, their properties, such as cell adhesion or cell contractility. For example, during gastrulation, clumps of stem cells switch off their cell-to-cell adhesion, become migratory, and take up new positions within an embryo where they again activate specific cell adhesion proteins and form new tissues and organs. Developmental signaling pathways implicated in morphogenesis include Wnt, Hedgehog, and ephrins. Cellular basis At a tissue level, ignoring the means of control, morphogenesis arises because of cellular proliferation and motility. Morphogenesis also involves changes in the cellular structure or how cells interact in tissues. These changes can result in tissue elongation, thinning, folding, invasion or separation of one tissue into distinct layers. The latter case is often referred to as cell sorting. Cell "sorting out" consists of cells moving so as to sort into clusters that maximize contact between cells of the same type. The ability of cells to do this has been proposed by Malcolm Steinberg, through his differential adhesion hypothesis, to arise from differential cell adhesion. Tissue separation can also occur via more dramatic cellular differentiation events during which epithelial cells become mesenchymal (see Epithelial–mesenchymal transition).
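The gradient mechanism described above, in which a diffusing morphogen's local concentration selects a cell fate, can be sketched as a toy threshold model. All names and numbers below (the exponential profile, the two thresholds, the fate labels) are illustrative assumptions, not values from any measured system:

```python
import math

# Toy sketch of fate selection by a morphogen gradient: a steady-state
# exponential concentration profile falling off from a source at x = 0,
# read out against two thresholds. Purely illustrative parameters.

def concentration(x, c0=1.0, decay_length=2.0):
    """Morphogen concentration at distance x from the source."""
    return c0 * math.exp(-x / decay_length)

def fate(c, high=0.5, low=0.1):
    """Map a local morphogen concentration to one of three cell fates."""
    if c >= high:
        return "fate-A"
    if c >= low:
        return "fate-B"
    return "fate-C"

# Cells near the source adopt fate-A; distant cells adopt fate-C.
fates = [fate(concentration(x)) for x in range(10)]
```

Because the concentration decreases monotonically with distance, the thresholds partition the tissue into contiguous bands of each fate, which is the essential point of gradient-based patterning.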
Mesenchymal cells typically leave the epithelial tissue as a consequence of changes in cell adhesive and contractile properties. Following epithelial-mesenchymal transition, cells can migrate away from an epithelium and then associate with other similar cells in a new location.
In plants, cellular morphogenesis is tightly linked to the chemical composition and the mechanical properties of the cell wall. Cell-to-cell adhesion During embryonic development, cells are restricted to different layers due to differential affinities. One of the ways this can occur is when cells share the same cell-to-cell adhesion molecules. For instance, homotypic cell adhesion can maintain boundaries between groups of cells that have different adhesion molecules. Furthermore, cells can sort based upon differences in adhesion between the cells, so even two populations of cells with different levels of the same adhesion molecule can sort out. In cell culture, cells that have the strongest adhesion move to the center of a mixed aggregate of cells. Moreover, cell-cell adhesion is often modulated by cell contractility, which can exert forces on the cell-cell contacts so that two cell populations with equal levels of the same adhesion molecule can sort out. The molecules responsible for adhesion are called cell adhesion molecules (CAMs). Several types of cell adhesion molecules are known, and one major class of these molecules is the cadherins. There are dozens of different cadherins that are expressed on different cell types. Cadherins bind to other cadherins in a like-to-like manner: E-cadherin (found on many epithelial cells) binds preferentially to other E-cadherin molecules. Mesenchymal cells usually express other cadherin types such as N-cadherin. Extracellular matrix The extracellular matrix (ECM) is involved in keeping tissues separated, providing structural support or providing a structure for cells to migrate on. Collagen, laminin, and fibronectin are major ECM molecules that are secreted and assembled into sheets, fibers, and gels. Multisubunit transmembrane receptors called integrins are used to bind to the ECM.
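The sorting-out behaviour attributed to differential adhesion can be illustrated with a minimal toy simulation. This is a hypothetical sketch, not any published model: two cell types sit on a one-dimensional ring, and a swap of two neighbours is kept only if it does not reduce the number of like-type contacts, so an interleaved population coarsens into clusters:

```python
import random

# Toy model of cell "sorting out": cells of two types on a 1-D ring,
# where swapping two neighbours is rejected whenever it reduces the
# number of like-type contacts. Clusters of same-type cells emerge,
# mimicking sorting that maximises contact between cells of one type.

def like_contacts(cells):
    """Number of adjacent same-type pairs on the ring."""
    n = len(cells)
    return sum(cells[i] == cells[(i + 1) % n] for i in range(n))

def sort_out(cells, steps=5000, seed=0):
    rng = random.Random(seed)
    cells = list(cells)
    n = len(cells)
    for _ in range(steps):
        i = rng.randrange(n)
        j = (i + 1) % n
        before = like_contacts(cells)
        cells[i], cells[j] = cells[j], cells[i]
        if like_contacts(cells) < before:  # reject swaps that lose contacts
            cells[i], cells[j] = cells[j], cells[i]
    return cells

mixed = ["A", "B"] * 8          # fully interleaved: zero like-type contacts
sorted_cells = sort_out(mixed)  # ends with strictly more like-type contacts
```

A real treatment would use adhesion energies per contact type (as in Potts-model simulations of the differential adhesion hypothesis) rather than a simple contact count, but the greedy version already shows clustering.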
Integrins bind extracellularly to fibronectin, laminin, or other ECM components, and intracellularly to microfilament-binding proteins α-actinin and talin to link the cytoskeleton with the outside. Integrins also serve as receptors to trigger signal transduction cascades when binding to the ECM. A well-studied example of morphogenesis that involves ECM is mammary gland ductal branching. Cell contractility Tissues can change their shape and separate into distinct layers via cell contractility. Just as in muscle cells, myosin can contract different parts of the cytoplasm to change its shape or structure. Myosin-driven contractility in embryonic tissue morphogenesis
of instruction, a language or other tool used to educate, train, or instruct Wave physics Transmission medium, in physics and telecommunications, any material substance which can propagate waves or energy Active laser medium (also called gain medium or lasing medium), a quantum system that allows amplification of power (gain) of waves passing through (usually by stimulated emission) Optical medium, in physics, a material through which electromagnetic waves propagate Excitable medium, a non-linear dynamic system which has the capacity to propagate a wave Other uses in science and technology Data storage medium, a storage container in computing Growth medium (or culture medium), in biotechnology, an object in which microorganisms or cells experience growth Interplanetary medium, in astronomy, material which fills the solar system Interstellar medium, in astronomy, the matter and energy content that exists between the stars within a galaxy Porous medium, in engineering and earth sciences, a material that allows fluid to pass through it, such as sand Processing medium, in industrial engineering, a
material that plays a role in manufacturing processes Arts, entertainment, and media Films The Medium (1921 film), a German silent film The Medium (1951 film), a film version of the opera directed by Menotti The Medium (1960 film), an Australian television play The Medium (1992 film), an English film from Singapore The Medium (2021 film), a Thai film Periodicals The Medium (Rutgers), an entertainment weekly at Rutgers University The Medium (University of Toronto Mississauga), a student newspaper at the University of Toronto Mississauga Other arts, entertainment, and media List of art media (plural: media), materials and techniques used by an artist to produce a work Medium (TV series), an American television series starring Patricia Arquette about a medium (psychic intermediary) working as a consultant for a district attorney's
(Philippines) Master of Computer Applications, a three-year master's (postgraduate) degree in Computer Science/Applied Computer Science offered in India Educational institutions Maranatha Christian Academy, the former name of National Christian Life College, Marikina, Philippines Marist College Ashgrove, an Australian School McIntosh County Academy, a high school in McIntosh County, Georgia, United States Memphis College of Art, an art school in Tennessee, United States Morrison Christian Academy, an American school in Taiwan Professional courses Microsoft Certified Architect, a certification available from Microsoft Minnesota Comprehensive Assessments—Series II, a standardized test in Minnesota Legal Depository Institutions Deregulation and Monetary Control Act, a US financial statute passed in 1980 Mental Capacity Act 2005, an Act of the Parliament of the United Kingdom applying to England and Wales Military Commissions Act of 2006, US legislation Organizations Maharashtra Chess Association Malaysian Chinese Association, a political party in Malaysia Maritime and Coastguard Agency, an agency of the United Kingdom Government Medal Collectors of America Medicines Control Agency, which merged with the Medical Devices Agency to become the Medicines and Healthcare products Regulatory Agency Metal Construction Association Millennium Challenge Account, a U.S. program for aid to developing countries Ministry of Corporate Affairs, an Indian government ministry MultiCultural Aotearoa, a New Zealand political action group Mumbai Cricket Association, ruling body for cricket in Mumbai Multicore Association, an industry association regrouping companies and universities interested in multicore computing research. Museum of Contemporary Art (disambiguation), numerous museums around the world People Adam Yauch (1964–2012), a.k.a. "MCA" of the Beastie Boys Michiel van den Bos (born 1975), a.k.a. "M.C.A.", Dutch composer Chris Avellone (born 1971), a.k.a. 
"MCA", Video game designer Sports MC Alger, a football club based in Algiers, Algeria
pathway is distributed Middle cerebral artery, one of the three major blood supplies to the brain Climate Medieval Climatic Anomaly (Medieval Warm Period, also Medieval Climate Optimum), a notably warm climatic period in the North Atlantic region from about 950 to 1250. Companies MCA Inc., a now defunct company (originally called Music Corporation of America) and its subsidiary companies: MCA Records MCA Nashville Records MCA Home Video, former name of Universal Studios Home Entertainment MCA Music Inc. (Philippines), a Philippine branch of Universal Music Group which uses the MCA brand due to a trademark issue Maubeuge Construction Automobile (MCA), a subsidiary of French car manufacturer Renault Minato Communications Association, a former company name of the Japan Electronics and Information Technology Industries Association Education Degrees Master in Customs Administration, a trade-related graduate degree offered in PMI Colleges (Philippines)
different terminology and similar though diverse practices. It is also sometimes difficult to determine if an organization is sincerely practicing magic. For example, The Satanic Temple is perhaps a human rights lobby organization posing as a magical organization. 19th century The Hermetic Order of the Golden Dawn has been credited with a vast revival of occult literature and practices and was founded in 1887 or 1888 by William Wynn Westcott, Samuel Liddell MacGregor Mathers and William Robert Woodman. The teachings of the Order include ceremonial magic, Enochian magic, Christian mysticism, Qabalah, Hermeticism, the paganism of ancient Egypt, theurgy, and alchemy. Ordo Aurum Solis, founded in 1897, is a Western mystery tradition group teaching Hermetic Qabalah. Its rituals and system are different from the more popular Golden Dawn, because the group follows the ogdoadic tradition instead of Rosicrucianism. Ordo Templi Orientis (OTO) was founded by Carl Kellner in 1895. 20th century A∴A∴ was created in 1907 by Aleister Crowley and teaches "magick" and Thelema. Thelema is a religion shared by several occult organizations. The main text of Thelema is The Book of the Law.
Ordo Templi Orientis was reworked by Aleister Crowley after he took control of the Order in the early 1920s. Ecclesia Gnostica Catholica functions as the ecclesiastical arm of OTO. Builders of the Adytum (or B.O.T.A.) was created in 1922 by Paul Foster Case and was extended by Dr. Ann Davies. It teaches Hermetic Qabalah, astrology and occult tarot. In 1954, Kenneth Grant began the work of founding the New Isis Lodge, which became operational in 1955. This became the Typhonian Ordo Templi Orientis (TOTO), which was eventually renamed to Typhonian Order. In 1976, James Lees founded the order
with his father, ʿAbd al-Wahhāb, having been the Hanbali jurisconsult of the Najd and his grandfather, Sulaymān, having been a judge of Hanbali law. Early studies Ibn ʿAbd-al-Wahhab's early education was provided by his father, and consisted of learning the Quran by heart and studying a rudimentary level of Hanbali jurisprudence and Islamic theology as outlined in the works of Ibn Qudamah (d. 1223), one of the most influential medieval representatives of the Hanbali school, whose works were regarded "as having great authority" in the Najd. The affirmation of Islamic sainthood and the ability of saints to perform miracles (karamat) by the grace of God had become a major aspect of Sunni Muslim belief throughout the Islamic world, being agreed upon by the majority of classical Islamic scholars. Ibn ʿAbd-al-Wahhab had encountered various excessive beliefs and practices associated with saint-veneration and saint-cults which were prevalent in his area. He probably chose to leave Najd and look elsewhere for studies in order to see whether such beliefs and rituals were as popular in neighboring parts of the Muslim world, or perhaps because his home town offered inadequate educational resources. Even today, the reason why he left Najd is unclear. Pilgrimage to Mecca After leaving 'Uyayna, Ibn ʿAbd al-Wahhab performed the Greater Pilgrimage in Mecca, where the scholars appear to have held opinions and espoused teachings that were unpalatable to him. After this, he went to Medina, the stay at which seems to have been "decisive in shaping the later direction of his thought." In Medina, he met a Hanbali theologian from Najd named ʿAbd Allāh ibn Ibrāhīm al-Najdī, who had been a supporter of the neo-Hanbali works of Ibn Taymiyyah (d. 1328), the controversial medieval scholar whose teachings had been considered heterodox and misguided on several important points by the vast majority of Sunni Muslim scholars up to that point in history.
Tutelage under Al-Sindhi Ibn ʿAbd al-Wahhab's teacher, 'Abdallah ibn Ibrahim ibn Sayf, introduced the relatively young man to Mohammad Hayya Al-Sindhi in Medina, who belonged to the Naqshbandi order (tariqa) of Sufism, and recommended him as a student. Muhammad Ibn ʿAbd-al-Wahhab and al-Sindhi became very close, and Ibn ʿAbd-al-Wahhab stayed with him for some time. Muhammad Hayya taught Muhammad Ibn ʿAbd-al-Wahhab to reject popular religious practices associated with walis and their tombs. He also encouraged him to reject rigid imitation (Taqlid) of medieval legal commentaries and to develop individual research of scriptures (Ijtihad). Influenced by Al-Sindhi's teachings, Ibn 'Abd al-Wahhab became critical of the established Madh'hab system, prompting him to disregard the instruments of Usul al-Fiqh in his intellectual approach. Ibn 'Abd al-Wahhab rarely made use of Fiqh (Islamic jurisprudence) and various legal opinions in his writings, by and large forming views based on his direct understanding of Scripture. Apart from his emphasis on hadith studies, aversion to the madhhab system and disregard for technical juristic discussions involving legal principles, Ibn ʿAbd al-Wahhāb's views on ziyārah (visitations to the shrines of Awliyaa) were also shaped by Al-Sindhi, who encouraged his student to reject folk practices associated with graves and saints. Various themes in Al-Sindhi's writings, such as his opposition to erecting tombs and drawing human images, would be revived later by the Wahhabi movement. Al-Sindhi instilled in Ibn 'Abd al-Wahhab the belief that practices like beseeching the dead saints constituted apostasy and resembled the customs of the people of Jahiliyya (pre-Islamic era). A significant encounter between a young Ibn 'Abd al-Wahhab and Al-Sindhi is reported by the Najdi historian 'Uthman Ibn Bishr (d. 1288 A.H./1871–2 C.E.): "...
one day Shaykh Muḥammad [Ibn ‘Abdi’l-Wahhāb] stood by the chamber of the Prophet where people were calling [upon him or supplicating] and seeking help by the Prophet’s chamber, blessings and peace be upon him. He then saw Muḥammad Ḥayāt [al Sindī] and came to him. The shaykh [Ibn ‘Abdi’l-Wahhāb] asked, “What do you say about them?” He [al-Sindī] said, “Verily that in which they are engaged shall be destroyed and their acts are invalid.”" Journey to Basra Following his early education in Medina, Ibn ʿAbd-al-Wahhab traveled outside of the Arabian Peninsula, venturing first to Basra, which was still an active center of Islamic culture. During his stay in Basra, Ibn 'Abd al-Wahhab studied Hadith and Fiqh under the Islamic scholar Muhammad al-Majmu'i. In Basra, Ibn 'Abd al-Wahhab came into contact with Shi'is and would write a treatise repudiating the theological doctrines of the Rafidah, an extreme sect of Shiism. Early preaching His departure from Basra marked the end of his education, and by the time of his return to 'Uyayna, Ibn 'Abd al-Wahhab had mastered various religious disciplines such as Islamic Fiqh (jurisprudence), theology, hadith sciences and Tasawwuf. His exposure to various practices centered around the cult of saints and grave veneration would eventually lead Ibn 'Abd al-Wahhab to grow critical of superstitious Sufi accretions and practices. Rather than targeting "Sufism" as a phenomenon or a group, Ibn 'Abd al-Wahhab denounced particular practices which he considered sinful. A gifted communicator with a talent for breaking down his ideas into shorter units, Ibn 'Abd al-Wahhab entitled his treatises with terms such as qawāʿid (“principles”), masāʾil (“matters”), kalimāt (“phrases”) or uṣūl (“foundations”), simplifying his texts point by point for mass reading.
Calling upon the people to heed his call for religious revival (tajdid), based on following the founding texts and the authoritative practices of the first generations of Muslims, Ibn 'Abd al-Wahhab declared: "I do not - God be blessed - conform to any particular sufi order or faqih, nor follow the course of any speculative theologian (mutakalim) or any other Imam for that matter, not even such dignitaries as ibn al-Qayyim, al-Dhahabi, or ibn Kathir. I summon you only to God, and Only Him, as well as observe the path laid by His Prophet, God’s messenger." Ibn ʿAbd al-Wahhab's call gradually began to attract followers, including the ruler of 'Uyayna, Uthman ibn Mu'ammar. Upon returning to Huraymila, where his father had settled, Ibn ʿAbd al-Wahhab wrote his first work on the Unity of God. With Ibn Mu'ammar, Ibn ʿAbd al-Wahhab agreed to support Ibn Mu'ammar's political ambitions to expand his rule "over Najd and possibly beyond", in exchange for the ruler's support for Ibn ʿAbd al-Wahhab's religious teachings. Initially, he condemned popular folk practices prevalent in Najd on doctrinal grounds, without seeking to enforce his views in practical terms. Starting from 1742, however, Ibn 'Abd al-Wahhab shifted towards an activist stance and began to implement his reformist ideas. First, he persuaded Ibn Mu'ammar to help him level the grave of Zayd ibn al-Khattab, a companion of Muhammad, whose grave was revered by locals. Second, he ordered the cutting down of trees considered sacred by locals, felling "the most glorified of all of the trees" himself. Third, he organized the stoning of a woman who confessed to having committed adultery. These actions gained the attention of Sulaiman ibn Muhammad ibn Ghurayr of the tribe of Bani Khalid, the chief of Al-Hasa and Qatif, who held substantial influence in Najd.
Ibn Ghurayr threatened Ibn Mu'ammar with denying him the ability to collect a land tax on some properties that Ibn Mu'ammar owned in Al-Hasa if he did not kill or drive away Ibn ʿAbd al-Wahhab. Consequently, Ibn Mu'ammar forced Ibn ʿAbd al-Wahhab to leave. The early Wahhabis had been protected by Ibn Mu'ammar in 'Uyayna, despite being persecuted in other settlements. As soon as Ibn Mu'ammar disowned them, the Wahhabis were subject to excommunication (Takfir), exposing them to loss of life and property. This experience of suffering reminded them of the Mihna endured by Ahmad Ibn Hanbal and his followers, and shaped the collective Wahhabi memory. As late as 1749, the Sharif of Mecca imprisoned those Wahhabis who went to Mecca to perform the Hajj (annual pilgrimage).

Emergence of the Saudi state

Pact with Muhammad bin Saud

Upon his expulsion from 'Uyayna, Ibn ʿAbd al-Wahhab was invited to settle in neighboring Diriyah by its ruler, Muhammad ibn Saud Al Muqrin. After some time in Diriyah, Ibn ʿAbd al-Wahhab concluded his second and more successful agreement with a ruler. Ibn ʿAbd al-Wahhab and Muhammad bin Saud agreed that, together, they would bring the Arabs of the peninsula back to the "true" principles of Islam as they saw it. According to the anonymous author of Lam al-Shihab (Brilliance of the Meteor), when they first met, Ibn Saud declared: "This oasis is yours, do not fear your enemies. By the name of God, if all Nejd was summoned to throw you out, we will never agree to expel you." Muhammad ibn ʿAbd al-Wahhab replied: "You are the settlement's chief and wise man. I want you to grant me an oath that you will perform jihad against the unbelievers. In return, you will be imam, leader of the Muslim community, and I will be leader in religious matters." The agreement was confirmed with a mutual oath of loyalty (bay'ah) in 1744. Once Al-Sa'ud made Dir'iyya a safe haven, Wahhabis from other towns took refuge there.
These included dissenters from the Ibn Mu'ammar clan who had sworn allegiance to Ibn 'Abd al-Wahhab. The nucleus of Ibn 'Abd al-Wahhab's supporters from all across Najd retreated to Dir'iyyah and formed the vanguard of the insurgency launched by Al-Saud against other towns. From a lone activist at the start of his career, Ibn 'Abd al-Wahhab would become the spiritual guide of the nascent Emirate of Muhammad ibn Saud Al-Muqrin. Ibn 'Abd al-Wahhab would be responsible for religious matters and Ibn Saud for political and military issues. This agreement became a "mutual support pact" and power-sharing arrangement between the Aal Saud family and the Aal ash-Sheikh and followers of Ibn ʿAbd al-Wahhab, which has remained in place for nearly 300 years, providing the ideological impetus to Saudi expansion. Reviving the teachings of Ibn Taymiyya, the Muwaḥḥidūn ("Unitarian") movement emphasized strict adherence to the Qur'an and Sunnah, while simultaneously championing the conception of an Islamic state based on the model of the early Muslim community in Medina. Meanwhile, its Muslim and Western opponents derogatorily labelled the movement the "Wahhābiyyah" (anglicised as "Wahhabism").

Emirate of Diriyah (First Saudi State)

The 1744 pact between Muhammad ibn Saud and Muhammad ibn ʿAbd al-Wahhab marked the emergence of the first Saudi state, the Emirate of Diriyah. By offering the Aal Saud a clearly defined religious mission, the alliance provided the ideological impetus to Saudi expansion. Drawing on his bitter experiences in 'Uyaynah, Ibn 'Abd al-Wahhab had understood the necessity of political backing from a strong Islamic political entity to transform the local socio-religious status quo and to safeguard Wahhabism's territorial base from external pressure. After consolidating his position in Diriyah, he wrote to the rulers and clerics of other towns, appealing to them to embrace his doctrines.
While some heeded his calls, others rejected them, accusing him of ignorance or sorcery.

War with Riyadh (1746-1773)

Realising the significance of efficient religious preaching (da'wa), Ibn 'Abd al-Wahhab called upon his students to master the path of reasoning and proselytising, rather than warfare, to convince other Muslims of their reformist endeavour. Between 1744 and 1746, Ibn 'Abd al-Wahhab's preaching continued in the same non-violent manner as before and spread widely among the people of Najd. Rulers of various towns across Najd pledged their allegiance to Ibn Suʿūd. This situation changed drastically around 1158/1746, when the powerful anti-Wahhabi chieftain of Riyadh, Dahhām ibn Dawwās (fl. 1187/1773), attacked the town of Manfuha, which had pledged allegiance to Diriyah. This would spark a nearly 30-year-long war between Diriyah and Riyadh, which lasted until 1187/1773, barring some interruptions. First conquering Najd, Muhammad ibn Saud's forces expanded Wahhabi influence to most of the present-day territory of Saudi Arabia, eradicating various popular practices they viewed as akin to polytheism and propagating the doctrines of ʿAbd al-Wahhab. Muhammad Ibn ʿAbd al-Wahhāb maintained that the military campaigns of the Emirate of Dirʿiyya were strictly defensive and rebuked his opponents as being the first to initiate Takfir (excommunication). Ibn 'Abd al-Wahhab defined jihad as an activity that must have a valid religious justification, that can only be declared by an Imam, and whose purpose must be strictly defensive in nature. Justifying the Wahhabi military campaigns as defensive operations against their enemies, Ibn 'Abd al-Wahhab asserted: "As for warfare, until today, we did not fight anyone, except in defense of our lives and honor. They came to us in our area and did not spare any effort in fighting us. We only initiated fighting against some of them in retaliation for their continued aggression, [The recompense for an evil is an evil like thereof] (42:40)...
they are the ones who started declaring us to be unbelievers and fighting us."

Rebellion in Huraymila (1752-1755)

In 1753–54, the Wahhabis were confronted by an alarming number of towns renouncing their allegiance and aligning with their opponents. Most prominent amongst these was the town of Huraymila, which had pledged allegiance to Dir'iyah in 1747. By 1752, however, a group of rebels encouraged by Ibn ʿAbd al-Wahhāb’s brother, Sulaymān, had staged a coup in Huraymila and installed a new ruler who threatened to topple the Wahhābī order. A fierce war of unprecedented magnitude began between Diriyah and Huraymila. Ibn ‘Abd al-Wahhab held a convocation of Wahhabis from all the settlements across Najd. Reviewing the recent desertions and defeats, he encouraged them to hold fast to their faith and recommit to the struggle. The ensuing battles and the recapture of Huraymila in 1168/1755 constituted a significant development in the Wahhabi expansionist stage. Abd al-Azeez, the son of Muhammad ibn Saud, emerged as the principal leader of the Wahhabi military operations. With a force of 800 men, accompanied by an additional 200 under the command of the deposed ruler of Huraymila, Abd al-Azeez was able to subdue the rebels. More significantly, the rationale behind the campaign was based on Ibn ʿAbd al-Wahhāb’s newly written epistle Mufīd al-mustafīd, which marked a shift from the earlier posture of defensive jihad towards justifying a more aggressive one. In the treatise, compiled to justify the jihad pursued by Dir'iyyah and its allies, Ibn 'Abd al-Wahhab excommunicated the inhabitants of Huraymila and declared it a duty of Wahhabi soldiers to fight them as apostates. He also quoted several Qur'anic verses indicative of offensive forms of jihād.

Capture of Riyadh and Retirement (1773)

The last period of serious threat to the Saudi state came in 1764-1765.
During this period, the Ismāʿīlī Shīʿa of Najrān, alongside their allied tribe of 'Ujman, combined forces to inflict a major defeat on the Saudis at the Battle of Hair in October 1764, killing around 500 men. The anti-Wahhabi forces allied with the invaders and participated in the combined siege of Dirʿiyya. However, the defenders were able to hold onto their town due to the unexpected departure of the Najranis after a truce concluded with the Saudis. A decade later, in 1773–74, 'Abd al-Azeez conquered Riyadh and secured the entirety of al-ʿĀriḍ after its chieftain Dahham ibn Dawwas fled. By 1776/77, Sulayman ibn Abd al-Wahhab had surrendered. The capture of Riyadh marked the point at which Muhammad Ibn ‘Abd al-Wahhab delegated all affairs of governing to 'Abd al-Azeez, withdrew from public life, and devoted himself to teaching, preaching and worship. Meanwhile, 'Abd al-Azeez proceeded with his military campaigns, conquering towns such as Sudayr (1196/1781) and al-Kharj (1199/1784). Opposition in towns to the north, such as al-Qaṣīm, was stamped out by 1196/1781, and the rebels in ʿUnayza were subdued by 1202/1787. Further north, the town of Ḥāʾil was captured in 1201/1786, and by the 1780s the Wahhābīs were able to establish their jurisdiction over most of Najd.

Death

After his departure from public affairs, Ibn 'Abd al-Wahhab remained a consultant to 'Abd al-Azeez, who followed his recommendations. However, he withdrew from the active military and political activities of the Emirate and devoted himself to educational endeavours, preaching and worship. His last major act in state affairs was in 1202/1787, when he called on the people to give bay'ah (allegiance) to Suʿūd, ʿAbd al-ʿAzīz’s son, as heir apparent. Muhammad ibn 'Abd al-Wahhab fell ill and died in June 1792 CE, in the lunar month of Dhul-Qa'dah 1206 AH, at the age of eighty-nine. He was buried in an unmarked grave at al-Turayf in al-Dir‘iyya.
Ibn 'Abd al-Wahhab left behind four daughters and six sons, many of whom became clerics of greater or lesser distinction. A clear separation
our enjoining Tawheed and forbidding Shirk... Among the false accusations they propagated, ... is the claim that I accuse all Muslims, except my followers, of being Kuffar (Unbelievers)... This is truly incredible. How can any sane person accept such accusations? Would a Muslim say these things? I declare that I renounce, before Allah, these statements that only a mad person would utter. In short, what I was accused of calling to, other than enjoining Tawheed and forbidding Shirk, is all false."

On Taqlid

Muhammad ibn 'Abd al-Wahhab was highly critical of the practice of Taqlid (blind following), which in his view led people away from the Qur'an and Sunnah. He also advocated Ijtihad by qualified scholars in accordance with the teachings of the Qur'an and Hadith. In his legal writings, Ibn 'Abd al-Wahhab referred to a number of sources: the Qur'an, hadith, the opinions of the companions and the Salaf, as well as the treatises of the four schools of thought. Ibn 'Abd al-Wahhab argued that the Qur'an condemned blind emulation of forefathers and nowhere stipulated scholarly credentials for a person to refer to it directly. His advocacy of Ijtihad and harsh denunciation of Taqlid aroused widespread condemnation from the Sufi orthodoxy in Najd and beyond, compelling him to express many of his legal verdicts (fatwas) discreetly, using convincing juristic terms. He differed from the Hanbali school on various points of law and, in some cases, also departed from the positions of all four schools. In his treatise Usul al-Sittah (Six Foundations), Ibn 'Abd al-Wahhab vehemently rebuked his detractors for raising the description of Mujtahids to what he viewed as humanly unattainable levels. He condemned the establishment clergy as a class of oppressors who ran a "tyranny of worldly possessions" by exploiting the masses to make money from their religious activities. The teachings of the Medinan hadith scholar Muhammad Hayat al-Sindi strongly influenced the anti-taqlid views of Ibn 'Abd al-Wahhab.
Muhammad Ibn Abd al-Wahhab opposed partisanship to madhabs (legal schools) and did not consider it obligatory to follow a particular madhab. Rather, in his view, the obligation is to follow the Qur'an and the Sunnah. Referring to the classical scholars Ibn Taymiyya and Ibn Qayyim, Ibn 'Abd al-Wahhab condemned the practice, prevalent amongst his contemporary scholars, of blindly following latter-day legal works, and urged Muslims to take directly from the Qur'an and Sunnah. He viewed it as a duty upon every Muslim, layman and scholar, male and female, to seek knowledge directly from the sources. Radically departing from both Ibn Taymiyya and Ibn Qayyim, Ibn 'Abd al-Wahhab viewed the entirety of the prevalent madhab system of jurisprudence (Fiqh) as a fundamentally corrupt institution, sought a radical reform of scholarly institutions, and preached the obligation of all Muslims to refer directly to the foundational texts of revelation. He advocated a form of scholarly authority based upon the revival of the practice of ittiba, i.e., laymen following the scholars only after seeking evidence. The prevalent legal system was, in his view, a "factory for the production of slavish emulators", symbolic of Muslim decline.

On the Nature of Nubuwwah (Prophethood)

Muhammad Ibn 'Abd al-Wahhab elucidated his concept of the nature of Prophethood in his book Mukhtaṣar sīrat al-Rasūl ("Abridgement of the Life of the Prophet"), an extensive biographical work on the Prophet Muhammad. The Mukhtaṣar was written with the purpose of explaining Muhammad's role in universal history by undermining certain prophetological conceptions that had come to prominence among Sunnī religious circles during the twelfth Islamic century. These included negating those concepts and beliefs that bestowed the Prophet with mystical attributes elevating Muhammad beyond the status of ordinary humans.
In his introduction to the Mukhtasar, Ibn 'Abd al-Wahhab asserts that every Prophet came with the mission of upholding Tawhid and prohibiting shirk. Ibn 'Abd al-Wahhab further tries to undermine the belief in the pre-existence of Muḥammad as a divine light preceding all other creation, a salient concept that served as an aspect of Prophetic devotion during the eleventh Islamic century. Additionally, Ibn ʿAbd al-Wahhāb omitted episodes narrated in various sirah (Prophetic biography) works, such as trees and stones allegedly expressing veneration for Muḥammad or the purification of Muhammad's heart by angels, which suggested that Muḥammad possessed characteristics transcending those of ordinary humans. Ibn 'Abd al-Wahhab adhered to Ibn Taymiyya's understanding of the concept of Isma (infallibility), which insisted that ʿiṣma does not prevent prophets from committing minor sins or speaking false things. This differed from the alternative understanding of Sunni theologians like Fakhr al-Dīn al-Rāzi and Qāḍī ʿIyāḍ, who had emphasised the complete freedom of the Prophet from any form of error or sin. Following Ibn Taymiyya, Muhammad ibn 'Abd al-Wahhab affirmed the incident of qiṣṣat al-gharānīq (the "story of the cranes" or "Satanic Verses"), which demonstrated that Muhammad was afflicted by "Satanic interference". This idea of Ibn Taymiyya had recently been revived in the circles of the Kurdish hadith scholar Ibrāhīm al-Kūrānī (1025/1616–1101/1686), whose son Abūl-Ṭāhir al-Kūrānī was the teacher of Muḥammad Ḥayāt al-Sindi, the master of Ibn 'Abd al-Wahhab. Using this concept to explain Tawhid al-ulūhiyya (Oneness of Worship), Ibn 'Abd al-Wahhab rejected the idea that anybody could act as intercessor between God and man, employing the Qurʾānic verses related to the event. He also used these and other similar incidents to undermine the belief that prophets are completely free from sin, error, or Satanic afflictions.
Furthermore, Ibn 'Abd al-Wahhab gave little importance to Prophetic miracles in his Mukhtaṣar. Although he did not deny miracles as an expression of Divine Omnipotence, so long as they are attested by the Qur'an or authentic hadith, the Mukhtasar represented an open protest against the exuberance of miracles that characterised later biographies of Muḥammad. In Ibn 'Abd al-Wahhab's view, miracles are of little significance in the life of Muḥammad in comparison to the lives of the previous prophets, since central to his prophethood were the institutionalisation of Jihād and the ḥudud punishments. Contrary to prevalent religious beliefs, Muḥammad is not portrayed as the central purpose of creation in the historical conception of the Mukhtaṣar. Instead, he has a function within creation and for the created beings. Rather than being viewed as an extraordinary performer of miracles, Muhammad should instead be upheld as a model for emulation. By depriving the person of Muḥammad of all supernatural aspects not related to Wahy (revelation) and Divine intervention, Ibn 'Abd al-Wahhab also reinforced his rejection of beliefs and practices related to the cult of saints and the veneration of graves. Thus, Ibn ʿAbd al-Wahhāb’s conception of history emphasised the necessity of following the role model of Muḥammad and re-establishing the Islamic order.

Influence on Salafism

Ibn ʿAbd al-Wahhab's movement is known today as Wahhabism. The designation of his doctrine as Wahhābiyyah actually derives from his father's name, ʿAbd al-Wahhab. Many adherents consider the label "Wahhabism" a derogatory term coined by his opponents, and prefer the movement to be known as the Salafi movement. Modern scholars of Islamic studies point out that "Salafism" is a term applied to several forms of puritanical Islam in various parts of the world, while Wahhabism refers to the specific Saudi school, which is seen as a stricter form of Salafism.
However, modern scholars remark that Ibn 'Abd al-Wahhab's followers adopted the term "Salafi" as a self-designation only much later. His early followers called themselves Ahl al-Tawhid and al-Muwahhidun ("Unitarians" or "those who affirm/defend the unity of God"), and were labeled "Wahhabis" by their opponents. The Salafiyya movement was not directly connected to Ibn 'Abd al-Wahhab's movement in Najd. According to professor Abdullah Saeed, Ibn ʿAbd al-Wahhab should rather be considered one of the "precursors" of the modern Salafiyya movement, since he called for a return to the pristine purity of the early eras of Islam through adherence to the Qur'an and the Sunnah, rejected the blind following (Taqlid) of earlier scholars, and advocated Ijtihad. Scholars like Adam J. Silverstein consider the Wahhabi movement "the most influential expression of Salafism of the Islamist sort, both for its role in shaping (some might say: 'creating') modern Islamism, and for disseminating salafi ideas widely across the Muslim world."

On Islamic Revival

As a young scholar in Medina, Muhammad Ibn 'Abd al-Wahhab was profoundly influenced by the revivalist doctrines taught by his teachers Muhammad Hayyat ibn Ibrahim al-Sindhi and Abdullah Ibn Ibrahim Ibn Sayf. Much of Wahhabi teaching, such as opposition to saint-cults, radical denunciation of the blind following of medieval commentaries, adherence to the Scriptures, and other revivalist thought, came from Muhammad Hayyat. Ibn Abd al-Wahhab's revivalist efforts were based on a strong belief in Tawhid (the Oneness of Allah) and a firm adherence to the Sunnah. His reformative efforts left exemplary marks on contemporary Islamic scholarship. Viewing blind adherence (Taqlid) as an obstacle to the progress of Muslims, he dedicated himself to educating the masses so that they could be vanguards of Islam.
According to Ibn Abd al-Wahhab, the degradation and backwardness of Muslims was due to their neglect of the teachings of Islam; he emphasized that progress could be achieved only by firmly adhering to Islam. He also campaigned against popular Sufi practices associated with istigatha, myths and superstitions.

On Sufism

Ibn ʿAbd al-Wahhab praised Tasawwuf. He cited the popular saying: "From among the wonders is to find a Sufi who is a faqih and a scholar who is an ascetic (zahid)". He described Tasawwuf as "the science of the deeds of the heart, which is known as the science of Suluk", and considered it an important branch of the Islamic religious sciences. At the end of his treatise Al-Hadiyyah al-Suniyyah, Ibn ʿAbd al-Wahhab's son 'Abd Allah speaks positively of the practice of tazkiah (purification of the inner self). 'Abd Allah Ibn ʿAbd al-Wahhab ends his treatise saying:

We do not negate the way of the Sufis and the purification of the inner self from the vices of those sins connected to the heart and the limbs as long as the individual firmly adheres to the rules of Shari‘ah and the correct and observed way. However, we will not take it on ourselves to allegorically interpret (ta’wil) his speech and his actions. We only place our reliance on, seek help from, beseech aid from and place our confidence in all our dealings in Allah Most High. He is enough for us, the best trustee, the best mawla and the best helper. May Allah send peace on our master Muhammad, his family and companions.

On Social Reforms

Muhammad ibn 'Abd al-Wahhab concerned himself with the social reformation of his people. As an 18th-century reformer, he called for the reopening of Ijtihad by qualified persons, through strict adherence to the Scriptures, in reforming society. His thought reflected the major trends apparent in the 18th-century Islamic reform movements.
Unlike other reform movements, which were restricted to da'wa, Ibn 'Abd al-Wahhab was also able to transform his movement into a successful Islamic state. Thus, his teachings have had a profound influence on the majority of Islamic reform-revivalist movements since the 18th century. Numerous significant socio-economic reforms were advocated by the Imam during his lifetime. His reforms touched upon various fields such as aqeeda, ibaadat (ritual acts of worship) and muamalaat (social interactions). In the affairs of mu'amalat, he harshly rebuked the practice of leaving endowments to prevent the rightful heirs (particularly females) from receiving their deserved inheritance. He also objected to various forms of riba (usury), as well as the practice of presenting judges with gifts, which according to him was nothing more than bribery. He also opposed and brought an end to numerous un-Islamic taxes that had been forced upon the people. The legal writings of Ibn 'Abd al-Wahhab reflected a general concern for female welfare and gender justice. In line with this approach, Ibn 'Abd al-Wahhab denounced the practice of instant triple talaq, counting it as only a single talaq (regardless of the number of pronouncements). The outlawing of triple talaq is considered to be one of the most significant reforms across the Islamic world in the 20th and 21st centuries. Following a balanced approach in issues of gender, Ibn 'Abd al-Wahhab advocated moderation between men and women in social interactions as well as in spirituality. According to Ibn 'Abd al-Wahhab, a woman has a place in society with both rights and responsibilities, and society is obliged to respect her status and protect her. He also condemned forced marriages and declared any marriage contracted without the consent of the woman (be she minor, virgin or non-virgin) to be "invalid".
This too was a significant reform, as well as a break from the four Sunni schools, which allowed the wali (guardian) to compel minor daughters into marriage without their consent. Ibn 'Abd al-Wahhab also stipulated the permission of the guardian as a condition of marriage (in line with the traditional Hanbali, Shafi'i and Maliki schools). Nevertheless, as a practical jurist, Ibn 'Abd al-Wahhab allowed a guardian to delegate the right to contract a marriage to the woman herself, after which his permission could not be withheld. He also allowed women the right to stipulate favourable conditions for themselves in the marriage contract. Ibn 'Abd al-Wahhab likewise defended a woman's right to divorce through Khul' for various reasons, including cases in which she despised her husband. He also prohibited the killing in warfare of women, children and various non-combatants such as monks, the elderly, the blind, shaykhs, slaves and peasants.

On Muslim saints

Ibn ʿAbd al-Wahhab strongly condemned the veneration of Muslim saints (which he described as worship) and the attribution of divinity to beings other than God, labeling it shirk. Despite his great aversion to venerating the saints after their earthly passing and seeking their intercession, it should nevertheless be noted that Muhammad ibn ʿAbd al-Wahhab did not deny the existence of saints as such; on the contrary, he acknowledged that "the miracles of saints (karāmāt al-awliyāʾ) are not to be denied, and their right guidance by God is acknowledged" when they acted properly during their lives. Muhammad ibn Abd al-Wahhab opposed the practice of pilgrimage to saints' tombs, which he considered Bidʻah (heretical innovation), such as the pilgrimage to a tomb believed to belong to a companion of the Prophet named Dhiraar ibn al-Azwar in the valley of Ghobaira.
On Non-Muslims

According to the political scientist Dore Gold, Muhammad ibn ʿAbd al-Wahhab presented a strongly anti-Christian and anti-Judaic stance in his main theological treatise Kitāb at-Tawḥīd, describing the followers of both the Christian and Jewish faiths as sorcerers who believe in devil-worship, and, citing a hadith attributed to the Islamic prophet Muhammad, he stated that the capital punishment for the sorcerer is "that he be struck with the sword". Ibn ʿAbd al-Wahhab asserted that both the Christian and Jewish religions had improperly made the graves of their prophets into places of worship and warned Muslims not to imitate this practice. Ibn ʿAbd al-Wahhab concluded that "The ways of the People of the Book are condemned as those of polytheists." However, the Western scholar Natana J. DeLong-Bas defended the position of Muhammad ibn ʿAbd al-Wahhab, stating that despite his at times vehement denunciations of other religious groups for their supposedly heretical beliefs, Ibn Abd al Wahhab never called for their destruction or death … he assumed that these people would be punished in the Afterlife …" According to Vahid Hussein Ranjbar, "Muhammad ibn ʿAbd al-Wahhab saw it as his mission to restore a more purer and original form of the faith of Islam". In accordance with his own theology, which upheld a strict doctrine of tawhid (oneness of God), Ibn ʿAbd al-Wahhab condemned the veneration of any personality other than God and sought the demolition of the tombs of Muslim saints (awliya). Those who did not adhere to his interpretation of monotheism, including Sufi and Shia Muslims, Christians, Jews, and other non-Muslims, were considered disbelieving polytheists. He also advocated for
land features. Much of Maine's geomorphology was created by extended glacial activity at the end of the last ice age. Prominent glacial features include Somes Sound and Bubble Rock, both part of Acadia National Park on Mount Desert Island. Carved by glaciers, Somes Sound is considered to be the only fjord on the eastern seaboard and reaches depths of . The extreme depth and steep drop-off allow large ships to navigate almost the entire length of the sound. These features have also made it attractive to boat builders, such as the prestigious Hinckley Yachts. Bubble Rock, a glacial erratic, is a large boulder perched on the edge of Bubble Mountain in Acadia National Park. By analyzing the type of granite, geologists discovered that glaciers carried Bubble Rock to its present location from near Lucerne, away. The Iapetus Suture runs through the north and west of the state, which are underlain by the ancient Laurentian terrane, while the south and east are underlain by the Avalonian terrane. Acadia National Park is the only national park in New England.

Areas under the protection and management of the National Park Service include:

Acadia National Park near Bar Harbor
Appalachian National Scenic Trail
Maine Acadian Culture in St. John Valley
Roosevelt Campobello International Park on Campobello Island in New Brunswick, Canada, operated by both the U.S. and Canada, just across the Franklin Delano Roosevelt Bridge from Lubec
Saint Croix Island International Historic Site at Calais
Katahdin Woods and Waters National Monument

Lands under the control of the state of Maine include:

Maine State Parks
Maine Wildlife Management Areas (WMA)

Climate

Maine has a humid continental climate (Köppen climate classification Dfb), with warm and sometimes humid summers, and long, cold and very snowy winters.
Winters are especially severe in the northern and western parts of Maine, while coastal areas are moderated slightly by the Atlantic Ocean, resulting in marginally milder winters and cooler summers than inland regions. Daytime highs are generally in the range throughout the state in July, with overnight lows in the high 50s°F (around 15°C). January temperatures range from highs near on the southern coast to overnight lows averaging below in the far north. The state's record high temperature is , set in July 1911, at North Bridgton. Precipitation in Maine is evenly distributed year-round, but with a slight summer maximum in northern/northwestern Maine and a slight late-fall or early-winter maximum along the coast due to "nor'easters", or intense cold-season rain and snowstorms. In coastal Maine, the late spring and summer months are usually the driest, a rarity across the Eastern United States. Maine has fewer days of thunderstorms than any other state east of the Rockies, with most of the state averaging fewer than twenty days of thunderstorms a year. Tornadoes are rare in Maine, with the state averaging fewer than four per year, although this number is increasing. Most severe thunderstorms and tornadoes occur in the Sebago Lakes and Foothills region of the state. Maine rarely sees the effects of tropical cyclones, as they tend to pass well east and south or are greatly weakened by the time they reach Maine. In January 2009, a new record low temperature for the state was set at Big Black River, of , tying the New England record. Annual precipitation varies from in Presque Isle to in Acadia National Park.

Demographics

Population

The United States Census Bureau estimated the population of Maine at 1,344,212 on July 1, 2019, a 1.19% increase since the 2010 United States census. At the 2020 census, 1,362,359 people lived in the state. The state's population density is 41.3 people per square mile, making it the least densely populated state east of the Mississippi River.
As of 2010, Maine was also the most rural state in the Union, with only 38.7% of the state's population living within urban areas. As explained in detail under "Geography", there are large tracts of uninhabited land in some remote parts of the interior of the state, particularly in the North Maine Woods. The mean population center of Maine is located in Kennebec County, just east of Augusta. The Greater Portland metropolitan area is the most densely populated with nearly 40% of Maine's population. This area spans three counties and includes many farms and wooded areas; the 2016 population of Portland proper was 66,937. Maine has experienced a very slow rate of population growth since the 1990 census; its rate of growth (0.57%) since the 2010 census ranks 45th of the 50 states. The modest population growth in the state has been concentrated in the southern coastal counties; with more diverse populations slowly moving into these areas of the state. However, the northern, more rural areas of the state have experienced a slight decline in population in recent years. According to the 2010 Census, Maine has the highest percentage of non-Hispanic whites of any state, at 94.4% of the total population. In 2011, 89.0% of all births in the state were to non-Hispanic white parents. Maine also has the second-highest residential senior population. The table below shows the racial composition of Maine's population as of 2016. According to the 2016 American Community Survey, 1.5% of Maine's population were of Hispanic or Latino origin (of any race): Mexican (0.4%), Puerto Rican (0.4%), Cuban (0.1%), and other Hispanic or Latino origin (0.6%). The five largest ancestry groups were: English (20.7%), Irish (17.3%), French (15.7%), German (8.1%), and American (7.8%). People citing that they are American are of overwhelmingly English descent, but have ancestry that has been in the region for so long (often since the 17th century) that they choose to identify simply as Americans. 
Maine has the highest percentage of French Americans of any state. Most are of Canadian origin, but some families have lived in Maine since before the American Revolutionary War. Concentrations are particularly high in northern Maine's Aroostook County, which is part of a cultural region known as Acadia that extends across the border into New Brunswick. Along with the Acadian population in the north, many French Canadians immigrated from Quebec between 1840 and 1930. The upper Saint John River valley was once part of the so-called Republic of Madawaska, before the frontier was settled by the Webster–Ashburton Treaty of 1842. Over a quarter of the population of Lewiston, Waterville, and Biddeford is Franco-American. Most residents of the Mid Coast and Down East sections are chiefly of British heritage. Smaller numbers of various other groups, including Irish, Italian, and Polish, have settled throughout the state since the immigration waves of the late 19th and early 20th centuries.

Birth data

Note: Births in the table do not sum to 100% because Hispanics are counted both by their ethnicity and by their race. Since 2016, data for births of White Hispanic origin have not been collected separately but are included in one Hispanic group; persons of Hispanic origin may be of any race.

Language

Maine does not have an official language, but the most widely spoken language in the state is English. The 2000 Census reported that 92.25% of Maine residents aged five and older spoke only English at home. French speakers are the state's chief linguistic minority; census figures show that Maine has the highest percentage of people speaking French at home of any state: 5.28% of Maine households are French-speaking, compared with 4.68% in Louisiana, the second-highest state. Although rarely spoken, Spanish is the third-most-common language in Maine, after English and French.
Religion

According to the Association of Religion Data Archives (ARDA), the religious affiliations of Maine in 2010 were: Protestant 37%, Evangelical Protestant 4%, unclaimed 31%, Catholic 28%, and other religions 1.7%. Non-Christian religions practiced in the state include Hinduism, Islam, Buddhism, and the Baháʼí Faith. The Catholic Church was the largest religious institution, with 202,106 members; the United Methodist Church had 28,329 members, and the United Church of Christ had 22,747 members. In 2010, a study named Maine the least religious state in the United States.

Economy

Total employment (2016): 511,936. Total employer establishments (2016): 41,178. The Bureau of Economic Analysis estimates that Maine's total gross state product for 2010 was $52 billion. Its per capita personal income for 2007 was $33,991, 34th in the nation. Maine's unemployment rate is 3.0%. Maine's agricultural outputs include poultry, eggs, dairy products, cattle, wild blueberries, apples, maple syrup, and maple sugar. Aroostook County is known for its potato crops. Commercial fishing, once a mainstay of the state's economy, maintains a presence, particularly lobstering and groundfishing. While lobster is the main seafood focus for Maine, harvests of both oysters and seaweed are on the rise. In 2015, 14% of the Northeast's total oyster supply came from Maine. In 2017, the output of Maine's seaweed industry was estimated at $20 million per year. Maine's shrimp industry is on a government-mandated hold: with an ever-decreasing northern shrimp population, Maine fishermen are no longer allowed to catch and sell shrimp. The hold began in 2014 and is expected to continue until 2021. Western Maine aquifers and springs are a major source of bottled water. Maine's industrial outputs consist chiefly of paper, lumber and wood products, electronic equipment, leather products, food products, textiles, and biotechnology.
Naval shipbuilding and construction remain key as well, with Bath Iron Works in Bath and Portsmouth Naval Shipyard in Kittery. Brunswick Landing, formerly Naval Air Station Brunswick, is also in Maine. Formerly a large support base for the U.S. Navy, the air station was closed under the BRAC process despite a government-funded effort to upgrade its facilities; the former base has since been converted into a civilian business park as well as a new satellite campus for Southern Maine Community College. Maine is the number-one U.S. producer of low-bush blueberries (Vaccinium angustifolium). Preliminary USDA data for 2012 also indicate that Maine was the largest producer among the major blueberry-producing states, with 91,100,000 lbs; these figures include both low-bush (wild) and high-bush (cultivated, Vaccinium corymbosum) blueberries. The largest toothpick manufacturing plant in the United States was once located in Strong, Maine; the Strong Wood Products plant produced 20 million toothpicks a day until it closed in May 2003. Tourism and outdoor recreation play a major and increasingly important role in Maine's economy. The state is a popular destination for sport hunting (particularly deer, moose, and bear), sport fishing, snowmobiling, skiing, boating, camping, and hiking, among other activities. Alongside the tourism- and recreation-oriented economy, Maine has developed a burgeoning creative economy, most notably centered in the Greater Portland area. Historically, Maine ports played a key role in national transportation. Beginning around 1880, Portland's rail link and ice-free port made it Canada's principal winter port, until the aggressive development of Halifax, Nova Scotia, in the mid-20th century. In 2013, 12,039,600 short tons passed into and out of Portland by sea, placing it 45th among U.S. water ports.
Portland International Jetport has been expanded, providing the state with increased air traffic from carriers such as JetBlue and Southwest Airlines. Maine has very few large companies that maintain headquarters in the state, and that number has fallen due to consolidations and mergers, particularly in the pulp and paper industry. Some of the larger companies that do maintain headquarters in Maine include Covetrus in Portland, Fairchild Semiconductor in South Portland, IDEXX Laboratories in Westbrook, Hannaford Bros. Co. in Scarborough, TD Bank in Portland, and L.L.Bean in Freeport. Maine is also home to the Jackson Laboratory, the world's largest non-profit mammalian genetic research facility and the world's largest supplier of genetically purebred mice.

Taxation

Maine has an income tax structure with two brackets, at 6.5 and 7.95 percent of personal income. Before July 2013, Maine had four brackets: 2, 4.5, 7, and 8.5 percent. Maine's general sales tax rate is 5.5 percent. The state also levies charges of nine percent on lodging and prepared food and ten percent on short-term auto rentals. Commercial sellers of blueberries, a Maine staple, must keep records of their transactions and pay the state 1.5 cents per pound ($1.50 per 100 pounds) of the fruit sold each season. All real and tangible personal property located in the state of Maine is taxable unless specifically exempted by statute. The administration of property taxes is handled by the local assessor in incorporated cities and towns, while property taxes in the unorganized territories are handled by the State Tax Assessor.

Shipbuilding

Maine has a long-standing tradition of being home to many shipbuilding companies. In the 18th and 19th centuries, Maine was home to many shipyards that produced wooden sailing ships, whose main function was to carry cargo or passengers overseas. One of these yards was located in the Pennellville Historic District in what is now Brunswick, Maine.
This yard, owned by the Pennell family, was typical of the many family-owned shipbuilding companies of the period; other examples of shipbuilding families were the Skolfields and the Morses. During the 18th and 19th centuries, wooden shipbuilding of this sort made up a sizable portion of the economy.

Transportation

Airports

Maine receives passenger jet service at its two largest airports, the Portland International Jetport in Portland and the Bangor International Airport in Bangor. Both are served daily by many major airlines flying to destinations such as New York, Atlanta, and Orlando. Essential Air Service also subsidizes service to a number of smaller airports in Maine, bringing small turboprop aircraft to regional airports such as the Augusta State Airport, Hancock County–Bar Harbor Airport, Knox County Regional Airport, and the Northern Maine Regional Airport at Presque Isle. These airports are served by regional providers such as Cape Air, with Cessna 402s, and CommutAir, with Embraer ERJ 145 aircraft. Many smaller airports scattered throughout Maine serve only general aviation traffic. The Eastport Municipal Airport, for example, is a city-owned public-use airport with 1,200 general aviation aircraft operations each year from single-engine and ultralight aircraft.

Highways

Interstate 95 (I-95) travels through Maine, as do its easterly branch I-295 and spurs I-195, I-395, and the unsigned I-495 (the Falmouth Spur). In addition, U.S. Route 1 (US 1) starts in Fort Kent and travels to Florida. The eastern section of US 2 runs from Houlton, near the New Brunswick, Canada, border, to Rouses Point, New York, where it ends at US 11. US 2A connects Old Town and Orono, primarily serving the University of Maine campus. US 201 and US 202 also pass through the state.
US 2, Maine State Route 6 (SR 6), and SR 9 are often used by truckers and other motorists from the Maritime Provinces en route to other destinations in the United States, or as a shortcut to Central Canada.

Rail

Passenger

The Downeaster passenger train, operated by Amtrak, provides service between Brunswick and Boston's North Station, with stops in Freeport, Portland, Old Orchard Beach, Saco, and Wells. The Downeaster makes five daily trips.

Freight

Freight service throughout the state is provided by a handful of regional and shortline carriers: Pan Am Railways (formerly known as Guilford Rail System), which operates the former Boston & Maine and Maine Central railroads; the St. Lawrence and Atlantic Railroad; the Maine Eastern Railroad; the Central Maine and Quebec Railway; and the New Brunswick Southern Railway.

Law and government

The Maine Constitution structures Maine's state government, which is composed of three co-equal branches: the executive, legislative, and judicial. The state also has three constitutional officers (the Secretary of State, the State Treasurer, and the State Attorney General) and one statutory officer (the State Auditor). The legislative branch is the Maine Legislature, a bicameral body composed of the Maine House of Representatives, with 151 members, and the Maine Senate, with 35 members. The Legislature is charged with introducing and passing laws. The executive branch is responsible for executing the laws created by the Legislature and is headed by the Governor of Maine (currently Janet Mills). The Governor is elected every four years; no individual may serve more than two consecutive terms in this office. The current attorney general of Maine is Aaron Frey. As with other state legislatures, the Maine Legislature can override a gubernatorial veto by a two-thirds majority vote of both the House and Senate. Maine is one of seven states without a lieutenant governor.
The judicial branch is responsible for interpreting state laws. The highest court of the state is the Maine Supreme Judicial Court; the lower courts are the District Court, Superior Court, and Probate Court. All judges except probate judges serve full-time and are nominated by the Governor and confirmed by the Legislature for seven-year terms. Probate judges serve part-time and are elected by the voters of each county for four-year terms. In a 2020 study, Maine was ranked the 14th-easiest state for citizens to vote in.

Counties

Maine is divided into political jurisdictions designated as counties; since 1860 there have been 16 counties in the state, varying widely in size.

Politics

State and local politics

In state general elections, Maine voters tend to accept independent and third-party candidates more frequently than most states. Maine has had two independent governors in recent decades (James B. Longley, 1975–1979, and current U.S. Senator Angus King, 1995–2003). Maine state politicians, Democrats and Republicans alike, are noted for having more moderate views than many in the national wings of their respective parties. Maine is an alcoholic beverage control state. On May 6, 2009, Maine became the fifth state to legalize same-sex marriage; however, the law was repealed by voters on November 3, 2009. On November 6, 2012, Maine, along with Maryland and Washington, became one of the first states to legalize same-sex marriage at the ballot box.

Federal politics

In the 1930s, Maine was one of very few states that retained Republican sentiments. In the 1936 presidential election, Franklin D. Roosevelt received the electoral votes of every state other than Maine and Vermont; these were the only two states that never voted for Roosevelt in any of his presidential campaigns, though Maine was closely fought in 1940 and 1944. In the 1960s, Maine began to lean toward the Democrats, especially in presidential elections.
In 1968, Hubert Humphrey became just the second Democrat in half a century to carry Maine, perhaps because of the presence of his running mate, Maine Senator Edmund Muskie, although the state voted Republican in every presidential election in the 1970s and 1980s. Since 1969, two of Maine's four electoral votes have been awarded based on the winner of the statewide election; the other two go to the highest vote-getter in each of the state's two congressional districts. Every other state except Nebraska gives all its electoral votes to the candidate who wins the popular vote in the state at large, without regard to performance within districts. Maine split its electoral vote for the first time in 2016, when Donald Trump's strong showing in the more rural central and northern parts of the state allowed him to capture one of its four votes in the Electoral College. Ross Perot achieved a great deal of success in Maine in the presidential elections of 1992 and 1996. In 1992, as an independent candidate, Perot came in second to Democrat Bill Clinton, despite the long-time presence of the Bush family summer home in Kennebunkport. In 1996, as the nominee of the Reform Party, Perot did better in Maine than in any other state. Maine has voted for Democrat Bill Clinton twice, for Al Gore in 2000, John Kerry in 2004, and Barack Obama in 2008 and 2012. In 2016, Republican Donald Trump won one of Maine's electoral votes, with Democratic opponent Hillary Clinton winning the other three. Although Democrats have mostly carried the state in presidential elections in recent years, Republicans have largely maintained control of the state's U.S. Senate seats; Edmund Muskie, William Hathaway, and George J. Mitchell are the only Maine Democrats to have served in the U.S. Senate in the past fifty years. In the 2010 midterm elections, Republicans made major gains in Maine.
They captured the governor's office as well as majorities in both chambers of the state legislature for the first time since the early 1970s. However, in the 2012 elections Democrats managed to recapture both houses of the Maine Legislature. Maine's U.S. senators are Republican Susan Collins and Independent Angus King. The governor is Democrat Janet Mills. The state's two members of the United States House of Representatives are Democrats Chellie Pingree and Jared Golden. Maine is the first state to have introduced ranked-choice voting in federal elections.

Municipalities

Organized municipalities

An organized municipality has a form of elected local government which administers and provides local services, keeps records, collects licensing fees, and can pass locally binding ordinances, among other functions.
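Ranked-choice voting as used in Maine follows the instant-runoff method: voters rank the candidates, and if no candidate holds a majority of first choices, the last-place candidate is eliminated and those ballots transfer to their next ranked choice until someone wins a majority. A minimal sketch of that tallying logic (invented ballots, simplified tie-breaking, not Maine's official tabulation rules):

```python
from collections import Counter

def instant_runoff(ballots):
    """Repeatedly eliminate the candidate with the fewest first-choice
    votes until one candidate holds a majority of active ballots.
    Toy illustration only; ties are broken arbitrarily."""
    ballots = [list(b) for b in ballots]
    while True:
        # Count current first choices on ballots that still rank someone.
        counts = Counter(b[0] for b in ballots if b)
        total = sum(counts.values())
        leader, leader_votes = counts.most_common(1)[0]
        if leader_votes * 2 > total or len(counts) == 1:
            return leader
        # Eliminate the last-place candidate; their ballots transfer.
        loser = min(counts, key=counts.get)
        ballots = [[c for c in b if c != loser] for b in ballots]

ballots = (
    [["A", "C"]] * 4   # 4 voters: A first, C second
    + [["B", "C"]] * 3  # 3 voters: B first, C second
    + [["C", "B"]] * 2  # 2 voters: C first, B second
)
print(instant_runoff(ballots))  # prints "B": C is eliminated, C's ballots transfer to B
```

Note how the plurality leader (A, with 4 first-choice votes) loses once C's two ballots transfer to B, giving B a 5-of-9 majority; this transfer behavior is what distinguishes ranked-choice results from a simple plurality count.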
people such as the Hutterites and Mennonites, many of whom were also of Germanic heritage. In turn, pro-war groups formed, such as the Montana Council of Defense, created by Governor Samuel V. Stewart, and local "loyalty committees". War sentiment was complicated by labor issues. The Anaconda Copper Company, then at its historic peak of copper production, was an extremely powerful force in Montana, but it also faced criticism and opposition from socialist newspapers and unions struggling to make gains for their members. In Butte, a multiethnic community with a significant European immigrant population, labor unions, particularly the newly formed Metal Mine Workers' Union, opposed the war on the grounds that it mostly profited large lumber and mining interests. In the wake of ramped-up mine production and the Speculator Mine disaster in June 1917, Industrial Workers of the World organizer Frank Little arrived in Butte to organize miners and gave speeches with inflammatory antiwar rhetoric. On August 1, 1917, he was dragged from his boarding house by masked vigilantes and hanged from a railroad trestle in what is considered a lynching. Little's murder and the strikes that followed resulted in the National Guard being sent to Butte to restore order. Overall, anti-German and antilabor sentiment increased and created a movement that led to the passage of the Montana Sedition Act the following February. In addition, the Council of Defense was made a state agency with the power to prosecute and punish individuals deemed in violation of the Act. The council also passed rules limiting public gatherings and prohibiting the speaking of German in public. In the wake of the legislative action in 1918, emotions ran high. U.S. Attorney Burton K. Wheeler and several district court judges who hesitated to prosecute or convict people brought up on charges were strongly criticized.
Wheeler was brought before the Council of Defense, though he avoided formal proceedings, and a district court judge from Forsyth was impeached. Burnings of German-language books and several near-hangings occurred. The prohibition on speaking German remained in effect into the early 1920s. Complicating the wartime struggles, the 1918 influenza epidemic claimed the lives of more than 5,000 Montanans. The suppression of civil liberties that occurred led some historians to dub this period "Montana's Agony". Depression era An economic depression began in Montana after World War I and lasted through the Great Depression until the beginning of World War II. This caused great hardship for farmers, ranchers, and miners. The wheat farms in eastern Montana make the state a major producer; the wheat has a relatively high protein content and thus commands premium prices. Montana and World War II By the time the U.S. entered World War II on December 8, 1941, many Montanans had enlisted in the military to escape the poor national economy of the previous decade. Another 40,000-plus Montanans entered the armed forces in the first year following the declaration of war, and more than 57,000 joined up before the war ended. These numbers constituted about ten percent of the state's population, and Montana again contributed one of the highest numbers of soldiers per capita of any state. Many Native Americans were among those who served, including soldiers from the Crow Nation who became Code Talkers. At least 1,500 Montanans died in the war. Montana also was the training ground for the First Special Service Force or "Devil's Brigade", a joint U.S.-Canadian commando-style force that trained at Fort William Henry Harrison for experience in mountainous and winter conditions before deployment. Air bases were built in Great Falls, Lewistown, Cut Bank, and Glasgow, some of which were used as staging areas to prepare planes to be sent to allied forces in the Soviet Union.
During the war, about 30 Japanese Fu-Go balloon bombs were documented to have landed in Montana, though no casualties or major forest fires were attributed to them. In 1940, Jeannette Rankin was again elected to Congress. In 1941, as she had in 1917, she voted against the United States' declaration of war after the Japanese attack on Pearl Harbor. Hers was the only vote against the war, and in the wake of public outcry over her vote, Rankin required police protection for a time. Other pacifists tended to be those from "peace churches" who generally opposed war. Many individuals claiming conscientious objector status from throughout the U.S. were sent to Montana during the war as smokejumpers and for other forest fire-fighting duties. In 1942, the US Army established Camp Rimini near Helena for the purpose of training sled dogs in winter weather. Other military During World War II, the planned battleship USS Montana was named in honor of the state, but it was never completed. Montana is the only one of the first 48 states never to have had a completed battleship named for it. Alaska and Hawaii have both had nuclear submarines named after them. Montana is the only state in the union without a modern naval ship named in its honor. However, in August 2007, Senator Jon Tester asked that a submarine be christened USS Montana. Secretary of the Navy Ray Mabus announced on September 3, 2015, that the Virginia-class attack submarine SSN-794 would become the second commissioned warship to bear the name. Cold War Montana In the post-World War II Cold War era, Montana became host to the U.S. Air Force Military Air Transport Service (1947) for airlift training in C-54 Skymasters; eventually, in 1953, Strategic Air Command air and missile forces were based at Malmstrom Air Force Base in Great Falls. The base also hosted the 29th Fighter Interceptor Squadron, Air Defense Command, from 1953 to 1968.
In December 1959, Malmstrom AFB was selected as the home of the new Minuteman I intercontinental ballistic missile. The first operational missiles were in place and ready in early 1962. In late 1962, missiles assigned to the 341st Strategic Missile Wing played a major role in the Cuban Missile Crisis. When the Soviets removed their missiles from Cuba, President John F. Kennedy said the Soviets backed down because they knew he had an "ace in the hole", referring directly to the Minuteman missiles in Montana. Montana eventually became home to the largest ICBM field in the U.S. covering . Geography Montana is one of the eight Mountain States, located in the north of the region known as the Western United States. It borders North Dakota and South Dakota to the east. Wyoming is to the south, Idaho is to the west and southwest, and the Canadian provinces of British Columbia, Alberta, and Saskatchewan are to the north, making it the only state to border three Canadian provinces. With an area of , Montana is slightly larger than Japan. It is the fourth-largest state in the United States after Alaska, Texas, and California, and the largest landlocked state. Topography The state's topography is roughly defined by the Continental Divide, which splits much of the state into distinct eastern and western regions. Most of Montana's hundred or more named mountain ranges are in the state's western half, most of which is geologically and geographically part of the northern Rocky Mountains. The Absaroka and Beartooth ranges in the state's south-central part are technically part of the Central Rocky Mountains. The Rocky Mountain Front is a significant feature in the state's north-central portion, and isolated island ranges that interrupt the prairie landscape are common in the central and eastern parts of the state. About 60 percent of the state is prairie, part of the northern Great Plains.
The Bitterroot Mountains—one of the longest continuous ranges in the Rocky Mountain chain from Alaska to Mexico—along with smaller ranges, including the Coeur d'Alene Mountains and the Cabinet Mountains, divide the state from Idaho. The southern third of the Bitterroot range blends into the Continental Divide. Other major mountain ranges west of the divide include the Cabinet Mountains, the Anaconda Range, the Missions, the Garnet Range, the Sapphire Mountains, and the Flint Creek Range. The divide's northern section, where the mountains rapidly give way to prairie, is part of the Rocky Mountain Front. The front is most pronounced in the Lewis Range, located primarily in Glacier National Park. Due to the configuration of mountain ranges in Glacier National Park, the Northern Divide (which begins in Alaska's Seward Peninsula) crosses this region and turns east in Montana at Triple Divide Peak. It causes the Waterton, Belly, and Saint Mary rivers to flow north into Alberta, Canada. There they join the Saskatchewan River, which ultimately empties into Hudson Bay. East of the divide, several roughly parallel ranges cover the state's southern part, including the Gravelly Range, Madison Range, Gallatin Range, Absaroka Mountains, and Beartooth Mountains. The Beartooth Plateau is the largest continuous land mass over high in the continental United States. It contains the state's highest point, Granite Peak, high. North of these ranges are the Big Belt Mountains, Bridger Mountains, Tobacco Roots, and several island ranges, including the Crazy Mountains and Little Belt Mountains. Between many mountain ranges are several rich river valleys. The Big Hole Valley, Bitterroot Valley, Gallatin Valley, Flathead Valley, and Paradise Valley have extensive agricultural resources and multiple opportunities for tourism and recreation.
East and north of this transition zone are the expansive and sparsely populated Northern Plains, with tableland prairies, smaller island mountain ranges, and badlands. The isolated island ranges east of the Divide include the Bear Paw Mountains, Bull Mountains, Castle Mountains, Crazy Mountains, Highwood Mountains, Judith Mountains, Little Belt Mountains, Little Rocky Mountains, the Pryor Mountains, Little Snowy Mountains, Big Snowy Mountains, Sweet Grass Hills, and—in the state's southeastern corner near Ekalaka—the Long Pines. Many of these isolated eastern ranges were created about 120 to 66 million years ago when magma welling up from the interior cracked and bowed the earth's surface. The area east of the divide in the state's north-central portion is known for the Missouri Breaks and other significant rock formations. Several buttes south of Great Falls are major landmarks: Cascade, Crown, Square, and Shaw. Known as laccoliths, they formed when igneous rock protruded through cracks in the sedimentary rock. The underlying surface consists of sandstone and shale. Surface soils in the area are highly diverse, and greatly affected by the local geology, whether glaciated plain, intermountain basin, mountain foothills, or tableland. Foothill regions are often covered in weathered stone or broken slate, or consist of uncovered bare rock (usually igneous, quartzite, sandstone, or shale). The soil of intermountain basins usually consists of clay, gravel, sand, silt, and volcanic ash, much of it laid down by lakes which covered the region during the Oligocene 33 to 23 million years ago. Tablelands are often topped with argillite gravel and weathered quartzite, occasionally underlain by shale. The glaciated plains are generally covered in clay, gravel, sand, and silt left by the proglacial Lake Great Falls or by moraines or gravel-covered former lake basins left by the Wisconsin glaciation 85,000 to 11,000 years ago.
Farther east, areas such as Makoshika State Park near Glendive and Medicine Rocks State Park near Ekalaka contain some of the most scenic badlands regions in the state. The Hell Creek Formation in northeastern Montana is a major source of dinosaur fossils. Paleontologist Jack Horner of the Museum of the Rockies in Bozeman brought this formation to the world's attention with several major finds. Rivers, lakes and reservoirs Montana has thousands of named rivers and creeks, of which are known for "blue-ribbon" trout fishing. Montana's water resources provide for recreation, hydropower, crop and forage irrigation, mining, and water for human consumption. Montana is one of few geographic areas in the world whose rivers form parts of three major watersheds (i.e. where two continental divides intersect). Its rivers feed the Pacific Ocean, the Gulf of Mexico, and Hudson Bay. The watersheds divide at Triple Divide Peak in Glacier National Park. If Hudson Bay is considered part of the Arctic Ocean, Triple Divide Peak is the only place on Earth with drainage to three different oceans. Pacific Ocean drainage basin All waters in Montana west of the divide flow into the Columbia River. The Clark Fork of the Columbia (not to be confused with the Clarks Fork of the Yellowstone River) rises near Butte and flows northwest to Missoula, where it is joined by the Blackfoot River and Bitterroot River. Farther downstream, it is joined by the Flathead River before entering Idaho near Lake Pend Oreille. The Pend Oreille River forms the outflow of Lake Pend Oreille. The Pend Oreille River joins the Columbia River, which flows to the Pacific Ocean—making the long Clark Fork/Pend Oreille (considered a single river system) the longest river in the Rocky Mountains. The Clark Fork discharges the greatest volume of water of any river exiting the state. The Kootenai River in northwest Montana is another major tributary of the Columbia.
Gulf of Mexico drainage basin East of the divide the Missouri River, which is formed by the confluence of the Jefferson, Madison, and Gallatin Rivers near Three Forks, flows due north through the west-central part of the state to Great Falls. From this point, it flows generally east through fairly flat agricultural land and the Missouri Breaks to Fort Peck Reservoir. The stretch of river between Fort Benton and the Fred Robinson Bridge at the western boundary of Fort Peck Reservoir was designated a National Wild and Scenic River in 1976. The Missouri enters North Dakota near Fort Union, having drained more than half the land area of Montana (). Nearly one-third of the Missouri River in Montana lies behind 10 dams: Toston, Canyon Ferry, Hauser, Holter, Black Eagle, Rainbow, Cochrane, Ryan, Morony, and Fort Peck. Other major Montana tributaries of the Missouri include the Smith, Milk, Marias, Judith, and Musselshell Rivers. Montana also claims the disputed title of possessing the world's shortest river, the Roe River, just outside Great Falls. Through the Missouri, these rivers ultimately join the Mississippi River and flow into the Gulf of Mexico. Hell Roaring Creek begins in southern Montana, and when combined with the Red Rock, Beaverhead, Jefferson, Missouri, and Mississippi rivers, it forms the longest river system in North America and the fourth-longest in the world. The Yellowstone River rises on the Continental Divide near Younts Peak in Wyoming's Teton Wilderness. It flows north through Yellowstone National Park, enters Montana near Gardiner, and passes through the Paradise Valley to Livingston. It then flows northeasterly across the state through Billings, Miles City, Glendive, and Sidney. The Yellowstone joins the Missouri in North Dakota just east of Fort Union. It is the longest undammed, free-flowing river in the contiguous United States, and drains about a quarter of Montana ().
Major tributaries of the Yellowstone include the Boulder, Stillwater, Clarks Fork, Bighorn, Tongue, and Powder Rivers. Hudson Bay drainage basin The Northern Divide turns east in Montana at Triple Divide Peak, causing the Waterton, Belly, and Saint Mary Rivers to flow north into Alberta. There they join the Saskatchewan River, which ultimately empties into Hudson Bay. Lakes and reservoirs Montana has some 3,000 named lakes and reservoirs, including Flathead Lake, the largest natural freshwater lake in the western United States. Other major lakes include Whitefish Lake in the Flathead Valley and Lake McDonald and St. Mary Lake in Glacier National Park. The largest reservoir in the state is Fort Peck Reservoir on the Missouri River, which is contained by the second-largest earthen dam and largest hydraulically filled dam in the world. Other major reservoirs include Hungry Horse on the Flathead River; Lake Koocanusa on the Kootenai River; Lake Elwell on the Marias River; Clark Canyon on the Beaverhead River; Yellowtail on the Bighorn River; and Canyon Ferry, Hauser, Holter, Rainbow, and Black Eagle on the Missouri River. Flora and fauna Vegetation of the state includes lodgepole pine, ponderosa pine, Douglas fir, larch, spruce, aspen, birch, red cedar, hemlock, ash, alder, Rocky Mountain maple, and cottonwood trees. Forests cover about 25% of the state. Flowers native to Montana include asters, bitterroots, daisies, lupins, poppies, primroses, columbine, lilies, orchids, and dryads. Several species of sagebrush and cactus and many species of grasses are common. Many species of mushrooms and lichens are also found in the state. Montana is home to diverse fauna including 14 amphibian, 90 fish, 117 mammal, 20 reptile, and 427 bird species. Additionally, more than 10,000 invertebrate species are present, including 180 mollusks and 30 crustaceans. Montana has the largest grizzly bear population in the lower 48 states.
Montana hosts five federally endangered species (black-footed ferret, whooping crane, least tern, pallid sturgeon, and white sturgeon) and seven threatened species, including the grizzly bear, Canadian lynx, and bull trout. Since reintroduction, the gray wolf population has stabilized at about 900 animals, and they have been delisted as endangered. The Montana Department of Fish, Wildlife and Parks manages fishing and hunting seasons for at least 17 species of game fish, including seven species of trout, walleye, and smallmouth bass, and at least 29 species of game birds and animals, including ring-necked pheasant, grey partridge, elk, pronghorn antelope, mule deer, whitetail deer, gray wolf, and bighorn sheep. Protected lands Montana contains Glacier National Park, "The Crown of the Continent"; and parts of Yellowstone National Park, including three of the park's five entrances. Other federally recognized sites include the Little Bighorn Battlefield National Monument, Bighorn Canyon National Recreation Area, and Big Hole National Battlefield. The Bison Range is managed by the Confederated Salish and Kootenai Tribes, and the American Prairie is owned and operated by a non-profit organization. Federal and state agencies administer approximately , or 35 percent of Montana's land. The U.S. Department of Agriculture Forest Service administers of forest land in ten National Forests. There are approximately of wilderness in 12 separate wilderness areas that are part of the National Wilderness Preservation System established by the Wilderness Act of 1964. The U.S. Department of the Interior Bureau of Land Management controls of federal land. The U.S. Department of the Interior Fish and Wildlife Service administers 1.1 million acres of National Wildlife Refuges and waterfowl production areas in Montana. The U.S. Department of the Interior Bureau of Reclamation administers approximately of land and water surface in the state.
The Montana Department of Fish, Wildlife and Parks operates approximately of state parks and access points on the state's rivers and lakes. The Montana Department of Natural Resources and Conservation manages of School Trust Land ceded by the federal government under the Land Ordinance of 1785 to the state in 1889 when Montana was granted statehood. These lands are managed by the state for the benefit of public schools and institutions in the state. Areas managed by the National Park Service include:
Big Hole National Battlefield near Wisdom
Bighorn Canyon National Recreation Area near Fort Smith
Glacier National Park
Grant-Kohrs Ranch National Historic Site at Deer Lodge
Lewis and Clark National Historic Trail
Little Bighorn Battlefield National Monument near Crow Agency
Nez Perce National Historical Park
Yellowstone National Park
Climate Montana is a large state with considerable variation in geography, topography and elevation, and the climate is equally varied. The state spans from below the 45th parallel (the line equidistant between the equator and North Pole) to the 49th parallel, and elevations range from under to nearly above sea level. The western half is mountainous, interrupted by numerous large valleys. Eastern Montana comprises plains and badlands, broken by hills and isolated mountain ranges, and has a semiarid, continental climate (Köppen climate classification BSk). The Continental Divide has a considerable effect on the climate, as it restricts the flow of warmer air from the Pacific from moving east, and drier continental air from moving west. The area west of the divide has a modified northern Pacific Coast climate, with milder winters, cooler summers, less wind, and a longer growing season. Low clouds and fog often form in the valleys west of the divide in winter, but this is rarely seen in the east. Average daytime temperatures vary from in January to in July. The variation in geography leads to great variation in temperature.
The highest observed summer temperature was at Glendive on July 20, 1893, and Medicine Lake on July 5, 1937. Throughout the state, summer nights are generally cool and pleasant. Extreme hot weather is less common above . Snowfall has been recorded in all months of the year in the more mountainous areas of central and western Montana, though it is rare in July and August. The coldest temperature on record for Montana is also the coldest temperature for the contiguous United States. On January 20, 1954, was recorded at a gold mining camp near Rogers Pass. Temperatures vary greatly on cold nights, and Helena, to the southeast, had a low of only on the same date, and an all-time record low of . Winter cold spells are usually the result of cold continental air coming south from Canada. The front is often well defined, causing a large temperature drop in a 24-hour period. Conversely, air flow from the southwest results in "chinooks". These steady (or more) winds can suddenly warm parts of Montana, especially areas just to the east of the mountains, where temperatures sometimes rise up to for 10 days or longer. Loma is the site of the most extreme recorded temperature change in a 24-hour period in the United States. On January 15, 1972, a chinook wind blew in and the temperature rose from . Average annual precipitation is , but great variations are seen. The mountain ranges block the moist Pacific air, holding moisture in the western valleys, and creating rain shadows to the east. Heron, in the west, receives the most precipitation, . On the eastern (leeward) side of a mountain range, the valleys are much drier; Lonepine averages , and Deer Lodge of precipitation. The mountains can receive over , for example the Grinnell Glacier in Glacier National Park gets . An area southwest of Belfry averaged only over a 16-year period. Most of the larger cities get of snow each year. Mountain ranges can accumulate of snow during a winter.
Heavy snowstorms may occur from September through May, though most snow falls from November to March. The climate has become warmer in Montana and continues to do so. The glaciers in Glacier National Park have receded and are predicted to melt away completely in a few decades. Many Montana cities set heat records during July 2007, the hottest month ever recorded in Montana. Winters are warmer, too, and have fewer cold spells. Previously, these cold spells had killed off bark beetles, but the beetles are now attacking the forests of western Montana. The warmer winters in the region have allowed various species to expand their ranges and proliferate. The combination of warmer weather, attack by beetles, and mismanagement has led to a substantial increase in the severity of forest fires in Montana. According to a study done for the U.S. Environmental Protection Agency by the Harvard School of Engineering and Applied Science, parts of Montana will experience a 200% increase in area burned by wildfires and an 80% increase in related air pollution. The table below lists average temperatures for the warmest and coldest month for Montana's seven largest cities. The coldest month varies between December and January depending on location, although figures are similar throughout. Antipodes Montana is one of only two contiguous states (along with Colorado) that are antipodal to land. The Kerguelen Islands are antipodal to the Montana–Saskatchewan–Alberta border. No towns are precisely antipodal to Kerguelen, though Chester and Rudyard are close. Cities and towns Montana has 56 counties and a total of 364 "places" as defined by the United States Census Bureau; the latter comprising 129 incorporated places and 235 census-designated places. The incorporated places are made up of 52 cities, 75 towns, and two consolidated city-counties. Montana has one city, Billings, with a population over 100,000; and three cities with populations over 50,000: Missoula, Great Falls and Bozeman.
The state also has five Micropolitan Statistical Areas, centered on Bozeman, Butte, Helena, Kalispell and Havre. Collectively all of these areas (excluding Havre) are known informally as the "big seven", as they are consistently the seven largest communities in the state (their rank order in terms of population is Billings, Missoula, Great Falls, Bozeman, Butte, Helena and Kalispell, according to the 2010 U.S. Census). Based on 2013 census numbers, they contain 35 percent of Montana's population, and the counties in which they are located are home to 62 percent of the state's population. The geographic center of population of Montana is in sparsely populated Meagher County, in the town of White Sulphur Springs. Demographics The United States Census Bureau states that the population of Montana was 1,085,407 on April 1, 2020, a 9.7% increase since the 2010 United States census. The 2010 census put Montana's population at 989,415. During the first decade of the new century, growth was mainly concentrated in Montana's seven largest counties, with the highest percentage growth in Gallatin County, which had a 32% increase in its population from 2000 to 2010. The city having the largest percentage growth was Kalispell, with 40.1%, and the city with the largest increase in actual residents was Billings, with an increase in population of 14,323 from 2000 to 2010. On January 3, 2012, the Census and Economic Information Center (CEIC) at the Montana Department of Commerce estimated Montana had hit the one million population mark sometime between November and December 2011. According to the 2020 census, 88.9% of the population was White (87.8% non-Hispanic White), 6.7% American Indian and Alaska Native, 4.1% Hispanics and Latinos of any race, 0.9% Asian, 0.6% Black or African American, 0.1% Native Hawaiian and other Pacific Islander, and 2.8% from two or more races.
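A geographic center of population, like the one noted above for Meagher County, is essentially a population-weighted average of where residents live. A minimal sketch of that calculation, using made-up county coordinates and populations rather than actual census data (and ignoring the spherical-earth corrections a real computation would apply):

```python
# Sketch: a center of population as the population-weighted mean of
# (latitude, longitude) points. The sample data below is illustrative
# only, not real census figures.

def center_of_population(counties):
    """Return (lat, lon) of the population-weighted centroid.

    counties: iterable of (latitude, longitude, population) tuples.
    """
    total = sum(pop for _, _, pop in counties)
    lat = sum(la * pop for la, _, pop in counties) / total
    lon = sum(lo * pop for _, lo, pop in counties) / total
    return lat, lon

# Hypothetical sample counties: (latitude, longitude, population)
sample = [
    (45.78, -108.50, 160_000),  # a Billings-sized population center
    (46.87, -113.99, 110_000),  # a Missoula-sized population center
    (47.50, -111.30, 80_000),   # a Great Falls-sized population center
]

print(center_of_population(sample))
```

The result always falls somewhere between the extremes of the input coordinates, pulled toward the heavier population centers, which is why Montana's center lands in a lightly populated county between its larger cities.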
The largest European ancestry groups in Montana as of 2010 were: German (27.0%), Irish (14.8%), English (12.6%), Norwegian (10.9%), French (4.7%), and Italian (3.4%). Intrastate demographics Montana has a larger Native American population, both numerically and as a percentage, than most U.S. states. Ranked 45th in population (by the 2010 Census), it is 19th in Native people, who are 6.5% of the state's population—the sixth-highest percentage of all fifty states. Of Montana's 56 counties, Native Americans constitute a majority in three: Big Horn, Glacier, and Roosevelt. Other counties with large Native American populations include Blaine, Cascade, Hill, Missoula, and Yellowstone Counties. The state's Native American population grew by 27.9% between 1980 and 1990 (at a time when Montana's entire population rose 1.6%), and by 18.5% between 2000 and 2010. As of 2009, almost two-thirds of Native Americans in the state live in urban areas. Of Montana's 20 largest cities, Polson (15.7%), Havre (13.0%), Great Falls (5.0%), Billings (4.4%), and Anaconda (3.1%) had the greatest percentages of Native American residents in 2010. Billings (4,619), Great Falls (2,942), Missoula (1,838), Havre (1,210), and Polson (706) have the most Native Americans living there. The state's seven reservations include more than 12 distinct Native American ethnolinguistic groups. While the largest European-American population in Montana overall is German (which may also include Austrian and Swiss, among other groups), pockets of significant Scandinavian ancestry are prevalent in some of the farming-dominated northern and eastern prairie regions, parallel to nearby regions of North Dakota and Minnesota. Farmers of Irish, Scots, and English roots also settled in Montana.
The historically mining-oriented communities of western Montana such as Butte have a wider range of European-American ethnicity; Finns, Eastern Europeans, and especially Irish settlers left an indelible mark on the area, as well as people originally from British mining regions such as Cornwall, Devon, and Wales. The nearby city of Helena, also founded as a mining camp, had a similar mix in addition to a small Chinatown. Many of Montana's historic logging communities originally attracted people of Scottish, Scandinavian, Slavic, English, and Scots-Irish descent. The Hutterites, an Anabaptist sect originally from Switzerland, settled here, and today Montana is second only to South Dakota in U.S. Hutterite population, with several colonies spread across the state. Beginning in the mid-1990s, the state also had an influx of Amish, who moved to Montana from the increasingly urbanized areas of Ohio and Pennsylvania. Montana's Hispanic population is concentrated in the Billings area in south-central Montana, where many of Montana's Mexican-Americans have been in the state for generations. Great Falls has the highest percentage of African-Americans in its population, although Billings has more African-American residents than Great Falls. The Chinese in Montana, while a low percentage today, have been an important presence. About 2,000–3,000 Chinese miners were in the mining areas of Montana by 1870, and 2,500 in 1890. However, public opinion grew increasingly negative toward them in the 1890s, and nearly half of the state's Asian population had left the state by 1900. Today, the Missoula area has a large Hmong population, and the nearly 3,000 Montanans who claim Filipino ancestry are the largest Asian-American group in the state. In the 2015 United States census estimates, Montana had the second-highest percentage of U.S. military veterans of any state.
Only Alaska had a higher percentage, with roughly 14 percent of its population over 18 being veterans, compared with roughly 12 percent in Montana. Native Americans About 66,000 people of Native American heritage live in Montana. Stemming from multiple treaties and federal legislation, including the Indian Appropriations Act (1851), the Dawes Act (1887), and the Indian Reorganization Act (1934), seven Indian reservations, encompassing 11 federally recognized tribal nations, were created in Montana. A 12th nation, the Little Shell Chippewa, is a "landless" people headquartered in Great Falls; it is recognized by the state of Montana, but not by the U.S. government. The Blackfeet nation is headquartered on the Blackfeet Indian Reservation (1851) in Browning, Crow on the Crow Indian Reservation (1868) in Crow Agency, Confederated Salish and Kootenai and Pend d'Oreille on the Flathead Indian Reservation (1855) in Pablo, Northern Cheyenne on the Northern Cheyenne Indian Reservation (1884) at Lame Deer, Assiniboine and Gros Ventre on the Fort Belknap Indian Reservation (1888) in Fort Belknap Agency, Assiniboine and Sioux on the Fort Peck Indian Reservation (1888) at Poplar, and Chippewa-Cree on the Rocky Boy's Indian Reservation (1916) near Box Elder. Approximately 63% of all Native people live off the reservations, concentrated in the larger Montana cities, with the largest concentration of urban Indians in Great Falls. The state also has a small Métis population, and 1990 census data indicated that people from as many as 275 different tribes lived in Montana. Montana's Constitution specifically reads, "the state recognizes the distinct and unique cultural heritage of the American Indians and is committed in its educational goals to the preservation of their cultural integrity." It is the only state in the U.S. with such a constitutional mandate.
The Indian Education for All Act was passed in 1999 to provide funding for this mandate and ensure implementation. It mandates that all schools teach American Indian history, culture, and heritage from preschool through college. For kindergarten through 12th-grade students, an "Indian Education for All" curriculum from the Montana Office of Public Instruction is available free to all schools. The state was sued in 2004 because of lack of funding, and the state has increased its support of the program. South Dakota passed similar legislation in 2007, and Wisconsin was working to strengthen its own program based on this model—and the current practices of Montana's schools. Each Indian reservation in the state has a fully accredited tribal college. The University of Montana "was the first to establish dual admission agreements with all of the tribal colleges and as such it was the first institution in the nation to actively facilitate student transfer from the tribal colleges." Birth data Note: Births in table do not add up, because Hispanics are counted both by their ethnicity and by their race, giving a higher overall number. Since 2016, data for births of White Hispanic origin are not collected, but included in one Hispanic group; persons of Hispanic origin may be of any race. Languages English is the official language in the state of Montana, as it is in many U.S. states. According to the 2000 Census, 94.8% of the population aged five and older speak English at home. Spanish is the language next most commonly spoken at home, with about 13,040 Spanish-language speakers in the state (1.4% of the population) in 2011. Also, 15,438 (1.7% of the state population) were speakers of Indo-European languages other than English or Spanish, 10,154 (1.1%) were speakers of a Native American language, and 4,052 (0.4%) were speakers of an Asian or Pacific Islander language. 
Other languages spoken in Montana (as of 2013) include Assiniboine (about 150 speakers in Montana and Canada), Blackfoot (about 100 speakers), Cheyenne (about 1,700 speakers), Plains Cree (about 100 speakers), Crow (about 3,000 speakers), Dakota (about 18,800 speakers in Minnesota, Montana, Nebraska, North Dakota, and South Dakota), German Hutterite (about 5,600 speakers), Gros Ventre (about 10 speakers), Kalispel-Pend d'Oreille (about 64 speakers), Kutenai (about six speakers), and Lakota (about 6,000 speakers in Minnesota, Montana, Nebraska, North Dakota, South Dakota). The United States Department of Education estimated in 2009 that 5,274 students in Montana spoke a language at home other than English. These included a Native American language (64%), German (4%), Spanish (3%), Russian (1%), and Chinese (less than 0.5%). Religion According to the Pew Forum, the religious affiliations of the people of Montana are: Protestant 47%, Catholic 23%, LDS (Mormon) 5%, Jehovah's Witness 2%, Buddhist 1%, Jewish 0.5%, Muslim 0.5%, Hindu 0.5% and nonreligious at 20%. The largest denominations in Montana as of 2010 were the Catholic Church with 127,612 adherents, the Church of Jesus Christ of Latter-day Saints with 46,484 adherents, Evangelical Lutheran Church in America with 38,665 adherents, and nondenominational Evangelical Protestant with 27,370 adherents. Economy , the U.S. Bureau of Economic Analysis estimated Montana's state product was $51.91 billion (47th in the nation) and per capita personal income was $41,280 (37th in the nation). Total employment: 371,239 () Total employer
the war. Montana also was the training ground for the First Special Service Force or "Devil's Brigade", a joint U.S.-Canadian commando-style force that trained at Fort William Henry Harrison for experience in mountainous and winter conditions before deployment. Air bases were built in Great Falls, Lewistown, Cut Bank, and Glasgow, some of which were used as staging areas to prepare planes to be sent to allied forces in the Soviet Union. During the war, about 30 Japanese Fu-Go balloon bombs were documented to have landed in Montana, though no casualties or major forest fires were attributed to them. In 1940, Jeannette Rankin was again elected to Congress. In 1941, as she had in 1917, she voted against the United States' declaration of war after the Japanese attack on Pearl Harbor. Hers was the only vote against the war, and in the wake of public outcry over her vote, Rankin required police protection for a time. Other pacifists tended to be those from "peace churches" who generally opposed war. Many individuals claiming conscientious objector status from throughout the U.S. were sent to Montana during the war as smokejumpers and for other forest fire-fighting duties. In 1942, the U.S. Army established Camp Rimini near Helena for the purpose of training sled dogs in winter weather. Other military During World War II, the planned battleship USS Montana was named in honor of the state, but it was never completed. Montana is the only one of the first 48 states never to have had a completed battleship named for it. Alaska and Hawaii have both had nuclear submarines named after them. Montana is the only state in the union without a modern naval ship named in its honor. However, in August 2007, Senator Jon Tester asked that a submarine be christened USS Montana. Secretary of the Navy Ray Mabus announced on September 3, 2015, that the Virginia-class attack submarine SSN-794 would become the second commissioned warship to bear the name.
Cold War Montana In the post-World War II Cold War era, Montana became host to U.S. Air Force Military Air Transport Service (1947) for airlift training in C-54 Skymasters, and eventually, in 1953, Strategic Air Command air and missile forces were based at Malmstrom Air Force Base in Great Falls. The base also hosted the 29th Fighter Interceptor Squadron, Air Defense Command, from 1953 to 1968. In December 1959, Malmstrom AFB was selected as the home of the new Minuteman I intercontinental ballistic missile. The first operational missiles were in place and ready in early 1962. In late 1962, missiles assigned to the 341st Strategic Missile Wing played a major role in the Cuban Missile Crisis. When the Soviets removed their missiles from Cuba, President John F. Kennedy said the Soviets backed down because they knew he had an "ace in the hole", referring directly to the Minuteman missiles in Montana. Montana eventually became home to the largest ICBM field in the U.S., covering . Geography Montana is one of the eight Mountain States, located in the north of the region known as the Western United States. It borders North Dakota and South Dakota to the east, Wyoming to the south, and Idaho to the west and southwest; the Canadian provinces of British Columbia, Alberta, and Saskatchewan are to the north, making it the only state to border three Canadian provinces. With an area of , Montana is slightly larger than Japan. It is the fourth-largest state in the United States after Alaska, Texas, and California, and the largest landlocked state. Topography The state's topography is roughly defined by the Continental Divide, which splits much of the state into distinct eastern and western regions. Most of Montana's hundred or more named mountain ranges are in the state's western half, most of which is geologically and geographically part of the northern Rocky Mountains.
The Absaroka and Beartooth ranges in the state's south-central part are technically part of the Central Rocky Mountains. The Rocky Mountain Front is a significant feature in the state's north-central portion, and isolated island ranges that interrupt the prairie landscape are common in the central and eastern parts of the state. About 60 percent of the state is prairie, part of the northern Great Plains. The Bitterroot Mountains—one of the longest continuous ranges in the Rocky Mountain chain from Alaska to Mexico—along with smaller ranges, including the Coeur d'Alene Mountains and the Cabinet Mountains, divide the state from Idaho. The southern third of the Bitterroot range blends into the Continental Divide. Other major mountain ranges west of the divide include the Cabinet Mountains, the Anaconda Range, the Missions, the Garnet Range, the Sapphire Mountains, and the Flint Creek Range. The divide's northern section, where the mountains rapidly give way to prairie, is part of the Rocky Mountain Front. The front is most pronounced in the Lewis Range, located primarily in Glacier National Park. Due to the configuration of mountain ranges in Glacier National Park, the Northern Divide (which begins in Alaska's Seward Peninsula) crosses this region and turns east in Montana at Triple Divide Peak. It causes the Waterton, Belly, and Saint Mary rivers to flow north into Alberta, Canada. There they join the Saskatchewan River, which ultimately empties into Hudson Bay. East of the divide, several roughly parallel ranges cover the state's southern part, including the Gravelly Range, Madison Range, Gallatin Range, Absaroka Mountains, and Beartooth Mountains. The Beartooth Plateau is the largest continuous land mass over high in the continental United States. It contains the state's highest point, Granite Peak, high.
North of these ranges are the Big Belt Mountains, Bridger Mountains, Tobacco Roots, and several island ranges, including the Crazy Mountains and Little Belt Mountains. Between many mountain ranges are several rich river valleys. The Big Hole Valley, Bitterroot Valley, Gallatin Valley, Flathead Valley, and Paradise Valley have extensive agricultural resources and multiple opportunities for tourism and recreation. East and north of this transition zone are the expansive and sparsely populated Northern Plains, with tableland prairies, smaller island mountain ranges, and badlands. The isolated island ranges east of the Divide include the Bear Paw Mountains, Bull Mountains, Castle Mountains, Crazy Mountains, Highwood Mountains, Judith Mountains, Little Belt Mountains, Little Rocky Mountains, the Pryor Mountains, Little Snowy Mountains, Big Snowy Mountains, Sweet Grass Hills, and—in the state's southeastern corner near Ekalaka—the Long Pines. Many of these isolated eastern ranges were created about 120 to 66 million years ago when magma welling up from the interior cracked and bowed the earth's surface. The area east of the divide in the state's north-central portion is known for the Missouri Breaks and other significant rock formations. Buttes south of Great Falls, including Cascade, Crown, Square, and Shaw Buttes, are major landmarks. Known as laccoliths, they formed when igneous rock protruded through cracks in the sedimentary rock. The underlying surface consists of sandstone and shale. Surface soils in the area are highly diverse, and greatly affected by the local geology, whether glaciated plain, intermountain basin, mountain foothills, or tableland. Foothill regions are often covered in weathered stone or broken slate, or consist of uncovered bare rock (usually igneous, quartzite, sandstone, or shale).
The soil of intermountain basins usually consists of clay, gravel, sand, silt, and volcanic ash, much of it laid down by lakes that covered the region during the Oligocene 33 to 23 million years ago. Tablelands are often topped with argillite gravel and weathered quartzite, occasionally underlain by shale. The glaciated plains are generally covered in clay, gravel, sand, and silt left by the proglacial Lake Great Falls or by moraines or gravel-covered former lake basins left by the Wisconsin glaciation 85,000 to 11,000 years ago. Farther east, areas such as Makoshika State Park near Glendive and Medicine Rocks State Park near Ekalaka contain some of the most scenic badlands regions in the state. The Hell Creek Formation in northeastern Montana is a major source of dinosaur fossils. Paleontologist Jack Horner of the Museum of the Rockies in Bozeman brought this formation to the world's attention with several major finds. Rivers, lakes and reservoirs Montana has thousands of named rivers and creeks, of which are known for "blue-ribbon" trout fishing. Montana's water resources provide for recreation, hydropower, crop and forage irrigation, mining, and water for human consumption. Montana is one of few geographic areas in the world whose rivers form parts of three major watersheds (i.e., where two continental divides intersect). Its rivers feed the Pacific Ocean, the Gulf of Mexico, and Hudson Bay. The watersheds divide at Triple Divide Peak in Glacier National Park. If Hudson Bay is considered part of the Arctic Ocean, Triple Divide Peak is the only place on Earth with drainage to three different oceans. Pacific Ocean drainage basin All waters in Montana west of the divide flow into the Columbia River. The Clark Fork of the Columbia (not to be confused with the Clarks Fork of the Yellowstone River) rises near Butte and flows northwest to Missoula, where it is joined by the Blackfoot River and Bitterroot River.
Farther downstream, it is joined by the Flathead River before entering Idaho near Lake Pend Oreille. The Pend Oreille River forms the outflow of Lake Pend Oreille. The Pend Oreille River joins the Columbia River, which flows to the Pacific Ocean—making the long Clark Fork/Pend Oreille (considered a single river system) the longest river in the Rocky Mountains. The Clark Fork discharges the greatest volume of water of any river exiting the state. The Kootenai River in northwest Montana is another major tributary of the Columbia. Gulf of Mexico drainage basin East of the divide, the Missouri River, which is formed by the confluence of the Jefferson, Madison, and Gallatin Rivers near Three Forks, flows due north through the west-central part of the state to Great Falls. From this point, it flows generally east through fairly flat agricultural land and the Missouri Breaks to Fort Peck Reservoir. The stretch of river between Fort Benton and the Fred Robinson Bridge at the western boundary of Fort Peck Reservoir was designated a National Wild and Scenic River in 1976. The Missouri enters North Dakota near Fort Union, having drained more than half the land area of Montana (). Nearly one-third of the Missouri River in Montana lies behind 10 dams: Toston, Canyon Ferry, Hauser, Holter, Black Eagle, Rainbow, Cochrane, Ryan, Morony, and Fort Peck. Other major Montana tributaries of the Missouri include the Smith, Milk, Marias, Judith, and Musselshell Rivers. Montana also claims the disputed title of possessing the world's shortest river, the Roe River, just outside Great Falls. Through the Missouri, these rivers ultimately join the Mississippi River and flow into the Gulf of Mexico. Hell Roaring Creek begins in southern Montana; when combined with the Red Rock, Beaverhead, Jefferson, Missouri, and Mississippi Rivers, it forms the longest river system in North America and the fourth-longest in the world.
The Yellowstone River rises on the Continental Divide near Younts Peak in Wyoming's Teton Wilderness. It flows north through Yellowstone National Park, enters Montana near Gardiner, and passes through the Paradise Valley to Livingston. It then flows northeasterly across the state through Billings, Miles City, Glendive, and Sidney. The Yellowstone joins the Missouri in North Dakota just east of Fort Union. It is the longest undammed, free-flowing river in the contiguous United States, and drains about a quarter of Montana (). Major tributaries of the Yellowstone include the Boulder, Stillwater, Clarks Fork, Bighorn, Tongue, and Powder Rivers. Hudson Bay drainage basin The Northern Divide turns east in Montana at Triple Divide Peak, causing the Waterton, Belly, and Saint Mary Rivers to flow north into Alberta. There they join the Saskatchewan River, which ultimately empties into Hudson Bay. Lakes and reservoirs Montana has some 3,000 named lakes and reservoirs, including Flathead Lake, the largest natural freshwater lake in the western United States. Other major lakes include Whitefish Lake in the Flathead Valley and Lake McDonald and St. Mary Lake in Glacier National Park. The largest reservoir in the state is Fort Peck Reservoir on the Missouri River, which is contained by the second-largest earthen dam and largest hydraulically filled dam in the world. Other major reservoirs include Hungry Horse on the Flathead River; Lake Koocanusa on the Kootenai River; Lake Elwell on the Marias River; Clark Canyon on the Beaverhead River; Yellowtail on the Bighorn River; and Canyon Ferry, Hauser, Holter, Rainbow, and Black Eagle on the Missouri River. Flora and fauna Vegetation of the state includes lodgepole pine, ponderosa pine, Douglas fir, larch, spruce, aspen, birch, red cedar, hemlock, ash, alder, Rocky Mountain maple, and cottonwood trees. Forests cover about 25% of the state.
Flowers native to Montana include asters, bitterroots, daisies, lupins, poppies, primroses, columbine, lilies, orchids, and dryads. Several species of sagebrush and cactus and many species of grasses are common. Many species of mushrooms and lichens are also found in the state. Montana is home to diverse fauna, including 14 amphibian, 90 fish, 117 mammal, 20 reptile, and 427 bird species. Additionally, more than 10,000 invertebrate species are present, including 180 mollusks and 30 crustaceans. Montana has the largest grizzly bear population in the lower 48 states. Montana hosts five federally endangered species (black-footed ferret, whooping crane, least tern, pallid sturgeon, and white sturgeon) and seven threatened species, including the grizzly bear, Canadian lynx, and bull trout. Since reintroduction, the gray wolf population has stabilized at about 900 animals, and they have been delisted as endangered. The Montana Department of Fish, Wildlife and Parks manages fishing and hunting seasons for at least 17 species of game fish, including seven species of trout, walleye, and smallmouth bass, and at least 29 species of game birds and animals, including ring-necked pheasant, grey partridge, elk, pronghorn antelope, mule deer, whitetail deer, gray wolf, and bighorn sheep. Protected lands Montana contains Glacier National Park, "The Crown of the Continent", and parts of Yellowstone National Park, including three of the park's five entrances. Other federally recognized sites include the Little Bighorn Battlefield National Monument, Bighorn Canyon National Recreation Area, and Big Hole National Battlefield. The Bison Range is managed by the Confederated Salish and Kootenai Tribes, and the American Prairie is owned and operated by a non-profit organization. Federal and state agencies administer approximately , or 35 percent, of Montana's land. The U.S. Department of Agriculture Forest Service administers of forest land in ten National Forests.
There are approximately of wilderness in 12 separate wilderness areas that are part of the National Wilderness Preservation System established by the Wilderness Act of 1964. The U.S. Department of the Interior Bureau of Land Management controls of federal land. The U.S. Department of the Interior Fish and Wildlife Service administers 1.1 million acres of National Wildlife Refuges and waterfowl production areas in Montana. The U.S. Department of the Interior Bureau of Reclamation administers approximately of land and water surface in the state. The Montana Department of Fish, Wildlife and Parks operates approximately of state parks and access points on the state's rivers and lakes. The Montana Department of Natural Resources and Conservation manages of School Trust Land ceded by the federal government under the Land Ordinance of 1785 to the state in 1889 when Montana was granted statehood. These lands are managed by the state for the benefit of public schools and institutions in the state. Areas managed by the National Park Service include Big Hole National Battlefield near Wisdom; Bighorn Canyon National Recreation Area near Fort Smith; Glacier National Park; Grant-Kohrs Ranch National Historic Site at Deer Lodge; the Lewis and Clark National Historic Trail; Little Bighorn Battlefield National Monument near Crow Agency; Nez Perce National Historical Park; and Yellowstone National Park. Climate Montana is a large state with considerable variation in geography, topography, and elevation, and the climate is equally varied. The state spans from below the 45th parallel (the line equidistant between the equator and North Pole) to the 49th parallel, and elevations range from under to nearly above sea level. The western half is mountainous, interrupted by numerous large valleys. Eastern Montana comprises plains and badlands, broken by hills and isolated mountain ranges, and has a semiarid, continental climate (Köppen climate classification BSk).
The Continental Divide has a considerable effect on the climate, as it restricts the flow of warmer air from the Pacific from moving east, and drier continental air from moving west. The area west of the divide has a modified northern Pacific Coast climate, with milder winters, cooler summers, less wind, and a longer growing season. Low clouds and fog often form in the valleys west of the divide in winter, but this is rarely seen in the east. Average daytime temperatures vary from in January to in July. The variation in geography leads to great variation in temperature. The highest observed summer temperature was at Glendive on July 20, 1893, and Medicine Lake on July 5, 1937. Throughout the state, summer nights are generally cool and pleasant. Extreme hot weather is less common above . Snowfall has been recorded in all months of the year in the more mountainous areas of central and western Montana, though it is rare in July and August. The coldest temperature on record for Montana is also the coldest temperature for the contiguous United States. On January 20, 1954, was recorded at a gold mining camp near Rogers Pass. Temperatures vary greatly on cold nights; Helena, to the southeast, had a low of only on the same date, and an all-time record low of . Winter cold spells are usually the result of cold continental air coming south from Canada. The front is often well defined, causing a large temperature drop in a 24-hour period. Conversely, air flow from the southwest results in "chinooks". These steady (or more) winds can suddenly warm parts of Montana, especially areas just to the east of the mountains, where temperatures sometimes rise up to for 10 days or longer. Loma is the site of the most extreme recorded temperature change in a 24-hour period in the United States. On January 15, 1972, a chinook wind blew in and the temperature rose from . Average annual precipitation is , but great variations are seen.
The mountain ranges block the moist Pacific air, holding moisture in the western valleys, and creating rain shadows to the east. Heron, in the west, receives the most precipitation, . On the eastern (leeward) side of a mountain range, the valleys are much drier; Lonepine averages , and Deer Lodge of precipitation. The mountains can receive over , for example the Grinnell Glacier in Glacier National Park gets . An area southwest of Belfry averaged only over a 16-year period. Most of the larger cities get of snow each year. Mountain ranges can accumulate of snow during a winter. Heavy snowstorms may occur from September through May, though most snow falls from November to March. The climate has become warmer in Montana and continues to do so. The glaciers in Glacier National Park have receded and are predicted to melt away completely in a few decades. Many Montana cities set heat records during July 2007, the hottest month ever recorded in Montana. Winters are warmer, too, and have fewer cold spells. Previously, these cold spells had killed off bark beetles, but these are now attacking the forests of western Montana. The warmer winters in the region have allowed various species to expand their ranges and proliferate. The combination of warmer weather, attack by beetles, and mismanagement has led to a substantial increase in the severity of forest fires in Montana. According to a study done for the U.S. Environmental Protection Agency by the Harvard School of Engineering and Applied Science, parts of Montana will experience a 200% increase in area burned by wildfires and an 80% increase in related air pollution. The table below lists average temperatures for the warmest and coldest month for Montana's seven largest cities. The coldest month varies between December and January depending on location, although figures are similar throughout. Antipodes Montana is one of only two contiguous states (along with Colorado) that are antipodal to land. 
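The antipodal relationship described above follows from simple spherical arithmetic: the antipode of a point at latitude φ and longitude λ is (−φ, λ ± 180°). A minimal sketch of the check (the border coordinates below are illustrative approximations, not figures taken from this article):

```python
def antipode(lat, lon):
    """Return the point diametrically opposite (lat, lon), in degrees.

    Latitude is negated; longitude is shifted by 180° and kept in (-180, 180].
    """
    anti_lat = -lat
    anti_lon = lon + 180 if lon < 0 else lon - 180
    return anti_lat, anti_lon

# The Montana–Saskatchewan–Alberta border corner sits near 49°N, 110°W
# (approximate, illustrative coordinates).
print(antipode(49.0, -110.0))  # → (-49.0, 70.0), near the Kerguelen Islands (~49°S, 69–70°E)
```

Negating the latitude and shifting the longitude by half a turn is exact for a spherical model, which is sufficient at the precision discussed here.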
The Kerguelen Islands are antipodal to the Montana–Saskatchewan–Alberta border. No towns are precisely antipodal to Kerguelen, though Chester and Rudyard are close. Cities and towns Montana has 56 counties and a total of 364 "places" as defined by the United States Census Bureau, the latter comprising 129 incorporated places and 235 census-designated places. The incorporated places are made up of 52 cities, 75 towns, and two consolidated city-counties. Montana has one city, Billings, with a population over 100,000, and three cities with populations over 50,000: Missoula, Great Falls, and Bozeman. The state also has five Micropolitan Statistical Areas, centered on Bozeman, Butte, Helena, Kalispell, and Havre. Collectively, all of these areas (excluding Havre) are known informally as the "big seven", as they are consistently the seven largest communities in the state (their rank order in terms of population is Billings, Missoula, Great Falls, Bozeman, Butte, Helena, and Kalispell, according to the 2010 U.S. Census). Based on 2013 census numbers, they contain 35 percent of Montana's population, and the counties in which they are located are home to 62 percent of the state's population. The geographic center of population of Montana is in sparsely populated Meagher County, in the town of White Sulphur Springs. Demographics The United States Census Bureau states that the population of Montana was 1,085,407 on April 1, 2020, a 9.7% increase since the 2010 United States census. The 2010 census put Montana's population at 989,415. During the first decade of the new century, growth was mainly concentrated in Montana's seven largest counties, with the highest percentage growth in Gallatin County, which had a 32% increase in its population from 2000 to 2010. The city with the largest percentage growth was Kalispell, with 40.1%, and the city with the largest increase in actual residents was Billings, with an increase in population of 14,323 from 2000 to 2010.
On January 3, 2012, the Census and Economic Information Center (CEIC) at the Montana Department of Commerce estimated Montana had hit the one million population mark sometime between November and December 2011. According to the 2020 census, 88.9% of the population was White (87.8% non-Hispanic White), 6.7% American Indian and Alaska Native, 4.1% Hispanic or Latino of any race, 0.9% Asian, 0.6% Black or African American, 0.1% Native Hawaiian and other Pacific Islander, and 2.8% from two or more races. The largest European ancestry groups in Montana as of 2010 were German (27.0%), Irish (14.8%), English (12.6%), Norwegian (10.9%), French (4.7%), and Italian (3.4%). Intrastate demographics Montana has a larger Native American population, both numerically and as a percentage, than most U.S. states. Ranked 45th in population (by the 2010 Census), it is 19th in Native people, who are 6.5% of the state's population—the sixth-highest percentage of all fifty states. Of Montana's 56 counties, Native Americans constitute a majority in three: Big Horn, Glacier, and Roosevelt. Other counties with large Native American populations include Blaine, Cascade, Hill, Missoula, and Yellowstone Counties. The state's Native American population grew by 27.9% between 1980 and 1990 (at a time when Montana's entire population rose 1.6%), and by 18.5% between 2000 and 2010. As of 2009, almost two-thirds of Native Americans in the state live in urban areas. Of Montana's 20 largest cities, Polson (15.7%), Havre (13.0%), Great Falls (5.0%), Billings (4.4%), and Anaconda (3.1%) had the greatest percentages of Native American residents in 2010. Billings (4,619), Great Falls (2,942), Missoula (1,838), Havre (1,210), and Polson (706) have the most Native Americans living there. The state's seven reservations include more than 12 distinct Native American ethnolinguistic groups.
While the largest European-American population in Montana overall is German (which may also include Austrian and Swiss, among other groups), pockets of significant Scandinavian ancestry are prevalent in some of the farming-dominated northern and eastern prairie regions, parallel to nearby regions of North Dakota and Minnesota. Farmers of Irish, Scots, and English roots also settled in Montana. The historically mining-oriented communities of western Montana such as Butte have a wider range of European-American ethnicity; Finns, Eastern Europeans, and especially Irish settlers left an indelible mark on the area, as did people originally from British mining regions such as Cornwall, Devon, and Wales. The nearby city of Helena, also founded as a mining camp, had a similar mix in addition to a small Chinatown. Many of Montana's historic logging communities originally attracted people of Scottish, Scandinavian, Slavic, English, and Scots-Irish descent. The Hutterites, an Anabaptist sect originally from Switzerland, settled here, and today Montana is second only to South Dakota in U.S. Hutterite population, with several colonies spread across the state. Beginning in the mid-1990s, the state also had an influx of Amish, who moved to Montana from the increasingly urbanized areas of Ohio and Pennsylvania. Montana's Hispanic population is concentrated in the Billings area in south-central Montana, where many of Montana's Mexican-Americans have been in the state for generations. Great Falls has the highest percentage of African-Americans in its population, although Billings has more African-American residents than Great Falls. The Chinese in Montana, while a low percentage today, have been an important presence. About 2,000–3,000 Chinese miners were in the mining areas of Montana by 1870, and 2,500 in 1890. However, public opinion grew increasingly negative toward them in the 1890s, and nearly half of the state's Asian population had left the state by 1900.
Today, the Missoula area has a large Hmong population, and the nearly 3,000 Montanans who claim Filipino ancestry are the largest Asian-American group in the state. In the 2015 United States census estimates, Montana had the second-highest percentage of U.S. military veterans among the states. Only Alaska had a higher percentage, with roughly 14 percent of its population over 18 being veterans, compared with roughly 12 percent in Montana. Native Americans About 66,000 people of Native American heritage live in Montana. Stemming from multiple treaties and federal legislation, including the Indian Appropriations Act (1851), the Dawes Act (1887), and the Indian Reorganization Act (1934), seven Indian reservations, encompassing 11 federally recognized tribal nations, were created in Montana. A 12th nation, the Little Shell Chippewa, is a "landless" people headquartered in Great Falls; it is recognized by the state of Montana, but not by the U.S. government. The Blackfeet nation is headquartered on the Blackfeet Indian Reservation (1851) in Browning, Crow on the Crow Indian Reservation (1868) in Crow Agency, Confederated Salish and Kootenai and Pend d'Oreille on the Flathead Indian Reservation (1855) in Pablo, Northern Cheyenne on the Northern Cheyenne Indian Reservation (1884) at Lame Deer, Assiniboine and Gros Ventre on the Fort Belknap Indian Reservation (1888) in Fort Belknap Agency, Assiniboine and Sioux on the Fort Peck Indian Reservation (1888) at Poplar, and Chippewa-Cree on the Rocky Boy's Indian Reservation (1916) near Box Elder. Approximately 63% of all Native people live off the reservations, concentrated in the larger Montana cities, with the largest concentration of urban Indians in Great Falls. The state also has a small Métis population, and 1990 census data indicated that people from as many as 275 different tribes lived in Montana.
Montana's Constitution specifically reads, "the state recognizes the distinct and unique cultural heritage of the American Indians and is committed in its educational goals to the preservation of their cultural integrity." It is the only state in the U.S. with such a constitutional mandate. The Indian Education for All Act was passed in 1999 to provide funding for this mandate and ensure implementation. It mandates that all schools teach American Indian history, culture, and heritage from preschool through college. For kindergarten through 12th-grade students, an "Indian Education for All" curriculum from the Montana Office of Public Instruction is available free to all schools. The state was sued in 2004 because of lack of funding, and the state has increased its support of the program. South Dakota passed similar legislation in 2007, and Wisconsin was working to strengthen its own program based on this model—and the current practices of Montana's schools. Each Indian reservation in the state has a fully accredited tribal college. The University of Montana "was the first to establish dual admission agreements with all of the tribal colleges and as such it was the first institution in the nation to actively facilitate student transfer from the tribal colleges." Birth data Note: Births in table do not add up, because Hispanics are counted both by their ethnicity and by their race, giving a higher overall number. Since 2016, data for births of White Hispanic origin are not collected, but included in one Hispanic group; persons of Hispanic origin may be of any race. Languages English is the official language in the state of Montana, as it is in many U.S. states. According to the 2000 Census, 94.8% of the population aged five and older speak English at home. Spanish is the language next most commonly spoken at home, with about 13,040 Spanish-language speakers in the state (1.4% of the population) in 2011. 
Also, 15,438 (1.7% of the state population) were speakers of Indo-European languages other than English or Spanish, 10,154 (1.1%) were speakers of a Native American language, and 4,052 (0.4%) were speakers of an Asian or Pacific Islander language. Other languages spoken in Montana (as of 2013) include Assiniboine (about 150 speakers in Montana and Canada), Blackfoot (about 100 speakers), Cheyenne (about 1,700 speakers), Plains Cree (about 100 speakers), Crow (about 3,000 speakers), Dakota (about 18,800 speakers in Minnesota, Montana, Nebraska, North Dakota, and South Dakota), German Hutterite (about 5,600 speakers), Gros Ventre (about 10 speakers), Kalispel-Pend d'Oreille (about 64 speakers), Kutenai (about six speakers), and Lakota (about 6,000 speakers in Minnesota, Montana, Nebraska, North Dakota, and South Dakota). The United States Department of Education estimated in 2009 that 5,274 students in Montana spoke a language at home other than English. These included a Native American language (64%), German (4%), Spanish (3%), Russian (1%), and Chinese (less than 0.5%). Religion According to the Pew Forum, the religious affiliations of the people of Montana are: Protestant 47%, Catholic 23%, LDS (Mormon) 5%, Jehovah's Witness 2%, Buddhist 1%, Jewish 0.5%, Muslim 0.5%, Hindu 0.5%, and nonreligious 20%. The largest denominations in Montana as of 2010 were the Catholic Church with 127,612 adherents, the Church of Jesus Christ of Latter-day Saints with 46,484 adherents, the Evangelical Lutheran Church in America with 38,665 adherents, and nondenominational Evangelical Protestant with 27,370 adherents. Economy The U.S. Bureau of Economic Analysis estimated Montana's state product was $51.91 billion (47th in the nation) and per capita personal income was $41,280 (37th in the nation). Total employment: 371,239. Total employer establishments: 38,720. Montana is a relative hub of beer microbrewing, ranking third in the nation in number of craft breweries per capita in 2011. 
Significant industries exist for lumber and mineral extraction; the state's resources include gold, coal, silver, talc, and vermiculite. Ecotaxes on resource extraction are numerous. A 1974 state severance tax on coal (which varied from 20 to 30%) was upheld by the Supreme Court of the United States in Commonwealth Edison Co. v. Montana, 453 U.S. 609 (1981). Tourism is also important to the economy, with more than ten million visitors a year to Glacier National Park, Flathead Lake, the Missouri River headwaters, the site of the Battle of the Little Bighorn, and three of the five entrances to Yellowstone National Park. Montana's personal income tax contains seven brackets, with rates ranging from 1.0 to 6.9 percent. Montana has no sales tax, and household goods are exempt from property taxes. However, property taxes are assessed on livestock, farm machinery, heavy equipment, automobiles, trucks, and business equipment. The amount of property tax owed is not determined solely by the property's value. The property's value is multiplied by a tax rate, set by the Montana Legislature, to determine its taxable value. The taxable value is then multiplied by the mill levy established by various taxing jurisdictions—city and county government, school districts, and others. In the 1980s, the absence of a sales tax became economically deleterious to communities dependent on the state's tourism industry, as the revenue from income and property taxes paid by residents fell far short of covering the impact of non-resident travel—especially road repair. In 1985, the Montana Legislature passed a law allowing towns with fewer than 5,500 residents and unincorporated communities with fewer than 2,500 to levy a resort tax if more than half the community's income came from tourism. 
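The two-step arithmetic described above (value × tax rate, then × mill levy) can be sketched as follows; the market value, tax rate, and mill levy figures below are hypothetical illustrations, not actual Montana rates.

```python
# Hedged sketch of the property-tax arithmetic described above.
# All figures (market value, tax rate, mill levy) are hypothetical.

def property_tax(market_value: float, tax_rate: float, mills: float) -> float:
    """Taxable value = market value x tax rate; one mill = $1 per $1,000 of taxable value."""
    taxable_value = market_value * tax_rate
    return taxable_value * mills / 1000.0

# A hypothetical $300,000 property, a 1.35% tax rate, and a combined 650-mill levy:
print(round(property_tax(300_000, 0.0135, 650), 2))  # 2632.5
```

Note that the mill levy is the sum of the levies set by each overlapping jurisdiction (city, county, school district), so the same taxable value can owe different amounts in different places.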
The resort tax is a sales tax that applies to hotels, motels, and other lodging and camping facilities; restaurants, fast-food stores, and other food service establishments; taverns, bars, night clubs, lounges, and other public establishments that serve alcohol; as well as destination ski resorts or other destination recreational facilities. It also applies to "luxuries", defined by law as any item normally sold to the public or to transient visitors or tourists, excluding food purchased unprepared or unserved, medicine, medical supplies and services, appliances, hardware supplies and tools, and other necessities of life. Approximately 12.2 million non-residents visited Montana in 2018, when the resident population was estimated at 1.06 million. This disproportionate ratio of residents paying taxes to non-residents using state-funded services and infrastructure makes Montana's resort tax crucial for maintaining heavily used roads and highways, as well as for protecting and preserving state parks. The state's unemployment rate is 3.5%. 
Education Colleges and universities The Montana University System consists of:
Dawson Community College
Flathead Valley Community College
Miles Community College
Montana State University (Bozeman)
  Gallatin College Montana State University (Bozeman)
Montana State University Billings
  City College at Montana State University Billings (Billings)
Montana State University-Northern (Havre)
Great Falls College Montana State University (Great Falls)
University of Montana (Missoula)
  Missoula College University of Montana (Missoula)
Montana Tech of the University of Montana (Butte)
  Highlands College of Montana Tech (Butte)
University of Montana Western (Dillon)
Helena College University of Montana (Helena)
Bitterroot College University of Montana (Hamilton)
Tribal colleges in Montana include:
Aaniiih Nakoda College (Harlem)
Blackfeet Community College (Browning)
Chief Dull Knife College (Lame Deer)
Fort Peck Community College (Poplar)
Little Big Horn College (Crow Agency)
Salish Kootenai College (Pablo)
Stone Child College (Box Elder)
Four private colleges are in Montana:
Carroll College
Rocky Mountain College
University of Providence
Apollos University
Schools The Montana Territory was formed on April 26, 1864, when the U.S. passed the Organic Act. Schools started forming in the area before it was officially a territory as families began settling there. The first schools were subscription schools that typically met in the teacher's home. The first formal school on record was at Fort Owen in the Bitterroot Valley in 1862. The students were Indian children and the children of Fort Owen employees. The first school term started in early winter and lasted only until February 28. Classes were taught by a Mr. Robinson. Another early subscription school was started by Thomas Dimsdale in Virginia City in 1863. In this school, students were charged $1.75 per week. The Montana Territorial Legislative Assembly had its inaugural meeting in 1864. 
The first legislature authorized counties to levy taxes for schools, which set the foundation for public schooling. Madison County was the first to take advantage of the newly authorized taxes, and it formed the first public school in Virginia City in 1866. The first school year was scheduled to begin in January 1866, but severe weather postponed its opening until March. The first school year ran through the summer and did not end until August 17.
the original text, getting the "gist" of it (a process called "gisting"). This is sufficient for many purposes, including making best use of the finite and expensive time of a human translator, reserved for those cases in which total accuracy is indispensable. Approaches Machine translation can use a method based on linguistic rules, meaning that words are translated on a linguistic basis: the most suitable words of the target language replace those of the source language. It is often argued that the success of machine translation requires the problem of natural language understanding to be solved first. Generally, rule-based methods parse a text, usually creating an intermediary, symbolic representation, from which the text in the target language is generated. According to the nature of the intermediary representation, an approach is described as interlingual machine translation or transfer-based machine translation. These methods require extensive lexicons with morphological, syntactic, and semantic information, and large sets of rules. Given enough data, machine translation programs often work well enough for a native speaker of one language to get the approximate meaning of what is written by a native speaker of the other. The difficulty is getting enough data of the right kind to support the particular method. For example, the large multilingual corpus of data needed for statistical methods to work is not necessary for the grammar-based methods. The grammar-based methods, in turn, need a skilled linguist to carefully design the grammar that they use. To translate between closely related languages, the technique referred to as rule-based machine translation may be used. Rule-based The rule-based machine translation paradigm includes transfer-based machine translation, interlingual machine translation and dictionary-based machine translation paradigms. 
This type of translation is used mostly in the creation of dictionaries and grammar programs. Unlike other methods, RBMT involves more information about the linguistics of the source and target languages, using the morphological and syntactic rules and semantic analysis of both languages. The basic approach involves linking the structure of the input sentence with the structure of the output sentence using a parser and an analyzer for the source language, a generator for the target language, and a transfer lexicon for the actual translation. RBMT's biggest downfall is that everything must be made explicit: orthographical variation and erroneous input must be made part of the source language analyser in order to cope with it, and lexical selection rules must be written for all instances of ambiguity. Adapting to new domains in itself is not that hard, as the core grammar is the same across domains, and the domain-specific adjustment is limited to lexical selection adjustment. Transfer-based machine translation Transfer-based machine translation is similar to interlingual machine translation in that it creates a translation from an intermediate representation that simulates the meaning of the original sentence. Unlike interlingual MT, it depends partially on the language pair involved in the translation. Interlingual Interlingual machine translation is one instance of rule-based machine-translation approaches. In this approach, the source language, i.e. the text to be translated, is transformed into an interlingual language, i.e. a "language neutral" representation that is independent of any language. The target language is then generated out of the interlingua. One of the major advantages of this system is that the interlingua becomes more valuable as the number of target languages it can be turned into increases. 
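The analyse–transfer–generate pipeline just described can be sketched in miniature. The fragment grammar, lexicon, and reordering rule below are invented for illustration and cover only a determiner–adjective–noun phrase; a real RBMT system would need full morphological and syntactic rule sets.

```python
# Toy transfer-style pipeline over a tiny hypothetical English->Spanish fragment:
# analyse the source into a structure, transfer words via a lexicon, then
# generate target text with a reordering rule (the adjective follows the noun).

LEXICON = {"the": "el", "red": "rojo", "car": "coche"}

def analyse(phrase: str) -> dict:
    # Stand-in for a parser: assumes a fixed det-adj-noun phrase shape.
    det, adj, noun = phrase.split()
    return {"det": det, "adj": adj, "noun": noun}

def transfer(struct: dict) -> dict:
    # Transfer lexicon maps each source word to a target word.
    return {role: LEXICON[word] for role, word in struct.items()}

def generate(struct: dict) -> str:
    # Target-language generation rule: Spanish places the adjective after the noun.
    return f"{struct['det']} {struct['noun']} {struct['adj']}"

print(generate(transfer(analyse("the red car"))))  # el coche rojo
```

Even at this scale, the downfall noted above is visible: every reordering rule and every lexical entry must be written explicitly, and an input outside the expected shape simply fails.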
However, the only interlingual machine translation system that has been made operational at the commercial level is the KANT system (Nyberg and Mitamura, 1992), which is designed to translate Caterpillar Technical English (CTE) into other languages. Dictionary-based Machine translation can use a method based on dictionary entries, which means that the words will be translated as they are by a dictionary. Statistical Statistical machine translation tries to generate translations using statistical methods based on bilingual text corpora, such as the Canadian Hansard corpus, the English-French record of the Canadian Parliament, and EUROPARL, the record of the European Parliament. Where such corpora are available, good results can be achieved translating similar texts, but such corpora are still rare for many language pairs. The first statistical machine translation software was CANDIDE from IBM. Google used SYSTRAN for several years but switched to a statistical translation method in October 2007. In 2005, Google improved its internal translation capabilities by using approximately 200 billion words from United Nations materials to train its system; translation accuracy improved. Google Translate and similar statistical translation programs work by detecting patterns in hundreds of millions of documents that have previously been translated by humans and making intelligent guesses based on the findings. Generally, the more human-translated documents available in a given language, the more likely it is that the translation will be of good quality. Newer approaches to statistical machine translation, such as METIS II and PRESEMT, use minimal corpus size and instead focus on derivation of syntactic structure through pattern recognition. With further development, this may allow statistical machine translation to operate from a monolingual text corpus. 
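The statistical idea of picking the translation most often observed in aligned human translations can be sketched as follows. The alignment counts below are invented toy numbers; a real system would estimate such statistics from millions of sentence pairs and combine them with a language model.

```python
# Toy sketch of the statistical approach: score candidate translations of a
# word by how often each pairing was seen in a (hypothetical) aligned corpus,
# and pick the most frequent one.
from collections import Counter

# Pretend alignment counts harvested from a bilingual corpus: (source, target) -> count
ALIGN_COUNTS = Counter({
    ("bank", "banque"): 120,   # financial sense, seen often
    ("bank", "rive"): 15,      # river-bank sense, seen rarely
    ("river", "rivière"): 90,
})

def best_translation(word: str) -> str:
    # Restrict to pairings for this source word, then take the most frequent target.
    candidates = {t: c for (s, t), c in ALIGN_COUNTS.items() if s == word}
    return max(candidates, key=candidates.get)

print(best_translation("bank"))  # banque
```

This also shows why the approach favours whatever dominates the training data: "bank" is translated in its financial sense simply because that pairing is more frequent, regardless of the actual context.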
SMT's biggest downfalls include its dependence on huge amounts of parallel text, its problems with morphology-rich languages (especially with translating into such languages), and its inability to correct singleton errors. Example-based The example-based machine translation (EBMT) approach was proposed by Makoto Nagao in 1984. Example-based machine translation is based on the idea of analogy. In this approach, the corpus that is used is one that contains texts that have already been translated. Given a sentence that is to be translated, sentences from this corpus are selected that contain similar sub-sentential components. The similar sentences are then used to translate the sub-sentential components of the original sentence into the target language, and these phrases are put together to form a complete translation. Hybrid MT Hybrid machine translation (HMT) leverages the strengths of statistical and rule-based translation methodologies. Several MT organizations claim a hybrid approach that uses both rules and statistics. The approaches differ in a number of ways: Rules post-processed by statistics: Translations are performed using a rule-based engine. Statistics are then used in an attempt to adjust/correct the output from the rules engine. Statistics guided by rules: Rules are used to pre-process data in an attempt to better guide the statistical engine. Rules are also used to post-process the statistical output to perform functions such as normalization. This approach has much more power, flexibility, and control when translating. It also provides extensive control over the way in which the content is processed during both pre-translation (e.g. markup of content and non-translatable terms) and post-translation (e.g. post-translation corrections and adjustments). More recently, with the advent of neural MT, a new version of hybrid machine translation is emerging that combines the benefits of rules, statistical, and neural machine translation. 
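The retrieval step at the heart of the example-based approach described earlier can be sketched as a nearest-neighbour lookup over a translated corpus. The two English–German sentence pairs below are invented toy data, and the similarity measure is simple word overlap; real EBMT systems use richer sub-sentential matching.

```python
# Toy example-based MT sketch: retrieve the corpus sentence most similar to the
# input (by word-overlap score) and reuse its stored translation as the basis
# for the new translation. The tiny English->German corpus is hypothetical.
CORPUS = [
    ("how much is that red umbrella", "wieviel kostet dieser rote regenschirm"),
    ("how much is that small camera", "wieviel kostet diese kleine kamera"),
]

def most_similar(sentence: str):
    # Score each stored pair by how many words its source side shares with the input.
    words = set(sentence.split())
    return max(CORPUS, key=lambda pair: len(words & set(pair[0].split())))

src, tgt = most_similar("how much is that red coat")
print(src)  # how much is that red umbrella
print(tgt)  # wieviel kostet dieser rote regenschirm
```

A full EBMT system would then substitute the translation of the differing fragment ("coat" for "umbrella") into the retrieved target sentence, which is the analogy step the text describes.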
The approach allows benefitting from pre- and post-processing in a rule-guided workflow as well as benefitting from NMT and SMT. The downside is the inherent complexity, which makes the approach suitable only for specific use cases. Neural MT A deep learning-based approach to MT, neural machine translation has made rapid progress in recent years, and Google has announced its translation services are now using this technology in preference to its previous statistical methods. A Microsoft team claimed to have reached human parity on WMT-2017 ("EMNLP 2017 Second Conference On Machine Translation") in 2018, marking a historical milestone. However, many researchers have criticized this claim, rerunning and discussing their experiments; the current consensus is that the so-called human parity achieved is not real, being based wholly on limited domains, language pairs, and certain test suites; i.e., it lacks statistical significance. There is still a long journey before NMT reaches real human parity performance. To address idiomatic phrase translation, multi-word expressions, and low-frequency words (also called OOV, or out-of-vocabulary, word translation), language-focused linguistic features have been explored in state-of-the-art neural machine translation (NMT) models. For instance, Chinese character decompositions into radicals and strokes have proven to be helpful for translating multi-word expressions in NMT. Major issues Disambiguation Word-sense disambiguation concerns finding a suitable translation when a word can have more than one meaning. The problem was first raised in the 1950s by Yehoshua Bar-Hillel. He pointed out that without a "universal encyclopedia", a machine would never be able to distinguish between the two meanings of a word. Today there are numerous approaches designed to overcome this problem. They can be approximately divided into "shallow" approaches and "deep" approaches. Shallow approaches assume no knowledge of the text. 
They simply apply statistical methods to the words surrounding the ambiguous word. Deep approaches presume a comprehensive knowledge of the word. So far, shallow approaches have been more successful. Claude Piron, a long-time translator for the United Nations and the World Health Organization, wrote that machine translation, at its best, automates the easier part of a translator's job; the harder and more time-consuming part usually involves doing extensive research to resolve ambiguities in the source text, which the grammatical and lexical exigencies of the target language require to be resolved. The ideal deep approach would require the translation software to do all the research necessary for this kind of disambiguation on its own; but this would require a higher degree of AI than has yet been attained. A shallow approach which simply guessed at the sense of the ambiguous English phrase that Piron mentions (based, perhaps, on which kind of prisoner-of-war camp is more often mentioned in a given corpus) would have a reasonable chance of guessing wrong fairly often. A shallow approach that involves "ask the user about each ambiguity" would, by Piron's estimate, only automate about 25% of a professional translator's job, leaving the harder 75% still to be done by a human. Non-standard speech One of the major pitfalls of MT is its inability to translate non-standard language with the same accuracy as standard language. Heuristic or statistics-based MT takes input from various sources in the standard form of a language. Rule-based translation, by nature, does not include common non-standard usages. This causes errors in translation from a vernacular source or into colloquial language. Limitations on translation from casual speech present issues in the use of machine translation in mobile devices. 
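A "shallow" approach of the kind described above can be sketched as a simple vote over surrounding words. The sense inventory and cue words below are invented for the example; real systems learn such associations from corpus statistics rather than hand lists.

```python
# Sketch of a shallow word-sense disambiguation heuristic: pick the sense of an
# ambiguous word by counting context words associated with each sense.
# Sense labels and cue words here are hypothetical toy data.
SENSE_CUES = {
    "bank/finance": {"money", "loan", "deposit", "account"},
    "bank/river": {"river", "water", "shore", "fishing"},
}

def disambiguate(context: list[str]) -> str:
    # Choose the sense whose cue words overlap most with the surrounding words.
    ctx = set(context)
    return max(SENSE_CUES, key=lambda sense: len(SENSE_CUES[sense] & ctx))

print(disambiguate(["deposit", "the", "money", "in", "the", "bank"]))  # bank/finance
```

The method uses no understanding of the text, which is exactly why it fails when the context contains no cue words at all, the situation a "deep", knowledge-based approach is meant to handle.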
Named entities In information extraction, named entities, in a narrow sense, refer to concrete or abstract entities in the real world such as people, organizations, companies, and places that have a proper name: George Washington, Chicago, Microsoft. The term also covers expressions of time, space, and quantity, such as 1 July 2011 or $500. In the sentence "Smith is the president of Fabrionix", both Smith and Fabrionix are named entities, and can be further qualified via first name or other information; "president" is not, since Smith could have earlier held another position at Fabrionix, e.g. vice president. The term rigid designator is what defines these usages for analysis in statistical machine translation. Named entities must first be identified in the text; if not, they may be erroneously translated as common nouns, which would most likely not affect the BLEU rating of the translation but would change the text's human readability. They may also be omitted from the output translation, which would have implications for the text's readability and message. Transliteration involves finding the letters in the target language that most closely correspond to the name in the source language. This, however, has been cited as sometimes worsening the quality of translation. For "Southern California", the first word should be translated directly, while the second word should be transliterated. Machines often transliterate both because they treat them as one entity. Words like these are hard for machine translators, even those with a transliteration component, to process. Use of a "do-not-translate" list, which has the same end goal (transliteration as opposed to translation), still relies on correct identification of named entities. A third approach is a class-based model: named entities are replaced with a token representing their "class"; "Ted" and "Erica" would both be replaced with a "person" class token. 
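The class-token replacement just described can be sketched as a mask-and-restore step around translation. The name list and the `<person>` token below are hypothetical; a real system would use a trained named-entity recognizer rather than a fixed list.

```python
# Sketch of the class-based treatment of named entities: replace each known
# name with a class token before statistics/translation, then restore it after.
# The name list and token are hypothetical toy choices.
PERSON_NAMES = {"Ted", "Erica", "Ankit", "David"}

def mask(tokens: list[str]):
    # Swap each recognized name for a class token, remembering the originals in order.
    slots, masked = [], []
    for tok in tokens:
        if tok in PERSON_NAMES:
            slots.append(tok)
            masked.append("<person>")
        else:
            masked.append(tok)
    return masked, slots

def unmask(tokens: list[str], slots: list[str]) -> list[str]:
    # Put the original names back into the class-token positions, in order.
    it = iter(slots)
    return [next(it) if tok == "<person>" else tok for tok in tokens]

masked, slots = mask("Ankit is going for a walk".split())
print(" ".join(masked))                  # <person> is going for a walk
print(" ".join(unmask(masked, slots)))   # Ankit is going for a walk
```

After masking, "David is going for a walk" and "Ankit is going for a walk" become the same sentence, so the rarity of a particular name in the training data no longer affects the translation probability.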
Then the statistical distribution and use of person names, in general, can be analyzed instead of looking at the distributions of "Ted" and "Erica" individually, so that the probability of a given name in a specific language will not affect the assigned probability of a translation. A study by Stanford on improving this area of translation gives the examples that different probabilities will be assigned to "David is going for a walk" and "Ankit is going for a walk" for English as a target language due to the different number of occurrences for each name in the training data. A frustrating outcome of the same study by Stanford (and other attempts to improve named recognition translation) is that many times, a decrease in the BLEU scores for translation will result from the
interlingual machine translation and dictionary-based machine translation paradigms. This type of translation is used mostly in the creation of dictionaries and grammar programs. Unlike other methods, RBMT involves more information about the linguistics of the source and target languages, using the morphological and syntactic rules and semantic analysis of both languages. The basic approach involves linking the structure of the input sentence with the structure of the output sentence using a parser and an analyzer for the source language, a generator for the target language, and a transfer lexicon for the actual translation. RBMT's biggest downfall is that everything must be made explicit: orthographical variation and erroneous input must be made part of the source language analyser in order to cope with it, and lexical selection rules must be written for all instances of ambiguity. Adapting to new domains in itself is not that hard, as the core grammar is the same across domains, and the domain-specific adjustment is limited to lexical selection adjustment. Transfer-based machine translation Transfer-based machine translation is similar to interlingual machine translation in that it creates a translation from an intermediate representation that simulates the meaning of the original sentence. Unlike interlingual MT, it depends partially on the language pair involved in the translation. Interlingual Interlingual machine translation is one instance of rule-based machine-translation approaches. In this approach, the source language, i.e. the text to be translated, is transformed into an interlingual language, i.e. a "language neutral" representation that is independent of any language. The target language is then generated out of the interlingua. One of the major advantages of this system is that the interlingua becomes more valuable as the number of target languages it can be turned into increases. 
However, the only interlingual machine translation system that has been made operational at the commercial level is the KANT system (Nyberg and Mitamura, 1992), which is designed to translate Caterpillar Technical English (CTE) into other languages. Dictionary-based Machine translation can use a method based on dictionary entries, which means that the words will be translated as they are by a dictionary. Statistical Statistical machine translation tries to generate translations using statistical methods based on bilingual text corpora, such as the Canadian Hansard corpus, the English-French record of the Canadian parliament and EUROPARL, the record of the European Parliament. Where such corpora are available, good results can be achieved translating similar texts, but such corpora are still rare for many language pairs. The first statistical machine translation software was CANDIDE from IBM. Google used SYSTRAN for several years, but switched to a statistical translation method in October 2007. In 2005, Google improved its internal translation capabilities by using approximately 200 billion words from United Nations materials to train their system; translation accuracy improved. Google Translate and similar statistical translation programs work by detecting patterns in hundreds of millions of documents that have previously been translated by humans and making intelligent guesses based on the findings. Generally, the more human-translated documents available in a given language, the more likely it is that the translation will be of good quality. Newer approaches into Statistical Machine translation such as METIS II and PRESEMT use minimal corpus size and instead focus on derivation of syntactic structure through pattern recognition. With further development, this may allow statistical machine translation to operate off of a monolingual text corpus. 
SMT's biggest downfall includes it being dependent upon huge amounts of parallel texts, its problems with morphology-rich languages (especially with translating into such languages), and its inability to correct singleton errors. Example-based Example-based machine translation (EBMT) approach was proposed by Makoto Nagao in 1984. Example-based machine translation is based on the idea of analogy. In this approach, the corpus that is used is one that contains texts that have already been translated. Given a sentence that is to be translated, sentences from this corpus are selected that contain similar sub-sentential components. The similar sentences are then used to translate the sub-sentential components of the original sentence into the target language, and these phrases are put together to form a complete translation. Hybrid MT Hybrid machine translation (HMT) leverages the strengths of statistical and rule-based translation methodologies. Several MT organizations claim a hybrid approach that uses both rules and statistics. The approaches differ in a number of ways: Rules post-processed by statistics: Translations are performed using a rules based engine. Statistics are then used in an attempt to adjust/correct the output from the rules engine. Statistics guided by rules: Rules are used to pre-process data in an attempt to better guide the statistical engine. Rules are also used to post-process the statistical output to perform functions such as normalization. This approach has a lot more power, flexibility and control when translating. It also provides extensive control over the way in which the content is processed during both pre-translation (e.g. markup of content and non-translatable terms) and post-translation (e.g. post translation corrections and adjustments). More recently, with the advent of Neural MT, a new version of hybrid machine translation is emerging that combines the benefits of rules, statistical and neural machine translation. 
The approach allows benefitting from pre- and post-processing in a rule guided workflow as well as benefitting from NMT and SMT. The downside is the inherent complexity which makes the approach suitable only for specific use cases. Neural MT A deep learning-based approach to MT, neural machine translation has made rapid progress in recent years, and Google has announced its translation services are now using this technology in preference over its previous statistical methods. A Microsoft team claimed to have reached human parity on WMT-2017 ("EMNLP 2017 Second Conference On Machine Translation") in 2018, marking a historical milestone. However, many researchers have criticized this claim, rerunning and discussing their experiments; current consensus is that the so-called human parity achieved is not real, being based wholly on limited domains, language pairs, and certain test suits- i.e., it lacks statistical significance power. There is still a long journey before NMT reaches real human parity performances. To address the idiomatic phrase translation, multi-word expressions, and low-frequency words (also called OOV, or out-of-vocabulary word translation), language-focused linguistic features have been explored in state-of-the-art neural machine translation (NMT) models. For instance, the Chinese character decompositions into radicals and strokes have proven to be helpful for translating multi-word expressions in NMT. Major issues Disambiguation Word-sense disambiguation concerns finding a suitable translation when a word can have more than one meaning. The problem was first raised in the 1950s by Yehoshua Bar-Hillel. He pointed out that without a "universal encyclopedia", a machine would never be able to distinguish between the two meanings of a word. Today there are numerous approaches designed to overcome this problem. They can be approximately divided into "shallow" approaches and "deep" approaches. Shallow approaches assume no knowledge of the text. 
They simply apply statistical methods to the words surrounding the ambiguous word. Deep approaches presume a comprehensive knowledge of the word. So far, shallow approaches have been more successful. Claude Piron, a long-time translator for the United Nations and the World Health Organization, wrote that machine translation, at its best, automates the easier part of a translator's job; the harder and more time-consuming part usually involves doing extensive research to resolve ambiguities in the source text, which the grammatical and lexical exigencies of the target language require to be resolved: The ideal deep approach would require the translation software to do all the research necessary for this kind of disambiguation on its own; but this would require a higher degree of AI than has yet been attained. A shallow approach which simply guessed at the sense of the ambiguous English phrase that Piron mentions (based, perhaps, on which kind of prisoner-of-war camp is more often mentioned in a given corpus) would have a reasonable chance of guessing wrong fairly often. A shallow approach that involves "ask the user about each ambiguity" would, by Piron's estimate, only automate about 25% of a professional translator's job, leaving the harder 75% still to be done by a human. Non-standard speech One of the major pitfalls of MT is its inability to translate non-standard language with the same accuracy as standard language. Heuristic or statistical based MT takes input from various sources in standard form of a language. Rule-based translation, by nature, does not include common non-standard usages. This causes errors in translation from a vernacular source or into colloquial language. Limitations on translation from casual speech present issues in the use of machine translation in mobile devices. 
Named entities In information extraction, named entities, in a narrow sense, refer to concrete or abstract entities in the real world such as people, organizations, companies, and places that have a proper name: George Washington, Chicago, Microsoft. It also refers to expressions of time, space and quantity such as 1 July 2011, $500. In the sentence "Smith is the president of Fabrionix" both Smith and Fabrionix are named entities, and can be further qualified via first name or other information; "president" is not, since Smith could have earlier held another position at Fabrionix, e.g. Vice President. The term rigid designator is what defines these usages for analysis in statistical machine translation. Named entities must first be identified in the text; if not, they may be erroneously translated as common nouns, which would most likely not affect the BLEU rating of the translation but would change the text's human readability. They may be omitted from the output translation, which would also have implications for the text's readability and message. Transliteration includes finding the letters in the target language that most closely correspond to the name in the source language. This, however, has been cited as sometimes worsening the quality of translation. For "Southern California" the first word should be translated directly, while the second word should be transliterated. Machines often transliterate both because they treated them as one entity. Words like these are hard for machine translators, even those with a transliteration component, to process. Use of a "do-not-translate" list, which has the same end goal – transliteration as opposed to translation. still relies on correct identification of named entities. A third approach is a class-based model. Named entities are replaced with a token to represent their "class"; "Ted" and "Erica" would both be replaced with "person" class token. 
Then the statistical distribution and use of person names in general can be analyzed, instead of looking at the distributions of "Ted" and "Erica" individually, so that the probability of a given name in a specific language will not affect the assigned probability of a translation. A study by Stanford on improving this area of translation gives the example that different probabilities will be assigned to "David is going for a walk" and "Ankit is going for a walk" for English as a target language, owing to the different number of occurrences of each name in the training data. A frustrating outcome of the same Stanford study (and of other attempts to improve named-entity translation) is that, many times, including methods for named entity translation actually decreases the BLEU scores of the translation. Somewhat related are the phrases "drinking tea with milk" vs. "drinking tea with Molly." Translation from multiparallel sources Some work has been done in the utilization of multiparallel corpora, that is, bodies of text that have been translated into three or more languages. Using these methods, a text that has been translated into two or more languages may be utilized in combination to provide a more accurate translation into a third language than if just one of those source languages were used alone. Ontologies in MT An ontology is a formal representation of knowledge that includes the concepts (such as objects, processes, etc.) in a domain and some relations between them. If the stored information is of a linguistic nature, one can speak of a lexicon. In NLP, ontologies can be used as a source of knowledge for machine translation systems. With access to a large knowledge base, systems can be enabled to resolve many (especially lexical) ambiguities on their own.
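The class-based named-entity model described earlier can be sketched as a simple preprocessing step; the token name `<PERSON>` and the hard-coded name list are assumptions for illustration (a real system would use a trained named-entity recognizer):

```python
# Toy list of person names; a real system would use a trained
# named-entity recognizer rather than a hard-coded set.
PERSON_NAMES = {"Ted", "Erica", "David", "Ankit"}

def to_class_tokens(sentence: str) -> str:
    """Replace each recognized person name with a shared class token,
    so the model estimates statistics for the class as a whole rather
    than for each individual name."""
    return " ".join("<PERSON>" if word in PERSON_NAMES else word
                    for word in sentence.split())

# Both sentences map to the same token sequence, so the rarity of
# "Ankit" relative to "David" in the training data no longer changes
# the probability assigned to the translation.
print(to_class_tokens("David is going for a walk"))
print(to_class_tokens("Ankit is going for a walk"))
# both -> <PERSON> is going for a walk
```

After translation, the class tokens would be restored from the original names (translated or transliterated as appropriate), which is where the identification problems discussed above reappear.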
In the following classic examples, as humans, we are able to interpret the prepositional phrase according to the context because we use our world knowledge, stored in our lexicons: I saw a man/star/molecule with a microscope/telescope/binoculars. A machine translation system initially would not be able to differentiate between the meanings
to be confused with the first raw moment or the expected value μ). The second central moment μ_2 is called the variance, and is usually denoted σ^2, where σ represents the standard deviation. The third and fourth central moments are used to define the standardized moments, which in turn are used to define skewness and kurtosis, respectively.

Properties The nth central moment is translation-invariant, i.e. for any random variable X and any constant c, we have μ_n(X + c) = μ_n(X). For all n, the nth central moment is homogeneous of degree n: μ_n(cX) = c^n μ_n(X). Only for n such that n equals 1, 2, or 3 do we have an additivity property for random variables X and Y that are independent: μ_n(X + Y) = μ_n(X) + μ_n(Y), provided n ∈ {1, 2, 3}. A related functional that shares the translation-invariance and homogeneity properties with the nth central moment, but continues to have this additivity property even when n ≥ 4, is the nth cumulant κ_n(X). For n = 1, the nth cumulant is just the expected value; for n = either 2 or 3, the nth cumulant is just the nth central moment; for n ≥ 4, the nth cumulant is an nth-degree monic polynomial in the first n moments (about zero), and is also a (simpler) nth-degree polynomial in the first n central moments.

Relation to moments about the origin Sometimes it is convenient to convert moments about the origin to moments about the mean. The general equation for converting the nth-order moment about the origin to the moment about the mean is

μ_n = E[(X − μ)^n] = Σ_{j=0}^{n} C(n, j) (−μ)^{n−j} μ'_j,

where μ is the mean of the distribution, C(n, j) is the binomial coefficient, and the jth moment about the origin is given by μ'_j = E[X^j]. For the cases n = 2, 3, 4 (which are of most interest because of the relations to variance, skewness, and kurtosis, respectively) this formula becomes (noting that μ'_0 = 1 and μ'_1 = μ):

μ_2 = μ'_2 − μ^2, which is commonly referred to as the variance;
μ_3 = μ'_3 − 3μμ'_2 + 2μ^3;
μ_4 = μ'_4 − 4μμ'_3 + 6μ^2 μ'_2 − 3μ^4;

and so on, following Pascal's triangle, because the coefficients C(n, j) are binomial coefficients.

The following sum W = Σ_{i=1}^{M} Y_i is a stochastic variable having a compound distribution, where the Y_i are mutually independent random variables sharing the same common distribution and M is a random integer variable independent of the Y_i with its own distribution. The moments of W are obtained by conditioning on M.
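The conversion formulas for n = 2, 3, 4 can be checked numerically; the sketch below computes sample raw and central moments over invented data and verifies the three identities:

```python
def raw_moment(xs, n):
    """nth moment about the origin: mu'_n = E[X^n] for the sample."""
    return sum(x ** n for x in xs) / len(xs)

def central_moment(xs, n):
    """nth central moment computed directly from its definition."""
    mu = raw_moment(xs, 1)
    return sum((x - mu) ** n for x in xs) / len(xs)

xs = [1.0, 2.0, 2.0, 3.0, 7.0]  # arbitrary sample data
mu = raw_moment(xs, 1)

# mu_2 = mu'_2 - mu^2 (the variance)
assert abs(central_moment(xs, 2) - (raw_moment(xs, 2) - mu ** 2)) < 1e-9
# mu_3 = mu'_3 - 3*mu*mu'_2 + 2*mu^3
assert abs(central_moment(xs, 3) -
           (raw_moment(xs, 3) - 3 * mu * raw_moment(xs, 2) + 2 * mu ** 3)) < 1e-9
# mu_4 = mu'_4 - 4*mu*mu'_3 + 6*mu^2*mu'_2 - 3*mu^4
assert abs(central_moment(xs, 4) -
           (raw_moment(xs, 4) - 4 * mu * raw_moment(xs, 3) +
            6 * mu ** 2 * raw_moment(xs, 2) - 3 * mu ** 4)) < 1e-9
print("conversion identities verified")
```

The identities are exact algebraic consequences of the binomial expansion of (X − μ)^n, so they hold for any sample up to floating-point rounding.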
The burial site has gained religious significance among the local Muslims. It was vandalized between 1999 and 2006 and has recently been renovated. His other remains were carried to Bursa, his Anatolian capital city, and were buried in a tomb at the complex built in his name. Establishment of sultanate He established the sultanate by building up a society and government in the newly conquered city of Adrianople (Edirne in Turkish) and by expanding the realm in Europe, bringing most of the Balkans under Ottoman rule and forcing the Byzantine emperor to pay him tribute. It was Murad who transformed the former Osmanli tribe into a sultanate. He established the title of sultan in 1363, the corps of the janissaries, and the devşirme recruiting system. He also organised the government of the Divan, the system of timars and timar-holders (timariots), and the office of military judge, the kazasker, and established the two provinces of Anadolu (Anatolia) and Rumeli (Europe). Family He was the son of Orhan and the Valide Hatun Nilüfer Hatun, daughter of the Lord of Yarhisar, who was of ethnic Greek descent. Wives Gülçiçek Hatun; Paşa Melek Hatun, daughter of Kızıl Murad Bey; In 1370 Thamara Hatun – daughter of Bulgarian Tsar Ivan Alexander; Sons Yahşi Bey; Şehzade Savcı Bey – son. He and his ally, Byzantine emperor John V Palaeologus' son Andronicus, rebelled against their fathers. Murad had Savcı killed. Andronicus, who had surrendered to his father, was imprisoned and blinded at Murad's insistence. Sultan Bayezid I (1354–1402) – son of Gülçiçek Hatun; Şehzade Yakub Çelebi (? – d. 1389) – son. Bayezid I had Yakub killed during or following the Battle of Kosovo, at which their father had been killed. Şehzade Ibrahim; Daughter Nefise Hatun; Further reading Harris, Jonathan, The End of Byzantium. New Haven and London: Yale University
throne. In a letter from the Florentine senate (written by Coluccio Salutati) to King Tvrtko I of Bosnia, dated 20 October 1389, the killing of Murad I (and of Jakub Bey) was described. A party of twelve Serbian lords slashed their way through the Ottoman lines defending Murad I. One of them, allegedly Miloš Obilić, managed to get through to the Sultan's tent and killed him with sword stabs to the throat and belly. Sultan Murad's internal organs were buried on the Kosovo field, where they remain to this day on a corner of the battlefield, in a location called Meshed-i Hudavendigar.
Manuel II Palaiologos, who tried to use Orhan against Sultan Mehmed; however, the sultan found out about the plot and had Orhan blinded for betrayal, according to a common Byzantine practice. Furthermore, as a result of the Battle of Ankara and other civil wars, the population of the empire had become unstable and traumatized. A very powerful social and religious movement arose in the empire and became disruptive. The movement was led by Sheikh Bedreddin (1359–1420), a famous Muslim Sufi and charismatic theologian. He was an eminent member of the ulema, born of a Greek mother and a Muslim father in Simavna (Kyprinos), southwest of Edirne (formerly Adrianople). Mehmed's brother Musa had made Bedreddin his "qadi of the army," or the supreme judge. Bedreddin created a populist religious movement in the Ottoman Sultanate, with "subversive conclusions promoting the suppression of social differences between rich and poor as well as the barriers between different forms of monotheism." His movement successfully developed a popular social revolution and a syncretism of the various religions and sects of the empire; it began on the European side of the empire and spread into western Anatolia. In 1416, Sheikh Bedreddin started his rebellion against the throne. After a four-year struggle, he was finally captured by Mehmed's grand vizier Bayezid Pasha and hanged in the city of Serres, a city in modern-day Greece, in 1420. Death The reign of Mehmed I as sultan of the re-united empire lasted only eight years before his death, but he had also been the most powerful brother contending for the throne and de facto ruler of most of the empire for nearly the whole preceding period of 11 years of the Ottoman Interregnum that passed between his father's captivity at Ankara and his own final victory over his brother Musa Çelebi at the Battle of Çamurlu.
Before his death, to secure passing the throne safely to his son Murad II, Mehmed blinded his nephew Orhan Çelebi (son of Süleyman), and decided to send his two sons Yusuf and Mahmud to be held as hostages by Emperor Manuel II, hoping to ensure the continuing custody of his brother Mustafa. He was buried in Bursa, in a mausoleum he had erected near the celebrated mosque which he built there, and which, because of its decorations of green glazed tiles, is called the Green Mosque. Mehmed I also completed another mosque in Bursa, which his grandfather Murad I had commenced but which had been neglected during the reign of Bayezid. Mehmed founded in the vicinity of his own Green Mosque and mausoleum two other characteristic institutions, one a school and one a refectory for the poor, both of which he endowed with royal munificence. Wives and children Wives Şehzade Hatun, daughter of Dividdar Ahmed Pasha, third ruler of Kutluşah of Canik; Emine Hatun (m. 1403), daughter of Şaban Süli Bey, fifth ruler of Dulkadirids; Kumru Hatun, mother of Selçuk Hatun; Sons Sultan Murad II, son of Emine Hatun; Şehzade Küçük Mustafa Çelebi (1408 – killed October 1423); Şehzade Mahmud Çelebi (1413 – August 1429, buried in Mehmed I Mausoleum, Bursa); Şehzade Yusuf Çelebi (1414 – August 1429, buried in Mehmed I Mausoleum, Bursa); Şehzade Ahmed Çelebi (died in infancy); Daughters Selçuk Hatun (died 25 October 1485, buried in Mehmed I Mausoleum, Bursa), married Prince Damat Taceddin Ibrahim II Bey, ruler of Isfendiyarids (1392 – 30 May 1443), son of Prince İsfendiyar Bey, ruler of Isfendiyarids; Sultan Hatun (died 1444), married Prince Damat Kasim Bey (died 1464), son of Prince Isfendiar Bey, ruler of Isfendiyarids; Hatice Hatun, married to Damat Karaca Paşa (died 10 November 1444); Hafsa Hatun (buried in Mehmed I Mausoleum, Bursa), married Damat Mahmud Bey (died January 1444), son of Çandarlı Halil Pasha; İlaldi Hatun, married Prince Damat Ibrahim II Bey, ruler of Karamanids (died 16
July 1464), son of Prince Mehmed II Bey; A daughter, married to Prince Damat Isa Bey (died 1437), son of Prince Damat Mehmed II Bey; Ayşe Hatun (buried in Mehmed I Mausoleum, Bursa); Sitti Hatun (buried in Mehmed I Mausoleum, Bursa); A daughter, married to
Prince Damat Alaattin Ali Bey, ruler of Karamanids, son of Prince Halil Bey; References Sources Further reading Harris, Jonathan, The End of Byzantium. New Haven and London: Yale University Press, 2010. External links 15th-century Ottoman sultans People of the Ottoman Interregnum 1421 deaths 1389 births 15th-century people of the
Mustafa was taken and put to death by the sultan, who then turned his arms against the Roman emperor and declared his resolution to punish the Palaiologos for their unprovoked enmity by the capture of Constantinople. Murad II then formed a new army called Azap in 1421 and marched through the Byzantine Empire and laid siege to Constantinople. While Murad was besieging the city, the Byzantines, in league with some independent Turkish Anatolian states, sent the sultan's younger brother Küçük Mustafa (who was only 13 years old) to rebel against the sultan and besiege Bursa. Murad had to abandon the siege of Constantinople in order to deal with his rebellious brother. He caught Prince Mustafa and executed him. The Anatolian states that had been constantly plotting against him — Aydinids, Germiyanids, Menteshe and Teke — were annexed and henceforth became part of the Ottoman Sultanate. Murad II then declared war against Venice, the Karamanid Emirate, Serbia and Hungary. The Karamanids were defeated in 1428 and Venice withdrew in 1432 following the defeat at the second Siege of Thessalonica in 1430. In the 1430s Murad captured vast territories in the Balkans and succeeded in annexing Serbia in 1439. In 1441 the Holy Roman Empire and Poland joined the Serbian-Hungarian coalition. Murad II won the Battle of Varna in 1444 against John Hunyadi. Abdication and second reign Murad II relinquished his throne in 1444 to his son Mehmed II, but a Janissary revolt in the Empire forced him to return. In 1448 he defeated the Christian coalition at the Second Battle of Kosovo (the first one took place in 1389). When the Balkan front was secured, Murad II turned east to defeat Timur's son, Shah Rokh, and the emirates of Karamanid and Çorum-Amasya. In 1450 Murad II led his army into Albania and unsuccessfully besieged the Castle of Kruje in an effort to defeat the resistance led by Skanderbeg. In the winter of 1450–1451, Murad II fell ill, and died in Edirne. 
He was succeeded by his son Mehmed II (1451–1481). As Ghazi Sultan When Murad ascended to the throne, he sought to regain the lost Ottoman territories that had reverted to autonomy following his grandfather Bayezid I's defeat at the Battle of Ankara in 1402 at the hands of Timur. He needed the support of both the public and the nobles "who would enable him to exercise his rule", and utilized the old and potent Islamic trope of the Ghazi King. In order to gain popular, international support for his conquests, Murad II modeled himself after the legendary Ghazi kings of old. The Ottomans already presented themselves as ghazis, painting their origins as rising from the ghazas of Osman, the founder of the dynasty. For them, ghaza was the noble championing of Islam and justice against non-Muslims and Muslims alike, if they were cruel; for example, Bayezid I labeled Timur Lang, also a Muslim, an apostate prior to the Battle of Ankara because of the violence his troops had committed upon innocent civilians and because "all
to the throne of Bayezid I (1389–1402). The Byzantine Emperor had first secured a stipulation that Mustafa should, if successful, repay him for his liberation by giving up a large number of important cities. The pretender was landed by Byzantine galleys in the European dominions of the sultan and for a time made rapid progress. Many Turkish soldiers joined him, and he defeated and killed the veteran general Beyazid Pasha, whom Murad had sent to fight him. Mustafa defeated Murad's army and declared himself Sultan of Adrianople (modern Edirne). He then crossed the Dardanelles to Asia with a large army, but Murad out-manoeuvred Mustafa, whose force passed over in large numbers to Murad II. Mustafa took refuge in the city of Gallipoli, but the sultan, greatly aided by a Genoese commander named Adorno, besieged him there and stormed the place.
Murad declared war, starting the Ottoman–Safavid War (1578–90), seeking to take advantage of the chaos in the Safavid court after the death of Shah Tahmasp I. Murad was influenced by the viziers Lala Kara Mustafa Pasha and Sinan Pasha and disregarded the opposing counsel of Grand Vizier Sokollu. The war with the Safavids dragged on for 12 years, ending with the Treaty of Constantinople (1590), which brought the Ottomans significant, though temporary, territorial gains. Ottoman Activity in the Horn of Africa During his reign an Ottoman admiral by the name of Ali Bey succeeded in establishing Ottoman supremacy in numerous cities on the Swahili coast between Mogadishu and Kilwa. Ottoman suzerainty was recognised in Mogadishu in 1585, and Ottoman supremacy was also established in other cities such as Barawa, Mombasa, Kilifi, Pate, Lamu and Faza. Financial Affairs Murad's reign was a time of financial stress for the Ottoman state. To keep up with changing military techniques, the Ottomans trained infantrymen in the use of firearms, paying them directly from the treasury. By 1580 an influx of silver from the New World had caused high inflation and social unrest, especially among Janissaries and government officials who were paid in debased currency. The hardship caused by the resulting rebellions, coupled with the pressure of over-population, was felt especially in Anatolia. Competition for positions within the government grew fierce, leading to bribery and corruption. Ottoman and Habsburg sources accuse Murad himself of accepting enormous bribes, including 20,000 ducats from a statesman in exchange for the governorship of Tripoli and Tunisia, thus outbidding a rival who had tried bribing the Grand Vizier. During his reign inflation was severe: the silver coinage was repeatedly debased and food prices rose. Where 400 coins were supposed to be struck from 600 dirhams of silver, 800 were struck instead, which amounted to 100 percent inflation.
As a result, the purchasing power of wage earners was halved, and the consequence was an uprising. English Pact Numerous envoys and letters were exchanged between Elizabeth I and Sultan Murad III. In one correspondence, Murad entertained the notion that Islam and Protestantism had "much more in common than either did with Roman Catholicism, as both rejected the worship of idols", and argued for an alliance between England and the Ottoman Empire. To the dismay of Catholic Europe, England exported tin and lead (for cannon-casting) and ammunition to the Ottoman Empire, and Elizabeth seriously discussed joint military operations with Murad III during the outbreak of war with Spain in 1585, as Francis Walsingham was lobbying for direct Ottoman military involvement against the common Spanish enemy. This diplomacy would be continued under Murad's successor Mehmed III, by both the sultan and Safiye Sultan alike. Personal life Palace life Following the example of his father Selim II, Murad was the second Ottoman sultan who never went on campaign during his reign, instead spending it entirely in Constantinople. During the final years of his reign, he did not even leave Topkapı Palace; for two consecutive years he did not attend the Friday procession to the imperial mosque, an unprecedented break with custom. The Ottoman historian Mustafa Selaniki wrote that whenever Murad planned to go out to Friday prayer, he changed his mind after hearing of alleged plots by the Janissaries to dethrone him once he left the palace. Murad withdrew from his subjects, spending the majority of his reign in the company of a few people and abiding by a daily routine structured by the five daily Islamic prayers.
Murad's personal physician Domenico Hierosolimitano described a typical day in the life of the sultan. Murad's sedentary lifestyle and lack of participation in military campaigns earned him the disapproval of Mustafa Âlî and Mustafa Selaniki, the major Ottoman historians who lived during his reign. Their negative portrayals of Murad influenced later historians. Both historians also accused Murad of sexual excess. Children Before becoming sultan, Murad had been loyal to Safiye Sultan, his Albanian concubine, who had given him a son, Mehmed, and two daughters. His monogamy was disapproved of by his mother Nurbanu Sultan, who worried that Murad needed more sons to succeed him in case Mehmed died young. She also worried about Safiye's influence over her son and the Ottoman dynasty. Five or six years after his accession to the throne, Murad was given a pair of concubines by his sister Ismihan. Upon attempting sexual intercourse with them, he proved impotent. "The arrow [of Murad], [despite] keeping with his created nature, for many times [and] for many days has been unable to reach at the target of union and pleasure," wrote Mustafa Ali. Nurbanu accused Safiye and her retainers of causing Murad's impotence with witchcraft. Several of Safiye's servants were tortured by eunuchs in order to discover a culprit. Court physicians, working under Nurbanu's orders, eventually prepared a successful cure, but a side effect was a drastic increase in sexual appetite; by the time Murad died, he was said to have fathered over a hundred children. Nineteen of these were executed by Mehmed III when he became sultan.
Women at court Influential ladies of his court included his mother Nurbanu Sultan; his sister Ismihan Sultan, wife of grand vizier Sokollu Mehmed Pasha; and his musahibes (favourites): the mistress of the housekeeper Canfeda Hatun, the mistress of financial affairs Raziye Hatun, and the poet Hubbi Hatun. Finally, after the death of his mother and older sister, his wife Safiye Sultan was the only influential woman in the court. Eunuchs at court Before Murad, the palace eunuchs had been mostly white. This began to change in 1582, when Murad gave an important position to a black eunuch. By 1592, the eunuchs' roles in the palace were racially determined: black eunuchs guarded the Sultan and the women, and white eunuchs guarded the male pages in another part of the palace. The
chief black eunuch was known as the Kizlar Agha, and the chief white eunuch was known as the Kapi Agha. Murad and the arts Murad took great interest in the arts, particularly miniatures and books.
He actively supported the court Society of Miniaturists, commissioning several volumes, including the Siyer-i Nebi, the most heavily illustrated biographical work on the life of the Islamic prophet Muhammad, the Book of Skills, the Book of Festivities and the Book of Victories. He had two large alabaster urns transported from Pergamon and placed on two sides of the nave in the Hagia Sophia in Constantinople, and a large wax candle dressed in tin, which he donated to the Rila Monastery in Bulgaria, is on display in the monastery museum. Murad also furnished the content of Kitabü'l-Menamat (The Book of Dreams), addressed to Murad's spiritual advisor, Şüca Dede. A collection of first-person accounts, it tells of Murad's spiritual experiences as a Sufi disciple. Compiled from thousands of letters Murad wrote describing his dream visions, it presents a hagiographic self-portrait. Murad dreams of various activities, including being stripped naked by his father and having to sit on his lap, single-handedly killing 12,000 infidels in battle, walking on water, ascending to heaven, and producing milk from his fingers. He frequently encounters the Prophet Muhammad, and in one dream sits in the Prophet's lap and kisses his mouth. In another letter addressed to Şüca Dede, Murad wrote, "I wish that God, may He be glorified and exalted, had not created this poor servant as the descendant of the Ottomans so that I would not hear this and that, and would not worry. I wish I were of unknown pedigree. Then, I would have one single task, and could ignore the whole world." The diplomatic edition of these dream letters has recently been published by Ozgen Felek in Turkish. Death Murad died of what is assumed to be natural causes in the Topkapı Palace and was buried in a tomb next to the Hagia Sophia. In the mausoleum are 54 sarcophagi, of the sultan and of his wives and children who are also buried there. He is also responsible for changing the burial customs of the sultans' mothers.
Murad had his mother Nurbanu buried next to her husband Selim II, making her the first consort to share a sultan's tomb. Family Consorts Murad's named consorts were: Safiye Sultan, an ethnic Albanian, Haseki Sultan of Murad and Valide Sultan of Mehmed III; Şahıhuban Hatun; Zerefşan Hatun; Şahi Hatun; Şemsiruhsar Hatun, mother of Rukiye Sultan; Nazperver Hatun; Sons Murad had twenty-two sons: Sultan Mehmed III (26 May 1566 – 22 December 1603, Topkapı Palace, Constantinople, buried in Mehmed III Mausoleum, Hagia Sophia Mosque, Constantinople), became the next sultan; Şehzade Mahmud (1568, Manisa Palace, Manisa – 1581, Topkapı Palace, Istanbul, buried in Selim II Mausoleum, Hagia Sophia Mosque); Şehzade Mustafa (1578 – murdered 28 January 1595, Topkapı Palace, Constantinople, buried in Murad III Mausoleum, Hagia Sophia Mosque); Şehzade Osman (1573 – died 1587, Topkapı Palace, Constantinople, buried in Murad III Mausoleum, Hagia Sophia Mosque); Şehzade Bayezid (1579 – murdered 28 January 1595, Topkapı Palace, Constantinople, buried in Murad III Mausoleum, Hagia Sophia Mosque); Şehzade Selim (1581 – murdered 28 January 1595, Topkapı Palace, Constantinople, buried in Murad III Mausoleum, Hagia Sophia Mosque); Şehzade Cihangir (1585 – murdered 28 January 1595, Topkapı Palace, Constantinople, buried in Murad III Mausoleum, Hagia Sophia Mosque); Şehzade Abdullah (1580 – murdered 28 January 1595, Topkapı Palace, Constantinople, buried in Murad III Mausoleum, Hagia Sophia Mosque); Şehzade Abdurrahman (1585 – murdered 28 January 1595, Topkapı Palace, Constantinople, buried in Murad III Mausoleum, Hagia Sophia Mosque); Şehzade Hasan (1586 – died 1591, Topkapı Palace, Constantinople, buried in Murad III Mausoleum, Hagia Sophia Mosque); Şehzade Ahmed (1586 – murdered 28 January 1595, Topkapı Palace, Constantinople, buried in Murad III Mausoleum, Hagia Sophia Mosque); Şehzade Yakub (1587 – murdered 28 January 1595, Topkapı Palace, Constantinople, buried in Murad III Mausoleum, Hagia Sophia Mosque);
Şehzade Alemşah (murdered 28 January 1595, Topkapı Palace, Constantinople, buried in Murad III Mausoleum, Hagia Sophia Mosque); Şehzade Yusuf (murdered 28 January 1595, Topkapı Palace, Constantinople, buried in Murad III Mausoleum, Hagia Sophia Mosque); Şehzade Hüseyin (murdered 28 January 1595, Topkapı Palace, Constantinople, buried in Murad III Mausoleum, Hagia Sophia Mosque); Şehzade Korkud (murdered 28 January 1595, Topkapı Palace, Constantinople, buried in Murad III Mausoleum, Hagia Sophia Mosque); Şehzade Ali (murdered 28 January 1595, Topkapı Palace, Constantinople, buried in Murad III Mausoleum, Hagia Sophia Mosque); Şehzade Ishak (murdered 28 January 1595, Topkapı Palace, Constantinople, buried in Murad III Mausoleum, Hagia Sophia Mosque); Şehzade Ömer (murdered 28 January 1595, Topkapı Palace, Constantinople, buried in Murad III Mausoleum, Hagia Sophia Mosque); Şehzade Alaeddin (murdered 28 January 1595, Topkapı Palace, Constantinople, buried in Murad III Mausoleum, Hagia Sophia Mosque); Şehzade Davud (murdered 28 January 1595, Topkapı Palace, Constantinople, buried in Murad III Mausoleum, Hagia Sophia Mosque); Şehzade Suleiman (born and died in 1585, Topkapı Palace, Constantinople, buried in Murad III Mausoleum, Hagia Sophia Mosque); Şehzade Yahya (1585, Manisa Palace, Manisa – 1648, Kotor, Montenegro, buried in Kotor, Montenegro), was claimed to be a son of Murad III; Daughters Murad had twenty-eight daughters, of whom sixteen died of plague in 1597. The rest, who were married, included the following: Hümaşah Sultan, married only once, to Damad Nişar Mustafazade Mehmed Pasha (died 1586); possibly Safiye's eldest daughter. 
Ayşe Sultan (died 15 May 1605, buried in Mehmed III Mausoleum, Hagia Sophia Mosque), daughter with Safiye, married firstly on 20 May 1586, to Damat Ibrahim Pasha, married secondly on 5 April 1602, to Damad Yemişçi Hasan Pasha, married thirdly on 29 June 1604, to Damad Güzelce Mahmud Pasha; Fatma Sultan (died 1620, buried in Murad III Mausoleum, Hagia Sophia Mosque), daughter with Safiye, married firstly on 6 December 1593, to Damad Halil Pasha, married secondly
the Ottoman Empire from 1595 until his death in 1603. Early life Mehmed was born at the Manisa Palace in 1566, during the reign of his great-grandfather, Suleiman the Magnificent. He was the son of Murad III, himself the son of Selim II, who was the son of Sultan Suleiman and Hurrem Sultan. His mother was Safiye Sultan, an Albanian from the Dukagjin highlands. His great-grandfather Suleiman I died the year he was born, and his grandfather Selim II became the new sultan. Selim II died when Mehmed was eight, and Mehmed's father, Murad III, became sultan in 1574. Murad died in 1595, when Mehmed was 28 years old. Mehmed spent most of his time in Manisa with his father Murad and his mother Safiye; his first teacher was Ibrahim Efendi. His circumcision took place on 29 May 1582, when he was 16 years old. Reign Fratricide Upon ascending to the throne, Mehmed III ordered that all of his nineteen brothers be executed. They were strangled by his royal executioners, many of whom were deaf, mute or 'half-witted' to ensure absolute loyalty. Fratricidal successions were not unprecedented, as sultans would often have dozens of children with their concubines. Power struggle in Constantinople Mehmed III was an idle ruler, leaving government to his mother Safiye Sultan, the valide sultan. His first major problem was the rivalry between two of his viziers, Serdar Ferhad Pasha and Koca Sinan Pasha, and their supporters. His mother and her son-in-law Damat Ibrahim Pasha supported Koca Sinan Pasha and prevented Mehmed III from taking control of the issue himself. The issue grew to cause major disturbances by janissaries. On 7 July 1595, Mehmed III finally sacked Serdar Ferhad Pasha from the position of Grand Vizier due to his failure in Wallachia and replaced him with Sinan. Austro-Hungarian War The major event of his reign was the Austro-Ottoman War in Hungary (1593–1606). 
Ottoman defeats in the war caused Mehmed III to take personal command of the army, the first sultan to do so since Suleiman I in 1566. Accompanied by the Sultan, the Ottomans conquered Eger in 1596. Upon hearing of the Habsburg army's approach, Mehmed wanted to dismiss the army and return to Istanbul. However, the Ottomans eventually decided to face the enemy and defeated the Habsburg and Transylvanian forces at the Battle of Keresztes (known in Turkish as the Battle of Haçova), during which the Sultan had to be dissuaded from fleeing the field halfway through the battle. Upon returning to Istanbul in victory, Mehmed told his viziers that he would campaign again. The next year the Venetian Bailo in Istanbul noted, "the doctors declared that the Sultan cannot leave for a war on account of his bad health, produced by excesses of eating and drinking". In reward for his services in the war, Cigalazade Yusuf Sinan Pasha was made Grand Vizier in 1596. However, with pressure from the court and his mother, Mehmed reinstated Damat Ibrahim Pasha to this position shortly afterward. The victory at the Battle of Keresztes was nevertheless soon set back by some important losses, including the loss of Győr () to the Austrians and the defeat of the Ottoman forces led by Hafız Ahmet Pasha by the Wallachian forces under Michael the Brave in Nikopol in 1599. In 1600, Ottoman forces under Tiryaki Hasan Pasha captured Nagykanizsa after a 40-day siege and later successfully held it against a much greater attacking force in the Siege of Nagykanizsa. Jelali revolts Another major event of his reign was the Jelali revolts in Anatolia. Karayazıcı Abdülhalim, a former Ottoman official, captured the city of Urfa and declared himself a sultan in 1600. The rumors of his claim to the throne spread to Constantinople and Mehmed ordered the rebels to be treated harshly to dispel the rumors, among these, was the execution of Hüseyin Pasha, whom
predecessor, Murad III, who had died before they had arrived. Included in these gifts was a large jewel-studded clockwork organ that was assembled on the slope of the Royal Private Garden by a team of engineers including Thomas Dallam. The organ took many weeks to complete and featured dancing sculptures such as a flock of blackbirds that sang and shook their wings at the end of the music. Also among the English gifts was a ceremonial coach, accompanied by a letter from the Queen to Mehmed's mother, Safiye Sultan. These gifts were intended to cement relations between the two countries, building on the trade agreement signed in 1581 that gave English merchants priority in the Ottoman region. Under the looming threat of Spanish military presence, England was eager to secure an alliance with the Ottomans, the two nations together having the capability to divide the power. Elizabeth's gifts arrived in a large 27-gun merchantman that Mehmed personally inspected, a clear display of English maritime strength that would prompt him to build up his fleet over the following years of his reign. The Anglo-Ottoman alliance would never be consummated, however, as relations between the nations grew stagnant due to anti-European sentiments stemming from the worsening Austro-Ottoman War and the deaths of Safiye Sultan's interpreter and the pro-English chief Hasan Pasha. Death Mehmed died on 22 December 1603 at the age of 37. According to one source, the cause of his death was the distress caused by the death of his son, Şehzade Mahmud. According to another source, he died either of plague or of stroke. He was buried in Hagia Sophia Mosque. He was succeeded by his son Ahmed I as the new sultan. Family Consorts None of Mehmed's consorts are listed as haseki sultan in Ottoman palace archives. 
Known consorts were: Halime Sultan (buried in Mustafa I Mausoleum, Hagia Sophia Mosque, Istanbul); Handan Sultan (died 9 November 1605, Topkapı Palace, Istanbul, buried in Mehmed III Mausoleum, Hagia Sophia Mosque); A consort who died in 1597, during the outbreak of plague; Sons Şehzade Selim (1585, Manisa Palace, Manisa – 20 April 1597, Topkapı Palace, Istanbul, buried in Hagia Sophia Mosque) - with Handan; Şehzade Süleyman (born 1586, Manisa Palace, Manisa, died young, buried in Hagia Sophia Mosque) - with Handan; Şehzade Mahmud (born 1588, Manisa Palace, Manisa – executed by Mehmed III, 7 June 1603, Topkapı Palace, Istanbul, buried in Şehzade Mahmud Mausoleum, Şehzade Mosque) - with Halime; Sultan Ahmed I (18 April 1590, Manisa Palace, Manisa – 22 November 1617, Topkapı Palace, Istanbul, buried in Ahmed I Mausoleum, Sultan Ahmed Mosque), Sultan of the Ottoman Empire - with Handan; Sultan Mustafa I (1591, Manisa Palace, Manisa – 20 January 1639, Eski Palace, Istanbul, buried in Mustafa I Mausoleum, Hagia Sophia Mosque), Sultan of the Ottoman Empire - with Halime; A son who died in the second year of his life, after Selim's death; Şehzade Cihangir (1599, Topkapı Palace, Istanbul
does not mention the incapacity of Mustafa. Baron de Sancy ascribes the deposition to a political conspiracy between the grand admiral Ali Pasha and Chief Black Eunuch Mustafa Agha, who were angered by the former's removal from office upon Sultan Mustafa's accession. They may have circulated rumors of the sultan's mental instability subsequent to the coup in order to legitimize it. Second reign (1622–1623) He commenced his reign by executing all those who had taken any share in the murder of Sultan Osman. Hoca Ömer Efendi, the chief of the rebels, the kızlar Agha Suleiman Agha, the vizier Dilaver Pasha, the Kaim-makam Ahmed Pasha, the defterdar Baki Pasha, the segban-bashi Nasuh Agha, and the general of the janissaries Ali Agha, were cut into pieces. The epithet "Veli" (meaning "saint") was used in reference to him during his reign. His mental condition unimproved, Mustafa was a puppet controlled by his mother and brother-in-law, the grand vizier Kara Davud Pasha. He believed that Osman II was still alive and was seen searching for him throughout the palace, knocking on doors and crying out to his nephew to relieve him from the burden of sovereignty. "The present emperor being a fool" (according to English Ambassador Sir Thomas Roe), he was compared unfavorably with his predecessor. In fact, his mother Halime Sultan was the de facto co-ruler, as Valide Sultan of the Ottoman Empire. Deposition and last years Political instability was generated by conflict between the Janissaries and the sipahis (Ottoman cavalry), followed by the Abaza rebellion, which occurred when the governor-general of Erzurum, Abaza Mehmed Pasha, decided to march to Istanbul to avenge the murder of Osman II. The regime tried to end the conflict by executing Kara Davud Pasha, but Abaza Mehmed continued his advance. Clerics and the new Grand Vizier (Kemankeş Kara Ali Pasha) prevailed upon Mustafa's mother to allow the deposition of her son. 
She agreed, on condition that Mustafa's life would be spared. The 11-year-old Murad IV, son
was overruled. Mustafa's rise created a new succession principle of seniority that would last until the end of the Empire. It was the first time an Ottoman Sultan was succeeded by his brother instead of his son. His mother Halime Sultan became the Valide Sultan and, owing to Mustafa's mental condition, acted as regent, wielding great power and exercising it directly. It was hoped that regular social contact would improve Mustafa's mental health, but his behavior remained eccentric. He pulled off the turbans of his viziers and yanked their beards. Others observed him throwing coins to birds and fish. The Ottoman historian İbrahim Peçevi wrote "this situation was seen by all men of state and the people, and they understood that he was psychologically disturbed." Deposition Mustafa was never more than a tool of court cliques at the Topkapı Palace. In 1618, after a short rule, another palace faction deposed him in favour of his young nephew Osman II (1618–1622), and Mustafa was sent back to the Old Palace. The conflict between the Janissaries and Osman II presented him with a second chance. After a Janissary rebellion led to the deposition and assassination of Osman II in 1622, Mustafa was restored to the throne and held it for another year. Alleged mental instability Nevertheless, according to Baki Tezcan, there is not enough evidence to properly establish that Mustafa was mentally imbalanced when he came to the throne. Mustafa "made a number of excursions to the arsenal and the navy docks, examining various sorts of arms and taking an active interest in the munitions supply of the army and the navy." One of the dispatches of Baron de Sancy, the French ambassador, "suggested that Mustafa was interested in leading the Safavid campaign himself and was entertaining the idea of wintering in Konya for that purpose." Moreover, one contemporary observer provides an explanation of the coup which does not mention the incapacity of
to the Okmeydanı to escape the plague. The situation was worse in the countryside outside of Istanbul. Absolute rule and imperial policies (1632–1640) Murad IV tried to quell the corruption that had grown during the reigns of previous Sultans, and that had not been checked while his mother was ruling through proxy. Murad IV banned alcohol, tobacco, and coffee in Constantinople. He ordered execution for breaking this ban. He would reportedly patrol the streets and the lowest taverns of Constantinople in civilian clothes at night, policing the enforcement of his command by casting off his disguise on the spot and beheading the offender with his own hands. Rivaling the exploits of Selim the Grim, he would sit in a kiosk by the water near his Seraglio Palace and shoot arrows at any passerby or boatman who rowed too close to his imperial compound, seemingly for sport. He restored the judicial regulations by very strict punishments, including execution; he once strangled a grand vizier because the official had beaten his mother-in-law. Fire of 1633 On 2 September 1633, the great Cibali fire broke out, burning a fifth of the city. The fire started during the day, when a caulker's fire spread from a ship being caulked to the surrounding buildings. The blaze spread through the city in three branches: one descended toward the sea, another turned from Zeyrek toward Atpazarı, and the others ruined the Büyükkaraman, Küçükkaraman, Sultanmehmet (Fatih), Saraçhane, and Sarıgüzel districts. The sultan, his viziers, the bostancıs, and the Janissaries could do little but watch. The most beautiful districts of Istanbul were ruined: the Yeniodalar and Mollagürani districts, the area from the Fener gate to Sultanselim, the Mesihpaşa, Bali Pasha, and Lutfi Pasha mosques, the Şah-ı Huban Palace, the stretch from Unkapanı to Atpazarı, the Bostanzade houses, and the Sofular Bazaar. The fire, which lasted for 30 hours, was only extinguished after the wind stopped. 
The war against Safavid Iran Murad IV's reign is most notable for the Ottoman–Safavid War (1623–39) against Persia (today Iran), in which Ottoman forces managed to conquer Azerbaijan, occupying Tabriz, Hamadan, and capturing Baghdad in 1638. The Treaty of Zuhab that followed the war generally reconfirmed the borders as agreed by the Peace of Amasya, with Eastern Armenia, Eastern Georgia, Azerbaijan, and Dagestan staying Persian, while Western Armenia and Western Georgia stayed Ottoman. Mesopotamia was irrevocably lost for the Persians. The borders fixed as a result of the war are more or less the same as the present border line between Turkey, Iraq and Iran. During the siege of Baghdad in 1638, the city held out for forty days but was compelled to surrender. Murad IV himself commanded the Ottoman army in the last years of the war. Relations with the Mughal Empire While he was encamped in Baghdad, Murad IV is known to have met ambassadors of the Mughal Emperor Shah Jahan, Mir Zarif and Mir Baraka, who presented 1000 pieces of finely embroidered cloth and even armor. Murad IV gave them the finest weapons, saddles and kaftans and ordered his forces to accompany the Mughals to the port of Basra, where they set sail to Thatta and finally Surat. Architecture Murad IV put emphasis on architecture and in his period many monuments were erected. The Baghdad Kiosk, built
in 1635, and the Revan Kiosk, built in 1638 in Yerevan, were both built in the local styles. Some of the others include the Kavak Sarayı pavilion; the Meydanı Mosque; the Bayram Pasha Dervish Lodge, Tomb, Fountain, and Primary School; and the Şerafettin Mosque in Konya. Music and poetry Murad IV wrote many poems. He used the "Muradi" pen name for his poems. He also liked testing people with riddles. Once he wrote a poetic riddle and announced that whoever came with the correct answer would get a generous reward. Cihadi Bey, a poet from the Enderun School, gave the correct answer and was promoted. Murad IV was also a composer, with a composition called "Uzzal Peshrev". Family Consorts Very little is known about the concubines of Murad IV, principally because he did not leave sons who survived his death to reach the throne, but many historians consider Ayşe Sultan as his only consort until the very end of Murad's seventeen-year reign, when a second Haseki appeared in the records. 
It is possible that Murad had only a single concubine until the advent of the second, or that he had a number of concubines but singled out only two as Haseki. Sons Şehzade Ahmed (21 December 1628 – 1639, buried in Ahmed I Mausoleum, Blue Mosque, Istanbul) Şehzade Numan (1628 – 1629, buried in Ahmed I Mausoleum, Blue Mosque, Istanbul) Şehzade Orhan (1629 – 1629, buried in Ahmed I Mausoleum, Blue Mosque, Istanbul) Şehzade Hasan (March 1631 – 1632, buried in Ahmed I Mausoleum, Blue Mosque, Istanbul) Şehzade Suleiman (2 February 1632 – 1635, buried in Ahmed I Mausoleum, Blue Mosque, Istanbul) Şehzade Mehmed (11 August 1633 – 11 January 1640, buried in Ahmed I Mausoleum, Blue Mosque, Istanbul) Şehzade Osman (9 February 1634 – 1635, buried in Ahmed I Mausoleum, Blue Mosque, Istanbul) Şehzade Alaeddin (26 August 1635 – 1637, buried in Ahmed I Mausoleum, Blue Mosque, Istanbul) Şehzade Selim (1637 – 1640, buried in Ahmed I Mausoleum, Blue Mosque, Istanbul) Şehzade Mahmud (15 May 1638 – 1638, buried in Ahmed I Mausoleum, Blue Mosque, Istanbul) Daughters Murad had several daughters, among whom were: Kaya Sultan (1633–1659, buried in Mustafa I Mausoleum, Hagia Sophia Mosque, Istanbul), married August 1644, Melek Ahmed Pasha; Safiye Sultan (buried in
3: Ballistics (2003) (military-themed color illustration and CG art book featuring female characters with guns) Intron Depot 4: Bullets (2004) (color illustration art book collecting his work between 1995 and 1999) Intron Depot 5: Battalion (2012) (game & animation artwork covering the period 2001–2009) Intron Depot 6: Barb Wire 01 (2013) (illustrations for novels 2007–2010) Intron Depot 7: Barb Wire 02 (2013) (illustrations for novels 2007–2010) Intron Depot 8: Bomb Bay (2018) (illustrations 1992-2009) Intron Depot 9: Barrage Fire (2019) (illustrations 1998-2017) Intron Depot 10: Bloodbard (2020) (illustrations 2004-2019) Intron Depot 11: Bailey Bridge (2020) (illustrations 2012-2014) Kokin Toguihime Zowshi Shu (2009) Pieces 1 (2009) Pieces 2: Phantom Cats (2010) Pieces 3: Wild Wet Quest (2010) Pieces 4: Hell Hound 01 (2010) Pieces 5: Hell Hound 02 (2011) Pieces 6: Hell Cat (2011) Pieces 7: Hell Hound 01 & 02 Miscellaneous Work + α (2011) Pieces 8: Wild Wet West (2012) Pieces 9: Kokon Otogizoshi Shu Hiden (2012) Pieces GEM 01: The Ghost in The Shell Data + α (2014) Pieces GEM 02: Neuro Hard Bee Planet (2015) Pieces GEM 03: Appleseed Drawings (2016) W-Tails Cat 1 (2012) W-Tails Cat 2 (2013) W-Tails Cat 3 (2016) Greaseberries 1 (2014) Greaseberries 2 (2014) Greaseberries 3 (2018) Greaseberries 4 (2019) Greaseberries Rough (2019) Galgrease Galgrease (published in Uppers Magazine, 2002) is the collected name of several erotic manga and poster books by Shirow. The name comes from the fact that the women depicted often look "greased". 
The first series of Galgrease booklets included four issues each in the following settings: Wild Wet West (Wild West-themed) Hellhound (Horror-themed) Galhound (Near-future science fiction–themed) The second series included another run of 12 booklets in the following worlds: Wild Wet Quest (A Tomb Raider or Indiana Jones–style sequel to Wild Wet West) Hellcat (Pirate-themed) Galhound 2 (Near-future science fiction–themed) After each regular series, there were one or more bonus poster books that revisited the existing characters and settings. Minor works "Areopagus Arther" (1980), published in ATLAS (dōjinshi) "Yellow Hawk" (1981), published in ATLAS (dōjinshi) "Colosseum Pick" (1982), published in Funya (dōjinshi) "Pursuit (Manga)" (1982), published in Kintalion (dōjinshi) "Opional Orientation" (1984), published in ATLAS (dōjinshi) "Battle on Mechanism" (1984), published in ATLAS (dōjinshi) "Metamorphosis in Amazoness" (1984), published in ATLAS (dōjinshi) "Arice in Jargon" (1984), published in ATLAS (dōjinshi) "Bike Nut" (1985), published in Dorothy (dōjinshi) "Gun Dancing" (1986), published in Young Magazine Kaizokuban "Colosseum Pick" (1990), published in Comic Fusion Atpas (dōjinshi) Other Design of the MAPP1-SM mouse series (2002, commissioned by Elecom) Pandora in the Crimson Shell: Ghost Urn (2012), original concept Design of the EHP-SH1000 and EHP-SL100 headphones (2016, commissioned by Elecom) Adaptations Anime Film Ghost in the Shell (1995) by Mamoru Oshii Ghost in the Shell 2: Innocence (2004) by Mamoru Oshii Appleseed (2004) by Shinji Aramaki Ghost in the Shell: Stand Alone Complex - Solid State Society (2006) by Kenji Kamiyama Appleseed Ex Machina (2007) by Shinji Aramaki and John Woo Appleseed Alpha (2014) by Shinji Aramaki and Joseph Chou Kōkaku no Pandora - Ghost Urn (2015) by Munenori Nawa Ghost in the Shell: The New Movie (2016) by Kazuya Nomura
handle squeezed between the legs, and the far end held with one hand. Some sawists play standing, with the handle between the knees and the blade sticking out in front of them. The saw is usually played with the serrated edge, or "teeth", facing the body, though some players face them away. Some saw players file down the teeth, which makes no discernible difference to the sound. Many saw players, especially professionals, use a handle, called a Tip-Handle or a Cheat, at the tip of the saw for easier bending and higher virtuosity. To sound a note, a sawist first bends the blade into an S-curve. The parts of the blade that are curved are damped from vibration, and do not sound. At the center of the S-curve a section of the blade remains relatively flat. This section, the "sweet spot", can vibrate across the width of the blade, producing a distinct pitch: the wider the section of blade, the lower the sound. Sound is usually created by drawing a bow across the back edge of the saw at the sweet spot, or sometimes by striking the sweet spot with a mallet. The sawist controls the pitch by adjusting the S-curve, making the sweet spot travel up the blade (toward a thinner width) for a higher pitch, or toward the handle for a lower pitch. Harmonics can be created by playing at varying distances on either side of the sweet spot. Sawists can add vibrato by shaking one of their legs or by wobbling the hand that holds the tip of the blade. Once a sound is produced, it will sustain for quite a while, and can be carried through several notes of a phrase. On occasion the musical saw is called for in orchestral music, but orchestral percussionists are seldom also sawists. If a note outside of the saw's range is called for, an electric guitar with a slide can be substituted. Types Sawists often use standard wood-cutting saws, although special musical saws are also made. 
As compared with wood-cutting saws, the blades of musical saws are generally wider, for range, and longer, for finer control. They do not have set or sharpened teeth, and may have grain running parallel to the back edge of the saw, rather than parallel to the teeth. Some musical saws are made with thinner metal, to increase flexibility, while others are made thicker, for a richer tone, longer sustain, and stronger harmonics. A typical musical saw is wide at the handle end and wide at the tip. Such a saw will generally produce about two octaves, regardless of length. A bass saw may be over at the handle and produce about two-and-a-half octaves. There are also musical saws with 3–4 octaves range, and new improvements have resulted in as much as 5 octaves note range. Two-person saws, also called "misery whips", can also be played, though with less virtuosity, and they produce an octave or less of range. Most sawists use cello or violin bows, using violin rosin, but some may use improvised home-made bows, such as a wooden dowel. Producers Musical saws have been produced for over a century, primarily in the United States, but also in Scandinavia, Germany, France (Lame sonore) and Asia. United States In the early 1900s, there were at least ten companies in the United States manufacturing musical saws. These saws ranged from the familiar steel variety to gold-plated masterpieces worth hundreds of dollars. However, with the start of World War II the demand for metals made the manufacture of saws too expensive and many of these companies went out of business. By the year 2000, only three companies in the United States (Mussehl & Westphal, Charlie Blacklock, and Wentworth) were making saws. In 2012, a company called Index Drums started producing a saw that had a built-in transducer in the handle, called the "JackSaw". 
Outside the United States Outside the United States, makers of musical saws include Bahco, makers of the limited edition Stradivarius, Alexis in France, Feldmann and Stövesandt in Germany, Music Blade in Greece and Thomas Flinn & Company in the United Kingdom, based in Sheffield, who produce three different sized musical saws, as well as accessories. Events, championships and world records The International Musical Saw Association (IMSA) produces an annual International Musical Saw Festival (including a "Saw-Off" competition) every August in Santa Cruz and Felton, California. An International Musical Saw Festival is held every other summer in New York City, produced by Natalia Paruz. Paruz also produced a musical saw festival in Israel. There are also annual saw festivals in Japan and China. A Guinness World Record for the largest musical-saw ensemble was established July 18, 2009, at the annual NYC Musical Saw Festival. Organized by Paruz, 53 musical saw players performed together. In 2011 a World Championship took place in Jelenia Góra, Poland. Winners: 1. Gladys Hulot (France), 2. Katharina Micada (Germany), 3. Tom Fink (Germany). Performers People notable for playing the musical saw. Natalia Paruz, also known as the "Saw Lady", plays the musical saw in movie soundtracks, in television commercials, with orchestras internationally, and is the organizer of international musical saw festivals in New York City and Israel. She was a judge at the musical saw festival in France and she played the saw in the off-Broadway show 'Sawbones'. The December 3, 2011 crossword puzzle of The Washington Post included Paruz in a clue: Down 5, "Instrument played by Natalia Paruz". Mara Carlyle, a London-based singer/songwriter, often performs using the musical saw, and the instrument features on her albums The Lovely and Floreat. 
David Coulter, multi-instrumentalist, producer and music supervisor; ex-member of Test Dept and The Pogues, has played musical saw live, in films, on TV, and on stages around the world, and on numerous albums with Damon Albarn, Gorillaz, and Tom Waits, among others. He has played on many film scores, including Is Anybody There? (2008) and It's a Boy Girl Thing (2006), and has featured on TV soundtracks and theme tunes, most recently for Psychoville and episodes of Wallander. Janeen Rae Heller played the saw in four television guest appearances: The Tracey Ullman Show (1989), Quantum Leap (1990), and Home Improvement (1992 and 1999). She has also performed on albums such as Michael Hedges' The Road to Return in 1994 and Rickie Lee Jones's Ghostyhead in 1997. Mio Higashino, based in Osaka, Japan, won first place in the 42nd International Musical Saw Festival. Mio performs in Japan as part of the two-member group Mollen. Charles Hindmarsh, The Yorkshire Musical Saw Player, has played the musical saw throughout the UK. Kev Hopper, formerly the bass guitarist in the 1980s band Stump, made an EP titled Saurus in 2002 featuring six original saw tunes. Christine Johnston (under the stage name Eve Kransky) of The Kransky Sisters plays the musical saw alongside other traditional and improvised instruments. Julian Koster of the band Neutral Milk Hotel played the singing saw, along with other instruments, in the band and currently plays the saw in his solo project, The Music Tapes. In 2008, he released The Singing Saw at Christmastime. He also writes the podcast The Orbiting Human Circus (of the Air) which prominently features singing saws in the story. Katharina Micada plays
two-and-a-half octaves. There are also musical saws with a 3–4 octave range, and new improvements have extended the range to as much as 5 octaves. Two-person saws, also called "misery whips", can also be played, though with less virtuosity, and they produce an octave or less of range. Most sawists use cello or violin bows with violin rosin, but some use improvised home-made bows, such as a wooden dowel.

Producers
Musical saws have been produced for over a century, primarily in the United States, but also in Scandinavia, Germany, France (Lame sonore) and Asia.

United States
In the early 1900s, there were at least ten companies in the United States manufacturing musical saws. These saws ranged from the familiar steel variety to gold-plated masterpieces worth hundreds of dollars. However, with the start of World War II the demand for metals made the manufacture of saws too expensive, and many of these companies went out of business. By the year 2000, only three companies in the United States (Mussehl & Westphal, Charlie Blacklock, and Wentworth) were making saws. In 2012, a company called Index Drums started producing a saw with a built-in transducer in the handle, called the "JackSaw".

Outside the United States
Makers of musical saws outside the United States include Bahco, makers of the limited-edition Stradivarius; Alexis in France; Feldmann and Stövesandt in Germany; Music Blade in Greece; and Thomas Flinn & Company in Sheffield, United Kingdom, who produce three different-sized musical saws as well as accessories.

Events, championships and world records
The International Musical Saw Association (IMSA) produces an annual International Musical Saw Festival (including a "Saw-Off" competition) every August in Santa Cruz and Felton, California. An International Musical Saw Festival is held every other summer in New York City, produced by Natalia Paruz. Paruz also produced a musical saw festival in Israel.
There are also annual saw festivals in Japan and China. A Guinness World Record for the largest musical-saw ensemble was established on July 18, 2009, at the annual NYC Musical Saw Festival. Organized by Paruz, 53 musical saw players performed together. In 2011 a World Championship took place in Jelenia Góra, Poland. The winners were 1. Gladys Hulot (France), 2. Katharina Micada (Germany), and 3. Tom Fink (Germany).

Performers
People notable for playing the musical saw.
Natalia Paruz, also known as the "Saw Lady", plays the musical saw in movie soundtracks, in television commercials, and with orchestras internationally, and is the organizer of international musical saw festivals in New York City and Israel. She was a judge at the musical saw festival in France, and she played the saw in the off-Broadway show Sawbones. The December 3, 2011, crossword puzzle of The Washington Post featured Paruz in a clue: 5 Down, "Instrument played by Natalia Paruz".
Mara Carlyle is a London-based singer/songwriter who often performs using the musical saw; the instrument features on her albums The Lovely and Floreat.
David Coulter, multi-instrumentalist, producer and music supervisor, and ex-member of Test Dept and The Pogues, has played musical saw live, in films, on TV and stages around the world, and on numerous albums with Damon Albarn, Gorillaz, and Tom Waits, among others. He has played on many film scores, including Is Anybody There? (2008) and It's a Boy Girl Thing (2006), and has featured on TV soundtracks and theme tunes, most recently for Psychoville and episodes of Wallander.
Janeen Rae Heller played the saw in four television guest appearances: The Tracey Ullman Show (1989), Quantum Leap (1990), and Home Improvement (1992 and 1999). She has also performed on albums such as Michael Hedges' The Road to Return in 1994 and Rickie Lee Jones's Ghostyhead in 1997.
Mio Higashino, based in Osaka, Japan, won first place in the 42nd International Musical Saw Festival.
Mio performs in Japan as part of the two-member group Mollen.
Charles Hindmarsh, The Yorkshire Musical Saw Player, has played the musical saw throughout the UK.
Kev Hopper, formerly the bass guitarist in the 1980s band Stump, made an EP titled Saurus in 2002 featuring six original saw tunes.
Christine Johnston (under the stage name Eve Kransky) of The Kransky Sisters plays the musical saw alongside other traditional and improvised instruments.
Julian Koster of the band Neutral Milk Hotel played the singing saw, along with other instruments, in the band, and currently plays the saw in his solo project, The Music Tapes. In 2008, he released The Singing Saw at Christmastime. He also writes the podcast The Orbiting Human Circus (of the Air), which prominently features singing saws in the story.
Katharina Micada plays the musical saw on cabaret stages and with symphony orchestras such as the Berlin Philharmonic and the London Philharmonic. A singer, she is one of the few players who can sing and play the saw simultaneously and in pitch. She has played in TV and radio shows and on film and CD recordings.
Jamie Muir of the progressive rock band King Crimson briefly uses a musical saw on the song "Easy Money" from the album Larks' Tongues in Aspic.
Bonnie Paine, singer and multi-instrumentalist from Tahlequah, Oklahoma, and co-founder of the Colorado folk-rock group Elephant Revival, has performed on the musical saw as a member of the band.
Angela Perley and the Howlin' Moons, an American rock band from Columbus, Ohio, features singer/guitarist Angela Perley, who performs the musical saw on their recorded albums and at their live shows.
Quinta (a.k.a. Kath Mann), a London-based multi-instrumentalist and composer, has collaborated with many artists on the musical saw, including Bat for Lashes, Radiohead's Philip Selway, and The Paper Cinema.
Thomas Jefferson Scribner was a familiar figure on the streets of Santa Cruz, California, during the 1970s, playing the musical saw. He performed on a variety of recordings and appeared in folk music festivals in the United States and Canada during the 1970s. His work as a labour organizer and member of the Industrial Workers of the World is documented in the 1979 film The Wobblies. Canadian composer/saw player Robert Minden pays tribute to him on his website. Musician and songwriter Utah Phillips recorded a song referencing Scribner, "The Saw Playing Musician", on the album Fellow Workers with Ani DiFranco. In 1978, artist Marghe McMahon was inspired to create a bronze statue of Scribner playing the musical saw, which sits in downtown Santa Cruz.
That 1 Guy, an American musician, performs using homemade instruments.
Jim Turner released The Well-Tempered Saw on Owl Records in 1971.
current (DC) power transmission. Opto-isolators keep MIDI devices electrically separated from their MIDI connections, which prevents ground loops and protects equipment from voltage spikes. There is no error detection capability in MIDI, so the maximum cable length is set at 15 meters (50 feet) to limit interference. Most devices do not copy messages from their input to their output port. A third type of port, the "thru" port, emits a copy of everything received at the input port, allowing data to be forwarded to another instrument in a "daisy chain" arrangement. Not all devices contain thru ports, and devices that lack the ability to generate MIDI data, such as effects units and sound modules, may not include out ports.

Management devices
Each device in a daisy chain adds delay to the system. This is avoided with a MIDI thru box, which contains several outputs that provide an exact copy of the box's input signal. A MIDI merger is able to combine the input from multiple devices into a single stream, and allows multiple controllers to be connected to a single device. A MIDI switcher allows switching between multiple devices, and eliminates the need to physically repatch cables. MIDI patch bays combine all of these functions. They contain multiple inputs and outputs, and allow any combination of input channels to be routed to any combination of output channels. Routing setups can be created using computer software, stored in memory, and selected by MIDI program change commands. This enables the devices to function as standalone MIDI routers in situations where no computer is present. MIDI patch bays also clean up any skewing of MIDI data bits that occurs at the input stage. MIDI data processors are used for utility tasks and special effects. These include MIDI filters, which remove unwanted MIDI data from the stream, and MIDI delays, effects that send a repeated copy of the input data at a set time.
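The thru and merge behaviors described above can be modeled in a few lines. This is an illustrative sketch only: the function names are invented, and messages are represented as Python bytes objects rather than real port I/O. The one real constraint it encodes is that a merger must interleave whole messages, never the bytes of one message inside another.

```python
import itertools

def thru_box(message, outputs):
    # A thru box fans one input message out to every output unchanged.
    for port in outputs:
        port.append(bytes(message))

def merge(stream_a, stream_b):
    # A merger interleaves two streams of *complete* messages.
    merged = []
    for a, b in itertools.zip_longest(stream_a, stream_b):
        if a is not None:
            merged.append(a)
        if b is not None:
            merged.append(b)
    return merged
```

A patch bay generalizes both operations: any set of inputs merged and copied to any set of outputs.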
Interfaces
A computer MIDI interface's main function is to match clock speeds between the MIDI device and the computer. Some computer sound cards include a standard MIDI connector, whereas others connect by any of various means that include the D-subminiature DA-15 game port, USB, FireWire, Ethernet or a proprietary connection. The increasing use of USB connectors in the 2000s has led to the availability of MIDI-to-USB data interfaces that can transfer MIDI channels to USB-equipped computers. Some MIDI keyboard controllers are equipped with USB jacks, and can be plugged into computers that run music software. MIDI's serial transmission leads to timing problems. A three-byte MIDI message requires nearly 1 millisecond for transmission. Because MIDI is serial, it can only send one event at a time. If an event is sent on two channels at once, the event on the second channel cannot transmit until the first one is finished, and so is delayed by 1 ms. If an event is sent on all channels at the same time, the last channel's transmission is delayed by as much as 16 ms. This contributed to the rise of MIDI interfaces with multiple in- and out-ports, because timing improves when events are spread between multiple ports as opposed to multiple channels on the same port. The term "MIDI slop" refers to audible timing errors that result when MIDI transmission is delayed.

Controllers
There are two types of MIDI controllers: performance controllers that generate notes and are used to perform music, and controllers that may not send notes but transmit other types of real-time events. Many devices are some combination of the two types. Keyboards are by far the most common type of MIDI controller. MIDI was designed with keyboards in mind, and any controller that is not a keyboard is considered an "alternative" controller.
This was seen as a limitation by composers who were not interested in keyboard-based music, but the standard proved flexible, and MIDI compatibility was introduced to other types of controllers, including guitars, stringed and wind instruments, drums, and specialized and experimental controllers. Other controllers include drum controllers and wind controllers, which can emulate the playing of drum kits and wind instruments, respectively. Nevertheless, the keyboard-style event model for which MIDI was designed does not fully capture the capabilities of other instruments; Jaron Lanier cites the standard as an example of technological "lock-in" that unexpectedly limited what was possible to express. Some of these shortcomings, such as the lack of per-note pitch bend, are to be addressed in MIDI 2.0, described below. Software synthesizers offer great power and versatility, but some players feel that division of attention between a MIDI keyboard and a computer keyboard and mouse robs some of the immediacy from the playing experience. Devices dedicated to real-time MIDI control provide an ergonomic benefit, and can provide a greater sense of connection with the instrument than an interface that is accessed through a mouse or a push-button digital menu. Controllers may be general-purpose devices that are designed to work with a variety of equipment, or they may be designed to work with a specific piece of software. Examples of the latter include Akai's APC40 controller for Ableton Live, and Korg's MS-20ic controller, a reproduction of their MS-20 analog synthesizer. The MS-20ic controller includes patch cables that can be used to control signal routing in their virtual reproduction of the MS-20 synthesizer, and can also control third-party devices.

Instruments
A MIDI instrument contains ports to send and receive MIDI signals, a CPU to process those signals, an interface that allows user programming, audio circuitry to generate sound, and controllers.
The operating system and factory sounds are often stored in a read-only memory (ROM) unit. A MIDI instrument can also be a stand-alone module (without a piano-style keyboard) consisting of a General MIDI sound board (GM, GS and XG) and onboard editing, including transposing/pitch changes, MIDI instrument changes, and adjustment of volume, pan, reverb levels and other MIDI controllers. Typically, the MIDI module includes a large screen, so the user can view information for the currently selected function. Features can include scrolling lyrics (usually embedded in a MIDI file or karaoke MIDI), playlists, a song library, and editing screens. Some MIDI modules include a harmonizer and the ability to play back and transpose MP3 audio files.

Synthesizers
Synthesizers may employ any of a variety of sound generation techniques. They may include an integrated keyboard, or may exist as "sound modules" or "expanders" that generate sounds when triggered by an external controller, such as a MIDI keyboard. Sound modules are typically designed to be mounted in a 19-inch rack. Manufacturers commonly produce a synthesizer in both standalone and rack-mounted versions, and often offer the keyboard version in a variety of sizes.

Samplers
A sampler can record and digitize audio, store it in random-access memory (RAM), and play it back. Samplers typically allow a user to edit a sample and save it to a hard disk, apply effects to it, and shape it with the same tools that synthesizers use. They also may be available in either keyboard or rack-mounted form. Instruments that generate sounds through sample playback, but have no recording capabilities, are known as "ROMplers". Samplers did not become established as viable MIDI instruments as quickly as synthesizers did, due to the expense of memory and processing power at the time. The first low-cost MIDI sampler was the Ensoniq Mirage, introduced in 1984.
MIDI samplers are typically limited by displays that are too small to use to edit sampled waveforms, although some can be connected to a computer monitor.

Drum machines
Drum machines typically are sample playback devices that specialize in drum and percussion sounds. They commonly contain a sequencer that allows the creation of drum patterns and their arrangement into a song. There often are multiple audio outputs, so that each sound or group of sounds can be routed to a separate output. The individual drum voices may be playable from another MIDI instrument, or from a sequencer.

Workstations and hardware sequencers
Sequencer technology predates MIDI. Analog sequencers use CV/Gate signals to control pre-MIDI analog synthesizers. MIDI sequencers typically are operated by transport features modeled after those of tape decks. They are capable of recording MIDI performances and arranging them on individual tracks, following a multitrack recording paradigm. Music workstations combine controller keyboards with an internal sound generator and a sequencer. These can be used to build complete arrangements and play them back using their own internal sounds, and function as self-contained music production studios. They commonly include file storage and transfer capabilities.

Effects devices
Some effects units can be remotely controlled via MIDI. For example, the Eventide H3000 Ultra-harmonizer allows such extensive MIDI control that it is playable as a synthesizer. The Drum Buddy, a pedal-format drum machine, has a MIDI connection so that it can have its tempo synchronized with a looper pedal or time-based effects such as delay.

Technical specifications
MIDI messages are made up of 8-bit words (commonly called bytes) that are transmitted serially at a rate of 31.25 kbit/s. This rate was chosen because it is an exact division of 1 MHz, the operational speed of many early microprocessors.
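The rate above, and the per-message transmission times quoted earlier under Interfaces, can be sanity-checked with a few lines of arithmetic. Each byte costs ten bits on the wire once its start and stop bits are counted; the helper name below is invented for illustration.

```python
BAUD = 31250        # MIDI line rate in bits per second
BITS_PER_BYTE = 10  # 8 data bits framed by one start and one stop bit

# 31.25 kbit/s is exactly 1 MHz divided by 32
assert 1_000_000 / BAUD == 32

def tx_ms(n_bytes):
    """Transmission time for n_bytes over one MIDI link, in milliseconds."""
    return n_bytes * BITS_PER_BYTE / BAUD * 1000

tx_ms(3)       # one three-byte message: 0.96 ms
tx_ms(3 * 16)  # the same event sent on all 16 channels: 15.36 ms
```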
The first bit of each word identifies whether the word is a status byte or a data byte, and is followed by seven bits of information. A start bit and a stop bit are added to each byte for framing purposes, so a MIDI byte requires ten bits for transmission. A MIDI link can carry sixteen independent channels of information. The channels are numbered 1–16, but their actual corresponding binary encoding is 0–15. A device can be configured to only listen to specific channels and ignore the messages sent on other channels ("Omni Off" mode), or it can listen to all channels, effectively ignoring the channel address ("Omni On"). An individual device may be monophonic (the start of a new "note-on" MIDI command implies the termination of the previous note) or polyphonic (multiple notes may sound at once, until the polyphony limit of the instrument is reached, the notes reach the end of their decay envelope, or explicit "note-off" MIDI commands are received). Receiving devices can typically be set to all four combinations of "omni off/on" versus "mono/poly" modes.

Messages
A MIDI message is an instruction that controls some aspect of the receiving device. A MIDI message consists of a status byte, which indicates the type of the message, followed by up to two data bytes that contain the parameters. MIDI messages can be channel messages sent on only one of the 16 channels and monitored only by devices on that channel, or system messages that all devices receive. Each receiving device ignores data not relevant to its function. There are five types of message: Channel Voice, Channel Mode, System Common, System Real-Time, and System Exclusive. Channel Voice messages transmit real-time performance data over a single channel.
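The byte layout described above is compact enough to show directly. The sketch below (a hypothetical helper, not any real MIDI library's API) assembles a channel message from the type nibble, the channel, and its 7-bit data bytes, and checks the status/data distinction carried by the high bit.

```python
def channel_message(status_nibble, channel, *data):
    """Assemble a channel message: the high nibble selects the message type,
    the low nibble carries the channel (1-16, encoded on the wire as 0-15)."""
    assert 1 <= channel <= 16
    assert all(d < 0x80 for d in data)  # data bytes are 7-bit values
    return bytes([(status_nibble << 4) | (channel - 1), *data])

note_on  = channel_message(0x9, 1, 60, 64)  # note-on, channel 1, middle C, velocity 64
ctrl_chg = channel_message(0xB, 1, 7, 100)  # control change: CC 7 (volume) set to 100
prog_chg = channel_message(0xC, 1, 40)      # program change carries one data byte

# Status bytes have the high bit set; data bytes never do.
assert note_on[0] & 0x80 and not any(b & 0x80 for b in note_on[1:])
```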
Examples include "note-on" messages, which contain a MIDI note number that specifies the note's pitch, a velocity value that indicates how forcefully the note was played, and the channel number; "note-off" messages that end a note; program change messages that change a device's patch; and control changes that allow adjustment of an instrument's parameters. MIDI notes are numbered from 0 to 127, assigned to C−1 through G9. This corresponds to a range of 8.175799 to 12543.85 Hz (assuming equal temperament and 440 Hz A4) and extends beyond the 88-note piano range from A0 to C8.

System Exclusive messages
System Exclusive (SysEx) messages are a major reason for the flexibility and longevity of the MIDI standard. Manufacturers use them to create proprietary messages that control their equipment more thoroughly than standard MIDI messages could, and they can include functionality beyond what the MIDI standard provides. SysEx messages are addressed to a specific device in a system and are ignored by all other devices. Each manufacturer has a unique identifier that is included in its SysEx messages, which helps ensure that only the targeted device responds to the message. Many instruments also include a SysEx ID setting, so a controller can address two devices of the same model independently.

Implementation chart
Devices typically do not respond to every type of message defined by the MIDI specification. The MIDI implementation chart was standardized by the MMA as a way for users to see what specific capabilities an instrument has, and how it responds to messages. A specific MIDI implementation chart is usually published for each MIDI device within the device documentation.

Electrical specifications
The MIDI 1.0 specification for the electrical interface is based on a fully isolated current loop.
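The frequency range quoted above is just the equal-temperament formula anchored at A4 (440 Hz), which is MIDI note 69; a one-line sketch:

```python
def midi_to_hz(note, a4=440.0):
    # Equal temperament: each semitone is a factor of 2**(1/12),
    # and A4 (440 Hz) is MIDI note number 69.
    return a4 * 2 ** ((note - 69) / 12)

# midi_to_hz(0)   -> about 8.1758 Hz    (C-1, the lowest MIDI note)
# midi_to_hz(60)  -> about 261.63 Hz    (middle C)
# midi_to_hz(127) -> about 12543.85 Hz  (G9, the highest MIDI note)
```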
The MIDI out port nominally drives +5 volts through a 220 ohm resistor, out through pin 4 on the MIDI out DIN connector, in on pin 4 of the receiving device's MIDI in DIN connector, through a 220 ohm protection resistor and the LED of an opto-isolator. The current then returns via pin 5 on the MIDI in port to the originating device's MIDI out port pin 5, again with a 220 ohm resistor in the path, giving a nominal current of about 5 milliamperes. Despite the cable's appearance, there is no conductive path between the two MIDI devices, only an optically isolated one. Properly designed MIDI devices are relatively immune to ground loops and similar interference. The baud rate on this system is 31,250 symbols per second, with logic 0 being current on. The MIDI specification provides for a ground "wire" and a braid or foil shield, connected on pin 2, protecting the two signal-carrying conductors on pins 4 and 5. Although the MIDI cable is supposed to connect pin 2 and the braid or foil shield to chassis ground, it should do so only at the MIDI out port; the MIDI in port should leave pin 2 unconnected and isolated. Some large manufacturers of MIDI devices use modified MIDI in-only DIN 5-pin sockets with the metallic conductors intentionally omitted at pin positions 1, 2, and 3 so that the maximum voltage isolation is obtained. Extensions MIDI's
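The ~5 mA figure follows from Ohm's law across the three 220 ohm resistors in the loop, minus the opto-isolator LED's forward drop. The 1.5 V drop below is an assumed typical value for illustration, not a number from the specification:

```python
V_SUPPLY = 5.0      # nominal +5 V at the MIDI out port
R_LOOP = 3 * 220.0  # three 220-ohm resistors in series around the loop
V_LED = 1.5         # assumed forward voltage of the opto-isolator LED

loop_current_ma = (V_SUPPLY - V_LED) / R_LOOP * 1000
# about 5.3 mA, consistent with the nominal ~5 mA figure in the text
```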
played back. A file format that stores and exchanges the data is also defined. Advantages of MIDI include small file size, ease of modification and manipulation, and a wide choice of electronic instruments and synthesizer or digitally sampled sounds. A MIDI recording of a performance on a keyboard could sound like a piano or other keyboard instrument; however, since MIDI records the messages and information about their notes and not the specific sounds, this recording could be changed to many other sounds, ranging from synthesized or sampled guitar or flute to full orchestra. A MIDI recording is not an audio signal, unlike a sound recording made with a microphone. Prior to the development of MIDI, electronic musical instruments from different manufacturers could generally not communicate with each other. This meant that a musician could not, for example, plug a Roland keyboard into a Yamaha synthesizer module. With MIDI, any MIDI-compatible keyboard (or other controller device) can be connected to any other MIDI-compatible sequencer, sound module, drum machine, synthesizer, or computer, even if they are made by different manufacturers. MIDI technology was standardized in 1983 by a panel of music industry representatives, and is maintained by the MIDI Manufacturers Association (MMA). All official MIDI standards are jointly developed and published by the MMA in Los Angeles and the MIDI Committee of the Association of Musical Electronics Industry (AMEI) in Tokyo. In 2016, the MMA established The MIDI Association (TMA) to support a global community of people who work, play, or create with MIDI.

History
In the early 1980s, there was no standardized means of synchronizing electronic musical instruments manufactured by different companies. Manufacturers had their own proprietary standards to synchronize instruments, such as CV/gate, DIN sync and Digital Control Bus (DCB).
Roland founder Ikutaro Kakehashi felt the lack of standardization was limiting the growth of the electronic music industry. In June 1981, he proposed developing a standard to Oberheim Electronics founder Tom Oberheim, who had developed his own proprietary interface, the Oberheim System. Kakehashi felt the Oberheim System was too cumbersome, and spoke to Sequential Circuits president Dave Smith about creating a simpler, cheaper alternative. While Smith discussed the concept with American companies, Kakehashi discussed it with Japanese companies Yamaha, Korg and Kawai. Representatives from all companies met to discuss the idea in October. Initially, only Sequential Circuits and the Japanese companies were interested. Using Roland's DCB as a basis, Smith and Sequential Circuits engineer Chet Wood devised a universal interface to allow communication between equipment from different manufacturers. Smith and Wood proposed this standard in a paper, Universal Synthesizer Interface, at the Audio Engineering Society show in October 1981. The standard was discussed and modified by representatives of Roland, Yamaha, Korg, Kawai, and Sequential Circuits. Kakehashi favored the name Universal Musical Interface (UMI), pronounced you-me, but Smith felt this was "a little corny". However, he liked the use of "instrument" instead of "synthesizer", and proposed the name Musical Instrument Digital Interface (MIDI). Moog Music founder Robert Moog announced MIDI in the October 1982 issue of Keyboard. At the 1983 Winter NAMM Show, Smith demonstrated a MIDI connection between Prophet 600 and Roland JP-6 synthesizers. The MIDI specification was published in August 1983. The MIDI standard was unveiled by Kakehashi and Smith, who received Technical Grammy Awards in 2013 for their work. In 1982, the first instruments were released with MIDI, the Roland Jupiter-6 and the Prophet 600. 
In 1983, the first MIDI drum machine, the Roland TR-909, and the first MIDI sequencer, the Roland MSQ-700, were released. The first computers to support MIDI, the NEC PC-88 and PC-98, were released in 1982. The MIDI Manufacturers Association (MMA) was formed following a meeting of "all interested companies" at the 1984 Summer NAMM Show in Chicago. The MIDI 1.0 Detailed Specification was published at the MMA's second meeting, at the 1985 Summer NAMM Show. The standard continued to evolve, adding standardized song files in 1991 (General MIDI), and was adapted to new connection standards such as USB and FireWire. In 2016, the MIDI Association was formed to continue overseeing the standard. An initiative to create a 2.0 standard was announced in January 2019. The MIDI 2.0 standard was introduced at the 2020 Winter NAMM Show.

Impact
MIDI's appeal was originally limited to professional musicians and record producers who wanted to use electronic instruments in the production of popular music. The standard allowed different instruments to communicate with each other and with computers, and this spurred a rapid expansion of the sales and production of electronic instruments and music software. This interoperability allowed one device to be controlled from another, which reduced the amount of hardware musicians needed. MIDI's introduction coincided with the dawn of the personal computer era and the introduction of samplers and digital synthesizers. The creative possibilities brought about by MIDI technology are credited with helping revive the music industry in the 1980s. MIDI introduced capabilities that transformed the way many musicians work. MIDI sequencing makes it possible for a user with no notation skills to build complex arrangements. A musical act with as few as one or two members, each operating multiple MIDI-enabled devices, can deliver a performance similar to that of a larger group of musicians.
The expense of hiring outside musicians for a project can be reduced or eliminated, and complex productions can be realized on a system as small as a synthesizer with integrated keyboard and sequencer. MIDI also helped establish home recording. By performing preproduction in a home environment, an artist can reduce recording costs by arriving at a recording studio with a partially completed song.

Applications

Instrument control
MIDI was invented so that electronic or digital musical instruments could communicate with each other and so that one instrument can control another. For example, a MIDI-compatible sequencer can trigger beats produced by a drum sound module. Analog synthesizers that have no digital component and were built prior to MIDI's development can be retrofitted with kits that convert MIDI messages into analog control voltages. When a note is played on a MIDI instrument, it generates a digital MIDI message that can be used to trigger a note on another instrument. The capability for remote control allows full-sized instruments to be replaced with smaller sound modules, and allows musicians to combine instruments to achieve a fuller sound, or to create combinations of synthesized instrument sounds, such as acoustic piano and strings. MIDI also enables other instrument parameters (volume, effects, etc.) to be controlled remotely. Synthesizers and samplers contain various tools for shaping an electronic or digital sound. Filters adjust timbre, and envelopes automate the way a sound evolves over time after a note is triggered. The frequency of a filter and the envelope attack (the time it takes for a sound to reach its maximum level) are examples of synthesizer parameters, and can be controlled remotely through MIDI. Effects devices have different parameters, such as delay feedback or reverb time.
When a MIDI continuous controller number (CCN) is assigned to one of these parameters, the device responds to any messages it receives that are identified by that number. Controls such as knobs, switches, and pedals can be used to send these messages. A set of adjusted parameters can be saved to a device's internal memory as a patch, and these patches can be remotely selected by MIDI program changes.

Composition
MIDI events can be sequenced with computer software, or in specialized hardware music workstations. Many digital audio workstations (DAWs) are specifically designed to work with MIDI as an integral component. MIDI piano rolls have been developed in many DAWs so that the recorded MIDI messages can be easily modified. These tools allow composers to audition and edit their work much more quickly and efficiently than older solutions, such as multitrack recording, allowed. Because MIDI is a set of commands that create sound, MIDI sequences can be manipulated in ways that prerecorded audio cannot. It is possible to change the key, instrumentation or tempo of a MIDI arrangement, and to reorder its individual sections. The ability to compose ideas and quickly hear them played back enables composers to experiment. Algorithmic composition programs provide computer-generated performances that can be used as song ideas or accompaniment. Some composers may take advantage of the standard, portable set of commands and parameters in MIDI 1.0 and General MIDI (GM) to share musical data files among various electronic instruments. The data composed via sequenced MIDI recordings can be saved as a standard MIDI file (SMF), digitally distributed, and reproduced by any computer or electronic instrument that also adheres to the same MIDI, GM, and SMF standards. MIDI data files are much smaller than corresponding recorded audio files.

Use with computers
The personal computer market stabilized at the same time that MIDI appeared, and computers became a viable option for music production.
In 1983, computers started to play a role in mainstream music production. In the years immediately after the 1983 ratification of the MIDI specification, MIDI features were adapted to several early computer platforms. NEC's PC-88 and PC-98 began supporting MIDI as early as 1982. The Yamaha CX5M introduced MIDI support and sequencing in an MSX system in 1984. The spread of MIDI on personal computers was largely facilitated by Roland Corporation's MPU-401, released in 1984 as the first MIDI-equipped PC sound card, capable of MIDI sound processing and sequencing. After Roland sold MPU sound chips to other sound card manufacturers, it established a universal standard MIDI-to-PC interface. The widespread adoption of MIDI led to computer-based MIDI software being developed. Soon after, a number of platforms began supporting MIDI, including the Apple II Plus, IIe and Macintosh, Commodore 64 and Amiga, Atari ST, Acorn Archimedes, and PC DOS. The Macintosh was a favorite among musicians in the United States, as it was marketed at a competitive price, and it took several years for PC systems to catch up with its efficiency and graphical interface. The Atari ST was preferred in Europe, where Macintoshes were more expensive. The Atari ST had the advantage of MIDI ports that were built directly into the computer. Most music software in MIDI's first decade was published for either the Apple or the Atari. By the time of Windows 3.0's 1990 release, PCs had caught up in processing power and had acquired a graphical interface, and software titles began to be released on multiple platforms. In 2015, Retro Innovations released the first MIDI interface for a Commodore VIC-20, making the computer's four voices available to electronic musicians and retro-computing enthusiasts for the first time. Retro Innovations also makes a MIDI interface cartridge for Tandy Color Computer and Dragon computers.
Chiptune musicians also use retro gaming consoles to compose, produce and perform music using MIDI interfaces. Custom interfaces are available for the Famicom, Nintendo Entertainment System (NES), Nintendo Game Boy and Game Boy Advance, Sega Mega Drive and Sega Genesis. Computer files Standard files The Standard MIDI File (SMF) is a file format that provides a standardized way for music sequences to be saved, transported, and opened in other systems. The standard was developed and is maintained by the MMA, and usually uses a .mid extension. The compact size of these files led to their widespread use in computers, mobile phone ringtones, webpage authoring and musical greeting cards. These files are intended for universal use and include such information as note values, timing and track names. Lyrics may be included as metadata, and can be displayed by karaoke machines. SMFs are created as an export format of software sequencers or hardware workstations. They organize MIDI messages into one or more parallel tracks and time-stamp the events so that they can be played back in sequence. A header contains the arrangement's track count, tempo and an indicator of which of three SMF formats the file uses. A type 0 file contains the entire performance, merged onto a single track, while type 1 files may contain any number of tracks that are performed synchronously. Type 2 files are rarely used and store multiple arrangements, with each arrangement having its own track and intended to be played in sequence. RMID files Microsoft Windows bundles SMFs together with Downloadable Sounds (DLS) in a Resource Interchange File Format (RIFF) wrapper, as RMID files with a .rmi extension. RIFF-RMID has been deprecated in favor of Extensible Music Files (XMF). A MIDI file is not an audio recording. Rather, it is a set of instructions (for example, for pitch or tempo) and can use a thousand times less disk space than the equivalent recorded audio.
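The SMF header information described above occupies a fixed 14-byte chunk at the start of the file. A sketch of parsing it, assuming the documented big-endian layout:

```python
import struct

def parse_smf_header(data: bytes):
    """Parse the 14-byte header chunk of a Standard MIDI File.
    Layout (big-endian): b'MThd', uint32 chunk length (always 6),
    uint16 format (0, 1 or 2), uint16 track count, uint16 division
    (the file's timing resolution)."""
    chunk_type, length = struct.unpack(">4sI", data[:8])
    if chunk_type != b"MThd" or length != 6:
        raise ValueError("not a Standard MIDI File header")
    fmt, ntrks, division = struct.unpack(">HHH", data[8:14])
    return {"format": fmt, "tracks": ntrks, "division": division}

# A type 1 file with 3 synchronous tracks at 480 ticks per quarter note:
header = b"MThd" + struct.pack(">IHHH", 6, 1, 3, 480)
print(parse_smf_header(header))   # {'format': 1, 'tracks': 3, 'division': 480}
```

The format field distinguishes the type 0, 1 and 2 files described in the text; the track data itself follows in separate "MTrk" chunks.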
Due to their tiny file size, fan-made MIDI arrangements became an attractive way to share music online, before the advent of broadband internet access and multi-gigabyte hard drives. The major drawback to this is the wide variation in quality of users' audio cards, and in the actual audio contained as samples or synthesized sound in the card that the MIDI data only refers to symbolically. Even a sound card that contains high-quality sampled sounds can have inconsistent quality from one sampled instrument to another. Early budget-priced cards, such as the AdLib and the Sound Blaster and its compatibles, used a stripped-down version of Yamaha's frequency modulation synthesis (FM synthesis) technology played back through low-quality digital-to-analog converters. The low-fidelity reproduction of these ubiquitous cards was often assumed to somehow be a property of MIDI itself. This created a perception of MIDI as low-quality audio, while in reality MIDI itself contains no sound, and the quality of its playback depends entirely on the quality of the sound-producing device. Software The main advantage of the personal computer in a MIDI system is that it can serve a number of different purposes, depending on the software that is loaded. Multitasking allows simultaneous operation of programs that may be able to share data with each other. Sequencers Sequencing software allows recorded MIDI data to be manipulated using standard computer editing features such as cut, copy and paste and drag and drop. Keyboard shortcuts can be used to streamline workflow, and, in some systems, editing functions may be invoked by MIDI events. The sequencer allows each channel to be set to play a different sound and gives a graphical overview of the arrangement. A variety of editing tools are made available, including a notation display or scorewriter that can be used to create printed parts for musicians.
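Because the recorded data is numeric rather than audio, sequencer edits of this kind reduce to simple arithmetic on note events. An illustrative sketch of two common edits, transposition and grid quantization, using a hypothetical (tick, note, velocity) event format:

```python
# Hypothetical event format: (tick, note number, velocity).
# Transposition shifts note numbers; quantization snaps start times to a grid.

def transpose(events, semitones):
    """Shift every note by the given number of semitones, clamped to 0-127."""
    return [(t, min(127, max(0, n + semitones)), v) for t, n, v in events]

def quantize(events, grid):
    """Snap each event's start time to the nearest multiple of `grid` ticks."""
    return [(round(t / grid) * grid, n, v) for t, n, v in events]

riff = [(3, 60, 100), (122, 64, 90), (247, 67, 90)]   # played slightly off the beat
print(quantize(transpose(riff, 2), 120))
# [(0, 62, 100), (120, 66, 90), (240, 69, 90)]
```

Equivalent edits on recorded audio would require resampling or time-stretching; on MIDI data they are lossless and reversible.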
Tools such as looping, quantization, randomization, and transposition simplify the arranging process. Beat creation is simplified, and groove templates can be used to duplicate another track's rhythmic feel. Realistic expression can be added through the manipulation of real-time controllers. Mixing can be performed, and MIDI can be synchronized with recorded audio and video tracks. Work can be saved, and transported between different computers or studios. Sequencers may take alternate forms, such as drum pattern editors that allow users to create beats by clicking on pattern grids, and loop sequencers such as ACID Pro, which allow MIDI to be combined with prerecorded audio loops whose tempos and keys are matched to each other. Cue-list sequencing is used to trigger dialogue, sound effect, and music cues in stage and broadcast production. Notation software With MIDI, notes played on a keyboard can automatically be transcribed to sheet music. Scorewriting software typically lacks advanced sequencing tools, and is optimized for the creation of a neat, professional printout designed for live instrumentalists. These programs provide support for dynamics and expression markings, chord and lyric display, and complex score styles. Software is available that can print scores in braille. Notation programs include Finale, Encore, Sibelius, MuseScore and Dorico. SmartScore software can produce MIDI files from scanned sheet music. Editor/librarians Patch editors allow users to program their equipment through the computer interface. These became essential with the appearance of complex synthesizers such as the Yamaha FS1R, which contained several thousand programmable parameters, but had an interface that consisted of fifteen tiny buttons, four knobs and a small LCD. 
Digital instruments typically discourage users from experimentation, due to their lack of the feedback and direct control that switches and knobs would provide, but patch editors give owners of hardware instruments and effects devices the same editing functionality that is available to users of software synthesizers. Some editors are designed for a specific instrument or effects device, while other, universal editors support a variety of equipment, and ideally can control the parameters of every device in a setup through the use of System Exclusive messages. Patch librarians have the specialized function of organizing the sounds in a collection of equipment and exchanging entire banks of sounds between an instrument and a computer. In this way the device's limited patch storage is augmented by a computer's much greater disk capacity. Once transferred to the computer, it is possible to share custom patches with other owners of the same instrument. Universal editor/librarians that combine the two functions were once common, and included Opcode Systems' Galaxy and eMagic's SoundDiver. These programs have been largely abandoned with the trend toward computer-based synthesis, although Mark of the Unicorn's (MOTU) Unisyn and Sound Quest's Midi Quest remain available. Native Instruments' Kore was an effort to bring the editor/librarian concept into the age of software instruments. Auto-accompaniment programs Programs that can dynamically generate accompaniment tracks are called auto-accompaniment programs. These create a full band arrangement in a style that the user selects, and send the result to a MIDI sound generating device for playback. The generated tracks can be used as educational or practice tools, as accompaniment for live performances, or as a songwriting aid. Synthesis and sampling Computers can use software to generate sounds, which are then passed through a digital-to-analog converter (DAC) to a power amplifier and loudspeaker system.
The number of sounds that can be played simultaneously (the polyphony) is dependent on the power of the computer's CPU, as are the sample rate and bit depth of playback, which directly affect the quality of the sound. Synthesizers implemented in software are subject to timing issues that are not necessarily present with hardware instruments, whose dedicated operating systems are not subject to interruption from background tasks as desktop operating systems are. These timing issues can cause synchronization problems, and clicks and pops when sample playback is interrupted. Software synthesizers also may exhibit additional latency in their sound generation. The roots of software synthesis go back as far as the 1950s, when Max Mathews of Bell Labs wrote the MUSIC-N programming language, which was capable of non-real-time sound generation. The first synthesizer to run directly on a host computer's CPU was Reality, by Dave Smith's Seer Systems, which achieved a low latency through tight driver integration, and therefore could run only on Creative Labs soundcards. Some systems use dedicated hardware to reduce the load on the host CPU, as with Symbolic Sound Corporation's Kyma System, and the Creamware/Sonic Core Pulsar/SCOPE systems, which power an entire recording studio's worth of instruments, effect units, and mixers. The ability to construct full MIDI arrangements entirely in computer software allows a composer to render a finalized result directly as an audio file. Game music Early PC games were distributed on floppy disks, and the small size of MIDI files made them a viable means of providing soundtracks. Games of the DOS and early Windows eras typically required compatibility with either Ad Lib or Sound Blaster audio cards. These cards used FM synthesis, which generates sound through modulation of sine waves. 
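FM synthesis of the kind these cards used can be sketched numerically: a modulator sine wave varies the phase of a carrier sine wave, forming a two-operator voice. This is an illustrative fragment, not the cards' actual implementation, and the parameter names are invented:

```python
import math

def fm_sample(t, fc=440.0, fm=220.0, index=2.0, rate=44100):
    """One sample of two-operator FM: a modulator sine wave at frequency `fm`
    varies the phase of a carrier sine wave at `fc`.  `index` sets the
    modulation depth, which controls the brightness of the timbre."""
    phase = 2 * math.pi * t / rate
    return math.sin(fc * phase + index * math.sin(fm * phase))

# Render a 10 ms burst at 44.1 kHz; the output stays within the [-1, 1]
# range of a normalized audio signal.
burst = [fm_sample(n) for n in range(441)]
print(max(burst) <= 1.0, min(burst) >= -1.0)   # True True
```

With only two operators the spectrum is limited, which is one reason the sound of these cards was heard as "artificial" compared with sampled instruments.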
John Chowning, the technique's pioneer, theorized that the technology would be capable of accurate recreation of any sound if enough sine waves were used, but budget computer audio cards performed FM synthesis with only two sine waves. Combined with the cards' 8-bit audio, this resulted in a sound described as "artificial" and "primitive". Wavetable daughterboards that were later available provided audio samples that could be used in place of the FM sound. These were expensive, but often used the sounds from respected MIDI instruments such as the E-mu Proteus. The computer industry moved in the mid-1990s toward wavetable-based soundcards with 16-bit playback, but standardized on 2 MB of wavetable storage, too small a space to fit good-quality samples of the 128 General MIDI instruments plus drum kits. To make the most of the limited space, some manufacturers stored 12-bit samples and expanded those to 16 bits on playback. Other applications Despite its association with music devices, MIDI can control any electronic or digital device that can read and process a MIDI command. MIDI has been adopted as a control protocol in a number of non-musical applications. MIDI Show Control uses MIDI commands to direct stage lighting systems and to trigger cued events in theatrical productions. VJs and turntablists use it to cue clips, and to synchronize equipment, and recording systems use it for synchronization and automation. Apple Motion allows control of animation parameters through MIDI. The 1987 first-person shooter game MIDI Maze and the 1990 Atari ST computer puzzle game Oxyd used MIDI to network computers together. Devices Connectors MIDI cables terminate in a 180° five-pin DIN connector. Standard applications use only three of the five conductors: a ground wire (pin 2), and a balanced pair of conductors (pins 4 and 5) that carry a +5 volt data signal.
This connector configuration can only carry messages in one direction, so a second cable is necessary for two-way communication. Some proprietary applications, such as phantom-powered footswitch controllers, use the spare pins for direct current (DC) power transmission. Opto-isolators keep MIDI devices electrically separated from their MIDI connections, which prevents ground loops and protects equipment from voltage spikes. There is no error detection capability in MIDI, so the maximum cable length is set at 15 meters (50 feet) to limit interference. Most devices do not copy messages from their input to their output port. A third type of port, the "thru" port, emits a copy of everything received at the input port, allowing data to be forwarded to another instrument in a "daisy chain" arrangement. Not all devices contain thru ports, and devices that lack the ability to generate MIDI data, such as effects units and sound modules, may not include out ports. Management devices Each device in a daisy chain adds delay to the system. This is avoided with a MIDI thru box, which contains several outputs that provide an exact copy of the box's input signal. A MIDI merger is able to combine the input from multiple devices into a single stream, and allows multiple controllers to be connected to a single device. A MIDI switcher allows switching between multiple devices, and eliminates the need to physically repatch cables. MIDI patch bays combine all of these functions. They contain multiple inputs and outputs, and allow any combination of input channels to be routed to any combination of output channels. Routing setups can be created using computer software, stored in memory, and selected by MIDI program change commands. This enables the devices to function as standalone MIDI routers in situations where no computer is present. MIDI patch bays also clean up any skewing of MIDI data bits that occurs at the input stage. MIDI data processors are used for utility tasks and special effects.
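One simple data processor of this kind is a filter that removes unwanted message types from a stream. A sketch operating on already-parsed messages (the tuple format is hypothetical, not a wire format):

```python
# Hypothetical parsed-message format: (status byte, data byte, ...).
# This filter drops channel aftertouch (status 0xD0-0xDF), a message type
# some keyboards emit in volume but many setups never use.

def midi_filter(messages, drop_status=0xD0):
    """Remove messages whose status type matches `drop_status`.
    The upper nibble of the status byte is the message type; the lower
    nibble is the channel, so the mask keeps the filter channel-independent."""
    return [m for m in messages if m[0] & 0xF0 != drop_status]

stream = [
    (0x90, 60, 100),   # note on, channel 1
    (0xD0, 64),        # channel aftertouch, channel 1
    (0x80, 60, 0),     # note off, channel 1
]
print(midi_filter(stream))
```

Hardware MIDI filters perform the same kind of selection on the byte stream itself, which also frees bandwidth on the relatively slow MIDI connection.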
These include MIDI filters, which remove unwanted MIDI data from the stream, and MIDI delays, effects that send a repeated copy of the input data at a set time. Interfaces A computer MIDI interface's main function is to match clock speeds between the MIDI device and the computer. Some computer sound cards include a standard MIDI connector, whereas others connect by any of various means that include the D-subminiature DA-15 game port, USB, FireWire, Ethernet or a proprietary connection. The increasing use of USB connectors in the 2000s has led to the availability of MIDI-to-USB data interfaces that can transfer MIDI channels to USB-equipped computers. Some MIDI keyboard controllers are equipped with USB jacks, and can be plugged into computers that run music software. MIDI's serial transmission leads to timing problems. A three-byte MIDI message requires nearly 1 millisecond for transmission. Because MIDI is serial, it can only send one event at a time. If an event is sent on two channels at once, the event on the second channel cannot transmit until the first one is finished, and so is delayed by 1 ms. If an event is sent on all channels at the same time, the last channel's transmission is delayed by as much as 16 ms. This contributed to the rise of MIDI interfaces with multiple in- and out-ports, because timing improves when events are spread between multiple ports as opposed to multiple channels on the same port. The term "MIDI slop" refers to audible timing errors that result when MIDI transmission is delayed. Controllers There are two types of MIDI controllers: performance controllers that generate notes and are used to perform music, and controllers that may not send notes, but transmit other types of real-time events. Many devices are some combination of the two types. Keyboards are by far the most common type of MIDI controller. MIDI was designed
layer implements the System/38's hardware-independent Machine Interface instruction set in terms of IMPI instructions. Prior to the introduction of the IBM RS64 processor line, early IBM AS/400 systems used the same architecture. The Nintendo 64's Reality Coprocessor (RCP), which serves as the console's graphics processing unit and audio processor, utilizes microcode; it is possible to implement new effects or tweak the processor to achieve the desired output. Some notable examples of custom RCP microcode include the high-resolution graphics, particle engines, and unlimited draw distances found in Factor 5's Indiana Jones and the Infernal Machine, Star Wars: Rogue Squadron, and Star Wars: Battle for Naboo; and the full motion video playback found in Angel Studios' Resident Evil 2. The VU0 and VU1 vector units in the Sony PlayStation 2 are microprogrammable; in fact, VU1 was only accessible via microcode for the first several generations of the SDK. The MicroCore Labs MCL86, MCL51 and MCL65 are examples of highly encoded "vertical" microsequencer implementations of the Intel 8086/8088, 8051, and MOS 6502. The Digital Scientific Corp. Meta 4 Series 16 computer system was a user-microprogrammable system first available in 1970. The microcode had a primarily vertical style with 32-bit microinstructions. The instructions were stored on replaceable program boards with a grid of bit positions. One (1) bits were represented by small metal squares that were sensed by amplifiers, zero (0) bits by the absence of the squares. The system could be configured with up to 4K 16-bit words of microstore. One of Digital Scientific's products was an emulator for the IBM 1130. The MCP-1600 is a microprocessor made by Western Digital in the late 1970s through the early 1980s used to implement three different computer architectures in microcode: the Pascal MicroEngine, the WD16, and the DEC LSI-11, a cost-reduced PDP-11.
Earlier x86 processors are fully microcoded; starting with the Intel 80486, less complicated instructions are implemented directly in hardware. x86 processors have implemented patchable microcode (patched by the BIOS or operating system) since the Intel P6 microarchitecture and AMD K7 microarchitecture. Some video cards and wireless network interface controllers also implement patchable microcode (patched by the operating system). Implementation Each microinstruction in a microprogram provides the bits that control the functional elements that internally compose a CPU. The advantage over a hard-wired CPU is that internal CPU control becomes a specialized form of a computer program. Microcode thus transforms a complex electronic design challenge (the control of a CPU) into a less complex programming challenge. To take advantage of this, a CPU is divided into several parts: An I-unit may decode instructions in hardware and determine the microcode address for processing the instruction in parallel with the E-unit. A microsequencer picks the next word of the control store. A sequencer is mostly a counter, but usually also has some way to jump to a different part of the control store depending on some data, usually data from the instruction register and always some part of the control store. The simplest sequencer is just a register loaded from a few bits of the control store. A register set is a fast memory containing the data of the central processing unit. It may include the program counter and stack pointer, and may also include other registers that are not easily accessible to the application programmer. Often the register set is a triple-ported register file; that is, two registers can be read, and a third written at the same time. An arithmetic and logic unit performs calculations, usually addition, logical negation, a right shift, and logical AND. It often performs other functions, as well.
There may also be a memory address register and a memory data register, used to access the main computer storage. Together, these elements form an "execution unit". Most modern CPUs have several execution units. Even simple computers usually have one unit to read and write memory, and another to execute user code. These elements could often be brought together as a single chip. This chip comes in a fixed width that would form a "slice" through the execution unit. These are known as "bit slice" chips. The AMD Am2900 family is one of the best known examples of bit slice elements. The parts of the execution units and the whole execution units are interconnected by a bundle of wires called a bus. Programmers develop microprograms, using basic software tools. A microassembler allows a programmer to define the table of bits symbolically. Because of its close relationship to the underlying architecture, "microcode has several properties that make it difficult to generate using a compiler." A simulator program is intended to execute the bits in the same way as the electronics, and allows much more freedom to debug the microprogram. After the microprogram is finalized, and extensively tested, it is sometimes used as the input to a computer program that constructs logic to produce the same data. This program is similar to those used to optimize a programmable logic array. Even without fully optimal logic, heuristically optimized logic can vastly reduce the number of transistors from the number needed for a read-only memory (ROM) control store. This reduces the cost to produce, and the electricity used by, a CPU. Microcode can be characterized as horizontal or vertical, referring primarily to whether each microinstruction controls CPU elements with little or no decoding (horizontal microcode) or requires extensive decoding by combinatorial logic before doing so (vertical microcode). 
Consequently, each horizontal microinstruction is wider (contains more bits) and occupies more storage space than a vertical microinstruction. Horizontal microcode "Horizontal microcode has several discrete micro-operations that are combined in a single microinstruction for simultaneous operation." Horizontal microcode is typically contained in a fairly wide control store; it is not uncommon for each word to be 108 bits or more. On each tick of a sequencer clock a microcode word is read, decoded, and used to control the functional elements that make up the CPU. In a typical implementation a horizontal microprogram word comprises fairly tightly defined groups of bits. For example, one simple arrangement might be: a register source field, a second register source field, a destination register field, an ALU operation field, a sequencer operation field, and a jump address field. For this type of micromachine to implement a JUMP instruction with the address following the opcode, the microcode might require two clock ticks. The engineer designing it would write microassembler source code looking something like this:

# Any line starting with a number-sign is a comment
# This is just a label, the ordinary way assemblers symbolically represent a
# memory address.
InstructionJUMP:
  # To prepare for the next instruction, the instruction-decode microcode has already
  # moved the program counter to the memory address register. This instruction fetches
  # the target address of the jump instruction from the memory word following the
  # jump opcode, by copying from the memory data register to the memory address register.
  # This gives the memory system two clock ticks to fetch the next
  # instruction to the memory data register for use by the instruction decode.
  # The sequencer instruction "next" means just add 1 to the control word address.
MDR, NONE, MAR, COPY, NEXT, NONE
  # This places the address of the next instruction into the PC.
  # This gives the memory system a clock tick to finish the fetch started on the
  # previous microinstruction.
  # The sequencer instruction is to jump to the start of the instruction decode.
MAR, 1, PC, ADD, JMP, InstructionDecode
  # The instruction decode is not shown, because it is usually a mess, very particular
  # to the exact processor being emulated. Even this example is simplified.
  # Many CPUs have several ways to calculate the address, rather than just fetching
  # it from the word following the op-code. Therefore, rather than just one
  # jump instruction, those CPUs have a family of related jump instructions.

For each tick it is common to find that only some portions of the CPU are used, with the remaining groups of bits in the microinstruction being no-ops. With careful design of hardware and microcode, this property can be exploited to parallelise operations that use different areas of the CPU; for example, in the case above, the ALU is not required during the first tick, so it could potentially be used to complete an earlier arithmetic instruction. Vertical microcode In vertical microcode, each microinstruction is significantly encoded, that is, the bit fields generally pass through intermediate combinatory logic that, in turn, generates the control and sequencing signals for internal CPU elements (ALU, registers, etc.). This is in contrast with horizontal microcode, in which the bit fields either directly produce the control and sequencing signals or are only minimally encoded. Consequently, vertical microcode requires smaller instruction lengths and less storage, but requires more time to decode, resulting in a slower CPU clock. Some vertical microcode is just the assembly language of a simple conventional computer that is emulating a more complex computer.
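The horizontal/vertical contrast can be made concrete with a toy model: the same control signals driven by a wide horizontal word with one bit per signal, and by a narrow vertical field that must pass through decoding logic first. The control-line names are invented for illustration:

```python
# Five made-up control lines inside a toy CPU.
CONTROL_LINES = ["alu_add", "alu_and", "reg_write", "mem_read", "mem_write"]

def horizontal_decode(microword: int):
    """Horizontal style: each bit of the wide microword drives one control
    line directly, with little or no decoding."""
    return {line: bool(microword >> i & 1) for i, line in enumerate(CONTROL_LINES)}

# Vertical style: a small encoded operation field selects a predefined
# combination of control lines (the decoding logic, modeled as a table).
VERTICAL_OPS = {0: ["alu_add", "reg_write"], 1: ["mem_read"], 2: ["mem_write"]}

def vertical_decode(opcode: int):
    active = VERTICAL_OPS[opcode]
    return {line: line in active for line in CONTROL_LINES}

# The same add-and-writeback step: 5 horizontal bits vs a 2-bit vertical field.
print(horizontal_decode(0b00101) == vertical_decode(0))   # True
```

The trade-off in the text is visible here: the vertical field is narrower (less control-store space) but needs an extra decoding step, whereas the horizontal word is wider but drives the hardware directly.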
Some processors, such as DEC Alpha processors and the CMOS microprocessors on later IBM mainframes System/390 and z/Architecture, use machine code, running in a special mode that gives it access to special instructions, special registers, and other hardware resources unavailable to regular machine code, to implement some instructions and other functions, such as page table walks on Alpha processors. This is called PALcode on Alpha processors and millicode on IBM mainframe processors. Another form of vertical microcode has two fields: The field select selects which part of the CPU will be controlled by this word of the control store. The field value controls that part of the CPU. With this type of microcode, a designer explicitly chooses to make a slower CPU to save money by reducing the unused bits in the control store; however, the reduced complexity may increase the CPU's clock frequency, which lessens the effect of an increased number of cycles per instruction. As transistors grew cheaper, horizontal microcode came to dominate the design of CPUs using microcode, with vertical microcode being used less often. When both vertical and horizontal microcode are used, the horizontal microcode may be referred to as nanocode or picocode. Writable control store A few computers were built using writable microcode. In this design, rather than storing the microcode in ROM or hard-wired logic, the microcode is stored in a RAM called a writable control store or WCS. Such a computer is sometimes called a writable instruction set computer (WISC). Many experimental prototype computers use writable control stores; there are also commercial machines that use writable microcode, such as the Burroughs Small Systems, early Xerox workstations, the DEC VAX 8800 (Nautilus) family, the Symbolics L- and G-machines, a number of IBM System/360 and System/370 implementations, some DEC PDP-10 machines, and the Data General Eclipse MV/8000. 
Many more machines offer user-programmable writable control stores as an option, including the HP 2100, DEC PDP-11/60 and Varian Data Machines V-70 series minicomputers. The IBM System/370 includes a facility called Initial-Microprogram Load (IML or IMPL) that can be invoked from the console, as part of power-on reset (POR) or from another processor in a tightly coupled multiprocessor complex. Some commercial machines, for example IBM 360/85, have both a read-only storage and a writable control store for microcode. WCS offers several advantages including the ease of patching the microprogram and, for certain hardware generations, faster access than ROMs can provide. User-programmable WCS allows the user to optimize the machine for specific purposes. Starting with the Pentium Pro in 1995, several x86 CPUs have writable Intel Microcode. This, for example, has allowed bugs in the Intel Core 2 and Intel Xeon microcodes to be fixed by patching their microprograms, rather than requiring the entire chips to be replaced. A second prominent example is the set of microcode patches that Intel offered for some of their processor architectures of up to 10 years in age, in a bid to counter the security vulnerabilities discovered in their designs – Spectre and Meltdown – which went public at the start of 2018. A microcode update can be installed by Linux, FreeBSD, Microsoft Windows, or the motherboard BIOS. Comparison to VLIW and RISC The design trend toward heavily microcoded processors with complex instructions began in the early 1960s and continued until roughly the mid-1980s. At that point the RISC design philosophy started becoming more prominent. A CPU that uses microcode generally takes several clock cycles to execute a single instruction, one clock cycle for each step in the microprogram for that instruction. Some CISC processors include instructions that can take a very long time to execute. 
Such variations interfere with both interrupt latency and, what is far more important in modern systems, pipelining. When designing a new processor, a hardwired control RISC has the following advantages over microcoded CISC: Programming has largely moved away from assembly level, so it's no longer worthwhile to provide complex instructions for productivity reasons. Simpler instruction sets allow direct execution by hardware, avoiding the performance penalty of microcoded execution. Analysis shows complex instructions are rarely used, hence the machine resources devoted to them are largely wasted. The machine resources devoted to rarely used complex instructions are better used for expediting performance of simpler, commonly used instructions. Complex microcoded instructions may require many clock cycles that vary, and are difficult to pipeline for increased performance. There are counterpoints as well: The complex instructions in heavily microcoded implementations may not take much extra machine resources, except for microcode space. For example, the same ALU is often used to calculate an effective address and to compute the result from the operands, e.g., the original Z80, 8086, and others. The simpler non-RISC instructions (i.e., involving direct memory operands) are frequently used by modern compilers. Even immediate to
of detailed circuit-level operations. It separates the machine instructions from the underlying electronics so that instructions can be designed and altered more freely. It also facilitates the building of complex multi-step instructions, while reducing the complexity of computer circuits. Writing microcode is often called microprogramming and the microcode in a particular processor implementation is sometimes called a microprogram. More extensive microcoding allows small and simple microarchitectures to emulate more powerful architectures with wider word length, more execution units and so on, which is a relatively simple way to achieve software compatibility between different products in a processor family. Some hardware vendors, especially IBM, use the term microcode as a synonym for firmware. In that way, all code within a device is termed microcode regardless of whether it is microcode or machine code; for example, hard disk drives are said to have their microcode updated, though they typically contain both microcode and firmware. Overview The lowest layer in a computer's software stack is traditionally raw machine code instructions for the processor. In microcoded processors, the microcode fetches and executes those instructions. To avoid confusion, each microprogram-related element is differentiated by the micro prefix: microinstruction, microassembler, microprogrammer, microarchitecture, etc. Engineers normally write the microcode during the design phase of a processor, storing it in a read-only memory (ROM) or programmable logic array (PLA) structure, or in a combination of both. However, machines also exist that have some or all microcode stored in static random-access memory (SRAM) or flash memory. This is traditionally denoted as writeable control store in the context of computers, which can be either read-only or read-write memory.
In the latter case, the CPU initialization process loads microcode into the control store from another storage medium, with the possibility of altering the microcode to correct bugs in the instruction set, or to implement new machine instructions. Complex digital processors may also employ more than one (possibly microcode-based) control unit in order to delegate sub-tasks that must be performed essentially asynchronously in parallel. A high-level programmer, or even an assembly language programmer, does not normally see or change microcode. Unlike machine code, which often retains some backward compatibility among different processors in a family, microcode only runs on the exact electronic circuitry for which it is designed, as it constitutes an inherent part of the particular processor design itself. Microprograms consist of series of microinstructions, which control the CPU at a very fundamental level of hardware circuitry. For example, a single typical horizontal microinstruction might specify the following operations:

Connect register 1 to the A side of the ALU
Connect register 7 to the B side of the ALU
Set the ALU to perform two's-complement addition
Set the ALU's carry input to zero
Store the result value in register 8
Update the condition codes from the ALU status flags (negative, zero, overflow, and carry)
Microjump to a given microPC address for the next microinstruction

To simultaneously control all of a processor's features in one cycle, the microinstruction is often wider than 50 bits; e.g., 128 bits on a 360/85 with an emulator feature. Microprograms are carefully designed and optimized for the fastest possible execution, as a slow microprogram would result in a slow machine instruction and degraded performance for related application programs that use such instructions. Justification Microcode was originally developed as a simpler method of developing the control logic for a computer. Initially, CPU instruction sets were hardwired.
Each step needed to fetch, decode, and execute the machine instructions (including any operand address calculations, reads, and writes) was controlled directly by combinational logic and rather minimal sequential state machine circuitry. While such hard-wired processors were very efficient, the need for powerful instruction sets with multi-step addressing and complex operations (see below) made them difficult to design and debug; highly encoded and varied-length instructions can contribute to this as well, especially when very irregular encodings are used. Microcode simplified the job by allowing much of the processor's behaviour and programming model to be defined via microprogram routines rather than by dedicated circuitry. Even late in the design process, microcode could easily be changed, whereas hard-wired CPU designs were very cumbersome to change; this greatly facilitated CPU design. From the 1940s to the late 1970s, a large portion of programming was done in assembly language; higher-level instructions mean greater programmer productivity, so an important advantage of microcode was the relative ease with which powerful machine instructions can be defined. The ultimate extension of this is "Directly Executable High Level Language" designs, in which each statement of a high-level language such as PL/I is entirely and directly executed by microcode, without compilation. The IBM Future Systems project and Data General Fountainhead Processor are examples of this. During the 1970s, CPU speeds grew more quickly than memory speeds and numerous techniques such as memory block transfer, memory pre-fetch and multi-level caches were used to alleviate this. High-level machine instructions, made possible by microcode, helped further, as fewer, more complex machine instructions require less memory bandwidth. For example, an operation on a character string can be done as a single machine instruction, thus avoiding multiple instruction fetches.
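The horizontal microinstruction listed in the overview (connect registers to the ALU buses, add, store the result, set condition codes, microjump) can be sketched in software. This is a hypothetical illustration, not any real machine's microinstruction format: the field names, register-file size, and 32-bit word width are all invented for the example.

```python
# Hypothetical sketch of executing one wide "horizontal" microinstruction:
# every control field is independent and applied in a single cycle.

WORD_MASK = 0xFFFFFFFF  # assume a 32-bit data path

def execute_microinstruction(regs, micro_pc, uinstr):
    """Apply one microinstruction; return (condition codes, next micro-PC)."""
    a = regs[uinstr["a_bus"]]          # e.g. connect register 1 to ALU side A
    b = regs[uinstr["b_bus"]]          # e.g. connect register 7 to ALU side B
    if uinstr["alu_op"] == "add":      # two's-complement addition
        raw = a + b + uinstr["carry_in"]
    else:
        raise NotImplementedError(uinstr["alu_op"])
    result = raw & WORD_MASK
    regs[uinstr["dest"]] = result      # e.g. store the result in register 8
    flags = {                          # condition codes from ALU status
        "negative": bool(result & 0x80000000),
        "zero": result == 0,
        "carry": raw > WORD_MASK,
        "overflow": ((a ^ result) & (b ^ result) & 0x80000000) != 0,
    }
    # microjump target if present, otherwise fall through to the next address
    next_upc = uinstr.get("ujump", micro_pc + 1)
    return flags, next_upc

regs = {i: 0 for i in range(16)}
regs[1], regs[7] = 5, 7
uinstr = {"a_bus": 1, "b_bus": 7, "alu_op": "add",
          "carry_in": 0, "dest": 8, "ujump": 0x40}
flags, upc = execute_microinstruction(regs, 0x10, uinstr)
```

In a real horizontal design these fields drive control lines directly rather than being interpreted, which is why such microinstructions can exceed 50 bits.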
Architectures with instruction sets implemented by complex microprograms included the IBM System/360 and Digital Equipment Corporation VAX. The approach of increasingly complex microcode-implemented instruction sets was later called complex instruction set computer (CISC). An alternate approach, used in many microprocessors, is to use one or more programmable logic array (PLA) or read-only memory (ROM) (instead of combinational logic) mainly for instruction decoding, and let a simple state machine (without much, or any, microcode) do most of the sequencing. The MOS Technology 6502 is an example of a microprocessor using a PLA for instruction decode and sequencing. The PLA is visible in photomicrographs of the chip, and its operation can be seen in the transistor-level simulation. Microprogramming is still used in modern CPU designs. In some cases, after the microcode is debugged in simulation, logic functions are substituted for the control store. Logic functions are often faster and less expensive than the equivalent microprogram memory. Benefits A processor's microprograms operate on a more primitive, totally different, and much more hardware-oriented architecture than the assembly instructions visible to normal programmers. In coordination with the hardware, the microcode implements the programmer-visible architecture. The underlying hardware need not have a fixed relationship to the visible architecture. This makes it easier to implement a given instruction set architecture on a wide variety of underlying hardware micro-architectures. 
The IBM System/360 has a 32-bit architecture with 16 general-purpose registers, but most of the System/360 implementations use hardware that implements a much simpler underlying microarchitecture; for example, the System/360 Model 30 has 8-bit data paths to the arithmetic logic unit (ALU) and main memory and implemented the general-purpose registers in a special unit of higher-speed core memory, and the System/360 Model 40 has 8-bit data paths to the ALU and 16-bit data paths to main memory and also implemented the general-purpose registers in a special unit of higher-speed core memory. The Model 50 has full 32-bit data paths and implements the general-purpose registers in a special unit of higher-speed core memory. The Model 65 through the Model 195 have larger data paths and implement the general-purpose registers in faster transistor circuits. In this way, microprogramming enabled IBM to design many System/360 models with substantially different hardware and spanning a wide range of cost and performance, while
A three-tier architecture is typically composed of a presentation tier, a logic tier, and a data tier. While the concepts of layer and tier are often used interchangeably, one fairly common point of view is that there is indeed a difference. This view holds that a layer is a logical structuring mechanism for the elements that make up the software solution, while a tier is a physical structuring mechanism for the system infrastructure. For example, a three-layer solution could easily be deployed on a single tier, such as in the case of an extreme database-centric architecture called RDBMS-only architecture, or in a personal workstation. Layers The "Layers" architectural pattern has been described in various publications. Common layers In a logical multilayer architecture for an information system with an object-oriented design, the following four are the most common:
Presentation layer (a.k.a. UI layer, view layer, presentation tier in multitier architecture)
Application layer (a.k.a. service layer or GRASP Controller Layer)
Business layer (a.k.a. business logic layer (BLL), domain logic layer)
Data access layer (a.k.a. persistence layer; includes logging, networking, and other services which are required to support a particular business layer)
The book Domain Driven Design describes some common uses for the above four layers, although its primary focus is the domain layer. If the application architecture has no explicit distinction between the business layer and the presentation layer (i.e., the presentation layer is considered part of the business layer), then a traditional client-server (two-tier) model has been implemented. The more usual convention is that the application layer (or service layer) is considered a sublayer of the business layer, typically encapsulating the API definition surfacing the supported business functionality. The application/business layers can, in fact, be further subdivided to emphasize additional sublayers of distinct responsibility.
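The four common layers can be sketched as classes where each layer depends only on the layer directly below it (a "strict" layering). All class names, methods, and the signup scenario are invented purely for illustration.

```python
# Minimal sketch of a strict four-layer design. Each layer holds a reference
# only to the layer beneath it; the layers above it are invisible to it.

class DataAccessLayer:                      # persistence: hides how data is stored
    def __init__(self):
        self._store = {}
    def save(self, key, value):
        self._store[key] = value
    def load(self, key):
        return self._store.get(key)

class BusinessLayer:                        # domain logic and business rules
    def __init__(self, dal):
        self.dal = dal
    def register_user(self, name):
        if not name:
            raise ValueError("name required")  # an example business rule
        self.dal.save(name, {"name": name})
        return name

class ApplicationLayer:                     # service layer: thin API surface
    def __init__(self, business):
        self.business = business
    def handle_signup(self, name):
        return {"status": "ok", "user": self.business.register_user(name)}

class PresentationLayer:                    # UI: formats results for display
    def __init__(self, app):
        self.app = app
    def render_signup(self, name):
        result = self.app.handle_signup(name)
        return f"Welcome, {result['user']}!"

dal = DataAccessLayer()
ui = PresentationLayer(ApplicationLayer(BusinessLayer(dal)))
greeting = ui.render_signup("Ada")
```

Because each constructor takes only the layer below, any single layer can be swapped out (e.g. replacing the in-memory store with a database) without touching the layers above it.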
For example, if the model–view–presenter pattern is used, the presenter sublayer might be used as an additional layer between the user interface layer and the business/application layer (as represented by the model sublayer). Some also identify a separate layer called the business infrastructure layer (BI), located between the business layer(s) and the infrastructure layer(s). It's also sometimes called the "low-level business layer" or the "business services layer". This layer is very general and can be used in several application tiers (e.g. a CurrencyConverter). The infrastructure layer can be partitioned into different levels (high-level or low-level technical services). Developers often focus on the persistence (data access) capabilities of the infrastructure layer and therefore only talk about the persistence layer or the data access layer (instead of an infrastructure layer or technical services layer). In other words, the other kind of technical services are not always explicitly thought of as part of any particular layer. A layer is on top of another, because it depends on it. Every layer can exist without the layers above it, and requires the layers below it to function. Another common view is that layers do not always strictly depend on only the adjacent layer below. For example, in a relaxed layered system (as opposed to a strict layered system) a layer can also depend on all the layers below it. Three-tier architecture Three-tier architecture is a client-server software architecture pattern in which the user interface (presentation), functional process logic ("business rules"), computer data storage and data access are developed and maintained as independent modules, most often on separate platforms. It was developed by John J. Donovan in Open Environment Corporation (OEC), a tools company he founded
in Cambridge, Massachusetts. Apart from the usual advantages of modular software with well-defined interfaces, the three-tier architecture is intended to allow any of the three tiers to be upgraded or replaced independently in response to changes in requirements or technology.
For example, a change of operating system in the presentation tier would only affect the user interface code. Typically, the user interface runs on a desktop PC or workstation and uses a standard graphical user interface; the functional process logic may consist of one or more separate modules running on a workstation or application server; and an RDBMS on a database server or mainframe contains the computer data storage logic. The middle tier may be multitiered itself (in which case the overall architecture is called an "n-tier architecture"). Presentation tier This is the topmost level of the application. The presentation tier displays information related to such services as browsing merchandise, purchasing, and shopping cart contents. It communicates with the other tiers and presents its results to the browser/client tier. In simple terms, it is a layer which users can access directly (such as a web page, or an operating system's GUI). Application tier (business logic, logic tier, or middle tier) The logic tier is pulled out from the presentation tier and, as its own layer, controls an application's functionality by performing detailed processing. Data tier The data tier includes the data persistence mechanisms (database servers, file shares, etc.) and the data access layer that encapsulates the persistence mechanisms and exposes the data. The data access layer should provide an API to the application tier that exposes methods of managing the stored data without exposing
connectors, distances, signaling). Myri-10G started shipping at the end of 2005. Myrinet was approved in 1998 by the American National Standards Institute for use on the VMEbus as ANSI/VITA 26-1998. One of the earliest publications on Myrinet is a 1995 IEEE article. Performance Myrinet is a lightweight protocol with little overhead that allows it to operate with throughput close to the basic signaling speed of the physical layer. For supercomputing, the low latency of Myrinet is even more important than its throughput performance, since, according to Amdahl's law, a high-performance parallel system tends to be bottlenecked by its slowest sequential process, which in all but the most embarrassingly parallel supercomputer workloads is often the latency of message transmission across the network.
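Amdahl's law, mentioned above, bounds the speedup of a parallel system by its serial fraction; if message latency keeps processors waiting, that waiting behaves like serial work. The sketch below computes the bound; the fractions used are illustrative numbers, not measured Myrinet figures.

```python
# Amdahl's law: speedup <= 1 / (s + (1 - s) / N), where s is the fraction of
# the workload that cannot be parallelised (here, latency-bound waiting).

def amdahl_speedup(serial_fraction, n_processors):
    """Upper bound on speedup with the given non-parallelisable fraction."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_processors)

# Even a small latency-bound fraction caps the achievable speedup:
# amdahl_speedup(0.05, 64) is roughly 15.4x, far below the ideal 64x,
# which is why interconnect latency matters so much for supercomputing.
bound_5pct = amdahl_speedup(0.05, 64)
bound_1pct = amdahl_speedup(0.01, 64)
```

Halving latency (shrinking the serial fraction) improves the bound far more than adding processors once the serial term dominates.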
the company Myricom to be used as an interconnect between multiple machines to form computer clusters. Description Myrinet was promoted as having lower protocol overhead than standards such as Ethernet, and therefore better throughput, less interference, and lower latency while using the host CPU. Although it can be used as a traditional networking system, Myrinet is often used directly by programs that "know" about it, thereby bypassing a call into the operating system. Myrinet physically consists of two fibre optic cables, upstream and downstream, connected to the host computers with a single connector. Machines are connected via low-overhead routers and switches, as opposed to connecting one machine directly to another. Myrinet includes a number of fault-tolerance features, mostly backed by the switches. These include flow
playback speed lead to modification in the character of the sound material:
Variation in the sounds' length, in a manner directly proportional to the ratio of speed variation.
Variation in length is coupled with a variation in pitch, also proportional to the ratio of speed variation.
A sound's attack characteristic is altered, whereby it is either dislocated from succeeding events, or the energy of the attack is more sharply focused.
The distribution of spectral energy is altered, thereby influencing how the resulting timbre might be perceived, relative to its original unaltered state.
The phonogène was a machine capable of modifying sound structure significantly, and it provided composers with a means to adapt sound to meet specific compositional contexts. The initial phonogènes were manufactured in 1953 by two subcontractors: the chromatic phonogène by a company called Tolana, and the sliding version by the SAREG Company. A third version was developed later at ORTF. An outline of the unique capabilities of the various phonogènes follows:
Chromatic: The chromatic phonogène was controlled through a one-octave keyboard. Multiple capstans of differing diameters vary the tape speed over a single stationary magnetic tape head. A tape loop was put into the machine, and when a key was played, it would act on an individual pinch roller/capstan arrangement and cause the tape to be played at a specific speed. The machine worked with short sounds only.
Sliding: The sliding phonogène (also called the continuous-variation phonogène) provided continuous variation of tape speed using a control rod. The range allowed the motor to come almost to a stop, always through a continuous variation. It was basically a normal tape recorder but with the ability to control its speed, so it could modify any length of tape.
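The coupled length/pitch change the phonogène produced can be demonstrated with a small varispeed sketch: reading the same samples at a different speed scales duration by the inverse of the speed ratio and pitch by the ratio itself. Nearest-neighbour resampling is used purely for illustration; a real machine changes the tape transport speed.

```python
# Varispeed playback sketch: ratio = 2.0 plays the sound an octave higher
# and half as long, exactly the proportional coupling described above.
import math

def varispeed(samples, ratio):
    """Play `samples` back `ratio` times faster (nearest-neighbour read)."""
    n_out = int(len(samples) / ratio)
    return [samples[min(int(i * ratio), len(samples) - 1)] for i in range(n_out)]

sr = 8000
tone = [math.sin(2 * math.pi * 440 * t / sr) for t in range(sr)]  # 1 s of 440 Hz
octave_up = varispeed(tone, 2.0)   # would sound as 880 Hz lasting 0.5 s
```

The chromatic phonogène's keyboard effectively selected one of twelve fixed ratios; the sliding version swept the ratio continuously.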
One of the earliest examples of its use can be heard in Voile d'Orphée by Pierre Henry (1953), where a lengthy glissando is used to symbolise the removal of Orpheus's veil as he enters hell. Universal: A final version called the universal phonogène was completed in 1963. The device's main ability was that it enabled the dissociation of pitch variation from time variation. This was the starting point for methods that would later become widely available using digital technology, for instance harmonising (transposing sound without modifying duration) and time stretching (modifying duration without pitch modification). This was obtained through a rotating magnetic head called the Springer temporal regulator, an ancestor of the rotating heads used in video machines. Three-head tape recorder This original tape recorder was one of the first machines permitting the simultaneous listening of several synchronised sources. Until 1958 musique concrète, radio and the studio machines were monophonic. The three-head tape recorder superposed three magnetic tapes that were dragged by a common motor, each tape having an independent spool. The objective was to keep the three tapes synchronised from a common starting point. Works could then be conceived polyphonically, and thus each head conveyed a part of the information and was listened to through a dedicated loudspeaker. It was an ancestor of the multi-track player (four then eight tracks) that appeared in the 1960s. Timbres Durées by Olivier Messiaen with the technical assistance of Pierre Henry was the first work composed for this tape recorder in 1952. A rapid rhythmic polyphony was distributed over the three channels. Morphophone This machine was conceived to build complex forms through repetition, and accumulation of events through delays, filtering and feedback. It consisted of a large rotating disk, 50 cm in diameter, on which was stuck a tape with its magnetic side facing outward.
A series of twelve movable magnetic heads (one recording head, one erasing head, and ten playback heads) were positioned around the disk, in contact with the tape. A sound up to four seconds long could be recorded on the looped tape and the ten playback heads would then read the information with different delays, according to their (adjustable) positions around the disk. A separate amplifier and band-pass filter for each head could modify the spectrum of the sound, and additional feedback loops could transmit the information to the recording head. The resulting repetitions of a sound occurred at different time intervals, and could be filtered or modified through feedback. This system was also easily capable of producing artificial reverberation or continuous sounds. Early sound spatialisation system At the premiere of Pierre Schaeffer's Symphonie pour un homme seul in 1951, a system that was designed for the spatial control of sound was tested. It was called a "relief desk" (pupitre de relief, but also referred to as pupitre d'espace or potentiomètre d'espace) and was intended to control the dynamic level of music played from several shellac players. This created a stereophonic effect by controlling the positioning of a monophonic sound source. One of five tracks, provided by a purpose-built tape machine, was controlled by the performer and the other four tracks each supplied a single loudspeaker. This provided a mixture of live and preset sound positions. The placement of loudspeakers in the performance space included two loudspeakers at the front right and left of the audience, one placed at the rear, and in the centre of the space a loudspeaker was placed in a high position above the audience. The sounds could therefore be moved around the audience, rather than just across the front stage.
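The morphophone's scheme described above, a recorded loop read by several playback heads at different offsets with their outputs fed back toward the recording head, can be modelled as a multi-tap delay line with feedback. The head positions, gains, and feedback amount below are invented for the sketch.

```python
# Simplified morphophone model: a circular "tape" buffer with one record head
# and several playback heads (taps) at adjustable delays; the taps are mixed
# back into the record head, producing decaying repetitions.

def morphophone(input_sig, loop_len, taps, feedback=0.5):
    """taps: list of (delay_in_samples, gain), one per playback head."""
    tape = [0.0] * loop_len                 # the rotating loop
    out = []
    for n, x in enumerate(input_sig):
        pos = n % loop_len
        played = sum(g * tape[(pos - d) % loop_len] for d, g in taps)
        tape[pos] = x + feedback * played   # record head: input + fed-back taps
        out.append(x + played)              # dry input mixed with the taps
    return out

impulse = [1.0] + [0.0] * 99
echoes = morphophone(impulse, loop_len=64, taps=[(10, 0.6), (25, 0.3)])
```

With enough taps and feedback, the repetitions blur into the artificial reverberation and continuous sounds the text mentions.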
On stage, the control system allowed a performer to position a sound either to the left or right, above or behind the audience, simply by moving a small, hand-held transmitter coil towards or away from four somewhat larger receiver coils arranged around the performer in a manner reflecting the loudspeaker positions. A contemporary eyewitness described the potentiomètre d'espace in normal use: One found one's self sitting in a small studio which was equipped with four loudspeakers—two in front of one—right and left; one behind one and a fourth suspended above. In the front center were four large loops and an "executant" moving a small magnetic unit through the air. The four loops controlled the four speakers, and while all four were giving off sounds all the time, the distance of the unit from the loops determined the volume of sound sent out from each. The music thus came to one at varying intensity from various parts of the room, and this "spatial projection" gave new sense to the rather abstract sequence of sound originally recorded. The central concept underlying this method was the notion that music should be controlled during public presentation in order to create a performance situation; an attitude that has stayed with acousmatic music to the present day. Coupigny synthesiser and Studio 54 mixing desk After the longstanding rivalry with the "electronic music" of the Cologne studio had subsided, in 1970 the GRM finally created an electronic studio using tools developed by the physicist Enrico Chiarucci, called the Studio 54, which featured the "Coupigny modular synthesiser" and a Moog synthesiser. The Coupigny synthesiser, named for its designer François Coupigny, director of the Group for Technical Research, and the Studio 54 mixing desk had a major influence on the evolution of the GRM and, from their introduction onward, brought a new quality to the music.
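The coil-distance control described for the potentiomètre d'espace can be approximated as a toy model: each speaker's level derived from the distance between the hand-held coil and that speaker's receiver coil, normalised so the gains sum to one. The geometry, the inverse-distance law, and the coil positions below are all invented for the sketch.

```python
# Toy spatialisation model: nearer coil -> louder speaker. The 0.1 offset
# avoids division by zero when the coil touches a receiver.
import math

SPEAKERS = {"front_left": (-1.0, 1.0), "front_right": (1.0, 1.0),
            "rear": (0.0, -1.0), "overhead": (0.0, 0.0)}

def speaker_gains(coil_xy):
    """Per-speaker gains from the hand-held coil's position, summing to 1."""
    inv = {name: 1.0 / (0.1 + math.dist(coil_xy, pos))
           for name, pos in SPEAKERS.items()}
    total = sum(inv.values())
    return {name: v / total for name, v in inv.items()}

gains = speaker_gains((0.9, 0.9))   # coil held near the front-right receiver
```

Moving the coil continuously traces a path of gain distributions, which is how a monophonic source appears to travel around the audience.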
The mixing desk and synthesiser were combined in one unit and were created specifically for the creation of musique concrète. The design of the desk was influenced by trade union rules at French National Radio that required technicians and production staff to have clearly defined duties. The solitary practice of musique concrète composition did not suit a system that involved three operators: one in charge of the machines, a second controlling the mixing desk, and a third to provide guidance to the others. Because of this, the synthesiser and desk were combined and organised in a manner that allowed them to be used easily by a composer. Independently of the mixing tracks (twenty-four in total), it had a coupled connection patch that permitted the organisation of the machines within the studio. It also had a number of remote controls for operating tape recorders. The system was easily adaptable to any context, particularly that of interfacing with external equipment. Before the late 1960s the musique concrète produced at GRM had largely been based on the recording and manipulation of sounds, but synthesised sounds had featured in a number of works prior to the introduction of the Coupigny. Pierre Henry had used oscillators to produce sounds as early as 1955. But a synthesiser with envelope control was something Pierre Schaeffer was against, since it favoured the preconception of music and therefore deviated from Schaeffer's principle of making through listening. Because of Schaeffer's concerns the Coupigny synthesiser was conceived as a sound-event generator with parameters controlled globally, without a means to define values as precisely as some other synthesisers of the day. The development of the machine was constrained by several factors. It needed to be modular and the modules had to be easily interconnected (so that the synthesiser would have more modules than slots and it would have an easy-to-use patch).
It also needed to include all the major functions of a modular synthesiser, including oscillators, noise generators, filters, and ring modulators; an intermodulation facility was viewed as the primary requirement, to enable complex synthesis processes such as frequency modulation, amplitude modulation, and modulation via an external source. No keyboard was attached to the synthesiser and instead a specific and somewhat complex envelope generator was used to shape sound. This synthesiser was well-adapted to the production of continuous and complex sounds using intermodulation techniques such as cross-synthesis and frequency modulation but was less effective in generating precisely defined frequencies and triggering specific sounds. The Coupigny synthesiser also served as the model for a smaller, portable unit, which has been used down to the present day. Acousmonium In 1966 composer and technician François Bayle was placed in charge of the Groupe de Recherches Musicales and in 1975, GRM was integrated with the new Institut national de l'audiovisuel (INA – Audiovisual National Institute) with Bayle as its head. In taking the lead on work that began in the early 1950s, with Jacques Poullin's potentiomètre d'espace, a system designed to move monophonic sound sources across four speakers, Bayle and the engineer Jean-Claude Lallemand created an orchestra of loudspeakers (un orchestre de haut-parleurs) known as the Acousmonium in 1974. An inaugural concert took place on 14 February 1974 at the Espace Pierre Cardin in Paris with a presentation of Bayle's Expérience acoustique. The Acousmonium is a specialised sound reinforcement system consisting of between 50 and 100 loudspeakers, depending on the character of the concert, of varying shape and size. The system was designed specifically for the concert presentation of musique-concrète-based works but with the added enhancement of sound spatialisation.
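The intermodulation the Coupigny favoured, one oscillator modulating another's frequency and amplitude, can be sketched digitally. This is a generic FM/AM illustration, not a model of the Coupigny's actual circuitry; all frequencies, the modulation index, and the depth parameter are arbitrary example values.

```python
# One oscillator frequency-modulates (fm_index) and amplitude-modulates
# (am_depth) a carrier: the kind of intermodulation described above.
import math

def fm_am_voice(carrier_hz, mod_hz, fm_index, am_depth, seconds=0.1, sr=8000):
    out = []
    for n in range(int(seconds * sr)):
        t = n / sr
        mod = math.sin(2 * math.pi * mod_hz * t)          # modulating oscillator
        fm = math.sin(2 * math.pi * carrier_hz * t + fm_index * mod)
        gain = (1.0 - am_depth) + am_depth * (0.5 + 0.5 * mod)  # AM in [1-d, 1]
        out.append(gain * fm)
    return out

voice = fm_am_voice(440.0, 110.0, fm_index=2.0, am_depth=0.3)
```

With high modulation indices and audio-rate modulators, such patches yield the continuous, complex spectra the text describes, while precise pitches become hard to dial in, matching the Coupigny's noted weakness.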
Loudspeakers are placed both on stage and at positions throughout the performance space and a mixing console is used to manipulate the placement of acousmatic material across the speaker array, using a performative technique known as "sound diffusion". Bayle has commented that the purpose of the Acousmonium is to "substitute a momentary classical disposition of sound making, which diffuses the sound from the circumference towards the centre of the hall, by a group of sound projectors which form an 'orchestration' of the acoustic image". As
music" using a cumbersome wire recorder. He recorded the sounds of an ancient zaar ceremony and at the Middle East Radio studios processed the material using reverberation, echo, voltage controls, and re-recording. The resulting tape-based composition, entitled The Expression of Zaar, was presented in 1944 at an art gallery event in Cairo. El-Dabh has described his initial activities as an attempt to unlock "the inner sound" of the recordings. While his early compositional work was not widely known outside of Egypt at the time, El-Dabh would eventually gain recognition for his influential work at the Columbia-Princeton Electronic Music Center in the late 1950s. Club d'Essai and Cinq études de bruits Following Schaeffer's work with Studio d'Essai at Radiodiffusion Nationale during the early 1940s, he was credited with originating the theory and practice of musique concrète. The Studio d'Essai was renamed Club d'Essai de la Radiodiffusion-Télévision Française in 1946 and in the same year Schaeffer discussed, in writing, the question surrounding the transformation of time perceived through recording. The essay evidenced knowledge of sound manipulation techniques he would further exploit compositionally. In 1948 Schaeffer formally initiated "research into noises" at the Club d'Essai and on 5 October 1948 the results of his initial experimentation were premiered at a concert given in Paris. Five works for phonograph – known collectively as Cinq études de bruits (Five Studies of Noises) including Étude violette (Study in Purple) and Étude aux chemins de fer (Study with Railroads) – were presented. Musique concrète By 1949 Schaeffer's compositional work was known publicly as musique concrète. Schaeffer stated: "when I proposed the term 'musique concrète,' I intended … to point out an opposition with the way musical work usually goes.
Instead of notating musical ideas on paper with the symbols of solfege and entrusting their realization to well-known instruments, the question was to collect concrete sounds, wherever they came from, and to abstract the musical values they were potentially containing". According to Pierre Henry, "musique concrète was not a study of timbre, it is focused on envelopes, forms. It must be presented by means of non-traditional characteristics, you see … one might say that the origin of this music is also found in the interest in 'plastifying' music, of rendering it plastic like sculpture…musique concrète, in my opinion … led to a manner of composing, indeed, a new mental framework of composing". Schaeffer had developed an aesthetic that was centred upon the use of sound as a primary compositional resource. The aesthetic also emphasised the importance of play (jeu) in the practice of sound based composition. Schaeffer's use of the word jeu, from the verb jouer, carries the same double meaning as the English verb to play: 'to enjoy oneself by interacting with one's surroundings', as well as 'to operate a musical instrument'. Groupe de Recherche de Musique Concrète By 1951 the work of Schaeffer, composer-percussionist Pierre Henry, and sound engineer Jacques Poullin had received official recognition and the Groupe de Recherches de Musique Concrète, Club d'Essai de la Radiodiffusion-Télévision Française was established at RTF in Paris, the ancestor of the ORTF. At RTF the GRMC established the first purpose-built electroacoustic music studio. It quickly attracted many who either were or were later to become notable composers, including Olivier Messiaen, Pierre Boulez, Jean Barraqué, Karlheinz Stockhausen, Edgard Varèse, Iannis Xenakis, Michel Philippot, and Arthur Honegger.
Compositional "output from 1951 to 1953 comprised Étude I (1951) and Étude II (1951) by Boulez, Timbres-durées (1952) by Messiaen, Étude aux mille collants (1952) by Stockhausen, Le microphone bien tempéré (1952) and La voile d'Orphée (1953) by Henry, Étude I (1953) by Philippot, Étude (1953) by Barraqué, the mixed pieces Toute la lyre (1951) and Orphée 53 (1953) by Schaeffer/Henry, and the film music Masquerage (1952) by Schaeffer and Astrologie (1953) by Henry. In 1954 Varèse and Honegger visited to work on the tape parts of Déserts and La rivière endormie". In the early and mid 1950s Schaeffer's commitments to RTF included official missions which often required extended absences from the studios. This led him to invest Philippe Arthuys with responsibility for the GRMC in his absence, with Pierre Henry operating as Director of Works. Pierre Henry's composing talent developed greatly during this period at the GRMC and he worked with experimental filmmakers such as Max de Haas, Jean Grémillon, Enrico Fulchignoni, and Jean Rouch, and with choreographers including Dick Sanders and Maurice Béjart. Schaeffer returned to run the group at the end of 1957, and immediately stated his disapproval of the direction the GRMC had taken. A proposal was then made to "renew completely the spirit, the methods and the personnel of the Group, with a view to undertake research and to offer a much needed welcome to young composers". Groupe de Recherches Musicales Following the emergence of differences within the GRMC Pierre Henry, Philippe Arthuys, and several of their colleagues, resigned in April 1958. Schaeffer created a new collective, called Groupe de Recherches Musicales (GRM) and set about recruiting new members including Luc Ferrari, Beatriz Ferreyra, François-Bernard Mâche, Iannis Xenakis, Bernard Parmegiani, and Mireille Chamass-Kyrou. Later arrivals included Ivo Malec, Philippe Carson, Romuald Vandelle, Edgardo Canton and François Bayle. 
GRM was one of several theoretical and experimental groups working under the umbrella of the Schaeffer-led Service de la Recherche at ORTF (1960–74). Together with the GRM, three other groups existed: the Groupe de Recherches Image GRI, the Groupe de Recherches Technologiques GRT and the Groupe de Recherches which became the Groupe d'Etudes Critiques. Communication was the one theme that unified the various groups, all of which were devoted to production and creation. In terms of the question "who says what to whom?" Schaeffer added "how?", thereby creating a platform for research into audiovisual communication and mass media, audible phenomena and music in general (including non-Western musics). At the GRM the theoretical teaching remained based on practice and could be summed up in the catch phrase do and listen. Schaeffer kept up a practice established with the GRMC of delegating the functions (though not the title) of Group Director to colleagues. Since 1961 GRM has had six Group Directors: Michel Philippot (1960–61), Luc Ferrari (1962–63), Bernard Baschet and François Vercken (1964–66). From the beginning of 1966, François Bayle took over the direction for the duration of thirty-one years, to 1997. He was then replaced by Daniel Teruggi. Traité des objets musicaux The group continued to refine Schaeffer's ideas and strengthened the concept of musique acousmatique. Schaeffer had borrowed the term acousmatic from Pythagoras and defined it as: "Acousmatic, adjective: referring to a sound that one hears without seeing the causes behind it". In 1966 Schaeffer published the book Traité des objets musicaux (Treatise on Musical Objects) which represented the culmination of some 20 years of research in the field of musique concrète. In conjunction with this publication, a set of sound recordings was produced, entitled Le solfège de l'objet sonore (Music Theory of the Acoustic Object), to provide examples of concepts dealt with in the treatise. 
Technology The development of musique concrète was facilitated by the emergence of new music technology in post-war Europe. Access to microphones, phonographs, and later magnetic tape recorders (created in 1939 and acquired by Schaeffer's Groupe de Recherche de Musique Concrète (Research Group on Concrete Music) in 1952), facilitated by an association with the French national broadcasting organization, at that time the Radiodiffusion-Télévision Française, gave Schaeffer and his colleagues an opportunity to experiment with recording technology and tape manipulation. Initial tools of musique concrète In 1948, a typical radio studio consisted of a series of shellac record players, a shellac record recorder, a mixing desk with rotating potentiometers, mechanical reverberation, filters, and microphones. This technology made a number of limited operations available to a composer: Shellac record players: could read a sound normally and in reverse mode, could change speed at fixed ratios thus permitting octave transposition. Shellac recorder: would record any result coming out of the mixing desk. Mixing desk: would permit several sources to be mixed together with an independent control of the gain or volume of the sound. The result of the mixing was sent to the recorder and to the monitoring loudspeakers. Signals could be sent to the filters or the reverberation unit. Mechanical reverberation: made of a metal plate or a series of springs that created the reverberation effect, indispensable to force sounds to "fuse" together. Filters: two kinds of filters, 1/3 octave filters and high and low-pass filters. They allowed the elimination or enhancement of selected frequencies. Microphones: an essential tool for capturing sound. The application of the above technologies in the creation of musique concrète led to the development of a number of sound manipulation techniques including: Sound transposition: reading a sound at a different speed than the one at which it was recorded.
Sound looping: composers developed a skilled technique in order to create loops at specific locations within a recording. Sound-sample extraction: a hand-controlled method that required delicate manipulation to get a clean sample of sound. It entailed letting the stylus read a small segment of a record. Used in the Symphonie pour un homme seul. Filtering: by eliminating most of the central frequencies of a signal, the remains would keep some trace of the original sound without making it recognisable. Magnetic tape The first tape recorders started arriving at ORTF in 1949; however, they were much less reliable than the shellac players, to the point that the Symphonie pour un homme seul, which was composed in 1950–51, was mainly composed with records, even though a tape recorder was available. In 1950, when the machines finally functioned correctly, the techniques of musique concrète were expanded. A range of new sound manipulation practices were explored using improved media manipulation methods and operations such as speed variation. A completely new possibility of organising sounds appeared with tape editing, which permitted tape to be spliced and arranged with extraordinary new precision. The "axe-cut junctions" were replaced with micrometric junctions, and a whole new technique of production, one less dependent on performance skills, could be developed. Tape editing brought a new technique called "micro-editing", in which tiny fragments of sound, representing milliseconds of time, were edited together to create completely new sounds or structures.
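As an illustrative sketch (not period-accurate tooling), two of the shellac-player operations described above, reverse reading and fixed-ratio speed change, can be modelled on a plain list of samples; the `transpose` and `reverse` helper names and the synthetic tone are assumptions of this sketch:

```python
import math

def transpose(samples, ratio):
    """Read a recording at `ratio` times its original speed.

    ratio 2.0 halves the duration and doubles every frequency
    (an upward octave transposition); ratio 0.5 does the opposite.
    """
    length = int(len(samples) / ratio)
    return [samples[int(i * ratio)] for i in range(length)]

def reverse(samples):
    """Read the recording backwards, as a shellac player could."""
    return samples[::-1]

# A one-second 440 Hz sine "recording" at an 8000 Hz sample rate.
rate = 8000
tone = [math.sin(2 * math.pi * 440 * i / rate) for i in range(rate)]

octave_up = transpose(tone, 2.0)   # plays in half a second, an octave higher
backwards = reverse(tone)
```

Note that, exactly as on the shellac players, this crude resampling couples pitch and duration: transposition up an octave necessarily halves the playing time.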
The maximum norm gives rise to the Chebyshev distance or chessboard distance, the minimal number of moves a chess king would take to travel from x to y. The British Rail metric (also called the "post office metric" or the "SNCF metric") on a normed vector space is given by d(x, y) = ‖x‖ + ‖y‖ for distinct points x and y, and d(x, x) = 0. More generally ‖.‖ can be replaced with a function f taking an arbitrary set S to non-negative reals and taking the value 0 at most once: then the metric is defined on S by d(x, y) = f(x) + f(y) for distinct points x and y, and d(x, x) = 0. The name alludes to the tendency of railway journeys to proceed via London (or Paris) irrespective of their final destination. If (M, d) is a metric space and X is a subset of M, then (X, d) becomes a metric space by restricting the domain of d to X × X. The discrete metric, where d(x, y) = 0 if x = y and d(x, y) = 1 otherwise, is a simple but important example, and can be applied to all sets. This, in particular, shows that for any set, there is always a metric space associated to it. Using this metric, the singleton of any point is an open ball, therefore every subset is open and the space has the discrete topology. A finite metric space is a metric space having a finite number of points. Not every finite metric space can be isometrically embedded in a Euclidean space. The hyperbolic plane is a metric space. More generally: If M is any connected Riemannian manifold, then we can turn M into a metric space by defining the distance of two points as the infimum of the lengths of the paths (continuously differentiable curves) connecting them. If X is some set and (M, d) is a metric space, then the set of all bounded functions f : X → M (i.e. those functions whose image is a bounded subset of M) can be turned into a metric space by defining d(f, g) = sup_{x ∈ X} d(f(x), g(x)) for any two bounded functions f and g (where sup is the supremum). This metric is called the uniform metric or supremum metric. If M is complete, then this function space is complete as well. If X is also a topological space, then the set of all bounded continuous functions from X to M (endowed with the uniform metric) will also be a complete metric space if M is.
If G is an undirected connected graph, then the set V of vertices of G can be turned into a metric space by defining d(x, y) to be the length of the shortest path connecting the vertices x and y. In geometric group theory this is applied to the Cayley graph of a group, yielding the word metric. Graph edit distance is a measure of dissimilarity between two graphs, defined as the minimal number of graph edit operations required to transform one graph into another. The Levenshtein distance is a measure of the dissimilarity between two strings x and y, defined as the minimal number of character deletions, insertions, or substitutions required to transform x into y. This can be thought of as a special case of the shortest path metric in a graph and is one example of an edit distance. Given a metric space (X, d) and an increasing concave function f : [0, ∞) → [0, ∞) such that f(x) = 0 if and only if x = 0, then f ∘ d is also a metric on X. Given an injective function f from any set A to a metric space (X, d), d(f(x), f(y)) defines a metric on A. Using T-theory, the tight span of a metric space is also a metric space. The tight span is useful in several types of analysis. The set of all m by n matrices over some field is a metric space with respect to the rank distance d(A, B) = rank(B − A). The Helly metric is used in game theory. Open and closed sets, topology and convergence Every metric space is a topological space in a natural manner, and therefore all definitions and theorems about general topological spaces also apply to all metric spaces. About any point x in a metric space M we define the open ball of radius r > 0 (where r is a real number) about x as the set B(x; r) = {y ∈ M : d(x, y) < r}. These open balls form the base for a topology on M, making it a topological space. Explicitly, a subset U of M is called open if for every x in U there exists an r > 0 such that B(x; r) is contained in U. The complement of an open set is called closed. A neighborhood of the point x is any subset of M that contains an open ball about x as a subset. A topological space which can arise in this way from a metric space is called a metrizable space.
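The Levenshtein distance just described is easy to compute with dynamic programming. The sketch below (the function name `levenshtein` is mine) also makes two of the metric axioms plausible: the distance is zero exactly for equal strings, and the three edit operations are mutually inverse, so the distance is symmetric:

```python
def levenshtein(a: str, b: str) -> int:
    """Minimal number of character deletions, insertions, or
    substitutions required to transform string a into string b."""
    prev = list(range(len(b) + 1))         # distances from "" to each prefix of b
    for i, ca in enumerate(a, start=1):
        cur = [i]                          # distance from a[:i] to ""
        for j, cb in enumerate(b, start=1):
            cur.append(min(
                prev[j] + 1,               # delete ca
                cur[j - 1] + 1,            # insert cb
                prev[j - 1] + (ca != cb),  # substitute ca -> cb (free if equal)
            ))
        prev = cur
    return prev[-1]
```

For example, `levenshtein("kitten", "sitting")` is 3 (substitute k→s, substitute e→i, insert g).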
A sequence (x_n) in a metric space M is said to converge to the limit x ∈ M if and only if for every ε > 0, there exists a natural number N such that d(x_n, x) < ε for all n > N. Equivalently, one can use the general definition of convergence available in all topological spaces. A subset A of the metric space M is closed if and only if every sequence in A that converges to a limit in M has its limit in A. Types of metric spaces Complete spaces A metric space M is said to be complete if every Cauchy sequence converges in M. That is to say: if d(x_n, x_m) → 0 as both n and m independently go to infinity, then there is some y ∈ M with d(x_n, y) → 0. Every Euclidean space is complete, as is every closed subset of a complete space. The rational numbers, using the absolute value metric d(x, y) = |x − y|, are not complete. Every metric space has a unique (up to isometry) completion, which is a complete space that contains the given space as a dense subset. For example, the real numbers are the completion of the rationals. If X is a complete subset of the metric space M, then X is closed in M. Indeed, a space is complete if and only if it is closed in any containing metric space. Every complete metric space is a Baire space. Bounded and totally bounded spaces A metric space M is called bounded if there exists some number r, such that d(x, y) ≤ r for all x, y ∈ M. The smallest possible such r is called the diameter of M. The space M is called precompact or totally bounded if for every r > 0 there exist finitely many open balls of radius r whose union covers M. Since the set of the centres of these balls is finite, it has finite diameter, from which it follows (using the triangle inequality) that every totally bounded space is bounded. The converse does not hold, since any infinite set can be given the discrete metric (one of the examples above) under which it is bounded and yet not totally bounded. Note that in the context of intervals in the space of real numbers and occasionally regions in a Euclidean space R^n a bounded set is referred to as "a finite interval" or "finite region".
However boundedness should not in general be confused with "finite", which refers to the number of elements, not to how far the set extends; finiteness implies boundedness, but not conversely. Also note that an unbounded subset of R^n may have a finite volume. Compact spaces A metric space M is compact if every sequence in M has a subsequence that converges to a point in M. This is known as sequential compactness and, in metric spaces (but not in general topological spaces), is equivalent to the topological notions of countable compactness and compactness defined via open covers. Examples of compact metric spaces include the closed interval [0, 1] with the absolute value metric, all metric spaces with finitely many points, and the Cantor set. Every closed subset of a compact space is itself compact. A metric space is compact if and only if it is complete and totally bounded. This is known as the Heine–Borel theorem. Note that compactness depends only on the topology, while boundedness depends on the metric. Lebesgue's number lemma states that for every open cover of a compact metric space M, there exists a "Lebesgue number" δ > 0 such that every subset of M of diameter less than δ is contained in some member of the cover. Every compact metric space is second countable, and is a continuous image of the Cantor set. (The latter result is due to Pavel Alexandrov and Urysohn.) Locally compact and proper spaces A metric space is said to be locally compact if every point has a compact neighborhood. Euclidean spaces are locally compact, but infinite-dimensional Banach spaces are not. A space is proper if every closed ball {y : d(x, y) ≤ r} is compact. Proper spaces are locally compact, but the converse is not true in general.
A metric satisfies a few simple properties: the distance from A to B is zero if and only if A and B are the same point, the distance between two distinct points is positive, the distance from A to B is the same as the distance from B to A, and the distance from A to B is less than or equal to the distance from A to B via any third point C. A metric on a space induces topological properties like open and closed sets, which lead to the study of more abstract topological spaces. The most familiar metric space is 3-dimensional Euclidean space. In fact, a "metric" is the generalization of the Euclidean metric arising from the four long-known properties of the Euclidean distance. The Euclidean metric defines the distance between two points as the length of the straight line segment connecting them. Other metric spaces occur for example in elliptic geometry and hyperbolic geometry, where distance on a sphere measured by angle is a metric, and the hyperboloid model of hyperbolic geometry is used by special relativity as a metric space of velocities. Some non-geometric metric spaces include spaces of finite strings (finite sequences of symbols from a predefined alphabet) equipped with e.g. a Hamming or Levenshtein distance, a space of subsets of any metric space equipped with the Hausdorff distance, a space of real functions integrable on a unit interval with an integral metric, or probabilistic spaces on any chosen metric space equipped with the Wasserstein metric. History In 1906 Maurice Fréchet introduced metric spaces in his work Sur quelques points du calcul fonctionnel. However the name is due to Felix Hausdorff. Definition A metric space is an ordered pair (M, d) where M is a set and d is a metric on M, i.e., a function d : M × M → R such that for any x, y, z ∈ M, the following holds:
1. d(x, y) = 0 if and only if x = y (identity of indiscernibles)
2. d(x, y) = d(y, x) (symmetry)
3. d(x, z) ≤ d(x, y) + d(y, z) (subadditivity or triangle inequality)
Given the above three axioms, we also have that d(x, y) ≥ 0 for any x, y ∈ M.
This is deduced as follows (from the top to the bottom):
d(x, y) + d(y, x) ≥ d(x, x)  (by triangle inequality)
d(x, y) + d(x, y) ≥ d(x, x)  (by symmetry)
2 d(x, y) ≥ 0  (by identity of indiscernibles)
d(x, y) ≥ 0  (we have non-negativity)
The function d is also called distance function or simply distance. Often, d is omitted and one just writes M for a metric space if it is clear from the context what metric is used. Ignoring mathematical details, for any system of roads and terrains the distance between two locations can be defined as the length of the shortest route connecting those locations. To be a metric there shouldn't be any one-way roads. The triangle inequality expresses the fact that detours aren't shortcuts. If the distance between two points is zero, the two points are indistinguishable from one another. Many of the examples below can be seen as concrete versions of this general idea. Examples of metric spaces The real numbers with the distance function d(x, y) = |y − x| given by the absolute difference, and, more generally, Euclidean n-space with the Euclidean distance, are complete metric spaces. The rational numbers with the same distance function also form a metric space, but not a complete one. The positive real numbers with distance function d(x, y) = |log(y/x)| is a complete metric space. Any normed vector space is a metric space by defining d(x, y) = ‖y − x‖, see also metrics on vector spaces. (If such a space is complete, we call it a Banach space.) Examples: The Manhattan norm gives rise to the Manhattan distance, where the distance between any two points, or vectors, is the sum of the differences between corresponding coordinates. The cyclic Mannheim metric or Mannheim distance is a modulo variant of the Manhattan metric. The maximum norm gives rise to the Chebyshev distance or chessboard distance, the minimal number of moves a chess king would take to travel from x to y. The British Rail metric (also called the "post office metric" or the "SNCF metric") on a normed vector space is given by d(x, y) = ‖x‖ + ‖y‖ for distinct points x and y, and d(x, x) = 0.
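A brute-force check of the metric axioms on a finite sample of points illustrates the definition. The helper names below are mine, and the two distances shown are the Manhattan and Chebyshev metrics mentioned among the examples; a spot check like this is of course not a proof:

```python
from itertools import product

def manhattan(p, q):
    # Sum of coordinate-wise absolute differences.
    return sum(abs(a - b) for a, b in zip(p, q))

def chebyshev(p, q):
    # Largest coordinate-wise absolute difference ("chessboard" distance).
    return max(abs(a - b) for a, b in zip(p, q))

def is_metric_on(d, points):
    """Verify the metric axioms for d over all triples of sample points."""
    for x, y, z in product(points, repeat=3):
        if (d(x, y) == 0) != (x == y):   # identity of indiscernibles
            return False
        if d(x, y) != d(y, x):           # symmetry
            return False
        if d(x, z) > d(x, y) + d(y, z):  # triangle inequality
            return False
    return True

pts = [(0, 0), (1, 2), (3, -1), (-2, 4)]
```

Both `is_metric_on(manhattan, pts)` and `is_metric_on(chebyshev, pts)` hold, as does the same check for the discrete metric `lambda p, q: 0 if p == q else 1`.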
no requirement to return to land. Birds Birds adapted to living in the marine environment are often called seabirds. Examples include albatross, penguins, gannets, and auks. Although they spend most of their lives in the ocean, species such as gulls can often be found thousands of miles inland. Mammals There are five main types of marine mammals, namely cetaceans (toothed whales and baleen whales); sirenians such as manatees; pinnipeds including seals and the walrus; sea otters; and the polar bear. All are air-breathing, and while some such as the sperm whale can dive for prolonged periods, all must return to the surface to breathe. Subfields The marine ecosystem is large, and thus there are many sub-fields of marine biology. Most involve studying specializations of particular animal groups, such as phycology, invertebrate zoology and ichthyology. Other subfields study the physical effects of continual immersion in sea water and the ocean in general, adaptation to a salty environment, and the effects of changing various oceanic properties on marine life. A subfield of marine biology studies the relationships between oceans and ocean life, and global warming and environmental issues (such as carbon dioxide displacement). Recent marine biotechnology has focused largely on marine biomolecules, especially proteins, that may have uses in medicine or engineering. Marine environments are the home to many exotic biological materials that may inspire biomimetic materials. Related fields Marine biology is a branch of biology. It is closely linked to oceanography, especially biological oceanography, and may be regarded as a sub-field of marine science. It also encompasses many ideas from ecology. Fisheries science and marine conservation can be considered partial offshoots of marine biology (as well as environmental studies). Marine Chemistry, Physical oceanography and Atmospheric sciences are closely related to this field. 
Distribution factors An active research topic in marine biology is to discover and map the life cycles of various species and where they spend their time. Technologies that aid in this discovery include pop-up satellite archival tags, acoustic tags, and a variety of other data loggers. Marine biologists study how the ocean currents, tides and many other oceanic factors affect ocean life forms, including their growth, distribution and well-being. This has only recently become technically feasible with advances in GPS and newer underwater visual devices. Most ocean life breeds in specific places, nests or not in others, spends time as juveniles in still others, and in maturity in yet others. Scientists know little about where many species spend different parts of their life cycles, especially in the infant and juvenile years. For example, it is still largely unknown where juvenile sea turtles and some year-1 sharks travel. Recent advances in underwater tracking devices are illuminating what we know about marine organisms that live at great ocean depths. The information that pop-up satellite archival tags give aids in seasonal fishing closures and the development of marine protected areas. This data is important to both scientists and fishermen because it shows that by restricting commercial fishing in one small area they can have a large impact in maintaining a healthy fish population in a much larger area. History The study of marine biology dates back to Aristotle (384–322 BC), who made many observations of life in the sea around Lesbos, laying the foundation for many future discoveries. In 1768, Samuel Gottlieb Gmelin (1744–1774) published the Historia Fucorum, the first work dedicated to marine algae and the first book on marine biology to use the new binomial nomenclature of Linnaeus. It included elaborate illustrations of seaweed and marine algae on folded leaves.
The British naturalist Edward Forbes (1815–1854) is generally regarded as the founder of the science of marine biology. The pace of oceanographic and marine biology studies quickly accelerated during the course of the 19th century. The observations made in the first studies of marine biology fueled the age of discovery and exploration that followed. During this time, a vast amount of knowledge was gained about the life that exists in the oceans of the world. Many voyages contributed significantly to this pool of knowledge. Among the most significant were the voyages of HMS Beagle, where Charles Darwin came up with his theories of evolution and of the formation of coral reefs. Another important expedition was undertaken by HMS Challenger, where findings were made of unexpectedly high species diversity among fauna, stimulating much theorizing by population ecologists on how such varieties of life could be maintained in what was thought to be such a hostile environment. This era was important for the history of marine biology but naturalists were still limited in their studies because they lacked technology that would allow them to adequately examine species that lived in deep parts of the oceans. The creation of marine laboratories was important because it allowed marine biologists to conduct research and process their specimens from expeditions. The oldest marine laboratory in
modified by their inhabitants. Some marine organisms, like corals, kelp and sea grasses, are ecosystem engineers which reshape the marine environment to the point where they create further habitat for other organisms. Intertidal and near shore Intertidal zones, the areas that are close to the shore, are constantly being exposed and covered by the ocean's tides. A huge array of life can be found within this zone. Shore habitats span from the upper intertidal zones to the area where land vegetation takes prominence. These habitats can be underwater anywhere from daily to very infrequently. Many species here are scavengers, living off sea life that is washed up on the shore. Many land animals also make much use of the shore and intertidal habitats. A subgroup of organisms in this habitat bores and grinds exposed rock through the process of bioerosion. Estuaries Estuaries are also near shore and influenced by the tides. An estuary is a partially enclosed coastal body of water with one or more rivers or streams flowing into it and with a free connection to the open sea. Estuaries form a transition zone between freshwater river environments and saltwater maritime environments. They are subject both to marine influences—such as tides, waves, and the influx of saline water—and to riverine influences—such as flows of fresh water and sediment. The shifting flows of both sea water and fresh water provide high levels of nutrients both in the water column and in sediment, making estuaries among the most productive natural habitats in the world. Reefs Reefs comprise some of the densest and most diverse habitats in the world. The best-known types of reefs are tropical coral reefs which exist in most tropical waters; however, reefs can also exist in cold water. Reefs are built up by corals and other calcium-depositing animals, usually on top of a rocky outcrop on the ocean floor. Reefs can also grow on other surfaces, which has made it possible to create artificial reefs.
Coral reefs also support a huge community of life, including the corals themselves, their symbiotic zooxanthellae, tropical fish and many other organisms. Much attention in marine biology is focused on coral reefs and the El Niño weather phenomenon. In 1998, coral reefs experienced the most severe mass bleaching events on record, when vast expanses of reefs across the world died because sea surface temperatures rose well above normal. Some reefs are recovering, but scientists say that between 50% and 70% of the world's coral reefs are now endangered and predict that global warming could exacerbate this trend. Open ocean The open ocean is relatively unproductive because of a lack of nutrients, yet because it is so vast, in total it produces the most primary productivity. The open ocean is separated into different zones, and the different zones each have different ecologies. Zones which vary according to their depth include the epipelagic, mesopelagic, bathypelagic, abyssopelagic, and hadopelagic zones. Zones which vary by the amount of light they receive include the photic and aphotic zones. Much of the aphotic zone's energy is supplied by the open ocean in the form of detritus. Deep sea and trenches The deepest recorded oceanic trench measured to date is the Mariana Trench, near the Philippines, in the Pacific Ocean. At such depths, water pressure is extreme and there is no sunlight, but some life still exists. A white flatfish, a shrimp and a jellyfish were seen by the American crew of the bathyscaphe Trieste when it dove to the bottom in 1960. In general, the deep sea is considered to start at the aphotic zone, the point where sunlight loses its power of transference through the water. Many life forms that live at these depths have the ability to create their own light, known as bioluminescence. Marine life also flourishes around seamounts that rise from the depths, where fish and other sea life congregate to spawn and feed.
Hydrothermal vents along the mid-ocean ridge spreading centers act as oases, as do their opposites, cold seeps. Such places support unique biomes and many new microbes and other lifeforms have been discovered at these locations. Marine life In biology many phyla, families and genera have some species that live in the sea and others that live on land. Marine biology classifies species based on the environment rather than on taxonomy. For this reason marine biology encompasses not only organisms that live only in a marine environment, but also other organisms whose lives revolve around the sea. Microscopic life As inhabitants of the largest environment on Earth, microbial marine systems drive changes in every global system. Microbes are responsible for virtually all the photosynthesis that occurs in the ocean, as well as the cycling of carbon, nitrogen, phosphorus and other nutrients and trace elements. Microscopic life undersea is incredibly diverse and still poorly understood. For example, the role of viruses in marine ecosystems was only beginning to be explored at the start of the 21st century. The role of phytoplankton is better understood due to their critical position as the most numerous primary producers on Earth. Phytoplankton are categorized into cyanobacteria (also called blue-green algae/bacteria), various types of algae (red, green, brown, and yellow-green), diatoms, dinoflagellates, euglenoids, coccolithophorids, cryptomonads, chrysophytes, chlorophytes, prasinophytes, and silicoflagellates. Zooplankton tend to be somewhat larger, and not all are microscopic. Many Protozoa are zooplankton, including dinoflagellates, zooflagellates, foraminiferans, and radiolarians. Some of these (such as dinoflagellates) are also phytoplankton; the distinction between plants and animals often breaks down in very small organisms. Other zooplankton include cnidarians, ctenophores, chaetognaths, molluscs, arthropods, urochordates, and annelids such as polychaetes.
Many larger animals begin their life as zooplankton before they become large enough to take their familiar forms. Two examples are fish larvae and sea stars (also called starfish). Plants and algae Microscopic algae and plants provide important habitats
Apple's macOS, released in 2001, still uses a hybrid kernel called XNU, which combines a heavily modified (hybrid) OSF/1 Mach kernel (OSFMK 7.3 kernel) with code from BSD UNIX, and this kernel is also used in iOS, tvOS, and watchOS. Windows NT, starting with NT 3.1 and continuing with Windows 10, uses a hybrid kernel design. The Mach-based GNU Hurd is also functional and included in testing versions of Arch Linux and Debian. Although major work on microkernels had largely ended, experimenters continued development. It has since been shown that many of the performance problems of earlier designs were not a fundamental limitation of the concept, but instead due to the designer's desire to use single-purpose systems to implement as many of these services as possible. Using a more pragmatic approach to the problem, including assembly code and relying on the processor to enforce concepts normally supported in software, led to a new series of microkernels with dramatically improved performance. Microkernels are closely related to exokernels. They also have much in common with hypervisors, but the latter make no claim to minimality and are specialized to supporting virtual machines; the L4 microkernel frequently finds use in a hypervisor capacity. Introduction Early operating system kernels were rather small, partly because computer memory was limited. As the capability of computers grew, the number of devices the kernel had to control also grew. Throughout the early history of Unix, kernels were generally small, even though they contained various device drivers and file system implementations. When address spaces increased from 16 to 32 bits, kernel design was no longer constrained by the hardware architecture, and kernels began to grow larger. The Berkeley Software Distribution (BSD) of Unix began the era of larger kernels.
In addition to operating a basic system consisting of the CPU, disks and printers, BSD added a complete TCP/IP networking system and a number of "virtual" devices that allowed the existing programs to work 'invisibly' over the network. This growth continued for many years, resulting in kernels with millions of lines of source code. As a result of this growth, kernels were prone to bugs and became increasingly difficult to maintain. The microkernel was intended to address this growth of kernels and the difficulties that resulted. In theory, the microkernel design allows for easier management of code due to its division into user space services. This also allows for increased security and stability resulting from the reduced amount of code running in kernel mode. For example, if a networking service crashed due to buffer overflow, only the networking service's memory would be corrupted, leaving the rest of the system still functional. Inter-process communication Inter-process communication (IPC) is any mechanism which allows separate processes to communicate with each other, usually by sending messages. Shared memory is, strictly defined, also an inter-process communication mechanism, but the abbreviation IPC usually refers to message passing only, and it is the latter that is particularly relevant to microkernels. IPC allows the operating system to be built from a number of smaller programs called servers, which are used by other programs on the system, invoked via IPC. Most or all support for peripheral hardware is handled in this fashion, with servers for device drivers, network protocol stacks, file systems, graphics, etc. IPC can be synchronous or asynchronous. Asynchronous IPC is analogous to network communication: the sender dispatches a message and continues executing. The receiver checks (polls) for the availability of the message, or is alerted to it via some notification mechanism. 
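The copy and buffering behaviour of the two IPC styles described above can be sketched in C. This is a toy, single-process model (a real kernel moves data between separate address spaces on a trap, not with plain memcpy); the names async_send, async_recv, and sync_transfer are illustrative, not any real kernel's API.

```c
#include <assert.h>
#include <string.h>

#define MSG_SIZE 64

/* Asynchronous IPC: the kernel buffers the message, so the payload is
 * copied twice: sender -> kernel queue, then kernel queue -> receiver. */
static char kernel_buf[MSG_SIZE];
static int kernel_buf_full = 0;

int async_send(const char msg[MSG_SIZE]) {
    if (kernel_buf_full)
        return -1;                       /* kernel must handle overflow */
    memcpy(kernel_buf, msg, MSG_SIZE);   /* copy 1: sender -> kernel */
    kernel_buf_full = 1;
    return 0;                            /* sender continues immediately */
}

int async_recv(char out[MSG_SIZE]) {
    if (!kernel_buf_full)
        return -1;                       /* receiver polls for a message */
    memcpy(out, kernel_buf, MSG_SIZE);   /* copy 2: kernel -> receiver */
    kernel_buf_full = 0;
    return 0;
}

/* Synchronous IPC: sender and receiver rendezvous, so the kernel can
 * copy the payload once, directly between the two parties' buffers. */
int sync_transfer(const char sender_buf[MSG_SIZE], char receiver_buf[MSG_SIZE]) {
    memcpy(receiver_buf, sender_buf, MSG_SIZE);  /* single copy */
    return 0;
}
```

The model shows why synchronous IPC avoids both the kernel-side buffer management and the double copy, at the price of the implicit rendezvous discussed below.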
Asynchronous IPC requires that the kernel maintains buffers and queues for messages, and deals with buffer overflows; it also requires double copying of messages (sender to kernel and kernel to receiver). In synchronous IPC, the first party (sender or receiver) blocks until the other party is ready to perform the IPC. It does not require buffering or multiple copies, but the implicit rendezvous can make programming tricky. Most programmers prefer asynchronous send and synchronous receive. First-generation microkernels typically supported synchronous as well as asynchronous IPC, and suffered from poor IPC performance. Jochen Liedtke assumed the design and implementation of the IPC mechanisms to be the underlying reason for this poor performance. In his L4 microkernel he pioneered methods that lowered IPC costs by an order of magnitude. These include an IPC system call that supports a send as well as a receive operation, making all IPC synchronous, and passing as much data as possible in registers. Furthermore, Liedtke introduced the concept of the direct process switch, where during an IPC execution an (incomplete) context switch is performed from the sender directly to the receiver. If, as in L4, part or all of the message is passed in registers, this transfers the in-register part of the message without any copying at all. Furthermore, the overhead of invoking the scheduler is avoided; this is especially beneficial in the common case where IPC is used in a remote procedure call (RPC) type fashion by a client invoking a server. Another optimization, called lazy scheduling, avoids traversing scheduling queues during IPC by leaving threads that block during IPC in the ready queue. Once the scheduler is invoked, it moves such threads to the appropriate waiting queue. As in many cases a thread gets unblocked before the next scheduler invocation, this approach saves significant work. Similar approaches have since been adopted by QNX and MINIX 3.
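The register-passed "short IPC" and direct process switch described above can be modelled with a toy sketch. The struct and function names below are illustrative, not the real L4 ABI; the point is that a message small enough to fit in registers is handed over during the switch without ever touching kernel memory.

```c
#include <assert.h>
#include <stdint.h>

typedef struct {
    uint64_t mr0, mr1, mr2, mr3;   /* "message registers" */
} short_msg;

typedef struct {
    int waiting_to_receive;        /* is the thread blocked in a receive? */
    short_msg regs;                /* stands in for the thread's register set */
} thread;

/* Combined send-and-receive call: deliver 'msg' and switch directly to
 * the partner, bypassing the scheduler. If the partner is not yet
 * waiting, a real kernel would block the caller (here we just fail). */
int ipc_call(thread *partner, const short_msg *msg) {
    if (!partner->waiting_to_receive)
        return -1;                 /* rendezvous not possible yet */
    partner->regs = *msg;          /* message stays "in registers": no buffering */
    partner->waiting_to_receive = 0;
    /* ...the direct context switch to 'partner' would happen here... */
    return 0;
}
```

Because the send and the receive are one system call and the message never leaves the register file, the kernel performs no copy at all for the in-register part of the message.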
In a series of experiments, Chen and Bershad compared memory cycles per instruction (MCPI) of monolithic Ultrix with those of microkernel Mach combined with a 4.3BSD Unix server running in user space. Their results explained Mach's poorer performance by higher MCPI and demonstrated that IPC alone is not responsible for much of the system overhead, suggesting that optimizations focused exclusively on IPC will have a limited effect. Liedtke later refined Chen and Bershad's results by observing that the bulk of the difference between Ultrix and Mach MCPI was caused by capacity cache-misses and concluding that drastically reducing the cache working set of a microkernel would solve the problem. In a client-server system, most communication is essentially synchronous, even if using asynchronous primitives, as the typical operation is a client invoking a server and then waiting for a reply. As it also lends itself to more efficient implementation, most microkernels generally followed L4's lead and only provided a synchronous IPC primitive. Asynchronous IPC could be implemented on top by using helper threads. However, experience has shown that the utility of synchronous IPC is dubious: synchronous IPC forces a multi-threaded design onto otherwise simple systems, with the resulting synchronization complexities. Moreover, an RPC-like server invocation sequentializes client and server, which should be avoided if they are running on separate cores. Versions of L4 deployed in commercial products have therefore found it necessary to add an asynchronous notification mechanism to better support asynchronous communication. This signal-like mechanism does not carry data and therefore does not require buffering by the kernel. By having two forms of IPC, they have nonetheless violated the principle of minimality. Other versions of L4 have switched to asynchronous IPC completely.
As synchronous IPC blocks the first party until the other is ready, unrestricted use could easily lead to deadlocks. Furthermore, a client could easily mount a denial-of-service attack on a server by sending a request and never attempting to receive the reply. Therefore, synchronous IPC must provide a means to prevent indefinite blocking. Many microkernels provide timeouts on IPC calls, which limit the blocking time. In practice, choosing sensible timeout values is difficult, and systems almost inevitably use infinite timeouts for clients and zero timeouts for servers. As a consequence, the trend is towards not providing arbitrary timeouts, but only a flag which indicates that the IPC should fail immediately if the partner is not ready. This approach effectively provides a choice of the two timeout values of zero and infinity. Recent versions of L4 and MINIX have gone down this path (older versions of L4 used timeouts). QNX avoids the problem by requiring the client to specify the reply buffer as part of the message send call. When the server replies the kernel copies the data to the client's buffer, without having to wait for the client to receive the response explicitly. Servers Microkernel servers are essentially daemon programs like any others, except that the kernel grants some of them privileges to interact with parts of physical memory that are otherwise off limits to most programs. This allows some servers, particularly device drivers, to interact directly with hardware. A basic set of servers for a general-purpose microkernel includes file system servers, device driver servers, networking servers, display servers, and user interface device servers. This set of servers (drawn from QNX) provides roughly the set of services offered by a Unix monolithic kernel. The necessary servers are started at system startup and provide services, such as file, network, and device access, to ordinary application programs. 
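The reduced timeout interface discussed above (arbitrary timeouts replaced by a single flag) amounts to a two-way choice. A minimal sketch, with a hypothetical API loosely modelled on the behaviour of recent L4 and MINIX versions:

```c
#include <assert.h>

#define IPC_NONBLOCK 1   /* "zero timeout": fail at once if partner not ready */

enum ipc_result { IPC_OK, IPC_WOULD_BLOCK, IPC_ABORTED };

/* With only the flag, the caller chooses between the two useful timeout
 * values: infinity (block until the partner arrives) and zero (fail
 * immediately). In this toy model, blocking is merely reported. */
enum ipc_result ipc_send(int partner_ready, int flags) {
    if (partner_ready)
        return IPC_OK;                 /* rendezvous succeeds */
    if (flags & IPC_NONBLOCK)
        return IPC_ABORTED;            /* zero timeout: give up now */
    return IPC_WOULD_BLOCK;            /* infinite timeout: caller blocks */
}
```

Clients typically take the blocking path (they need the reply), while servers use the non-blocking path so that a misbehaving client cannot hold them hostage.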
With such servers running in the environment of a user application, server development is similar to ordinary application development, rather than the build-and-boot process needed for kernel development. Additionally, many "crashes" can be corrected by simply stopping and restarting the server. However, part of the system state is lost with the failing server, hence this approach requires applications to cope with failure. A good example is a server responsible for TCP/IP connections: If this server is restarted, applications will experience a "lost" connection, a normal occurrence in a networked system. For other services, failure is less expected and may require changes to application code. For QNX, restart capability is offered as the QNX High Availability Toolkit. Device drivers Device drivers frequently perform direct memory access (DMA), and therefore can write to arbitrary locations of physical memory, including various kernel data structures. Such drivers must therefore be trusted. It is a common misconception that this means that they must be part of the kernel. In fact, a driver is not inherently more or less trustworthy by being part of the kernel. While running a device driver in user space does not necessarily reduce the damage a misbehaving driver can cause, in practice it is beneficial for system stability in the presence of buggy (rather than malicious) drivers: memory-access violations by the driver code itself (as opposed to the device) may still be caught by the memory-management hardware. Furthermore, many devices are not DMA-capable; their drivers can be made untrusted by running them in user space. Recently, an increasing number of computers feature IOMMUs, many of which can be used to restrict a device's access to physical memory. This also allows user-mode drivers to become untrusted. User-mode drivers actually predate microkernels.
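The stop-and-restart recovery described above can be sketched with ordinary POSIX process management, since microkernel servers are just processes. This is an illustrative supervisor loop under that assumption, not the actual API of the QNX High Availability Toolkit:

```c
#include <assert.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

/* Re-spawn a server whenever it dies, up to a retry limit. Any state the
 * server held is lost on each restart, so clients must cope with failure
 * (e.g. by reopening a "lost" TCP/IP connection). */
int supervise(int max_restarts, int (*server_main)(void)) {
    int restarts = 0;
    for (;;) {
        pid_t pid = fork();
        if (pid < 0)
            return -1;               /* could not spawn the server at all */
        if (pid == 0)
            _exit(server_main());    /* run the server in its own process */
        int status = 0;
        waitpid(pid, &status, 0);
        if (WIFEXITED(status) && WEXITSTATUS(status) == 0)
            return restarts;         /* clean shutdown: stop supervising */
        if (++restarts >= max_restarts)
            return restarts;         /* give up after repeated crashes */
    }
}
```

The key property is that a crash is contained in the server's own address space; the supervisor and the rest of the system keep running.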
The Michigan Terminal System (MTS), in 1967, supported user space drivers (including its file system support), the first operating system to be designed with that capability. Historically, drivers were less of a problem, as the number of devices was small and trusted anyway, so having them in the kernel simplified the design and avoided potential performance problems. This led to the traditional driver-in-the-kernel style of Unix, Linux, and Windows NT. With the proliferation of various kinds of peripherals, the amount of driver code escalated and in modern operating systems dominates the kernel in code size. Essential components and minimality As a microkernel must allow building arbitrary operating system services on top, it must provide some core functionality. At a minimum, this includes: Some mechanisms for dealing with address spaces, required for managing memory protection Some execution abstraction to manage CPU allocation, typically threads or scheduler activations Inter-process communication, required to invoke servers running in their own address spaces This minimal design was pioneered by Brinch Hansen's Nucleus and the hypervisor of IBM's VM. It has since been formalised in Liedtke's minimality principle: A concept is tolerated inside the microkernel only if moving it outside the kernel, i.e., permitting competing implementations, would prevent the implementation of the system's required functionality. Everything else can be done in a usermode program, although device drivers implemented as user programs may on some processor architectures require special privileges to access I/O hardware. Related to the minimality principle, and equally important for microkernel design, is the separation of mechanism and policy; it is what enables the construction of arbitrary systems on top of a minimal kernel. Any policy built into the kernel cannot be
overwritten at user level and therefore limits the generality of the microkernel. Policy implemented in user-level servers can be changed by replacing the servers (or letting the application choose between competing servers offering similar services). For efficiency, most microkernels contain schedulers and manage timers, in violation of the minimality principle and the principle of policy-mechanism separation. Start up (booting) of a microkernel-based system requires device drivers, which are not part of the kernel. Typically this means that they are packaged with the kernel in the boot image, and the kernel supports a bootstrap protocol that defines how the drivers are located and started; this is the traditional bootstrap procedure of L4 microkernels.
Some microkernels simplify this by placing some key drivers inside the kernel (in violation of the minimality principle); LynxOS and the original Minix are examples. Some even include a file system in the kernel to simplify booting. A microkernel-based system may boot via a Multiboot-compatible boot loader. Such systems usually load statically-linked servers to make an initial bootstrap or mount an OS image to continue bootstrapping. A key component of a microkernel is a good IPC system and virtual-memory-manager design that allows implementing page-fault handling and swapping in usermode servers in a safe way. Since all services are performed by usermode programs, efficient means of communication between programs are essential, far more so than in monolithic kernels. The design of the IPC system makes or breaks a microkernel. To be effective, the IPC system must not only have low overhead, but also interact well with CPU scheduling. Performance On most mainstream processors, obtaining a service is inherently more expensive in a microkernel-based system than a monolithic system. In the monolithic system, the service is obtained by a single system call, which requires two mode switches (changes of the processor's ring or CPU mode). In the microkernel-based system, the service is obtained by sending an IPC message to a server, and obtaining the result in another IPC message from the server. This requires a context switch if the drivers are implemented as processes, or a function call if they are implemented as procedures. In addition, passing actual data to the server and back may incur extra copying overhead, while in a monolithic system the kernel can directly access the data in the client's buffers. Performance is therefore a potential issue in microkernel systems. The experience of first-generation microkernels such as Mach and ChorusOS showed that systems based on them performed very poorly.
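The extra switches described above can be made concrete with a back-of-envelope cost model. The cycle counts below are illustrative placeholders, not measurements of any real processor:

```c
#include <assert.h>

#define MODE_SWITCH_COST    100   /* cycles per user/kernel crossing (made up) */
#define CONTEXT_SWITCH_COST 400   /* cycles per address-space switch (made up) */

/* Monolithic kernel: one system call = trap into the kernel and return. */
int monolithic_service_cost(void) {
    return 2 * MODE_SWITCH_COST;
}

/* Microkernel with the server in its own process: the request IPC and the
 * reply IPC each cross into the kernel and back (four mode switches), and
 * the kernel must also switch to the server's address space and back. */
int microkernel_service_cost(void) {
    return 4 * MODE_SWITCH_COST + 2 * CONTEXT_SWITCH_COST;
}
```

Whatever the real constants are, the microkernel path performs strictly more switches per service request, which is why IPC cost dominates microkernel performance work.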
However, Jochen Liedtke showed that Mach's performance problems were the result of poor design and implementation, specifically Mach's excessive cache footprint. Liedtke demonstrated with his own L4 microkernel that through careful design and implementation, and especially by following the minimality principle, IPC costs could be reduced by more than an order of magnitude compared to Mach. L4's IPC performance is still unbeaten across a range of architectures. While these results demonstrate that the poor performance of systems based on first-generation microkernels is not representative for second-generation kernels such as L4, this constitutes no proof that microkernel-based systems can
kernel technology ATI Mach, a 2D GPU chip by ATI GNU Mach, the microkernel upon which GNU Hurd is based mach, a computer program for building RPM packages in a chroot environment Places Machh or Mach, a town in Pakistan Machynlleth or Mach, a town in Wales Mach (crater), a lunar crater 3949 Mach, an asteroid Other uses Mach number, a measure of
(song), a 2010 song by Rainbow Mach (Transformers), a Multiforce character in Transformers: Victory M.A.C.H. (video game) Muscarinic acetylcholine receptor (mACh) Fly Castelluccio Mach, an Italian paramotor design Vietnamese mạch, an obsolete Vietnamese currency unit Hayato Sakurai or Mach (born 1975), mixed martial artist M.A.C.H., a fictional series of cyborg and robot agents in M.A.C.H. 1 See also Mac (disambiguation) Mach O (disambiguation) Mach 1 (disambiguation) Mach 2 (disambiguation) Mach 3 (disambiguation) Mach
century. However, success required better materials and more developed hydrodynamic technologies. During the second half of the 20th century catamaran designs flourished. Catamaran configurations are used for racing, sailing, tourist and fishing boats. The hulls of a catamaran are typically connected by a bridgedeck, although some simpler cruising catamarans simply have a trampoline stretched between the crossbeams (or "akas"). Small beachable catamarans, such as the Hobie Cat, also have only a trampoline between the hulls. Catamarans derive stability from the distance between the hulls—transverse clearance—the greater this distance, the greater the stability. Typically, catamaran hulls are slim, although they may flare above the waterline to give reserve buoyancy. The vertical clearance between the design waterplane and the bottom of the bridge deck determines the likelihood of contact with waves. Increased vertical clearance diminishes such contact and increases seaworthiness, within limits. Trimaran (double-outrigger) A trimaran (or double-outrigger) is a vessel with two outrigger floats attached on either side of a main hull by a crossbeam, wing, or other form of superstructure. They are derived from traditional double-outrigger vessels of maritime Southeast Asia. Despite not being traditionally Polynesian, western trimarans use traditional Polynesian terms for the hull (vaka), the floats (ama), and connectors (aka). The word "trimaran" is a portmanteau of "tri" and "(cata)maran", a term that is thought to have been coined by Victor Tchetchet, a pioneering, Ukrainian-born modern multihull designer. Some trimaran configurations use the outlying hulls to enhance stability and allow for shallow draft; examples include the experimental ship RV Triton and the Independence class of littoral combat ships (US). Four and five hulls Some multihulls with four (quadrimaran) or five (pentamaran) hulls have been proposed; few have been built.
A Swiss entrepreneur is attempting to raise €25 million to build a sail-driven quadrimaran that would use solar power to scoop plastic from the ocean; the project is scheduled for launch in 2020. A French manufacturer, Tera-4, produces motor quadrimarans which use aerodynamic lift between the four hulls to promote planing and reduce power consumption. Design concepts for vessels with two pairs of outriggers have been referred to as pentamarans. The design concept comprises a narrow, long hull that cuts through waves. The outriggers then provide the stability that such a narrow hull needs. While the aft sponsons act as trimaran sponsons do, the front sponsons do not touch the water normally; only if the ship rolls to one side do they provide added buoyancy to correct the roll. BMT Group, a shipbuilding and engineering company in the UK, has proposed a fast cargo ship and a yacht using this kind of hull. SWATH multihulls Multihull designs may have hull beams that are slimmer at the water surface ("waterplane") than underwater. This arrangement allows good wave-piercing, while keeping a buoyant hydrodynamic hull beneath the waterplane. In a catamaran configuration this is called a small waterplane area twin hull, or SWATH. While SWATHs are stable in rough seas, they have the drawbacks, compared with other catamarans, of having a deeper draft, being more sensitive to loading, and requiring more power because of their higher underwater surface areas. Triple-hull configurations of small waterplane area craft had been studied, but not built, as of 2008. Performance Each hull of a multihull vessel can be narrower than that of a monohull with the same displacement; with such long, narrow hulls, a multihull typically produces very small bow waves and wakes, a consequence of a favorable Froude number. Vessels with beamy hulls (typically monohulls) normally create a large bow wave and wake.
Such a vessel is limited by its "hull speed", being unable to "climb over" its bow wave unless it changes from displacement mode to planing mode. Vessels with slim hulls (typically multihulls) will normally create no appreciable bow wave to limit their progress. In 1978, 101 years after catamarans like Amaryllis were banned from yacht racing, they returned to the sport. This started with the victory of the trimaran Olympus Photo, skippered by Mike Birch in the first Route du Rhum. Thereafter, no open ocean race was won by a monohull. Winning times dropped by 70% since 1978: Olympus Photo's winning time of 23 days 6 h 58 min 35 s dropped to Gitana 11's 7 days 17 h 19 min 6 s in 2006. Around 2016 the first large wind-driven foil-borne racing catamarans were built. These cats rise onto foils and T-foiled rudders only at higher speeds. Sailing multihulls and workboats The increasing
use sails are usually inaccurately referred to by the name "proa". While single-outrigger canoes and proas both derive stability from the outrigger, the proa has the greater need of the outrigger to counter the heeling effect of the sail. The outrigger on a proa can either be on the lee or windward side, or in a tacking proa, interchangeable. However, more recently, proas tend to keep the outrigger either to leeward or to windward, which means that instead of tacking, a "shunt" is required, whereby the bow becomes the stern, and the stern becomes the bow (see Pacific, Atlantic, Harry and tacking proas). Catamaran (double-hull) A catamaran is a vessel with twin hulls. Commercial catamarans began in 17th century England. Separate attempts at steam-powered catamarans were carried out by the middle of the 20th century.
the Multics operating system and first sold in June 1976. Unlike the SQL systems that emerged in the late 1970s and early 80s, MRDS used a command language only for basic data manipulation, equivalent to the SELECT or UPDATE statements in SQL. Other operations, like creating a new database, or general file management, required the use of a separate command program.
television show Top of the Pops. Oldfield's music was used for the score of The Space Movie (1980), a Virgin Films production that celebrated the tenth anniversary of the Apollo 11 mission. In 1979, he recorded a version of the signature tune for the BBC children's television programme Blue Peter, which was used by the show for 10 years. Platinum to Heaven's Open Oldfield's fifth album, Platinum, was released in November 1979 and marked the start of his transition from long compositions towards mainstream and pop music. Oldfield performed across Europe between April and December 1980 with the In Concert 1980 tour. In 1980, Oldfield released QE2, named after the ocean liner, which features a variety of guest musicians including Phil Collins on drums. This was followed by the European Adventure Tour 1981, during which Oldfield accepted an invitation to perform at a free concert celebrating the wedding of Prince Charles and Lady Diana in Guildhall. He wrote a new track, "Royal Wedding Anthem", for the occasion. His next album, Five Miles Out, followed in March 1982, which features the 24-minute track "Taurus II" occupying side one. The Five Miles Out World Tour 1982 saw Oldfield perform from April to December of that year. Crises saw Oldfield continue the pattern of one long composition with shorter songs. The first single from the album, "Moonlight Shadow", with Maggie Reilly on vocals, became Oldfield's most successful single, reaching No. 4 in the UK and No. 1 in nine other countries. The subsequent Crises Tour in 1983 concluded with a concert at Wembley Arena to commemorate the tenth anniversary of Tubular Bells. The next album, Discovery, continued this trend; its first single was "To France", and it was followed by the Discovery Tour 1984. Oldfield later turned to film and video, writing the score for Roland Joffé's acclaimed film The Killing Fields and producing substantial video footage for his album Islands.
Islands continued what Oldfield had been doing on the past couple of albums, with an instrumental piece on one side and rock/pop singles on the other. Of these, "Islands", sung by Bonnie Tyler and "Magic Touch", with vocals by Max Bacon (in the US version) and Glasgow vocalist Jim Price (Southside Jimmy) in the rest of the world, were the major hits. In the US "Magic Touch" reached the top 10 on the Billboard album rock charts in 1988. During the 1980s, Oldfield's then-wife, Norwegian singer Anita Hegerland, contributed vocals to many songs including "Pictures in the Dark". Released in July 1989, Earth Moving features seven vocalists across the album's nine tracks. It is Oldfield's first to consist solely of rock and pop songs, several of which were released as singles: "Innocent" and "Holy" in Europe, and "Hostage" in the US. For his next instrumental album, Virgin insisted that Oldfield use the title Tubular Bells 2. Oldfield's rebellious response was Amarok, an hour-long work featuring rapidly changing themes, unpredictable bursts of noise and a hidden Morse code insult, stating "Fuck off RB", allegedly directed at Branson. Oldfield did everything in his power to make it impossible to make extracts and Virgin returned the favour by barely promoting the album. In February 1991, Oldfield released his final album for Virgin, Heaven's Open, under the name "Michael Oldfield". It marks the first time he handles all lead vocals. In 2013, Oldfield invited Branson to the opening of St. Andrew's International School of The Bahamas, where two of Oldfield's children were pupils. This was the occasion of the debut of Tubular Bells for Schools, a piano solo adaptation of Oldfield's work. 1992–2003: Warner years By early 1992, Oldfield had secured Clive Banks as his new manager and had several record label owners listen to his demo of Tubular Bells II at his house. Oldfield signed with Rob Dickins of WEA Warner and recorded the album with Trevor Horn as producer.
Released in August 1992, the album went to No. 1 in the UK. Its live premiere followed on 4 September at Edinburgh Castle and was released on home video as Tubular Bells II Live. Oldfield supported the album with his Tubular Bells II 20th Anniversary Tour in 1992 and 1993, his first concert tour since 1984. By April 1993, the album had sold over three million copies worldwide. Oldfield continued to embrace new musical styles, with The Songs of Distant Earth (based on Arthur C. Clarke's novel of the same name) exhibiting a softer new-age sound. In 1994, he also had an asteroid, 5656 Oldfield, named after him. In 1995, Oldfield continued in this vein with the Celtic-themed album Voyager. In 1992, Oldfield had met Luar na Lubre, a Galician Celtic-folk band (from A Coruña, Spain), with the singer Rosa Cedrón. The band's popularity grew after Oldfield covered their song "O son do ar" ("The sound of the air") on his Voyager album. In 1998, Oldfield produced the third Tubular Bells album (also premiered at a concert, this time in Horse Guards Parade, London), drawing on the dance music scene at his then new home on the island of Ibiza. This album was inspired by themes from Tubular Bells, but differed in lacking a clear two-part structure. During 1999, Oldfield released two albums. The first, Guitars, used guitars as the source for all the sounds on the album, including percussion. The second, The Millennium Bell, consisted of pastiches of a number of styles of music that represented various historical periods over the past millennium. The work was performed live in Berlin for the city's millennium celebrations in 1999–2000. He added to his repertoire the MusicVR project, combining his music with a virtual reality-based computer game. His first work on this project was Tr3s Lunas, launched in 2002, a virtual game where the player can interact with a world full of new music.
This project appeared as a double CD, one with the music, and the other with the game. In 2002 and 2003, Oldfield re-recorded Tubular Bells using modern equipment, to coincide with the 30th anniversary of the original. He had wanted to do so years before, but his contract with Virgin had kept him from doing it. This new version features John Cleese as the Master of Ceremonies, since Viv Stanshall, who spoke on the original, had died in the interim. Tubular Bells 2003 was released in May 2003.

2004–present: Mercury years

On 12 April 2004, Oldfield launched his next virtual reality project, Maestro, which contains music from the Tubular Bells 2003 album and some new chillout melodies. The games have since been made available free of charge on Tubular.net. In 2005, Oldfield signed a deal with Mercury Records UK, which acquired the rights to his back catalogue in July 2007, once they had reverted to him. Oldfield released his first album on the Mercury label, Light + Shade, in September 2005. It is a double album of music of contrasting moods: relaxed (Light) and upbeat and moody (Shade). In 2006 and 2007, Oldfield headlined the Night of the Proms tour, consisting of 21 concerts across Europe. Also in 2007, Oldfield released his autobiography, Changeling. In March 2008, Oldfield released his first classical album, Music of the Spheres; Karl Jenkins assisted with the orchestration. In the first week of release, the album topped the UK Classical chart and reached number 9 on the main UK Album Chart. A single, "Spheres", featuring a demo version of pieces from the album, was released digitally. The album was nominated for a Classical Brit Award, the NS&I Best Album of 2009. In 2008, when Oldfield's original 35-year deal with Virgin Records ended, the rights to Tubular Bells and his other Virgin releases were returned to him, and were then transferred to Mercury Records.
Mercury announced that his Virgin albums would be reissued with bonus content from 2009. In 2009, Mercury released the compilation album The Mike Oldfield Collection 1974–1983, which went to No. 11 in the UK chart. In 2008, Oldfield contributed a new track, "Song for Survival", to the charity album Songs for Survival in support of Survival International. Oldfield's daughter Molly played a large part in the project. In 2010, lyricist Don Black said that he had been working with Oldfield. In 2012, Oldfield was featured on Journey into Space, an album by his brother Terry, and on the track "Islanders" by German producer Torsten Stenzel's York project. In 2013, Oldfield and York released a remix album entitled Tubular Beats. Oldfield performed live at the 2012 Summer Olympics opening ceremony in London. His set included renditions of Tubular Bells, "Far Above the Clouds" and "In Dulci Jubilo" during a segment about the National Health Service, and his performance appears on the officially released soundtrack album Isles of Wonder. Later in 2012, the compilation album Two Sides: The Very Best of Mike Oldfield was released, reaching No. 6 in the UK. In October 2013, the BBC broadcast Tubular Bells: The Mike Oldfield Story, a documentary on Oldfield's life and career. Oldfield's latest rock-themed album of songs, Man on the Rocks, was released on 3 March 2014 by Virgin EMI, produced by Steve Lipson. The album marks Oldfield's return to a Virgin-branded label, through the merger of Mercury Records UK and Virgin Records after Universal Music's purchase of EMI. The track "Nuclear" was used for the E3 trailer of Metal Gear Solid V: The Phantom Pain. In 2015, Oldfield told Steve Wright on his BBC radio show that a sequel album to Tubular Bells was in early development, which he aimed to record on analogue equipment. Later in 2015, Oldfield revealed that he had started on a sequel to Ommadawn.
The album, named Return to Ommadawn, was finished in 2016 and released in January 2017. It went to No. 4 in the UK. Oldfield again hinted at a fourth Tubular Bells album when he posted photos of his new equipment, including a new Telecaster guitar.

Musicianship

Although Oldfield considers himself primarily a guitarist, he is also one of popular music's most skilled and diverse multi-instrumentalists. His 1970s recordings were characterised by a very broad variety of instrumentation, predominantly played by himself, plus assorted guitar sound treatments to suggest other instrumental timbres (such as the bagpipe, mandolin, "Glorfindel" and varispeed guitars on the original Tubular Bells). During the 1980s, Oldfield became expert in the use of digital synthesizers and sequencers (notably the Fairlight CMI), which began to dominate the sound of his recordings; from the late 1990s onwards, he became a keen user of software synthesizers. He has, however, regularly returned to projects emphasising detailed, manually played and part-acoustic instrumentation (such as 1990's Amarok, 1996's Voyager and 1999's Guitars).
Oldfield has played over forty distinct instruments on record, including:

a wide variety of electric and acoustic six-string guitars and bass guitars (plus electric sitar and guitar synthesizer), played in a variety of styles including folk, rock, pop and flamenco, and taking in techniques such as bowing
other fretted instruments (banjo, mandolin, bouzouki, ukulele, Chapman Stick)
keyboards (piano, assorted electric/electronic organs and synthesizers, spinet)
electronic instruments (Fairlight CMI plus other digital samplers and sequencers; assorted drum programs, vocoder, software synthesizers)
wind instruments (flageolet, recorder, penny and bass whistles, Northumbrian bagpipes)
free-reed instruments (accordion, melodica)
string instruments (violin, harp, psaltery)
unpitched percussion (including bodhrán, African drums, timpani, rhythm sticks, tambourine, shaker, cabasa)
tuned percussion (tubular bells, glockenspiel, marimba, gong, sleigh bells, bell tree, Rototoms, Simmons electronic drums, triangle)
plucked idiophones (kalimba, jaw harp)
occasional found instruments (such as nutcrackers)

While generally preferring the sound of guest vocalists, Oldfield has frequently sung both lead and backup parts for his songs and compositions. He has also contributed experimental vocal effects such as fake choirs and the notorious "Piltdown Man" impression on Tubular Bells. Although recognised as a highly skilled guitarist, Oldfield is self-deprecating about his other instrumental skills, describing them as having been developed out of necessity to perform and record the music he composes. He has been particularly dismissive of his violin-playing and singing abilities.

Guitars

Over the years, Oldfield has used a range of guitars. Among the more notable of these are: 1963
Fender Stratocaster Serial no. L08044, in salmon pink (fiesta red). Used by Oldfield from 1984 (the Discovery album) until 2006 (Night of the Proms, rehearsals in Antwerp). Subsequently sold for £30,000 at Chandler Guitars.

1989 PRS Artist Custom 24 In amber, used by Oldfield from the late 1980s to the present day.

1966 Fender Telecaster Serial no. 180728, in blonde. Previously owned by Marc Bolan, this was the only electric guitar used on Tubular Bells. The guitar was unsold at auction by Bonhams in 2007, 2008 and 2009 at estimated values of, respectively, £25,000–35,000, £10,000–15,000 and £8,000–12,000; Oldfield has since sold it and donated the £6,500 received to the charity SANE.

Various Gibson Les Paul, Zemaitis and SG guitars Used extensively by Oldfield in the 1970s and 80s. The most notable Gibson guitar Oldfield favoured in this period was a 1962 Les Paul/SG Junior model, which was his primary guitar for the recording of Ommadawn, among other works. Oldfield is also known to have owned and used an L6-S during that model's production run in the mid-1970s. On occasion, Oldfield was also seen playing a black Les Paul Custom, an early reissue model built around 1968.

Oldfield used a modified Roland GP8 effects processor in conjunction with his PRS Artist to get many of his heavily overdriven guitar sounds from the Earth Moving album onwards.
Oldfield has also been using guitar synthesizers since the mid-1980s, using a 1980s Roland GR-300/G-808 type system, then a 1990s Roland GK2-equipped red PRS Custom 24 (sold in 2006) with a Roland VG8, and most recently a Line 6 Variax. Oldfield has an unusual playing style, using fingers and long right-hand fingernails and different ways of creating vibrato: a "very fast side-to-side vibrato" and "violinist's vibrato". Oldfield has stated that his playing style originates from his musical roots playing folk music and the bass guitar.

Keyboards

Over the years, Oldfield has owned and used a vast number of synthesizers and other keyboard instruments. In the 1980s, he composed the score for the film The Killing Fields on a Fairlight CMI. Some examples of keyboard and synthesised instruments which Oldfield has made use of include Sequential Circuits Prophet-5s (notably on Platinum and The Killing Fields), Roland JV-1080/JV-2080 units (1990s), a Korg M1 (as seen in the "Innocent" video), a Clavia Nord Lead and Steinway pianos. In recent years, he has also made use of software synthesis products, such as Native Instruments.

Lead vocalists

Oldfield has occasionally sung himself on his records and live performances, sometimes processing his voice with a vocoder. It is not unusual for him to collaborate with diverse singers and to hold auditions before deciding on the most appropriate one for a particular song or album. Featured lead vocalists who have collaborated with him include Amar, Jon Anderson, Kevin Ayers, Max Bacon, Rosa Cedrón, Roger Chapman, Pepsi Demacque, Cara Dillon, Anita Hegerland, Sally Oldfield, Barry Palmer, Maddy Prior, Maggie Reilly, Luke Spiller, Chris Thompson and Bonnie Tyler.

Recording

Oldfield has self-recorded and produced many of his albums, and played the majority of the featured instruments, largely at his home studios. In the 1990s and 2000s he mainly used DAWs such as Apple Logic, Avid Pro Tools and Steinberg Nuendo as recording suites.
For composing orchestral music, Oldfield has been quoted as using the software notation program Sibelius running on Apple Macintoshes. He also used the FL Studio DAW on his 2005 double album Light + Shade. Among the mixing consoles Oldfield has owned are an AMS Neve Capricorn 33238, a Harrison Series X, and a Euphonix System 5-MC.

Personal life

Family

Oldfield has been married four times and has seven children. In 1978, he married Diana Fuller, a relative of the Exegesis group leader; the marriage lasted three months. Oldfield recalled that he phoned Branson the day after the ceremony and said he had made a mistake. From 1979 to 1986, Oldfield was married to Sally Cooper, whom he met through Virgin. They had three children: daughter Molly and sons Dougal (1981–2015) and Luke. Shortly before Luke's birth in 1986, the relationship broke down and they amicably split. By this time, Oldfield had entered a relationship with Norwegian singer Anita Hegerland, which lasted until 1991. The pair had met backstage at one of Oldfield's gigs while touring Germany in 1984. They lived in Switzerland, France, and England, and have two children: Greta and Noah. In the late 1990s, Oldfield posted in a lonely hearts column in a local Ibiza newspaper. It was answered by Amy Lauer and the pair dated, but the relationship was troubled by Oldfield's bouts of alcohol and substance abuse and ended after two months. In 2001, Oldfield began counselling and psychotherapy. Between 2002 and 2013, Oldfield was married to Fanny Vandekerckhove, whom he had met while living in Ibiza. They have two sons, Jake and Eugene.

Other

Oldfield and his siblings were raised Catholic, their mother's faith. He used drugs in his early life, including LSD, which he claimed affected his mental health. In the
value v and a list of trees (its children). This definition is more compact, but somewhat messier: a tree consists of a pair of one type and a list of another, which require disentangling to prove results about. In Standard ML, the tree and forest datatypes can be mutually recursively defined as follows, allowing empty trees:

datatype 'a tree = Empty | Node of 'a * 'a forest
and 'a forest = Nil | Cons of 'a tree * 'a forest

Computer functions

Just as algorithms on recursive datatypes can naturally be given by recursive functions, algorithms on mutually recursive data structures can be naturally given by mutually recursive functions. Common examples include algorithms on trees, and recursive descent parsers. As with direct recursion, tail call optimization is necessary if the recursion depth is large or unbounded, such as using mutual recursion for multitasking. Note that tail call optimization in general (when the function called is not the same as the original function, as in tail-recursive calls) may be more difficult to implement than the special case of tail-recursive call optimization, and thus efficient implementation of mutual tail recursion may be absent from languages that only optimize tail-recursive calls. In languages such as Pascal that require declaration before use, mutually recursive functions require forward declaration, as a forward reference cannot be avoided when defining them. As with directly recursive functions, a wrapper function may be useful, with the mutually recursive functions defined as nested functions within its scope if this is supported. This is particularly useful for sharing state across a set of functions without having to pass parameters between them.

Basic examples

A standard example of mutual recursion, which is admittedly artificial, determines whether a non-negative number is even or odd by defining two separate functions that call each other, decrementing by 1 each time.
In C:

bool is_even(unsigned int n) {
    if (n == 0)
        return true;
    else
        return is_odd(n - 1);
}

bool is_odd(unsigned int n) {
    if (n == 0)
        return false;
    else
        return is_even(n - 1);
}

These functions are based on the observation that the question is 4 even? is equivalent to is 3 odd?, which is in turn equivalent to is 2 even?, and so on down to 0. This example is mutual single recursion, and could easily be replaced by iteration. In this example, the mutually recursive calls are tail calls, and tail call optimization would be necessary to execute in constant stack space. In C, this would take O(n) stack space, unless rewritten to use jumps instead of calls. This could be reduced to a single recursive function is_even. In that case, is_odd, which could be inlined, would call is_even, but is_even would only call itself.
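The reduction just described can be sketched in Python (an illustrative translation of the C example, not part of the original text): inlining is_odd(n - 1) into is_even yields a function that steps down by 2 and only calls itself.

```python
def is_even(n: int) -> bool:
    # After inlining is_odd(n - 1), is_even needs two base cases
    # and recurses only on itself, two steps at a time.
    if n == 0:
        return True
    if n == 1:
        return False
    return is_even(n - 2)

def is_odd(n: int) -> bool:
    # is_odd becomes a thin, non-recursive wrapper that could
    # itself be inlined at call sites.
    return not is_even(n)
```

As in the C version, the recursive call is still a tail call, so without tail call optimization (which CPython does not perform) this takes O(n) stack space; an iterative n % 2 check avoids the recursion entirely.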
As a more general class of examples, an algorithm on a tree can be decomposed into its behavior on a value and its behavior on children, and can be split up into two mutually recursive functions, one specifying the behavior on a tree, calling the forest function for the forest of children, and one specifying the behavior on a forest, calling the tree function for the tree in the forest. In Python:

def f_tree(tree) -> None:
    f_value(tree.value)
    f_forest(tree.children)

def f_forest(forest) -> None:
    for tree in forest:
        f_tree(tree)

In this case the tree function calls the forest function by single recursion, but the forest function calls the tree function by multiple recursion. Using the Standard ML datatype above, the size of a tree (number of nodes) can be computed via the following mutually recursive functions:

fun size_tree Empty = 0
  | size_tree (Node (_, f)) = 1 + size_forest f
and size_forest Nil = 0
  | size_forest (Cons (t, f')) = size_tree t + size_forest f'

A more detailed example in Scheme, counting the leaves of a tree:

(define (count-leaves tree)
  (if (leaf? tree)
      1
      (count-leaves-in-forest (children tree))))

(define (count-leaves-in-forest forest)
  (if (null? forest)
      0
      (+ (count-leaves (car forest))
         (count-leaves-in-forest (cdr forest)))))

These examples reduce easily to a single recursive function by inlining the forest function in the tree function, which is commonly done in practice: directly recursive functions that operate on trees sequentially process the value of the node and recurse on the children within one function, rather than dividing these into two separate functions.

Advanced examples

A more complicated example is given by recursive descent parsers, which can be naturally implemented by having one function for each production rule
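A parser of this kind can be sketched in Python; the arithmetic grammar below is a hypothetical example, not from the original text, with one function per production rule. parse_expr and parse_factor are mutually recursive, since a parenthesised factor contains a full expression, and the enclosing parse function illustrates the wrapper-function pattern described earlier: the nested functions share the token list and position without passing them as parameters.

```python
import re

# Toy grammar (hypothetical):
#   expr   -> term (('+' | '-') term)*
#   term   -> factor (('*' | '/') factor)*
#   factor -> NUMBER | '(' expr ')'

def parse(source: str) -> float:
    tokens = re.findall(r"\d+|[+\-*/()]", source)
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def next_token():
        nonlocal pos
        tok = tokens[pos]
        pos += 1
        return tok

    def parse_expr():
        value = parse_term()
        while peek() in ("+", "-"):
            op = next_token()
            rhs = parse_term()
            value = value + rhs if op == "+" else value - rhs
        return value

    def parse_term():
        value = parse_factor()
        while peek() in ("*", "/"):
            op = next_token()
            rhs = parse_factor()
            value = value * rhs if op == "*" else value / rhs
        return value

    def parse_factor():
        if peek() == "(":
            next_token()            # consume '('
            value = parse_expr()    # mutual recursion back to the top rule
            next_token()            # consume ')'
            return value
        return int(next_token())

    return parse_expr()
```

The call chain parse_expr → parse_term → parse_factor → parse_expr is indirect recursion of depth proportional to the nesting of parentheses, so, as with the tree examples, the recursion is bounded by the structure of the input.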
Foo(int bar);
void Foo(int bar, int baz);
void Foo(int bar, int baz, int qux);

Python

Spam, ham, and eggs are the principal metasyntactic variables used in the Python programming language. This is a reference to the famous comedy sketch "Spam" by Monty Python, the eponym of the language. In the following example, spam, ham, and eggs are metasyntactic variables and lines beginning with # are comments.

# Define a function named spam
def spam():
    # Define the variable ham
    ham = "Hello World!"
    # Define the variable eggs
    eggs = 1
    return

IETF Requests for Comments

Both the IETF RFCs and computer programming languages are rendered in plain text, making it necessary to distinguish metasyntactic variables by a naming convention, since it would not be obvious from context. Here is an example from the official IETF document explaining the e-mail protocols (from RFC 772, cited in RFC 3092):

All is well; now the recipients can be specified.

S: MRCP TO:<Foo@Y> <CRLF>
R: 200 OK
S: MRCP TO:<Raboof@Y> <CRLF>
R: 553 No such user here
S: MRCP TO:<bar@Y> <CRLF>
R: 200 OK
S: MRCP TO:<@Y,@X,fubar@Z> <CRLF>
R: 200 OK

Note that the failure of "Raboof" has no effect on the storage of mail for "Foo", "bar" or the mail to be forwarded to "fubar@Z" through host "X". (The documentation for texinfo emphasizes the distinction between metavariables and mere variables used in a programming language being documented in some texinfo file as: "Use the @var command to indicate metasyntactic variables. A metasyntactic variable is something that stands for another piece of text. For example, you should use a metasyntactic variable in the documentation of a function to describe the arguments that are passed to that function. Do not use @var for the names of particular variables in programming languages.
These are specific names from a program, so @code is correct for them.") Another point reflected in the above example is the convention that a metavariable is to be uniformly substituted with the same instance in all its appearances in a given schema. This is in contrast with nonterminal symbols in formal grammars, where the nonterminals on the right of a production can be substituted by different instances.

Example data

SQL

It is common to use the name ACME in example SQL databases and as a placeholder company name for the purpose of teaching. The term "ACME database" is commonly used to mean a training or example-only set of database data used solely for training or testing. ACME is also commonly used in documentation which shows SQL usage examples, a common practice in many educational texts as well as technical documentation from companies such as Microsoft and Oracle.

See also

Metavariable (logic)
xyzzy
Alice and Bob
A series of advertisements for Maxell audio cassette tapes, produced by Howell Henry Chaldecott Lury, shown in 1989 and 1990, featured misheard versions of "Israelites" (e.g., "Me ears are alight") by Desmond Dekker and "Into the Valley" by The Skids as heard by users of other brands of tape. A 1987 series of advertisements for Kellogg's Nut 'n Honey Crunch featured a joke in which one person asks "What's for breakfast?" and is told "Nut 'N' Honey", which is misheard as "Nothing, honey". Other notable examples The traditional game Chinese whispers ("Telephone" or "Gossip" in North America) involves mishearing a whispered sentence to produce successive mondegreens that gradually distort the original sentence as it is repeated by successive listeners. Among schoolchildren in the US, daily rote recitation of the Pledge of Allegiance has long provided opportunities for the genesis of mondegreens. Speech-to-text functionality in modern smartphone messaging apps and search or assist functions may be hampered by faulty speech recognition. It has been noted that in text messaging, users often leave uncorrected mondegreens as a joke or puzzle for the recipient to solve. This wealth of mondegreens has proven to be a fertile ground for study by speech scientists and psychologists. Reverse mondegreen A reverse mondegreen is the intentional production, in speech or writing, of words or phrases that seem to be gibberish but disguise meaning. A prominent example is Mairzy Doats, a 1943 novelty song by Milton Drake, Al Hoffman, and Jerry Livingston. The lyrics are a reverse mondegreen, made up of same-sounding words or phrases (sometimes also referred to as "oronyms"), so pronounced (and written) as to challenge the listener (or reader) to interpret them: Mairzy doats and dozy doats and liddle lamzy divey A kiddley divey too, wouldn't you? 
The clue to the meaning is contained in the bridge of the song: If the words sound queer and funny to your ear, a little bit jumbled and jivey, Sing "Mares eat oats and does eat oats and little lambs eat ivy." This makes it clear that the last line is "A kid'll eat ivy, too; wouldn't you?" Deliberate mondegreen Two authors have written books of supposed foreign-language poetry that are actually mondegreens of nursery rhymes in English. Luis van Rooten's pseudo-French Mots D'Heures: Gousses, Rames includes critical, historical, and interpretive apparatus, as does John Hulme's Mörder Guss Reims, attributed to a fictitious German poet. Both titles sound like the phrase "Mother Goose Rhymes". Both works can also be considered soramimi, which produces different meanings when interpreted in another language. The genre of animutation is based on deliberate mondegreen. Wolfgang Amadeus Mozart produced a similar effect in his canon "Difficile Lectu" (Difficult to Read), which, though ostensibly in Latin, is actually an opportunity for scatological humor in both German and Italian. Some performers and writers have used deliberate mondegreens to create double entendres. The phrase "if you see Kay" (F-U-C-K) has been employed many times, notably as a line from James Joyce's 1922 novel Ulysses. "Mondegreen" is a song by Yeasayer on their 2010 album, Odd Blood. The lyrics are intentionally obscure (for instance, "Everybody sugar in my bed" and "Perhaps the pollen in the air turns us into a stapler") and spoken hastily to encourage the mondegreen effect. Anguish Languish is an ersatz language created by Howard L. Chace. A play on the words "English Language," it is based on homophonic transformations of English words and consists entirely of deliberate mondegreens that seem nonsensical in print but are readily understood when spoken aloud. 
A notable example is the story "Ladle Rat Rotten Hut" ("Little Red Riding Hood"), which appears in his collection of stories and poems, Anguish Languish (Prentice-Hall, 1956).

Related linguistic phenomena

Closely related categories are Hobson-Jobson, where a word from a foreign language is homophonically translated into one's own language, e.g. "cockroach" from Spanish cucaracha, and soramimi, a Japanese term for deliberate homophonic misinterpretation of words for humor. An unintentionally incorrect use of similar-sounding words or phrases, resulting in a changed meaning, is a malapropism. If there is a connection in meaning, it may be called an eggcorn. If a person stubbornly continues to mispronounce a word or phrase after being corrected, that person has committed a mumpsimus.

Earworm
Eggcorn
Holorime
Homophonic translation
Hypercorrection
Phono-semantic matching
Spoonerism
Syntactic ambiguity

Non-English languages

Croatian

Queen's song "Another One Bites the Dust" has a long-standing history as a mondegreen in Croatian, misheard as Radovan baca daske, which means "Radovan (personal name) throws planks". This might also be a soramimi.

Dutch

In Dutch, mondegreens are popularly referred to as Mama appelsap ("Mommy applejuice"), from the Michael Jackson song Wanna Be Startin' Somethin', which features the lyrics Mama-se mama-sa ma-ma-coo-sa and was once misheard as Mama say mama sa mam[a]appelsap. The Dutch radio station 3FM had a show Superrradio (originally Timur Open Radio) run by Timur Perlin and Ramon with an item in which listeners were encouraged to send in mondegreens under the name "Mama appelsap". The segment was popular for years.

French

In French, the phenomenon is also known as hallucination auditive, especially when referring to pop songs. The title of the film La Vie en rose ("Life in pink") depicting the life of Édith Piaf can be mistaken for L'Avion rose ("The pink airplane").
The title of the 1983 French novel Le Thé au harem d'Archi Ahmed ("Tea in the Harem of Archi Ahmed") by Mehdi Charef (and the 1985 movie of the same name) is based on the main character mishearing le théorème d'Archimède ("the theorem of Archimedes") in his mathematics class. A classic example in French is similar to the "Lady Mondegreen" anecdote: in his 1962 collection of children's quotes La Foire aux cancres, the humorist Jean-Charles refers to a misunderstood lyric of "La Marseillaise" (the French national anthem): Entendez-vous ... mugir ces féroces soldats ("Do you hear those savage soldiers roar?") is misheard as ...Séféro, ce soldat ("that soldier Séféro").

German

Mondegreens are a well-known phenomenon in German, especially where non-German songs are concerned. They are sometimes called, after a well-known example, Agathe Bauer songs ("I got the power", a song by Snap!, misinterpreted as a German female name). Journalist Axel Hacke published a series of books about them, beginning with Der weiße Neger Wumbaba ("The White Negro Wumbaba", a mishearing of the line der weiße Nebel wunderbar from "Der Mond ist aufgegangen"). In urban legend, children's paintings of nativity scenes occasionally include, next to the Child, Mary, Joseph, and so on, an additional laughing creature known as the Owi. The reason is to be found in the line Gottes Sohn! O wie lacht / Lieb' aus Deinem göttlichen Mund ("God's Son! Oh, how love laughs out of Thy divine mouth!") from the song "Silent Night". The subject is Lieb, a poetic contraction of die Liebe, leaving off the final -e and the definite article, so that the phrase might be misunderstood as being about a person named Owi laughing "in a loveable manner". Owi lacht has been used as the title of at least one book about Christmas and Christmas songs.
Hebrew

Ghil'ad Zuckermann mentions the example mukhrakhím liyót saméakh (which means "we must be happy", with a grammatical error) as a mondegreen of the original úru 'akhím belév saméakh (which means "wake up, brothers, with a happy heart"). Although this line is taken from the extremely well-known song "Háva Nagíla" ("Let's be happy"), given the high-register Hebrew of úru ("wake up!"), Israelis often mishear it. An Israeli site dedicated to Hebrew mondegreens has coined the term avatiach (Hebrew for "watermelon") for "mondegreen", named for a common mishearing of Shlomo Artzi's award-winning 1970 song "Ahavtia" ("I loved her",
lawn there arose such a clatter" from A Visit from St. Nicholas as "Out on the lawn, there's a Rose Suchak ladder".

In television

Mondegreens have been used in many television advertising campaigns, including:

An advertisement for the 2012 Volkswagen Passat touting the car's audio system shows a number of people singing incorrect versions of the line "Burning out his fuse up here alone" from the Elton John/Bernie Taupin song "Rocket Man", until a woman listening to the song in a Passat realizes the correct words.

A 2002 advertisement for T-Mobile shows spokeswoman Catherine Zeta-Jones helping to correct a man who has misunderstood the chorus of Def Leppard's "Pour Some Sugar On Me" as "pour some shook up ramen".
offset */
algorithm parallelMergesort(A, lo, hi, B, off) is
    len := hi - lo + 1
    if len == 1 then
        B[off] := A[lo]
    else let T[1..len] be a new array
        mid := ⌊(lo + hi) / 2⌋
        mid' := mid - lo + 1
        fork parallelMergesort(A, lo, mid, T, 1)
        parallelMergesort(A, mid + 1, hi, T, mid' + 1)
        join
        parallelMerge(T, 1, mid', mid' + 1, len, B, off)

In order to analyze a recurrence relation for the worst case span, the recursive calls of parallelMergesort have to be incorporated only once due to their parallel execution, obtaining

T∞^sort(n) = T∞^sort(n/2) + T∞^merge(n) = T∞^sort(n/2) + Θ((log n)²).

For detailed information about the complexity of the parallel merge procedure, see Merge algorithm. The solution of this recurrence is given by

T∞^sort(n) = Θ((log n)³).

This parallel merge algorithm reaches a parallelism of Θ(n / (log n)²), which is much higher than the parallelism of the previous algorithm. Such a sort can perform well in practice when combined with a fast stable sequential sort, such as insertion sort, and a fast sequential merge as a base case for merging small arrays.

Parallel multiway merge sort

It seems arbitrary to restrict the merge sort algorithms to a binary merge method, since there are usually p > 2 processors available. A better approach may be to use a K-way merge method, a generalization of binary merge, in which k sorted sequences are merged. This merge variant is well suited to describe a sorting algorithm on a PRAM.

Basic idea

Given an unsorted sequence of n elements, the goal is to sort the sequence with p available processors. These elements are distributed equally among all processors and sorted locally using a sequential sorting algorithm. Hence, the sequence consists of p sorted sequences S_1, ..., S_p of length n/p. For simplification let n be a multiple of p, so that |S_i| = n/p for i = 1, ..., p. These sequences will be used to perform a multisequence selection/splitter selection. For i = 1, ..., p−1, the algorithm determines splitter elements v_i with global rank k = i · n/p. Then the corresponding positions of v_1, ..., v_{p−1} in each sequence S_i are determined with binary search, and thus the S_i are further partitioned into p subsequences S_{i,1}, ..., S_{i,p}.
Furthermore, the elements of S_{1,i}, ..., S_{p,i} are assigned to processor i, that is, all elements between rank (i−1) · n/p and rank i · n/p, which are distributed over all S_j. Thus, each processor receives a sequence of p sorted sequences. The fact that the rank k of the splitter elements was chosen globally provides two important properties: on the one hand, k was chosen so that each processor can still operate on n/p elements after assignment, so the algorithm is perfectly load-balanced; on the other hand, all elements on processor i are less than or equal to all elements on processor i+1. Hence, each processor performs the p-way merge locally and thus obtains a sorted sequence from its sub-sequences. Because of the second property, no further p-way merge has to be performed; the results only have to be put together in the order of the processor number.

Multi-sequence selection

In its simplest form, given p sorted sequences S_1, ..., S_p distributed evenly on p processors and a rank k, the task is to find an element x with a global rank k in the union of the sequences. Hence, this can be used to divide each S_i in two parts at a splitter index l_i, where the lower part contains only elements which are smaller than x, while the elements bigger than x are located in the upper part. The presented sequential algorithm returns the indices of the splits in each sequence, e.g. the indices l_i in sequences S_i such that S_i[l_i] has a global rank less than k and rank(S_i[l_i + 1]) ≥ k.

algorithm msSelect(S : Array of sorted Sequences [S_1,..,S_p], k : int) is
    for i = 1 to p do
        (l_i, r_i) = (0, |S_i|-1)
    while there exists i: l_i < r_i do
        // pick pivot element in S_j[l_j], .., S_j[r_j], choose random j uniformly
        v := pickPivot(S, l, r)
        for i = 1 to p do
            m_i = binarySearch(v, S_i[l_i, r_i]) // sequentially
        if m_1 + ... + m_p >= k then // m_1 + ... + m_p is the global rank of v
            r := m  // vector assignment
        else
            l := m
    return l

For the complexity analysis the PRAM model is chosen. If the data is evenly distributed over all p, the p-fold execution of the binarySearch method has a running time of O(p log(n/p)).
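As an illustration of what multisequence selection computes, here is a simplified Python sketch that finds the split indices for a given global rank. Unlike the pivot-based msSelect above, it materializes the union of the sequences, and it assumes all elements are distinct:

```python
import bisect
import heapq

def ms_select(seqs, k):
    """Return split indices l_i such that exactly k elements lie below
    the splits across all sorted sequences.

    Illustrative sketch only: it materializes the global sorted order,
    which the pivot-based algorithm avoids, and assumes distinct keys.
    """
    union = list(heapq.merge(*seqs))  # global sorted order of all elements
    v = union[k]                      # the element of global rank k (0-based)
    # Position of v in each sorted sequence = size of that sequence's lower part.
    return [bisect.bisect_left(s, v) for s in seqs]
```

For example, with sequences [1, 4, 7], [2, 5, 8], [3, 6, 9] and k = 4, the splits are [2, 1, 1], which sum to the requested rank.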
The expected recursion depth is O(log n), as in the ordinary Quickselect. Thus the overall expected running time is O(p log(n/p) log n).

Applied on the parallel multiway merge sort, this algorithm has to be invoked in parallel such that all splitter elements of rank i · n/p for i = 1, ..., p−1 are found simultaneously. These splitter elements can then be used to partition each sequence in p parts, with the same total running time of O(p log(n/p) log n).

Pseudocode

Below, the complete pseudocode of the parallel multiway merge sort algorithm is given. We assume that there is a barrier synchronization before and after the multisequence selection such that every processor can determine the splitting elements and the sequence partition properly.

/**
 * d: Unsorted Array of Elements
 * n: Number of Elements
 * p: Number of Processors
 * return Sorted Array
 */
algorithm parallelMultiwayMergesort(d : Array, n : int, p : int) is
    o := new Array[0, n]                          // the output array
    for i = 1 to p do in parallel                 // each processor in parallel
        S_i := d[(i-1) * n/p, i * n/p]            // sequence of length n/p
        sort(S_i)                                 // sort locally
        synch
        v_i := msSelect([S_1,...,S_p], i * n/p)   // element with global rank i * n/p
        synch
        (S_i,1, ..., S_i,p) := sequence_partitioning(S_i, v_1, ..., v_p) // split S_i into subsequences
        o[(i-1) * n/p, i * n/p] := kWayMerge(S_1,i, ..., S_p,i) // merge and assign to output array
    return o

Analysis

Firstly, each processor sorts the assigned n/p elements locally using a sorting algorithm with complexity O((n/p) log(n/p)). After that, the splitter elements have to be calculated in time O(p log(n/p) log n). Finally, each group of p splits has to be merged in parallel by each processor with a running time of O((n/p) log p) using a sequential p-way merge algorithm. Thus, the overall running time is given by O((n/p) log(n/p) + p log(n/p) log n + (n/p) log p).

Practical adaption and application

The multiway merge sort algorithm is very scalable through its high parallelization capability, which allows the use of many processors. This makes the algorithm a viable candidate for sorting large amounts of data, such as those processed in computer clusters.
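To make the data flow concrete, the whole multiway scheme can be simulated sequentially in Python. The p "processors" are plain lists, splitter selection materializes the union (which the parallel algorithm avoids), and perfect load balance assumes distinct keys:

```python
import bisect
import heapq

def multiway_mergesort(d, p):
    """Sequential simulation of parallel multiway merge sort (a sketch)."""
    n = len(d)
    # Step 1: each "processor" sorts its slice locally.
    chunks = [sorted(d[i * n // p:(i + 1) * n // p]) for i in range(p)]
    # Step 2: splitters of global rank i * n/p, for i = 1 .. p-1
    # (found here via the materialized union, unlike the real algorithm).
    union = list(heapq.merge(*chunks))
    splitters = [union[i * n // p] for i in range(1, p)]
    out = []
    for i in range(p):
        lo = splitters[i - 1] if i > 0 else None
        hi = splitters[i] if i < p - 1 else None
        # Step 3: partition every chunk at the splitters by binary search.
        parts = []
        for c in chunks:
            lo_idx = bisect.bisect_left(c, lo) if lo is not None else 0
            hi_idx = bisect.bisect_left(c, hi) if hi is not None else len(c)
            parts.append(c[lo_idx:hi_idx])
        # Step 4: "processor" i performs the p-way merge of its parts.
        out.extend(heapq.merge(*parts))
    return out
```

Because every element on "processor" i is at most every element on "processor" i+1, simple concatenation of the locally merged pieces yields the sorted result.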
Also, since in such systems memory is usually not a limiting resource, the disadvantage of the space complexity of merge sort is negligible. However, other factors become important in such systems which are not taken into account when modelling on a PRAM. Here, the following aspects need to be considered: the memory hierarchy, when the data does not fit into the processor's cache, and the communication overhead of exchanging data between processors, which could become a bottleneck when the data can no longer be accessed via the shared memory.

Sanders et al. have presented in their paper a bulk synchronous parallel algorithm for multilevel multiway mergesort, which divides processors into groups of size . All processors sort locally first. Unlike single-level multiway mergesort, these sequences are then partitioned into parts and assigned to the appropriate processor groups. These steps are repeated recursively in those groups. This reduces communication and especially avoids problems with many small messages. The hierarchical structure of the underlying real network can be used to define the processor groups (e.g. racks, clusters, ...).

Further variants

Merge sort was one of the first sorting algorithms where optimal speedup was achieved, with Richard Cole using a clever subsampling algorithm to ensure O(1) merge. Other sophisticated parallel sorting algorithms can achieve the same or better time bounds with a lower constant. For example, in 1991 David Powers described a parallelized quicksort (and a related radix sort) that can operate in O(log n) time on a CRCW parallel random-access machine (PRAM) with n processors by performing partitioning implicitly. Powers further shows that a pipelined version of Batcher's Bitonic Mergesort at O((log n)²) time on a butterfly sorting network is in practice actually faster than his O(log n) sorts on a PRAM, and he provides detailed discussion of the hidden overheads in comparison, radix and parallel sorting.
Comparison with other sort algorithms

Although heapsort has the same
input is already sorted (i.e., is one run), so the natural merge sort need only make one pass through the data. In many practical cases, long natural runs are present, and for that reason natural merge sort is exploited as the key component of Timsort.

Example:

Start       : 3 4 2 1 7 5 8 9 0 6
Select runs : (3 4)(2)(1 7)(5 8 9)(0 6)
Merge       : (2 3 4)(1 5 7 8 9)(0 6)
Merge       : (1 2 3 4 5 7 8 9)(0 6)
Merge       : (0 1 2 3 4 5 6 7 8 9)

Formally, the natural merge sort is said to be Runs-optimal, where Runs(L) is the number of runs in L, minus one. Tournament replacement selection sorts are used to gather the initial runs for external sorting algorithms.

Analysis

In sorting n objects, merge sort has an average and worst-case performance of O(n log n). If the running time of merge sort for a list of length n is T(n), then the recurrence relation T(n) = 2T(n/2) + n follows from the definition of the algorithm (apply the algorithm to two lists of half the size of the original list, and add the n steps taken to merge the resulting two lists). The closed form follows from the master theorem for divide-and-conquer recurrences. The number of comparisons made by merge sort in the worst case is given by the sorting numbers. These numbers are equal to or slightly smaller than (n ⌈lg n⌉ − 2^⌈lg n⌉ + 1), which is between (n lg n − n + 1) and (n lg n + n + O(lg n)). Merge sort's best case takes about half as many iterations as its worst case. For large n and a randomly ordered input list, merge sort's expected (average) number of comparisons approaches α·n fewer than the worst case, where α = −1 + Σ_{k=0}^∞ 1/(2^k + 1) ≈ 0.2645. In the worst case, merge sort uses approximately 39% fewer comparisons than quicksort does in its average case, and in terms of moves, merge sort's worst-case complexity is O(n log n), the same complexity as quicksort's best case.
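The run-selection and pairwise-merge passes of the worked example above can be sketched in Python (natural_runs and natural_merge_sort are illustrative names, not a standard API):

```python
import heapq

def natural_runs(a):
    """Split a into its maximal non-decreasing runs (the 'Select runs' step)."""
    runs, start = [], 0
    for i in range(1, len(a) + 1):
        # A run ends at the end of the input or where the order breaks.
        if i == len(a) or a[i] < a[i - 1]:
            runs.append(a[start:i])
            start = i
    return runs

def natural_merge_sort(a):
    """Repeatedly merge adjacent runs until a single run remains."""
    runs = natural_runs(a)
    while len(runs) > 1:
        runs = [list(heapq.merge(*runs[i:i + 2]))
                for i in range(0, len(runs), 2)]
    return runs[0] if runs else []
```

On the example input 3 4 2 1 7 5 8 9 0 6, natural_runs produces exactly the five runs shown above, and three merge passes yield the sorted result.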
Merge sort is more efficient than quicksort for some types of lists if the data to be sorted can only be efficiently accessed sequentially, and is thus popular in languages such as Lisp, where sequentially accessed data structures are very common. Unlike some (efficient) implementations of quicksort, merge sort is a stable sort. Merge sort's most common implementation does not sort in place; therefore, the memory size of the input must be allocated for the sorted output to be stored in (see below for variations that need only n/2 extra spaces).

Variants

Variants of merge sort are primarily concerned with reducing the space complexity and the cost of copying. A simple alternative for reducing the space overhead to n/2 is to maintain left and right as a combined structure, copy only the left part of m into temporary space, and to direct the merge routine to place the merged output into m. With this version it is better to allocate the temporary space outside the merge routine, so that only one allocation is needed. The excessive copying mentioned previously is also mitigated, since the last pair of lines before the return result statement (function merge in the pseudocode above) become superfluous. One drawback of merge sort, when implemented on arrays, is its working memory requirement. Several in-place variants have been suggested: Katajainen et al. present an algorithm that requires a constant amount of working memory: enough storage space to hold one element of the input array, and additional space to hold pointers into the input array. They achieve an O(n log n) time bound with small constants, but their algorithm is not stable. Several attempts have been made at producing an in-place merge algorithm that can be combined with a standard (top-down or bottom-up) merge sort to produce an in-place merge sort.
In this case, the notion of "in-place" can be relaxed to mean "taking logarithmic stack space", because standard merge sort requires that amount of space for its own stack usage. It was shown by Geffert et al. that in-place, stable merging is possible in O(m + n) time using a constant amount of scratch space, but their algorithm is complicated and has high constant factors: merging arrays of length n and m can take 5n + 12m + o(m) moves. This high constant factor and complicated in-place algorithm was made simpler and easier to understand. Bing-Chao Huang and Michael A. Langston presented a straightforward linear-time practical in-place merge algorithm to merge a sorted list using a fixed amount of additional space. Building on the work of Kronrod and others, it merges in linear time and constant extra space. The algorithm takes little more average time than standard merge sort algorithms, which are free to exploit O(n) temporary extra memory cells, by less than a factor of two. Though the algorithm is much faster in a practical way, it is also unstable for some lists. Using similar concepts, they were able to solve this problem. Other in-place algorithms include SymMerge, which takes time in total and is stable. Plugging such an algorithm into merge sort increases its complexity to the non-linearithmic, but still quasilinear, O(n (log n)²). A modern stable, linear, and in-place merging algorithm is block merge sort.

An alternative to reduce the copying into multiple lists is to associate a new field of information with each key (the elements in m are called keys). This field will be used to link the keys and any associated information together in a sorted list (a key and its related information is called a record). Then the merging of the sorted lists proceeds by changing the link values; no records need to be moved at all. A field which contains only a link will generally be smaller than an entire record, so less space will also be used. This is a standard sorting technique, not restricted to merge sort.
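The link-field technique can be sketched in Python with an explicit Node type (names are illustrative): merging rewrites only the link fields, so no record is ever copied or moved.

```python
class Node:
    """A record: a key plus a link field used only for sorting."""
    def __init__(self, key):
        self.key = key
        self.link = None

def link_merge(a, b):
    """Merge two sorted linked lists by rewriting link fields only."""
    head = tail = Node(None)          # dummy head, discarded at the end
    while a and b:
        if a.key <= b.key:            # <= keeps the merge stable
            tail.link, a = a, a.link
        else:
            tail.link, b = b, b.link
        tail = tail.link
    tail.link = a or b                # append whichever list remains
    return head.link

def link_sort(head):
    """Top-down merge sort on a linked list; records never move."""
    if head is None or head.link is None:
        return head
    slow, fast = head, head.link      # slow/fast pointers find the middle
    while fast and fast.link:
        slow, fast = slow.link, fast.link.link
    mid, slow.link = slow.link, None  # split the chain in two
    return link_merge(link_sort(head), link_sort(mid))

def from_list(keys):
    head = None
    for k in reversed(keys):
        node = Node(k)
        node.link = head
        head = node
    return head

def to_list(head):
    out = []
    while head:
        out.append(head.key)
        head = head.link
    return out
```

Since only link fields change, a record of any size is "moved" by updating a single pointer-sized field.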
Use with tape drives

An external merge sort is practical to run using disk or tape drives when the data to be sorted is too large to fit into memory. External sorting explains how merge sort is implemented with disk drives. A typical tape drive sort uses four tape drives. All I/O is sequential (except for rewinds at the end of each pass). A minimal implementation can get by with just two record buffers and a few program variables. Naming the four tape drives as A, B, C, D, with the original data on A, and using only two record buffers, the algorithm is similar to the bottom-up implementation, using pairs of tape drives instead of arrays in memory. The basic algorithm can be described as follows:

Merge pairs of records from A, writing two-record sublists alternately to C and D.
Merge two-record sublists from C and D into four-record sublists, writing these alternately to A and B.
Merge four-record sublists from A and B into eight-record sublists, writing these alternately to C and D.
Repeat until there is one list containing all the data, sorted, in log2(n) passes.

Instead of starting with very short runs, usually a hybrid algorithm is used, where the initial pass reads many records into memory, does an internal sort to create a long run, and then distributes those long runs onto the output set. This step avoids many early passes. For example, an internal sort of 1024 records saves nine passes. Because the benefit is so large, the internal sort is usually made as large as memory allows. In fact, there are techniques that can make the initial runs longer than the available internal memory. One of them, Knuth's "snowplow" (based on a binary min-heap), generates runs twice as long (on average) as the size of the memory used. With some overhead, the above algorithm can be modified to use three tapes. O(n log n) running time can also be achieved using two queues, or a stack and a queue, or three stacks.
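The four-tape procedure above can be simulated in a few lines of Python, using deques as stand-in "tapes" that are read and written only sequentially (all names here are illustrative, and the initial distribution pass stands in for the first merge pass over A):

```python
from collections import deque

def take_run(tape, run):
    """Read up to `run` records sequentially from the front of a tape."""
    return [tape.popleft() for _ in range(min(run, len(tape)))]

def merge_pass(x, y, u, v, run):
    """Merge aligned runs of length `run` from tapes x and y,
    writing the doubled runs alternately to tapes u and v."""
    out, turn = (u, v), 0
    while x or y:
        rx, ry = take_run(x, run), take_run(y, run)
        i = j = 0
        while i < len(rx) and j < len(ry):
            if rx[i] <= ry[j]:
                out[turn].append(rx[i]); i += 1
            else:
                out[turn].append(ry[j]); j += 1
        out[turn].extend(rx[i:]); out[turn].extend(ry[j:])
        turn ^= 1   # next merged run goes to the other output tape

def tape_sort(records):
    n = len(records)
    a, b, c, d = deque(records), deque(), deque(), deque()
    # Initial pass: distribute one-record runs from A alternately to C, D.
    # (A real tape sort would create much longer initial runs in memory.)
    merge_pass(a, b, c, d, 1)
    src, dst, run = (c, d), (a, b), 1
    while len(src[0]) < n:          # stop when one tape holds everything
        merge_pass(src[0], src[1], dst[0], dst[1], run)
        src, dst, run = dst, src, 2 * run
    return list(src[0])

# tape_sort([5, 2, 4, 6, 1, 3]) returns [1, 2, 3, 4, 5, 6]
```

Each pass reads both source tapes and writes both destination tapes strictly front to back, mirroring the sequential-I/O constraint described above.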
In the other direction, using k > 2 tapes (and O(k) items in memory), we can reduce the number of tape operations by a factor of O(log k) by using a k/2-way merge. A more sophisticated merge sort that optimizes tape (and disk) drive usage is the polyphase merge sort.

Optimizing merge sort

On modern computers, locality of reference can be of paramount importance in software optimization, because multilevel memory hierarchies are used. Cache-aware versions of the merge sort algorithm, whose operations have been specifically chosen to minimize the movement of pages in and out of a machine's memory cache, have been proposed. For example, the tiled merge sort algorithm stops partitioning subarrays when subarrays of size S are reached, where S is the number of data items fitting into a CPU's cache. Each of these subarrays is sorted with an in-place sorting algorithm such as insertion sort, to discourage memory swaps, and normal merge sort is then completed in the standard recursive fashion. This algorithm has demonstrated better performance on machines that benefit from cache optimization. An alternative version of merge sort that uses constant additional space has also been suggested, and was later refined. In addition, many applications of external sorting use a form of merge sorting in which the input gets split into a larger number of sublists, ideally a number for which merging them still makes the currently processed set of pages fit into main memory.

Parallel merge sort

Merge sort parallelizes well due to its use of the divide-and-conquer method. Several different parallel variants of the algorithm have been developed over the years. Some parallel merge sort algorithms are strongly related to the sequential top-down merge algorithm, while others have a different general structure and use the K-way merge method.

Merge sort with parallel recursion

The sequential merge sort procedure can be described in two phases, the divide phase and the merge phase.
The first phase consists of many recursive calls that repeatedly perform the same division process until the subsequences are trivially sorted (containing one or no element). An intuitive approach is the parallelization of those recursive calls. The following pseudocode describes the merge sort with parallel recursion, using the fork and join keywords:

// Sort elements lo through hi (exclusive) of array A.
algorithm mergesort(A, lo, hi) is
    if lo+1 < hi then  // Two or more elements.
        mid := ⌊(lo + hi) / 2⌋
        fork mergesort(A, lo, mid)
        mergesort(A, mid, hi)
        join
        merge(A, lo, mid, hi)

This algorithm is the trivial modification of the sequential version and does not parallelize well. Therefore, its speedup is not very impressive. It has a span of Θ(n), which is only an improvement of Θ(log n) compared to the sequential version (see Introduction to Algorithms). This is mainly due to the sequential merge method, as it is the bottleneck of the parallel executions.

Merge sort with parallel merging

Better parallelism can be achieved by using a parallel merge algorithm. Cormen et al. present a binary variant that merges two sorted sub-sequences into one sorted output sequence. In one of the sequences (the longer one if they are of unequal length), the element of the middle index is selected. Its position in the other sequence is determined in such a way that this sequence would remain sorted if this element were inserted at this position. Thus, one knows how many other elements from both sequences are smaller, and the position of the selected element in the output sequence can be calculated. For the partial sequences of the smaller and larger elements created in this way, the merge algorithm is again executed in parallel until the base case of the recursion is reached. The following pseudocode shows the modified parallel merge sort method using the parallel merge algorithm (adapted from Cormen et al.).
/**
 * A: Input array
 * B: Output array
 * lo: lower bound
 * hi: upper bound
 * off: offset
 */
algorithm parallelMergesort(A, lo, hi, B, off) is
    len := hi - lo + 1
    if len == 1 then
        B[off] := A[lo]
    else
        let T[1..len] be a new array
        mid := ⌊(lo + hi) / 2⌋
        mid' := mid - lo + 1
        fork parallelMergesort(A, lo, mid, T, 1)
        parallelMergesort(A, mid + 1, hi, T, mid' + 1)
        join
        parallelMerge(T, 1, mid', mid' + 1, len, B, off)

In order to analyze a recurrence relation for the worst-case span, the recursive calls of parallelMergesort have to be incorporated only once, due to their parallel execution, obtaining

T∞sort(n) = T∞sort(n/2) + T∞merge(n) = T∞sort(n/2) + Θ((log n)²).

For detailed information about the complexity of the parallel merge procedure, see Merge algorithm. The solution of this recurrence is given by

T∞sort(n) = Θ((log n)³).

This parallel merge algorithm reaches a parallelism of Θ(n / (log n)²), which is much higher than the parallelism of the previous algorithm. Such a sort can perform well in practice when combined with a fast stable sequential sort, such as insertion sort, and a fast sequential merge as a base case for merging small arrays.

Parallel multiway merge sort

It seems arbitrary to restrict the merge sort algorithms to a binary merge method, since there are usually p > 2 processors available. A better approach may be to use a K-way merge method, a generalization of binary merge, in which k sorted sequences are merged. This merge variant is well suited to describe a sorting algorithm on a PRAM.

Basic Idea

Given an unsorted sequence of n elements, the goal is to sort the sequence with p available processors. These elements are distributed equally among all processors and sorted locally using a sequential sorting algorithm. Hence, the sequence consists of p sorted sequences S_1, ..., S_p of length n/p. For simplification, let n be a multiple of p, so that |S_i| = n/p for i = 1, ..., p. These sequences will be used to perform a multisequence selection/splitter selection. For j = 1, ..., p, the algorithm determines splitter elements v_j with global rank k = j·n/p.
Then the corresponding positions of v_1, ..., v_p in each sequence S_i are determined with binary search, and thus the S_i are further partitioned into p subsequences S_{i,1}, ..., S_{i,p}, where S_{i,j} contains the elements of S_i with ranks between those of v_{j-1} and v_j. Furthermore, the elements of S_{1,j}, ..., S_{p,j} are assigned to processor j; that is, all elements between rank (j-1)·n/p and rank j·n/p, which are distributed over all S_i. Thus, each processor receives a sequence of p sorted sequences. The fact that the rank k of the splitter elements was chosen globally provides two important properties: on the one hand, k was chosen so that each processor can still operate on n/p elements after assignment, so the algorithm is perfectly load-balanced. On the other hand, all elements on processor j are less than or equal to all elements on processor j+1. Hence, each processor performs the p-way merge locally and thus obtains a sorted sequence from its sub-sequences. Because of the second property, no further p-way merge has to be performed; the results only have to be put together in the order of the processor number.

Multi-sequence selection

In its simplest form, given p sorted sequences S_1, ..., S_p distributed evenly on p processors and a rank k, the task is to find an element x with global rank k in the union of the sequences. Hence, this can be used to divide each S_i in two parts at a splitter index l_i, where the lower part contains only elements which
D. Maule, remained directly involved with factory production until her death in 2009 at the age of 92. Products The aircraft produced by Maule Air are tube-and-fabric designs and are popular with bush pilots, thanks to their very low stall speed, their tundra tires and oleo strut landing gear. Most Maules are built with
age 19. He founded the company Mechanical Products Co. in Napoleon, Michigan to market his own starter design. In 1941 the B.D. Maule Co. was founded, and Maule produced tailwheels and fabric testers. In 1953 he began design work, and started aircraft production with the "Bee-Dee" M-4 in 1957. The company is a family-owned enterprise. Its
as the head of psychiatry for a large Tokyo hospital, Morita began developing his methods while working with sufferers of shinkeishitsu, or anxiety disorders with a hypochondriac base.

Theory and methods

According to Morita, how a person feels is important as a sensation and as an indicator for the present moment, but is uncontrollable: we don't create feelings; feelings happen to us. Since feelings do not cause our behavior, we can coexist with unpleasant feelings while still taking constructive action. The essence of Morita's method may be summarized in three rules: accept all your feelings, know your purpose(s), and do what needs to be done. When once asked what shy people should do, Morita replied, "Sweat." Accept your feelings - Accepting feelings is not ignoring them or avoiding them, but welcoming them; Vietnamese poet and writer Thich Nhat Hanh recommends we say, "Hello Loneliness, how are you today? Come, sit by me and
I will take care of you." Morita's advice: "In feelings, it is best to be wealthy and generous" - that is, have many and let them fly as they wish. Know your purpose - Implicit in Morita's method, and the traditional Buddhist psychological principles which he adapted, is an independence of thought and action, something a little alien to the Western ideal of "following our whims and moods". Morita held that we can no more control our thoughts than we can control the weather, as both are phenomena of most amazingly complex natural systems. And if we have no hope of controlling our emotions, we can hardly be held responsible for them, any more than we can be held responsible for feeling hot or cold.
(Mexico City Metrobús, Line 4), a BRT station in Mexico City
Moctezuma (Mexico City Metrobús, Line 5), a BRT station in Mexico City

United States

Inhabited places

Montezuma, Arizona, an unincorporated community
Montezuma, California, a ghost town
Montezuma Hills, California
Montezuma, Colorado, a Statutory Town
Montezuma County, Colorado
Montezuma, Georgia, a city
Montezuma Township, Pike County, Illinois
Montezuma, Indiana, a town
Montezuma, Iowa, a city
Montezuma Township, Gray County, Kansas
Montezuma, Kansas, a city
Montezuma, New Mexico, an unincorporated community
Montezuma, New York, a town
Montezuma, North Carolina, an unincorporated community
Montezuma, Ohio, a village
Montezuma, Virginia, an unincorporated community

Buildings

Montezuma (Norwood, Virginia), a home on the National Register of Historic Places
Montezuma Castle (hotel), Las Vegas, New Mexico

Natural formations

Montezuma Creek (Utah), a creek in Utah
Montezuma Marsh, Cayuga Lake, New York
Montezuma National Forest, Colorado
Montezuma National Wildlife Refuge, New York
Montezuma Range, Nevada, a
Montezuma Falls, Tasmania, Australia

Music

Montezuma, hero of a 1695 semi-opera The Indian Queen by Henry Purcell
Motezuma, a 1733 opera by Antonio Vivaldi (until recently known under the title Montezuma)
Montezuma (Graun), a 1755 opera by Carl Heinrich Graun
Motezuma, a 1765 opera by Gian Francesco de Majo
Motezuma (Mysliveček), a 1771 opera by Josef Mysliveček
Montezuma, a 1775 opera by Antonio Sacchini
Montezuma, a 1780 opera by Giacomo Insanguine
Montesuma, a 1781 opera by Niccolò Antonio Zingarelli
Montezuma, by Ignaz von Seyfried (1804)
Montezuma, an 1884 opera by Frederick Grant Gleason
Montezuma (Sessions opera), a 1963 opera by Roger Sessions
Montezuma, or La Conquista, a 2005 opera by Lorenzo Ferrero
Montezuma, a 1980 film score by Hans Werner Henze
"Montezuma", a song from the 1994 album Apurimac II by Cusco
"Montezuma", a song from the 2011 album Helplessness Blues by Fleet Foxes

Ships

, three ships of the United States Navy
Montezuma (1804 ship), later Moctezuma of the Chilean navy
, launched 1899, later RFA Abadol and RFA Oakleaf

Other uses

Montezuma (TV programme), a 2009 British documentary
Montezuma (mythology), in the mythology of certain Amerindian tribes of the
Debra Mooney, American actress
Edward Aloysius Mooney, Roman Catholic Cardinal Archbishop of Detroit, former Bishop of Rochester
Edward F. Mooney, noted Kierkegaard scholar and Professor of Religion at Syracuse University
Edward Mooney (footballer)
Francie Mooney, musician; fiddler
Grahame Mooney, RAF Chinook Pilot and Squadron Leader
Hercules Mooney, American Revolutionary War Colonel
James Mooney, anthropologist whose major works were about Native American Indians
Jason Mooney (disambiguation), multiple people
John Mooney (disambiguation), multiple people
Kathi Mooney, American scientist
Kevin Mooney, Irish musician
Kyle Mooney, American comic actor, Saturday Night Live
Malcolm Mooney, original lead singer of rock group Can
Matt Mooney (born 1995), American basketball player
Melvin Mooney, American physicist, developed the Mooney Viscometer and other testing equipment used in the rubber industry
Paschal Mooney, Irish politician
Peter Mooney (conductor), Scottish
to Mooney it meant the descendant of the wealthy one. According to Irish lore, the Mooney family comes from one of the largest and most noble Irish lines. They are said to be descendants of the ancient Irish King Heremon, who, along with his brother Heber, conquered Ireland. Heremon slew his brother shortly after their invasion, took the throne for himself, and fathered a line of kings of Ireland that includes Malachi II and King Niall of the Nine Hostages. Baptismal records, parish records, ancient land grants, the Annals of the Four Masters, and books by O'Hart, McLysaght, and O'Brien were all used in researching the history of the Mooney family name. These varied and often ancient records indicate that distant septs of the name arose in several places throughout Ireland. The best-known and most numerous sept came from the county of Offaly. The members of this sept descended from Chieftain Monach, son of Ailill Mor, Lord of Ulster, who was descended from the Kings of Connacht. These family members gave their name to townlands called Ballymooney both in that county and in the neighbouring county of Leix.

People with the surname

Albert Mooney, aircraft designer and founder of Mooney Airplane Company
Alex X. Mooney, Member of Congress from West Virginia
Bel Mooney, English journalist and broadcaster
Brian Mooney, professional football player
Cameron Mooney, Australian rules footballer
Carol Ann Mooney, President of Saint Mary's College in Notre Dame, Indiana
Charles ("Chuck") W. Mooney Jr., American, the Charles A. Heimbold, Jr. Professor of Law, and former interim Dean, at the University of Pennsylvania Law School
Chris Mooney (basketball) (born 1972), American basketball coach
Darnell Mooney (born 1997), American football player
Dave Mooney, professional football
in 1913 with Johnson posting a career-high 35 victories, as the team once again finished in second place. The Senators then fell into another decline for the next decade. The team had a period of prolonged success in the 1920s and 1930s, led by Walter Johnson, as well as fellow Hall-of-Famers Bucky Harris, Goose Goslin, Sam Rice, Heinie Manush, and Joe Cronin. In particular, a rejuvenated Johnson rebounded in 1924 to win 23 games with the help of his catcher, Muddy Ruel, as the Senators won the American League pennant for the first time in its history. The Senators then faced John McGraw's heavily favored New York Giants in the 1924 World Series. The two teams traded wins back and forth with three games of the first six being decided by one run. In the deciding 7th game, the Senators were trailing the Giants 3–1 in the 8th inning when Bucky Harris hit a routine ground ball to third that hit a pebble and took a bad hop over Giants third baseman Freddie Lindstrom. Two runners scored on the play, tying the score at three. An aging Walter Johnson came in to pitch the ninth inning and held the Giants scoreless into extra innings. In the bottom of the twelfth inning, Ruel hit a high, foul ball directly over home plate. The Giants' catcher, Hank Gowdy, dropped his protective mask to field the ball but, failing to toss the mask aside, stumbled over it and dropped the ball, thus giving Ruel another chance to bat. On the next pitch, Ruel hit a double; he proceeded to score the winning run when Earl McNeely hit a ground ball that took another bad hop over Lindstrom's head. This would mark the only World Series triumph for the franchise during their 60-year tenure in Washington. The following season they repeated as American League champions but ultimately lost the 1925 World Series to the Pittsburgh Pirates. After Walter Johnson retired in 1927, he was hired as manager of the Senators. After enduring a few losing seasons, the team returned to contention in 1930. 
In 1933, Senators owner Griffith returned to the formula that worked for him nine years earlier: 26-year-old shortstop Joe Cronin became player-manager. The Senators posted a 99–53 record and cruised to the pennant seven games ahead of the New York Yankees, but in the 1933 World Series the Giants exacted their revenge, winning in five games. Following the loss, the Senators sank all the way to seventh place in 1934 and attendance began to fall. Despite the return of Harris as manager from 1935 to 1942 and again from 1950 to 1954, Washington was mostly a losing ball club for the next 25 years contending for the pennant only during World War II. Washington came to be known as "first in war, first in peace, and last in the American League"; their hard luck drove the plot of the musical and film Damn Yankees. Cecil Travis, Buddy Myer (1935 A.L. batting champion), Roy Sievers, Mickey Vernon (batting champion in 1946 and 1953), and Eddie Yost were notable Senators players whose careers were spent in obscurity on losing teams. In 1954, the Senators signed future Hall of Fame member Harmon Killebrew. By 1959, he was the Senators’ regular third baseman and led the league with 42 home runs, earning him a starting spot on the American League All-Star team. After Griffith's death in 1955, his nephew and adopted son Calvin took over the team presidency. Calvin sold Griffith Stadium to the city of Washington and leased it back. This led to speculation that the team was planning to move, as the Boston Braves, St. Louis Browns, and Philadelphia Athletics had done in recent years. By 1957, after an early flirtation with San Francisco (where the New York Giants would move after the season), Griffith began courting Minneapolis–St. Paul, a prolonged process that resulted in his rejecting the Twin Cities' first offer before agreeing to move. Home attendance in Washington, D.C., steadily increased from 425,238 in 1955 to 475,288 in 1958, and then jumped to 615,372 in 1959. 
However, part of the Minnesota deal guaranteed a million fans a year for three years, plus the potential to double TV and radio money. The American League opposed the move at first, but in 1960 a deal was reached. Major League Baseball agreed to let Griffith move his team to the Minneapolis-St. Paul region and allowed a new Senators team to be formed in Washington for the 1961 season. Asked nearly two decades later why he moved the team, Griffith replied, "I’ll tell you why we came to Minnesota, it was when I found out you only had 15,000 blacks here. Black people don’t go to ball games, but they’ll fill up a rassling ring and put up such a chant it’ll scare you to death. It’s unbelievable. We came here because you’ve got good, hard-working, white people here." Minnesota Twins: 1961–present Renamed the Minnesota Twins, the team set up shop in Metropolitan Stadium. Success came quickly to the team in Minnesota. Sluggers Harmon Killebrew and Bob Allison, who had been stars in Washington, were joined by Tony Oliva and Zoilo Versalles, and later second baseman Rod Carew and pitchers Jim Kaat and Jim Perry, winning the American League pennant in 1965. A second wave of success came in the late 1980s and early 1990s under manager Tom Kelly, led by Kent Hrbek, Bert Blyleven, Frank Viola, and Kirby Puckett, winning the franchise's second and third World Series (and first and second in Minnesota). The name "Twins" was derived from "Twin Cities", a popular nickname for the Minneapolis-St. Paul region. The NBA's Minneapolis Lakers had moved to Los Angeles in 1960 due to poor attendance, blamed in part on a perceived reluctance of fans in St. Paul to support the team. Griffith was determined not to alienate fans in either city by naming the team after one city or the other. He proposed to name the team the "Twin Cities Twins", but MLB objected and Griffith therefore named the team the Minnesota Twins. 
The team was allowed to keep its original "TC" (for Twin Cities) insignia for its caps. The team's logo shows two men, one in a Minneapolis Millers uniform and one in a St. Paul Saints uniform, shaking hands across the Mississippi River within an outline of the state of Minnesota. The "TC" remained on the Twins' caps until 1987, when they adopted new uniforms. By this time, the team felt it was established enough to put an "M" on its cap without having St. Paul fans think it stood for Minneapolis. The "TC" logo was moved to a sleeve on the jerseys, occasionally appeared as an alternate cap design, and then was reinstated as the main cap logo in 2010. Both the "TC" and "Minnie & Paul" logos remain the team's primary insignia. 1960s The Twins were eagerly greeted in Minnesota when they arrived in 1961. They brought a nucleus of talented players: Harmon Killebrew, Bob Allison, Camilo Pascual, Zoilo Versalles, Jim Kaat, Earl Battey, and Lenny Green. Tony Oliva, who would go on to win American League batting championships in 1964, 1965 and 1971, made his major league debut in 1962. That year, the Twins won 91 games, the most by the franchise since 1933. Behind Mudcat Grant's 21 victories, Versalles' A.L. MVP season and Oliva's batting title, the Twins won 102 games and the American League Pennant in 1965, but they were defeated in the World Series by the Los Angeles Dodgers in seven games (behind the Series MVP, Sandy Koufax, who compiled a 2–1 record, including winning the seventh game). In 1962, the Minnesota State Commission on Discrimination filed a complaint against the Twins, which was the only MLB team still segregating players during spring training and when traveling in the southern United States. Heading into the final weekend of the 1967 season, when Rod Carew was named the A.L. Rookie of the Year, the Twins, Boston Red Sox, Chicago White Sox, and Detroit Tigers all had a shot at clinching the American League championship. 
The Twins and the Red Sox started the weekend tied for 1st place and played against each other in Boston for the final three games of the season. The Red Sox won two out of the three games, seizing their first pennant since 1946 with a 92–70 record. The Twins and Tigers both finished one game back, with 91–71 records, while the White Sox finished three games back, at 89–73. In 1969, the new manager of the Twins, Billy Martin, pushed aggressive base running all-around, with Carew stealing home seven times in the season (1 short of Ty Cobb's Major League Record) in addition to winning the first of seven A.L. batting championships. With Killebrew slugging 49 homers and winning the AL MVP Award, these 1969 Twins won the very first American League Western Division Championship, but they lost three straight games to the Baltimore Orioles, winners of 109 games, in the first American League Championship Series. The Orioles would go on to be upset by the New York Mets in the World Series. Martin was fired after the season, in part due to an August fight in Detroit with 20-game winner Dave Boswell and outfielder Bob Allison, in an alley outside the Lindell A.C. bar. Bill Rigney led the Twins to a repeat division title in 1970, behind the star pitching of Jim Perry (24–12), the A.L. Cy Young Award winner, while the Orioles again won the Eastern Division Championship behind the star pitching of Jim Palmer. Once again, the Orioles won the A.L. Championship Series in a three-game sweep, and this time they would win the World Series. 1970s After winning the division again in 1970, the team entered an eight-year dry spell, finishing around the .500 mark. Killebrew departed after 1974. Owner Calvin Griffith faced financial difficulty with the start of free agency, costing the Twins the services of Lyman Bostock and Larry Hisle, who left as free agents after the 1977 season, and Carew, who was traded after the 1978 season. 
In 1975, Carew won his fourth consecutive AL batting title, having already joined Ty Cobb as the only players to lead the major leagues in batting average for three consecutive seasons. In 1977, Carew batted .388, which was the highest in baseball since Boston's Ted Williams hit .406 in 1941; he won the 1977 AL MVP Award. He won another batting title in 1978, hitting .333.

1980s–90s

In 1982, the Twins moved into the Hubert H. Humphrey Metrodome, which they shared with the Minnesota Vikings. After a 16–54 start, the Twins were on the verge of becoming the worst team in MLB history. They turned the season around somewhat, but still lost 102 games, finishing with what is currently the second-worst record in Twins history (beaten only by the 2016 team, which lost 103 games), despite the .301 average, 23 homers and 92 RBI from rookie Kent Hrbek. In 1984, Griffith sold the Twins to multi-billionaire banker/financier Carl Pohlad. Pohlad beat a larger offer by New York businessman Donald Trump by promising to keep the club in Minnesota. The Metrodome hosted the 1985 Major League Baseball All-Star Game. After several losing seasons, the 1987 team, led by Hrbek, Gary Gaetti, Frank Viola (A.L. Cy Young winner in 1988), Bert Blyleven, Jeff Reardon, Tom Brunansky, Dan Gladden, and rising star Kirby Puckett, returned to the World Series after defeating the favored Detroit Tigers in the ALCS, 4 games to 1. Tom Kelly managed the Twins to World Series victories over the St. Louis Cardinals in 1987 and the Atlanta Braves in 1991. The 1988 Twins were the first team in American League history to draw more than 3 million fans. On July 17, 1990, the Twins became the only team in major league history to pull off two triple plays in the same game. Twins' pitcher and Minnesota native Jack Morris was the star of the series in 1991, going 2–0 in his three starts with a 1.17 ERA.
1991 also marked the first time that teams that had finished in last place in their divisions advanced to the World Series the following season; both the Twins and the Braves did so in 1991 after finishing last in 1990. Contributors to the 1991 Twins' improvement from 74 wins to 95 included Chuck Knoblauch, the A.L. Rookie of the Year; Scott Erickson, a 20-game winner; new closer Rick Aguilera; and new designated hitter Chili Davis. The 1991 World Series is regarded by many as one of the classics of all time. In this Series, four games were won during the teams' final at-bat, and three of these were in extra innings. The Atlanta Braves won all three of their games in Atlanta, and the Twins won all four of their games in Minnesota. The sixth game was a legendary one for Puckett, who tripled in a run, made a sensational leaping catch against the wall, and finally, in the 11th inning, hit the game-winning home run. The seventh game was tied 0–0 after the regulation nine innings, and marked only the second time that the seventh game of the World Series had ever gone into extra innings. The Twins won on a walk-off RBI single by Gene Larkin in the bottom of the 10th inning, after Morris had pitched ten shutout innings against the Braves. The seventh game of the 1991 World Series is widely regarded as one of the greatest games in the history of professional baseball. After a winning season in 1992, in which the team nevertheless fell short of Oakland in the division, the Twins fell into a years-long stretch of mediocrity, posting a losing record in each of the next eight seasons: 71–91 in 1993, 50–63 in 1994, 56–88 in 1995, 78–84 in 1996, 68–94 in 1997, 70–92 in 1998, 63–97 in 1999 and 69–93 in 2000. From 1994 to 1997, a long sequence of retirements and injuries hurt the team badly, and Tom Kelly spent the remainder of his managerial career attempting to rebuild the Twins. In 1997, owner Carl Pohlad almost sold the Twins to North Carolina businessman Don Beaver, who would have moved the team to the Piedmont Triad area.
Puckett was forced to retire at age 35 due to loss of vision in one eye from a central retinal vein occlusion. The 1989 A.L. batting champion, he retired as the Twins' all-time leader in career hits, runs, doubles, and total bases. At the time of his retirement, his .318 career batting average was the highest by any right-handed American League batter since Joe DiMaggio. Puckett was the fourth baseball player during the 20th century to record 1,000 hits in his first five full calendar years in Major League Baseball, and was the second to record 2,000 hits during his first 10 full calendar years. He was elected to the Baseball Hall of Fame in 2001, his first year of eligibility.

2000s

The Twins dominated the Central Division in the first decade of the new century, winning the division in six of those ten years ('02, '03, '04, '06, '09 and '10), and nearly winning it in '08 as well. From 2001 to 2006, the Twins compiled the longest streak of consecutive winning seasons since moving to Minnesota. Threatened with closure by league contraction, the 2002 team battled back to reach the American League Championship Series before being eliminated 4–1 by that year's World Series champion Anaheim Angels. The Twins have not won a playoff series since the 2002 ALDS against Oakland, despite winning several division championships in the decade. In 2006, the Twins won the division on the last day of the regular season (the only day all season they held sole possession of first place) but lost to the Oakland Athletics in the ALDS. Ozzie Guillén coined a nickname for this squad, calling the Twins "little piranhas". The Twins players embraced the label, and in response the Twins front office started a "Piranha Night", with piranha finger puppets given out to the first 10,000 fans.
Scoreboard operators sometimes played an animated sequence of piranhas munching under that caption in situations where the Twins were scoring runs playing "small ball", and the stadium vendors sold T-shirts and hats advertising "The Little Piranhas". The Twins also had the AL MVP in Justin Morneau, the AL batting champion in Joe Mauer, and the AL Cy Young Award winner in Johan Santana. In 2008, the Twins finished the regular season tied with the White Sox on top of the AL Central, forcing a one-game playoff in Chicago to determine the division champion. The Twins lost that game and missed the playoffs. Under the rules then in effect, the game's location was determined by a coin flip conducted in mid-September. This rule was changed for the start of the 2009 season, so that the site of any tiebreaker game would be determined by the better regular-season head-to-head record between the teams involved. After a season in which the Twins played .500 baseball for most of the year, the team won 17 of their last 21 games to tie the Detroit Tigers for the lead in the Central Division. The Twins were able to use the play-in game rule to their advantage when they won the AL Central at the end of the regular season by way of a 6–5 tiebreaker game that concluded with a 12th-inning walk-off hit to right field by Alexi Casilla that scored Carlos Gómez. However, they failed to advance to the American League Championship Series, as they lost the American League Division Series in three straight games to the eventual World Series champion New York Yankees. That year, Joe Mauer became only the second catcher in 33 years to win the AL MVP award: Iván Rodríguez had won for the Texas Rangers in 1999, and before that, the last catcher to win an AL MVP was the New York Yankees' Thurman Munson in 1976. 2010 marked Minnesota's inaugural season played at Target Field, where the Twins finished the regular season with a record
Stadium Commission and obtaining a state court ruling that his team was not obligated to play in the Metrodome after the 2006 season. This cleared the way for the Twins to move or disband before the 2007 season if a new deal was not reached. Target Field In response to the threatened loss of the Twins, the Minnesota private and public sector negotiated and approved a financing package for a replacement stadium: a baseball-only, outdoor, natural-turf ballpark in the Warehouse District of downtown Minneapolis, owned by a new entity known as the Minnesota Ballpark Authority. Target Field was constructed at a cost of $544.4 million (including site acquisition and infrastructure), utilizing the proceeds of a $392 million public bond offering based on a 0.15% sales tax in Hennepin County and private financing of $185 million provided by the Pohlad family. As part of the deal, the Twins also signed a 30-year lease of the new stadium, effectively guaranteeing that the team will remain in Minnesota for the long term. Construction of the new field began in 2007 and was completed in December 2009, in time for the 2010 season. Commissioner Bud Selig, who earlier had threatened to disband the team, observed that without the new stadium the Twins could not have committed to sign their star player, catcher Joe Mauer, to an 8-year, $184 million contract extension. The first regular-season game in Target Field was played against the Boston Red Sox on April 12, 2010, with Mauer driving in two runs and going 3-for-5 to help the Twins defeat the Red Sox, 5–2. On May 18, 2011, Target Field was named "The Best Place To Shop" by Street and Smith's SportsBusiness Journal at the magazine's 2011 Sports Business Awards Ceremony in New York City. It was also named "The Best Sports Stadium in North America" by ESPN The Magazine in a ranking that included over 120 different stadiums, ballparks and arenas from around North America.
In July 2014, Target Field hosted the 85th Major League Baseball All-Star Game and the Home Run Derby. In June 2020, following protests over the murder of George Floyd, a statue of former owner Calvin Griffith was removed from Target Plaza outside of the stadium because of his history of racist comments. Uniforms Current The Twins' white home uniform, first used in 2015, features the current "Twins" script (with an underline below "win") in navy outlined in red with Kasota gold drop shadows. Letters and numerals also take on the same color as the "Twins" script. The modern "Minnie and Paul" alternate logo (with the state of Minnesota in navy outlined in Kasota gold) appears on the left sleeve. Caps are in all-navy with the interlocking "TC" outlined in Kasota gold. The Twins' red alternate home uniform, first used in 2016, features the "TC" insignia outlined in Kasota gold on the left chest. Letters and numerals are in navy outlined in white with Kasota gold drop shadows. The "Minnie and Paul" alternate logo appears on the left sleeve. The uniform is paired with a navy-brimmed red cap with the "TC" outlined in Kasota gold. The Twins' navy alternate home uniform, first used in 2019, features the classic "Twins" script (with a tail underline accent after the letter "s") in red outlined in navy and Kasota gold. Letters and numerals also take on the same color as the "Twins" script. As with the home white uniforms, it is paired with the all-navy Kasota gold "TC" cap. The gold-trimmed "TC" insignia also appears on the left sleeve. The Twins' powder blue alternate uniform, first used in 2020, is a modern buttoned version of the road uniform the team used from 1973 to 1986. The set contains the classic "Twins" script in red outlined in navy, along with red letters on the back and red numerals (both on the chest and on the back) outlined in navy. The "Minnie and Paul" alternate logo appears on the left sleeve. 
The uniform is paired with the primary all-navy "TC" cap minus the Kasota gold accents, which is also used on the helmets regardless of uniform. The Twins' grey road uniform, first used in 2010, features the current "Minnesota" script (with an underline below "innesot") in red trimmed in navy. Letters are in navy while numerals (both on the chest and on the back) are in red trimmed in navy. The team's primary logo appears on the left sleeve. The uniform is paired with either the all-navy or the red-brimmed navy "TC" cap. The Twins' navy alternate road uniform, first used in 2011, shares the same look as the regular road uniforms, but with a few differences. The "Minnesota" script is in red outlined in white, letters and chest numerals are in white outlined in red, and back numerals are in red outlined in white. Red piping is also added. The uniform is paired with either the all-navy or the red-brimmed navy "TC" cap. Past uniforms From 1961 to 1971 the Twins sported uniforms bearing the classic "Twins" script and numerals in navy outlined in red. They wore navy caps with an interlocking "TC" on the front; this was adopted because Griffith was well aware of the bitter rivalry between St. Paul and Minneapolis and did not want to alienate fans in either city. The original "Minnie and Paul" alternate logo appeared on the left sleeve of both the pinstriped white home uniform and the grey road uniform. For the 1972 season the Twins updated their uniforms. The color scheme on the "Twins" script and numerals was reversed, pinstripes were removed from the home uniform, and an updated "Minnie and Paul" roundel patch replaced the originals on the left sleeve. In 1973 the Twins switched to polyester pullover uniforms, which included a powder blue road uniform. Chest numerals were added, while a navy-brimmed red cap was used with the home uniform. The original "Minnie and Paul" logo returned to the left sleeve. Player names in red were added to the road uniform in 1977.
In 1987 the Twins updated their look. Home white uniforms brought back the pinstripes along with the modern-day "Twins" script. By this time, the franchise felt it was established enough in the area that it could put a stylized "M" on its cap without having fans in St. Paul think it stood for Minneapolis. The "TC" insignia adorned the left sleeve, later replaced by the modern "Minnie and Paul" alternate in 2002. Road grey uniforms, which also featured pinstripes, were emblazoned with "Minnesota" in red block letters outlined in navy, while the updated primary logo adorned the left sleeve. Both uniforms kept the red numerals trimmed in navy, but the color on the player names was changed to navy. In 1997, player names were added to the home uniform. Initially, both uniforms were paired with an all-navy cap featuring the underlined "M" in front, but in 2002, the "TC" cap was brought back as a home cap while the "M" cap was used on the road. The "M" cap was retired following the 2010 season, though the team continued to wear it as a throwback on special occasions. For a few games during the 1997 season, the Twins wore red alternate uniforms, which featured navy piping and letters in white trimmed in navy. In that same year, the Twins also released a road navy alternate uniform, featuring red piping, "Minnesota" and player names in white block letters outlined in red, and red numerals outlined in white. The following season, the Twins replaced the red uniforms with a home navy alternate, which featured the "Twins" script and back numerals in red outlined in white, and player names and chest numerals in white outlined in red. Both uniforms contained the "TC" (later modern "Minnie and Paul") and primary logo sleeve patches respectively. The Twins also brought back the navy-brimmed red cap for a few games with the home navy alternates. The road navy alternates remained in use until 2009, with the home navy version worn for the last time in the 2013 season.
The Twins also wore three other alternate uniforms in the past. In 2006, the Twins wore a sleeveless variation of their regular home uniforms with navy undershirts, which they wore until 2010. They also wore a buttoned version of their 1973–86 home uniforms in 2009, before giving way to the throwback off-white version of their 1961–71 home uniforms from 2010 to 2018. Roster Minnesota Twins all-time roster: A complete list of players who played in at least one game for the Twins franchise. Minor league affiliates The Minnesota Twins farm system consists of six minor league affiliates. With the invitation of the St. Paul Saints to join the Twins' farm system, the Twins will have the closest MiLB affiliate of any team in baseball. Achievements Baseball Hall of Fame members Molitor, Morris, and Winfield were all St. Paul natives who joined the Twins late in their careers and were warmly received as "hometown heroes", but were elected to the hall primarily on the basis of their tenures with other teams. Both Molitor and Winfield had their 3,000th hit with Minnesota, while Morris pitched a complete-game shutout for the Twins in game seven of the 1991 World Series. Molitor was the first player in history to hit a triple for his 3,000th hit. Cronin, Goslin, Griffith, Harris, Johnson, Killebrew and Wynn are listed on the Washington Hall of Stars display at Nationals Park (previously they were listed at Robert F. Kennedy Stadium). So are Ossie Bluege, George Case, Joe Judge, George Selkirk, Roy Sievers, Cecil Travis, Mickey Vernon and Eddie Yost. Ford C. Frick Award recipients Team captains 3 Harmon Killebrew 1961–74 Twins Hall of Fame Retired numbers The Metrodome's upper deck in center and right fields was partly covered by a curtain containing banners for the various titles won and the retired numbers.
There was no acknowledgment of the Twins' prior championships in Washington, and several Senator Hall of Famers, such as Walter Johnson, played in the days prior to numbers being used on uniforms. However, Killebrew played seven seasons as a Senator, including two full seasons as a regular prior to the move to Minnesota in 1961. Prior to the addition of the banners, the Twins acknowledged their retired numbers on the Metrodome's outfield fence. Harmon Killebrew's #3 was the first to be displayed, as it was the only one the team had retired when they moved in. It was joined by Rod Carew's #29 in 1987, Tony Oliva's #6 in 1991, Kent Hrbek's #14 in 1995, and Kirby Puckett's #34 in 1997 before the Twins began hanging the banners to reduce capacity. The championships, meanwhile, were marked on the "Baggie" in right field. In the Metrodome, the numbers ran in that order from left to right. In Target Field, they run from right to left, presumably to allow space for additional numbers in the future. The retired numbers also serve as entry points at Target Field: the center field gate is Gate No. 3, honoring Killebrew; the left-field gate is Gate No. 6, honoring Oliva; the home plate gate is Gate No. 14, for Hrbek; the right field gate is Gate No. 29, in tribute to Carew; and the plaza gate is Gate No. 34, honoring Puckett. The numbers that have been retired hang within Target Field in front of the tower that serves as the Twins' executive offices in left field foul territory. The championship banners have been replaced by small pennants that fly on masts at the back of the left-field upper deck. Those pennants, along with the flags flying in the plaza behind right field, serve as a visual cue for the players, suggesting the wind direction and speed. Jackie Robinson's number, 42, was retired by Major League Baseball on April 15, 1997, and formally honored by the Twins on May 23, 1997.
Robinson's number was positioned to the left of the Twins numbers in both venues. Awards Team records Team seasons Radio and television In 2007, the Twins brought the broadcast rights in-house and created the Twins Radio Network (TRN). With that new network in place, the Twins secured a new metro-affiliate flagship radio station in KSTP (AM 1500). It replaced WCCO (AM 830), which had held broadcast rights for the Twins since the team moved to Minneapolis in 1961. For 2013, the Twins moved to FM radio on KTWN-FM 96.3 K-Twin, which is owned by the Pohlad family. The original radio voices of the Twins in 1961 were Ray Scott, Halsey Hall and Bob Wolff. After the first season, Herb Carneal replaced Wolff. Twins TV and radio broadcasts were originally sponsored by the Hamm's Brewing Company. In 2009, Treasure Island Resort & Casino became the first-ever naming rights partner for the Twins Radio Network, making the commercial name of TRN the Treasure Island Baseball Network. In 2017, it was announced that WCCO would become the flagship station of the Twins again starting in 2018, returning the team to its original station after 11 years. Cory Provus is the current radio play-by-play announcer, taking over in 2012 from longtime Twins voice John Gordon, who retired following the 2011 season. Former Twins outfielder Dan Gladden serves as color commentator. TRN broadcasts originate from the studios of Minnesota News Network and Minnesota Farm Networks. Kris Atteberry hosts the pre-game show, the "Lineup Card" and the "Post-game Download" from those studios, except when filling in for Provus or Gladden while they are on vacation. On April 1, 2007, Herb Carneal, the radio voice of the Twins for all but one year of their existence, died at his home in Minnetonka after a long battle with a series of illnesses. Carneal is in the broadcasters' wing of the Baseball Hall of Fame.
The television rights are held by Bally Sports North with Dick Bremer as the play-by-play announcer and former Twin, 2011 National Baseball Hall of Fame inductee, Bert
At high enough Mach numbers the temperature increases so much over the shock that ionization and dissociation of gas molecules behind the shock wave begin. Such flows are called hypersonic. It is clear that any object traveling at hypersonic speeds will likewise be exposed to the same extreme temperatures as the gas behind the nose shock wave, and hence the choice of heat-resistant materials becomes important. High-speed flow in a channel As a flow in a channel becomes supersonic, one significant change takes place. The conservation of mass flow rate leads one to expect that contracting the flow channel would increase the flow speed (i.e. making the channel narrower results in faster air flow), and at subsonic speeds this holds true. However, once the flow becomes supersonic, the relationship of flow area and speed is reversed: expanding the channel actually increases the speed. The obvious result is that in order to accelerate a flow to supersonic speed, one needs a convergent-divergent nozzle, where the converging section accelerates the flow to sonic speed and the diverging section continues the acceleration. Such nozzles are called de Laval nozzles, and in extreme cases they are able to reach hypersonic speeds. An aircraft Machmeter or electronic flight information system (EFIS) can display Mach number derived from stagnation pressure (pitot tube) and static pressure. Calculation When the speed of sound is known, the Mach number at which an aircraft is flying can be calculated by M = u / c, where M is the Mach number, u is the velocity of the moving aircraft, and c is the speed of sound at the given altitude (more properly, at the given temperature). The speed of sound varies with the thermodynamic temperature as c = √(γRT), where γ is the ratio of the specific heat of a gas at constant pressure to that at constant volume (1.4 for air), R is the specific gas constant for air, and T is the static air temperature.
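The two relations above can be checked numerically; the following is a minimal sketch in Python (the function and constant names, and the value R = 287.05 J/(kg·K) for air, are assumptions of this example rather than values given in the text):

```python
import math

GAMMA = 1.4     # ratio of specific heats for air, as stated in the text
R_AIR = 287.05  # specific gas constant for air in J/(kg*K) -- assumed value

def speed_of_sound(static_temp_kelvin: float) -> float:
    """Speed of sound c = sqrt(gamma * R * T) for an ideal gas."""
    return math.sqrt(GAMMA * R_AIR * static_temp_kelvin)

def mach_number(true_airspeed: float, static_temp_kelvin: float) -> float:
    """Mach number M = u / c at the given static air temperature."""
    return true_airspeed / speed_of_sound(static_temp_kelvin)
```

At 20 °C (293.15 K) this gives a speed of sound of roughly 343 m/s, matching the commonly quoted sea-level value.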
If the speed of sound is not known, the Mach number may be determined by measuring the various air pressures (static and dynamic) and using the following formula, which is derived from Bernoulli's equation for Mach numbers less than 1.0. Assuming air to be an ideal gas, the formula to compute the Mach number in a subsonic compressible flow is M = √( (2/(γ − 1)) · [ (qc/p + 1)^((γ − 1)/γ) − 1 ] ), where qc is impact pressure (dynamic pressure), p is static pressure, and γ is the ratio of the specific heat of a gas at constant pressure to that at constant volume (1.4 for air). The formula to compute the Mach number in a supersonic compressible flow is derived from the Rayleigh supersonic pitot equation: qc/p = [ (γ + 1)²M² / (4γM² − 2(γ − 1)) ]^(γ/(γ − 1)) · (1 − γ + 2γM²)/(γ + 1) − 1. Calculating Mach number from pitot tube pressure Mach number is a function of temperature and true airspeed. Aircraft flight instruments, however, operate using pressure differential to compute Mach number, not temperature.
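A sketch of how these pressure-based formulas might be evaluated in practice: the subsonic relation is closed-form, while the Rayleigh equation is implicit in M and must be solved numerically (bisection here is one simple choice; the function names and the bracketing interval [1, 50] are assumptions of this example):

```python
import math

GAMMA = 1.4  # ratio of specific heats for air

def mach_subsonic(qc: float, p: float, gamma: float = GAMMA) -> float:
    """Subsonic Mach number from impact pressure qc and static pressure p."""
    return math.sqrt((2.0 / (gamma - 1.0)) *
                     ((qc / p + 1.0) ** ((gamma - 1.0) / gamma) - 1.0))

def mach_supersonic(qc: float, p: float, gamma: float = GAMMA,
                    tol: float = 1e-10, max_iter: int = 100) -> float:
    """Supersonic Mach number from the Rayleigh pitot equation,
    solved for M by bisection (qc/p grows monotonically with M for M > 1)."""
    def rayleigh_ratio(M: float) -> float:
        # qc/p as a function of M, from the Rayleigh supersonic pitot equation
        a = ((gamma + 1.0) ** 2 * M * M /
             (4.0 * gamma * M * M - 2.0 * (gamma - 1.0))) ** (gamma / (gamma - 1.0))
        b = (1.0 - gamma + 2.0 * gamma * M * M) / (gamma + 1.0)
        return a * b - 1.0
    lo, hi = 1.0, 50.0  # assumed search bracket for M
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        if rayleigh_ratio(mid) < qc / p:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)
```

As a consistency check, at M = 2 the Rayleigh equation gives qc/p ≈ 4.64, and feeding that ratio back into the solver recovers M ≈ 2.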
of M > 1 flow increases towards both leading and trailing edges. As M = 1 is reached and passed, the normal shock reaches the trailing edge and becomes a weak oblique shock: the flow decelerates over the shock, but remains supersonic. A normal shock is created ahead of the object, and the only subsonic zone in the flow field is a small area around the object's leading edge. (Fig. 1b) Fig. 1. Mach number in transonic airflow around an airfoil; M < 1 (a) and M > 1 (b). When an aircraft exceeds Mach 1 (i.e. the sound barrier), a large pressure difference is created just in front of the aircraft. This abrupt pressure difference, called a shock wave, spreads backward and outward from the aircraft in a cone shape (a so-called Mach cone). It is this shock wave that causes the sonic boom heard as a fast-moving aircraft travels overhead. A person inside the aircraft will not hear this. The higher the speed, the narrower the cone; at just over M = 1 it is hardly a cone at all, but closer to a slightly concave plane. At fully supersonic speed, the shock wave starts to take its cone shape and flow is either completely supersonic, or (in the case of a blunt object) only a very small subsonic flow area remains between the object's nose and the shock wave it creates ahead of itself. (In the case of a sharp object, there is no air between the nose and the shock wave: the shock wave starts from the nose.) As the Mach number increases, so does the strength of the shock wave, and the Mach cone becomes increasingly narrow. As the fluid flow crosses the shock wave, its speed is reduced and temperature, pressure, and density increase. The stronger the shock, the greater the changes.
the Battle of Gulnabad. 1736 – Nader Shah, founder of the Afsharid dynasty, is crowned Shah of Iran. 1775 – An anonymous writer, thought by some to be Thomas Paine, publishes "African Slavery in America", the first article in the American colonies calling for the emancipation of slaves and the abolition of slavery. 1782 – Gnadenhutten massacre: Ninety-six Native Americans in Gnadenhutten, Ohio, who had converted to Christianity, are killed by Pennsylvania militiamen in retaliation for raids carried out by other Indian tribes. 1801 – War of the Second Coalition: At the Battle of Abukir, a British force under Sir Ralph Abercromby lands in Egypt with the aim of ending the French campaign in Egypt and Syria. 1817 – The New York Stock Exchange is founded. 1844 – King Oscar I ascends to the thrones of Sweden and Norway. 1844 – The Althing, the parliament of Iceland, is reopened after 45 years of closure. 1868 – Sakai incident: Japanese samurai kill 11 French sailors in the port of Sakai, Osaka. 1901–present 1910 – French aviator Raymonde de Laroche becomes the first woman to receive a pilot's license. 1916 – World War I: A British force unsuccessfully attempts to relieve the siege of Kut (present-day Iraq) in the Battle of Dujaila. 1917 – International Women's Day protests in Petrograd mark the beginning of the February Revolution (February 23 in the Julian calendar). 1917 – The United States Senate votes to limit filibusters by adopting the cloture rule. 1921 – Spanish Prime Minister Eduardo Dato Iradier is assassinated while on his way home from the parliament building in Madrid. 1924 – A mine disaster kills 172 coal miners near Castle Gate, Utah. 1936 – Daytona Beach and Road Course holds its first oval stock car race. 1937 – Spanish Civil War: The Battle of Guadalajara begins. 1942 – World War II: The Dutch East Indies surrender Java to the Imperial Japanese Army. 1942 – World War II: Imperial Japanese Army forces capture Rangoon, Burma, from the British.
1963 – The Ba'ath Party comes to power in Syria in a coup d'état. 1966 – Nelson's Pillar in Dublin, Ireland, is destroyed by a bomb. 1979 – Philips demonstrates the compact disc publicly for the first time. 1983 – Cold War: While addressing a convention of Evangelicals, U.S. President Ronald Reagan labels the Soviet Union an "evil empire". 1985 – A supposed failed assassination attempt on Islamic cleric Sayyed Mohammad Hussein Fadlallah in Beirut, Lebanon kills at least 56 and injures 180 others. 2004 – A new constitution is signed by Iraq's Governing Council. 2014 – In one of aviation's greatest mysteries, Malaysia Airlines Flight 370, carrying a total of 239 people, disappears en route from Kuala Lumpur to Beijing. The fate of the flight remains unknown. 2017 – The Azure Window, a natural arch on the Maltese island of Gozo, collapses in stormy weather. 2018 – The first Aurat March (a social/political demonstration) is held on International Women's Day in Karachi, Pakistan; it has since been held annually across Pakistan, and the feminist slogan Mera Jism Meri Marzi (My body, my choice), demanding women's right to bodily autonomy and opposing gender-based violence, came into vogue in Pakistan. 2021 – International Women's Day marches in Mexico become violent, with 62 police officers and 19 civilians injured in Mexico City alone. Births Pre-1600 1495 – John of God, Portuguese friar and saint (d. 1550) 1601–1900 1712 – John Fothergill, English physician and botanist (d. 1780) 1714 – Carl Philipp Emanuel Bach, German pianist and composer (d. 1788) 1726 – Richard Howe, 1st Earl Howe, English admiral and politician, Treasurer of the Navy (d. 1799) 1746 – André Michaux, French botanist and explorer (d. 1802) 1748 – William V, Prince of Orange (d. 1806) 1761 – Jan Potocki, Polish ethnologist, historian, linguist, and author (d. 1815) 1799 – Simon Cameron, American journalist and politician, United States Secretary of War (d.
1889) 1804 – Alvan Clark, American astronomer and optician (d. 1887) 1822 – Ignacy Łukasiewicz, Polish inventor and businessman, invented the kerosene lamp (d. 1882) 1827 – Wilhelm Bleek, German linguist and anthropologist (d. 1875) 1830 – João de Deus, Portuguese poet and educator (d. 1896) 1836 – Harriet Samuel, English businesswoman and founder of the jewellery retailer H. Samuel (d. 1908) 1841 – Oliver Wendell Holmes Jr., American lawyer and jurist (d. 1935) 1851 – Frank Avery Hutchins, American librarian and educator (d. 1914) 1856 – Bramwell Booth, English 2nd General of The Salvation Army (d. 1929) 1856 – Colin Campbell Cooper, American painter and academic (d. 1937) 1859 – Kenneth Grahame, British author (d. 1932) 1865 – Frederic Goudy, American type designer (d. 1947) 1879 – Otto Hahn, German chemist and academic, Nobel Prize laureate (d. 1968) 1886 – Edward Calvin Kendall, American chemist and academic, Nobel Prize laureate (d. 1972) 1892 – Juana de Ibarbourou, Uruguayan poet and author (d. 1979) 1896 – Charlotte Whitton, Canadian journalist and politician, 46th Mayor of Ottawa (d. 1975) 1901–present 1902 – Louise Beavers, American actress and singer (d. 1962) 1902 – Jennings Randolph, American journalist and politician (d. 1998) 1907 – Konstantinos Karamanlis, Greek lawyer and politician, President of Greece (d. 1998) 1909 – Beatrice Shilling, English motorcycle racer and engineer (d. 1990) 1910 – Claire Trevor, American actress (d. 2000) 1911 – Alan Hovhaness, Armenian-American pianist and composer (d. 2000) 1912 – Preston Smith, American businessman and politician, Governor of Texas (d. 2003) 1912 – Meldrim Thomson Jr., American publisher and politician, Governor of New Hampshire (d. 2001) 1914 – Yakov Borisovich Zel'dovich, Belarusian-Russian physicist and astronomer (d. 1987) 1918 – Eileen Herlie, Scottish-American actress (d. 2008) 1921 – Alan Hale Jr., American actor and restaurateur (d. 1990) 1922 – Ralph H.
Baer, German-American video game designer, created the Magnavox Odyssey (d. 2014) 1922 – Cyd Charisse, American
Douglas Hurd, English politician 1931 – John McPhee, American author and educator 1931 – Neil Postman, American author and social critic (d. 2003) 1931 – Gerald Potterton, English-Canadian animator, director, and producer 1934 – Marv Breeding, American baseball player and scout (d. 2006) 1935 – George Coleman, American saxophonist, composer, and bandleader 1936 – Sue Ane Langdon, American actress and singer 1937 – Richard Fariña, American singer-songwriter and author (d. 1966) 1937 – Juvénal Habyarimana, Rwandan politician, President of Rwanda (d. 1994) 1939 – Jim Bouton, American baseball player and journalist (d. 2019) 1939 – Lynn Seymour, Canadian ballerina and choreographer 1939 – Lidiya Skoblikova, Russian speed skater and coach 1939 – Robert Tear, Welsh tenor and conductor (d. 2011) 1941 – Norman Stone, British historian, author, and academic (d. 2019) 1942 – Dick Allen, American baseball player and tenor (d. 2020) 1942 – Ann Packer, English sprinter, hurdler, and long jumper 1943 – Susan Clark, Canadian actress and producer 1943 – Lynn Redgrave, English-American actress and singer (d. 2010) 1944 – Sergey Nikitin, Russian singer-songwriter and guitarist 1945 – Micky Dolenz, American singer-songwriter and actor 1945 – Anselm Kiefer, German painter and sculptor 1946 – Randy Meisner, American singer-songwriter and bass player 1947 – Carole Bayer Sager, American singer-songwriter and painter 1947 – Michael S. Hart, American author, founded Project Gutenberg (d. 2011) 1948 – Mel Galley, English rock singer-songwriter and guitarist (d. 2008) 1948 – Jonathan Sacks, English rabbi, philosopher, and scholar (d. 2020) 1949 – Teofilo Cubillas, Peruvian footballer 1951 – Dianne Walker, American tap dancer 1953 – Jim Rice, American baseball player, coach, and sportscaster 1954 – Steve James, American documentary filmmaker 1954 – David Wilkie, Sri Lankan-Scottish swimmer 1956 – Laurie Cunningham, English footballer (d. 
1989) 1956 – David Malpass, American economist and government official 1957 – Clive Burr, English rock drummer (d. 2013) 1957 – William Edward Childs, American pianist and composer 1957 – Bob Stoddard, American baseball player 1958 – Gary Numan, English singer-songwriter, guitarist, and producer 1959 – Aidan Quinn, Irish-American actor 1960 – Irek Mukhamedov, Russian ballet dancer 1961 – Camryn Manheim, American actress 1961 – Larry Murphy, Canadian ice hockey player 1965 – Kenny Smith, American basketball player and sportscaster 1966 – Greg Barker, Baron Barker of Battle, English politician 1968 – Michael Bartels, German race car driver 1970 – Jason Elam, American football player 1972 – Lena Sundström, Swedish journalist and author 1976 – Juan Encarnación, Dominican baseball player 1976 – Freddie Prinze Jr., American actor, producer, and screenwriter 1977 – James Van Der Beek, American actor 1977 – Johann Vogel, Swiss footballer 1982 – Leonidas Kampantais, Greek footballer 1983 – André Santos, Brazilian footballer 1983 – Mark Worrell, American baseball player 1984 – Ross Taylor, New Zealand cricketer 1985 – Maria Ohisalo, Finnish politician and researcher 1990 – Asier Illarramendi, Spanish footballer 1990 – Petra Kvitová, Czech tennis player 1991 – Tom English, Australian rugby player 1994 – Claire Emslie, Scottish footballer 1996 – Matthew Hammelmann, Australian rules footballer 1997 – Tijana Bošković, Serbian volleyball player 1998 – Tali Darsigny, Canadian weightlifter Deaths Pre-1600 1126 – Urraca of León and Castile (b. 1079) 1137 – Adela of Normandy, by marriage countess of Blois (b. c. 1067) 1144 – Pope Celestine II 1403 – Bayezid I, Ottoman sultan (b. 1360) 1466 – Francesco I Sforza, Duke of Milan (b. 1401) 1550 – John of God, Portuguese friar and saint (b. 1495) 1601–1900 1619 – Veit Bach, German baker and miller 1641 – Xu Xiake, Chinese geographer and explorer (b. 1587) 1702 – William III of England (b. 
1650) 1717 – Abraham Darby I, English blacksmith (b. 1678) 1723 – Christopher Wren, English architect, designed St. Paul's Cathedral (b. 1632) 1844 – Charles XIV John of Sweden (b. 1763) 1869 – Hector Berlioz, French composer, conductor, and critic (b. 1803) 1872 – Priscilla Susan Bury, British botanist (b. 1799) 1872 – Cornelius Krieghoff, Dutch-Canadian painter (b. 1815) 1874 – Millard Fillmore, American lawyer and politician, 13th President of the United States (b. 1800) 1887 – Henry Ward Beecher, American minister and activist (b. 1813) 1887 – James Buchanan Eads, American engineer, designed the Eads Bridge (b. 1820) 1889 – John Ericsson, Swedish-American engineer (b. 1803) 1901–present 1917 – Ferdinand von Zeppelin, German general and businessman (b. 1838) 1923 – Johannes Diderik van der Waals, Dutch physicist and academic, Nobel Prize laureate (b. 1837) 1930 – William Howard Taft, American politician, 27th President of the United States (b. 1857) 1930 – Edward Terry Sanford, American jurist and politician, United States Assistant Attorney General (b. 1865) 1932 – Minna Craucher, Finnish socialite and spy (b. 1891) 1937 – Howie Morenz, Canadian ice hockey player (b. 1902) 1941 – Sherwood Anderson, American novelist and short story writer (b. 1876) 1942 – José Raúl Capablanca, Cuban chess player (b. 1888) 1944 – Fredy Hirsch, German Jewish athlete who helped thousands of Jewish children in the Holocaust (b. 1916) 1948 – Hulusi Behçet, Turkish dermatologist and scientist (b. 1889) 1957 – Othmar Schoeck, Swiss composer and conductor (b. 1886) 1961 – Thomas Beecham, English conductor and composer (b. 1879) 1971 – Harold Lloyd, American actor, director, and producer (b. 1893) 1973 – Ron "Pigpen" McKernan, American keyboard player and songwriter (b. 1945) 1975 – George Stevens, American director, producer, and screenwriter (b. 1904) 1983 – Alan Lennox-Boyd, 1st Viscount Boyd of Merton, English lieutenant and politician (b. 
1904) 1983 – William Walton, English composer (b. 1902) 1993 – Billy Eckstine, American trumpet player (b.
first of his New Deal policies. 1942 – World War II: The Dutch East Indies unconditionally surrenders to the Japanese forces in Kalijati, Subang, West Java, and the Japanese complete their Dutch East Indies campaign. 1944 – World War II: Soviet Army planes attack Tallinn, Estonia. 1945 – World War II: A coup d'état by Japanese forces in French Indochina removes the French from power. 1946 – The Bolton Wanderers stadium disaster at Burnden Park, Bolton, England, kills 33 and injures hundreds more. 1954 – McCarthyism: CBS television broadcasts the See It Now episode, "A Report on Senator Joseph McCarthy", produced by Fred Friendly. 1956 – Soviet forces suppress mass demonstrations in the Georgian SSR, reacting to Nikita Khrushchev's de-Stalinization policy. 1957 – The magnitude-8.6 Andreanof Islands earthquake shakes the Aleutian Islands, causing over $5 million in damage from ground movement and a destructive tsunami. 1959 – The Barbie doll makes its debut at the American International Toy Fair in New York. 1960 – Dr. Belding Hibbard Scribner implants a shunt he invented into a patient for the first time, allowing the patient to receive hemodialysis on a regular basis. 1961 – Sputnik 9 successfully launches, carrying a dog and a human dummy, and demonstrating that the Soviet Union was ready to begin human spaceflight. 1967 – Trans World Airlines Flight 553 crashes in a field in Concord Township, Ohio, following a mid-air collision with a Beechcraft Baron, killing 26 people. 1974 – The Mars 7 flyby bus releases its descent module too early, missing Mars. 1976 – Forty-two people die in the Cavalese cable car disaster, the worst cable-car accident to date. 1977 – The Hanafi Siege: In a thirty-nine-hour standoff, armed Hanafi Muslims seize three Washington, D.C., buildings. 1978 – President Soeharto inaugurates the Jagorawi Toll Road, the first toll highway in Indonesia, connecting Jakarta, Bogor and Ciawi, West Java. 
1987 – Chrysler announces its acquisition of American Motors Corporation. 1997 – Comet Hale–Bopp: Observers in China, Mongolia and eastern Siberia are treated to a rare double feature as an eclipse permits Hale–Bopp to be seen during the day. 1997 – The Notorious B.I.G. is murdered in Los Angeles after attending the Soul Train Music Awards. He is gunned down leaving an after party at the Petersen Automotive Museum. His murder remains unsolved. 2011 – Space Shuttle Discovery makes its final landing after 39 flights. Births Pre-1600 1451 – Amerigo Vespucci, Italian cartographer and explorer (d. 1512) 1564 – David Fabricius, German theologian, cartographer and astronomer (d. 1617) 1568 – Aloysius Gonzaga, Italian saint (d. 1591) 1601–1900 1662 – Franz Anton von Sporck, German noble (d. 1738) 1697 – Friederike Caroline Neuber, German actress (d. 1760) 1737 – Josef Mysliveček, Czech violinist and composer (d. 1781) 1749 – Honoré Gabriel Riqueti, comte de Mirabeau, French journalist and politician (d. 1791) 1753 – Jean-Baptiste Kléber, French general (d. 1800) 1758 – Franz Joseph Gall, German neuroanatomist and physiologist (d. 1828) 1763 – William Cobbett, English journalist and author (d. 1835) 1806 – Edwin Forrest, American actor and philanthropist (d. 1872) 1814 – Taras Shevchenko, Ukrainian poet and playwright (d. 1861) 1815 – David Davis, American jurist and politician (d. 1886) 1820 – Samuel Blatchford, American lawyer and jurist (d. 1893) 1824 – Amasa Leland Stanford, American businessman and politician, founded Stanford University (d. 1893) 1847 – Martin Pierre Marsick, Belgian violinist, composer, and educator (d. 1924) 1850 – Hamo Thornycroft, English sculptor and academic (d. 1925) 1856 – Eddie Foy, Sr., American actor and dancer (d. 1928) 1863 – Mary Harris Armor, American suffragist (d. 1950) 1887 – Fritz Lenz, German geneticist and physician (d. 1976) 1890 – Rupert Balfe, Australian footballer and lieutenant (d. 
1915) 1890 – Vyacheslav Molotov, Russian politician and diplomat, Soviet Minister of Foreign Affairs (d. 1986) 1891 – José P. Laurel, Filipino lawyer, politician and President of the Philippines (d. 1959) 1892 – Mátyás Rákosi, Hungarian politician (d. 1971) 1892 – Vita Sackville-West, English author, poet, and gardener (d. 1962) 1901–present 1902 – Will Geer, American actor (d. 1978) 1904 – Paul Wilbur Klipsch, American soldier and engineer, founded Klipsch Audio Technologies (d. 2002) 1910 – Samuel Barber, American pianist and composer (d. 1981) 1911 – Clara Rockmore, American classical violin prodigy and theremin player (d. 1998) 1915 – Johnnie Johnson, English air marshal and pilot (d. 2001) 1918 – George Lincoln Rockwell, American sailor and politician, founded the American Nazi Party (d. 1967) 1918 – Mickey Spillane, American crime novelist (d. 2006) 1920 – Franjo Mihalić, Croatian-Serbian runner and coach (d. 2015) 1921 – Carl Betz, American actor (d. 1978) 1922 – Ian Turbott, New Zealand-Australian diplomat and university administrator (d. 2016) 1923 – James L. Buckley, American lawyer, judge, and politician 1923 – André Courrèges, French fashion designer (d. 2016) 1923 – Walter Kohn, Austrian-American physicist and academic, Nobel Prize laureate (d. 2016) 1926 – Joe Franklin, American radio and television host (d. 2015) 1928 – Gerald Bull, Canadian-American engineer and academic (d. 1990) 1928 – Keely Smith, American singer and actress (d. 2017) 1929 – Desmond Hoyte, Guyanese lawyer, politician and President of Guyana (d. 2002) 1929 – Zillur Rahman, Bangladeshi politician, 19th President of Bangladesh (d. 2013) 1930 – Ornette Coleman, American saxophonist, violinist, trumpet player, and composer (d. 2015) 1931 – Jackie Healy-Rae, Irish politician (d. 2014) 1932 – Qayyum Chowdhury, Bangladeshi painter and academic (d. 2014) 1932 – Walter Mercado, Puerto Rican-American astrologer and actor (d. 
2019) 1933 – Lloyd Price, American R&B singer-songwriter (d. 2021) 1933 – David Weatherall, English physician, geneticist, and academic (d. 2018)
Minister of External Affairs 1956 – David Willetts, English academic and politician 1958 – Paul MacLean, Canadian ice hockey player and coach 1959 – Takaaki Kajita, Japanese physicist and academic, Nobel Prize laureate 1959 – Lonny Price, American actor, director, and screenwriter 1960 – Linda Fiorentino, American actress 1961 – Rick Steiner, American wrestler 1961 – Darrell Walker, American basketball player and coach 1963 – Terry Mulholland, American baseball player 1963 – Jean-Marc Vallée, Canadian director and screenwriter 1964 – Juliette Binoche, French actress 1964 – Phil Housley, American ice hockey player and coach 1965 – Brian Bosworth, American football player and actor 1965 – Benito Santiago, Puerto Rican-American baseball player 1966 – Brendan Canty, American drummer and songwriter 1966 – Tony Lockett, Australian footballer 1968 – Youri Djorkaeff, French footballer 1969 – Kimberly Guilfoyle, American lawyer and journalist 1970 – Naveen Jindal, Indian businessman and politician 1970 – Martin Johnson, English rugby player and coach 1971 – Emmanuel Lewis, American actor 1972 – Jodey Arrington, American politician 1973 – Liam Griffin, English race car driver 1975 – Juan Sebastián Verón, Argentinian footballer 1977 – Radek Dvořák, Czech ice hockey player 1979 – Oscar Isaac, Guatemalan-American actor 1980 – Matthew Gray Gubler, American actor 
1981 – Antonio Bryant, American football player 1981 – Clay Rapada, American baseball player 1982 – Ryan Bayley, Australian cyclist 1982 – Matt Bowen, Australian rugby league player 1982 – Mirjana Lučić-Baroni, Croatian tennis player 1983 – Wayne Simien, American basketball player 1983 – Clint Dempsey, American international soccer player 1984 – Abdoulay Konko, French footballer 1984 – Julia Mancuso, American skier 1985 – Brent Burns, Canadian ice hockey player 1985 – Jesse Litsch, American baseball player 1985 – Pastor Maldonado, Venezuelan race car driver 1985 – Parthiv Patel, Indian cricketer 1986 – Colin Greening, Canadian ice hockey player 1986 – Brittany Snow, American actress and producer 1989 – Taeyeon, South Korean artist 1990 – Daley Blind, Dutch footballer 1990 – Matt Robinson, New Zealand rugby league player 1990 – YG, American rapper 1991 – Jooyoung, Korean singer-songwriter 1993 – Suga, South Korean rapper, songwriter, and record producer 1994 – Morgan Rielly, Canadian ice hockey player 1997 – Chika, American rapper Deaths Pre-1600 886 – Abu Ma'shar al-Balkhi, Muslim scholar and astrologer (b. 787) 1202 – Sverre of Norway 1440 – Frances of Rome, Italian nun and saint (b. 1384) 1444 – Leonardo Bruni, Italian humanist (b. c. 1370) 1463 – Catherine of Bologna, Italian nun and saint (b. 1413) 1566 – David Rizzio, Italian-Scottish courtier and politician (b. 1533) 1601–1900 1649 – James Hamilton, 1st Duke of Hamilton, Scottish soldier and politician (b. 1606) 1649 – Henry Rich, 1st Earl of Holland, English soldier and politician (b. 1590) 1661 – Cardinal Mazarin, Italian-French academic and politician, Prime Minister of France (b. 1602) 1709 – Ralph Montagu, 1st Duke of Montagu, English courtier and politician (b. 1638) 1808 – Joseph Bonomi the Elder, Italian architect (b. 1739) 1810 – Ozias Humphry, English painter and academic (b. 1742) 1825 – Anna Laetitia Barbauld, English poet, author, and critic (b. 
1743) 1831 – Friedrich Maximilian von Klinger, German author and playwright (b. 1752) 1847 – Mary Anning, English paleontologist (b. 1799) 1851 – Hans Christian Ørsted, Danish physicist and chemist (b. 1777) 1888 – William I, German Emperor (b. 1797) 1895 – Leopold von Sacher-Masoch, Austrian journalist and author (b. 1836) 1897 – Sondre Norheim, Norwegian-American skier (b. 1825) 1901–present 1918 – Frank Wedekind, German author and playwright (b. 1864) 1925 – Willard Metcalf, American painter and academic (b. 1858) 1926 – Mikao Usui, Japanese spiritual leader, founded Reiki (b. 1865) 1937 – Paul Elmer More, American journalist and critic (b. 1864) 1943 – Otto Freundlich, German painter and sculptor (b. 1878) 1954 – Vagn Walfrid Ekman, Swedish oceanographer and academic (b. 1874) 1955 – Miroslava, Czech-Mexican actress (b. 1925) 1964 – Paul von Lettow-Vorbeck, German general (b. 1870) 1969 – Abdul Munim Riad, Egyptian general (b. 1919) 1971 – Pope Cyril VI of Alexandria (b. 1902) 1974 – Earl Wilbur Sutherland, Jr., American pharmacologist and biochemist, Nobel Prize laureate (b. 1915) 1974 – Harry Womack, American singer (b. 1945) 1983 – Faye Emerson, American actress (b. 1917) 1983 – Ulf von Euler, Swedish physiologist and pharmacologist, Nobel Prize laureate (b. 1905) 1988 – Kurt Georg Kiesinger, German lawyer, politician and Chancellor of Germany (b. 1904) 1989 – Robert Mapplethorpe, American photographer (b. 1946) 1991 – Jim Hardin, American baseball player (b. 1943) 1992 – Menachem Begin, Belarusian-Israeli soldier, politician and Prime Minister of Israel, Nobel Prize laureate (b. 1913) 1993 – C. Northcote Parkinson, English historian and author (b. 1909) 1994 – Charles Bukowski, American poet, novelist, and short story writer (b. 1920) 1994 – Eddie Creatchman, Canadian wrestler, referee, and manager (b. 1928) 1994 – Fernando Rey, Spanish actor (b. 1917) 1996 – George Burns, American comedian, actor, and writer (b. 1896)
file formats for various applications. Together with JPEG, MPEG is organized under ISO/IEC JTC 1/SC 29 – Coding of audio, picture, multimedia and hypermedia information (ISO/IEC Joint Technical Committee 1, Subcommittee 29). MPEG formats are used in various multimedia systems. The best-known older MPEG media formats typically use MPEG-1, MPEG-2, and MPEG-4 AVC media coding and MPEG-2 systems transport streams and program streams. Newer systems typically use the MPEG base media file format and dynamic streaming (a.k.a. MPEG-DASH). History MPEG was established in 1988 on the initiative of Dr. Hiroshi Yasuda (NTT) and Dr. Leonardo Chiariglione (CSELT). Chiariglione was the group's chair (called Convenor in ISO/IEC terminology) from its inception until June 6, 2020. The first MPEG meeting was in May 1988 in Ottawa, Canada. Starting around the time of the MPEG-4 project in the late 1990s and continuing to the present, MPEG has grown to include approximately 300–500 members per meeting from various industries, universities, and research institutions. On June 6, 2020, the MPEG section of Chiariglione's personal website was updated to inform readers that he had retired as Convenor, and he said that the MPEG group (then SC 29/WG 11) "was closed". Chiariglione described his reasons for stepping down in his personal blog. His decision followed a restructuring process within SC 29, in which "some of the subgroups of WG 11 (MPEG) [became] distinct MPEG working groups (WGs) and advisory groups (AGs)" in July 2020. Prof. Jörn Ostermann of University of Hannover was appointed as Acting Convenor of SC 29/WG 11 during the restructuring period and was then appointed Convenor of SC 29's Advisory Group 2, which coordinates MPEG's overall technical activities. The MPEG structure that replaced the former Working Group 11 includes three Advisory Groups (AGs) and seven Working Groups (WGs): SC 29/AG 2: MPEG Technical Coordination (Convenor: Prof. 
Joern Ostermann of University of Hannover, Germany) SC 29/AG 3: MPEG Liaison and Communication (Convenor: Prof. Kyuheon Kim of Kyung Hee University, Korea) SC 29/AG 5: MPEG Visual Quality Assessment (Convenor: Dr. Mathias Wien of RWTH Aachen University, Germany) SC 29/WG 2: MPEG Technical Requirements (Convenor: Dr. Igor Curcio of Nokia, Finland) SC 29/WG 3: MPEG Systems (Convenor: Dr. Youngkwon Lim of Samsung, Korea) SC 29/WG 4: MPEG Video Coding (Convenor: Prof. Lu Yu of Zhejiang University, China) SC 29/WG 5: MPEG Joint Video Coding Team with ITU-T SG16 (Convenor: Prof. Jens-Rainer Ohm of RWTH Aachen University, Germany; formerly co-chairing with Dr. Gary Sullivan of Microsoft, United States) SC 29/WG 6: MPEG Audio coding (Convenor: Dr. Schuyler Quackenbush of Audio Research Labs, United States) SC 29/WG 7: MPEG 3D Graphics coding (Convenor: Prof. Marius Preda of Institut Mines-Télécom SudParis) SC 29/WG 8: MPEG Genomic coding (Convenor: Dr. Marco Mattavelli of EPFL, Switzerland) The first meeting under the current structure was held in October 2020. It (and all other MPEG meetings starting in April 2020) was held virtually by teleconference due to the COVID-19 pandemic. Cooperation with other groups MPEG-2 MPEG-2 development included a joint project between MPEG and ITU-T Study Group 15 (which later became ITU-T SG16), resulting in publication of the MPEG-2 Systems standard (ISO/IEC 13818-1, including its transport streams and program streams) as ITU-T H.222.0 and the MPEG-2 Video standard (ISO/IEC 13818-2) as ITU-T H.262. Sakae Okubo (NTT) was the ITU-T coordinator and chaired the agreements on its requirements. Joint Video Team Joint Video Team (JVT) was a joint project between ITU-T SG16/Q.6 (Study Group 16 / Question 6) – VCEG (Video Coding Experts Group) and ISO/IEC JTC 1/SC 29/WG 11 – MPEG for the development of a video coding ITU-T Recommendation and ISO/IEC International Standard. 
It was formed in 2001 and its main result was H.264/MPEG-4 AVC (MPEG-4 Part 10), which reduces the data rate for video coding by about 50%, as compared to the then-current ITU-T H.262 / MPEG-2 standard. The JVT was chaired by Dr. Gary Sullivan, with vice-chairs Dr. Thomas Wiegand of the Heinrich Hertz Institute in Germany and Dr. Ajay Luthra of Motorola in the United States. Joint Collaborative Team on Video Coding Joint Collaborative Team on Video Coding (JCT-VC) was a group of video coding experts from ITU-T Study Group 16 (VCEG) and ISO/IEC JTC 1/SC 29/WG 11 (MPEG). It was created in 2010 to develop High Efficiency Video Coding (HEVC, MPEG-H Part 2, ITU-T H.265), a video coding standard that further reduces by about 50% the data rate required for video coding, as compared to the then-current ITU-T H.264 / ISO/IEC 14496-10 standard. JCT-VC was co-chaired by Prof. Jens-Rainer Ohm and Gary Sullivan. Joint Video Experts Team Joint Video Experts Team (JVET) is a joint group of video coding experts from ITU-T Study Group 16 (VCEG) and ISO/IEC JTC 1/SC 29/WG 11 (MPEG) created in 2017 after an exploration phase that began in 2015. JVET developed Versatile Video Coding (VVC, MPEG-I Part 3, ITU-T H.266), completed in July 2020, which further reduces the data rate for video coding by about 50%, as compared to the then-current ITU-T H.265 / HEVC standard, and the JCT-VC was merged into JVET in July 2020. Like JCT-VC, JVET was co-chaired by Jens-Rainer Ohm and Gary Sullivan, until July 2021 when Ohm became the sole chair (after Sullivan became the chair of SC 29). Standards The MPEG standards consist of different Parts. Each Part covers a certain aspect of the whole specification. The standards also specify profiles and levels. Profiles are intended to define a set of tools that are available, and Levels define the range of appropriate values for the properties associated with them. Some of the approved MPEG standards were revised by later amendments and/or new editions. 
The primary early MPEG compression formats and related standards include: MPEG-1 (1993): Coding of moving pictures and associated audio for digital storage media at up to about 1.5 Mbit/s (ISO/IEC 11172). This initial version is known as a lossy file format and is the first MPEG compression standard for audio and video. It is commonly limited to about 1.5 Mbit/s although the specification is capable of much higher bit rates. It was basically designed to allow moving pictures and sound to be encoded into the bitrate of a Compact Disc. It is used on Video CD and can be used for low-quality video on DVD Video. It was used in digital satellite/cable TV services before MPEG-2 became widespread. To meet the low bitrate requirement, MPEG-1 downsamples the images, as well as uses picture rates of only 24–30 Hz, resulting in a moderate quality. It includes the popular MPEG-1 Audio Layer III (MP3) audio compression format. MPEG-2 (1996): Generic coding of moving pictures and associated audio information (ISO/IEC 13818). 
Part 2: Video Part 2 of the MPEG-1 standard covers video and is defined in ISO/IEC 11172-2. The design was heavily influenced by H.261. MPEG-1 Video exploits perceptual compression methods to significantly reduce the data rate required by a video stream. It reduces or completely discards information in certain frequencies and areas of the picture that the human eye has limited ability to fully perceive. It also exploits temporal (over time) and spatial (across a picture) redundancy common in video to achieve better data compression than would be possible otherwise. (See: Video compression) Color space Before encoding video to MPEG-1, the color space is transformed to Y′CbCr (Y′=Luma, Cb=Chroma Blue, Cr=Chroma Red). Luma (brightness, resolution) is stored separately from chroma (color, hue, phase) and even further separated into red and blue components. The chroma is also subsampled to 4:2:0, meaning it is reduced to half resolution vertically and half resolution horizontally, i.e., to just one quarter the number of samples used for the luma component of the video. This use of higher resolution for some color components is similar in concept to the Bayer pattern filter that is commonly used for the image capturing sensor in digital color cameras. Because the human eye is much more sensitive to small changes in brightness (the Y component) than in color (the Cr and Cb components), chroma subsampling is a very effective way to reduce the amount of video data that needs to be compressed. However, on videos with fine detail (high spatial complexity) this can manifest as chroma aliasing artifacts. Compared to other digital compression artifacts, this issue seems to very rarely be a source of annoyance. Because of the subsampling, Y′CbCr 4:2:0 video is ordinarily stored using even dimensions (divisible by 2 horizontally and vertically). 
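The conversion and subsampling steps above can be sketched in a few lines of NumPy. This is an illustrative sketch: the BT.601-style matrix coefficients and the simple 2×2 averaging are assumptions for clarity; real encoders may site the subsampled chroma samples differently.

```python
import numpy as np

def rgb_to_ycbcr_420(rgb):
    """Convert an RGB image (H, W, 3) with even H and W to Y'CbCr 4:2:0.

    BT.601-style coefficients (illustrative); chroma is reduced to a
    quarter of the luma sample count by averaging each 2x2 block.
    """
    rgb = rgb.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128.0
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 128.0
    h, w = y.shape
    # 4:2:0 -- halve chroma resolution both vertically and horizontally
    cb420 = cb.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    cr420 = cr.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    return y, cb420, cr420

# Full-resolution luma, quarter-sample chroma: (4, 4) vs (2, 2)
img = np.zeros((4, 4, 3), dtype=np.uint8)
y, cb, cr = rgb_to_ycbcr_420(img)
print(y.shape, cb.shape, cr.shape)
```

Note that a black input yields luma 0 and chroma centered on the 128 offset, matching the usual 8-bit Y′CbCr convention.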
Y′CbCr color is often informally called YUV to simplify the notation, although that term more properly applies to a somewhat different color format. Similarly, the terms luminance and chrominance are often used instead of the (more accurate) terms luma and chroma. Resolution/bitrate MPEG-1 supports resolutions up to 4095×4095 (12 bits), and bit rates up to 100 Mbit/s. MPEG-1 videos are most commonly seen using Source Input Format (SIF) resolution: 352×240, 352×288, or 320×240. These relatively low resolutions, combined with a bitrate less than 1.5 Mbit/s, make up what is known as a constrained parameters bitstream (CPB), later renamed the "Low Level" (LL) in MPEG-2. These are the minimum video specifications any decoder must be able to handle to be considered MPEG-1 compliant. This was selected to provide a good balance between quality and performance, allowing the use of reasonably inexpensive hardware of the time. Frame/picture/block types MPEG-1 has several frame/picture types that serve different purposes. The most important, yet simplest, is the I-frame. I-frames "I-frame" is an abbreviation for "Intra-frame", so called because such frames can be decoded independently of any other frames. They may also be known as I-pictures, or keyframes due to their somewhat similar function to the key frames used in animation. I-frames can be considered effectively identical to baseline JPEG images. High-speed seeking through an MPEG-1 video is only possible to the nearest I-frame. When cutting a video it is not possible to start playback of a segment of video before the first I-frame in the segment (at least not without computationally intensive re-encoding). For this reason, I-frame-only MPEG videos are used in editing applications. I-frame-only compression is very fast, but produces very large file sizes: a factor of 3× (or more) larger than normally encoded MPEG-1 video, depending on how temporally complex a specific video is. 
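The constrained parameters check described above can be sketched as a small helper. This is a hedged sketch: the function name is invented, and the numeric limits are the commonly cited CPB bounds (768×576 maximum dimensions, 396 macroblocks per picture, 30 Hz picture rate, 1.856 Mbit/s), not text quoted from the standard.

```python
def fits_constrained_parameters(width, height, fps, bitrate):
    """Rough check of stream parameters against commonly cited CPB limits.

    Limits used here (illustrative, not normative): width <= 768,
    height <= 576, at most 396 macroblocks per picture, picture rate
    <= 30 Hz, and bitrate <= 1.856 Mbit/s.
    """
    # Dimensions are padded up to whole 16x16 macroblocks for the count
    macroblocks = ((width + 15) // 16) * ((height + 15) // 16)
    return (width <= 768 and height <= 576 and macroblocks <= 396
            and fps <= 30 and bitrate <= 1_856_000)

print(fits_constrained_parameters(352, 288, 25, 1_150_000))  # True: SIF (Video CD, PAL)
print(fits_constrained_parameters(720, 576, 25, 4_000_000))  # False: too many macroblocks
```

Note how 352×288 works out to exactly 22×18 = 396 macroblocks, sitting right at the limit.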
I-frame-only MPEG-1 video is very similar to MJPEG video, so much so that very high-speed and theoretically lossless (in reality, there are rounding errors) conversion can be made from one format to the other, provided a couple of restrictions (color space and quantization matrix) are followed in the creation of the bitstream. The length between I-frames is known as the group of pictures (GOP) size. MPEG-1 most commonly uses a GOP size of 15–18, i.e., one I-frame for every 14–17 non-I-frames (some combination of P- and B- frames). With more intelligent encoders, GOP size is dynamically chosen, up to some pre-selected maximum limit. Limits are placed on the maximum number of frames between I-frames due to decoding complexity, decoder buffer size, recovery time after data errors, seeking ability, and accumulation of IDCT errors in low-precision implementations most common in hardware decoders (See: IEEE-1180). P-frames "P-frame" is an abbreviation for "Predicted-frame". They may also be called forward-predicted frames or inter-frames (B-frames are also inter-frames). P-frames exist to improve compression by exploiting the temporal (over time) redundancy in a video. P-frames store only the difference in image from the frame (either an I-frame or P-frame) immediately preceding it (this reference frame is also called the anchor frame). The difference between a P-frame and its anchor frame is calculated using motion vectors on each macroblock of the frame (see below). Such motion vector data will be embedded in the P-frame for use by the decoder. A P-frame can contain any number of intra-coded blocks, in addition to any forward-predicted blocks. If a video drastically changes from one frame to the next (such as a cut), it is more efficient to encode it as an I-frame. B-frames "B-frame" stands for "bidirectional-frame" or "bipredictive frame". They may also be known as backwards-predicted frames or B-pictures. 
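A typical fixed GOP layout like the one described above can be generated with a short helper. This is an illustrative sketch of the conventional pattern (an anchor every m frames); the standard does not mandate any particular layout, and smarter encoders choose it adaptively.

```python
def gop_pattern(gop_size=15, m=3):
    """Display-order frame types for one GOP: an I-frame at position 0,
    a P-frame at every m-th position after it, and B-frames in between
    (the trailing B-frames reference the next GOP's I-frame).
    A conventional fixed layout, not one mandated by the standard."""
    return ["I" if i == 0 else ("P" if i % m == 0 else "B")
            for i in range(gop_size)]

print("".join(gop_pattern(15, 3)))  # IBBPBBPBBPBBPBB
```

With gop_size=15 and m=3 this reproduces the classic 15-frame GOP: one I-frame, four P-frames, and ten B-frames.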
B-frames are quite similar to P-frames, except they can make predictions using both the previous and future frames (i.e. two anchor frames). It is therefore necessary for the player to first decode the next I- or P- anchor frame sequentially after the B-frame, before the B-frame can be decoded and displayed. This means decoding B-frames requires larger data buffers and causes an increased delay during both decoding and encoding. This also necessitates the decoding time stamps (DTS) feature in the container/system stream (see above). As such, B-frames have long been the subject of much controversy; they are often avoided in videos, and are sometimes not fully supported by hardware decoders. No other frames are predicted from a B-frame. Because of this, a very low bitrate B-frame can be inserted, where needed, to help control the bitrate. If this was done with a P-frame, future P-frames would be predicted from it and would lower the quality of the entire sequence. However, similarly, the future P-frame must still encode all the changes between it and the previous I- or P- anchor frame. B-frames can also be beneficial in videos where the background behind an object is being revealed over several frames, or in fading transitions, such as scene changes. A B-frame can contain any number of intra-coded blocks and forward-predicted blocks, in addition to backwards-predicted, or bidirectionally predicted blocks. D-frames MPEG-1 has a unique frame type not found in later video standards. "D-frames" or DC-pictures are independently coded images (intra-frames) that have been encoded using DC transform coefficients only (AC coefficients are removed when encoding D-frames—see DCT below) and hence are very low quality. D-frames are never referenced by I-, P- or B- frames. D-frames are only used for fast previews of video, for instance when seeking through a video at high speed. 
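The display-to-decode reordering that B-frames force on the player can be sketched as follows. This is a simplified model (each anchor frame is emitted before the B-frames that precede it in display order); real streams express the same reordering through the DTS/PTS timestamps mentioned above.

```python
def decode_order(display_types):
    """Map display order to decode order: each anchor (I/P) frame must be
    decoded before the B-frames that precede it in display order.
    Frames are returned as (type, display_index) pairs. A simplified
    model of the reordering expressed by DTS/PTS timestamps."""
    out, pending_b = [], []
    for i, t in enumerate(display_types):
        if t == "B":
            pending_b.append((t, i))      # wait for the next anchor
        else:                             # I or P: emit it, then the
            out.append((t, i))            # B-frames it completes
            out.extend(pending_b)
            pending_b.clear()
    out.extend(pending_b)                 # trailing B-frames (open GOP)
    return out

print(decode_order("IBBP"))  # [('I', 0), ('P', 3), ('B', 1), ('B', 2)]
```

The example shows why B-frames add delay: the P-frame displayed fourth must be decoded second, before either B-frame.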
Given moderately higher-performance decoding equipment, fast preview can be accomplished by decoding I-frames instead of D-frames. This provides higher quality previews, since I-frames contain AC coefficients as well as DC coefficients. If the encoder can assume that rapid I-frame decoding capability is available in decoders, it can save bits by not sending D-frames (thus improving compression of the video content). For this reason, D-frames are seldom actually used in MPEG-1 video encoding, and the D-frame feature has not been included in any later video coding standards. Macroblocks MPEG-1 operates on video in a series of 8×8 blocks for quantization. However, to reduce the bit rate needed for motion vectors and because chroma (color) is subsampled by a factor of 4, each pair of (red and blue) chroma blocks corresponds to 4 different luma blocks. This set of 6 blocks, with a resolution of 16×16, is processed together and called a macroblock. A macroblock is the smallest independent unit of (color) video. Motion vectors (see below) operate solely at the macroblock level. If the height or width of the video is not an exact multiple of 16, full rows and full columns of macroblocks must still be encoded and decoded to fill out the picture (though the extra decoded pixels are not displayed). Motion vectors To decrease the amount of temporal redundancy in a video, only blocks that change are updated (up to the maximum GOP size). This is known as conditional replenishment. However, this is not very effective by itself. Movement of the objects and/or the camera may result in large portions of the frame needing to be updated, even though only the position of the previously encoded objects has changed. Through motion estimation, the encoder can compensate for this movement and remove a large amount of redundant information. 
The encoder compares the current frame with adjacent parts of the video from the anchor frame (previous I- or P- frame) in a diamond pattern, up to an (encoder-specific) predefined radius limit from the area of the current macroblock. If a match is found, only the direction and distance (i.e. the vector of the motion) from the previous video area to the current macroblock need to be encoded into the inter-frame (P- or B- frame). The reverse of this process, performed by the decoder to reconstruct the picture, is called motion compensation. A predicted macroblock rarely matches the current picture perfectly, however. The difference between the estimated matching area and the real frame/macroblock is called the prediction error. The larger the prediction error, the more data must additionally be encoded in the frame. For efficient video compression, it is very important that the encoder is capable of effectively and precisely performing motion estimation. Motion vectors record the distance between two areas on screen based on the number of pixels (also called pels). MPEG-1 video uses a motion vector (MV) precision of one half of one pixel, or half-pel. The finer the precision of the MVs, the more accurate the match is likely to be, and the more efficient the compression. There are trade-offs to higher precision, however: finer MV precision requires a larger amount of data to represent the MV, as larger numbers must be stored in the frame for every single MV; it increases coding complexity, as increasing levels of interpolation on the macroblock are required for both the encoder and decoder; and it offers diminishing returns (minimal gains) as precision rises. Half-pel precision was chosen as the ideal trade-off for that point in time. (See: qpel) Because neighboring macroblocks are likely to have very similar motion vectors, this redundant information can be compressed quite effectively by being stored DPCM-encoded.
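The block-matching idea described above can be sketched as an exhaustive search minimizing the sum of absolute differences (SAD). Real encoders use faster search patterns (such as the diamond pattern mentioned above) and half-pel refinement, both of which this toy full-pel version omits:

```python
def sad(a, b):
    """Sum of absolute differences between two equally sized blocks."""
    return sum(abs(x - y) for ra, rb in zip(a, b) for x, y in zip(ra, rb))

def extract(frame, x, y, size):
    """Cut a size x size block out of a frame (list of rows)."""
    return [row[x:x + size] for row in frame[y:y + size]]

def motion_search(anchor, current, x, y, size=4, radius=2):
    """Find the offset into the anchor frame that best predicts the block
    at (x, y) of the current frame. Exhaustive search for clarity;
    returns (dx, dy, prediction_error)."""
    target = extract(current, x, y, size)
    best = None
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            ax, ay = x + dx, y + dy
            if ax < 0 or ay < 0 or ay + size > len(anchor) or ax + size > len(anchor[0]):
                continue  # candidate block falls outside the anchor frame
            err = sad(extract(anchor, ax, ay, size), target)
            if best is None or err < best[2]:
                best = (dx, dy, err)
    return best

# A bright 4x4 object moves one pixel to the right between frames.
anchor = [[100 if 2 <= r <= 5 and 2 <= c <= 5 else 0 for c in range(8)] for r in range(8)]
current = [[100 if 2 <= r <= 5 and 3 <= c <= 6 else 0 for c in range(8)] for r in range(8)]
print(motion_search(anchor, current, x=3, y=2))  # (-1, 0, 0): exact match one pixel left
```

A zero prediction error, as here, means only the motion vector needs to be coded for this block.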
Only the (smaller) amount of difference between the MVs for each macroblock needs to be stored in the final bitstream. P-frames have one motion vector per macroblock, relative to the previous anchor frame. B-frames, however, can use two motion vectors: one from the previous anchor frame, and one from the future anchor frame. Partial macroblocks, and black borders/bars encoded into the video that do not fall exactly on a macroblock boundary, wreak havoc with motion prediction. The block padding/border information prevents the macroblock from closely matching any other area of the video, and so significantly larger prediction error information must be encoded for every one of the several dozen partial macroblocks along the screen border. DCT encoding and quantization (see below) are also not nearly as effective when there is large/sharp picture contrast in a block. An even more serious problem exists with macroblocks that contain significant, random edge noise, where the picture transitions to (typically) black. All the above problems also apply to edge noise. In addition, the added randomness is simply impossible to compress significantly. All of these effects will lower the quality (or increase the bitrate) of the video substantially.

DCT

Each 8×8 block is encoded by first applying a forward discrete cosine transform (FDCT) and then a quantization process. The FDCT process (by itself) is theoretically lossless, and can be reversed by applying an inverse DCT (IDCT) to reproduce the original values (in the absence of any quantization and rounding errors). In reality, there are some (sometimes large) rounding errors introduced both by quantization in the encoder (as described in the next section) and by IDCT approximation error in the decoder. The minimum allowed accuracy of a decoder IDCT approximation is defined by ISO/IEC 23002-1. (Prior to 2006, it was specified by IEEE 1180-1990.)
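The FDCT/IDCT round trip can be illustrated with a direct, unoptimized 2-D DCT in Python. This sketch uses the orthonormal normalization for simplicity, which differs from the scaling details in the standard, but it shows that without quantization the transform is reversible up to floating-point rounding:

```python
import math

N = 8

def _scale(k):
    """Orthonormal DCT scale factor."""
    return math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)

def fdct(block):
    """Direct 2-D DCT-II of an 8x8 block. out[0][0] is the DC coefficient."""
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            s = sum(block[y][x]
                    * math.cos((2 * x + 1) * v * math.pi / (2 * N))
                    * math.cos((2 * y + 1) * u * math.pi / (2 * N))
                    for y in range(N) for x in range(N))
            out[u][v] = _scale(u) * _scale(v) * s
    return out

def idct(coeffs):
    """Inverse transform (DCT-III); recovers the original block exactly,
    up to floating-point rounding, when no quantization is applied."""
    out = [[0.0] * N for _ in range(N)]
    for y in range(N):
        for x in range(N):
            out[y][x] = sum(_scale(u) * _scale(v) * coeffs[u][v]
                            * math.cos((2 * x + 1) * v * math.pi / (2 * N))
                            * math.cos((2 * y + 1) * u * math.pi / (2 * N))
                            for u in range(N) for v in range(N))
    return out

block = [[x + y for x in range(N)] for y in range(N)]  # a smooth gradient
coeffs = fdct(block)
rec = idct(coeffs)
max_err = max(abs(rec[y][x] - block[y][x]) for y in range(N) for x in range(N))
```

With this normalization the DC term equals 8 times the block mean (56 here, for a mean of 7), and `max_err` is at floating-point noise level, illustrating that the lossy part of the pipeline is quantization, not the transform itself.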
The FDCT process converts the 8×8 block of uncompressed pixel values (brightness or color difference values) into an 8×8 indexed array of frequency coefficient values. One of these is the (statistically high in variance) "DC coefficient", which represents the average value of the entire 8×8 block. The other 63 coefficients are the statistically smaller "AC coefficients", which have positive or negative values, each representing sinusoidal deviations from the flat block value represented by the DC coefficient. An example of an encoded 8×8 FDCT block: Since the DC coefficient value is statistically correlated from one block to the next, it is compressed using DPCM encoding. Only the (smaller) amount of difference between each DC value and the value of the DC coefficient in the block to its left needs to be represented in the final bitstream. Additionally, the frequency conversion performed by applying the DCT provides a statistical decorrelation function to efficiently concentrate the signal into fewer high-amplitude values prior to applying quantization (see below).

Quantization

Quantization is, essentially, the process of reducing the accuracy of a signal by dividing it by some larger step size and rounding to an integer value (i.e. finding the nearest multiple, and discarding the remainder). The frame-level quantizer is a number from 0 to 31 (although encoders will usually omit/disable some of the extreme values) which determines how much information will be removed from a given frame. The frame-level quantizer is typically either dynamically selected by the encoder to maintain a certain user-specified bitrate, or (much less commonly) directly specified by the user. A "quantization matrix" is a string of 64 numbers (ranging from 0 to 255) which tells the encoder how relatively important or unimportant each piece of visual information is. Each number in the matrix corresponds to a certain frequency component of the video image.
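The DPCM scheme for DC coefficients described above (coding each DC value as a difference from the block to its left) can be sketched as follows; the initial predictor value used here is illustrative, not the standard's reset value:

```python
def dpcm_encode(dc_values, predictor=128):
    """Replace each DC coefficient with its difference from the previous
    block's DC (the block to its left). The starting predictor of 128 is
    an illustrative assumption."""
    diffs = []
    for dc in dc_values:
        diffs.append(dc - predictor)
        predictor = dc
    return diffs

def dpcm_decode(diffs, predictor=128):
    """Reverse the prediction: a running sum restores the original values."""
    out = []
    for d in diffs:
        predictor += d
        out.append(predictor)
    return out

dcs = [130, 132, 131, 131, 140]  # neighboring blocks have similar averages,
print(dpcm_encode(dcs))          # so the stored differences stay small:
                                 # [2, 2, -1, 0, 9]
```

Small difference values take fewer bits to represent than the full DC values, which is the entire point of the prediction.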
An example quantization matrix: Quantization is performed by taking each of the 64 frequency values of the DCT block, dividing them by the frame-level quantizer, then dividing them by their corresponding values in the quantization matrix. Finally, the result is rounded down. This significantly reduces, or completely eliminates, the information in some frequency components of the picture. Typically, high-frequency information is less visually important, and so high frequencies are much more strongly quantized (drastically reduced). MPEG-1 actually uses two separate quantization matrices, one for intra-blocks (I-blocks) and one for inter-blocks (P- and B- blocks), so quantization of different block types can be done independently, and so, more effectively. This quantization process usually reduces a significant number of the AC coefficients to zero (known as sparse data), which can then be more efficiently compressed by entropy coding (lossless compression) in the next step. An example quantized DCT block: Quantization eliminates a large amount of data, and is the main lossy processing step in MPEG-1 video encoding. It is also the primary source of most MPEG-1 video compression artifacts, such as blockiness, color banding, noise, ringing, and discoloration. These occur when video is encoded with an insufficient bitrate, and the encoder is therefore forced to use high frame-level quantizers (strong quantization) through much of the video.

Entropy coding

Several steps in the encoding of MPEG-1 video are lossless, meaning they will be reversed upon decoding to produce exactly the same (original) values. Because these lossless data compression steps do not add noise to, or otherwise change, the contents (unlike quantization), this is sometimes referred to as noiseless coding. Since lossless compression aims to remove as much redundancy as possible, it is known as entropy coding in the field of information theory.
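The quantization procedure described above can be sketched like this. It is a simplified model (the standard's exact arithmetic, including the intra/inter distinction and rounding rules, has more detail), using tiny 2×2 arrays so the effect is easy to see:

```python
def quantize(coeffs, matrix, quantizer):
    """Divide each DCT coefficient by the frame-level quantizer and its
    quantization-matrix entry, then truncate to an integer. A simplified
    model of the procedure described above."""
    return [[int(c / (quantizer * m)) for c, m in zip(crow, mrow)]
            for crow, mrow in zip(coeffs, matrix)]

def dequantize(levels, matrix, quantizer):
    """Decoder side: rescale the integer levels; the discarded remainders
    are the (lossy) quantization error."""
    return [[lv * quantizer * m for lv, m in zip(lrow, mrow)]
            for lrow, mrow in zip(levels, matrix)]

# Toy 2x2 example: larger matrix entries (higher frequencies) are
# quantized more strongly, and small coefficients collapse to zero,
# producing the sparse data mentioned above.
coeffs = [[360, 60], [-48, 10]]
matrix = [[8, 16], [16, 32]]
levels = quantize(coeffs, matrix, quantizer=2)
print(levels)                                   # [[22, 1], [-1, 0]]
print(dequantize(levels, matrix, quantizer=2))  # [[352, 32], [-32, 0]]
```

Note how the reconstructed values differ from the originals: that difference is exactly the information quantization throws away, and the zeros it creates are what the run-length stage compresses so well.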
The coefficients of quantized DCT blocks tend to zero towards the bottom-right. Maximum compression can be achieved by a zig-zag scanning of the DCT block starting from the top left and using run-length encoding techniques. The DC coefficients and motion vectors are DPCM-encoded. Run-length encoding (RLE) is a simple method of compressing repetition. A sequential string of characters, no matter how long, can be replaced with a few bytes, noting the value that repeats and how many times. For example, if someone were to say "five nines", you would know they mean the number 99999. RLE is particularly effective after quantization, as a significant number of the AC coefficients are now zero (called sparse data), and can be represented with just a couple of bytes. This is stored in a special 2-dimensional Huffman table that codes the run-length and the run-ending character. Huffman coding is a very popular and relatively simple method of entropy coding, and is used in MPEG-1 video to reduce the data size. The data is analyzed to find strings that repeat often. Those strings are then put into a special table, with the most frequently repeating data assigned the shortest code. This keeps the data as small as possible with this form of compression. Once the table is constructed, those strings in the data are replaced with their (much smaller) codes, which reference the appropriate entry in the table. The decoder simply reverses this process to produce the original data. This is the final step in the video encoding process, so the result of Huffman coding is known as the MPEG-1 video "bitstream."

GOP configurations for specific applications

I-frames store complete frame information within the frame and are therefore suited for random access. P-frames provide compression using motion vectors relative to the previous frame (I or P). B-frames provide maximum compression but require the previous as well as the next frame for computation.
Therefore, processing of B-frames requires more buffer on the decoder side. A configuration of the group of pictures (GOP) should be selected based on these factors. I-frame-only sequences give the least compression but are useful for random access, FF/FR, and editability. I- and P-frame sequences give moderate compression but add a certain degree of random access and FF/FR functionality. I-, P- and B-frame sequences give very high compression but also increase the coding/decoding delay significantly. Such configurations are therefore not suited for video-telephony or video-conferencing applications. The typical data rate of an I-frame is 1 bit per pixel, while that of a P-frame is 0.1 bit per pixel and that of a B-frame, 0.015 bit per pixel.

Part 3: Audio

Part 3 of the MPEG-1 standard covers audio and is defined in ISO/IEC 11172-3. MPEG-1 Audio utilizes psychoacoustics to significantly reduce the data rate required by an audio stream. It reduces or completely discards certain parts of the audio that it deduces the human ear can't hear, either because they are in frequencies where the ear has limited sensitivity, or because they are masked by other (typically louder) sounds.

Channel encoding:
Mono
Joint stereo – intensity encoded
Joint stereo – M/S encoded (Layer III only)
Stereo
Dual (two uncorrelated mono channels)

Sampling rates: 32000, 44100, and 48000 Hz
Bitrates for Layer I: 32, 64, 96, 128, 160, 192, 224, 256, 288, 320, 352, 384, 416 and 448 kbit/s
Bitrates for Layer II: 32, 48, 56, 64, 80, 96, 112, 128, 160, 192, 224, 256, 320 and 384 kbit/s
Bitrates for Layer III: 32, 40, 48, 56, 64, 80, 96, 112, 128, 160, 192, 224, 256 and 320 kbit/s

MPEG-1 Audio is divided into 3 layers. Each higher layer is more computationally complex, and generally more efficient at lower bitrates, than the previous. The layers are semi-backwards compatible, as higher layers reuse technologies implemented by the lower layers.
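The per-frame-type data-rate figures quoted above give a rough way to estimate a video bitrate for a given GOP configuration. A back-of-the-envelope sketch (the GOP pattern, resolution, and frame rate chosen here are illustrative, not mandated by MPEG-1):

```python
def estimate_bitrate(width, height, fps, gop_pattern, bits_per_pixel):
    """Rough video bitrate from per-frame-type bits-per-pixel figures."""
    pixels = width * height
    bits_per_gop = sum(pixels * bits_per_pixel[t] for t in gop_pattern)
    return bits_per_gop * fps / len(gop_pattern)  # bits per second

# Typical figures quoted above: I = 1, P = 0.1, B = 0.015 bit per pixel.
BPP = {"I": 1.0, "P": 0.1, "B": 0.015}
rate = estimate_bitrate(352, 240, 30, "IBBPBBPBBPBBPBB", BPP)
print(round(rate / 1000))  # ~262 kbit/s under these assumptions
```

This also shows why B-heavy GOPs compress so well: ten of the fifteen frames in this pattern cost almost nothing compared with the single I-frame.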
A "full" Layer II decoder can also play Layer I audio, but not Layer III audio, although not all higher-level players are "full".

Layer I

MPEG-1 Audio Layer I is a simplified version of MPEG-1 Audio Layer II. Layer I uses a smaller 384-sample frame size for very low delay and finer resolution. This is advantageous for applications like teleconferencing, studio editing, etc. It has lower complexity than Layer II, to facilitate real-time encoding on the hardware available at the time. Layer I saw limited adoption in its time, and most notably was used on Philips' defunct Digital Compact Cassette at a bitrate of 384 kbit/s. With the substantial performance improvements in digital processing since its introduction, Layer I quickly became unnecessary and obsolete. Layer I audio files typically use the extension ".mp1" or sometimes ".m1a".

Layer II

MPEG-1 Audio Layer II (the first version of MP2, often informally called MUSICAM) is a lossy audio format designed to provide high quality at about 192 kbit/s for stereo sound. Decoding MP2 audio is computationally simple relative to MP3, AAC, etc.

History/MUSICAM

MPEG-1 Audio Layer II was derived from the MUSICAM (Masking pattern adapted Universal Subband Integrated Coding And Multiplexing) audio codec, developed by Centre commun d'études de télévision et télécommunications (CCETT), Philips, and Institut für Rundfunktechnik (IRT/CNET) as part of the EUREKA 147 pan-European inter-governmental research and development initiative for the development of digital audio broadcasting. Most key features of MPEG-1 Audio were directly inherited from MUSICAM, including the filter bank, time-domain processing, audio frame sizes, etc. However, improvements were made, and the actual MUSICAM algorithm was not used in the final MPEG-1 Audio Layer II standard. The widespread usage of the term MUSICAM to refer to Layer II is entirely incorrect and discouraged for both technical and legal reasons.

Technical details

MP2 is a time-domain encoder.
It uses a low-delay 32 sub-band polyphased filter bank for time-frequency mapping, with overlapping ranges (i.e. polyphased) to prevent aliasing. The psychoacoustic model is based on the principles of auditory masking, simultaneous masking effects, and the absolute threshold of hearing (ATH). The size of a Layer II frame is fixed at 1152 samples (coefficients). Time domain refers to how analysis and quantization are performed on short, discrete samples/chunks of the audio waveform. This offers low delay, as only a small number of samples are analyzed before encoding, as opposed to frequency-domain encoding (like MP3), which must analyze many times more samples before it can decide how to transform and output encoded audio. This also offers higher performance on complex, random and transient impulses (such as percussive instruments and applause), offering avoidance of artifacts like pre-echo. The 32 sub-band filter bank returns 32 amplitude coefficients, one for each equal-sized frequency band/segment of the audio, each of which is about 700 Hz wide (depending on the audio's sampling frequency). The encoder then utilizes the psychoacoustic model to determine which sub-bands contain
MJPEG video. So much so that very high-speed and theoretically lossless (in reality, there are rounding errors) conversion can be made from one format to the other, provided a couple of restrictions (color space and quantization matrix) are followed in the creation of the bitstream. The length between I-frames is known as the group of pictures (GOP) size. MPEG-1 most commonly uses a GOP size of 15–18, i.e. 1 I-frame for every 14–17 non-I-frames (some combination of P- and B- frames). With more intelligent encoders, GOP size is dynamically chosen, up to some pre-selected maximum limit. Limits are placed on the maximum number of frames between I-frames due to decoding complexity, decoder buffer size, recovery time after data errors, seeking ability, and accumulation of IDCT errors in low-precision implementations most common in hardware decoders (see: IEEE 1180).

P-frames

"P-frame" is an abbreviation for "Predicted-frame". They may also be called forward-predicted frames or inter-frames (B-frames are also inter-frames). P-frames exist to improve compression by exploiting the temporal (over time) redundancy in a video. P-frames store only the difference in image from the frame (either an I-frame or P-frame) immediately preceding it (this reference frame is also called the anchor frame). The difference between a P-frame and its anchor frame is calculated using motion vectors on each macroblock of the frame (see below). Such motion vector data will be embedded in the P-frame for use by the decoder. A P-frame can contain any number of intra-coded blocks, in addition to any forward-predicted blocks. If a video drastically changes from one frame to the next (such as a cut), it is more efficient to encode it as an I-frame.

B-frames

"B-frame" stands for "bidirectional-frame" or "bipredictive frame". They may also be known as backwards-predicted frames or B-pictures.
to other prisoners around the world. In addition, he has written and published several books: Live From Death Row (1995), a diary of life on Pennsylvania's death row; All Things Censored (2000), a collection of essays examining issues of crime and punishment; Death Blossoms: Reflections from a Prisoner of Conscience (2003), in which he explores religious themes; and We Want Freedom: A Life in the Black Panther Party (2004), a history of the Black Panthers that draws on his own experience and research, and discusses COINTELPRO, the federal government's program to disrupt black activist organizations. In 1995, Abu-Jamal was punished with solitary confinement for engaging in entrepreneurship contrary to prison regulations. Subsequent to the airing of the 1996 HBO documentary Mumia Abu-Jamal: A Case For Reasonable Doubt?, which included footage from visitation interviews conducted with him, the Pennsylvania Department of Corrections banned outsiders from using any recording equipment in state prisons. In litigation before the U.S. Court of Appeals, in 1998 Abu-Jamal successfully established his right while in prison to write for financial gain. The same litigation also established that the Pennsylvania Department of Corrections had illegally opened his mail in an attempt to establish whether he was earning money by his writing. When, for a brief time in August 1999, Abu-Jamal began delivering his radio commentaries live on the Pacifica Network's Democracy Now! weekday radio newsmagazine, prison staff severed the connecting wires of his telephone from their mounting in mid-performance. He was later allowed to resume his broadcasts, and hundreds of his broadcasts have been aired on Pacifica Radio. Following the overturning of his death sentence, Abu-Jamal was sentenced to life in prison in December 2011. At the end of January 2012, he was shifted from the isolation of death row into the general prison population at State Correctional Institution – Mahanoy.
In August 2015 his attorneys filed suit in the U.S. District Court for the Middle District of Pennsylvania, alleging that he has not received appropriate medical care for his serious health conditions. In April 2021, he tested positive for COVID-19 and was scheduled for heart surgery to relieve blocked coronary arteries. Popular support and opposition Labor unions, politicians, advocates, educators, the NAACP Legal Defense and Educational Fund, and human rights advocacy organizations such as Human Rights Watch and Amnesty International have expressed concern about the impartiality of the trial of Abu-Jamal. Amnesty International neither takes a position on the guilt or innocence of Abu-Jamal nor classifies him as a political prisoner. The family of Daniel Faulkner, the Commonwealth of Pennsylvania, the City of Philadelphia, politicians, and the Fraternal Order of Police have continued to support the original trial and sentencing of the journalist. In August 1999, the Fraternal Order of Police called for an economic boycott against all individuals and organizations that support Abu-Jamal. Many such groups operate within the Prison-Industrial Complex, a system which Abu-Jamal has frequently criticized. Partly based on his own writing, Abu-Jamal and his cause have become widely known internationally, and other groups have classified him as a political prisoner. About 25 cities, including Montreal, Palermo, and Paris, have made him an honorary citizen. In 2001, he received the sixth biennial Erich Mühsam Prize, named after an anarcho-communist essayist, which recognizes activism in line with that of its namesake. In October 2002, he was made an honorary member of the German political organization Society of People Persecuted by the Nazi Regime – Federation of Anti-Fascists (VVN-BdA). On April 29, 2006, a newly paved road in the Parisian suburb of Saint-Denis was named Rue Mumia Abu-Jamal in his honor. In protest of the street-naming, U.S. 
Congressman Michael Fitzpatrick and Senator Rick Santorum, both members of the Republican Party of Pennsylvania, introduced resolutions in both Houses of Congress condemning the decision. The House of Representatives voted 368–31 in favor of Fitzpatrick's resolution. In December 2006, the 25th anniversary of the murder, the executive committee of the Republican Party for the 59th Ward of the City of Philadelphia—covering approximately Germantown, Philadelphia—filed two criminal complaints in the French legal system against the city of Paris and the city of Saint-Denis, accusing the municipalities of "glorifying" Abu-Jamal and alleging the offense "apology or denial of crime" in respect of their actions. In 2007, the widow of Officer Faulkner co-authored a book with Philadelphia radio journalist Michael Smerconish titled Murdered by Mumia: A Life Sentence of Pain, Loss, and Injustice. The book was part memoir of Faulkner's widow, and part discussion in which they chronicled Abu-Jamal's trial and discussed evidence for his conviction. They also discussed support for the death penalty. In early 2014, President Barack Obama nominated Debo Adegbile, a former lawyer for the NAACP Legal Defense Fund, to head the civil rights division of the Justice Department. He had worked on Abu-Jamal's case, and his nomination was rejected by the U.S. Senate on a bipartisan basis because of that. On April 10, 2015, Marylin Zuniga, a teacher at Forest Street Elementary School in Orange, New Jersey, was suspended without pay after asking her students to write cards to Abu-Jamal, who was ill in prison due to complications from diabetes, without approval from the school or parents. Some parents and police leaders denounced her actions. On the other hand, community members, parents, teachers, and professors expressed their support and condemned Zuniga's suspension. 
Scholars and educators nationwide, including Noam Chomsky, Chris Hedges and Cornel West among others, signed a letter calling for her immediate reinstatement. On May 13, 2015, the Orange Preparatory Academy board voted to dismiss Marylin Zuniga after hearing from her and several of her supporters. Written works Have Black Lives Ever Mattered?, City Lights Publishers (2017); Writing on the Wall: Selected Prison Writings of Mumia Abu-Jamal, City Lights Publishers (2015); The Classroom and the Cell: Conversations on Black Life in America, Third World Press (2011); Jailhouse Lawyers: Prisoners Defending Prisoners v. the U.S.A., City Lights Publishers (2009); We Want Freedom: A Life in the Black Panther Party, South End Press (2008); Faith of Our Fathers: An Examination of the Spiritual Life of African and African-American People, Africa World Press (2003); All Things Censored, Seven Stories Press (2000); Death Blossoms: Reflections from a Prisoner of Conscience, Plough Publishing House (1997); Live from Death Row, Harper Perennial (1996). Representation in popular culture HBO aired the documentary film Mumia Abu-Jamal: A Case For Reasonable Doubt? in 1996; this 57-minute film about the 1982 murder trial was directed by John Edginton. There are two versions by Edginton, both produced by Otmoor Productions; the second is 72 minutes long and contains additional information from witnesses. Political hip hop artist Immortal Technique featured Abu-Jamal on his second album Revolutionary Vol. 2. The punk band Anti-Flag has a speech from Mumia Abu-Jamal in the intro to their song "The Modern Rome Burning" from their 2008 album The Bright Lights of America. The speech also appears at the end of their preceding track "Vices". The documentary film In Prison My Whole Life (2008), directed by Marc Evans, and written by Evans and William Francome, explores the life of Abu-Jamal.
See also Black Lives Matter In Prison My Whole Life – 2008 documentary film References External links Interview on the Mumia-Abu-Jamal Case, Part 1, 1995-11-01, In Black America; National Association of Black Journalists, KUT Radio, American Archive of Public Broadcasting (WGBH and the Library of Congress) Interview on the Mumia-Abu-Jamal Case, Part 2, 1995-11-01, In Black America; National Association of Black Journalists, KUT Radio, American Archive of Public Broadcasting (WGBH and the Library of Congress) Interview on the Mumia-Abu-Jamal Case, Part 3, 1996-11-01, In Black America; National Association of Black Journalists, KUT Radio, American Archive of Public Broadcasting (WGBH and the Library of Congress) Video 1996 interview with Mumia Abu-Jamal, by Monica Moorehead and Larry Holmes of Workers World Party Competing Films Offer Differing Views – video report by Democracy Now!
use of the Beverly affidavit. Some thought it usable and others rejected Beverly's story as "not credible". Private investigator George Newman claimed in 2001 that Chobert had recanted his testimony. Commentators noted that police and news photographs of the crime scene did not show Chobert's taxi, and that Cynthia White, the only witness at the original trial to testify to seeing the taxi, had previously provided crime scene descriptions that omitted it. Cynthia White was declared to be dead by the state of New Jersey in 1992, but Pamela Jenkins claimed that she saw White alive as late as 1997. The Free Mumia Coalition has claimed that White was a police informant and that she falsified her testimony against Abu-Jamal. Kenneth Pate, who was imprisoned with Abu-Jamal on other charges, has since claimed that his step-sister Priscilla Durham, a hospital security guard, admitted later she had not heard the "hospital confession" to which she had testified at trial. The hospital doctors said that Abu-Jamal was "on the verge of fainting" when brought in, and they did not hear any such confession. In 2008, the Supreme Court of Pennsylvania rejected a further request from Abu-Jamal for a hearing into claims that the trial witnesses perjured themselves, on the grounds that he had waited too long before filing the appeal. On March 26, 2012, the Supreme Court of Pennsylvania rejected his appeal for retrial. His defense had asserted, based on a 2009 report by the National Academy of Sciences, that forensic evidence presented by the prosecution and accepted into evidence in the original trial was unreliable. This was reported as Abu-Jamal's last legal appeal. On April 30, 2018, the Pennsylvania Supreme Court ruled that Abu-Jamal would not be immediately granted another appeal and that the proceedings had to continue until August 30 of that year. The defense argued that former Pennsylvania Supreme Court Chief justice Ronald D. 
Castille should have recused himself from the 2012 appeals decision after his involvement as Philadelphia District Attorney (DA) in the 1989 appeal. Both sides of the 2018 proceedings repeatedly cited a 1990 letter sent by Castille to then-Governor Bob Casey, urging Casey to sign the execution warrants of those convicted of murdering police. This letter, demanding Casey send "a clear and dramatic message to all cop killers," was claimed to be one of many reasons to suspect Castille's bias in the case. Philadelphia's current DA Larry Krasner stated he could not find any document supporting the defense's claim. On August 30, 2018, the proceedings to determine another appeal were once again extended and a ruling on the matter was delayed for at least 60 more days. Federal District Court 2001 ruling The Free Mumia Coalition published statements by William Cook and his brother Abu-Jamal in the spring of 2001. Cook, who had been stopped by the police officer, had not made any statement before April 29, 2001, and did not testify at his brother's trial. In 2001 he said that he had not seen who had shot Faulkner. Abu-Jamal did not make any public statements about Faulkner's murder until May 4, 2001. In his version of events, he claimed that he was sitting in his cab across the street when he heard shouting, saw a police vehicle, and heard the sound of gunshots. Upon seeing his brother appearing disoriented across the street, Abu-Jamal ran to him from the parking lot and was shot by a police officer. In 2001 Judge William H. Yohn, Jr. of the United States District Court for the Eastern District of Pennsylvania upheld the conviction, saying that Abu-Jamal did not have the right to a new trial. But he vacated the sentence of death on December 18, 2001, citing irregularities in the penalty phase of the trial and the original process of sentencing.
He ordered the State of Pennsylvania to commence new sentencing proceedings within 180 days, and ruled unconstitutional the requirement that a jury be unanimous in its finding of circumstances mitigating against a sentence of death. Eliot Grossman and Marlene Kamish, attorneys for Abu-Jamal, criticized the ruling on the grounds that it denied the possibility of a trial de novo, at which they could introduce evidence that their client had been framed. Prosecutors also criticized the ruling. Officer Faulkner's widow Maureen said the judgment would allow Abu-Jamal, whom she described as a "remorseless, hate-filled killer", to "be permitted to enjoy the pleasures that come from simply being alive". Both parties appealed. Federal appeal and review On December 6, 2005, the Third Circuit Court of Appeals admitted four issues for appeal of the ruling of the District Court: in relation to sentencing, whether the jury verdict form had been flawed and the judge's instructions to the jury had been confusing; in relation to conviction and sentencing, whether racial bias in jury selection existed to an extent tending to produce an inherently biased jury and therefore an unfair trial (the Batson claim); in relation to conviction, whether the prosecutor improperly attempted to reduce jurors' sense of responsibility by telling them that a guilty verdict would be subsequently vetted and subject to appeal; and in relation to post-conviction review hearings in 1995–96, whether the presiding judge, who had also presided at the trial, demonstrated unacceptable bias in his conduct. The Third Circuit Court heard oral arguments in the appeals on May 17, 2007, at the United States Courthouse in Philadelphia. The appeal panel consisted of Chief Judge Anthony Joseph Scirica, Judge Thomas Ambro, and Judge Robert Cowen.
The Commonwealth of Pennsylvania sought to reinstate the sentence of death, on the basis that Yohn's ruling was flawed, as he should have deferred to the Pennsylvania Supreme Court which had already ruled on the issue of sentencing. The prosecution said that the Batson claim was invalid because Abu-Jamal made no complaints during the original jury selection. The resulting jury was racially mixed, with 2 blacks and 10 whites at the time of the unanimous conviction, but defense counsel told the Third Circuit Court that Abu-Jamal did not get a fair trial because the jury was racially biased, misinformed, and the judge was a racist. He noted that the prosecution used eleven out of fourteen peremptory challenges to eliminate prospective black jurors. Terri Maurer-Carter, a former Philadelphia court stenographer, stated in a 2001 affidavit that she overheard Judge Sabo say "Yeah, and I'm going to help them fry the nigger" in the course of a conversation with three people present regarding Abu-Jamal's case. Sabo denied having made any such comment. On March 27, 2008, the three-judge panel issued a majority 2–1 opinion upholding Yohn's 2001 opinion but rejecting the bias and Batson claims, with Judge Ambro dissenting on the Batson issue. On July 22, 2008, Abu-Jamal's formal petition seeking reconsideration of the decision by the full Third Circuit panel of 12 judges was denied. On April 6, 2009, the United States Supreme Court refused to hear Abu-Jamal's appeal, allowing his conviction to stand. On January 19, 2010, the Supreme Court ordered the appeals court to reconsider its decision to rescind the death penalty. The same three-judge panel convened in Philadelphia on November 9, 2010, to hear oral argument. On April 26, 2011, the Third Circuit Court of Appeals reaffirmed its prior decision to vacate the death sentence on the grounds that the jury instructions and verdict form were ambiguous and confusing. The Supreme Court declined to hear the case in October. 
Death penalty dropped On December 7, 2011, District Attorney of Philadelphia R. Seth Williams announced that prosecutors, with the support of the victim's family, would no longer seek the death penalty for Abu-Jamal and would accept a sentence of life imprisonment without parole. This sentence was reaffirmed by the Superior Court of Pennsylvania on July 9, 2013. After the press conference on the sentence, widow Maureen Faulkner said that she did not want to relive the trauma of another trial. She understood that it would be extremely difficult to present the case against Abu-Jamal again, after the passage of 30 years and the deaths of several key witnesses. She also reiterated her belief that Abu-Jamal would be punished further after death. Life as a prisoner In 1991 Abu-Jamal published an essay in the Yale Law Journal on the death penalty and his death row experience. In May 1994, Abu-Jamal was engaged by National Public Radio's All Things Considered program to deliver a series of monthly three-minute commentaries on crime and punishment. The broadcast plans and commercial arrangement were canceled following condemnations from, among others, the Fraternal Order of Police and Senate Minority Leader Bob Dole. Abu-Jamal sued NPR for not airing his work, but a federal judge dismissed the suit. His commentaries were later published in May 1995 as part of his first book, Live from Death Row. In 1996, he completed a B.A. degree via correspondence classes at Goddard College, which he had attended for a time as a young man. He has been invited as commencement speaker by a number of colleges, and has participated via recordings. In 1999, Abu-Jamal was invited to record a keynote address for the graduating class at Evergreen State College in Washington State. The event was protested by some. In 2000, he recorded a commencement address for Antioch College.
The now defunct New College of California School of Law presented him with an honorary degree "for his struggle to resist the death penalty." On October 5, 2014, he gave the commencement speech at Goddard College, via playback of a recording. As before, the choice of Abu-Jamal was controversial. Ten days later, the Pennsylvania legislature passed an addition to the Crime Victims Act called "Revictimization Relief." The new provision is intended to prevent actions that cause "a temporary or permanent state of mental anguish" to those who have previously been victimized by crime. It was signed by Republican governor Tom Corbett five days later. Commentators suggested that the bill was directed at controlling Abu-Jamal's journalism, book publication, and public speaking, and that it would be challenged on the grounds of free speech. With occasional interruptions due to prison disciplinary actions, Abu-Jamal has for many years been a regular commentator on an online broadcast sponsored by Prison Radio. He is also published as a regular columnist for Junge Welt, a Marxist newspaper in Germany. For almost a decade, Abu-Jamal taught introductory courses in Georgist economics by correspondence to other prisoners around the world.
In the On-Line Encyclopedia of Integer Sequences, sequences of values of a multiplicative function have the keyword "mult". See arithmetic function for some other examples of non-multiplicative functions. Properties A multiplicative function is completely determined by its values at the powers of prime numbers, a consequence of the fundamental theorem of arithmetic. Thus, if n is a product of powers of distinct primes, say n = p^a q^b ⋯, then f(n) = f(p^a) f(q^b) ⋯. This property of multiplicative functions significantly reduces the need for computation, as in the following examples for n = 144 = 2^4 · 3^2: d(144) = σ0(144) = σ0(2^4) σ0(3^2) = 5 · 3 = 15 and σ(144) = σ1(144) = σ1(2^4) σ1(3^2) = 31 · 13 = 403. Similarly, we have: φ(144) = φ(2^4) φ(3^2) = 8 · 6 = 48. In general, if f(n) is a multiplicative function and a, b are any two positive integers, then f(a) · f(b) = f(gcd(a,b)) · f(lcm(a,b)). Every completely multiplicative function is a homomorphism of monoids and is completely determined by its restriction to the prime numbers. Convolution If f and g are two multiplicative functions, one defines a new multiplicative function f * g, the Dirichlet convolution of f and g, by (f * g)(n) = Σ_{d|n} f(d) g(n/d), where the sum extends over all positive divisors d of n. With this operation, the set of all multiplicative functions turns into an abelian group; the identity element is ε, the function with ε(1) = 1 and ε(n) = 0 for n > 1. Convolution is commutative, associative, and distributive over addition. Relations among the multiplicative functions discussed above include: μ * 1 = ε (the Möbius inversion formula: g = f * 1 if and only if f = g * μ) and (μ Id_k) * Id_k = ε (generalized Möbius inversion), where Id_k(n) = n^k. The Dirichlet convolution can be defined for general arithmetic functions, and yields a ring structure, the Dirichlet ring. The Dirichlet convolution of two multiplicative functions is again multiplicative. A proof of this fact is given by the following expansion for relatively prime a and b: (f * g)(ab) = Σ_{d|ab} f(d) g(ab/d) = Σ_{d1|a} Σ_{d2|b} f(d1 d2) g((a/d1)(b/d2)) = (Σ_{d1|a} f(d1) g(a/d1)) · (Σ_{d2|b} f(d2) g(b/d2)) = (f * g)(a) · (f * g)(b). Dirichlet series for some multiplicative functions Σ_{n≥1} μ(n)/n^s = 1/ζ(s); Σ_{n≥1} d(n)/n^s = ζ(s)^2; Σ_{n≥1} σ(n)/n^s = ζ(s) ζ(s−1); Σ_{n≥1} φ(n)/n^s = ζ(s−1)/ζ(s). More examples are shown in the article on Dirichlet series.
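The prime-power evaluations for n = 144 and the gcd/lcm identity can be checked numerically. Below is a short Python sketch (the helper names factorize, sigma, phi, and dirichlet are my own, not from the text) that builds multiplicative functions from their values on prime powers:

```python
from math import gcd, prod

def factorize(n):
    """Prime factorization of n as a dict {prime: exponent}."""
    factors, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            factors[p] = factors.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def sigma(n, k=1):
    """sigma_k(n), assembled from its values on prime powers:
    multiplicativity reduces the computation to one factor per prime."""
    return prod(sum(p ** (k * i) for i in range(a + 1))
                for p, a in factorize(n).items())

def phi(n):
    """Euler's totient, from phi(p^a) = p^a - p^(a-1)."""
    return prod(p ** a - p ** (a - 1) for p, a in factorize(n).items())

def dirichlet(f, g, n):
    """Dirichlet convolution (f*g)(n) = sum over d | n of f(d) g(n/d)."""
    return sum(f(d) * g(n // d) for d in range(1, n + 1) if n % d == 0)

# The worked examples for n = 144 = 2^4 * 3^2
assert sigma(144, 0) == 5 * 3 == 15     # d(144)
assert sigma(144, 1) == 31 * 13 == 403  # sigma(144)
assert phi(144) == 8 * 6 == 48

# f(a) f(b) = f(gcd(a,b)) f(lcm(a,b)) for a multiplicative f
a, b = 12, 18
assert phi(a) * phi(b) == phi(gcd(a, b)) * phi(a * b // gcd(a, b))

# phi * 1 = Id, a standard Dirichlet-convolution identity
assert dirichlet(phi, lambda m: 1, 12) == 12
```

The factorization-based evaluation is exactly the computational saving the text describes: each function is computed once per prime power rather than from the full divisor structure of n.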
Multiplicative function over F_q[X] Let A = F_q[X], the polynomial ring over the finite field with q elements. A is a principal ideal domain and therefore A is a unique factorization domain. A complex-valued function λ on A is called multiplicative if λ(fg) = λ(f) λ(g) whenever f and g are relatively prime. Zeta function and Dirichlet series in F_q[X] Let h be a polynomial arithmetic function (i.e. a function on the set of monic polynomials over A). Its corresponding Dirichlet series is defined to be D_h(s) = Σ_{f monic} h(f) |f|^{−s}, where for f ∈ A we set |f| = q^{deg(f)} if f ≠ 0 and |f| = 0 otherwise. The polynomial zeta function is then ζ_A(s) = Σ_{f monic} |f|^{−s}. Similar to the situation in N, every Dirichlet series of a multiplicative function h has a product representation (Euler product): D_h(s) = Π_P (Σ_{n=0}^∞ h(P^n) |P|^{−sn}), where the product runs over all monic irreducible polynomials P. For example, the product representation of the zeta function is as for the integers: ζ_A(s) = Π_P (1 − |P|^{−s})^{−1}. Unlike the classical zeta function, ζ_A(s) is a simple rational function: since there are exactly q^n monic polynomials of degree n, ζ_A(s) = Σ_{f monic} |f|^{−s} = Σ_{n≥0} q^n q^{−sn} = 1/(1 − q^{1−s}). In a similar way, if f and g are two polynomial arithmetic functions, one defines f * g, the Dirichlet convolution of f and g, by (f * g)(m) = Σ_{d|m} f(d) g(m/d), where the
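The rationality claim — for F_q[X] the zeta function equals 1/(1 − q^{1−s}) — can be tested numerically against the Euler product. The following Python sketch (all identifiers are my own) enumerates monic irreducible polynomials over F_2 as bitmasks and compares a truncated Euler product at s = 2 with the closed-form value 1/(1 − 2^{1−2}) = 2:

```python
def polmod(a, b):
    """Remainder of a divided by b in F_2[X]; polynomials are bitmasks
    (bit i holds the coefficient of X^i)."""
    while a.bit_length() >= b.bit_length():
        a ^= b << (a.bit_length() - b.bit_length())
    return a

D = 12  # enumerate monic irreducibles up to this degree
irreducibles = []
for d in range(1, D + 1):
    for low in range(1 << d):
        p = (1 << d) | low  # monic polynomial of degree d
        # p is irreducible iff no irreducible of degree <= d/2 divides it
        if not any(polmod(p, q) == 0
                   for q in irreducibles if q.bit_length() - 1 <= d // 2):
            irreducibles.append(p)

# Known counts of monic irreducibles over F_2 by degree: 2, 1, 2, 3, ...
counts = {}
for q in irreducibles:
    counts[q.bit_length() - 1] = counts.get(q.bit_length() - 1, 0) + 1
assert [counts[d] for d in (1, 2, 3, 4)] == [2, 1, 2, 3]

# Truncated Euler product for zeta_A at s = 2 with q = 2;
# the closed form predicts 1 / (1 - q**(1 - s)) = 2.
s, q2 = 2, 2
prod_val = 1.0
for p in irreducibles:
    deg = p.bit_length() - 1
    prod_val *= 1.0 / (1.0 - q2 ** (-s * deg))
assert abs(prod_val - 2.0) < 1e-3
```

The truncation error is bounded by the tail of the geometric series Σ_{n>D} q^n q^{−sn}, which for q = 2, s = 2, D = 12 is 2^{−12}, well inside the asserted tolerance.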
The indicator function 1_C(n) of a set C is multiplicative precisely when the set C has the following property for any coprime numbers a and b: the product ab is in C if and only if the numbers a and b are both themselves in C. This is the case if C is the set of squares, cubes, or k-th powers, or if C is the set of square-free numbers. Other examples of multiplicative functions include many functions of importance in number theory, such as: gcd(n,k), the greatest common divisor of n and k, as a function of n, where k is a fixed integer; φ(n), Euler's totient function, counting the positive integers coprime to (but not bigger than) n; μ(n), the Möbius function, the parity (−1 for odd, +1 for even) of the number of prime factors of square-free numbers, and 0 if n is not square-free; σk(n), the divisor function, which is the sum of the k-th powers of all the positive divisors of n (where k may be any complex number) — as special cases we have σ0(n) = d(n), the number of positive divisors of n, and σ1(n) = σ(n), the sum of all the positive divisors of n; a(n), the number of non-isomorphic abelian groups of order n; λ(n), the Liouville function, λ(n) = (−1)^Ω(n), where Ω(n) is the total number of primes (counted with multiplicity) dividing n (completely multiplicative); γ(n), defined by γ(n) = (−1)^ω(n), where the additive function ω(n) is the number of distinct primes dividing n; and τ(n), the Ramanujan tau function. All Dirichlet characters are completely multiplicative functions, for example (n/p), the Legendre symbol, considered as a function of n where p is a fixed prime number. An example of a non-multiplicative function is the arithmetic function r2(n), the number of representations of n as a sum of squares of two integers, positive, negative, or zero, where in counting the number of ways, reversal of order is allowed. For example: 1 = 1² + 0² = (−1)² + 0² = 0² + 1² = 0² + (−1)², and therefore r2(1) = 4 ≠ 1. This shows that the function is not multiplicative. However, r2(n)/4 is multiplicative.
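The r2 example lends itself to a brute-force check. The following Python sketch (the helper r2 is my own implementation of the counting function described above) confirms r2(1) = 4 and that r2 fails multiplicativity on a coprime pair while r2(n)/4 satisfies it:

```python
from math import isqrt

def r2(n):
    """Number of ways to write n as an ordered sum of two squares of
    integers (positive, negative, or zero)."""
    count = 0
    for x in range(-isqrt(n), isqrt(n) + 1):
        rem = n - x * x
        y = isqrt(rem)
        if y * y == rem:
            count += 2 if y > 0 else 1  # pair (x, y) and (x, -y), or y = 0
    return count

assert r2(1) == 4                # 1 = (+-1)^2 + 0^2 = 0^2 + (+-1)^2
assert r2(5) * r2(9) != r2(45)   # 8 * 4 != 8: r2 is not multiplicative
assert (r2(5) // 4) * (r2(9) // 4) == r2(45) // 4  # but r2/4 is
```

The failure at n = 1 already rules out multiplicativity, since any multiplicative function must satisfy f(1) = 1; the coprime pair 5, 9 shows the same failure away from 1.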