security, peace, the stability of ASEAN, and keeping the Southeast Asian region free from conflict. Indonesia's bilateral relations with three neighbouring ASEAN members — Malaysia, Singapore, and Vietnam — are not without challenges. If not appropriately managed, these could result in mutual mistrust and suspicion, hindering bilateral and regional co-operation. In the era of a rising Indonesia, which might assert its leadership role within ASEAN, the problem could become more significant. Nevertheless, the rise of Indonesia should be regarded with optimism. First, although Indonesia is likely to become more assertive, the general tone of its foreign policy is mainly liberal and accommodating, and the consolidation of Indonesia's democratic government has given it a key role and influence in ASEAN. Second, the institutional web of ASEAN will sustain engagements and regular meetings between regional elites, thus deepening their mutual understanding and personal connections.

Non-Aligned Movement (NAM)

Indonesia was also one of the founders of NAM and has taken moderate positions in its councils. As NAM Chairman in 1992–95, it led NAM positions away from the rhetoric of North-South confrontation, advocating instead the broadening of North-South co-operation in the area of development. Indonesia continues to be a prominent, and generally helpful, leader of the Non-Aligned Movement.

Organization of Islamic Cooperation (OIC)

Indonesia has the world's largest Muslim population and is a member of the OIC. It carefully considers the interests of Islamic solidarity in its foreign policy decisions but has generally been an influence for moderation in the OIC.

APEC

Indonesia has been a strong supporter of the Asia-Pacific Economic Cooperation (APEC) forum. Mainly through the efforts of President Suharto at the 1994 meeting in Indonesia, APEC members agreed to implement free trade in the region by 2010 for industrialised economies and by 2020 for developing economies.
As the largest economy in Southeast Asia, Indonesia also belongs to other economic groupings such as the G20 and the Developing 8 Countries (D-8).

G-20 major economies

In 2008, Indonesia was admitted as a member of the G20, the only ASEAN member state in the group. Through its membership in this global economic grouping, which accounts for about 85% of the global economy, Indonesia is keen to position itself as a mouthpiece for ASEAN countries and as a representative of the developing world within the G-20.

IGGI and CGI

After 1966, Indonesia welcomed and maintained close relations with the international donor community, particularly the United States, western Europe, Australia, and Japan, through the meetings of the Inter-Governmental Group on Indonesia (IGGI) and its successor, the Consultative Group on Indonesia (CGI), which coordinated substantial foreign economic assistance. Problems in Timor and Indonesia's reluctance to implement economic reform at times complicated Indonesia's relationship with donors. In 1992 the IGGI aid coordination group ceased to meet, and its coordination activities were transferred to meetings arranged by the World Bank through the CGI. The CGI, in turn, ceased activities in 2007, when the Indonesian government suggested that an internationally organised aid coordination program was no longer needed.

International disputes

Indonesia has numerous outlying and remote islands, some of which are inhabited by pirate groups that regularly attack ships in the Strait of Malacca to the north, and by illegal fishing crews known for penetrating Australian and Filipino waters. Indonesian waters are themselves the target of illegal fishing by numerous foreign vessels.
Indonesia has some present and historic territorial disputes with neighboring nations, such as:
Ambalat Block in dispute with Malaysia (ongoing, overlapping EEZ line drawn by both countries)
Ashmore and Cartier Islands in dispute with Australia (ongoing, the islands known by Indonesians as Pulau Pasir)
Fatu Sinai Island (Pulau Batek) formerly disputed with East Timor (settled, East Timor ceded the island to Indonesia in August 2004)
Miangas (Las Palmas) formerly disputed with Philippine Islands (settled, see Island
and the country of Malaysia's states of Sabah and Sarawak. Sulawesi, formerly Celebes. Lesser Sunda Islands Bali Lombok Sumbawa Flores Sumba Timor: divided between Indonesian West Timor and the country of East Timor. Maluku Islands (Moluccas) New Guinea: divided between the two Indonesian provinces of Papua and West Papua and the country of Papua New Guinea.

List of islands

The following islands are listed by province:

Java
Banten Panaitan Sangiang Tinjil Umang Central Java Karimunjawa Nusa Kambangan Dungeon's Island Special Capital Region of Jakarta Thousand Islands (Kepulauan Seribu) East Java Bawean Gili Iyang Island Kangean Islands Madura Raas Nusa Barong Raja Island Sempu Island West Java Monitor Lizard Island (Pulau Biawak), Indramayu

Sumatra
Aceh Banyak Islands Tuangku Lasia Island Simeulue Weh North Sumatra Batu Islands Berhala on the Strait of Malacca Hinako Islands Makole Island Masa Island Nias Islands Samosir, Lake Toba West Sumatra Mentawai Islands North Pagai Siberut Sipura South Pagai Pasumpahan Sikuai Bengkulu Enggano Island Mega Island Lampung Child of Krakatoa (Anak Krakatau) Legundi Sebesi Sebuku Riau Basu Island Bengkalis Padang Rangsang Rupat Tebing Tinggi Island Riau Islands Natuna Islands (Kepulauan Natuna) Anambas Islands Natuna Besar Islands South Natuna Islands Tambelan Islands Badas Islands Riau Archipelago Batam Bintan Bulan Galang Karimun islands Great Natuna Penyengat Great Karimun Little Karimun Kundur Rempang Lingga Islands Lingga with nearby islands Singkep with nearby islands Bangka-Belitung Islands Bangka Belitung

Kalimantan
Central Kalimantan Damar Baning Island Buaya Island Burung Island East Kalimantan Balabalagan Islands Derawan Islands Kakaban North Kalimantan Bunyu Sebatik: divided between Indonesia and Sabah, East Malaysia Tarakan South Kalimantan Laut Laut Kecil Islands Sebuku West Kalimantan Bawal Galam Karimata Islands Karimata Maya

Sulawesi
Central Sulawesi Banggai Islands Banggai Bowokan Islands (Kepulauan Treko) Buka
Buka Peleng Masoni Island Simatang Island Togian Islands Togian Tolitoli North Sulawesi Bangka Bunaken Lembeh Manado Tua Nain Sangihe Islands Nanipa Bukide Sangir Besar Siau Tagulandang Talaud Islands Kabaruan Karakelang Salibabu Talise South Sulawesi Pabbiring Islands Sabalana Islands Selayar Islands Selayar Island Takabonerate Islands Tengah Islands Southeast Sulawesi Buton Kabaena Muna Tukangbesi Islands Wakatobi Wangiwangi Wowoni

Lesser Sunda Islands
Bali Bali Menjangan Island Nusa Lembongan Nusa Penida Serangan Island Nusa Ceningan East Nusa Tenggara Alor Islands Alor Kepa Pantar Flores Babi Island Mules Island Komodo Gili Lawadarat Gili Lawalaut Mangiatan Island Makasar Island Taka Makasar Mauwang Island Pararambah Island Siaba Besar Island Siaba Kecil Island Mangiatan Island Tatawa Island Tukoh Pemaroh Pararambah Island Padar Island Batubilah
the national budget. All parliamentary candidates and all legislation from the assembly must be approved by the Guardian Council. The Guardian Council comprises twelve jurists, six of whom are appointed by the Supreme Leader; the others are elected by the Parliament from among the jurists nominated by the Head of the Judiciary. The Council interprets the constitution and may veto the Parliament: if a law is deemed incompatible with the constitution or Sharia (Islamic law), it is referred back to the Parliament for revision. The Expediency Council has the authority to mediate disputes between the Parliament and the Guardian Council, and serves as an advisory body to the Supreme Leader, making it one of the most powerful governing bodies in the country. Local city councils are elected by public vote to four-year terms in all cities and villages of Iran.

Law

The Supreme Leader appoints the head of the country's judiciary, who in turn appoints the head of the Supreme Court and the chief public prosecutor. There are several types of courts, including public courts that deal with civil and criminal cases, and revolutionary courts which deal with certain categories of offenses, such as crimes against national security. The decisions of the revolutionary courts are final and cannot be appealed. The Chief Justice of Iran is the head of the judicial system of the Islamic Republic of Iran and is responsible for its administration and supervision. He is also the highest judge of the Supreme Court of Iran. The Supreme Leader of Iran appoints, and can dismiss, the Chief Justice. The Chief Justice nominates candidates for the post of minister of justice, from whom the President selects one. The Chief Justice can serve for two five-year terms. The Special Clerical Court handles crimes allegedly committed by clerics, although it has also taken on cases involving laypeople.
The Special Clerical Court functions independently of the regular judicial framework and is accountable only to the Supreme Leader. The Court's rulings are final and cannot be appealed. The Assembly of Experts, which meets for one week annually, comprises 86 "virtuous and learned" clerics elected by adult suffrage for eight-year terms.

Foreign relations

Since the 1979 Revolution, Iran's foreign relations have often been portrayed as resting on two strategic principles: eliminating outside influences in the region, and pursuing extensive diplomatic contacts with developing and non-aligned countries. Since 2005, Iran's nuclear program has become a subject of contention with the international community, mainly the United States. Many countries have expressed concern that Iran's nuclear program could divert civilian nuclear technology into a weapons program. This has led the United Nations Security Council to impose sanctions against Iran, which have further isolated the country politically and economically from the rest of the global community. In 2009, the U.S. Director of National Intelligence said that Iran, even if it chose to, would not be able to develop a nuclear weapon until 2013. The government of Iran maintains diplomatic relations with 99 members of the United Nations, but not with the United States, and not with Israel, a state which Iran's government has derecognized since the 1979 Revolution. Among Muslim nations, Iran has an adversarial relationship with Saudi Arabia due to differing political and Islamic ideologies: while Iran is a Shia Islamic republic, Saudi Arabia is a conservative Sunni monarchy. Regarding the Israeli–Palestinian conflict, the government of Iran recognized Jerusalem as the capital of the State of Palestine after Trump recognized Jerusalem as the capital of Israel. Since the 2000s, Iran's controversial nuclear program has raised concerns, which form part of the basis of the international sanctions against the country.
On 14 July 2015, Tehran and the P5+1 reached a historic agreement, the Joint Comprehensive Plan of Action, to end economic sanctions in exchange for Iran restricting its production of enriched uranium and demonstrating that its nuclear research program is peaceful and meets International Atomic Energy Agency standards. Iran is a member of dozens of international organizations, including the G-15, G-24, G-77, IAEA, IBRD, IDA, IDB, IFC, ILO, IMF, IMO, Interpol, OIC, OPEC, WHO, and the United Nations, and currently has observer status at the World Trade Organization. It has been reported that Iran will begin the process of becoming a full member of the Shanghai Cooperation Organization (SCO), a Eurasian political, economic, and security alliance.

Military

The Islamic Republic of Iran has two types of armed forces: the regular forces of the Army, the Air Force, and the Navy, and the Revolutionary Guards, totaling about 545,000 active troops. Iran also has around 350,000 reserve personnel, for a total of around 900,000 trained troops. The government of Iran has a paramilitary, volunteer militia force within the Islamic Revolutionary Guard Corps, called the Basij, which includes about 90,000 full-time, active-duty uniformed members. Up to 11 million men and women are members of the Basij and could potentially be called up for service; GlobalSecurity.org estimates Iran could mobilize "up to one million men", which would be among the largest troop mobilizations in the world. In 2007, Iran's military spending represented 2.6% of GDP, or $102 per capita, the lowest figure among the Persian Gulf nations. Iran's military doctrine is based on deterrence. In 2014, the country spent $15 billion on arms, while the states of the Gulf Cooperation Council spent eight times more. The government of Iran supports the military activities of its allies in Syria, Iraq, and Lebanon (Hezbollah) with military and financial aid.
Iran and Syria are close strategic allies, and Iran has provided significant support for the Syrian government in the Syrian Civil War. According to some estimates, Iran controlled over 80,000 pro-Assad Shi'ite fighters in Syria. Since the 1979 Revolution, to overcome foreign embargoes, the government of Iran has developed its own military industry, producing its own tanks, armored personnel carriers, missiles, submarines, military vessels, missile destroyers, radar systems, helicopters, and fighter planes. In recent years, official announcements have highlighted the development of weapons such as the Hoot, Kowsar, Zelzal, Fateh-110, Shahab-3, Sejjil, and a variety of unmanned aerial vehicles (UAVs). Iran has the largest and most diverse ballistic missile arsenal in the Middle East. The Fajr-3, a liquid-fuel missile with an undisclosed range which was developed and produced domestically, is currently the country's most advanced ballistic missile. In June 1925, Reza Shah introduced a conscription law in the National Consultative Majlis; at that time, every male who had reached the age of 21 was required to serve in the military for two years. Women were exempted from military service after the 1979 Revolution. The Iranian constitution obliges all men aged 18 and over to serve in the military or police; they cannot leave the country or be employed without completing the service period, which varies from 18 to 24 months.

Human rights

According to international reports, Iran's human rights record is exceptionally poor. The regime in Iran is undemocratic, has frequently persecuted and arrested critics of the government and its Supreme Leader, and severely restricts the participation of candidates in popular elections as well as other forms of political activity.
Women's rights in Iran are described as seriously inadequate, and children's rights have been severely violated, with more child offenders executed in Iran than in any other country in the world. Sexual activity between members of the same sex is illegal and can be punished by death. Over the past decade, a number of anti-government protests have broken out throughout Iran (such as the 2019–20 Iranian protests), demanding reforms or an end to the Islamic Republic. However, the IRGC and police have often suppressed mass protests by violent means, resulting in thousands of protesters killed.

Economy

Iran's economy is a mixture of central planning, state ownership of oil and other large enterprises, village agriculture, and small-scale private trading and service ventures. In 2017, GDP was $427.7 billion ($1.631 trillion at PPP), or $20,000 at PPP per capita. Iran is ranked as a lower-middle-income economy by the World Bank. In the early 21st century, the service sector contributed the largest percentage of GDP, followed by industry (mining and manufacturing) and agriculture. The Central Bank of the Islamic Republic of Iran is responsible for developing and maintaining the Iranian rial, which serves as the country's currency. The government does not recognize trade unions other than the Islamic labour councils, which are subject to the approval of employers and the security services. The minimum wage in June 2013 was 487 million rials a month ($134). Unemployment has remained above 10% since 1997, and the unemployment rate for women is almost double that of men. In 2006, about 45% of the government's budget came from oil and natural gas revenues, and 31% came from taxes and fees. Iran had earned $70 billion in foreign-exchange reserves, mostly (80%) from crude oil exports.
Iranian budget deficits have been a chronic problem, mostly due to large-scale state subsidies on foodstuffs and especially gasoline, totaling more than $84 billion in 2008 for the energy sector alone. In 2010, an economic reform plan was approved by parliament to cut subsidies gradually and replace them with targeted social assistance. The objective is to move towards free-market prices over a five-year period and to increase productivity and social justice. The administration continues to follow the market reform plans of the previous one and indicates that it will diversify Iran's oil-reliant economy. Iran has also developed biotechnology, nanotechnology, and pharmaceutical industries. However, nationalized industries such as the bonyads have often been managed badly, making them ineffective and uncompetitive over the years. Currently, the government is trying to privatize these industries, and, despite some successes, several problems remain to be overcome, such as persistent corruption in the public sector and a lack of competitiveness. Iran has leading manufacturing industries in the Middle East in the fields of automobile manufacture, transportation, construction materials, home appliances, food and agricultural goods, armaments, pharmaceuticals, information technology, and petrochemicals. According to 2012 data from the Food and Agriculture Organization, Iran has been among the world's top five producers of apricots, cherries, sour cherries, cucumbers and gherkins, dates, eggplants, figs, pistachios, quinces, walnuts, and watermelons. Economic sanctions against Iran, such as the embargo against Iranian crude oil, have hurt the economy. In 2015, Iran and the P5+1 reached a deal on the nuclear program that removed the main sanctions pertaining to Iran's nuclear program by 2016. According to the BBC, renewed U.S.
sanctions against Iran "have led to a sharp downturn in Iran's economy, pushing the value of its currency to record lows, quadrupling its annual inflation rate, driving away foreign investors, and triggering protests."

Tourism

Although tourism declined significantly during the war with Iraq, it has subsequently recovered. About 1,659,000 foreign tourists visited Iran in 2004, and 2.3 million in 2009, mostly from Asian countries, including the republics of Central Asia, while about 10% came from the European Union and North America. Since the removal of some sanctions against Iran in 2015, tourism has re-surged in the country. Over five million tourists visited Iran in the fiscal year 2014–2015, four percent more than in the previous year. Alongside the capital, the most popular tourist destinations are Isfahan, Mashhad, and Shiraz. In the early 2000s, the industry faced serious limitations in infrastructure, communications, industry standards, and personnel training. The majority of the 300,000 travel visas granted in 2003 were obtained by Asian Muslims, who presumably intended to visit pilgrimage sites in Mashhad and Qom. Several organized tours from Germany, France, and other European countries come to Iran annually to visit archaeological sites and monuments. In 2003, Iran ranked 68th in tourism revenues worldwide. According to UNESCO and the deputy head of research for Iran's Tourism Organization, Iran is rated fourth among the top 10 destinations in the Middle East. Domestic tourism in Iran is one of the largest in the world. Weak advertising, unstable regional conditions, a poor public image in some parts of the world, and the absence of efficient planning schemes in the tourism sector have all hindered the growth of tourism.

Transportation

Iran has a long paved road system linking most of its towns and all of its cities. In 2011, 73% of the country's roads were paved. In 2008 there were nearly 100 passenger cars for every 1,000 inhabitants.
Trains operate on 11,106 km (6,942 mi) of railroad track. The country's major port of entry is Bandar-Abbas on the Strait of Hormuz. After arriving in Iran, imported goods are distributed throughout the country by trucks and freight trains. The Tehran–Bandar-Abbas railroad, opened in 1995, connects Bandar-Abbas to the railroad system of Central Asia via Tehran and Mashhad. Other major ports include Bandar e-Anzali and Bandar e-Torkeman on the Caspian Sea and Khorramshahr and Bandar-e Emam Khomeyni on the Persian Gulf. Dozens of cities have airports that serve passenger and cargo planes. Iran Air, the national airline, was founded in 1962 and operates domestic and international flights. All large cities have mass transit systems using buses, and several private companies provide bus service between cities. Hamadan and Tehran hold the highest betweenness and closeness centrality among the cities of Iran with regard to road and air routes, respectively. Transport in Iran is inexpensive because of the government's subsidization of the price of gasoline. The downside is a huge draw on government coffers, economic inefficiency because of highly wasteful consumption patterns, smuggling to neighboring countries, and air pollution. In 2008, more than one million people worked in the transportation sector, accounting for 9% of GDP.

Energy

Iran has the world's second-largest proved gas reserves after Russia, with 33.6 trillion cubic meters, and the third-largest natural gas production after Indonesia and Russia. It also ranks fourth in oil reserves, with an estimated 153,600,000,000 barrels. It is OPEC's second-largest oil exporter and is an energy superpower. In 2005, Iran spent US$4 billion on fuel imports because of contraband and inefficient domestic use. Oil industry output in 2005 was down from the peak of six million barrels per day reached in 1974. In the early 2000s, industry infrastructure was increasingly inefficient because of technological lags.
Few exploratory wells were drilled in 2005. In 2004, a large share of Iran's natural gas reserves was untapped. The addition of new hydroelectric stations and the streamlining of conventional coal- and oil-fired stations increased installed capacity to 33,000 megawatts. Of that amount, about 75% was based on natural gas, 18% on oil, and 7% on hydroelectric power. In 2004, Iran opened its first wind-powered and geothermal plants, and the first solar thermal plant was to come online in 2009. Iran is the third country in the world to have developed GTL technology. Demographic trends and intensified industrialization have caused electric power demand to grow by 8% per year. The government's goal of 53,000 megawatts of installed capacity by 2010 is to be reached by bringing new gas-fired plants online and by adding hydropower and nuclear power generation capacity. Iran's first nuclear power plant, at Bushire, went online in 2011. It is the second nuclear power plant ever built in the Middle East, after the Metsamor Nuclear Power Plant in Armenia.

Education, science and technology

Education in Iran is highly centralized. K–12 education is supervised by the Ministry of Education, and higher education is under the supervision of the Ministry of Science and Technology. According to the Fars News Agency, the adult literacy rate was 93.0% in September 2015, while according to UNESCO it was 85.0% in 2008 (up from 36.5% in 1976). According to data provided by UNESCO, Iran's literacy rate among people aged 15 years and older was 85.54% as of 2016, with men (90.35%) being significantly more educated than women (80.79%), and with the number of illiterate people of the same age amounting to around 8,700,000 of the country's 85 million population. According to this report, the Iranian government's expenditure on education amounts to around 4% of GDP.
The requirement for entry into higher education is a high school diploma and a pass in the Iranian University Entrance Exam (officially known as the konkur (کنکور)), the equivalent of the SAT and ACT exams in the United States. Many students take a 1–2-year pre-university course (piš-dānešgāh), the equivalent of the GCE A-levels and the International Baccalaureate; completion of the pre-university course earns students the Pre-University Certificate. Iran's higher education is sanctioned by different levels of diplomas, including an associate degree (kārdāni; also known as fowq e diplom) delivered in two years, a bachelor's degree (kāršenāsi; also known as lisāns) delivered in four years, and a master's degree (kāršenāsi e aršad) delivered in two years, after which another exam allows the candidate to pursue a doctoral program (PhD; known as doktorā). According to the Webometrics Ranking of World Universities, Iran's top five universities are Tehran University of Medical Sciences (478th worldwide), the University of Tehran (514th worldwide), Sharif University of Technology (605th worldwide), Amirkabir University of Technology (726th worldwide), and Tarbiat Modares University (789th worldwide). Iran was ranked 67th in the Global Innovation Index in 2020, down from 61st in 2019. Iran increased its publication output nearly tenfold from 1996 through 2004 and was ranked first in output growth rate, followed by China. According to a 2012 study by SCImago, Iran would rank fourth in the world in research output by 2018 if the current trend persisted. In 2009, a SUSE Linux-based HPC system made by the Aerospace Research Institute of Iran (ARI) was launched with 32 cores and now runs 96 cores. Its performance was pegged at 192 GFLOPS. The Iranian humanoid robot Sorena 2, designed by engineers at the University of Tehran, was unveiled in 2010.
The Institute of Electrical and Electronics Engineers (IEEE) placed Surena among the five prominent robots of the world after analyzing its performance. In the biomedical sciences, Iran's Institute of Biochemistry and Biophysics holds a UNESCO chair in biology. In late 2006, Iranian scientists successfully cloned a sheep by somatic cell nuclear transfer at the Royan Research Center in Tehran. According to a study by David Morrison and Ali Khadem Hosseini (Harvard-MIT and Cambridge), stem cell research in Iran is amongst the top 10 in the world. Iran ranks 15th in the world in nanotechnologies. Iran placed its domestically built satellite Omid into orbit on 2 February 2009, the 30th anniversary of the 1979 Revolution, using its first expendable launch vehicle, Safir, becoming the ninth country in the world capable of both producing a satellite and sending it into space from a domestically made launcher. The Iranian nuclear program was launched in the 1950s. Iran is the seventh country to produce uranium hexafluoride and controls the entire nuclear fuel cycle. Iranian scientists outside Iran have also made major contributions to science. In 1960, Ali Javan co-invented the first gas laser, and fuzzy set theory was introduced by Lotfi A. Zadeh. Iranian cardiologist Tofigh Mussivand invented and developed the first artificial cardiac pump, the precursor of the artificial heart. Furthering research and treatment of diabetes, HbA1c was discovered by Samuel Rahbar. A substantial number of papers in string theory are published in Iran. Iranian American string theorist Kamran Vafa proposed the Vafa–Witten theorem together with Edward Witten. In August 2014, Iranian mathematician Maryam Mirzakhani became the first woman, as well as the first Iranian, to receive the Fields Medal, the highest prize in mathematics.
Demographics

Iran is a diverse country, consisting of numerous ethnic and linguistic groups that are unified through a shared Iranian nationality. Iran's population grew rapidly during the latter half of the 20th century, increasing from about 19 million in 1956 to more than 84 million by July 2020. However, Iran's fertility rate has dropped significantly in recent years, falling from about 6.5 births per woman to just over 2 within two decades, yielding a population growth rate of about 1.39% as of 2018. Given its young population, studies project that growth will continue to slow until the population stabilizes around 105 million by 2050. Iran hosts one of the largest refugee populations in the world, with almost one million refugees, mostly from Afghanistan and Iraq. Since 2006, Iranian officials have been working with the UNHCR and Afghan officials on their repatriation. According to estimates, about five million Iranian citizens have emigrated to other countries, mostly since the 1979 Revolution. According to the Iranian Constitution, the government is required to provide every citizen with access to social security covering retirement, unemployment, old age, disability, accidents, calamities, and health and medical treatment and care services. This is funded by tax revenues and income derived from public contributions.

Languages

The majority of the population speak Persian, the country's official language. Others speak a number of other Iranian languages within the greater Indo-European family, as well as languages belonging to other ethnicities living in Iran. In northern Iran, mostly confined to Gilan and Mazenderan, the Gilaki and Mazenderani languages are widely spoken, both having affinities to the neighboring Caucasian languages. In parts of Gilan, the Talysh language is also widely spoken, and its range stretches up to the neighboring Republic of Azerbaijan.
Varieties of Kurdish are widely spoken in the province of Kurdistan and nearby areas. In Khuzestan, several distinct varieties of Persian are spoken. Luri and Lari are also spoken in southern Iran. Azerbaijani, by far the most widely spoken language in the country after Persian, as well as a number of other Turkic languages and dialects, is spoken in various regions of Iran, especially in the region of Azerbaijan. Notable minority languages in Iran include Armenian, Georgian, Neo-Aramaic, and Arabic. Khuzi Arabic is spoken by the Arabs in Khuzestan, as well as by the wider group of Iranian Arabs. Circassian was also once widely spoken by the large Circassian minority, but, due to assimilation over many years, no sizable number of Circassians speak the language anymore. Percentages of spoken languages remain a point of debate, as many contend that the figures are politically motivated, most notably regarding the largest and second largest ethnicities in Iran, the Persians and Azerbaijanis. Percentages given by the CIA's World Factbook are 53% Persian, 16% Azerbaijani, 10% Kurdish, 7% Mazenderani and Gilaki, 7% Luri, 2% Turkmen, 2% Balochi, 2% Arabic, and 2% other (Armenian, Georgian, Neo-Aramaic, and Circassian).

Ethnic groups

As with the spoken languages, the ethnic composition also remains a point of debate, mainly regarding the largest and second largest ethnic groups, the Persians and Azerbaijanis, due to the lack of Iranian state censuses based on ethnicity. The CIA's World Factbook estimates that around 79% of the population of Iran belong to a diverse Indo-European ethno-linguistic group comprising speakers of various Iranian languages, with Persians (including Mazenderanis and Gilaks) constituting 61% of the population, Kurds 10%, Lurs 6%, and Balochs 2%.
Peoples of other ethno-linguistic groups make up the remaining 21%, with Azerbaijanis constituting 16%, Arabs 2%, Turkmens and other Turkic tribes 2%, and others (such as Armenians, Talysh, Georgians, Circassians, and Assyrians) 1%. The Library of Congress issued slightly different estimates: 65% Persians (including Mazenderanis, Gilaks, and the Talysh), 16% Azerbaijanis, 7% Kurds, 6% Lurs, 2% Baloch, 1% Turkic tribal groups (including Qashqai and Turkmens), and less than 3% non-Iranian, non-Turkic groups (including Armenians, Georgians, Assyrians, Circassians, and Arabs). It determined that Persian is the first language of at least 65% of the country's population, and is the second language for most of the remaining 35%.

Religion

Twelver Shia Islam is the official state religion, to which about 90% to 95% of the population adhere. About 4% to 8% of the population are Sunni Muslims, mainly Kurds and Balochs. The remaining 2% are non-Muslim religious minorities, including Christians, Zoroastrians, Jews, Baháʼís, Mandeans, and Yarsanis. A 2020 survey by the World Values Survey found that 96.6% of Iranians believe in Islam. On the other hand, another 2020 survey, conducted online by an organization based outside of Iran, found a much smaller percentage of Iranians identifying as Muslim (32.2% as Shia, 5.0% as Sunni, and 3.2% as Sufi), and a significant fraction not identifying with any organized religion (22.2% identifying as "None," and others identifying as atheists, spiritual, agnostics, or secular humanists). According to the CIA World Factbook, around 90–95% of Iranian Muslims associate themselves with the Shia branch of Islam, the official state religion, and about 5–10% with the Sunni and Sufi branches. There is a large population of adherents of Yarsanism, a Kurdish indigenous religion, making it the largest unrecognized minority religion in Iran. Its followers are mainly Gorani Kurds and certain groups of Lurs.
They are based mainly in Kurdistan Province, Kermanshah Province, and Lorestan. Christianity, Judaism, Zoroastrianism, and the Sunni branch of Islam are officially recognized by the government and have reserved seats in the Iranian Parliament. Historically, early Iranian religions such as the Proto-Iranic religion, and subsequently Zoroastrianism and Manichaeism, were the dominant religions in Iran, particularly during the Median, Achaemenid, Parthian, and Sasanian eras. This changed with the centuries-long Islamization that followed the fall of the Sasanian Empire and the Muslim conquest of Iran. Iran was predominantly Sunni until the conversion of the country (as well as the people of what is today the neighboring Republic of Azerbaijan) to Shia Islam by order of the Safavid dynasty in the 16th century. Judaism has a long history in Iran, dating back to the Achaemenid conquest of Babylonia. Although many left in the wake of the establishment of the State of Israel and the 1979 Revolution, an estimated 8,756 to 25,000 Jewish people live in Iran, the largest Jewish population in the Middle East outside of Israel. Around 250,000 to 370,000 Christians reside in Iran, making Christianity the country's largest recognized minority religion. Most are of Armenian background, along with a sizable minority of Assyrians. A large number of Iranians have converted to Christianity from the predominant Shia Islam. The Baháʼí Faith is not officially recognized and has been subject to official persecution. According to the United Nations Special Rapporteur on Human Rights in Iran, Baháʼís are the largest non-Muslim religious minority in Iran, with an estimated 350,000 adherents. Since the 1979 Revolution, the persecution of Baháʼís has increased, with executions and the denial of civil rights, especially the denial of access to higher education and employment. Iranian officials have continued to support the rebuilding and renovation of Armenian churches in the Islamic Republic.
The Armenian Monastic Ensembles of Iran have also received continued support. In 2019, the Iranian government registered the Holy Savior Cathedral, commonly referred to as Vank Cathedral, in the New Julfa district of Isfahan, as a UNESCO World Heritage Site, with significant expenditures for its congregation. Currently, three Armenian churches in Iran are included in the UNESCO World Heritage List.

Culture

The earliest attested cultures in Iran date back to the Lower Paleolithic. Owing to its geopolitical position, Iran has influenced cultures as far as Greece and Italy to the west, Russia to the north, the Arabian Peninsula to the south, and South and East Asia to the east.

Art

The art of Iran encompasses many disciplines, including architecture, stonemasonry, metalworking, weaving, pottery, painting, and calligraphy. Iranian works of art show great variety in style across different regions and periods. The art of the Medes remains obscure, but has been theoretically attributed to the Scythian style. The Achaemenids borrowed heavily from the art of their neighboring civilizations, but produced a unique synthesis of styles, with an eclectic architecture remaining at sites such as Persepolis and Pasargadae. Greek iconography was imported by the Seleucids, followed by the recombination of Hellenistic and earlier Near Eastern elements in the art of the Parthians, with remains such as the Temple of Anahita and the Statue of the Parthian Nobleman. By the time of the Sasanians, Iranian art underwent a general renaissance. Although of unclear development, Sasanian art was highly influential and spread into far regions. Taq-e Bostan, Taq-e Kasra, Naqsh-e Rostam, and the Shapur-Khwast Castle are among the surviving monuments of the Sasanian period.
During the Middle Ages, Sasanian art played a prominent role in the formation of both European and Asian medieval art, which carried forward into the Islamic world, and much of what later became known as Islamic learning—including medicine, architecture, philosophy, philology, and literature—had a Sasanian basis. The Safavid era is known as the Golden Age of Iranian art, and Safavid works of art show a far more unitary development than in any other period, as part of a political evolution that reunified Iran as a cultural entity. Safavid art exerted noticeable influence on the neighboring Ottomans, Mughals, and Deccans, and was also influential, through its fashion and garden architecture, on 11th–17th-century Europe. Iran's contemporary art traces its origins back to the time of Kamal-ol-Molk, a prominent realist painter at the court of the Qajar dynasty who affected the norms of painting and adopted a naturalistic style that would compete with photographic works. A new Iranian school of fine art was established by Kamal-ol-Molk in 1928, and was followed by the so-called "coffeehouse" style of painting. Iran's avant-garde modernists emerged with the arrival of new Western influences during World War II. The vibrant contemporary art scene originates in the late 1940s, and Tehran's first modern art gallery, Apadana, was opened in September 1949 by painters Mahmud Javadipur, Hosein Kazemi, and Hushang Ajudani. The new movements received official encouragement by the mid-1950s, which led to the emergence of artists such as Marcos Grigorian, signaling a commitment to the creation of a form of modern art grounded in Iran.

Architecture

The history of architecture in Iran goes back to the seventh millennium BC. Iranians were among the first to use mathematics, geometry, and astronomy in architecture. Iranian architecture displays great variety, both structural and aesthetic, developing gradually and coherently out of earlier traditions and experience.
The guiding formative motif of Iranian architecture has traditionally been its cosmic symbolism, "by which man is brought into communication and participation with the powers of heaven". This theme has not only given unity and continuity to the architecture of Persia, but has been a primary source of its emotional character as well. Iran ranks seventh on UNESCO's list of countries with the most archaeological ruins and attractions from antiquity. According to Persian historian and archaeologist Arthur Pope, the supreme Iranian art, in the proper meaning of the word, has always been its architecture. The supremacy of architecture applies to both pre- and post-Islamic periods.

Weaving

Iran's carpet-weaving has its origins in the Bronze Age, and is one of the most distinguished manifestations of Iranian art. Iran is the world's largest producer and exporter of handmade carpets, producing three-quarters of the world's total output and holding a 30% share of the world's export markets.

Literature

Iran's oldest literary tradition is that of Avestan, the Old Iranian sacred language of the Avesta, which consists of the legendary and religious texts of Zoroastrianism and the ancient Iranian religion, with its earliest records dating back to pre-Achaemenid times. Of the various modern languages used in Iran, Persian, dialects of which are spoken throughout the Iranian Plateau, has the most influential literature. Persian has been dubbed a worthy language to serve as a conduit for poetry, and is considered one of the four main bodies of world literature.
Despite originating in the region of Persis (better known as Persia) in southwestern Iran, the Persian language was used and developed further through Persianate societies in Asia Minor, Central Asia, and South Asia, leaving massive influences on Ottoman and Mughal literatures, among others. Iran has a number of famous medieval poets, most notably Rumi, Ferdowsi, Hafez, Saadi Shirazi, Omar Khayyam, and Nezami Ganjavi. Iranian literature also inspired writers such as Johann Wolfgang von Goethe, Henry David Thoreau, and Ralph Waldo Emerson.

Philosophy

Iranian philosophy originates from Indo-European roots, with Zoroaster's reforms having major influences. According to The Oxford Dictionary of Philosophy, the chronology of the subject and science of philosophy starts with the Indo-Iranians, dating this event to 1500 BC. The dictionary also states, "Zarathushtra's philosophy entered to influence Western tradition through Judaism, and therefore on Middle Platonism." While there are ancient relations between the Indian Vedas and the Iranian Avesta, the two main families of the Indo-Iranian philosophical traditions were characterized by fundamental differences, especially in their implications for the human being's position in society and their view of man's role in the universe. The Cyrus Cylinder, known as "the first charter of human rights", is often seen as a reflection of the questions and thoughts expressed by Zoroaster, and developed in the Zoroastrian schools of the Achaemenid era. The earliest tenets of the Zoroastrian schools are part of the extant scriptures of the Zoroastrian religion in Avestan. Among them are treatises such as the Zatspram, Shkand-gumanik Vizar, and Denkard, as well as older passages of the Avesta and the Gathas.
Current trends in Iranian philosophy have become limited in scope by Islamic frames of thought, although liberal strands of thought continue to appear in publications by Iranian intellectuals, especially outside Iran, where the regime has less power to restrict thought and philosophy.

Mythology

Iranian mythology consists of ancient Iranian folklore and stories, all involving extraordinary beings, reflecting attitudes towards the confrontation of good and evil, the actions of the gods, and the exploits of heroes and fabulous creatures. Myths play a crucial part in Iranian culture, and understanding of them is increased when they are considered within the context of actual events in Iranian history. The geography of Greater Iran, a vast area covering present-day Iran, the Caucasus, Anatolia, Mesopotamia, and Central Asia, with its high mountain ranges, plays the main role in much of Iranian mythology. Tenth-century Persian poet Ferdowsi's long epic poem Šāhnāme ("Book of Kings"), which is for the most part based on the Xwadāynāmag, a Middle Persian compilation of the history of Iranian kings and heroes from mythical times down to the reign of Chosroes II, is considered the national epic of Iran. It draws heavily on the stories and characters of the Zoroastrian tradition, from the texts of the Avesta, the Denkard, and the Bundahishn.

Music

Iran is the apparent birthplace of the earliest complex instruments, dating back to the third millennium BC. The use of both vertical and horizontal angular harps has been documented at the sites of Madaktu and Kul-e Farah, with the largest collection of Elamite instruments documented at Kul-e Farah. Multiple depictions of horizontal harps were also sculpted in Assyrian palaces, dating back to between 865 and 650 BC. Xenophon's Cyropaedia mentions a great number of singing women at the court of the Achaemenid Empire.
Athenaeus of Naucratis, in his Deipnosophistae, points to the capture of Achaemenid singing girls at the court of the last Achaemenid king, Darius III (336–330 BC), by the Macedonian general Parmenion. Under the Parthian Empire, the gōsān (Parthian for "minstrel") had a prominent role in society. According to Plutarch's Life of Crassus (32.3), they praised their national heroes and ridiculed their Roman rivals. Likewise, Strabo's Geographica reports that Parthian youth were taught songs about "the deeds both of the gods and of the noblest men". The history of Sasanian music is better documented than that of earlier periods, and is especially evident in Avestan texts. By the time of Chosroes II, the Sasanian royal court hosted a number of prominent musicians, namely Azad, Bamshad, Barbad, Nagisa, Ramtin, and Sarkash. Iranian traditional musical instruments include string instruments such as the chang (harp), qanun, santur, rud (oud, barbat), tar, dotar, setar, tanbur, and kamanche; wind instruments such as the sorna (zurna, karna) and ney; and percussion instruments such as the tompak, kus, daf (dayere), and naqare. Iran's first symphony orchestra, the Tehran Symphony Orchestra, was founded by Qolam-Hoseyn Minbashian in 1933. It was reformed by

The Guardian Council can, and has, dismissed some elected members of the Iranian parliament in the past. For example, Minoo Khaleghi was disqualified by the Guardian Council even after winning election, as she had been photographed in a meeting without wearing a headscarf.

President

After the Supreme Leader, the Constitution defines the President of Iran as the highest state authority. The President is elected by universal suffrage for a term of four years; however, the president is still required to gain the Leader's official approval before being sworn in before the Parliament (Majlis). The Leader also has the power to dismiss the elected president at any time. The President can only be re-elected for one term.
The President is responsible for the implementation of the constitution and for the exercise of executive powers in implementing the decrees and general policies outlined by the Supreme Leader, except for matters directly related to the Supreme Leader, who has the final say in all matters. Unlike the executive in other countries, the President of Iran does not have full control over the government's affairs, as these are ultimately under the control of the Supreme Leader. Chapter IX of the Constitution of the Islamic Republic of Iran sets forth the qualifications for presidential candidates. The procedures for the presidential election, and all other elections in Iran, are outlined by the Supreme Leader. The President functions as the executive in affairs such as signing treaties and other international agreements and administering national planning, the budget, and state employment affairs, all as approved by the Supreme Leader. The President appoints the ministers, subject to the approval of the Parliament as well as of the Supreme Leader, who can dismiss or reinstate any of the ministers at any time, regardless of the decisions made by the President or the Parliament. The President supervises the Council of Ministers, coordinates government decisions, and selects government policies to be placed before the legislature. The current Supreme Leader, Ali Khamenei, has both dismissed and reinstated members of the Council of Ministers. Eight Vice Presidents serve under the President, as well as a cabinet of twenty-two ministers, all of whom must be approved by the legislature.

Legislature

The legislature of Iran, known as the Islamic Consultative Assembly, is a unicameral body comprising 290 members elected for four-year terms. It drafts legislation, ratifies international treaties, and approves the national budget. All parliamentary candidates and all legislation from the assembly must be approved by the Guardian Council.
The Guardian Council comprises twelve jurists, six of whom are appointed by the Supreme Leader; the others are elected by the Parliament from among jurists nominated by the Head of the Judiciary. The Council interprets the constitution and may veto the Parliament. If a law is deemed incompatible with the constitution or Sharia (Islamic law), it is referred back to the Parliament for revision. The Expediency Council has the authority to mediate disputes between the Parliament and the Guardian Council, and serves as an advisory body to the Supreme Leader, making it one of the most powerful governing bodies in the country. Local city councils are elected by public vote to four-year terms in all cities and villages of Iran.

Law

The Supreme Leader appoints the head of the country's judiciary, who in turn appoints the head of the Supreme Court and the chief public prosecutor. There are several types of courts, including public courts that deal with civil and criminal cases, and revolutionary courts which deal with certain categories of offenses, such as crimes against national security. The decisions of the revolutionary courts are final and cannot be appealed. The Chief Justice of Iran is the head of the judicial system of the Islamic Republic of Iran and is responsible for its administration and supervision. He is also the highest judge of the Supreme Court of Iran. The Supreme Leader appoints, and can dismiss, the Chief Justice. The Chief Justice nominates candidates for minister of justice, from whom the President selects one. The Chief Justice can serve for two five-year terms. The Special Clerical Court handles crimes allegedly committed by clerics, although it has also taken on cases involving laypeople. The Special Clerical Court functions independently of the regular judicial framework and is accountable only to the Supreme Leader. The Court's rulings are final and cannot be appealed.
The Assembly of Experts, which meets for one week annually, comprises 86 "virtuous and learned" clerics elected by adult suffrage for eight-year terms.

Foreign relations

Since the 1979 Revolution, Iran's foreign relations have often been portrayed as being based on two strategic principles: eliminating outside influences in the region, and pursuing extensive diplomatic contacts with developing and non-aligned countries. Since 2005, Iran's nuclear program has become the subject of contention with the international community, mainly the United States. Many countries have expressed concern that Iran's nuclear program could divert civilian nuclear technology into a weapons program. This has led the United Nations Security Council to impose sanctions against Iran, further isolating it politically and economically from the rest of the global community. In 2009, the U.S. Director of National Intelligence said that, even if it chose to, Iran would not be able to develop a nuclear weapon before 2013. The government of Iran maintains diplomatic relations with 99 members of the United Nations, but not with the United States, and not with Israel—a state which Iran's government has derecognized since the 1979 Revolution. Among Muslim nations, Iran has an adversarial relationship with Saudi Arabia due to differing political and Islamic ideologies: while Iran is a Shia Islamic republic, Saudi Arabia is a conservative Sunni monarchy. Regarding the Israeli–Palestinian conflict, the government of Iran recognized Jerusalem as the capital of the State of Palestine after U.S. President Donald Trump recognized Jerusalem as the capital of Israel. Since the 2000s, Iran's controversial nuclear program has raised concerns, which is part of the basis of the international sanctions against the country.
On 14 July 2015, Tehran and the P5+1 reached a historic agreement, the Joint Comprehensive Plan of Action, to end economic sanctions in exchange for Iranian restrictions on producing enriched uranium, after demonstrating a peaceful nuclear research program that would meet International Atomic Energy Agency standards. Iran is a member of dozens of international organizations, including the G-15, G-24, G-77, IAEA, IBRD, IDA, IDB, IFC, ILO, IMF, IMO, Interpol, OIC, OPEC, WHO, and the United Nations, and currently has observer status at the World Trade Organization. Iran has reportedly begun the process of becoming a full member of the Shanghai Cooperation Organization (SCO), a Eurasian political, economic, and security alliance.

Military

The Islamic Republic of Iran has two types of armed forces: the regular forces of the Army, the Air Force, and the Navy, and the Revolutionary Guards, totaling about 545,000 active troops. Iran also has around 350,000 reserve forces, for a total of around 900,000 trained troops. The government of Iran has a paramilitary, volunteer militia force within the Islamic Revolutionary Guard Corps, called the Basij, which includes about 90,000 full-time, active-duty uniformed members. Up to 11 million men and women are members of the Basij who could potentially be called up for service. GlobalSecurity.org estimates Iran could mobilize "up to one million men", which would be among the largest troop mobilizations in the world. In 2007, Iran's military spending represented 2.6% of GDP, or $102 per capita, the lowest figure among the Persian Gulf nations. Iran's military doctrine is based on deterrence. In 2014, the country spent $15 billion on arms, while the states of the Gulf Cooperation Council spent eight times more. The government of Iran supports the military activities of its allies in Syria, Iraq, and Lebanon (Hezbollah) with military and financial aid.
Iran and Syria are close strategic allies, and Iran has provided significant support for the Syrian government in the Syrian Civil War. According to some estimates, Iran controlled over 80,000 pro-Assad Shi'ite fighters in Syria. Since the 1979 Revolution, to overcome foreign embargoes, the government of Iran has developed its own military industry, producing its own tanks, armored personnel carriers, missiles, submarines, military vessels, missile destroyers, radar systems, helicopters, and fighter planes. In recent years, official announcements have highlighted the development of weapons such as the Hoot, Kowsar, Zelzal, Fateh-110, and Shahab-3, the Sejjil, and a variety of unmanned aerial vehicles (UAVs). Iran has the largest and most diverse ballistic missile arsenal in the Middle East. The Fajr-3, a domestically developed and produced liquid-fuel missile with an undisclosed range, is currently the most advanced ballistic missile of the country. In June 1925, Reza Shah introduced a conscription law in the National Consultative Majlis; at that time, every male who reached the age of 21 had to serve in the military for two years. Women have been exempted from conscription since the 1979 Revolution. The Iranian constitution obliges all men aged 18 and over to serve in the military or police. They cannot leave the country or be employed without completing the service period, which varies from 18 to 24 months.

Human rights

According to international reports, Iran's human rights record is exceptionally poor. The regime in Iran is undemocratic, has frequently persecuted and arrested critics of the government and its Supreme Leader, and severely restricts the participation of candidates in popular elections as well as other forms of political activity.
Women's rights in Iran are described as seriously inadequate, and children's rights have been severely violated, with more child offenders executed in Iran than in any other country in the world. Sexual activity between members of the same sex is illegal and punishable by penalties up to and including death. Over the past decade, a number of anti-government protests have broken out throughout Iran (such as the 2019–20 Iranian protests), demanding reforms or an end to the Islamic Republic. However, the IRGC and police have often suppressed mass protests by violent means, resulting in thousands of protesters killed.

Economy

Iran's economy is a mixture of central planning, state ownership of oil and other large enterprises, village agriculture, and small-scale private trading and service ventures. In 2017, GDP was $427.7 billion ($1.631 trillion at PPP), or $20,000 per capita at PPP. Iran is ranked as a lower-middle-income economy by the World Bank. In the early 21st century, the service sector contributed the largest percentage of GDP, followed by industry (mining and manufacturing) and agriculture. The Central Bank of the Islamic Republic of Iran is responsible for developing and maintaining the Iranian rial, the country's currency. The government does not recognize trade unions other than the Islamic labour councils, which are subject to the approval of employers and the security services. The minimum wage in June 2013 was 487 million rials a month ($134). Unemployment has remained above 10% since 1997, and the unemployment rate for women is almost double that of men. In 2006, about 45% of the government's budget came from oil and natural gas revenues, and 31% came from taxes and fees. Iran had earned $70 billion in foreign-exchange reserves, mostly (80%) from crude oil exports.
Iranian budget deficits have been a chronic problem, mostly due to large-scale state subsidies on foodstuffs and especially gasoline, totaling more than $84 billion in 2008 for the energy sector alone. In 2010, an economic reform plan was approved by parliament to cut subsidies gradually and replace them with targeted social assistance, with the objective of moving towards free-market prices within five years and increasing productivity and social justice. The administration continues to follow the market reform plans of its predecessor, and indicates that it will diversify Iran's oil-reliant economy. Iran has also developed biotechnology, nanotechnology, and pharmaceutical industries. However, nationalized industries such as the bonyads have often been badly managed, rendering them ineffective and uncompetitive over the years. The government is currently trying to privatize these industries, and, despite some successes, several problems remain to be overcome, such as persistent corruption in the public sector and a lack of competitiveness. Iran has leading manufacturing industries in the Middle East in the fields of automobile manufacture, transportation, construction materials, home appliances, food and agricultural goods, armaments, pharmaceuticals, information technology, and petrochemicals. According to 2012 data from the Food and Agriculture Organization, Iran has been among the world's top five producers of apricots, cherries, sour cherries, cucumbers and gherkins, dates, eggplants, figs, pistachios, quinces, walnuts, and watermelons. Economic sanctions against Iran, such as the embargo on Iranian crude oil, have damaged the economy. In 2015, Iran and the P5+1 reached a deal on the nuclear program that removed the main sanctions pertaining to Iran's nuclear program by 2016. According to the BBC, renewed U.S.
sanctions against Iran "have led to a sharp downturn in Iran's economy, pushing the value of its currency to record lows, quadrupling its annual inflation rate, driving away foreign investors, and triggering protests."

Tourism

Although tourism declined significantly during the war with Iraq, it has subsequently recovered. About 1,659,000 foreign tourists visited Iran in 2004, and 2.3 million in 2009, mostly from Asian countries, including the republics of Central Asia, while about 10% came from the European Union and North America. Since the removal of some sanctions against Iran in 2015, tourism has resurged in the country. Over five million tourists visited Iran in the fiscal year 2014–2015, four percent more than the previous year. Alongside the capital, the most popular tourist destinations are Isfahan, Mashhad, and Shiraz. In the early 2000s, the industry faced serious limitations in infrastructure, communications, industry standards, and personnel training. The majority of the 300,000 travel visas granted in 2003 were obtained by Asian Muslims, who presumably intended to visit pilgrimage sites in Mashhad and Qom. Several organized tours from Germany, France, and other European countries come to Iran annually to visit archaeological sites and monuments. In 2003, Iran ranked 68th in tourism revenues worldwide. According to UNESCO and the deputy head of research for Iran's Tourism Organization, Iran is rated fourth among the top 10 destinations in the Middle East. Domestic tourism in Iran is one of the largest in the world. Weak advertising, unstable regional conditions, a poor public image in some parts of the world, and the absence of efficient planning schemes in the tourism sector have all hindered the growth of tourism.

Transportation

Iran has a long paved road system linking most of its towns and all of its cities. In 2011 the country had of roads, of which 73% were paved. In 2008 there were nearly 100 passenger cars for every 1,000 inhabitants.
Trains operate on 11,106 km (6,942 mi) of railroad track. The country's major port of entry is Bandar-Abbas on the Strait of Hormuz. After arriving in Iran, imported goods are distributed throughout the country by trucks and freight trains. The Tehran–Bandar-Abbas railroad, opened in 1995, connects Bandar-Abbas to the railroad system of Central Asia via Tehran and Mashhad. Other major ports include Bandar-e Anzali and Bandar-e Torkeman on the Caspian Sea, and Khorramshahr and Bandar-e Emam Khomeyni on the Persian Gulf. Dozens of cities have airports that serve passenger and cargo planes. Iran Air, the national airline, was founded in 1962 and operates domestic and international flights. All large cities have mass transit systems using buses, and several private companies provide bus service between cities. Among Iranian cities, Hamadan has the highest betweenness centrality in the road network, and Tehran the highest closeness centrality in the air network. Transport in Iran is inexpensive because of the government's subsidization of the price of gasoline. The downsides are a huge drain on government coffers, economic inefficiency caused by highly wasteful consumption patterns, smuggling to neighboring countries, and air pollution. In 2008, more than one million people worked in the transportation sector, accounting for 9% of GDP. Energy Iran has the world's second largest proved gas reserves after Russia, with 33.6 trillion cubic meters, and the third largest natural gas production after Indonesia and Russia. It also ranks fourth in oil reserves, with an estimated 153.6 billion barrels. It is OPEC's second largest oil exporter, and is an energy superpower. In 2005, Iran spent US$4 billion on fuel imports, because of contraband and inefficient domestic use. Oil industry output in 2005 remained well below the peak of six million barrels per day reached in 1974. In the early 2000s, industry infrastructure was increasingly inefficient because of technological lags.
Few exploratory wells were drilled in 2005. In 2004, a large share of Iran's natural gas reserves remained untapped. The addition of new hydroelectric stations and the streamlining of conventional coal- and oil-fired stations increased installed capacity to 33,000 megawatts. Of that amount, about 75% was based on natural gas, 18% on oil, and 7% on hydroelectric power. In 2004, Iran opened its first wind-powered and geothermal plants, and the first solar thermal plant was to come online in 2009. Iran is the third country in the world to have developed gas-to-liquids (GTL) technology. Demographic trends and intensified industrialization have caused electric power demand to grow by 8% per year. The government's goal of 53,000 megawatts of installed capacity by 2010 was to be reached by bringing new gas-fired plants on line and by adding hydropower and nuclear power generation capacity. Iran's first nuclear power plant, at Bushehr, went online in 2011. It is the second nuclear power plant ever built in the Middle East, after the Metsamor Nuclear Power Plant in Armenia. Education, science and technology Education in Iran is highly centralized. K–12 education is supervised by the Ministry of Education, and higher education is under the supervision of the Ministry of Science and Technology. According to Fars News Agency, the adult literacy rate was 93.0% in September 2015, while according to UNESCO it was 85.0% in 2008 (up from 36.5% in 1976). According to data provided by UNESCO, Iran's literacy rate among people aged 15 years and older was 85.54% as of 2016, with men (90.35%) being significantly more educated than women (80.79%), and with the number of illiterate people of the same age amounting to around 8,700,000 of the country's 85 million population. According to this report, the Iranian government's expenditure on education amounts to around 4% of GDP.
The requirements for entering higher education are a high school diploma and a passing score on the Iranian University Entrance Exam (officially known as the konkur (کنکور)), which is the equivalent of the SAT and ACT exams of the United States. Many students complete a 1–2-year pre-university course (piš-dānešgāh), which is the equivalent of the GCE A-levels and the International Baccalaureate. Completion of the pre-university course earns students the Pre-University Certificate. Iran's higher education is sanctioned by different levels of diplomas, including an associate degree (kārdāni; also known as fowq e diplom) delivered in two years, a bachelor's degree (kāršenāsi; also known as lisāns) delivered in four years, and a master's degree (kāršenāsi e aršad) delivered in two years, after which another exam allows the candidate to pursue a doctoral program (PhD; known as doktorā). According to the Webometrics Ranking of World Universities, Iran's top five universities are Tehran University of Medical Sciences (478th worldwide), the University of Tehran (514th worldwide), Sharif University of Technology (605th worldwide), Amirkabir University of Technology (726th worldwide), and Tarbiat Modares University (789th worldwide). Iran was ranked 67th in the Global Innovation Index in 2020, down from 61st in 2019. Iran increased its publication output nearly tenfold from 1996 through 2004, and was ranked first in terms of output growth rate, followed by China. According to a 2012 study by SCImago, Iran would rank fourth in the world in terms of research output by 2018 if the then-current trend persisted. In 2009, a SUSE Linux-based HPC system made by the Aerospace Research Institute of Iran (ARI) was launched with 32 cores, and now runs 96 cores. Its performance was pegged at 192 GFLOPS. The Iranian humanoid robot Sorena 2, designed by engineers at the University of Tehran, was unveiled in 2010.
The Institute of Electrical and Electronics Engineers (IEEE) placed Surena among the world's five prominent robots after analyzing its performance. In the biomedical sciences, Iran's Institute of Biochemistry and Biophysics holds a UNESCO chair in biology. In late 2006, Iranian scientists successfully cloned a sheep by somatic cell nuclear transfer at the Royan Research Center in Tehran. According to a study by David Morrison and Ali Khademhosseini (Harvard-MIT and Cambridge), stem cell research in Iran is among the top 10 in the world. Iran ranks 15th in the world in nanotechnologies. Iran placed its domestically built satellite Omid into orbit on 2 February 2009, the 30th anniversary of the 1979 Revolution, using its first expendable launch vehicle, Safir, becoming the ninth country in the world capable of both producing a satellite and sending it into space from a domestically made launcher. The Iranian nuclear program was launched in the 1950s. Iran is the seventh country to produce uranium hexafluoride, and controls the entire nuclear fuel cycle. Iranian scientists outside Iran have also made major contributions to science. In 1960, Ali Javan co-invented the first gas laser, and fuzzy set theory was introduced by Lotfi A. Zadeh. Iranian cardiologist Tofigh Mussivand invented and developed the first artificial cardiac pump, the precursor of the artificial heart. Furthering research and treatment of diabetes, Samuel Rahbar discovered HbA1c. A substantial number of papers in string theory are published in Iran. Iranian-American string theorist Kamran Vafa proposed the Vafa–Witten theorem together with Edward Witten. In August 2014, Iranian mathematician Maryam Mirzakhani became the first woman, as well as the first Iranian, to receive the Fields Medal, the highest prize in mathematics.
Demographics Iran is a diverse country, consisting of numerous ethnic and linguistic groups that are unified through a shared Iranian nationality. Iran's population grew rapidly during the latter half of the 20th century, increasing from about 19 million in 1956 to more than 84 million by July 2020. However, Iran's fertility rate has dropped significantly in recent years, falling from 6.5 births per woman to just over two within two decades, leading to a population growth rate of about 1.39% as of 2018. Given its young population, studies project that growth will continue to slow until the population stabilizes at around 105 million by 2050. Iran hosts one of the largest refugee populations in the world, with almost one million refugees, mostly from Afghanistan and Iraq. Since 2006, Iranian officials have been working with the UNHCR and Afghan officials for their repatriation. According to estimates, about five million Iranian citizens have emigrated to other countries, mostly since the 1979 Revolution. According to the Iranian Constitution, the government is required to provide every citizen of the country with access to social security, covering retirement, unemployment, old age, disability, accidents, calamities, health and medical treatment, and care services. This is funded by tax revenues and income derived from public contributions. Languages The majority of the population speaks Persian, which is also the official language of the country. Others include speakers of a number of other Iranian languages within the greater Indo-European family, and languages belonging to other ethnicities living in Iran. In northern Iran, mostly confined to Gilan and Mazenderan, the Gilaki and Mazenderani languages are widely spoken, both having affinities to the neighboring Caucasian languages. In parts of Gilan, the Talysh language, whose range stretches into the neighboring Republic of Azerbaijan, is also widely spoken.
Varieties of Kurdish are widely spoken in the province of Kurdistan and nearby areas. In Khuzestan, several distinct varieties of Persian are spoken. Luri and Lari are also spoken in southern Iran. Azerbaijani, which is by far the most widely spoken language in the country after Persian, as well as a number of other Turkic languages and dialects, is spoken in various regions of Iran, especially in the region of Azerbaijan. Notable minority languages in Iran include Armenian, Georgian, Neo-Aramaic, and Arabic. Khuzi Arabic is spoken by the Arabs in Khuzestan, as well as by the wider group of Iranian Arabs. Circassian was also once widely spoken by the large Circassian minority, but, due to assimilation over the years, no sizable number of Circassians speak the language anymore. Percentages of spoken languages remain a point of debate, as many argue that they are politically motivated, most notably regarding the largest and second largest ethnicities in Iran, the Persians and Azerbaijanis. Percentages given by the CIA's World Factbook include 53% Persian, 16% Azerbaijani, 10% Kurdish, 7% Mazenderani and Gilaki, 7% Luri, 2% Turkmen, 2% Balochi, 2% Arabic, and 2% the remainder (Armenian, Georgian, Neo-Aramaic, and Circassian). Ethnic groups As with the spoken languages, the ethnic composition also remains a point of debate, mainly regarding the largest and second largest ethnic groups, the Persians and Azerbaijanis, due to the lack of Iranian state censuses based on ethnicity. The CIA's World Factbook has estimated that around 79% of the population of Iran belongs to a diverse Indo-European ethno-linguistic group comprising speakers of various Iranian languages, with Persians (including Mazenderanis and Gilaks) constituting 61% of the population, Kurds 10%, Lurs 6%, and Balochs 2%.
Peoples of other ethno-linguistic groups make up the remaining 21%, with Azerbaijanis constituting 16%, Arabs 2%, Turkmens and other Turkic tribes 2%, and others (such as Armenians, Talysh, Georgians, Circassians, and Assyrians) 1%. The Library of Congress issued slightly different estimates: 65% Persians (including Mazenderanis, Gilaks, and the Talysh), 16% Azerbaijanis, 7% Kurds, 6% Lurs, 2% Baloch, 1% Turkic tribal groups (incl. Qashqai and Turkmens), and non-Iranian, non-Turkic groups (incl. Armenians, Georgians, Assyrians, Circassians, and Arabs) less than 3%. It determined that Persian is the first language of at least 65% of the country's population, and the second language for most of the remaining 35%. Religion Twelver Shia Islam is the official state religion, to which about 90% to 95% of the population adhere. About 4% to 8% of the population are Sunni Muslims, mainly Kurds and Baloch. The remaining 2% are non-Muslim religious minorities, including Christians, Zoroastrians, Jews, Baháʼís, Mandeans, and Yarsanis. A 2020 survey by the World Values Survey found that 96.6% of Iranians believe in Islam. On the other hand, another 2020 survey, conducted online by an organization based outside of Iran, found a much smaller percentage of Iranians identifying as Muslim (32.2% as Shia, 5.0% as Sunni, and 3.2% as Sufi), and a significant fraction not identifying with any organized religion (22.2% identifying as "None," and others identifying as atheists, spiritual, agnostics, or secular humanists). According to the CIA World Factbook, around 90–95% of Iranian Muslims associate themselves with the Shia branch of Islam, the official state religion, and about 5–10% with the Sunni and Sufi branches of Islam. There is a large population of adherents of Yarsanism, a Kurdish indigenous religion, making it the largest unrecognized minority religion in Iran. Its followers are mainly Gorani Kurds and certain groups of Lurs.
They are based mainly in Kurdistan, Kermanshah, and Lorestan provinces. Christianity, Judaism, Zoroastrianism, and the Sunni branch of Islam are officially recognized by the government, and have reserved seats in the Iranian Parliament. Historically, early Iranian religions such as the Proto-Iranic religion and the subsequent Zoroastrianism and Manichaeism were the dominant religions in Iran, particularly during the Median, Achaemenid, Parthian, and Sasanian eras. This changed with the fall of the Sasanian Empire and the centuries-long Islamization that followed the Muslim conquest of Iran. Iran was predominantly Sunni until the conversion of the country (as well as the people of what is today the neighboring Republic of Azerbaijan) to Shia Islam by order of the Safavid dynasty in the 16th century. Judaism has a long history in Iran, dating back to the Achaemenid conquest of Babylonia. Although many left in the wake of the establishment of the State of Israel and the 1979 Revolution, about 8,756 to 25,000 Jewish people live in Iran. Iran has the largest Jewish population in the Middle East outside of Israel. Around 250,000 to 370,000 Christians reside in Iran, and Christianity is the country's largest recognized minority religion. Most are of Armenian background, alongside a sizable minority of Assyrians. A large number of Iranians have converted to Christianity from the predominant Shia Islam. The Baháʼí Faith is not officially recognized and has been subject to official persecution. According to the United Nations Special Rapporteur on Human Rights in Iran, Baháʼís are the largest non-Muslim religious minority in Iran, with an estimated 350,000 adherents. Since the 1979 Revolution, the persecution of Baháʼís has increased, with executions and the denial of civil rights, especially the denial of access to higher education and employment. Iranian officials have continued to support the rebuilding and renovation of Armenian churches in the Islamic Republic.
The Armenian Monastic Ensembles of Iran have also received continued support. In 2019, the Iranian government registered the Holy Savior Cathedral, commonly referred to as Vank Cathedral, in the New Julfa district of Isfahan, as a UNESCO World Heritage Site, with significant expenditures for its congregation. Currently, three Armenian churches in Iran are included in the UNESCO World Heritage List. Culture The earliest attested cultures in Iran date back to the Lower Paleolithic. Owing to its geopolitical position, Iran has influenced cultures as far as Greece and Italy to the west, Russia to the north, the Arabian Peninsula to the south, and South and East Asia to the east. Art The art of Iran encompasses many disciplines, including architecture, stonemasonry, metalworking, weaving, pottery, painting, and calligraphy. Iranian works of art show great variety in style across different regions and periods. The art of the Medes remains obscure, but has been tentatively attributed to the Scythian style. The Achaemenids borrowed heavily from the art of their neighboring civilizations, but produced a synthesis with a unique style, whose eclectic architecture remains at sites such as Persepolis and Pasargadae. Greek iconography was imported by the Seleucids, followed by the recombination of Hellenistic and earlier Near Eastern elements in the art of the Parthians, with remains such as the Temple of Anahita and the Statue of the Parthian Nobleman. By the time of the Sasanians, Iranian art underwent a general renaissance. Although of unclear development, Sasanian art was highly influential, and spread into far regions. Taq-e Bostan, Taq-e Kasra, Naqsh-e Rostam, and the Shapur-Khwast Castle are among the surviving monuments from the Sasanian period.
During the Middle Ages, Sasanian art played a prominent role in the formation of both European and Asian medieval art, which carried forward to the Islamic world, and much of what later became known as Islamic learning, including medicine, architecture, philosophy, philology, and literature, was of Sasanian basis. The Safavid era is known as the Golden Age of Iranian art, and Safavid works of art show a far more unitary development than in any other period, as part of a political evolution that reunified Iran as a cultural entity. Safavid art exerted noticeable influences upon the neighboring Ottomans, the Mughals, and the Deccans, and was also influential, through its fashion and garden architecture, on 11th–17th-century Europe. Iran's contemporary art traces its origins back to the time of Kamal-ol-Molk, a prominent realist painter at the court of the Qajar dynasty, who affected the norms of painting and adopted a naturalistic style that would compete with photographic works. A new Iranian school of fine art was established by Kamal-ol-Molk in 1928, and was followed by the so-called "coffeehouse" style of painting. Iran's avant-garde modernists emerged with the arrival of new Western influences during World War II. The vibrant contemporary art scene originated in the late 1940s, and Tehran's first modern art gallery, Apadana, was opened in September 1949 by the painters Mahmud Javadipur, Hosein Kazemi, and Hushang Ajudani. The new movements received official encouragement by the mid-1950s, which led to the emergence of artists such as Marcos Grigorian, signaling a commitment to the creation of a form of modern art grounded in Iran. Architecture The history of architecture in Iran goes back to the seventh millennium BC. Iranians were among the first to use mathematics, geometry, and astronomy in architecture. Iranian architecture displays great variety, both structural and aesthetic, developing gradually and coherently out of earlier traditions and experience.
Iran ranks seventh among UNESCO's list of countries with the most archaeological ruins and attractions from antiquity. Traditionally, the guiding formative motif of Iranian architecture has been its cosmic symbolism, "by which man is brought into communication and participation with the powers of heaven". This theme has not only given unity and continuity to the architecture of Persia, but has also been a primary source of its emotional character. According to the historian and archaeologist of Persian art Arthur Pope, the supreme Iranian art, in the proper meaning of the word, has always been its architecture. The supremacy of architecture applies to both pre- and post-Islamic periods. Weaving Iran's carpet-weaving has its origins in the Bronze Age, and is one of the most distinguished manifestations of Iranian art. Iran is the world's largest producer and exporter of handmade carpets, producing three-quarters of the world's total output and holding a 30% share of world export markets. Literature Iran's oldest literary tradition is that of Avestan, the Old Iranian sacred language of the Avesta, which consists of the legendary and religious texts of Zoroastrianism and the ancient Iranian religion, with its earliest records dating back to pre-Achaemenid times. Of the various modern languages used in Iran, Persian, various dialects of which are spoken throughout the Iranian Plateau, has the most influential literature. Persian has been dubbed a worthy language to serve as a conduit for poetry, and is considered one of the four main bodies of world literature.
In spite of originating from the region of Persis (better known as Persia) in southwestern Iran, the Persian language was used and developed further through Persianate societies in Asia Minor, Central Asia, and South Asia, leaving massive influences on Ottoman and Mughal literatures, among others. Iran has a number of famous medieval poets, most notably Rumi, Ferdowsi, Hafez, Saadi Shirazi, Omar Khayyam, and Nezami Ganjavi. Iranian literature also inspired writers such as Johann Wolfgang von Goethe, Henry David Thoreau, and Ralph Waldo Emerson. Philosophy Iranian philosophy originates from Indo-European roots, with Zoroaster's reforms having major influences. According to The Oxford Dictionary of Philosophy, the chronology of the subject and science of philosophy starts with the Indo-Iranians, dating this event to 1500 BC. The Oxford dictionary also states, "Zarathushtra's philosophy entered to influence Western tradition through Judaism, and therefore on Middle Platonism." While there are ancient relations between the Indian Vedas and the Iranian Avesta, the two main families of the Indo-Iranian philosophical traditions were characterized by fundamental differences, especially in their implications for the human being's position in society and their view of man's role in the universe. The Cyrus Cylinder, which is known as "the first charter of human rights", is often seen as a reflection of the questions and thoughts expressed by Zoroaster, and developed in Zoroastrian schools of the Achaemenid era. The earliest tenets of Zoroastrian schools are part of the extant scriptures of the Zoroastrian religion in Avestan. Among them are treatises such as the Zatspram, Shkand-gumanik Vizar, and Denkard, as well as older passages of the Avesta and the Gathas. 
Current trends in Iranian philosophy have grown limited in scope because of Islamic frames of thought, although more liberal ways of thought remain open in publications by Iranian intellectuals, especially outside Iran, where the regime has less power to restrict thought and philosophy. Mythology Iranian mythology consists of ancient Iranian folklore and stories, all involving extraordinary beings, reflecting attitudes towards the confrontation of good and evil, the actions of the gods, and the exploits of heroes and fabulous creatures. Myths play a crucial part in Iranian culture, and understanding of them is increased when they are considered within the context of actual events in Iranian history. The geography of Greater Iran, a vast area covering present-day Iran, the Caucasus, Anatolia, Mesopotamia, and Central Asia, with its high mountain ranges, plays a main role in much of Iranian mythology. Tenth-century Persian poet Ferdowsi's long epic poem Šāhnāme ("Book of Kings"), which is for the most part based on the Xwadāynāmag, a Middle Persian compilation of the history of Iranian kings and heroes from mythical times down to the reign of Chosroes II, is considered the national epic of Iran. It draws heavily on the stories and characters of the Zoroastrian tradition, from the texts of the Avesta, the Denkard, and the Bundahishn. Music Iran is the apparent birthplace of the earliest complex instruments, dating back to the third millennium BC. The use of both vertical and horizontal angular harps has been documented at the sites of Madaktu and Kul-e Farah, with the largest collection of Elamite instruments documented at Kul-e Farah. Multiple depictions of horizontal harps were also sculpted in Assyrian palaces, dating back to between 865 and 650 BC. Xenophon's Cyropaedia mentions a great number of singing women at the court of the Achaemenid Empire.
Athenaeus of Naucratis, in his Deipnosophistae, points to the capture of Achaemenid singing girls at the court of the last Achaemenid king, Darius III (336–330 BC), by the Macedonian general Parmenion. Under the Parthian Empire, the gōsān (Parthian for "minstrel") had a prominent role in society. According to Plutarch's Life of Crassus (32.3), they praised their national heroes and ridiculed their Roman rivals. Likewise, Strabo's Geographica reports that the Parthian youth were taught songs about "the deeds both of the gods and of the noblest men". The history of Sasanian music is better documented than that of the earlier periods, and is especially evident in Avestan texts. By the time of Chosroes II, the Sasanian royal court hosted a number of prominent musicians, namely Azad, Bamshad, Barbad, Nagisa, Ramtin, and Sarkash. Iranian traditional musical instruments include string instruments such as the chang (harp), qanun, santur, rud (oud, barbat), tar, dotar, setar, tanbur, and kamanche, wind instruments such as the sorna (zurna, karna) and ney, and percussion instruments such as the tompak, kus, daf (dayere), and naqare. Iran's first symphony orchestra, the Tehran Symphony Orchestra, was founded by Qolam-Hoseyn Minbashian in 1933. It was reformed by Parviz Mahmoud in 1946, and is currently Iran's oldest and largest symphony orchestra. Later, by the late 1940s, Ruhollah Khaleqi founded the country's first national music society, and established the School of National Music in 1949. Iranian pop music has its origins in the Qajar era. It developed significantly from the 1950s, using indigenous instruments and forms accompanied by electric guitar and other imported characteristics. The emergence of genres such as rock in the 1960s and hip hop in the 2000s also resulted in major movements and influences in Iranian music. Theater The earliest recorded representations of dancing figures within Iran were found in prehistoric sites such as Tepe Sialk and Tepe Mūsīān.
The oldest Iranian traditions of theater and the phenomenon of acting can be traced to ancient epic ceremonial theaters such as Sug-e Siāvuš ("mourning of Siāvaš"), as well as to dances and theatrical narrations of Iranian mythological tales reported by Herodotus and Xenophon. Iran's traditional theatrical genres include Baqqāl-bāzi ("grocer play", a form of slapstick comedy), Ruhowzi (or Taxt-howzi, comedy performed over a courtyard pool covered with boards), Siāh-bāzi (in which the central comedian appears in blackface), Sāye-bāzi (shadow play), Xeyme-šab-bāzi (marionette), Arusak-bāzi (puppetry), and Ta'zie (religious tragedy plays). Before the 1979 Revolution, the Iranian national stage had become a famous performing scene for known international artists and troupes, with the Roudaki Hall of Tehran constructed to function as the national stage for opera and ballet. Opened on 26 October 1967, the hall is home to the Tehran Symphony Orchestra, the Tehran Opera Orchestra, and the Iranian National Ballet Company, and was officially renamed Vahdat Hall after the 1979 Revolution. Loris Tjeknavorian's Rostam and Sohrab, based on the tragedy of Rostam and Sohrab from Ferdowsi's epic poem Šāhnāme, is an example of opera with a Persian libretto. Tjeknavorian, a celebrated Iranian-Armenian composer and conductor, composed it over 25 years, and it was finally performed for the first time at Tehran's Roudaki Hall, with Darya Dadvar in the role of Tahmina. Cinema and animation A third-millennium BC earthen goblet discovered at the Burnt City, a Bronze Age urban settlement in southeastern Iran, depicts what could possibly be the world's oldest example of animation. The artifact, associated with Jiroft, bears five sequential images depicting a wild goat jumping up to eat the leaves of a tree. The earliest attested Iranian examples of visual representation, however, are traced back to the bas-reliefs of Persepolis, the ritual center of the Achaemenid Empire.
The figures at Persepolis remain bound by the rules of grammar and syntax of visual language. The Iranian visual arts reached a pinnacle by the Sasanian era, and several works from this period articulate movements and actions in a highly sophisticated manner. It is even possible to see a progenitor of the cinematic close-up shot in one of these works of art, which shows a wounded wild pig escaping from the hunting ground. By the early 20th century, the then five-year-old industry of cinema came to Iran. The first Iranian filmmaker was probably Mirza Ebrahim (Akkas Bashi), the court photographer of Mozaffar-ed-Din Shah of the Qajar dynasty. Mirza Ebrahim obtained a camera and filmed the Qajar ruler's visit to Europe. Later, in 1904, Mirza Ebrahim (Sahhaf Bashi), a businessman, opened the first public movie theater in Tehran. After him, several others, such as Russi Khan, Ardeshir Khan, and Ali Vakili, tried to establish new movie theaters in Tehran. By the early 1930s, there were around 15 cinema theaters in Tehran and 11 in other provinces. The first Iranian feature film, Abi and Rabi, was a silent comedy directed by Ovanes Ohanian in 1930. The first sound film, Lor Girl, was produced by Ardeshir Irani and Abd-ol-Hosein Sepanta in 1932. Iran's animation industry began in the 1950s, and was followed by the establishment of the influential Institute for the Intellectual Development of Children and Young Adults in January 1965. The 1960s was a significant decade for Iranian cinema, with 25 commercial films produced annually on average throughout the early 1960s, increasing to 65 by the end of the decade. The majority of the production focused on melodrama and thrillers.
With the 1969 screening of the films Qeysar and The Cow, directed by Masoud Kimiai and Dariush Mehrjui respectively, alternative films set out to establish their status in the film industry, and Bahram Beyzai's Downpour and Nasser Taghvai's Tranquility in the Presence of Others followed soon after. Attempts to organize a film festival, which had begun in 1954 within the framework of the Golrizan Festival, resulted in the Sepas festival in 1969. These endeavors also resulted in the formation of the Tehran World Film Festival in 1973. After the Revolution of 1979, and following the Cultural Revolution, a new age emerged in Iranian cinema, starting with Long Live! by Khosrow Sinai and followed by many other directors, such as Abbas Kiarostami and Jafar Panahi. Kiarostami, an acclaimed Iranian director, planted Iran firmly on the map of world cinema when he won the Palme d'Or for Taste of Cherry in 1997. The continuous presence of Iranian films in prestigious international festivals, such as the Cannes Film Festival, the Venice Film Festival, and the Berlin International Film Festival, has attracted world attention to Iranian masterpieces. In 2006, six Iranian films, of six different styles, represented Iranian cinema at the Berlin International Film Festival. Critics considered this a remarkable event in the history of Iranian cinema. Asghar Farhadi, a well-known Iranian director, has received a Golden Globe Award and two Academy Awards for Best Foreign Language Film, representing Iran in 2012 and 2017. In 2012, he was named one of the 100 most influential people in the world by the American news magazine Time. Observances Iran's official New Year begins with Nowruz, an ancient Iranian tradition celebrated annually on the vernal equinox. It is enjoyed by people adhering to different religions, but is considered a holiday for the Zoroastrians.
It was registered on UNESCO's list of Masterpieces of the Oral and Intangible Heritage of Humanity in 2009, described as the Persian New Year, shared with a number of other countries in which it has historically been celebrated. On the eve of the last Wednesday of the preceding year, as a prelude to Nowruz, the ancient festival of Čāršanbe Suri celebrates Ātar ("fire") with rituals such as jumping over bonfires and setting off firecrackers and fireworks. The Nowruz celebrations last through the 13th day of the Iranian year (Farvardin 13, usually coinciding with 1 or 2 April), which marks the festival of Sizdebedar, during which people traditionally go outdoors to picnic. Yaldā, another nationally celebrated ancient tradition, commemorates the ancient goddess Mithra and marks the longest night of the year on the eve of the winter solstice (usually falling on 20 or 21 December), during which families gather together to recite poetry and eat fruits—particularly the red fruits watermelon and pomegranate, as well as mixed nuts. In some regions of the provinces of Mazanderan and Markazi, there is also the midsummer festival of Tirgān, observed on Tir 13 (2 or 3 July) as a celebration of water. Alongside the ancient Iranian celebrations, Islamic annual events such as Ramezān, Eid e Fetr, and Ruz e Āšurā are marked by the country's large Muslim population; Christian traditions such as Noel, Čelle ye Ruze, and Eid e Pāk are observed by the Christian communities; Jewish traditions such as Purim, Hanukā, and Eid e Fatir (Pesah) are observed by the Jewish communities; and Zoroastrian traditions such as Sade and Mehrgān are observed by the Zoroastrians. Public holidays Iran's official calendar is the Solar Hejri calendar, beginning at the vernal equinox in the Northern Hemisphere, which was first enacted by the Iranian Parliament on 31 March 1925.
Each of the 12 months of the Solar Hejri calendar corresponds with a zodiac sign, and the length of the year is strictly solar. The months are named after the ancient Iranian months, namely Farvardin, Ordibehešt, Xordād, Tir, Amordād, Šahrivar, Mehr, Ābān, Āzar, Dey, Bahman, and Esfand. Alternatively, the Lunar Hejri calendar is used to indicate Islamic events, and the Gregorian calendar marks international events. Legal public holidays based on the Iranian solar calendar include the cultural celebrations of Nowruz (Farvardin 1–4; 21–24 March) and Sizdebedar (Farvardin 13; 2 April), and the political events of Islamic Republic Day (Farvardin 12; 1 April), the death of Ruhollah Khomeini (Khordad 14; 4 June), the Khordad 15 event (Khordad 15; 5 June), the anniversary of the 1979 Revolution (Bahman 22; 10 February), and Oil Nationalization Day (Esfand 29; 19 March). Lunar Islamic public holidays include Tasua (Muharram 9; 30 September), Ashura (Muharram 10; 1 October), Arba'een (Safar 20; 10 November), the death of Muhammad (Safar 28; 17 November), the death of Ali al-Ridha (Safar 29 or 30; 18 November), the birthday of Muhammad (Rabi-al-Awwal 17; 6 December), the death of Fatimah (Jumada-al-Thani 3; 2 March), the birthday of Ali (Rajab 13; 10 April), Muhammad's first revelation (Rajab 27; 24 April), the birthday of Muhammad al-Mahdi (Sha'ban 15; 12 May), the death of Ali (Ramadan 21; 16 June), Eid al-Fitr (Shawwal 1–2; 26–27 June), the death of Ja'far al-Sadiq (Shawwal 25; 20 July), Eid al-Qurban (Zulhijja 10; 1 September), and Eid al-Qadir (Zulhijja 18; 9 September). Cuisine Due to its variety of ethnic groups and the influences from neighboring cultures, the cuisine of Iran is diverse. Herbs are frequently used, along with fruits such as plums, pomegranates, quince, prunes, apricots, and raisins.
To achieve a balanced taste, characteristic flavorings such as saffron, dried lime, cinnamon, and parsley are mixed delicately and used in some special dishes. Onion and garlic are commonly used in the preparation of the accompanying course, but are also served separately during meals, either in raw or pickled form. Iranian cuisine includes a wide range of main dishes, including various types of kebab, pilaf, stew (khoresh), soup and āsh, and omelette. Lunch and dinner are commonly accompanied by side dishes such as plain yogurt or mast-o-khiar, sabzi, salad Shirazi, and torshi, and may be preceded by appetizers such as borani, Mirza Qasemi, or kashk e bademjan. In Iranian culture, tea is widely consumed. Iran is the world's seventh-largest tea producer, and a cup of tea is typically the first thing offered to a guest. One of Iran's most popular desserts is falude, consisting of vermicelli in a rose-water syrup, which has its roots in the fourth century BC. There is also the popular saffron ice cream, known as bastani sonnati ("traditional ice cream"), which is sometimes accompanied by carrot juice. Iran is also famous for its caviar. Sports Iran is most likely the birthplace of polo, locally known as čowgān, with its earliest records attributed to the ancient Medes. Freestyle wrestling is traditionally considered the national sport of Iran, and the national wrestlers have been world champions on many occasions. Iran's traditional wrestling, called košti e pahlevāni ("heroic wrestling"), is registered on UNESCO's Intangible Cultural Heritage list. Being a mountainous country, Iran is a venue for skiing, snowboarding, hiking, rock climbing, and mountain climbing. It is home to several ski resorts, the most famous being Tochal, Dizin, and Shemshak, all within one to three hours' travel of the capital city Tehran. The resort of Tochal, located in the Alborz mountain range, is the world's fifth-highest ski resort ( at its highest station).
Iran's National Olympic Committee was founded in 1947. Wrestlers and weightlifters have achieved the country's highest records at the Olympics. In September 1974, Iran became the first country in West Asia to host the Asian Games. The Azadi Sport Complex, which is the largest sport complex in Iran, was |
common epithet, the Cradle of Civilization. Over the next 700 years, the regions forming modern Iraq came under Greek, Parthian, and Roman rule, with the Greeks and Parthians establishing new imperial capitals in the area at Seleucia and Ctesiphon, respectively. By the 3rd century AD, when the area once again fell under Persian (Sasanian) control, nomadic Arab tribesmen originating from South Arabia (consisting mostly of modern-day Yemen) began to migrate and settle within Lower Mesopotamia, culminating in the creation of the Sassanid-aligned Lakhmid Kingdom in around 300 AD; the Arabic name al-ʿIrāq dates to roughly this time. The Sassanid Empire was eventually conquered by the Rashidun Caliphate in the 7th century, with Iraq specifically falling under Islamic rule following the Battle of al-Qadisiyyah in 636. The city of Kufa was founded shortly thereafter in close proximity to the previous Lakhmid capital of Al-Hirah, and it became the home of the Rashidun dynasty from 656 until their overthrow by the Umayyads in 661. For nearly a century thereafter, the Umayyad Caliphate would use Damascus as its administrative capital. With the rise of the Abbasids in 750, Iraq once again became the center of Caliphate rule—first in Kufa from 750 to 752, then in Anbar for the following decade, and finally in the city of Baghdad after its founding in 762 (with al-Rūmīya serving as temporary capital for a few months prior). Baghdad would remain the capital of the Abbasid Caliphate for the majority of its existence, during which time it became the cultural and intellectual center of the world in what is known today as the Islamic Golden Age. Baghdad's rapid growth and prosperity in the 9th century would be followed by a period of stagnation in the 10th century due to the Buwayhid and Seljuq invasions, but it remained of central importance until the Mongol invasion of 1258. After this, Iraq became a province of the Turco-Mongol Ilkhanate and declined in importance.
After the disintegration of the Ilkhanate, Iraq was ruled by the Jalairids and Kara Koyunlu until its eventual absorption into the Ottoman Empire in the 16th century, intermittently falling under Iranian Safavid and Mamluk control. Ottoman rule ended with World War I, after which the British Empire administered Mandatory Iraq alongside a nominally self-governing Hashemite monarchy headed by King Faisal I. The Kingdom of Iraq was eventually granted full independence in 1932 under the terms of the Anglo-Iraqi Treaty, signed by High Commissioner Francis Humphrys and Iraqi Prime Minister Nuri al-Said two years earlier. A republic was formed in 1958 following a coup d'état. Saddam Hussein governed from 1968 to 2003, a period that included the Iran–Iraq War and the Gulf War. Saddam Hussein was deposed following the 2003 invasion of Iraq. In the following years, during the U.S. occupation, Iraq descended into a civil war from 2006 to 2008; the situation deteriorated after 2011 and escalated into a renewed war following ISIL's gains in the country in 2014. By 2015, Iraq was effectively divided, with the central and southern parts controlled by the government, the northwest by the Kurdistan Regional Government, and the western part by the Islamic State. IS was expelled from Iraq in 2017, but a low-intensity ISIL insurgency continues, mostly in the rural northern and western parts of the country, owing to Iraq's long border with Syria. Prehistory During 1957–1961, Shanidar Cave was excavated by Ralph Solecki and his team from Columbia University, and nine Neanderthal skeletons of varying ages and states of preservation and completeness (labelled Shanidar I–IX) were discovered, dating from 60,000–80,000 years BP. A tenth individual was recently discovered by M. Zeder during examination of a faunal assemblage from the site at the Smithsonian Institution.
The remains seemed to Zeder to suggest that Neanderthals had funeral ceremonies, burying their dead with flowers (although the flowers are now thought to be a modern contaminant), and that they took care of injured and elderly individuals. Mesopotamia is the site of the earliest developments of the Neolithic Revolution from around 10,000 BC. It has been identified as having "inspired some of the most important developments in human history including the invention of the wheel, the planting of the first cereal crops and the development of cursive script, Mathematics, Astronomy and Agriculture." Ancient Mesopotamia Bronze Age Sumer emerged as the civilization of Lower Mesopotamia out of the prehistoric Ubaid period (mid-6th millennium BC) in the Early Bronze Age (Uruk period). Classical Sumer ends with the rise of the Akkadian Empire in the 24th century BC. Following the Gutian period, the Ur III kingdom was once again able to unite large parts of southern and central Mesopotamia under a single ruler in the 21st century BC. It may have eventually disintegrated due to Amorite incursions. The Amorite dynasty of Isin persisted until c. 1600 BC, when southern Mesopotamia was united under Kassite Babylonian rule. The north of Mesopotamia had become the Akkadian-speaking state of Assyria by the late 25th century BC. Along with the rest of Mesopotamia, it was ruled by Akkadian kings from the late 24th to mid 22nd centuries BC, after which it once again became independent. Babylonia was a state in Lower Mesopotamia with Babylon as its capital. It was founded as an independent state by an Amorite king named Sumuabum in 1894 BC. Akkadian gradually replaced Sumerian as the spoken language of Mesopotamia somewhere around the turn of the 3rd and the 2nd millennium BC, but Sumerian continued to be used as a written or ceremonial language in Mesopotamia well into the period of classical antiquity. Babylonia emerged from the Amorite dynasties (c. 1900 BC) when Hammurabi (c.
1792–1750 BC) unified the territories of the former kingdoms of Sumer and Akkad. During the early centuries of what is called the "Amorite period", the most powerful city-states were Isin and Larsa, although Shamshi-Adad I came close to uniting the more northern regions around Assur and Mari. One of these Amorite dynasties was established in the city-state of Babylon, which would ultimately take over the others and form the first Babylonian empire, during what is also called the Old Babylonian Period. Assyria was an Akkadian (East Semitic) kingdom in Upper Mesopotamia that came to rule regional empires a number of times through history. It was named for its original capital, the ancient city of Assur. Of the early history of the kingdom of Assyria, little is positively known. In the Assyrian King List, the earliest king recorded was Tudiya. He was a contemporary of Ibrium of Ebla who appears to have lived in the late 25th or early 24th century BC, according to the king list. The foundation of the first true urbanised Assyrian monarchy was traditionally ascribed to Ushpia, a contemporary of Ishbi-Erra of Isin and Naplanum of Larsa, c. 2030 BC. Assyria had a period of empire from the 19th to 18th centuries BC. From the 14th to 11th centuries BC, Assyria once more became a major power with the rise of the Middle Assyrian Empire. Iron Age The Neo-Assyrian Empire (911–609 BC) was the dominant political force in the Ancient Near East during the Iron Age, eclipsing Babylonia, Egypt, Urartu and Elam. During this period, Aramaic was also made an official language of the empire, alongside the Akkadian language. The Neo-Babylonian Empire (626–539 BC) marks the final period of the history of the Ancient Near East preceding the Persian conquest. A year after the death of the last strong Assyrian ruler, Assurbanipal, in 627 BC, the Assyrian empire spiralled into a series of brutal civil wars.
Babylonia rebelled under Nabopolassar, a member of the Chaldean tribe which had migrated from the Levant to south eastern Babylonia in the early 9th century BC. In alliance with the Medes, Persians, Scythians and Cimmerians, they sacked the city of Nineveh in 612 BC, and the seat of empire was transferred to Babylonia for the first time since the death of Hammurabi in the mid 18th century BC. This period witnessed a general improvement in economic life and agricultural production, and a great flourishing of architectural projects, the arts and science. The Neo-Babylonian period ended with the reign of Nabonidus in 539 BC. To the east, the Persians had been growing in strength, and eventually Cyrus the Great established his dominion over Babylon. Classical Antiquity Achaemenid and Seleucid rule Mesopotamia was conquered by the Achaemenid Persians under Cyrus the Great in 539 BC, and remained under Persian rule for two centuries. The Persian Empire fell to Alexander of Macedon in 331 BC and came under Greek rule as part of the Seleucid Empire. Babylon declined after the founding of Seleucia on the Tigris, the new Seleucid Empire capital. The Seleucid Empire at the height of its power stretched from the Aegean in the west to India in the east. It was a major center of Hellenistic culture that maintained the preeminence of Greek customs where a Greek political elite dominated, mostly in the urban areas. The Greek population of the cities who formed the dominant elite were reinforced by immigration from Greece. Much of the eastern part of the empire was conquered by the Parthians under Mithridates I of Parthia in the mid-2nd century BC. Parthian and Roman rule At the beginning of the 2nd century AD, the Romans, led by emperor Trajan, invaded Parthia and conquered Mesopotamia, making it an imperial province. It was returned to the Parthians shortly after by Trajan's successor, Hadrian. 
Christianity reached Mesopotamia in the 1st century AD, and Roman Syria in particular became the center of Eastern Rite Christianity and the Syriac literary tradition. Mandaeism is also believed to have either originated there around this time or entered as Mandaeans sought refuge from Palestine. Sumerian-Akkadian religious tradition disappeared during this period, as did the last remnants of cuneiform literacy, although temples were still being dedicated to the Assyrian national god Ashur in his home city as late as the 4th century. Sassanid Empire In the 3rd century AD, the Parthians were in turn succeeded by the Sassanid dynasty, which ruled Mesopotamia until the 7th-century Islamic invasion. The Sassanids conquered the independent states of Adiabene, Osroene, Hatra and finally Assur during the 3rd century. In the mid-6th century the Persian Empire under the Sassanid dynasty was divided by Khosrow I into four quarters, of which the western one, called Khvārvarān, included most of modern Iraq, and was subdivided into the provinces of Mishān, Asuristān (Assyria), Adiabene and Lower Media. The term Iraq is widely used in the medieval Arabic sources for the area in the center and south of the modern republic as a geographic rather than a political term, implying no greater precision of boundaries than the term "Mesopotamia" or, indeed, many of the names of modern states before the 20th century. There was a substantial influx of Arabs in the Sassanid period. Upper Mesopotamia came to be known as Al-Jazirah in Arabic (meaning "The Island" in reference to the "island" between the Tigris and Euphrates rivers), and Lower Mesopotamia came to be known as ʿIrāq-i ʿArab, meaning "the escarpment of the Arabs" (viz. to the south and east of "the island"). Until 602, the desert frontier of the Persian Empire had been guarded by the Arab Lakhmid kings of Al-Hirah.
In that year, Shahanshah Khosrow II Aparviz (Persian خسرو پرويز) abolished the Lakhmid kingdom and laid the frontier open to nomad incursions. Farther north, the western quarter was bounded by the Byzantine Empire. The frontier more or less followed the modern Syria-Iraq border and continued northward, passing between Nisibis (modern Nusaybin) as the Sassanian frontier fortress and Dara and Amida (modern Diyarbakır) held by the Byzantines. Middle Ages Islamic conquest The first organized conflict between invading Arab tribes and occupying Persian forces in Mesopotamia seems to have been in 634, when the Arabs were defeated at the Battle of the Bridge. There was a force of some 5,000 Muslims under Abū `Ubayd ath-Thaqafī, which was routed by the Persians. This was followed by Khalid ibn al-Walid's successful campaign which saw all of Iraq come under Arab rule within a year, with the exception of the Persian Empire's capital, Ctesiphon. Around 636, a larger Arab Muslim force under Sa`d ibn Abī Waqqās defeated the main Persian army at the Battle of al-Qādisiyyah and moved on to capture the Persian capital of Ctesiphon. By the end of 638, the Muslims had conquered all of the Western Sassanid provinces (including modern Iraq), and the last Sassanid Emperor, Yazdegerd III, had fled to central and then northern Persia, where he was killed in 651. The Islamic expansions constituted the largest of the Semitic expansions in history. These new arrivals did not disperse and settle throughout the country; instead they established two new garrison cities, at al-Kūfah, near ancient Babylon, and at Basrah in the south, while the north remained largely Assyrian and Arab Christian in character. Abbasid Caliphate The city of Baghdad was built in the 8th century and became the capital of the Abbasid Caliphate. Baghdad soon became the primary cultural center of the Muslim world during the centuries of the incipient "Islamic Golden Age" of the 8th to 9th centuries.
In the 9th century, the Abbasid Caliphate entered a period of decline. During the late 9th to early 11th centuries, a period known as the "Iranian Intermezzo", parts of (the modern territory of) Iraq were governed by a number of minor Iranian emirates, including the Tahirids, Saffarids, Samanids, Buyids and Sallarids. Tughril, the founder of the Seljuk Empire, captured Baghdad in 1055. In spite of having lost all governance, the Abbasid caliphs nevertheless maintained a highly ritualized court in Baghdad and remained influential in religious matters, maintaining the orthodoxy of their Sunni sect in opposition to the Ismaili and Shia sects of Islam. Mongol invasion In the later 12th century, Iraq fell under the rule of the Khwarazmian dynasty. Both Turkic secular rule and the Abbasid caliphate came to an end with the Mongol invasions of the 13th century. The Mongols under Genghis Khan had conquered Khwarezmia by 1221, but Iraq proper gained a respite due to the death of Genghis Khan in 1227 and the subsequent power struggles. Möngke Khan from 1251 began a renewed expansion of the Mongol Empire, and when caliph al-Mustasim refused to submit to the Mongols, Baghdad was besieged and captured by Hulagu Khan in 1258. With the destruction of the Abbasid Caliphate, Hulagu had an open route to Syria and moved against the other Muslim powers in the region. Turco-Mongol rule Iraq now became a province on the southwestern fringes of the Ilkhanate, and Baghdad would never regain its former importance. The Jalayirids were a Mongol Jalayir dynasty which ruled over Iraq and western Persia after the breakup of the Ilkhanate in the 1330s. The Jalayirid sultanate lasted about fifty years, until disrupted by Tamerlane's conquests and the revolts of the "Black Sheep Turks" or Qara Qoyunlu Turkmen. After Tamerlane's death in 1405, there was a brief attempt to re-establish the sultanate in southern Iraq and Khuzistan. The Jalayirids were finally eliminated by the Kara Koyunlu in 1432.
Ottoman and Mamluk rule During the late 14th and early 15th centuries, the Black Sheep Turkmen ruled the area now known as Iraq. In 1466, the White Sheep Turkmen defeated the Black Sheep and took control. Later, the White Sheep were defeated by the Safavids, who took control over Mesopotamia for some time. In the 16th century, most of the territory of present-day Iraq came under the control of the Ottoman Empire as the pashalik of Baghdad. Throughout most of the period of Ottoman rule (1533–1918), the territory of present-day Iraq was a battle zone between the rival regional empires and tribal alliances. Iraq was divided into three vilayets: Mosul, Baghdad, and Basra. The Safavid dynasty of Iran briefly asserted its hegemony over Iraq in the periods of 1508–1533 and 1622–1638. During the years 1747–1831, Iraq was ruled by Mamluk officers of Georgian origin who succeeded in obtaining autonomy from the Ottoman Empire, suppressed tribal revolts, curbed the power of the Janissaries, restored order and introduced a program of modernization of the economy and military. In 1831, the Ottomans managed to overthrow the Mamluk regime and again imposed their direct control over Iraq. 20th century British mandate of Mesopotamia Ottoman rule over Iraq lasted until World War I, when the Ottomans sided with Germany and the Central Powers. In the Mesopotamian campaign against the Central Powers, British forces invaded the country and suffered a defeat at the hands of the Turkish army during the Siege of Kut (1915–16). However, the British finally won the Mesopotamian Campaign with the capture of Baghdad in March 1917. During the war the British employed the help of a number of Assyrian, Armenian and Arab tribes against the Ottomans, who in turn employed the Kurds as allies. After the war the Ottoman Empire was divided up, and the British Mandate of Mesopotamia was established by a League of Nations mandate.
Britain imposed a Hāshimite monarchy on Iraq and defined the territorial limits of Iraq without taking into account the politics of the different ethnic and religious groups in the country, in particular those of the Kurds and the Christian Assyrians to the north. During the British occupation, the Kurds fought for independence, and the British employed Assyrian Levies to help quell these insurrections. Iraq also became an oligarchy at this time. The monarch, Faisal I of Iraq, was legitimized and proclaimed King by a plebiscite in 1921; independence was achieved in 1932, when the British Mandate officially ended. Independent Kingdom of Iraq The establishment of Arab Sunni domination in Iraq was followed by Assyrian, Yazidi and Shi'a unrest, all of which was brutally suppressed. In 1936, the first military coup took place in the Kingdom of Iraq, as Bakr Sidqi succeeded in replacing the acting Prime Minister with his associate. Multiple coups followed in a period of political instability, peaking in 1941. During World War II, the Iraqi regime of Regent 'Abd al-Ilah was overthrown in 1941 by the Golden Square officers, headed by Rashid Ali. The short-lived pro-Nazi government of Iraq was defeated in May 1941 by the Allied forces (with local Assyrian and Kurdish help) in the Anglo-Iraqi War. Iraq was later used as a base for Allied attacks on the Vichy-French-held Mandate of Syria and support for the Anglo-Soviet invasion of Iran. In 1945, Iraq joined the United Nations and became a founding member of the Arab League. At the same time, the Kurdish leader Mustafa Barzani led a rebellion against the central government in Baghdad. After the failure of the uprising, Barzani and his followers fled to the Soviet Union. In 1948, massive violent protests known as the Al-Wathbah uprising broke out across Baghdad with partial communist support, in opposition to the government's treaty with Britain.
Protests continued into spring and were interrupted in May when martial law was enforced as Iraq entered the failed 1948 Arab–Israeli War along with other Arab League members. In February 1958, King Hussein of Jordan and `Abd al-Ilāh proposed a union of Hāshimite monarchies to counter the recently formed Egyptian-Syrian union. The prime minister Nuri as-Said wanted Kuwait to be part of the proposed Arab-Hāshimite Union. Shaykh `Abd-Allāh as-Salīm, the ruler of Kuwait, was invited to Baghdad to discuss Kuwait's future. This policy brought the government of Iraq into direct conflict with Britain, which did not want to grant independence to Kuwait. At that point, the monarchy found itself completely isolated. Nuri as-Said was able to contain the rising discontent only by resorting to even greater political oppression. Republic of Iraq Inspired by Gamal Abdel Nasser of Egypt, officers from the Nineteenth Brigade, 3rd Division known as "The Four Colonials", under the leadership of Brigadier Abd al-Karīm Qāsim (known as "az-Za`īm", 'the leader') and Colonel Abdul Salam Arif overthrew the Hashemite monarchy on July 14, 1958. The new government proclaimed Iraq to be a republic and rejected the idea of a union with Jordan. Iraq's activity in the Baghdad Pact ceased. In 1961, Kuwait gained independence from Britain and Iraq claimed sovereignty over Kuwait. A period of considerable instability followed. The same year, Mustafa Barzani, who had been invited to return to Iraq by Qasim three years earlier, began engaging Iraqi government forces and establishing Kurdish control in the north in what was the beginning of the First Kurdish Iraqi War. Ba'athist Iraq Qāsim was assassinated in February 1963, when the Ba'ath Party took power under the leadership of General Ahmed Hassan al-Bakr (prime minister) and Colonel Abdul Salam Arif (president). 
In June 1963, Syria, which by then had also fallen under Ba'athist rule, took part in the Iraqi military campaign against the Kurds by providing aircraft, armoured vehicles and a force of 6,000 soldiers. Several months later, `Abd as-Salam Muhammad `Arif led a successful coup against the Ba'ath government. Arif declared a ceasefire in February 1964, which provoked a split between Kurdish urban radicals on one hand and the Peshmerga ("freedom fighter") forces led by Barzani on the other. On April 13, 1966, President Abdul Salam Arif died in a helicopter crash and was succeeded by his brother, General Abdul Rahman Arif. Following this unexpected death, the Iraqi government launched a last-ditch effort to defeat the Kurds. This campaign failed in May 1966, when Barzani forces thoroughly defeated the Iraqi Army at the Battle of Mount Handrin, near Rawanduz. Following the Six-Day War of 1967, the Ba'ath Party felt strong enough to retake power in 1968. Ahmed Hassan al-Bakr became president and chairman of the Revolutionary Command Council (RCC). The Ba'ath government started a campaign to end the Kurdish insurrection, which stalled in 1969. This can be partly attributed to the internal power struggle in Baghdad and to tensions with Iran. Moreover, the Soviet Union pressured the Iraqis to come to terms with Barzani. The war ended with more than 100,000 deaths and little achievement for either the Kurdish rebels or the Iraqi government. In the aftermath of the First Kurdish Iraqi War, a peace plan was announced in March 1970 that provided for broader Kurdish autonomy. The plan also gave Kurds representation in government bodies, to be implemented in four years. Despite this, the Iraqi government embarked on an Arabization program in the oil-rich regions of Kirkuk and Khanaqin in the same period.
In the following years, the Baghdad government overcame its internal divisions, concluded a treaty of friendship with the Soviet Union in April 1972, and ended its isolation within the Arab world. The Kurds, on the other hand, remained dependent on Iranian military support and could do little to strengthen their forces. By 1974 the situation in the north escalated again into the Second Kurdish Iraqi War, which lasted until 1975. Under Saddam Hussein In July 1979, President Ahmed Hassan al-Bakr was forced to resign by Saddam Hussein, who assumed the offices of both President and Chairman of the Revolutionary Command Council. Saddam then purged his opponents, including those from within the Ba'ath party. Iraq's Territorial Claims to Neighboring Countries Iraq's territorial claims to neighboring countries were largely due to the plans and promises of the Entente countries in 1919–1920, made as the Ottoman Empire was being divided, to create a more extensive Arab state in Iraq and Jazeera, which would also have included significant territories of eastern Syria, southeastern Turkey, all of Kuwait and Iran's border areas. Territorial disputes with Iran led to an inconclusive and costly eight-year war, the Iran–Iraq War (1980–1988, termed Qādisiyyat-Saddām – 'Saddam's Qādisiyyah'), which devastated the economy. Iraq declared victory in 1988, but in fact achieved only a weary return to the status quo ante bellum: both sides retained their original borders. The war began when Iraq invaded Iran, launching a simultaneous invasion by air and land into Iranian territory on 22 September 1980, following a long history of border disputes and fears of Shia insurgency among Iraq's long-suppressed Shia majority influenced by the Iranian Revolution. Iraq was also aiming to replace Iran as the dominant Persian Gulf state. The United States supported Saddam Hussein in the war against Iran.
Although Iraq hoped to take advantage of the revolutionary chaos in Iran and attacked without formal warning, its forces made only limited progress into Iran and within several months were repelled by the Iranians, who regained virtually all lost territory by June 1982. For the next six years, Iran was on the offensive. Despite calls for a ceasefire by the United Nations Security Council, hostilities continued until 20 August 1988. The war finally ended with a United Nations-brokered ceasefire in the form of United Nations Security Council Resolution 598, which was accepted by both sides. It took several weeks for the Iranian armed forces to evacuate Iraqi territory to honor pre-war international borders between the two nations (see 1975 Algiers Agreement). The last prisoners of war were exchanged in 2003. The war came at a great cost in lives and economic damage—half a million Iraqi and Iranian soldiers, as well as civilians, are believed to have died in the war, with many more injured—but it brought neither reparations nor change in borders. The conflict is often compared to World War I, in that the tactics used closely mirrored those of that conflict, including large-scale trench warfare, manned machine-gun posts, bayonet charges, use of barbed wire across trenches, human wave attacks across no-man's land, and extensive use of chemical weapons such as mustard gas by the Iraqi government against Iranian troops and civilians as well as Iraqi Kurds. At the time, the UN Security Council issued statements that "chemical weapons had been used in the war," but these statements never made clear that it was only Iraq using chemical weapons; it has therefore been said that "the international community remained silent as Iraq used weapons of mass destruction against Iranian as well as Iraqi Kurds". A long-standing territorial dispute was the ostensible reason for Iraq's invasion of Kuwait in 1990.
In November 1990, the UN Security Council adopted Resolution 678, authorizing member states to use all necessary means against the Iraqi forces occupying Kuwait and demanding a complete withdrawal by January 15, 1991. When Saddam Hussein failed to comply with this demand, the Gulf War (Operation "Desert Storm") ensued on January 17, 1991. Estimates of Iraqi soldiers killed range from 1,500 to as many as 30,000, along with fewer than a thousand civilians. In March 1991, revolts broke out in Shia-dominated southern Iraq, involving demoralized Iraqi Army troops and anti-government Shia parties. Another wave of insurgency broke out shortly afterwards in the Kurdish-populated northern Iraq (see 1991 Iraqi uprisings). Although the rebellions presented a serious threat to the Iraqi Ba'ath Party regime, Saddam Hussein suppressed them with massive and indiscriminate force and maintained power. They were ruthlessly crushed by loyalist forces spearheaded by the Iraqi Republican Guard, and the population was successfully terrorized. During the few weeks of unrest tens of thousands of people were killed. Many more died during the following months, while nearly two million Iraqis fled for their lives. In the aftermath, the government intensified the forced relocation of Marsh Arabs and the draining of the Iraqi marshlands, while the Coalition established the Iraqi no-fly zones. On 6 August 1990, after the Iraqi invasion of Kuwait, the U.N. Security Council adopted Resolution 661, which imposed comprehensive economic sanctions on Iraq.
Mean minimum temperatures in the winter range from near freezing (just before dawn) in the northern and northeastern foothills and the western desert to and in the alluvial plains of southern Iraq. They rise to a mean maximum of about in the western desert and the northeast, and in the south. In the summer mean minimum temperatures range from about and rise to maxima between roughly . Temperatures sometimes fall below freezing and have fallen as low as at Ar Rutbah in the western desert. Such summer heat, high even for a hot desert, is largely explained by the very low elevation of the desert regions that experience these searing temperatures: cities such as Baghdad and Basra lie near sea level, because the deserts are located predominantly along the Persian Gulf. This is why Gulf countries such as Iraq, Iran and Kuwait experience exceptionally extreme summer heat at low elevations, while the mountains and higher ground have much more moderate summer temperatures. The summer months are marked by two kinds of wind phenomena. The southern and southeasterly sharqi, a dry, dusty wind with occasional gusts of , occurs from April to early June and again from late September through November. It may last for a day at the beginning and end of the season but for several days at other times. This wind is often accompanied by violent duststorms that may rise to heights of several thousand meters and close airports for brief periods. From mid-June to mid-September the prevailing wind, called the shamal, is from the north and northwest. It is a steady wind, absent only occasionally during this period. The very dry air brought by this shamal permits intensive sun heating of the land surface, but the breeze has some cooling effect.
The combination of rain shortage and extreme heat makes much of Iraq a desert. Because of very high rates of evaporation, soil and plants rapidly lose the little moisture obtained from the rain, and vegetation could not survive without extensive irrigation. Some areas, however, although arid, do have natural vegetation in contrast to the desert. For example, in the Zagros Mountains in northeastern Iraq there is permanent vegetation, such as oak trees, and date palms are found in the south. Area and boundaries In 1922 British officials concluded the Treaty of Mohammara with Abd al Aziz ibn Abd ar Rahman Al Saud, who in 1932 formed the Kingdom of Saudi Arabia. The treaty provided the basic agreement for the boundary between the eventually independent nations. Also in 1922 the two parties agreed to the creation of the diamond-shaped Neutral Zone of approximately adjacent to the western tip of Kuwait in which neither Iraq nor Saudi Arabia would build dwellings or installations. Bedouins from either country could utilize the limited water and seasonal grazing resources of the zone. In April 1975, an agreement signed in Baghdad fixed the borders of the countries. Through Algerian mediation, Iran and Iraq agreed in March 1975 to normalize their relations, and three months later they signed a treaty known as the Algiers Accord. The document defined the common border all along the Khawr Abd Allah (Shatt) River estuary as the thalweg. To compensate Iraq for the loss of what formerly had been regarded as its territory, pockets of territory along the mountain border in the central sector of its common boundary with Iran were assigned to it. Nonetheless, in September 1980 Iraq went to war with Iran, citing among other complaints the fact that Iran had not turned over to it the land specified in the Algiers Accord. This problem has subsequently proved to be a stumbling block to a negotiated settlement of the ongoing conflict. 
In 1988 the boundary with Kuwait was another outstanding problem. It was fixed in a 1913 treaty between the Ottoman Empire and British officials acting on behalf of Kuwait's ruling family, which in 1899 had ceded control over foreign affairs to Britain. The boundary was accepted by Iraq when it became independent in 1932, but in the 1960s and again in the mid-1970s, the Iraqi government advanced a claim to parts of Kuwait. Kuwait made several representations to the Iraqis during the war to fix the border once and for all, but Baghdad repeatedly demurred, claiming that the issue was a potentially divisive one that could inflame nationalist sentiment inside Iraq. Hence in 1988 it was likely that a solution would have to wait until the war ended. Area: total: land: water: Land boundaries: total: border countries: Iran , Saudi Arabia , Syria , Turkey , Kuwait , Jordan Coastline: Maritime claims: territorial sea: continental shelf: not specified Terrain: mostly broad plains; reedy marshes along Iranian border in south with large flooded areas; mountains along borders with Iran and Turkey Elevation extremes: lowest point: Persian Gulf 0 m highest point: Cheekah Dar Resources and land use Natural resources: petroleum, natural gas, phosphates, sulfur Land use: arable land: 7.89% permanent crops: 0.53% other: 91.58% (2012) Irrigated land: (2003) Total renewable water resources: (2011) Freshwater withdrawal (domestic/industrial/agricultural): total: 66 km3/yr (7%/15%/79%) per capita: 2,616 m3/yr (2000) While its proven oil reserves of rank Iraq fifth in the world behind Iran, the United States Department of Energy estimates that up to 90 percent of the country remains unexplored. Unexplored regions of Iraq could yield an additional . Iraq's oil production costs are among the lowest in the world. However, only about 2,000 oil wells have been drilled in Iraq, compared to about 1 million wells in Texas alone.
Environmental concerns Natural hazards: dust storms, sandstorms, floods Environment - current issues: government water control projects have drained most of the inhabited marsh areas east of An Nasiriyah by drying up or diverting the feeder streams and rivers; a once sizable population of Shi'a Muslims, who have inhabited these areas for thousands of years, has been displaced; furthermore, the destruction of the natural habitat poses serious threats to the area's wildlife populations; inadequate supplies of potable water. … is possible only by small boat. Here and there a few natural islands permit slightly larger clusters. Some of these people are primarily water buffalo herders and lead a semi-nomadic life. In the winter, when the waters are at a low point, they build fairly large temporary villages. In the summer they move their herds out of the marshes to the river banks. The war has had its effect on the lives of these denizens of the marshes. With much of the fighting concentrated in their areas, they have either migrated to settled communities away from the marshes or have been forced by government decree to relocate within the marshes. Also, in early 1988, the marshes had become the refuge of deserters from the Iraqi army who attempted to maintain life in the fastness of the overgrown, desolate areas while hiding out from the authorities. These deserters in many instances have formed into large gangs that raid the marsh communities; this also has induced many of the marsh dwellers to abandon their villages. The war has also affected settlement patterns in the northern Kurdish areas. There, the government rejected the guerrillas' struggle for a Kurdish state and steadily escalated violence against the local communities. Starting in 1984, the government launched a scorched-earth campaign to drive a wedge between the villagers and the guerrillas in the remote areas of two provinces of Kurdistan in which Kurdish guerrillas were active.
In the process whole villages were torched and subsequently bulldozed, which resulted in the Kurds flocking into the regional centers of Irbil and As Sulaymaniyah. Also as a "military precaution", the government has cleared a broad strip of territory in the Kurdish region along the Iranian border of all its inhabitants, hoping in this way to interdict the movement of Kurdish guerrillas back and forth between Iran and Iraq. The majority of Kurdish villages, however, remained intact in early 1988. In the arid areas of Iraq to the west and south, cities and large towns are almost invariably situated on watercourses, usually on the major rivers or their larger tributaries. In the south this dependence has had its disadvantages. Until the recent development of flood control, Baghdad and other cities were subject to the threat of inundation. Moreover, the dikes needed for protection have effectively prevented the expansion of the urban areas in some directions. The growth of Baghdad, for example, was restricted by dikes on its eastern edge. The diversion of water to the Milhat ath Tharthar and the construction of a canal transferring water from the Tigris north of Baghdad to the Diyala River have permitted the irrigation of land outside the limits of the dikes and the expansion of settlement. Climate The climate of Iraq is mainly a hot desert climate, grading to a hot semi-arid climate in the northernmost part. Average high temperatures are generally above 40 °C (104 °F) at low elevations during the summer months (June, July and August), while average low temperatures can drop below 0 °C (32 °F) in the coldest winter months. The all-time record high temperature in Iraq of 52 °C (126 °F) was recorded near An Nasiriyah on 2 August 2011. Most of the rainfall occurs from December through April and averages between annually.
The mountainous region of northern Iraq, which tends toward a Mediterranean climate, receives appreciably more precipitation than the central or southern desert region. Roughly 90% of the annual rainfall occurs between November and April, most of it in the winter months from December through March. The remaining six months, particularly the hottest ones of June, July, and August, are extremely dry. Except in the north and northeast, mean annual rainfall ranges between . Data available from stations in the foothills and steppes south and southwest of the mountains suggest mean annual rainfall between for that area. Rainfall in the mountains is more abundant and may reach a year in some places, but the terrain precludes extensive cultivation. Cultivation on nonirrigated land is limited essentially to the mountain valleys, foothills, and steppes, which have or more of rainfall annually. Even in this zone, however, only one crop a year can be grown, and shortages of rain have often led to crop failures.
Ethnic and religious groups Iraq's dominant ethnic group is the Arabs, who account for more than three-quarters of the population. According to the CIA World Factbook, citing a 1987 Iraqi government estimate, the population of Iraq is 75-80% Arab followed by 15-20% Kurdish. In addition, the estimate claims that other minorities form 5% of the country's population, including the Turkmen, Yazidis, Shabaks, Kaka'i, Bedouins, Roma, Chaldeans, Assyrians, Circassians, Arab-Kurds, Sabian-Mandaeans, and Persians. However, the International Crisis Group points out that figures from the 1987 census, as well as the 1967, 1977, and 1997 censuses, "are all considered highly problematic, due to suspicions of regime manipulation" because Iraqi citizens were only allowed to indicate belonging to either the Arab or Kurdish ethnic groups; consequently, this skewed the number of other ethnic minorities, such as Iraq's third largest ethnic group – the Turkmen. A report published by the European Parliamentary Research Service suggests that in 2015 there were 24 million Arabs (15 million Shia and 9 million Sunni); 8.4 million Kurds; 3 million Iraqi Turkmen; 1 million Black Iraqis; 500,000 Christians (including, in alphabetical order: Arab Christians, Armenians, Assyrians, Chaldean Catholics, and Syriac Orthodox); 500,000 Yazidis; 250,000 Shabaks; 50,000 Roma; 3,000 Sabian-Mandaeans; 2,000 Circassians; 1,000 Baháʼí; and a few dozen Jews. Languages Arabic and Kurdish are the two official languages of Iraq. Arabic is taught in all schools in Iraq, though in the north Kurdish is the most widely spoken language. Eastern Aramaic languages, such as Syriac and Mandaic, are spoken, as well as the Iraqi Turkmen language and various other indigenous languages. Kurdish, including several dialects, is the second largest language and has regional language status in the north of the country.
Aramaic, in antiquity spoken throughout the whole country, is now only spoken by the Assyrian Chaldean minority. The Iraqi Turkmen dialect is spoken in northern Iraq (particularly in the Turkmeneli region), and numerous languages of the Caucasus are also spoken by minorities, notably the Chechen community. Religions According to the CIA World Factbook, 98-99% of Iraqis follow Islam: 64-69% Shia and 29-34% Sunni. 5% of Muslims in Iraq describe themselves as "Just a Muslim". Christianity accounts for 1–2%, and the rest practice Yazidism, Mandaeism, and other religions.
While there has been voluntary relocation of many Christian families to northern Iraq, recent reporting indicates that the overall Christian population may have dropped by as much as 50 percent since the fall of Saddam Hussein in 2003, with many fleeing to Syria, Jordan, and Lebanon (2010 estimate). The Christian share of the population has fallen from 6% in 1991 (about 1.5 million people) to roughly a third of that; estimates put the number of Christians in Iraq at about 500,000. Nearly all Iraqi Kurds identify as Sunni Muslims. A survey in Iraq concluded that "98% of Kurds in Iraq identified themselves as Sunnis and only 2% identified as Shias". The religious differences between Sunni Arabs and Sunni Kurds are small. While 98 percent of Shia Arabs believe that visiting the shrines of saints is acceptable, 71 percent of Sunni Arabs and 59 percent of Sunni Kurds support the practice. About 94 percent of the population in Iraqi Kurdistan is Muslim. Demographic statistics The following demographic statistics are from the CIA World Factbook, unless otherwise indicated. Age structure 0–14 years: 39.01% (male 8,005,327/female 7,674,802) 15-24 years: 19.42% (male 3,976,085/female 3,829,086) 25-54 years: 33.97% (male 6,900,984/female 6,752,797) 55-64 years: 4.05% (male 788,602/female 839,291) 65 years and over: 3.55% (male 632,753/female 794,489) (2018 est.) Ethnic groups Arab: 70% Kurd: 15-25% Turkoman: 7-9% Chaldean, Assyrian and Other: 2% Languages Arabic (official) Kurdish (official) Iraqi Turkmen dialect (official only in majority speaking area) Assyrian dialect (Neo-Aramaic) (official
275 representatives present, with the remaining representatives boycotting the vote. Legislators from the Iraqi Accord Front, Sadrist Movement and Islamic Virtue Party all opposed the bill. Under the law, a region can be created out of one or more existing governorates or two or more existing regions, and a governorate can also join an existing region to create a new region. A new region can be proposed by one third or more of the council members in each affected governorate plus 500 voters or by one tenth or more voters in each affected governorate. A referendum must then be held within three months, which requires a simple majority in favour to pass. In the event of competing proposals, the multiple proposals are put to a ballot and the proposal with the most supporters is put to the referendum. In the event of an affirmative referendum a Transitional Legislative Assembly is elected for one year, which has the task of writing a constitution for the Region, which is then put to a referendum requiring a simple majority to pass. The President, Prime Minister and Ministers of the region are elected by simple majority, in contrast to the Iraqi Council of Representatives which requires two thirds support. 
Provinces Iraq is divided into 19 governorates, which are further divided into districts: Political parties Parliamentary alliances and parties National Iraqi Alliance Supreme Islamic Iraqi Council (al-Majlis al-alalith-thaura l-islamiyya fil-Iraq) – led by Ammar al-Hakim Sadrist Movement – led by Muqtada al-Sadr Islamic Dawa Party – Iraq Organisation (Hizb al-Da'wa al-Islami Tendeem al-Iraq) – led by Kasim Muhammad Taqi al-Sahlani Islamic Dawa Party (Hizb al-Da'wa al-Islamiyya) – led by Nouri al-Maliki Tribes of Iraq Coalition – led by Hamid al-Hais Islamic Fayli Grouping in Iraq – led by Muqdad Al-Baghdadi Democratic Patriotic Alliance of Kurdistan Kurdistan Democratic Party (Partiya Demokrat a Kurdistanê) – led by Massoud Barzani Patriotic Union of Kurdistan (Yaketi Nishtimani Kurdistan) – led by Jalal Talabani Kurdistan Islamic Union (Yekîtiya Islamiya Kurdistan) Movement for Change (Bizutnaway Gorran) – led by Nawshirwan Mustafa Kurdistan Toilers' Party (Parti Zahmatkeshan Kurdistan) Kurdistan Communist Party (Partiya Komunîst Kurdistan) Assyrian Patriotic Party Civil Democratic Alliance People's Party led by Faiq Al Sheikh Ali. Iraqi Ummah Party led by Mithal Al-Alusi. Iraqi Liberal Party National Democratic Action Party Iraqi List (al-Qayimaal Iraqia) Iraqi National Accord – led by Iyad Allawi The Iraqis – led by Ghazi al-Yawer Iraqi Turkmen Front (Irak Türkmen Cephesi)
National Independent Cadres and Elites People's Union (Ittihad Al Shaab) Iraqi Communist Party – led by Hamid Majid Mousa Islamic Kurdish Society – led by Ali Abd-al Aziz Islamic Labour Movement in Iraq National Democratic Party (Hizb al Dimuqratiyah al Wataniyah) – led by Samir al-Sumaidai National Rafidain List Assyrian Democratic Movement (Zowaa Dimuqrataya Aturaya) – led by Yonadam Kanna Reconciliation and Liberation Bloc The Upholders of the Message (Al-Risaliyun) Mithal al-Alusi List Yazidi Movement for Reform and Progress Other parties Communist Party of Iraq Worker-Communist Party of Iraq Leftist Worker-Communist Party of Iraq Alliance of Independent Democrats – led by Adnan Pachachi National Democratic Party – Naseer al-Chaderchi Green Party of Iraq Iraqi Democratic Union Iraqi National Accord Constitutional Monarchy Movement – led by Sharif Ali Bin al-Hussein Assyrian Patriotic Party – on the Democratic Patriotic Alliance of Kurdistan list Assyria Liberation Party Kurdistan Conservative Party Turkmen People's Party Iraqi Islamic Party – led by Ayad al-Samarrai Al Neshoor Party Illegal parties Hizb ut-Tahrir Arab Socialist Ba'ath Party (Regional Command National Command) Elections Iraqi parliamentary election, January 2005 Elections for the National Assembly of Iraq were held on January 30, 2005 in Iraq. The 275-member National Assembly was a parliament created under the Transitional Law during the Occupation of Iraq. The newly elected transitional Assembly was given a mandate to write the new and permanent Constitution of Iraq and exercised legislative functions until the new Constitution came into effect, and resulted in the formation of the Iraqi Transitional Government. The United Iraqi Alliance, tacitly backed by Shia Grand Ayatollah Ali al-Sistani, led with some 48% of the vote. The Democratic Patriotic Alliance of Kurdistan was in second place with some 26% of the vote. Prime Minister Ayad Allawi's party, the Iraqi List, came third with some 14%. 
In total, twelve parties received enough votes to win a seat in the assembly. Low Arab Sunni turnout threatened the legitimacy of the election, which was as low as 2% in Anbar province. More than 100 armed attacks on polling places took place, killing at least 44 people (including nine suicide bombers) across Iraq, including at least 20 in Baghdad. Iraqi parliamentary election, December 2005 Following the ratification of the Constitution of Iraq on 15 October 2005, a general election was held on 15 December to elect the permanent 275-member Iraqi Council of Representatives. The elections took place under a list system, whereby voters chose from a list of parties and coalitions.
230 seats were apportioned among Iraq's 18 governorates based on the number of registered voters in each as of the January 2005 elections, including 59 seats for Baghdad Governorate. The seats within each governorate were allocated to lists through a system of Proportional Representation. An additional 45 "compensatory" seats were allocated to those parties whose percentage of the national vote total (including out of country votes) exceeds the percentage of the 275 total seats that they have been allocated. Women were required to occupy 25% of the 275 seats. The change in the voting system gave more weight to Arab Sunni voters, who make up most of the voters in several provinces. It was expected that these provinces would thus return mostly Sunni Arab representatives, after most Sunnis boycotted the last election. Turnout was high (79.6%). The White House was encouraged by the relatively low levels of violence during polling, with one insurgent group making good on a promised election day moratorium on attacks, even going so far as to guard the voters from attack. President Bush frequently pointed to the election as a sign of progress in rebuilding Iraq. However, post-election violence threatened to plunge the nation into civil war, before the situation began to calm in 2007. The election results themselves produced a shaky coalition government headed by Nouri al-Maliki. Iraqi parliamentary election, 2010 A parliamentary election was held in Iraq on 7 March 2010. The election decided the 325 members of the Council of Representatives of Iraq who will elect the Iraqi Prime Minister and President. The election resulted in a partial victory for the Iraqi National Movement, led by former Interim Prime Minister Ayad Allawi, which won a total of 91 seats, making it the largest alliance in the council. The State of Law Coalition, led by incumbent Prime Minister Nouri Al-Maliki, was the second largest grouping with 89 seats. The election was rife with controversy. 
Prior to the election, the Supreme Court in Iraq ruled that the existing electoral law was unconstitutional, and a new elections law made changes in the electoral system. On 15 January 2010, the Independent High Electoral Commission (IHEC) banned 499 candidates from the election due to alleged links with the Ba'ath Party. Before the start of the campaign on 12 February 2010, IHEC confirmed that most of the appeals by banned candidates had been rejected and 456 of the initially banned candidates would not be allowed to run in the election. There were numerous allegations of fraud, and a recount of the votes in Baghdad was ordered on 19 April 2010. On May 14, IHEC announced that after 11,298 ballot boxes had been recounted, there was no sign of fraud or violations. The new parliament opened on 14 June 2010. After months of fraught negotiations, an agreement was reached on the formation of a new government on November 11. Talabani would continue as president, Al-Maliki would stay on as prime minister and Allawi would head a new security council. Iraqi parliamentary election, 2014 Parliamentary elections were held in Iraq on 30 April 2014. The elections decided the 328 members of the Council of Representatives who would in turn elect the Iraqi President and Prime Minister. Iraqi parliamentary election, 2021 On 30 November 2021, the political bloc led by Shia leader Muqtada al-Sadr was confirmed as the winner of the October parliamentary election. His Sadrist Movement won a total of 73 of the 329 seats in the parliament. The Taqadum (Progress) Party, led by Parliament Speaker Mohammed al-Halbousi, a Sunni, secured 37 seats. Former Prime Minister Nouri al-Maliki's State of Law party won 33 seats. The Al-Fatah alliance, whose main components are militia groups affiliated with the Iran-backed Popular Mobilisation Forces, suffered a crushing loss, winning only 17 seats.
The Kurdistan Democratic Party (KDP) received 31 seats, and the Patriotic Union of Kurdistan (PUK) gained 18. Issues Corruption According to Transparency International, Iraq has the most corrupt government in the Middle East, and it is described as a "hybrid regime" (between a "flawed democracy" and an "authoritarian regime"). The 2011 report "Costs of War" from Brown University's Watson Institute for International Studies concluded that the U.S. military presence in Iraq had not been able to prevent this corruption, noting that as early as 2006, "there were clear signs that post-Saddam Iraq was not going to be the
prices. At the outbreak of the war, Iraq had amassed an estimated $35 billion in foreign exchange reserves. It had the best education and health care systems in the Middle East, and thousands of migrant workers from Egypt, Somalia, and the Indian subcontinent came to the country to work in construction projects. The Iran–Iraq War and the 1980s oil glut depleted Iraq's foreign exchange reserves, devastated its economy and left the country saddled with a foreign debt of more than $40 billion. After the initial destruction of the war, oil exports gradually increased with the construction of new pipelines and the restoration of damaged facilities. Sanctions Iraq's seizure of Kuwait in August 1990, subsequent international economic sanctions on Iraq, and damage from military action by an international coalition beginning in January 1991, drastically reduced economic activity. The regime exacerbated shortages by supporting large military and internal security forces and by allocating resources to key supporters of the Ba'ath Party. The implementation of the UN's Oil for Food program in December 1996 helped improve economic conditions. For the first six six-month phases of the program, Iraq was allowed to export increasing amounts of oil in exchange for food, medicine, and other humanitarian goods. In December 1999, the UN Security Council authorized Iraq to export as much oil as required to meet humanitarian needs. Per capita, food imports increased substantially, while medical supplies and health care services steadily improved, though per capita economic production and living standards were still well below their prewar level. Iraq changed its oil reserve currency from the U.S. dollar to the euro in 2000. However, 28% of Iraq's export revenues under the program were deducted to meet UN Compensation Fund and UN administrative expenses. The drop in GDP in 2001 was largely the result of the global economic slowdown and lower oil prices. 
After the fall of Saddam Hussein The removal of sanctions on 24 May 2003 and rising oil prices in the mid-to-late 2000s led to a doubling in oil production, from a low of 1.3 mbpd during the turbulence of 2003 to a high of 2.6 mbpd in 2011. Furthermore, reduced inflation and violence since 2007 have translated into real increases in living standards for Iraqis. One of the key economic challenges was Iraq's immense foreign debt, estimated at $130 billion. Although some of this debt was derived from normal export contracts that Iraq had failed to pay, some was a result of military and financial support during Iraq's war with Iran. The Jubilee Iraq campaign argued that many of these debts were odious (illegitimate). However, as the concept of odious debt is not accepted, trying to deal with the debt on those terms would have embroiled Iraq in legal disputes for years. Iraq decided to deal with its debt more pragmatically and approached the Paris Club of official creditors. In a December 2006 Newsweek International article, a study by Global Insight in London was reported to show that "Civil war or not, Iraq has an economy, and—mother of all surprises—it's doing remarkably well. Real estate is booming. Construction, retail and wholesale trade sectors are healthy, too, according to [the report]. The U.S. Chamber of Commerce reports 34,000 registered companies in Iraq, up from 8,000 three years ago. Sales of secondhand cars, televisions and mobile phones have all risen sharply. Estimates vary, but one from Global Insight puts GDP growth at 17 per cent last year and projects 13 per cent for 2006. The World Bank has it lower: at 4 per cent this year. But, given all the attention paid to deteriorating security, the startling fact is that Iraq is growing at all." Industry Traditionally, most of Iraq's manufacturing activity has been closely connected to the oil industry.
The major industries in that category have been petroleum refining and the manufacture of chemicals and fertilizers. Before 2003, diversification was hindered by limitations on privatization and the effects of the international sanctions of the 1990s. Since 2003, security problems have blocked efforts to establish new enterprises. The construction industry is an exception; in 2000 cement was the only major industrial product not based on hydrocarbons. The construction industry has profited from the need to rebuild after Iraq's several wars. In the 1990s, the industry benefited from government funding of extensive infrastructure and housing projects and elaborate palace complexes. Primary sectors Agriculture Agriculture contributes just 3.3% to the gross national product and employs a fifth of the labor force. Historically, 50 to 60 per cent of Iraq's arable land has been under cultivation. Because of ethnic politics, valuable farmland in Kurdish territory has not contributed to the national economy, and inconsistent agricultural policies under Saddam Hussein discouraged domestic market production. Despite its abundant land and water resources, Iraq is a net food importer. Under the UN Oil for Food program, Iraq imported large quantities of grains, meat, poultry, and dairy products. The government abolished its farm collectivization program in 1981, allowing a greater role for private enterprise in agriculture. Iraqi agriculture suffered substantial physical disruption from the Gulf War, and economic disruption from sanctions imposed by the United Nations (August 1990). Sanctions curtailed imports by cutting off Iraq's petroleum exports and embargoing those agricultural production inputs deemed to have potential military applications. The Iraqi government responded by monopolizing grain and oilseed marketing, imposing production quotas, and instituting a Public Distribution System for basic foodstuffs.
By mid-1991 the government supplied a "basket" of foodstuffs that provided about one-third of the daily caloric requirement and cost consumers about five per cent of its market value. With subsidies for agricultural inputs diminished, the government's prices failed to cover their costs. The implicit tax on agricultural production was estimated to reach 20 to 35 per cent by the mid-1990s. In October 1991 the Baghdad regime withdrew personnel from the northern region controlled by two Kurdish parties. The Kurdistan Region was described as "... a market economy essentially left alone by a very weak governing structure, but heavily influenced by substantial international humanitarian aid flows." Under an "Oil for Food Program" negotiated with the United Nations, in December 1996 Iraq started exporting petroleum and used the proceeds to start importing foodstuffs three months later. Grain imports averaged $828 million from 1997 to 2001, an increase of over 180 per cent from the previous five-year period. Due to foreign competition, Iraqi production declined (29 per cent for wheat, 31 per cent for barley, and 52 per cent for maize). Because the government had generally neglected the production of forage crops, fruits, vegetables, and livestock other than poultry, those sectors had remained more traditional and market-based and less buffeted by international affairs. Nevertheless, severe drought, an outbreak of screwworm, and an epizootic of foot-and-mouth disease devastated production during this period. As the Oil for Food Program expanded to cover more agricultural inputs and machinery, the productivity of Iraqi agriculture stabilized around 2002. Following the invasion led by the United States in March 2003, with the incomes of many Iraqis devastated, the market for foodstuffs shrank.
Seeking to re-orient Iraq's economy toward private ownership and international competitiveness, the United States saw the dismantling of the Public Distribution System as essential for a market-driven agriculture. Because of the great reliance of most Iraqis on government-subsidized food, this goal was never realized. Increased productivity became the focus of much of the US-funded agricultural reconstruction program. Many of these projects were undertaken by the Agricultural Reconstruction and Development Iraq (ARDI) program run by Development Alternatives, Inc. (DAI) of Bethesda, Maryland, under a contract with USAID signed on 15 October 2003. While ARDI participated in limited ways, the restoration of Iraq's irrigation systems was mostly funded under USAID's contract with Bechtel International. ARDI conducted demonstration trials of improved practices and varieties of many crops: winter cereals (wheat and barley), summer cereals (rice, maize, and sorghum), potatoes, and tomatoes. Feed supplements and veterinary treatments were demonstrated to increase ovulation, conception, and birth weights of livestock. Surveys were conducted of poultry growers and apple farmers. Nurseries were established for date palms and grapes. College buildings and farm tractors were rebuilt. ARDI had projects promoting trade associations and producers' co-ops and supported extension as an appropriate governmental function. The contract eventually cost over $100 million and lasted through December 2006. Under its Community Action Program, USAID also funded an analysis of markets for sheep and wool. It awarded a contract to the University of Hawaii to revitalize higher education in agriculture. It awarded a contract for $120 million to the Louis Berger Group to promote Iraq's private sector, including agriculture. Starting in 2006, agricultural reconstruction was also conducted by Provincial Reconstruction Teams within the occupying Multi-National Force – Iraq. 
Intended to promote goodwill and sap the insurgency, "PRTs" allowed military commanders to identify local needs and, with few bureaucratic hurdles, to dispense up to $500,000. Civilians from many agencies within the U.S. Department of Agriculture, as well as USAID, served tours on PRTs. Some participants criticized the absence of a national agricultural strategy, or clear direction on the design of projects. Others complained that projects emphasized "American-style, 21st-century agricultural technologies and methodologies..." that were inappropriate for Iraq. Agricultural production has not rebounded noticeably from the reconstruction program. According to the Food and Agriculture Organization (FAO), between 2002 and 2013, production of wheat increased 11 per cent and milled rice
GDP per capita (measured in 1990 dollars) increased significantly during the 1950s, 1960s and 1970s, which can be explained by both higher oil production levels and higher oil prices, which famously peaked in the 1970s when the OPEC oil embargo caused the 1973 oil crisis. In the following two decades, however, GDP per capita in Iraq dropped substantially because of multiple wars, namely the 1980–88 war with Iran and the 1990–91 Gulf War. Iran–Iraq War Before the outbreak of the war with Iran in September 1980, Iraq's economic prospects were bright. Oil production had reached a level of 560,000 m³ (3.5 million barrels) per day in 1979, and oil revenues were $21 billion in 1979 and $27 billion in 1980 due to record oil prices.
troops hope to use the 68 km long railway to transport much-needed aid supplies from the port town of Umm Qasr to Basra. In June 2011, it was announced that planning had begun for a new high-speed rail line between Baghdad and Basra, with a memorandum of understanding with Alstom having been signed.
Maps UNHCR Atlas Map UN Map
Railway links with adjacent countries All adjacent countries generally use , but may vary in couplings. Neighbours with electrified railways – Turkey and Iran – both use the world standard 25 kV AC.
Turkey – via Syria
Iran – one link partially under construction and a second link planned: Khorramshahr, Iran, to Basra, Iraq – almost complete (2006); Kermanshah, Iran, to the Iraqi province of Diyala – construction commenced
Kuwait – no railways
Saudi Arabia - Jordan – partially constructed – break of gauge
Syria – same gauge – at Rabiya/Nurabiya
Road Transport An overland trans-desert bus service between Beirut, Haifa, Damascus and Baghdad was established by the Nairn Transport Company of Damascus in 1923. Roads total: 44,900 km; paved: 37,851 km; unpaved: 7,049 km (2002). Waterways 5,729 km (Euphrates River 2,815 km, Tigris River 1,899 km, Third River 565 km); the Shatt al Arab is usually navigable by maritime traffic for about 130 km. The channel has been dredged to 3 m and is in use.
Council, Economic and Social Commission for Western Asia, G-77, International Atomic Energy Agency, International Monetary Fund, International Maritime Organization, Interpol, International Organization for Standardization, International Telecommunication Union, Non-Aligned Movement, Organization of Petroleum Exporting Countries, Organization of Arab Petroleum Exporting Countries, Organisation of Islamic Cooperation, United Nations, Universal Postal Union, World Health Organization, World Bank and MENAFATF. Ministry of Foreign Affairs Iraq's relations with other countries and with international organizations are supervised by the Ministry of Foreign Affairs. In 1988 the minister of foreign affairs was Tariq Aziz, who was an influential leader of the Ba'ath Party and had served in that post since 1983. Aziz, Saddam Hussein, and the other members of the Revolutionary Command Council (RCC) formulated Iraq's foreign policy, and the Ministry of Foreign Affairs bureaucracy implemented RCC directives. The Ba'ath maintained control over the Ministry of Foreign Affairs and over all Iraqi diplomatic missions abroad. Since the overthrow of Saddam Hussein in 2003, Hoshyar Zebari has led the ministry: he was first appointed Minister of Foreign Affairs in the Iraqi Governing Council in Baghdad on 3 September 2003. On 28 June 2004, he was reappointed as Minister of Foreign Affairs by the Iraqi Interim Government, under Prime Minister Ayad Allawi. On 3 May 2005 he was sworn in as Minister of Foreign Affairs by the Iraqi Transitional Government, under Prime Minister Ibrahim al-Jaafari. On 20 May 2006, he was retained for the fourth consecutive time as Foreign Minister in the government of Nouri al-Maliki. International disputes Iran and Iraq restored diplomatic relations in 1990 but are still trying to work out written agreements settling outstanding disputes from their eight-year war concerning border demarcation, prisoners of war, and freedom of navigation and sovereignty over the Shatt al-Arab waterway; in November 1994, Iraq formally accepted the United Nations-demarcated border with Kuwait, which had been spelled out in Security Council Resolutions 687 (1991), 773 (1992), and 883 (1993); this formally ended earlier claims to Kuwait and to Bubiyan and Warbah islands, although the government continues periodic rhetorical challenges; dispute over
with the declining population predicted for most European countries. A report published in 2008 predicted that the population would reach 6.7 million by 2060. The Republic has also been experiencing a baby boom, with increasing birth rates and overall fertility rates. Despite this, the total fertility rate is still below replacement level, depending on when the measurement is taken. The Irish fertility rate is still the highest of any European country. This increase is significantly fuelled by non-Irish immigration – in 2009, a quarter of all children born in the Republic were born to mothers who had immigrated from other countries. Ethnic groups and immigration Gaelic culture and language form an important part of the Irish national identity. The Irish Travellers are an indigenous minority ethnic group, formally recognised by the Irish State since 1 March 2017. In 2008, Ireland had the highest birth rate (18.1 per 1,000), lowest death rate (6.1 per 1,000) and highest net-migration rate (14.1 per 1,000) in the entire European Union – and, as a result, the largest population growth rate (4.4%) in the 27-member bloc. There is only genetic evidence for pre-Celtic migration into Ireland. The Irish people may therefore be described as strongly influenced by Celtic language and traditions. Nationalities Ireland contains several immigrant communities, especially in Dublin and other cities across the country. The largest immigrant groups, each with over 10,000 people, are Poles, Lithuanians, Romanians, Latvians, Indians, Americans, Brazilians, Spanish, Italians, French, Germans and the British. Total fertility rate from 1850 to 1899 The total fertility rate is the number of children born per woman. It is based on fairly good data for the entire period. Sources: Our World In Data and Gapminder Foundation.
Population statistics from 1900 Current vital statistics Deaths: January–November 2020: 28,959; January–November 2021: 30,792. Nationality of mothers Of the 55,959 births in 2020, there were 43,019 babies (76.9%) born to mothers of Irish nationality, compared to 46,036 (77.0%) in 2019. There were 9.6% (10.4% in 2019) of births to mothers of EU15 to EU27 nationality, 2.1% of mothers were of UK nationality, and 2.2% were of EU14 nationality (excluding Ireland). Mothers of nationalities other than Irish, UK and EU accounted for 9.3% (8.4% in 2019) of total births registered. Life expectancy Source: UN World Population Prospects Demographic statistics Demographic statistics according to the World Population Review in 2019. One birth every 8 minutes One death every 16 minutes One net migrant every 90 minutes Net gain of one person every 14 minutes The following demographic statistics are from the Republic of Ireland's Central Statistics Office (CSO), Eurostat and the CIA World Factbook. Population Population: 5,068,050 (July 2018 est.) Ethnic groups Irish 82.2%, Irish travellers 0.7%, other white 9.5%, Asian 2.1%, black 1.4%, other 1.5%, unspecified 2.6% (2016 est.) Age structure 0–14 years: 21.37% (male 554,110 /female 529,067) 15–24 years: 11.92% (male 306,052 /female 297,890) 25–54 years: 42.86% (male 1,091,495 /female 1,080,594) 55–64 years: 10.53% (male 267,255 /female 266,438) 65 years and over: 13.32% (male 312,694 /female 362,455) (2018 est.) Median age total: 37.1 years Country comparison to the world: 70th male: 36.8 years female: 37.5 years (2018 est.) Birth rate 13.8 births/1,000 population (2018 est.) Country comparison to the world: 137th Death rate 6.6 deaths/1,000 population (2018 est.) Country comparison to the world: 140th Total fertility rate 1.96 children born/woman (2018 est.)
Country comparison to the world: 125th Net migration rate 4 migrant(s)/1,000 population (2017 est.) Country comparison to the world: 28th Population growth rate 1.11% (2018 est.) Country comparison to the world: 98th Mother's mean age at first birth 30.7 years (2015 est.) Dependency ratios total dependency ratio: 53.8 (2015 est.) youth dependency ratio: 33.4 (2015 est.) elderly dependency ratio: 20.3 (2015 est.) potential support ratio: 4.9 (2015 est.) Sex ratio at birth: 1.06 male(s)/female (2017 est.) 0–14 years: 1.05 male(s)/female (2017 est.) 15–24 years: 1.03 male(s)/female (2017 est.) 25–54 years: 1.01 male(s)/female (2017 est.) 55–64 years: 1 male(s)/female (2017 est.) 65 years and over: 0.86 male(s)/female (2017 est.) total population: 1 male(s)/female (2017 est.) Life expectancy at birth total population: 81 years Country comparison to the world: 35th male: 78.7 years female: 83.5 years (2018 est.) Nationality noun: Irishman (men), Irishwoman (women), Irish people (collective plural) adjective: Irish Nationalities in the Republic of Ireland Ethnic groups Irish, with Norse (Scandinavian), Norman, English, French, Scottish, Welsh, Ulster-Scots and various immigrant populations – the largest immigrant groups, with over 10,000 people, are the British, Poles, Americans, Lithuanians, Latvians, Romanians, Indians, Brazilians, Spanish, Italians, French, and Germans. Ethnic backgrounds: Irish: 82.2%, Irish Traveller: 0.7%, Other White: 9.5% (total White: 92.4%), Asian: 2.1%, Black: 1.3%, Other: 1.5%, Not Stated: 2.6% (2016) Religion The Republic of Ireland is a predominantly Christian country. The majority are Catholic; however, the number of people who declare themselves Catholic has been declining in recent years. Irreligion has almost doubled since 2011, with 9.8% declaring 'No Religion' in 2016, overtaking Protestantism as the second largest group in the state. The various Protestant and other Christian faiths represent 5.6%.
Immigration has brought other faiths, with Islam at 1.3% and other religions at 2.4%; 2.6% gave no answer. Geographic Population Distribution Urban population (areas with >1,500 people): 62.0% (2011) Rural population: 38.0% (2011) Languages English is the most commonly used language, with 84% of the population calling it their mother tongue. Irish is the first official language of the state, with 11% calling it their mother tongue. Irish is the main language of the Gaeltacht regions, where 96,628 people live. The main sign language used is Irish Sign Language. Literacy definition: age 15 and over who can read and write total population: 99% male: 99% female: 99% (2003 est.) School life expectancy (primary to tertiary education) total: 19 years male: 19 years female: 19 years (2016) Unemployment, youth ages 15–24 total: 17.2% Country comparison to the world: 79th male: 19.5% female: 14.6% (2016 est.) See also 2011 census of Ireland Irish diaspora Irish population analysis Stamp 4 Groups: Lithuanians in Ireland Polish minority in the Republic of Ireland Romani people in Ireland Turks in Ireland
been the norm. Seanad Éireann The Seanad is a largely advisory body. It consists of sixty members called Senators. An election for the Seanad must take place no later than 90 days after a general election for the members of the Dáil. Eleven Senators are nominated by the Taoiseach while a further six are elected by certain national universities. The remaining 43 are elected from special vocational panels of candidates, the electorate for which consists of the 60 members of the outgoing Senate, the 160 TDs of the incoming Dáil and the 883 elected members of 31 local authorities. The Seanad has the power to delay legislative proposals and is allowed 90 days to consider and amend bills sent to it by the Dáil (excluding money bills). The Seanad is only allowed 21 days to consider money bills sent to it by the Dáil. The Seanad cannot amend money bills but can make recommendations to the Dáil on such bills. No more than two members of a government may be members of the Seanad, and only twice since 1937 have members of the Seanad been appointed to government. Executive branch Executive authority is exercised by a cabinet known simply as the Government. Article 28 of the Constitution states that the Government may consist of no less than seven and no more than fifteen members, namely the Taoiseach (prime minister), the Tánaiste (deputy prime minister) and up to thirteen other ministers. The Minister for Finance is the only other position named in the Constitution. The Taoiseach is appointed by the President, after being nominated by Dáil Éireann. The remaining ministers are nominated by the Taoiseach and appointed by the President following their approval by the Dáil. The Government must enjoy the confidence of Dáil Éireann and, in the event that they cease to enjoy the support of the lower house, the Taoiseach must either resign or request the President to dissolve the Dáil, in which case a general election follows. Judicial branch Ireland is a common law jurisdiction. 
The judiciary consists of the Supreme Court, the Court of Appeal and the High Court, established by the Constitution, and other lower courts established by statute law. Judges are appointed by the President after being nominated by the Government and can be removed from office only for misbehaviour or incapacity, and then only by resolution of both houses of the Oireachtas. The final court of appeal is the Supreme Court, which consists of the Chief Justice, nine ordinary judges and the Presidents of the Court of Appeal and the High Court. The Supreme Court rarely sits as a full bench and normally hears cases in chambers of three, five or seven judges. The courts established by the Constitution have the power of judicial review and may declare invalid both laws and acts of the state which are repugnant to the Constitution.

Public sector
The Government, through the civil and public services and state-sponsored bodies, is a significant employer in the state; these three sectors are often called the public sector. Management of these various bodies varies: in the civil service there are clearly defined routes and patterns, whilst among public services a sponsoring minister or the Minister for Finance may appoint a board or commission. Commercial activities in which the state involves itself are typically carried out through the state-sponsored bodies, which are usually organised in a similar fashion to private companies. A 2005 report on public sector employment showed that in June 2005 the numbers employed in the public sector stood at 350,100: 38,700 in the civil service, 254,100 in the public service and 57,300 in state-sponsored bodies. The total workforce of the state was 1,857,400 that year, so the public sector represented approximately 19% of the total workforce.

Civil service
The civil service of Ireland consists of two broad components, the Civil Service of the Government and the Civil Service of the State.
Whilst these two components are largely theoretical, they do have some fundamental operational differences. The civil service is expected to maintain political impartiality in its work, and some sections of it are entirely independent of government decision-making.

Public service
The public service is a relatively broad term; it is not clearly defined and is sometimes taken to include the civil service. The public service proper consists of government agencies and bodies which provide services on behalf of the Government but are not part of the core civil service. For instance, local authorities, Education and Training Boards and the Garda Síochána are considered to be public services.

Local government
Article 28A of the Constitution of Ireland provides a constitutional basis for local government. The Oireachtas is empowered to establish the number, size and powers of local authorities by law. Under Article 28A, members of local authorities must be directly elected by voters at least once every five years. Local government in Ireland is governed by a series of Local Government Acts, beginning with the Local Government (Ireland) Act 1898. The most significant of these is the Local Government Act 2001, which established a two-tier structure of local government. The Local Government Reform Act 2014 abolished the bottom tier, the town councils, leaving 31 local authorities: 26 County Councils (County Dublin having been divided into three council areas), 3 City Councils (Dublin, Cork and Galway), and 2 City and County Councils (Limerick and Waterford).

Political parties
A number of political parties are represented in the Dáil, and coalition governments are common. The Irish electoral system has historically been characterised as a two-and-a-half-party system, with two large catch-all parties, the centre-right Fine Gael and the centrist Fianna Fáil, dominating, and Labour as the "half-party". This changed after the 2011 general election, following the large drop in support for Fianna Fáil and the rise in support for other parties. Ireland's political landscape changed dramatically again after the 2020 general election, when Sinn Féin made gains to become the joint-largest party in the Dáil, making Ireland a three-party system. Fianna Fáil, a traditionally Irish republican party founded in 1927 by Éamon de Valera, is the joint-largest party in the Dáil and is considered centrist in Irish politics. It first formed a government on the basis of a populist programme of land redistribution and national preference in trade, and republican populism remains a key part of its appeal. It has formed a government seven times since Ireland gained independence: 1932–1948, 1951–1954, 1957–1973, 1977–1981, 1982, 1987–1994, and 1997–2011. Fianna Fáil was the largest party in the Dáil from 1932 to 2011. It lost a huge amount of support in the 2011 general election, going from 71 to 20 seats, its lowest ever.
Its loss in support was mainly due to its handling of the 2008 economic recession. It has since regained some support, but has yet to recover to its pre-2011 levels. The other joint-largest party is Sinn Féin, established in its current form in 1970. The original Sinn Féin played a huge role in the Irish War of Independence and the First Dáil, and Fine Gael and Fianna Fáil trace their origins to that party. The current-day party has been historically linked to the Provisional IRA. Sinn Féin is a republican party which takes a more left-wing stance on economics and social policy than the Labour Party; it received the highest percentage of the vote in the 2020 general election. The third-largest party in the Dáil is Fine Gael, which has its origins in the pro-Treaty movement of Michael Collins in the Irish Civil War. Traditionally the party of law and order, it is associated with a strong belief in enterprise and reward. Despite expressions of social democracy by previous leader Garret FitzGerald, today it remains a Christian-democratic, economically liberal party along European lines, with a strongly pro-European outlook. Fine Gael was formed out of a merger of Cumann na nGaedheal, the National Centre Party and the Blueshirts, and in recent years it has generally been associated with a liberal outlook. It has formed a government in the periods 1922–1932 (as Cumann na nGaedheal), 1948–1951, 1954–1957, 1973–1977, 1981–1982, 1982–1987, 1994–1997, and 2011 to present. Fine Gael made massive gains at the 2011 general election, winning 76 seats, its highest ever. The fourth-largest party in the Dáil is the Green Party, which made significant gains at the 2020 general election. The fifth-largest party in the Dáil is the centre-left Labour Party, which was founded by James Connolly and Jim Larkin in 1912. Labour has formal links with the trade union movement and has served in seven coalition governments: six led by Fine Gael and one by Fianna Fáil.
This role as a junior coalition partner has led to Labour being classed as the "half-party" of Ireland's two-and-a-half-party system. Labour won a record number of seats, 37, at the 2011 general election, becoming the second-largest party for the first time. It went into coalition with Fine Gael, which also won a record number of seats. Labour remained Ireland's third party, or "half-party", up until the 2016 general election, when it suffered the worst general election defeat in its history, winning just 7 seats. Much of this was due to its time in government with Fine Gael, during which austerity measures were introduced to deal with the economic crisis. The sixth-largest Dáil party is the Social Democrats, founded in 2015, who made gains at the 2020 general election. The Solidarity–People Before Profit electoral alliance, consisting of Solidarity and People Before Profit, is the seventh-largest party grouping in Dáil Éireann. Formed in 2015, the group represents a left-wing, socialist viewpoint. Aontú and Right to Change have one TD each. Nineteen of the 160 members of the 33rd Dáil are independents.

Foreign relations
Ireland's foreign relations are substantially influenced by its membership of the European Union, although bilateral relations with the United States and the United Kingdom are also important to the country. It is one of the group of smaller nations in the EU and has traditionally followed a non-aligned foreign policy.

Military neutrality
Ireland tends towards independence in foreign policy; thus it is not a member of NATO and has a longstanding policy of military neutrality.
This policy has helped the Irish Defence Forces to contribute successfully to UN peacekeeping missions since 1960, beginning with the Congo Crisis (ONUC) and continuing in Cyprus (UNFICYP), Lebanon (UNIFIL), the Iran/Iraq border (UNIIMOG), Bosnia and Herzegovina (SFOR and EUFOR Althea), Ethiopia and Eritrea (UNMEE), Liberia (UNMIL), East Timor (INTERFET), and Darfur and Chad (EUFOR Tchad/RCA). The Irish Defence Forces do not deploy on overseas missions without a United Nations mandate.

Northern Ireland
Northern Ireland has been a major factor in Irish politics since the island of Ireland was divided between Northern Ireland and what is now the Republic in 1920. The creation of Northern Ireland led to conflict between northern nationalists (mostly Roman Catholic), who sought unification with the Republic, and unionists (mostly Protestant), who had opposed British plans for Irish Home Rule and wished for Northern Ireland to remain within the United Kingdom. After the formation of Northern Ireland in 1921, following its opt-out from the newly formed Irish Free State, many Roman Catholics and republicans were discriminated against. The abolition of proportional representation and the gerrymandering of constituency boundaries led to unionists being over-represented at Stormont and at Westminster. James Craig, prime minister of Northern Ireland, even boasted of a "Protestant Parliament for a Protestant People". In the 1960s the Northern Ireland Civil Rights Association (NICRA) was set up to end discrimination between Catholics and Protestants.
History
The original network was taken over by the Irish Department of Posts and Telegraphs (P&T) from the British Post Office in 1921 and used a mixture of manual and step-by-step automatic exchanges. Development of the network was relatively stagnant, with slow rollout of automatic switching using step-by-step exchanges until after WWII. From 1957 onwards, P&T began to roll out more modern crossbar switches, primarily using equipment supplied by Ericsson and built at their Athlone facility. ITT Pentaconta crossbar switches, built by CGCT (Compagnie générale de constructions téléphoniques), were also used in some areas. This saw significant improvements to many services, but the network was still quite underdeveloped in rural areas. Digital switching was introduced in 1980 using Ericsson AXE and Alcatel E10 switches, both of which were manufactured at facilities in Ireland. This saw a total transformation of the telephone network, with modern automatic and digital services reaching even the most rural parts of Ireland by the mid-1980s. The fixed-line network is now made up of multiple operators using a diverse range of digital technologies, including VoIP. Ireland's first mobile telephone network, Eircell, went live in 1986 using the analogue TACS system. 2G GSM services from Eircell launched on 1 July 1993. Digifone followed in 1997, then Meteor in 2001 (having been licensed in 1998), and 3 Ireland launched its UMTS 3G-only service in 2005. 3G services launched in 2004 (Vodafone Ireland) and other networks quickly followed suit; 4G launched in 2013 (Meteor) and is now available on most networks. Meteor was bought by Eir (then eircom) in 2005 and was eventually rebranded as Eir in 2017.
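The succession of mobile generations described above can be collected into a small timeline; this is an illustrative sketch using only the launch dates quoted in this section:

```python
# Timeline of mobile telephony milestones in Ireland, using the
# launch dates quoted in the text above.
milestones = [
    (1986, "1G", "Eircell launches analogue TACS service"),
    (1993, "2G", "Eircell launches GSM (1 July 1993)"),
    (2004, "3G", "Vodafone Ireland launches UMTS"),
    (2013, "4G", "Meteor launches LTE"),
]

# Years between successive generation launches.
gaps = [b[0] - a[0] for a, b in zip(milestones, milestones[1:])]

for year, gen, event in milestones:
    print(f"{year}: {gen} - {event}")
print("Years between generations:", gaps)  # [7, 11, 9]
```

Each generation arrived roughly a decade apart, which matches the pattern seen in most European markets.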
Internet
Internet users: 3.6 million, 77% of the population, 70th in the world (2011); 3.0 million, 67th in the world (2009)
Dial-up subscriptions: 11,437 (Q4 2012, ComStat)
Fixed broadband subscriptions: 1,506,832 (Q3 2020, ComReg)
Mobile broadband subscriptions: 323,530 (Q3 2020, ComReg)
Internet hosts: 1.4 million, 40th in the world (2012)
Internet censorship: none (2011)
Top-level domain name: .ie

Broadband
Internet access is available in Ireland via DSL, cable, wireless, and satellite. By the end of 2011, Eircom announced that 75% of its working lines would be connected to Next Generation Broadband (NGB) enabled exchanges. Currently available services (Q3 2014):
ADSL – up to 24 Mbit/s (several providers; unbundled services are available)
FTTC – VDSL up to 100 Mbit/s down, 20 Mbit/s up (several providers; vectoring technology is used)
Cable – speeds of up to 360 Mbit/s; main provider Virgin Media Ireland
Fixed Wireless Access (FWA) – various technologies in use, mostly in rural areas
Mobile broadband – UMTS (3G) and LTE (4G) services are available from several providers

A typical monthly broadband Internet subscription cost $26.02 in 2011, 14% less than the average of $30.16 for the 34 Organisation for Economic Co-operation and Development (OECD) countries surveyed. In August 2012 Pat Rabbitte, the Minister for Communications, Energy and Natural Resources, outlined a national broadband plan with goals of: 70–100 Mbit/s broadband available to at least 50 per cent of the population, at least 40 Mbit/s available to at least a further 20 per cent, and a minimum of 30 Mbit/s available to everyone, no matter how rural or remote. Founded in 1996, the Internet Neutral Exchange (INEX) is an industry-owned association that provides IP peering and traffic exchange for its members in Ireland.
The INEX switching centres are located in five secure data centres in Dublin and one in Cork: TeleCity Group in Kilcarbery Park, Dublin 22; TeleCity Group in Citywest Business Campus, Dublin 24; Interxion DUB1 and Interxion DUB2 in Park West; Vodafone Clonshaugh; and CIX, Hollyhill, Cork T23 R68N. The switches are connected by dedicated resilient fibre links. In June 2015 INEX listed 74 full and 21 associate members. Established in 1998, the Internet Service Providers Association of Ireland (ISPAI) listed 24 Internet access and hosting providers as members in 2012.

Radio and television
Infrastructure
Television in Ireland is broadcast using DVB-T, following the common platform specifications defined by NorDig which apply in the Nordic countries and Ireland. Video is encoded using the MPEG-4 system. The analogue PAL-I broadcasting system is no longer on air. Cable systems operate using the DVB-C standard, and satellite is broadcast using DVB-S/S2. Some areas still carry a range of cable channels in analogue PAL-I format, although this is normally just a legacy service provided by default; it is not possible to subscribe to analogue cable as a new customer. Radio is broadcast primarily on FM (88–108 MHz). Digital DAB radio is also available in some areas. RTÉ Radio 1 is also broadcast on longwave 252 kHz (AM); mediumwave services have been discontinued. 2RN operates a national FM network and DAB services, but most independent FM stations own their own broadcasting infrastructure. Raidió Teilifís Éireann (Radio [and] Television of Ireland; abbreviated as RTÉ) is a statutory semi-state company and the public service broadcaster that dominates the radio and TV sectors in Ireland. The first commercial radio stations began broadcasting in 1989. Prior to 1989, hundreds of pirate radio stations were a mainstay of radio listenership, particularly in Dublin, and a handful of pirate stations continue to operate illegally today.
In 1998 TV3 became the first privately owned commercial TV station, and it remains the main free-to-air service after RTÉ. Competition also comes from British public and private terrestrial TV. Satellite and cable TV are widely available. There are also non-commercial community and special-interest radio stations. RTÉ both produces programmes and broadcasts them on television, radio and the Internet, in English and Irish. The radio service began on 1 January 1926, while regular television broadcasts began on 31 December 1961, making RTÉ one of the oldest continuously operating public service broadcasters in the world.

Eir also provides an extensive vectored VDSL2-based FTTC access network over the legacy copper network, offering speeds of up to 100 Mbit/s down and 20 Mbit/s up. Retail services using this next-generation access infrastructure are provided by approximately 15 operators. SIRO, a joint venture between ESB Group and Vodafone Ireland, provides another open-access fibre-to-the-home network, used by multiple ISPs to deliver service. Fibre is run alongside ESB Networks' 230V/400V LV electricity distribution system, sharing underground ducts and poles, with fibre typically entering premises next to the electricity meter. This network, like Eir's FTTH network, delivers speeds of up to 1 Gbit/s and is capable of delivering 10 Gbit/s in the future. Ireland has three mobile networks that own and operate their own network infrastructure, and a number of MVNO operators that provide mobile phone services using one of these infrastructure providers' radio networks. The three infrastructure-owning networks are Eir Mobile, Three, and Vodafone. Meteor and Eir Mobile were the first to launch 4G LTE services in Ireland, on 26 September 2013, followed by Vodafone on 14 October 2013 and Three on 27 January 2014.
O2 was due to launch its 4G services later in 2014, but plans were put on hold when its acquisition by Three was approved in May; from the time of the merger in 2015, previous O2 customers gained 4G coverage through Three's network, albeit with initial service problems. In 2016, 41.9% of Ireland's mobile subscriptions were using 4G technology. 3G remained the dominant technology with a 44.6% share; however, it was expected to be overtaken by 4G in 2017.

Telephone system
Fixed telephone lines in use: 1,168,591 (Q3 2020, ComReg)
Mobile cellular telephones: 5,182,682 (Q3 2020, ComReg)
Country code: 353

As mobile phone services become more price-competitive, more Irish customers are opting to drop landline services. This is reflected in a sharp fall in the number of fixed-line channels in use and an equivalent increase in mobile subscriptions. Details are tracked on ComReg's ComStat website. There are three mobile telecommunications providers: Three Ireland, Eir Mobile and Vodafone Ireland. There are also some MVNOs (mobile virtual network operators), such as 48, GoMo, Lycamobile, An Post Mobile, Tesco Mobile, Virgin Mobile and Clear Mobile.
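The landline-to-mobile shift described above can be quantified directly from the Q3 2020 ComReg subscription counts quoted in this section (a minimal illustrative calculation; the figures are those cited above):

```python
# Compare fixed-line and mobile subscription counts (Q3 2020, ComReg,
# as quoted above) to illustrate the substitution trend.
fixed_lines = 1_168_591   # fixed telephone lines in use
mobile_subs = 5_182_682   # mobile cellular telephones

ratio = mobile_subs / fixed_lines
mobile_share = mobile_subs / (mobile_subs + fixed_lines)

print(f"Mobile subscriptions per fixed line: {ratio:.1f}")       # ~4.4
print(f"Mobile share of all subscriptions: {mobile_share:.0%}")  # ~82%
```

With more than four mobile subscriptions per remaining fixed line, the substitution trend the text describes is clearly visible in the raw counts.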
In 2005 the Irish Government published the Transport 21 plan, which includes €18bn for improved roads and €16bn for improved rail, including the Western Railway Corridor and the Dublin Metro. The Republic of Ireland's transport sector is responsible for 21% of the state's greenhouse gas emissions. In Northern Ireland, the road network and railways are in state ownership. The Department for Infrastructure is responsible for these and other areas (such as water services). Two of the three main airports in Northern Ireland are privately operated and owned; the exception is City of Derry Airport, which is owned and funded by Derry City Council. A statutory corporation, the Northern Ireland Transport Holding Company (which trades as Translink), operates public transport services through its three subsidiaries: NI Railways Company Limited, Ulsterbus Limited, and Citybus Limited (now branded as Metro).

Railways
Total:
broad gauge (1998); electrified; double track; some additions and removals since 1997
standard gauge (2004) (Luas tramway); electrified; double track; additional track under construction
narrow gauge (2006) (industrial railway operated by Bord na Móna)

Ireland's railways are in state ownership, with Iarnród Éireann (Irish Rail) operating services in the Republic and NI Railways operating services in Northern Ireland. The two companies co-operate in providing the joint Enterprise service between Dublin and Belfast. InterCity services are provided between Dublin and the major towns and cities of the Republic, and in Ulster along the Belfast–Derry railway line. Suburban railway networks operate in Dublin (Dublin Suburban Rail) and Belfast (Belfast Suburban Rail), with limited local services offered in, or planned for, Cork, Limerick, and Galway. The rail network in Ireland was developed by various private companies during the 19th century, with some receiving government funding. The network reached its greatest extent by 1920.
A broad gauge of 1,600 mm (5 ft 3 in) was agreed as the standard for the island, although there were also hundreds of kilometres of 914 mm (3 ft) narrow-gauge railways. Many lines in the west were decommissioned in the 1930s under Éamon de Valera, with a further large cull in services by both CIÉ and the Ulster Transport Authority (UTA) during the 1960s, leaving few working lines in the northern third of the island. There is a campaign to bring some closed lines back into service, in particular the Limerick–Sligo line (the Western Railway Corridor), to facilitate economic regeneration in the west, which has lagged behind the rest of the country. There is also a move to restore service on the Dublin to Navan line, and smaller campaigns to re-establish the rail links between Sligo and Enniskillen/Omagh/Derry and between Mullingar and Athlone/Galway. Under the Irish government's Transport 21 plan, the Cork to Midleton rail link was reopened in 2009. The re-opening of the Navan–Clonsilla rail link and the Western Rail Corridor are among future projects under the same plan. Public transport services in Northern Ireland are sparse in comparison with those of the rest of Ireland or Great Britain. A large railway network was severely curtailed in the 1950s and 1960s. Current services include suburban routes to Larne, Newry and Bangor, as well as services to Derry; there is also a branch from Coleraine to Portrush. Since 1984 an electrified train service run by Iarnród Éireann has linked Dublin with its coastal suburbs. Running initially between Bray and Howth, the Dublin Area Rapid Transit (DART) system was extended from Bray to Greystones in 2000 and further extended from Howth Junction to Malahide. In 2004 a light rail system, Luas, was opened in Dublin serving the central and western suburbs, run by Veolia under franchise from the Railway Procurement Agency. The construction of the Luas system caused much disruption in Dublin.
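The track gauges quoted above pair imperial and metric figures; a quick conversion (an illustrative check, not part of the source) confirms that they correspond:

```python
MM_PER_INCH = 25.4  # exact definition of the inch in millimetres

def gauge_mm(feet: int, inches: int = 0) -> float:
    """Convert an imperial track gauge to millimetres."""
    return (feet * 12 + inches) * MM_PER_INCH

# Irish broad gauge: 5 ft 3 in, quoted above as 1,600 mm (nominal)
assert round(gauge_mm(5, 3)) == 1600   # exactly 1600.2 mm
# Narrow gauge: 3 ft, quoted above as 914 mm
assert round(gauge_mm(3)) == 914       # exactly 914.4 mm
```

The 1,600 mm figure is thus a rounded nominal value for the exact 1600.2 mm that 5 ft 3 in converts to.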
Plans to construct a Dublin Metro service including underground lines were mooted in 2001, but stalled in the financial crisis at the end of that decade. Ireland has one of the largest dedicated freight railways in Europe, operated by Bord na Móna, totalling nearly .

Road transport
Roads and cars in Ireland
Total – South: including of motorway (2010); North: including of motorway (2008); paved – , unpaved –

Ireland's roads link Dublin with all the major cities (Belfast, Cork, Limerick, Derry, Galway, and Waterford). Driving is on the left. Signposts in the Republic of Ireland show distances in kilometres and speed limits in kilometres per hour. Distance and speed limit signs in Northern Ireland use imperial units, in common with the rest of the United Kingdom. Historically, landowners developed most roads, and later turnpike trusts collected tolls, so that as early as 1800 Ireland had a road network. In 2005 the Irish Government launched Transport 21, a plan envisaging the investment of €34 billion in transport infrastructure from 2006 until 2015. Several road projects were progressed, but the economic crisis that began in 2008–09 prevented its full implementation. Between 2011 and 2015, diesel cars constituted 70% of new cars. In 2015, 27 new cars per 1,000 inhabitants were registered in Ireland, the same as the EU average.

Bus services
Ireland's first mail coach services were contracted with the government by John Anderson with William Bourne in 1791, who also paid to improve the condition of the roads. The system of mail coaches, carriages and "bians" was further developed by Charles Bianconi, based in Clonmel, from 1815, as a forerunner of the modern Irish public transportation system. State-owned Bus Éireann (Irish Bus) currently provides most bus services in the Republic of Ireland outside Dublin, including an express coach network connecting most cities in Ireland, along with local bus services in the provincial cities.
Bus Átha Cliath (Dublin Bus), a sister company of Bus Éireann, provides most of the bus services in Dublin, with some other operators providing a number of routes. These include Aircoach, a subsidiary of FirstGroup, which provides services to Dublin Airport from Dublin city centre, South Dublin City, Greystones and Bray. Aircoach also operates two non-stop intercity express services between Dublin Airport, Dublin city centre and Cork, and a non-stop route between Belfast city centre, Dublin Airport and Dublin city. Other operators such as Irish Citylink and GoBus.ie compete on the Dublin–Galway route. Matthews Coaches run a direct service from Bettystown, Laytown and Julianstown to Dublin, whilst Dublin Coach operates services to Portlaoise and Limerick. JJ Kavanagh and Sons also operates regular services on the Portlaoise/Limerick route, as well as services to Waterford, Carlow, Kilkenny, Clonmel and a selection of regional towns and villages in the south. Some private rural operators exist, such as Halpenny's in Blackrock, County Louth, which was the first private bus operator to run a public service in Ireland, and Bus Feda (Feda O'Donnell Coaches), which operates twice-daily routes from Ranafast, County Donegal to Galway and back.
In Northern Ireland Ulsterbus provides the bus network, with its sister company Metro providing services in Belfast; both are part of state-owned Translink. Tiger Coaches operates a very late-night bus service on Friday and Saturday nights between Belfast and Lisburn. Private hire companies offer groups travelling throughout Ireland options ranging from cars to 56-passenger coaches; private coach hire companies can be found at CTTC.ie. Cross-border services (e.g. Dublin city centre to Belfast) are run primarily by a partnership of Ulsterbus and Bus Éireann, with some services run across the border exclusively by one of the two companies (e.g. Derry–Sligo, run by Bus Éireann). Aircoach, a private operator, does however operate a competing Dublin to Belfast express service via Dublin Airport.

Waterways
Total (2004) – (pleasure craft only on inland waterways; several lengthy estuarine waterways)
Grand Canal (Ireland), Royal Canal, Shannon–Erne Waterway, River Barrow, River Shannon

Pipelines
Natural gas transmission network (2003). There is a much more extensive distribution network.

Ports and harbours
Ireland has major ports in Dublin, Belfast, Cork, Rosslare, Derry and Waterford. Smaller ports exist in Arklow, Ballina, Drogheda, Dundalk, Dún Laoghaire, Foynes, Galway, Larne, Limerick, New Ross, Sligo, Warrenpoint and Wicklow. Ports in the Republic of Ireland handled 2.8 million travellers crossing the sea between Ireland and Great Britain in 2014, a decrease of 1 million passenger movements since 2003. This figure has been dropping steadily for a number of years (20% since 1999), probably as a result of low-cost airlines. Ferry connections between Britain and Ireland via the Irish Sea include the routes from Fishguard and Pembroke to Rosslare, and Cairnryan to Larne. The Stranraer to Belfast and Larne routes and the Swansea to Cork route have closed. There is also a connection between Liverpool and Belfast via the Isle of Man.
The world's largest car ferry, Ulysses, is operated by Irish Ferries on the Dublin–Holyhead route. In addition, there are ferries from Rosslare and Dublin to Cherbourg and Roscoff in France. The vast majority of heavy goods trade is done by sea. Northern Irish ports handle 10 megatonnes (Mt) of goods trade with Britain annually, while ports in the south handle 7.6 Mt, representing 50% and 40% respectively of total trade by weight. Mercantile Marine Total – 35 ships (with a volume of or over) totalling / Ships by type – bulk carrier 7, cargo ship 22, chemical tanker 1, container ship 3, roll-on/roll-off ship 1, short-sea passenger 1 Foreign-owned – Germany 3, Italy 7, Norway 2 Registered in other countries – 18 (2003 est.) Aviation Ireland has five main international airports: Dublin Airport, Belfast International Airport (Aldergrove), Cork Airport, Shannon Airport and Ireland West Airport (Knock). Dublin Airport is the busiest of these, carrying almost 28 million passengers per year; a second terminal (T2) opened in November 2010. All provide services to Great Britain and continental Europe, while Cork, Dublin and Shannon also offer transatlantic services. The London to Dublin air route is the ninth-busiest international air route in the world and the busiest international air route in Europe, with 14,500 flights between the two cities in 2017. In 2015, 4.5 million people took the route, at that time the world's second-busiest. Aer Lingus is the flag carrier of Ireland, although Ryanair is the country's largest airline. Ryanair is Europe's largest low-cost carrier, the second-largest airline in Europe in terms of passenger numbers, and the world's largest in terms of international passenger numbers. For several decades until 2007, Shannon was a mandatory stopover for transatlantic routes to the United States. In recent years it has opened a pre-screening service allowing passengers to pass through US immigration before departing from Ireland.
There are also several smaller regional airports: George Best Belfast City Airport, City of Derry Airport, Galway Airport, Kerry Airport (Farranfore) and Sligo Airport (Strandhill), among others.
both states signed the Good Friday Agreement and now co-operate closely to find a solution to the region's problems. Articles 2 and 3 of the Constitution of Ireland were amended as part of this agreement, the territorial claim being replaced with a statement of aspiration to unite the people of the island of Ireland. As part of the Good Friday Agreement, the states also ended their dispute over their respective names: Ireland and the United Kingdom of Great Britain and Northern Ireland. Each agreed to accept and use the other's correct name. When the Troubles were raging in Northern Ireland, the Irish Government sought, with mixed success, to prevent the import of weapons and ammunition through its territory by illegal paramilitary organisations for use in their conflict with the security forces in Northern Ireland. In 1973 three ships of the Irish Naval Service intercepted a ship carrying weapons from Libya which were probably destined for Irish Republican paramilitaries. Law enforcement actions such as these further improved relations with the government of the United Kingdom. However, the independent judiciary blocked a number of attempts to extradite suspects between 1970 and 1998 on the basis that their crimes might have been 'political' and thus contrary to international law at the time. Ireland is one of the parties to the Rockall continental shelf dispute that also involves Denmark, Iceland, and the United Kingdom. Ireland and the United Kingdom have signed a boundary agreement in the Rockall area. However, neither has concluded similar agreements with Iceland or Denmark (on behalf of the Faroe Islands), and the matter remains under negotiation. Iceland now claims a substantial area of the continental shelf to the west of Ireland, to a point at 49°48'N 19°00'W, which is further south than Ireland itself. The controversial Sellafield nuclear fuel reprocessing plant in north-western England has also been a contentious issue between the two governments. The Irish government has sought the closure of the plant, taking a case against the UK government under the United Nations Convention on the Law of the Sea. However, the European Court of Justice found that the case should have been dealt with under EU law. In 2006, both countries came to a friendly agreement which gave both the Radiological Protection Institute of Ireland and the Garda Síochána (the Irish police force) access to the site to conduct investigations. United States The United States recognised the Irish Free State on 28 June 1924, with diplomatic relations being established on 7 October 1924. In 1927, the United States opened an American Legation in Dublin. Due to the ancestral ties between the two countries, Ireland and the US have a strong relationship, both politically and economically, with the US being Ireland's biggest trading partner since 2000. Ireland also receives more foreign direct investment from the US than many larger nations, with investments in Ireland equal to those in France and Germany combined and, in 2012, more than all of developing Asia put together. The use of Shannon Airport as a stop-over point for US forces en route to Iraq has caused domestic controversy in Ireland. Opponents of this policy brought an unsuccessful High Court case against the government in 2003, arguing that this use of Irish airspace violated Irish neutrality. Restrictions were put in place to defend Irish neutrality, requiring that the aircraft carry no arms, ammunition or explosives, and that the flights in question did not form part of military exercises or operations. However, allegations have been made against the Central Intelligence Agency that the airport has been used between 30 and 50 times for illegal extraordinary rendition flights to the US without the knowledge of the Irish Government, despite diplomatic assurances by the US that Irish airspace would not be used for the transport of detainees. In July 2006, the then Irish Minister for Foreign Affairs, Dermot Ahern, voiced concern over the 2006 Lebanon War, and a shipment of bombs being sent to Israel by the United States was banned from using Irish airspace or airfields. In 1995 the U.S. government decided to appoint a Special Envoy to Northern Ireland to help with the Northern Ireland peace process. During the 2008 presidential campaign in the United States, however, Democratic Party candidate Barack Obama was reported as having questioned the necessity of keeping a US Special Envoy for Northern Ireland. His remarks caused uproar within the Republican Party, with Senator John McCain questioning his leadership abilities and his commitment to the ongoing peace process in Northern Ireland. Daniel Mulhall is the Irish ambassador to the United States, while the position of U.S. ambassador to Ireland is held by Claire D. Cronin. China Ireland's official relationship with the People's Republic of China began on 22 June 1979. Following his visit to China in 1999, former Taoiseach Bertie Ahern authorised the establishment of an Asia Strategy. The aim of this Strategy was to ensure that the Irish Government and Irish enterprise work coherently to enhance the important relationships between Ireland and Asia. In recent years, due to the rapid expansion of the Chinese economy, China has become a key trade partner of Ireland, with over $6bn worth of bilateral trade between the two countries in 2010. In July 2013, the Tánaiste and Minister for Foreign Affairs and Trade was invited to China by the Chinese foreign minister Wang Yi on a trade mission to boost both investment and political ties between the two countries. Ireland has raised its concerns in the area of human rights with China on a number of occasions.
conditions in the south, and snow-capped mountains in the north. Israel is located at the eastern end of the Mediterranean Sea in Western Asia. It is bounded to the north by Lebanon, the northeast by Syria, the east by Jordan and the West Bank, and to the southwest by Egypt. To the west of Israel is the Mediterranean Sea, which makes up the majority of Israel's coastline, and the Gaza Strip. Israel has a small coastline on the Red Sea in the south. Israel's area is approximately , which includes of inland water. Israel stretches from north to south, and its width ranges from at its widest point to at its narrowest point. It has an Exclusive Economic Zone of . The Israeli-occupied territories include the West Bank, East Jerusalem and the Golan Heights; geographical features in these territories will be noted as such. Of these areas, Israel has annexed East Jerusalem and the Golan Heights, an act not recognized by the international community. Southern Israel is dominated by the Negev desert, covering some , more than half of the country's total land area. The north of the Negev contains the Judean Desert, which, at its border with Jordan, contains the Dead Sea, the lowest point on Earth. The inland area of central Israel is dominated by the Judean Hills of the West Bank, whilst the central and northern coastline consists of the flat and fertile Israeli coastal plain. Inland, the northern region contains the Mount Carmel mountain range, which is followed inland by the fertile Jezreel Valley and then the hilly Galilee region. The Sea of Galilee is located beyond this region and is bordered to the east by the Golan Heights, a plateau bordered to the north by the Israeli-occupied part of the Mount Hermon massif, which includes the highest point under Israel's control. The highest point in territory internationally recognized as Israeli is Mount Meron.
Location and boundaries Israel lies to the north of the equator around 31°30' north latitude and 34°45' east longitude. It measures from north to south and, at its widest point, from east to west. At its narrowest point, however, this is reduced to just . It has a land frontier of and a coastline of . It is ranked 153rd on the list of countries and outlying territories by total area. Prior to the establishment of the British Mandate for Palestine, there was no clear-cut definition of the geographical and territorial limits of the area known as "Palestine." On the eve of World War I it was described by Encyclopædia Britannica as a "nebulous geographical concept." The Sykes–Picot Agreement of 1916 divided the region that later became Palestine into four political units. Under the British Mandate for Palestine, the first geo-political framework was created that distinguished the area from the larger countries that surrounded it. The boundary demarcation at this time did not introduce geographical changes near the frontiers, and both sides of the border were controlled by the British administration. Modern Israel is bounded to the north by Lebanon, the northeast by Syria, the east by Jordan and the West Bank, and to the southwest by Egypt. To the west of Israel is the Mediterranean Sea, which makes up the majority of Israel's 273 km (170 mi) coastline, and the Gaza Strip. Israel has a small coastline on the Red Sea in the south. The southernmost settlement in Israel is the city of Eilat, whilst the northernmost is the town of Metula. The territorial waters of Israel extend into the sea to a distance of twelve nautical miles measured from the appropriate baseline. The statistics provided by the Israel Central Bureau of Statistics include the annexed East Jerusalem and Golan Heights, but exclude the West Bank and Gaza Strip. The population of Israel includes Israeli settlers in the West Bank. The route of the Israeli West Bank barrier incorporates some parts of the West Bank.
Physiographic regions Israel is divided into four physiographic regions: the Mediterranean coastal plain, the Central Hills, the Jordan Rift Valley and the Negev Desert. Coastal plain The Israeli Coastal Plain stretches from the Lebanese border in the north to Gaza in the south, interrupted only by Cape Carmel at Haifa Bay. It is about wide at Gaza and narrows toward the north to about at the Lebanese border. The region is fertile and humid (historically malarial) and is known for its citrus orchards and viticulture. The plain is traversed by several short streams. From north to south these are: Kishon, Hadera, Alexander, Poleg, and Yarkon. All of these streams were badly polluted, but in the last ten years much work has been done to clean them up. Today the Kishon, Alexander and Yarkon again flow year-round, and also have parks along their banks. Geographically, the region is divided into five sub-regions. The northernmost section lies between the Lebanese border, the Western Galilee to the east, and the sea. It stretches from Rosh HaNikra in the north down to Haifa, Israel's third-largest city. It is a fertile region, and off the coast there are many small islands. Along the Mount Carmel range is Hof HaCarmel, or the Carmel Coastal Plain. It stretches from the point where Mount Carmel almost touches the sea, at Haifa, down to Nahal Taninim, a stream that marks the southern limit of the Carmel range. The Sharon Plain is the next section, running from Nahal Taninim (south of Zikhron Ya'akov) to Tel Aviv's Yarkon River. This area is Israel's most densely populated. South of this, running to Nahal Shikma, is the Central Coastal Plain, also known as the Western Negev. The last segment is the Southern Coastal Plain, which extends south around the Gaza Strip. It is divided into two – in the north, the Besor region, a savanna-type area with a relatively large number of communities, and south of it the Agur-Halutza region, which is very sparsely populated.
Central hills Inland (east) of the coastal plain lies the central highland region. In the north of this region lie the mountains and hills of Upper Galilee and Lower Galilee, which are generally to in height, although they reach a maximum height of at Mount Meron. South of the Galilee, in the West Bank, are the Samarian Hills with numerous small, fertile valleys rarely reaching the height of . South of Jerusalem, also mainly within the West Bank, are the Judean Hills, including Mount Hebron. The central highlands average in height and reach their highest elevation at Har Meron, at , in Galilee near Safed. Several valleys cut across the highlands roughly from east to west; the largest is the Jezreel Valley (also known as the Plain of Esdraelon), which stretches from Haifa southeast to the valley of the Jordan River, and is across at its widest point. Jordan Rift Valley East of the central highlands lies the Jordan Rift Valley, which is a small part of the -long Syrian-East African Rift. In Israel the Rift Valley is dominated by the Jordan River, the Sea of Galilee (an important freshwater source also known as Lake Tiberias and Lake Kinneret), and the Dead Sea. The Jordan, Israel's largest river, originates in the Dan, Baniyas, and Hasbani rivers near Mount Hermon in the Anti-Lebanon Mountains and flows south through the drained Hula Basin into the freshwater Lake Tiberias. Lake Tiberias is in size and, depending on the season and rainfall, is at about below sea level. With a water capacity estimated at , it serves as the principal reservoir of the National Water Carrier (also known as the Kinneret-Negev Conduit). The Jordan River continues its course from the southern end of Lake Tiberias (forming the boundary between the West Bank and Jordan) to its terminus in the highly saline Dead Sea. The Dead Sea is in size and, at below sea level, is the lowest surface point on the earth.
South of the Dead Sea, the Rift Valley continues in the Arabah (Hebrew "Arava", Arabic "Wadi 'Arabah"), which has no permanent water flow, for to the Gulf of Eilat. Negev Desert The Negev Desert comprises approximately , more than half of Israel's total land area. Geographically it is an extension of the Sinai Desert, forming a rough triangle with its base in the north near Beersheba, the Dead Sea, and the southern Judean Mountains, with its apex in the southern tip of the country at Eilat. Topographically, it parallels the other regions of the country, with lowlands in the west, hills in the central portion, and the Arava valley as its eastern border. Unique to the Negev region are the craterlike makhteshim cirques: Makhtesh Ramon, Makhtesh Gadol and Makhtesh Katan. The Negev is also sub-divided into five different ecological regions: northern, western and central Negev, the high plateau and the Arabah Valley. The northern Negev receives of rain annually and has fairly fertile soils. The western Negev receives of rain per year, with light and partially sandy soils. The central Negev has an annual precipitation of and is characterized by impervious soil, allowing minimum penetration of water, with greater soil erosion and water runoff. This can result in rare flash floods during heavy rains as water runs across the surface of the impervious desert soil. The high plateau area of Ramat HaNegev stands between and above sea level, with extreme temperatures in summer and winter. The area gets of rain each year, with inferior and partially salty soils. The Arabah Valley along the Jordanian border stretches from Eilat in the south to the tip of the Dead Sea in the north and is very arid, with barely of rain annually. Geology Israel is divided east–west by a mountain range running north to south along the coast. Jerusalem sits on the top of this ridge, east of which lies the Dead Sea graben, a pull-apart basin on the Dead Sea Transform fault. The numerous limestone and sandstone layers of the Israeli mountains serve as aquifers through which water flows from the west flank to the east. Several springs have formed along the Dead Sea, each an oasis, most notably the oases at Ein Gedi and Ein Bokek (Neve Zohar), where settlements have developed. Israel also has a number of areas of karst topography. Caves in the region have been used for thousands of years as shelter, storage rooms, barns and places of public gatherings. The far northern coastline of the country has some chalk landscapes, best seen at Rosh HaNikra, a chalk cliff.
the country, and Arabic is given special status, while English and Russian are the two most widely spoken non-official languages. English is spoken widely to some degree and is the language of choice for many Israeli businesses. Hebrew and English are mandatory subjects in the Israeli school system, and most schools offer either Arabic, French, Spanish, German, Italian or Russian. Religion According to a 2010 Israel Central Bureau of Statistics study of Israelis aged over 18: 8% of Israeli Jews define themselves as Haredim (ultra-Orthodox); 12% are "religious" (non-Haredi Orthodox, also known as dati leumi, national-religious or religious Zionist); 13% consider themselves "religious traditionalists" (mostly adhering to Jewish Halakha); 25% are "non-religious traditionalists" (only partly respecting Jewish Halakha); and 43% are "secular". While the ultra-Orthodox, or Haredim, represented only 5% of Israel's population in 1990, they are expected to represent more than one-fifth of Israel's Jewish population by 2028. By 2020, they were 12% of the population. Education Education between ages 5 and 15 is compulsory. It is not free, but it is subsidized by the government, by individual organizations (such as the Beit Yaakov system), or by a combination of the two. Parents are expected to participate in courses as well. The school system is organized into kindergartens, 6-year primary schools, and either 6-year secondary schools or 3-year junior secondary schools followed by 3-year senior secondary schools (depending on region), after which a comprehensive examination is offered for university admissions. Policy Israel is the thirtieth most densely populated country in the world.
In an academic article, Jewish National Fund board member Daniel Orenstein argues that, as elsewhere, overpopulation is a stressor on the environment in Israel; he shows that environmentalists have conspicuously failed to consider the impact of population on the environment, and argues that overpopulation in Israel has not been appropriately addressed, for ideological reasons. Citizenship and Entry Law The Citizenship and Entry into Israel Law (Temporary Order) 5763 was first passed on 31 July 2003, and has since been extended until 31 July 2008. The law places age restrictions on the automatic granting of Israeli citizenship and residency permits to spouses of Israeli citizens, such that spouses who are inhabitants of the West Bank and Gaza Strip are ineligible. On 8 May 2005, the Israeli ministerial committee for issues of legislation once again amended the Citizenship and Entry into Israel Law, to restrict citizenship and residence in Israel only to Palestinian men over the age of 35 and Palestinian women over the age of 25. Those in favor of the law say it not only limits the possibility of terrorists entering Israel but, as Ze'ev Boim asserts, allows Israel "to maintain the state's democratic nature, but also its Jewish nature" (i.e., its Jewish demographic majority). Critics, including the United Nations Committee on the Elimination of Racial Discrimination, say the law disproportionately affects Arab citizens of Israel, since Arabs in Israel are far more likely than other Israeli citizens to have spouses from the West Bank and Gaza Strip. In the constitutional challenges to the Citizenship and Entry into Israel Law, the state, represented by the Attorney General, insisted that security was the only objective behind the law. The state also added that even if the law was intended to achieve demographic objectives, it would still be in conformity with Israel's Jewish and democratic definition, and thus constitutional.
In a 2012 ruling by the Supreme Court on the issue, some of the judges on the panel discussed demography and were inclined to accept that demography is a legitimate consideration in devising family-reunification policies that violate the right to family life. Soviet immigration During the 1970s about 163,000 people of Jewish descent immigrated to Israel from the USSR. Later, Ariel Sharon, in his capacity as Minister of Housing & Construction and member of the Ministerial Committee for Immigration & Absorption, launched an unprecedented large-scale construction effort to accommodate the new Russian population in Israel, so as to facilitate their smooth integration and encourage further Jewish immigration as an ongoing means of increasing the Jewish population of Israel. Between 1989 and 2006, about 979,000 Jews emigrated from the former Soviet Union to Israel. Statistics Total population Note: includes over 200,000 Israelis and 250,000 Arabs in East Jerusalem, about 421,400 Jewish settlers in the West Bank, and about 42,000 in the Golan Heights (July 2007 estimate). Does not include Arab populations in the West Bank and Gaza Strip. Does not include 222,000 foreigners living in the country. Age structure Total: 0–14 years: 28.0%; 15–64 years: 62.1%; 65 years and over: 9.9%. Jews: 0–14 years: 25.5%; 15–64 years: 63.1%; 65 years and over: 11.4%. Arabs: 0–14 years: 37.5%; 15–64 years: 58.6%; 65 years and over: 3.9% (2010 est.). Median age Total: 29.7; Jewish: 31.6; Arab: 21.1. The Jewish median ages in the Jerusalem district and the West Bank are 24.9 and 19.7, respectively; together the two account for 16% of the Jewish population but 24% of 0- to 4-year-olds. The lowest median ages in Israel, and among the lowest in the world, are found in two of the West Bank's biggest Jewish cities, Modi'in Illit (11) and Beitar Illit (11), followed by Bedouin towns in the Negev (15.2).
Population growth rate 2.0% (2016) During the 1990s, the Jewish population growth rate was about 3% per year, as a result of massive immigration to Israel, primarily from the republics of the former Soviet Union. There is also a very high population growth rate among certain Jewish groups, especially adherents of Orthodox Judaism. The growth rate of the Arab population in Israel is 2.2%, while the growth rate of the Jewish population in Israel is 1.8%. The growth rate of the Arab population has slowed from 3.8% in 1999 to 2.2% in 2013, while for the Jewish population the growth rate declined from 2.7% to its lowest rate of 1.4% in 2005. Due to a rise in the fertility of the Jewish population since 1995, along with immigration, the growth rate has since risen to 1.8%. Birth rate 2021: Total: 19.7 births/1,000 population Jews and others: 19.1 births/1,000 population Muslims: 23.4 births/1,000 population Christians: 13.3 births/1,000 population Druze: 15.8 births/1,000 population Births, in absolute numbers, by mother's religion Between the mid-1980s and 2000, the fertility rate in the Muslim sector was stable at 4.6–4.7 children per woman; after 2001, a gradual decline became evident, reaching 3.51 children per woman in 2011. By way of comparison, in 2011 the fertility rate among the Jewish population was rising and stood at 2.98 children per woman. Births and deaths Current vital statistics Structure of the population Death rate 5.3 deaths/1,000 population (2015 est.) There were a total of 38,666 deaths in 2006 (39,026 in 2005 and 37,688 in 2000). Of these, 33,568 were Jews (34,031 in 2005 and 33,421 in 2000), 3,078 were Muslims (2,968 in 2005 and 2,683 in 2000), 360 were Druze (363 in 2005 and 305 in 2000), and 712 were Christian (686 in 2005 and 666 in 2000). Net migration rate 1.81 migrant(s)/1,000 population (2013 est.)
There were a total of 28,629 immigrants who made Aliyah to Israel in 2019 (January–October): 12,722 from Russia; 5,247 from Ukraine; 2,470 from the United States; 276 from Canada; 143 from Australia; 1,996 from France; 469 from the UK; 350 from Brazil; 321 from South Africa; 93 from Venezuela; 127 from Mexico; 143 from Turkey; 57 from Iran; 14 from Thailand and 5 from Japan. Immigration/Aliyah Immigrants by last country of residence in recent years (according to the CBS): Emigration For many years, definitive data on Israeli emigration was unavailable. In The Israeli Diaspora, sociologist Stephen J. Gold maintains that the calculation of Jewish emigration has been a contentious issue, explaining, "Since Zionism, the philosophy that underlies the existence of the Jewish state, calls for return home of the world's Jews, the opposite movement—Israelis leaving the Jewish state to reside elsewhere—clearly presents an ideological and demographic problem." In the past several decades, emigration (yerida) has seen a considerable increase. From 1990 to 2005, 230,000 Israelis left the country; a large proportion of these departures were people who initially immigrated to Israel and then reversed their course (48% of all post-1990 departures, and even 60% of 2003 and 2004 departures, were former immigrants to Israel). 8% of Jewish immigrants in the post-1990 period left Israel, while 15% of non-Jewish immigrants did. In 2005 alone, 21,500 Israelis left the country and had not yet returned by the end of 2006; among them, 73% were Jews, 5% Arabs, and 22% "Others" (mostly non-Jewish immigrants with Jewish ancestry from the USSR). At the same time, 10,500 Israelis came back to Israel after over one year abroad; 84% were Jews, 9% Others, and 7% Arabs. According to the Israel Central Bureau of Statistics, as of 2005, 650,000 Israelis had left the country for over one year and not returned, of whom 530,000 were still alive at the time of the estimate. This number does not include children born overseas.
Israeli law, moreover, grants citizenship only to the first generation of children born to Israeli emigrants. Density Geographic deployment, as of 2018: Central District: 24.5% (2,196,100) Tel Aviv District: 15.9% (1,427,200) Northern District: 16.2% (1,448,100) Southern District: 14.5% (1,302,000) Haifa District: 11.5% (1,032,800) Jerusalem District: 12.6% (1,133,700) Judea and Samaria Area (Israelis only): 4.8% (427,800) Sex ratio At birth: 1.05 male(s)/female Under 15 years: 1.05 male(s)/female 15–64 years: 1.03 male(s)/female 65 years and over: 0.78 male(s)/female Total population: 1.01 male(s)/female (2011 est.) Maternal mortality rate 2 deaths/100,000 live births (2017). Infant mortality rate Total: 4.03 deaths/1,000 live births Male: 4.20 deaths/1,000 live births Female: 3.84 deaths/1,000 live births (2013 est.) Life expectancy at birth As of 2019: Total population: 82.8 years Male: 81 years Female: 84.7 years Total fertility rate The total fertility rate (TFR) of a population is the average number of children a woman would have over her lifetime. 3.01 children born/woman (2019) Jewish total fertility rate increased by 10.2% during 1998–2009 and was recorded at 2.90 in 2009. During the same period, Arab TFR decreased by 20.5%. Muslim TFR was measured at 3.73 for 2009. During 2000, the Arab TFR in Jerusalem (4.43) was higher than that of the Jews residing there (3.79). But as of 2009, Jewish TFR in | and the Muslim residents of East Jerusalem and Area C, who have Israeli residency or citizenship. Cities Within Israel's system of local government, an urban municipality can be granted a city council by the Israeli Interior Ministry when its population exceeds 20,000. The term "city" does not generally refer to local councils or urban agglomerations, even though a defined city often contains only a small portion of an urban area or metropolitan area's population.
Ethnic and religious groups The most prominent ethnic and religious groups who live in Israel at present, and who are Israeli citizens or nationals, are as follows: Jews According to Israel's Central Bureau of Statistics, in 2008, of Israel's 7.3 million people, 75.6 percent were Jews of any background. Among them, 70.3 percent were Sabras (born in Israel), mostly second- or third-generation Israelis, and the rest were olim (Jewish immigrants to Israel)—20.5 percent from Europe and the Americas, and 9.2 percent from Asia and Africa, including the Arab countries. About 44.9 percent of Israel's Jewish population identify as either Mizrahi or Sephardi, 44.2% identify as Ashkenazi, about 3% as Beta Israel, and 7.9% as mixed or other. The paternal lineage of the Jewish population of Israel as of 2015 is as follows: Arabs Arab citizens of Israel are those Arab residents of Mandatory Palestine who remained within Israel's borders following the 1948 Arab–Israeli War and the establishment of the state of Israel. This includes those born within the state's borders after that time, as well as those who left during the establishment of the state (or their descendants) and have since re-entered by means accepted as lawful residence by the Israeli state (primarily family reunifications). In 2019, the official number of Arab residents in Israel was 1,890,000 people, representing 21% of Israel's population. This figure includes 209,000 Arabs (14% of the Israeli Arab population) in East Jerusalem, who are also counted in the Palestinian statistics, although 98 percent of East Jerusalem Palestinians have either Israeli residency or Israeli citizenship. Arab Muslims Most Arab citizens of Israel are Muslim, particularly of the Sunni branch of Islam. A small minority belong to the Ahmadiyya sect, and there are also some Alawites (affiliated with Shia Islam) with Israeli citizenship in the northernmost village of Ghajar.
As of 2019, Arab citizens of Israel composed 21 percent of the country's total population. About 82 percent of the Arab population in Israel are Sunni Muslims, a very small minority are Shia Muslims, another 9 percent are Druze, and around 9 percent are Christian (mostly Eastern Orthodox and Catholic denominations). Bedouin The Arab Muslim citizens of Israel also include the Bedouin, who are divided into two main groups: the Bedouin in the north of Israel, who live in villages and towns for the most part, and the Bedouin in the Negev, who include semi-nomadic groups and inhabitants of towns and unrecognized villages. According to the Israeli Ministry of Foreign Affairs, as of 1999, 110,000 Bedouin lived in the Negev, 50,000 in the Galilee, and 10,000 in the central region of Israel. The vast majority of Arab Bedouin of Israel practice Sunni Islam. Ahmadiyya The Ahmadiyya community was first established in the region in the 1920s, in what was then Mandatory Palestine. There is a large community in Kababir, a neighbourhood on Mount Carmel in Haifa. It is unknown how many Israeli Ahmadis there are, although it is estimated that there are about 2,200 Ahmadis in Kababir alone. Arab Christians As of December 2013, about 161,000 Israeli citizens practiced Christianity, together comprising about 2% of the total population. The largest group consists of Melkites (about 60% of Israel's Christians), followed by the Greek Orthodox (about 30%), with the remaining ca. 10% spread among the Roman Catholic (Latin), Maronite, Anglican, Lutheran, Armenian, Syriac, Ethiopian, Coptic and other denominations. Druze The Arab citizens of Israel also include the Druze, who numbered an estimated 143,000 in April 2019. All of the Druze living in what was then British Mandate Palestine became Israeli citizens after the declaration of the State of Israel.
Druze serve prominently in the Israel Defense Forces and are represented in mainstream Israeli politics and business as well, unlike Muslim or Christian Arabs, who are not required to serve in the Israeli army and generally choose not to. Though a few individuals identify themselves as "Palestinian Druze", the vast majority of Druze do not consider themselves to be 'Palestinian' and consider their Israeli identity stronger than their Arab identity. A 2017 Pew Research Center poll reported that the majority of Israeli Druze identified as ethnically Arab. Syriac Christians Arameans In 2014, Israel decided to recognize the Aramean community within its borders as a national minority, allowing some of the Christians in Israel to be registered as "Aramean" instead of "Arab". As of October 2014, some 600 Israelis had requested to be registered as Arameans, with several thousand eligible for the status – mostly members of the Maronite community. The Maronite Christian community in Israel, numbering around 7,000, resides mostly in the Galilee, with a presence in Haifa, Nazareth and Jerusalem. It is largely composed of families that lived in Upper Galilee, in villages such as Jish, long before the establishment of Israel in 1948. In 2000, the community was joined by a group of Lebanese SLA militia members and their families, who fled Lebanon after the 2000 withdrawal of the IDF from South Lebanon. Assyrians There are around 1,000 Assyrians living in Israel, mostly in Jerusalem and Nazareth. Assyrians are an Aramaic-speaking, Eastern Rite Christian minority descended from the ancient Mesopotamians. The old Syriac Orthodox monastery of Saint Mark lies in Jerusalem. Other than followers of the Syriac Orthodox Church, there are also followers of the Assyrian Church of the East and the Chaldean Catholic Church living in Israel. Other citizens Copts Some 1,000 Israeli citizens belong to the Coptic community, originating in Egypt.
Samaritans The Samaritans are an ethnoreligious group of the Levant. Ancestrally, they claim descent from a group of Israelite inhabitants with connections to ancient Samaria from the beginning of the Babylonian Exile up to the beginning of the Common Era. According to 2007 population estimates, 712 Samaritans live half in Holon, Israel, and half at Mount Gerizim in the West Bank. The Holon community holds Israeli citizenship, while the Gerizim community resides in an Israeli-controlled enclave and holds dual Israeli-Palestinian citizenship. Armenians About 4,000 Armenians reside in Israel, mostly in Jerusalem (including in the Armenian Quarter), but also in Tel Aviv, Haifa and Jaffa. Armenians have a Patriarchate in Jerusalem and churches in Jerusalem, Haifa and Jaffa. Although Armenians of Old Jerusalem have Israeli identity cards, they are officially holders of Jordanian passports. Circassians In Israel there are also a few thousand Circassians, living mostly in Kfar Kama (2,000) and Reyhaniye (1,000). These two villages were part of a greater group of Circassian villages around the Golan Heights. Like the Druze, the Circassians in Israel enjoy a status aparte. Male Circassians (at their leaders' request) are mandated for military service, while females are not. People from post-Soviet states Ethnic Russians, Ukrainians, and Belarusians are immigrants from the former Soviet Union who were eligible to emigrate because they had, or were married to somebody who had, at least one Jewish grandparent, and thus qualified for Israeli citizenship under the revised Law of Return. A number of these immigrants also belong to various other ethnic groups from the former Soviet Union, such as Armenians, Georgians, Azeris, Uzbeks, Moldovans, and Tatars, among others. Some of them, having a Jewish father or grandfather, identify as Jews, but being non-Jewish by Orthodox Halakha (religious law), they are not recognized formally as Jews by the state.
Most of them are in the mainstream of Israeli culture and |
industrial branches in Israel, second only to the foodstuff industry. Textiles constituted about 12% of industrial exports, becoming the second-largest export branch after polished diamonds. In the 1990s, cheap East Asian labor decreased the profitability of the sector. Much of the work was subcontracted to 400 Israeli Arab sewing shops. As these closed down, Israeli firms, among them Delta, Polgat, Argeman and Kitan, began doing their sewing work in Jordan and Egypt, usually under the QIZ arrangement. In the early 2000s, Israeli companies had 30 plants in Jordan. Israeli exports reached $370 million a year, supplying such retailers and designers as Marks & Spencer, The Gap, Victoria's Secret, Walmart, Sears, Ralph Lauren, Calvin Klein, and Donna Karan. In its first two decades of existence, Israel's strong commitment to development led to economic growth rates that exceeded 10% annually. Between 1950 and 1963, expenditure among wage-earners' families rose 97% in real terms. Between 1955 and 1966, per capita consumption rose by 221%. The years after the 1973 Yom Kippur War were a lost decade economically, as growth stalled, inflation soared and government expenditures rose significantly. Also worthy of mention is the 1983 bank stock crisis. By 1984, the economic situation had become almost catastrophic, with inflation reaching an annual rate close to 450% and projected to reach over 1,000% by the end of the following year. However, the successful economic stabilization plan implemented in 1985 and the subsequent introduction of market-oriented structural reforms reinvigorated the economy and paved the way for its rapid growth in the 1990s; the plan also became a model for other countries facing similar economic crises. Two developments have helped to transform Israel's economy since the beginning of the 1990s. The first is the waves of Jewish immigration, predominantly from the countries of the former USSR, that have brought over one million new citizens to Israel.
These new Soviet Jewish immigrants, many of them highly educated, brought a wellspring of scientific and technical expertise that helped spur Israel's burgeoning technology sector; they now constitute some 15% of Israel's population. The second development benefiting the Israeli economy is the peace process that began at the Madrid conference of October 1991, which led to the signing of accords and later to a peace treaty between Israel and Jordan (1994). In the early 2000s, the Israeli economy went into a downturn due to the crash of the global dot-com bubble, which bankrupted many startups established during the height of the bubble. The Second Intifada, which cost Israel billions of dollars in security costs, and a decline in investment and tourism, sent unemployment in Israel into the double digits; whereas growth in one quarter of 2000 had been 10%, in one quarter of 2002 the Israeli economy declined about 4%. Afterward, Israel managed a remarkable recovery by opening up new markets to Israeli exporters farther afield, such as in the rapidly growing countries of East Asia. This was possible thanks to a rebound in the Israeli tech sector, spurred on by the gradual bottoming out of the dot-com crash and a growing increase in demand for computer software, which in turn was due to burgeoning rates of global internet usage at this time. The explosion in demand for security and defense products following 9/11 also allowed Israel to sell even more of its technologies abroad, a situation only made possible by Israel's prior investments in the technology sector in an effort to curb high levels of domestic unemployment. In the 2000s, there was an influx of foreign investment in Israel from companies that had formerly shunned the Israeli market. In 2006, foreign investment in Israel totalled $13 billion, according to the Manufacturers Association of Israel. The Financial Times said that "bombs drop, yet Israel's economy grows".
Moreover, while Israel's total gross external debt is US$95 billion, or approximately 41.6% of GDP, since 2001 it has become a net lender nation in terms of net external debt (the total value of assets vs. liabilities in debt instruments owed abroad), which stood at a significant surplus of US$60 billion. The country also maintained a current account surplus equivalent to about 3% of its gross domestic product in 2010. The Israeli economy withstood the late-2000s recession, registering positive GDP growth in 2009 and ending the decade with an unemployment rate lower than that of many western countries. There are several reasons behind this economic resilience, such as the fact that the country is a net lender rather than a borrower nation, and the generally conservative macro-economic policies of the government and the Bank of Israel. Two policies in particular can be cited. One is the government's refusal to succumb to pressure from the banks to appropriate large sums of public money to aid them early in the crisis, thus limiting their risky behavior. The second is the implementation of the recommendations of the Bach'ar commission in the early to mid-2000s, which recommended decoupling the banks' depository and investment banking activities, contrary to the then-prevailing trend, particularly in the United States, of easing such restrictions, which had the effect of encouraging more risk-taking in the financial systems of those countries. OECD membership In May 2007, Israel was invited to open accession discussions with the OECD. In May 2010, the OECD voted unanimously to invite Israel to join, despite Palestinian objections. It became a full member on 7 September 2010. The OECD praised Israel's scientific and technological progress and described it as having "produced outstanding outcomes on a world scale." Challenges Despite economic prosperity, the Israeli economy faces many challenges, some short term and some long term.
In the short term, its inability to duplicate its success in the telecommunication industry in other growing industries hampers its economic outlook. Its inability to foster large multinational companies in the last decade also calls into question its ability to employ large numbers of people in advanced industries. In the long term, Israel faces the challenge of high dependency on the growing number of ultra-Orthodox Jews, who have a low level of official labor force participation among men; this situation could lead to a materially lower employment-to-population ratio and a higher dependency ratio in the future. The governor of the Bank of Israel, Stanley Fischer, stated that the growing poverty among the ultra-Orthodox is hurting the Israeli economy. According to data published by Ian Fursman, 60% of the poor households in Israel are those of Haredi Jews and Israeli Arabs; together the two groups represent 25–28% of the Israeli population. Organizations such as The Kemach Foundation, Gvahim, Jerusalem Village and The Jerusalem Business Networking Forum are addressing these challenges with job placement services and networking events. Data The following table shows the main economic indicators in 1980–2018. Inflation under 2% is in green. Sectors Agriculture In 2017, 2.4% of the country's GDP was derived from agriculture. Of a total labor force of 2.7 million, 2.6% were employed in agricultural production and 6.3% in services for agriculture. While Israel imports substantial quantities of grain (approximately 80% of local consumption), it is largely self-sufficient in other agricultural products and foodstuffs. For centuries, farmers of the region have grown varieties of citrus fruits, such as grapefruit, oranges and lemons. Citrus fruits are still Israel's major agricultural export. In addition, Israel is one of the world's leading greenhouse-food-exporting countries.
The country exports more than $1.3 billion worth of agricultural products every year, including farm produce as well as $1.2 billion worth of agricultural inputs and technology. Financial services Israel has over 100 active venture capital funds operating throughout the country, with US$10 billion under management. In 2004, international funds from various nations around the world committed over 50 percent of the total dollars invested, reflecting the country's strong reputation as a destination for foreign investment. Israel's venture capital sector developed rapidly from the early 1990s, and has about 70 active venture capital funds (VCs), of which 14 international VCs have Israeli offices. Israel's thriving venture capital and business-incubator industry played an important role in financing the country's flourishing high-tech sector. In 2008, venture capital investment in Israel rose 19 percent to $1.9 billion. "Between 1991 and 2000, Israel's annual venture-capital outlays, nearly all private, rose nearly 60-fold, from $58 million to $3.3 billion; companies launched by Israeli venture funds rose from 100 to 800; and Israel's information-technology revenues rose from $1.6 billion to $12.5 billion. By 1999, Israel ranked second only to the United States in invested private-equity capital as a share of GDP. Israel led the world in the share of its growth attributable to high-tech ventures: 70 percent." Hundreds of prosperous Israeli private equity and venture capital firms continue to fund the country's booming high-technology sector. The financial crisis of 2007–08 negatively affected the availability of venture capital locally.
In 2009, there were 63 mergers and acquisitions in the Israeli market worth a total of $2.54 billion; 7% below 2008 levels ($2.74 billion), when 82 Israeli companies were merged or acquired, and 33% lower than 2007 proceeds ($3.79 billion), when 87 Israeli companies were merged or acquired. Numerous Israeli high-tech companies have been acquired by global corporations for their reliable corporate management and quality personnel. In addition to venture capital funds, many of the world's leading investment banks, pension funds, and insurance companies have a strong presence in Israel, committing their funds to financially back Israeli high-tech firms and benefit from the country's prosperous high-tech sector. These institutional investors include Goldman Sachs, Bear Stearns, Deutsche Bank, JP Morgan, Credit Suisse First Boston, Merrill Lynch, CalPERS, Ontario Teachers Pension Plan, and AIG. Israel also has a small but fast-growing hedge fund industry. Within the five years between 2007 and 2012, the number of active hedge funds doubled to 60. Israel-based hedge funds registered an increase of 162% from 2006 to 2012, when they managed a total of $2 billion (₪8 billion) and employed about 300 people. The ever-growing hedge fund industry in Israel is also attracting investors from around the world, particularly from the United States. High technology Science and technology in Israel is one of the country's most highly developed and industrialized sectors. The modern Israeli high-technology ecosystem makes up a significant portion of the Israeli economy. The percentage of Israelis engaged in scientific and technological inquiry, and the amount spent on research and development (R&D) relative to gross domestic product (GDP), is among the highest in the world, with 140 scientists and technicians per 10,000 employees. In comparison, the figure is 85 per 10,000 in the United States and 83 per 10,000 in Japan.
Israel ranks fourth in the world in scientific activity, as measured by the number of scientific publications per million citizens. Israel's percentage of the total number of scientific articles published worldwide is almost 10 times higher than its percentage of the world's population. The country is home to over 1,400 life science companies, including about 300 pharmaceutical companies, 600 medical device companies, 450 digital health companies, and 468 biotechnology companies. Israeli scientists, engineers, and technicians have contributed to the modern advancement of the natural sciences, agricultural sciences, computer sciences, electronics, genetics, medicine, optics, solar energy and various fields of engineering. The country has one of the world's most technologically literate populations. In 1998, Tel Aviv was named by Newsweek as one of the ten most technologically influential cities in the world. In 2012, the city was also named one of the best places for high-tech startup companies, placed second behind its California counterpart. In 2013, The Boston Globe ranked Tel Aviv as the second-best city for business start-ups, after Silicon Valley. Israel has the second-largest number of startup companies globally, after the United States, and remains one of the largest centers in the world for technology start-up enterprises. Around 200 start-ups are created annually and more than 2,500 start-up companies are operating throughout the country. Israel is also home to nearly 400 research and development centers owned by multinational companies, including giants such as Google, Microsoft, and Intel. As a result of the country's highly renowned and creative start-up culture, Israel is often referred to as the Start-up Nation (adapted from the book Start-Up Nation, by Dan Senor and Saul Singer) and the "Silicon Valley of the Middle East". Programs that send people to Israel to explore the "Start-Up Nation" economy include TAVtech Ventures and TAMID Group.
This success has been attributed by some to widespread service in the Israel Defense Forces, which develops talent that then fuels the high-tech industry upon discharge. In recent years, the industry has faced a shortage of technology specialists; 15% of positions in Israel's high-technology sector were unfilled as of 2019. The largest share of unfilled positions (31%) is in software engineering specialties: DevOps, back-end, data science, machine learning and artificial intelligence. As a result, salaries of specialists in the Israeli market have also increased significantly. To solve this problem, IT companies look to fill the gaps abroad, and consequently employ about 25% of their entire workforce overseas. Most companies hire employees from Ukraine (45%), with the United States (16%) the second most popular offshoring destination. In 2017, the Council for Higher Education in Israel launched a five-year program to increase the number of graduates from computer science and engineering programs by 40%. Energy Historically, Israel relied on external imports for meeting most of its energy needs, spending an amount equivalent to over 5% of its GDP per year in 2009 on imports of energy products. The transportation sector relies mainly on gasoline and diesel fuel, while the majority of electricity production is generated using imported coal. As of 2013, Israel was importing about 100 million barrels of oil per year. The country possesses negligible reserves of crude oil but does have domestic natural gas resources, which were discovered in more significant quantities starting in 2009, after many decades of previously unsuccessful exploration. Natural gas Until the early 2000s, natural gas use in Israel was minimal. In the late 1990s, the government of Israel decided to encourage the use of natural gas for environmental, cost, and resource-diversification reasons.
At the time, however, there were no domestic sources of natural gas and the expectation was that gas would be supplied from overseas in the form of LNG and by a future pipeline from Egypt (which eventually became the Arish–Ashkelon pipeline). Plans were made for the Israel Electric Corporation to construct several natural gas-driven power plants, for erecting a national gas distribution grid, and for an LNG import terminal. Recent discoveries In 2000, a 33-billion-cubic-metre (BCM), or 1,200-billion-cubic-foot, natural-gas field was located offshore from Ashkelon, with commercial production starting in 2004. However, this field is nearly depleted, earlier than expected, due to increased pumping to partially compensate for the loss of imported Egyptian gas in the wake of unrest associated with the fall of the Mubarak regime in 2011. In 2009, a significant gas find named Tamar, with proven reserves of 223 BCM (307 BCM total proven plus probable), was located in deep water approximately west of Haifa, as well as a smaller 15 BCM field situated nearer the coastline. Furthermore, results of 3D seismic surveys and test drilling conducted since 2010 have confirmed that an estimated 621 BCM natural-gas deposit named Leviathan exists in a large underwater geological formation near the large gas field already discovered in 2009. The Tamar field began commercial production on 30 March 2013 after four years of development. The supply of gas from Tamar was expected to aid the Israeli economy, which had suffered losses of more than ₪20 billion between 2011 and 2013 resulting from the disruption of gas supplies from neighboring Egypt (which are not expected to resume due to Egypt's decision to indefinitely suspend its gas supply agreement with Israel).
As a result, Israel, as well as its neighbor Jordan, which also suffered from the disruption of gas deliveries from Egypt, had to resort to importing significantly more expensive and polluting liquid heavy fuels as substitute sources of energy. The ensuing energy crisis in Israel was lifted once the Tamar field came online in 2013, while Jordan committed to a US$10 billion, 15-year gas supply deal totalling 45 BCM from the Israeli Leviathan field, which is scheduled to come online in late 2019. The agreement is estimated to save Jordan US$600 million per year in energy costs. In 2018, the owners of the Tamar and Leviathan fields announced that they were negotiating an agreement with a consortium of Egyptian firms for the supply of up to 64 BCM of gas over 10 years, valued at up to US$15 billion. In early 2012, the Israeli cabinet announced plans to set up a sovereign wealth fund (called "the Israeli Citizens' Fund"). Electricity From the founding of the state through the mid-2010s, the state-owned utility, Israel Electric Corporation (IEC), had an effective monopoly on power generation in the country. In 2010 the company sold 52,037 GWh of electricity. Until the mid-2010s the country also faced a persistently low operating reserve, mostly the result of Israel being an "electricity island". Most countries have the capability of relying on power drawn from producers in adjacent countries in the event of a power shortage. Israel's grid, however, is unconnected to those of neighboring countries. This is mostly due to political reasons, but also to the considerably less-developed nature of the power systems of Jordan and Egypt, whose systems constantly struggle to meet domestic demand and whose per-capita electric generation is less than one fifth that of Israel's.
Nevertheless, while operating reserves in Israel were low, the country possessed sufficient generation and transmission capacity to meet domestic electricity needs, and, unlike in the countries surrounding it, rolling blackouts have historically been quite rare, even in periods of extreme demand. Facing increasing demand for electricity and concerned about the low reserve situation, the government of Israel began taking steps in the second half of the 2000s to increase the supply of electricity and the operating reserve, as well as to reduce the monopoly position of the IEC and increase competition in the electricity market. It instructed the IEC to construct several new power stations and encouraged private investment in the generation sector. By 2015, the IEC's share of total

Israel ranks 35th on the World Bank's ease of doing business index. It has the second-largest number of startup companies in the world after the United States, and the third-largest number of NASDAQ-listed companies after the U.S. and China. American companies such as Intel, Microsoft, and Apple built their first overseas research and development facilities in Israel. Other high-tech multinational corporations, such as IBM, Google, Hewlett-Packard, Cisco Systems, Facebook and Motorola, have opened R&D centers in the country. The country's major economic sectors are technology and industrial manufacturing. The Israeli diamond industry is one of the world's centers for diamond cutting and polishing, amounting to 23.2% of all exports. Israel is relatively poor in natural resources and consequently depends on imports of petroleum, raw materials, wheat, motor vehicles, uncut diamonds and production inputs. The country's nearly total reliance on energy imports may change in the future, given recent discoveries of natural gas reserves off its coast and the leading role taken by the Israeli solar energy industry. 
Israel's quality university education and its highly motivated and educated populace are largely responsible for ushering in the country's high-technology boom and rapid economic development by regional standards. The country has developed a strong educational infrastructure and a high-quality incubation system for turning cutting-edge ideas into value-driven goods and services. These developments have allowed the country to create a high concentration of high-tech companies across its regions, financially backed by a strong venture capital industry. Its central high-technology hub, the "Silicon Wadi", is considered second in importance only to its Californian counterpart. Numerous Israeli companies have been acquired by global corporations for their skilled and reliable personnel. The economic dynamism of Israel has attracted attention from international business leaders such as Microsoft founder Bill Gates, investor Warren Buffett, real estate developer and former U.S. President Donald Trump, and telecommunications giant Carlos Slim, each of whom has invested heavily across numerous Israeli industries beyond their traditional business activities and investments in their home nations. In 2006, Berkshire Hathaway, the holding company of American investor Warren Buffett, bought the Israeli company ISCAR Metalworking for $4 billion, its first acquisition outside the United States. In September 2010, Israel joined the OECD. Israel has also signed free trade agreements with the European Union, the United States, the European Free Trade Association, Turkey, Mexico, Canada, Ukraine, Jordan, and Egypt. On 18 December 2007, Israel became the first non-Latin-American country to sign a free trade agreement with the Mercosur trade bloc. 
Israel is also a major tourist destination, especially for those of Jewish descent, with 4.55 million foreign tourists visiting in 2019 (about one tourist per two Israelis). History The British Mandate for Palestine, which came into effect in 1920, aimed at restricting land purchases by Jewish immigrants. As a result, the Jewish population was initially more urban and had a higher share in industrial occupations. This development produced one of the few growth miracles of the region, in which the structure of firms was determined mainly by private entrepreneurs rather than by the government. The first survey of the Dead Sea in 1911, by the Russian Jewish engineer Moshe Novomeysky, led to the establishment of Palestine Potash Ltd. in 1930, later renamed the Dead Sea Works. In 1923, Pinhas Rutenberg was granted an exclusive concession for the production and distribution of electric power; he founded the Palestine Electric Company, later the Israel Electric Corporation. Between 1920 and 1924, some of the country's largest factories were established, including the Shemen Oil Company, the Societe des Grand Moulins, the Palestine Silicate Company and the Palestine Salt Company. In 1937, there were 86 spinning and weaving factories in the country, employing a workforce of 1,500. Capital and technical expertise were supplied by Jewish professionals from Europe. The Ata textile plant in Kiryat Ata, which went on to become an icon of the Israeli textile industry, was established in 1934. In 1939, the cornerstone was laid for one of the kibbutz industry's first factories: the Naaman brick factory, which supplied the growing need for construction materials. The textile industry underwent rapid development during World War II, when supplies from Europe were cut off and local manufacturers were commissioned for army needs. By 1943, the number of factories had grown to 250, with a workforce of 5,630, and output had increased tenfold. 
From 1924, trade fairs were held in Tel Aviv; the Levant Fair was inaugurated in 1932. After independence After statehood, Israel faced a deep economic crisis. As well as having to recover from the devastating effects of the 1948 Arab–Israeli War, it also had to absorb hundreds of thousands of Jewish refugees from Europe and almost a million from the Arab world. Israel was financially overwhelmed, which led to a policy of austerity from 1949 to 1959. Unemployment was high, and foreign currency reserves were scarce. In 1952, Israel and West Germany signed an agreement stipulating that West Germany was to pay Israel for the persecution of Jews during the Holocaust and compensate for Jewish property stolen by the Nazis. Over the next 14 years, West Germany paid Israel 3 billion marks (equivalent to US$111.5 billion in modern currency). The reparations became a decisive part of Israel's income, comprising as much as 87.5% of Israel's income in 1956. In 1950, the Israeli government launched Israel Bonds for American and Canadian Jews to buy; by 1951, the proceeds of the bonds program exceeded $52 million. Additionally, many American Jews made private donations to Israel, which in 1956 were thought to amount to $100 million a year. In 1957, bond sales amounted to 35% of Israel's special development budget. Later in the century, Israel became significantly reliant on economic aid from the United States, a country that also became Israel's most important source of political support internationally. The proceeds from these sources were invested in industrial and agricultural development projects, which allowed Israel to become economically self-sufficient. Among the projects made possible by the aid were the Hadera power plant, the Dead Sea Works, the National Water Carrier, port development in Haifa, Ashdod, and Eilat, desalination plants, and national infrastructure projects. 
After statehood, priority was given to establishing industries in areas slated for development, among them Lachish, Ashkelon, the Negev and Galilee. The expansion of Israel's textile industry was a consequence of the development of cotton growing as a profitable agricultural branch. By the late 1960s, textiles were one of the largest industrial branches in Israel, second only to the foodstuffs industry. Textiles constituted about 12% of industrial exports, becoming the second-largest export branch after polished diamonds. In the 1990s, cheap East Asian labor decreased the profitability of the sector. Much of the work was subcontracted to 400 Israeli Arab sewing shops; as these closed down, Israeli firms, among them Delta, Polgat, Argeman and Kitan, began doing their sewing work in Jordan and Egypt, usually under the QIZ arrangement. In the early 2000s, Israeli companies had 30 plants in Jordan. Israeli exports reached $370 million a year, supplying such retailers and designers as Marks & Spencer, The Gap, Victoria's Secret, Walmart, Sears, Ralph Lauren, Calvin Klein, and Donna Karan. In its first two decades of existence, Israel's strong commitment to development led to economic growth rates that exceeded 10% annually. Between 1950 and 1963, the expenditure of wage-earners' families rose 97% in real terms. Between 1955 and 1966, per capita consumption rose by 221%. The years after the 1973 Yom Kippur War were a lost decade economically, as growth stalled, inflation soared and government expenditures rose significantly. Also worthy of mention is the 1983 bank stock crisis. By 1984, the economic situation had become almost catastrophic, with inflation reaching an annual rate close to 450% and projected to exceed 1,000% by the end of the following year. 
However, the successful economic stabilization plan implemented in 1985 and the subsequent introduction of market-oriented structural reforms reinvigorated the economy, paved the way for its rapid growth in the 1990s, and became a model for other countries facing similar economic crises. Two developments have helped to transform Israel's economy since the beginning of the 1990s. The first was a wave of Jewish immigration, predominantly from the countries of the former USSR, that brought over one million new citizens to Israel. These new immigrants, many of them highly educated, brought a wellspring of scientific and technical expertise that helped spur Israel's burgeoning technology sector; they now constitute some 15% of Israel's population. The second development was the peace process that began at the Madrid conference of October 1991, which led to the signing of accords and later to a peace treaty between Israel and Jordan (1994). In the early 2000s, the Israeli economy went into a downturn due to the crash of the global dot-com bubble, which bankrupted many startups established during the height of the bubble. The Second Intifada, which cost Israel billions of dollars in security costs and caused a decline in investment and tourism, sent unemployment in Israel into double digits; by contrast, growth in one quarter of 2000 had been 10%. In 2002, the Israeli economy declined about 4% in one quarter. Afterward, Israel managed a remarkable recovery by opening up new markets to Israeli exporters farther afield, such as in the rapidly growing countries of East Asia. This was possible thanks to a rebound in the Israeli tech sector, spurred on by the gradual bottoming-out of the dot-com crash and growing demand for computer software, which in turn was driven by burgeoning rates of global internet usage at the time. 
The explosion in demand for security and defense products following 9/11 also allowed Israel to sell even more of its technologies abroad, a situation made possible by Israel's prior investments in the technology sector in an effort to curb high levels of domestic unemployment. In the 2000s, there was an influx of foreign investment in Israel from companies that had formerly shunned the Israeli market. In 2006, foreign investment in Israel totalled $13 billion, according to the Manufacturers Association of Israel. The Financial Times said that "bombs drop, yet Israel's economy grows". Moreover, while Israel's total gross external debt is US$95 billion, or approximately 41.6% of GDP, since 2001 it has become a net lender nation in terms of net external debt (the total value of assets versus liabilities in debt instruments owed abroad), which stood at a significant surplus of US$60 billion. The country also maintained a current account surplus equivalent to about 3% of its gross domestic product in 2010. The Israeli economy withstood the late-2000s recession, registering positive GDP growth in 2009 and ending the decade with an unemployment rate lower than that of many western countries. There are several reasons behind this economic resilience, among them the fact that the country is a net lender rather than a borrower nation, and the generally conservative macroeconomic policies of the government and the Bank of Israel. Two policies in particular can be cited. The first was the government's refusal to succumb to pressure from the banks to appropriate large sums of public money to aid them early in the crisis, thus limiting their risky behavior. 
The second was the implementation of the recommendations of the Bach'ar commission in the early-to-mid-2000s, which recommended decoupling the banks' depository and investment banking activities, contrary to the trend at the time, particularly in the United States, of easing such restrictions, which had the effect of encouraging more risk-taking in the financial systems of those countries. OECD membership In May 2007, Israel was invited to open accession discussions with the OECD. In May 2010, the OECD voted unanimously to invite Israel to join, despite Palestinian objections. It became a full member on 7 September 2010. The OECD praised Israel's scientific and technological progress and described it as having "produced outstanding outcomes on a world scale." Challenges Despite economic prosperity, the Israeli economy faces many challenges, some short-term and some long-term. In the short term, its inability to duplicate its success in the telecommunications industry in other growing industries hampers its economic outlook. Its inability to foster large multinational companies in the last decade also calls into question its ability to employ large numbers of people in advanced industries. In the long term, Israel faces the challenge of a growing Ultra-Orthodox Jewish population with a low level of official labor force participation among men, a situation that could lead to a materially lower employment-to-population ratio and a higher dependency ratio in the future. The governor of the Bank of Israel, Stanley Fischer, stated that the growing poverty among the Ultra-Orthodox is hurting the Israeli economy. According to data published by Ian Fursman, 60% of the poor households in Israel are Haredi Jews and Israeli Arabs; together, the two groups represent 25–28% of the Israeli population. 
Organizations such as The Kemach Foundation, Gvahim, Jerusalem Village and The Jerusalem Business Networking Forum are addressing these challenges with job placement services and networking events. Data The following table shows the main economic indicators in 1980–2018. Inflation under 2% is in green. Sectors Agriculture In 2017, 2.4% of the country's GDP was derived from agriculture. Of a total labor force of 2.7 million, 2.6% were employed in agricultural production and 6.3% in services for agriculture. While Israel imports substantial quantities of grain (approximately 80% of local consumption), it is largely self-sufficient in other agricultural products and foodstuffs. For centuries, farmers of the region have grown varieties of citrus fruits, such as grapefruit, oranges and lemons, and citrus fruits are still Israel's major agricultural export. Israel is also one of the world's leading greenhouse-food-exporting countries, exporting more than $1.3 billion worth of agricultural products every year, including farm produce as well as $1.2 billion worth of agricultural inputs and technology. Financial services Israel has over 100 active venture capital funds operating throughout the country, with US$10 billion under management. In 2004, international funds from around the world committed over 50 percent of the total dollars invested, reflecting the country's strong reputation as a destination for foreign investment. Israel's venture capital sector developed rapidly from the early 1990s and has about 70 active venture capital funds (VCs), of which 14 are international VCs with Israeli offices. Israel's thriving venture capital and business-incubator industry played an important role in financing the country's flourishing high-tech sector. In 2008, venture capital investment in Israel rose 19 percent to $1.9 billion. 
"Between 1991 and 2000, Israel's annual venture-capital outlays, nearly all private, rose nearly 60-fold, from $58 million to $3.3 billion; companies launched by Israeli venture funds rose from 100 to 800; and Israel's information-technology revenues rose from $1.6 billion to $12.5 billion. By 1999, Israel ranked second only to the United States in |
of place. Meir Argov pushed to mention the Displaced Persons camps in Europe and to guarantee freedom of language. Ben-Gurion agreed with the latter but noted that Hebrew should be the main language of the state. The debate over wording did not end completely even after the Declaration had been made. Declaration signer Meir David Loewenstein later claimed, "It ignored our sole right to Eretz Israel, which is based on the covenant of the Lord with Abraham, our father, and repeated promises in the Tanach. It ignored the aliya of the Ramban and the students of the Vilna Gaon and the Ba'al Shem Tov, and the [rights of] Jews who lived in the 'Old Yishuv'." Declaration ceremony The ceremony was held in the Tel Aviv Museum (today known as Independence Hall) but was not widely publicised as it was feared that the British Authorities might attempt to prevent it or that the Arab armies might invade earlier than expected. An invitation was sent out by messenger on the morning of 14 May telling recipients to arrive at 15:30 and to keep the event a secret. The event started at 16:00 (a time chosen so as not to breach the sabbath) and was broadcast live as the first transmission of the new radio station Kol Yisrael. The final draft of the declaration was typed at the Jewish National Fund building following its approval earlier in the day. Ze'ev Sherf, who stayed at the building in order to deliver the text, had forgotten to arrange transport for himself. Ultimately, he had to flag down a passing car and ask the driver (who was driving a borrowed car without a license) to take him to the ceremony. Sherf's request was initially refused but he managed to persuade the driver to take him. The car was stopped by a policeman for speeding while driving across the city though a ticket was not issued after it was explained that he was delaying the declaration of independence. Sherf arrived at the museum at 15:59. 
At 16:00, Ben-Gurion opened the ceremony by banging his gavel on the table, prompting a spontaneous rendition of Hatikvah, soon to be Israel's national anthem, from the 250 guests. On the wall behind the podium hung a picture of Theodor Herzl, the founder of modern Zionism, and two flags, later to become the official flag of Israel. After telling the audience "I shall now read to you the scroll of the Establishment of the State, which has passed its first reading by the National Council", Ben-Gurion proceeded to read out the declaration, taking 16 minutes, ending with the words "Let us accept the Foundation Scroll of the Jewish State by rising" and calling on Rabbi Fishman to recite the Shehecheyanu blessing. Signatories As leader of the Yishuv, David Ben-Gurion was the first person to sign. The declaration was due to be signed by all 37 members of Moetzet HaAm. However, twelve members could not attend, with eleven of them trapped in besieged Jerusalem and one abroad. The remaining 25 signatories present were called up in alphabetical order to sign, leaving spaces for those absent. Although a space was left for him between the signatures of Eliyahu Dobkin and Meir Vilner, Zerach Warhaftig signed at the top of the next column, leading to speculation that Vilner's name had been left alone to isolate him, or to stress that even a communist had agreed with the declaration. However, Warhaftig later denied this, stating that a space had been left for him (as he was one of the signatories trapped in Jerusalem) where a Hebraicised form of his name would have fitted alphabetically, but he insisted on signing under his actual name so as to honour his father's memory and so moved down two spaces. He and Vilner would be the last surviving signatories, and remained close for the rest of their lives. Of the signatories, two were women (Golda Meir and Rachel Cohen-Kagan). 
When Herzl Rosenblum, a journalist, was called up to sign, Ben-Gurion instructed him to sign under the name Herzl Vardi, his pen name, as he wanted more Hebrew names on the document. Although Rosenblum acquiesced to Ben-Gurion's request and legally changed his name to Vardi, he later admitted to regretting not signing as Rosenblum. Several other signatories later Hebraised their names, including Meir Argov (Grabovsky), Peretz Bernstein (then Fritz Bernstein), Avraham Granot (Granovsky), Avraham Nissan (Katznelson), Moshe Kol (Kolodny), Yehuda Leib Maimon (Fishman), Golda Meir (Meyerson/Myerson), Pinchas Rosen (Felix Rosenblueth) and Moshe Sharett (Shertok). Other signatories added their own touches, including Saadia Kobashi who added the phrase "HaLevy", referring to the tribe of Levi. After Sharett, the last of the signatories, had put his name to paper, the audience again stood and the Israel Philharmonic Orchestra played "Hatikvah". Ben-Gurion concluded the event with the words "The State of Israel is established! This meeting is adjourned!" Aftermath The declaration was signed in the context of civil war between the Arab and Jewish populations of the Mandate that had started the day after the partition vote at the UN six months earlier. Neighbouring Arab states and the Arab League were opposed to the vote and had declared they would intervene to prevent its implementation. In a cablegram on 15 May 1948 to the Secretary-General of the United Nations, the Secretary-General of the League of Arab States claimed that "the Arab states find themselves compelled to intervene in order to restore law and order and to check further bloodshed". Over the next few days after the declaration, armies of Egypt, Trans-Jordan, Iraq, and Syria engaged Israeli troops inside the area of what had just ceased to be Mandatory Palestine, thereby starting the 1948 Arab–Israeli War. 
A truce began on 11 June, but fighting resumed on 8 July and stopped again on 18 July, before restarting in mid-October and finally ending on 24 July 1949 with the signing of the armistice agreement with Syria. By then Israel had retained its independence and increased its land area by almost 50% compared to the 1947 UN Partition Plan. 
Following the declaration, Moetzet HaAm became the Provisional State Council, which acted as the legislative body for the new state until the first elections in January 1949. Many of the signatories would play a prominent role in Israeli politics following independence; Moshe Sharett and Golda Meir both served as Prime Minister, Yitzhak Ben-Zvi became the country's second president in 1952, and several others served as ministers. David Remez was the first signatory to pass away, dying in May 1951, while Meir Vilner, the youngest signatory at just 29, was the longest living, serving in the Knesset until 1990 and dying in June 2003. Eliyahu Berligne, the oldest signatory at 82, died in 1959. Eleven minutes after midnight, the United States de facto recognized the State of Israel. This was followed by Iran (which had voted against the UN partition plan), Guatemala, Iceland, Nicaragua, Romania, and Uruguay. The Soviet Union was the first nation to fully recognize Israel de jure on 17 May 1948, followed by Poland, Czechoslovakia, Yugoslavia, Ireland, and South Africa. The United States extended official recognition after the first Israeli election, as Truman had promised on 31 January 1949. By virtue of General Assembly Resolution 273 (III), Israel was admitted to membership in the United Nations on 11 May 1949. In the three years following the 1948 Palestine war, about 700,000 Jews immigrated to Israel, residing mainly along the borders and in former Arab lands. Around 136,000 |
divided into three parts, albeit with some differences. Continental Italy Continental Italy is defined as the southern side of the Alps, the Po Valley, Liguria and the portion of the Apennines bounded by the conventional line that connects La Spezia to Rimini. The region of Nice (corresponding to the historic County of Nice), Italian Switzerland, part of the Julian March and other less extensive portions of territory such as the Valle Stretta, Gondo and Val Monastero are not part of the Italian Republic in its continental part, but they are part of the Italian geographical region. Conversely, the Val di Lei, the Val di Livigno, the San Candido basin, the Rio Sesto valley and the Tarvisio basin, although part of the Italian Republic, are not included in the Italian geographical region. Peninsular Italy Peninsular Italy comprises the entire area south of the aforementioned line, down to Punta Melito in Calabria (the southernmost point of the peninsula) and Santa Maria di Leuca in Apulia. San Marino and the Vatican City are foreign territories, although included in the Italian geographical region. The Italian peninsula occupies a median position among the three main peninsulas of southern Europe, emerging right in the center of the Mediterranean Sea, with large islands and some archipelagos. Insular Italy Insular Italy is made up of Sardinia, Sicily and numerous smaller islands, scattered or grouped into archipelagos in the seas that bathe the coasts of the peninsula. Corsica is not politically included in insular Italy, since it belongs to France; however, it is included in the Italian geographical region. The five largest islands belonging to the Italian state are, in order of size: Sicily, Sardinia, Elba, Sant'Antioco and Pantelleria. 
Other islands belonging to Italy are grouped into the following archipelagos: the Archipelago of the Gulf of La Spezia, formed by the islands of Palmaria, Tino and Tinetto; the Tuscan Archipelago, formed by the island of Elba, the largest and most important of the group, from which iron was extracted for centuries. To the north of the island of Elba rise Capraia and Gorgona, to the south Pianosa, Montecristo, Giannutri and the island of Giglio. Minor islets are Cerboli and Palmaiola off the coast of Elba, the Islet of the Sparviero at Punta Ala, the Formiche di Grosseto, the Formica di Burano, the Formica di Montecristo (or Scoglio d'Africa) and some islets off the coast of the promontory of the Argentario, including Argentarola, Isola Rossa and Isolotto, in addition to the Secche della Meloria and the Secche di Vada. The Phlegraean Islands (Ischia and Procida) plus Capri, in the Gulf of Naples; sometimes the three islands are included in the Campanian Archipelago. The Pontine Islands: Ponza, Palmarola, Zannone and Ventotene, in the Gulf of Gaeta. The archipelago of the Aeolian (or Lipari) Islands, which includes Salina; Lipari, the largest of the group; Vulcano, a now almost extinct volcano; Panarea; and Stromboli, an eruptive cone still active, which the ancient Greeks called Stronghilo (hence Stromboli) because of its conical shape, like an inverted spinning top on the sea; to these must be added Filicudi and Alicudi. The Aegadian Islands, i.e. the islands of Favignana, Marettimo, Levanzo and Stagnone, which rise between Marsala and Trapani, west of Sicily. The Pelagie Islands, including Linosa, Lampione and Lampedusa. Around Sicily we also find Ustica off the Gulf of Palermo and Pantelleria in the middle of the Sicilian Channel. The group of the Tremiti Islands and the island of Pianosa, which rise in the Adriatic Sea. To the north of Sardinia, the Asinara and the archipelago of La Maddalena; to the south, San Pietro and Sant'Antioco. 
The Cheradi Islands of San Pietro and San Paolo in the Gulf of Taranto. Orography Mountains Almost 40% of the Italian territory is mountainous, with the Alps as the northern boundary and the Apennine Mountains forming the backbone of the peninsula and extending for . The Alpine range is linked to the Apennines by the Colle di Cadibona pass in the Ligurian Alps. Nineteen Italian regions are crossed by either the Alps or the Apennines, or their offshoots. Sardinia has mountains with their own characteristics, included in the Sardinian-Corsican relief, which also extends to Corsica. The Alps (formed during the Mesozoic and Cenozoic) surround the Po Valley to the north, east and west, and run along the entire northern border of Italy (about ), creating a natural frontier. The Alps contain the highest peak in the European Union, Mont Blanc, at above sea level, located between the Aosta Valley and France. The Apennines (formed during the Oligocene) rise south of the Po Valley and run from north to south throughout the Italian peninsula, from Liguria to Calabria, continuing in northern Sicily and ending in the Madonie; they act as a watershed between the Tyrrhenian coast and the Adriatic-Ionian coast. The highest peaks in Italy are found in the Western Alps, where numerous peaks exceed , including Monte Rosa (), the Cervino () and Mont Blanc (). The highest peak of the Apennines is the Gran Sasso d'Italia (). Worldwide-known mountains in Italy are Monte Cervino (the Matterhorn), Monte Rosa and Gran Paradiso in the Western Alps, and Bernina, Stelvio and the Dolomites along the eastern side of the Alps. Hills Hills cover most of the Italian territory. They are mainly located in the central-southern part of the peninsula, along the sides of the Apennine ridge, but also in the pre-Alpine area, close to the Alps. The hilly reliefs, which alternate with hollows and valleys, have gentle slopes and do not exceed .
The first two hilly systems are the subalpine hills and the Preappennino, two hilly strips arranged between the Alps and the Po Valley and between the Apennines and the Adriatic coast, respectively. The subalpine hills widen in the western part of the Po Valley, where they form the hills of the Langhe and Montferrat. Two other hill systems are the Tyrrhenian Anti-Apennines, which extend from the Colline Metallifere of Tuscany to Vesuvius and the Beneventane Hills in Campania, and the Adriatic Anti-Apennines, present in Puglia with the Murge and Gargano hills. The Italian hills have different origins: the Langhe, Monferrato, Chianti and Murge are sedimentary hills formed by the lifting of the seabed. The Beneventane Hills are of Tertiary formation, that is, composed of gravel stratifications or masses of pebbles mixed with limestone and sandstone, probably due to the raising of an ancient lake bed. The hills of Brianza, of the Canavese and, more generally, of the entire strip that runs at the foot of the Alps are morainic, that is, made up of deposits of earth and crushed stone transported by ancient glaciers. The Euganean Hills and numerous other formations in Tuscany, Lazio and Campania are of volcanic origin, i.e. they are the remains of ancient extinct volcanoes, rounded by long erosion. Plains Plains make up 23.2% of the Italian national territory. Between the Alps and the Apennines lies a large plain in the valley of the Po, the largest river in Italy, which flows eastward from the Cottian Alps to the Adriatic. The Po Valley is the largest plain in Italy, with , and it represents over 70% of the total plain area in the country. The Po Valley is divided into two bands: the high plain, which borders the Alpine and Apennine hills, and the low plain, located in the center and extending up to the Po delta.
In the peninsular part and in the islands there are only small plains, often located along the coasts and at the mouths of the major rivers, near which they formed: this is the case, for example, of the Tavoliere delle Puglie, of the Campidano in Sardinia and of the Maremma in Tuscany. The Italian plains have different origins: most of them are of alluvial origin, that is, formed by the debris deposited by rivers along their course. The Po Valley, Valdarno, Pontine Marshes, Campidano, Metapontino, Plain of Sele, Salento, Plain of Sibari, Plain of Catania and Plain of Sant'Eufemia are alluvial. The second largest Italian plain is the Tavoliere delle Puglie, which is a rising plain, formed from the raising of the seabed. Other plains, for example the Plain of Campania, are of volcanic origin: volcanic ash filled the surrounding valleys, transforming them into fertile plains.
Hydrography Italy is surrounded, except to the north, by the sea, and its territory has a rich reserve of inland waters (rivers and lakes). The southern regions, however, are drier than the northern ones, due to the scarcity of rain and the absence of glaciers that can feed the rivers. Rivers Italian rivers are shorter than those of other European regions because the Apennines run along the entire length of the peninsula, dividing the waters into two opposite sides.
They are numerous, however, due to the relative abundance of rain in Italy in general and to the presence of the Alpine chain, rich in snowfields and glaciers, in northern Italy. The fundamental watershed follows the ridge of the Alps and the Apennines and defines five main slopes, corresponding to the seas into which the rivers flow: the Adriatic, Ionian, Tyrrhenian, Ligurian and Mediterranean sides. Italian rivers are categorized into two main groups: the Alpine-Po rivers and the Apennine-island rivers. The longest Italian river is the Po (), which flows from Monviso, runs through the entire Po Valley from west to east, and then flows, with a delta, into the Adriatic Sea. In addition to being the longest, it is also the river with the largest basin and the largest flow at its mouth. The second longest Italian river is the Adige (), which originates near Lake Resia and flows into the Adriatic Sea.
the largest religion in the country, although the Catholic Church is no longer officially the state religion. In 2006, 87.8% of Italy's population self-identified as Roman Catholic, although only about one-third of these described themselves as active members (36.8%). In 2016, 71.1% of Italian citizens self-identified as Roman Catholic; the figure rose to 78% in 2018. Most Italians believe in God or a form of spiritual life force. According to a 2005 Eurobarometer poll, 74% of Italian citizens responded that 'they believe there is a God', 16% answered that 'they believe there is some sort of spirit or life force' and 6% answered that 'they do not believe there is any sort of spirit, God, or life force'. No data on religion are collected through the census. Christianity The Italian Catholic Church is part of the global Roman Catholic Church, under the leadership of the Pope, the curia in Rome, and the Conference of Italian Bishops. In addition to Italy, two other sovereign nations are included in Italian-based dioceses: San Marino and Vatican City. There are 225 dioceses in the Italian Catholic Church. Even though by law Vatican City is not part of Italy, it lies within Rome, and Italian, along with Latin, is the most widely spoken language of the Roman Curia. Italy has a rich Catholic culture, especially as numerous Catholic saints, martyrs and popes were themselves Italian. Roman Catholic art in Italy especially flourished during the Middle Ages, the Renaissance and the Baroque period, with numerous Italian artists such as Michelangelo, Leonardo da Vinci, Raphael, Caravaggio, Fra Angelico, Gian Lorenzo Bernini, Sandro Botticelli, Tintoretto, Titian and Giotto. Roman Catholic architecture in Italy is equally rich and impressive, with churches, basilicas and cathedrals such as St Peter's Basilica, Florence Cathedral and St Mark's Basilica.
Roman Catholicism is the largest religion and denomination in Italy, with around 71.1% of Italians considering themselves Catholic. Italy is also home to the greatest number of cardinals in the world and is the country with the greatest number of Roman Catholic churches per capita. Even though the main Christian denomination in Italy is Roman Catholicism, there are minorities of Protestant, Waldensian, Eastern Orthodox and other Christian churches. In the 20th century, Jehovah's Witnesses, Pentecostalism, non-denominational Evangelicalism and Mormonism were the fastest-growing churches. Immigration from Western, Central and Eastern Africa at the beginning of the 21st century has increased the size of Baptist, Anglican, Pentecostal and Evangelical communities in Italy, while immigration from Eastern Europe has produced large Eastern Orthodox communities. In 2006, Protestants made up 2.1% of Italy's population, and members of Eastern Orthodox churches comprised 1.2%, or more than 700,000 Eastern Orthodox Christians, including 180,000 Greek Orthodox. There were 550,000 Pentecostals and Evangelicals (0.8%), of whom 400,000 were members of the Assemblies of God; about 250,000 Jehovah's Witnesses (0.4%); 30,000 Waldensians; 25,000 Seventh-day Adventists; 22,000 Mormons; 15,000 Baptists (plus some 5,000 Free Baptists); 7,000 Lutherans; and 4,000 Methodists (affiliated with the Waldensian Church). Other religions The longest-established religious faith in Italy is Judaism, Jews having been present in ancient Rome before the birth of Christ. Italy has seen many influential Italian Jews, such as Prime Minister Luigi Luzzatti, who took office in 1910; Ernesto Nathan, mayor of Rome from 1907 to 1913; and Shabbethai Donnolo (died 982). During the Holocaust, Italy took in many Jewish refugees from Nazi Germany. However, with the creation of the Nazi-backed puppet Italian Social Republic, about 15% of Italy's 48,000 Jews were killed.
This, together with the emigration that preceded and followed the Second World War, has left only a small community of around 45,000 Jews in Italy today. Due to immigration from around the world, there has been an increase in non-Christian religions. As of 2009, there were 1.0 million Muslims in Italy, forming 1.6 percent of the population; independent estimates put the Islamic population anywhere from 0.8 million to 1.5 million. Only 50,000 Italian Muslims hold Italian citizenship. There are more than 200,000 followers of faiths originating in the Indian subcontinent, including some 70,000 Sikhs with 22 gurdwaras across the country, 70,000 Hindus, and 50,000 Buddhists. There were an estimated 4,900 Bahá'ís in Italy in 2005. Genetics and ethnic groups Within the Italian population there is enough cultural, linguistic, genetic and historical diversity for them to constitute several distinct groups throughout the peninsula. In this regard, peoples like the Friulians, the Ladins, the Sardinians and the South
of the population identified as Catholic, 15% as non-believers or atheists, 2% as other Christians and 6% adhered to other religions. Historical overview 1861 to early 20th century From its unification in 1861 to the Italian economic miracle of the 1950s and 1960s, Italy was a country of mass emigration. Between 1898 and 1914, the peak years of the Italian diaspora, approximately 750,000 Italians emigrated each year. As a consequence, large numbers of people with full or significant Italian ancestry are found in Brazil (25 million), Argentina (20 million), the US (17.8 million), France (5 million), Venezuela (2 million), Uruguay (1.5 million), Canada (1.4 million) and Australia (800,000). In addition, Italian communities once thrived in the former African colonies of Eritrea (nearly 100,000 at the beginning of World War II), Somalia and Libya (150,000 Italians settled in Libya, constituting about 18% of the total Libyan population).
All of Libya's Italians were expelled after Muammar Gaddafi's takeover in 1970. Furthermore, after Tito's annexation of Istria in 1945, up to 350,000 ethnic Italians left communist Yugoslavia. After World War II As a result of the profound economic and social changes brought by rapid postwar economic growth, including low birth rates, an aging population and thus a shrinking workforce, by the 1970s emigration had all but stopped and Italy started to have a positive net migration rate. The nation's immigrant population reached 5 million by 2015, making up some 8% of the total population. However, the long-lasting effects of the Eurozone crisis's double-dip recession strongly slowed down immigration rates in Italy in the 2010s. Effects of the COVID-19 pandemic As a direct effect of the COVID-19 pandemic, Italy registered at least 100,000 excess deaths in 2020 alone, a loss of about 1.4 years in average life expectancy, a noticeable decrease in birth rates and a marked decrease in immigration rates, the overall effect being a record natural population decline of 342,042 in that year, the largest recorded since 1918 (during World War I). Immigration Since the fall of the Berlin Wall in 1989 and, more recently, the 2004 and 2007 enlargements of the European Union, Italy has received growing flows of migrants from the former socialist countries of Eastern Europe (especially Romania, Albania, Ukraine and Poland). The second most important area of immigration to Italy has always been neighboring North Africa (especially Morocco, Egypt, Tunisia and Algeria). Furthermore, in recent years growing migration flows from the Far East (notably China and the Philippines) and Latin America (Ecuador, Peru) have been recorded. In 2020, Istat estimated that 5,039,637 foreign-born immigrants lived in Italy, representing about 8.4% of the total population.
These figures do not include naturalized foreign-born residents (about 100,000 foreigners acquired Italian citizenship in 2020) or illegal immigrants, the so-called clandestini, whose numbers, difficult to determine, are thought to be at least 670,000. Romanians made up the largest community in the country (1,145,718), followed by Albanians (441,027) and Moroccans (422,980). The fourth largest, but the fastest growing, community of foreign residents in Italy was the Chinese; the majority of Chinese living in Italy are from the city of Wenzhou in the province of Zhejiang. Breaking down the foreign-born population by continent, in 2020 the figures were as follows: Europe (54%), Africa (22%), Asia (16%), the Americas (8%) and Oceania (0.06%). The distribution of immigrants is largely uneven in Italy: 83% of immigrants live in the northern and central parts of the country (the most economically developed areas), while only 17% live in the southern half of the peninsula. Foreign-born residents by country of origin as of 2019: Cities 70.4% of the Italian population is classified as urban, a relatively low figure among developed countries. During the last two decades, Italy underwent a devolution process that eventually led to the creation of administrative metropolitan areas, giving major cities and their metropolitan areas a provincial status (somewhat similar to the PRC's direct-controlled municipalities). According to the OECD, the largest conurbations are: Milan (7.4 million), Rome (3.7 million), Naples (3.1 million) and Turin (2.2 million). Historical data Life expectancy at birth from 1871 to 2020 (sources: Our World in Data and the United Nations for 1871–1950; UN World Population Prospects for 1950–2020). Total fertility rate from 1850 to 1899 The total fertility rate is the number of children born per woman. It is based on fairly good data for the entire period (sources: Our World in Data and the Gapminder Foundation).
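As a quick illustration (simple arithmetic on the figures quoted above, not a calculation made in the source), the 2020 immigrant count and its stated share of the population imply Italy's total population:

```python
# Illustrative arithmetic only, based on the Istat figures quoted above.
immigrants = 5_039_637      # foreign-born residents, 2020
share = 0.084               # about 8.4% of the total population

implied_population = immigrants / share
print(f"implied total population: {implied_population / 1e6:.1f} million")
# → implied total population: 60.0 million

# The continental breakdown quoted above should also sum to roughly 100%.
by_continent = {"Europe": 54, "Africa": 22, "Asia": 16,
                "Americas": 8, "Oceania": 0.06}
print(f"sum of shares: {sum(by_continent.values()):.2f}%")
# → sum of shares: 100.06% (small excess from rounding of the published shares)
```

The two checks agree with each other and with Italy's population of roughly 60 million in 2020, so the quoted count and percentage are mutually consistent.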
Vital statistics since 1900 In 2020, 88,345 babies were born to at least one foreign parent, making up 21.8% of all newborns that year (21,024, or 5.2%, were born to foreign mothers; 7,529, or 1.9%, to foreign fathers; and 59,792, or 14.8%, to two foreign parents). Current vital statistics Demographic statistics Demographic statistics according to the World Population Review in 2019. One birth
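The newborn figures above are mutually consistent; a short sketch (illustrative only, not from the source) checks the arithmetic:

```python
# Illustrative check of the 2020 newborn figures quoted above.
foreign_mother = 21_024    # one foreign parent: the mother
foreign_father = 7_529     # one foreign parent: the father
both_foreign = 59_792      # two foreign parents

total_foreign = foreign_mother + foreign_father + both_foreign
assert total_foreign == 88_345   # matches the stated total

# 88,345 is said to be 21.8% of all newborns; recover the implied total.
all_births = total_foreign / 0.218          # roughly 405,000 births in 2020
for label, n in [("foreign mother", foreign_mother),
                 ("foreign father", foreign_father),
                 ("two foreign parents", both_foreign)]:
    print(f"{label}: {n / all_births:.1%}")
# → foreign mother: 5.2%, foreign father: 1.9%, two foreign parents: 14.8%
```

The three component counts sum exactly to the stated total, and the recovered percentages match those given in the text.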
organization. The Communist Party was at this point the largest communist party in Western Europe and remained such for the rest of its existence. Its ability to attract members was largely due to its pragmatic stance, especially its rejection of extremism, and to its growing independence from Moscow (see Eurocommunism). The Italian Communist Party was especially strong in areas like Emilia-Romagna and Tuscany, where communists had been elected to stable government positions. This practical political experience may have contributed to their taking a more pragmatic approach to politics. The Years of Lead On 12 December 1969, a roughly decade-long period of extremist left- and right-wing political terrorism, known as the Years of Lead (as in the metal of bullets), began with the Piazza Fontana bombing in the center of Milan. The neofascist Vincenzo Vinciguerra later declared the bombing to be an attempt to push the Italian state to declare a state of emergency and thereby bring about a more authoritarian state. A bomb left in a bank killed about twenty people and was initially blamed on the anarchist Giuseppe Pinelli. This accusation was hotly contested by left-wing circles, especially the Maoist Student Movement, which had support in those years from some students of Milan's universities and which considered the bombing to have all the marks of a fascist operation. Their guess proved correct, but only after many years of difficult investigations. The strategy of tension attempted to blame the left for bombings carried out by right-wing terrorists. Fascist "black terrorists", such as Ordine Nuovo and the Avanguardia Nazionale, were found in the 1980s and 1990s to be responsible for several terrorist attacks. On the other extreme of the political spectrum, the leftist Red Brigades carried out assassinations against specific persons but were not responsible for any indiscriminate bombings.
The Red Brigades killed the socialist journalist Walter Tobagi and, in their most famous operation, kidnapped and assassinated Aldo Moro, president of Christian Democracy, who was trying to involve the Communist Party in the government through the compromesso storico ("historic compromise"), to which both the radical left and Washington were opposed. The last and largest of the bombings, known as the Bologna massacre, destroyed the city's railway station in 1980. This was found to be a neofascist bombing in which Propaganda Due was involved. On 24 October 1990, Prime Minister Giulio Andreotti (DC) revealed to Parliament the existence of Gladio, NATO's secret "stay-behind" network, which stocked weapons in order to facilitate armed resistance in case of a communist coup. In 2000, a parliamentary commission report from the Olive Tree (centre-left) coalition concluded that the strategy of tension pursued through Gladio had been supported by the United States to "stop the PCI and, to a certain degree, the PSI [Italian Socialist Party] from reaching executive power in the country". 1980s With the end of the Years of Lead, the Communist Party gradually increased its vote under the leadership of Enrico Berlinguer. The Italian Socialist Party, led by Bettino Craxi, became more and more critical of the communists and of the Soviet Union; Craxi himself pushed in favor of Ronald Reagan's positioning of Pershing II missiles in Italy, a move many communists strongly disapproved of. As the Socialist Party moved to more moderate positions, it attracted many reformists, some of whom were irritated by the failure of the communists to modernize. Increasingly, many on the left began to see the communists as old and out of fashion, while Craxi and the socialists seemed to represent a new liberal socialism.
The Communist Party surpassed the Christian Democrats only in the European elections of 1984, held barely two days after Berlinguer's death, a passing that likely drew sympathy from many voters. The 1984 election was the only time the Christian Democrats did not emerge as the largest party in a nationwide election in which they participated. In 1987, one year after the Chernobyl disaster, a referendum was held and a nuclear phase-out commenced: Italy's four nuclear power plants were closed down, the last in 1990, and a moratorium on the construction of new plants, originally in effect from 1987 until 1993, has since been extended indefinitely. In these years corruption became more extensive, a development that would be exposed in the early 1990s and nicknamed Tangentopoli. With the mani pulite investigation, starting just one year after the collapse of the Soviet Union, the whole power structure faltered, and seemingly indestructible parties, such as the Christian Democrats and the Socialist Party, disbanded, whereas the Communist Party changed its name to the Democratic Party of the Left and took over the Socialist Party's role as the main social democratic party in Italy. What followed was then called the transition to the Second Republic. Second Republic: 1994–present From 1992 to 1997, Italy faced significant challenges as voters, disenchanted with past political paralysis, massive government debt, extensive corruption and organized crime's considerable influence (collectively called Tangentopoli after being uncovered by mani pulite), demanded political, economic and ethical reforms.
In the Italian referendums of 1993, voters approved substantial changes, including moving from a proportional system to an Additional Member System, largely dominated by a majoritarian electoral system, and the abolition of some ministries, some of which have since been reintroduced under only partly modified names, such as the Ministry of Agriculture reincarnated as the Ministry of Agricultural Resources. Major political parties, beset by scandal and loss of voter confidence, underwent far-reaching changes. New political forces and new alignments of power emerged in the March 1994 national elections. This election saw a major turnover in the new parliament, with 452 of 630 deputies and 213 of 315 senators elected for the first time. The 1994 elections also swept media magnate Silvio Berlusconi, leader of the Pole of Freedoms coalition, into office as prime minister. However, Berlusconi was forced to step down in December 1994 when the Lega Nord withdrew its support. The Berlusconi government was succeeded by a technical government headed by Prime Minister Lamberto Dini, which left office in early 1996. A series of center-left coalitions dominated Italy's political landscape between 1996 and 2001. In April 1996, national elections led to the victory of a center-left coalition, The Olive Tree, under the leadership of Romano Prodi. Prodi's government became the third-longest to stay in power before he narrowly lost a vote of confidence, by three votes, in October 1998. In May 1999, Parliament selected Carlo Azeglio Ciampi as President of the Republic. Ciampi, a former prime minister and Minister of the Treasury, and before entering government the governor of the Bank of Italy, was elected on the first ballot with a comfortable margin over the required two-thirds of the votes.
A new government was formed by the Democrats of the Left leader and former communist Massimo D'Alema, but in April 2000 he resigned following the poor performance of his coalition in regional elections. The succeeding center-left government, including most of the same parties, was headed by Giuliano Amato, a social democrat who had previously served as prime minister in 1992–1993 and had at the time sworn never to return to active politics. National elections held on 13 May 2001 returned Berlusconi to power at the head of the five-party center-right House of Freedoms coalition, comprising the prime minister's own party, Forza Italia, the National Alliance, the Northern League, the Christian Democratic Center and the United Christian Democrats. Between 17 May 2006 and 21 February 2007, Romano Prodi served as prime minister of Italy following the narrow victory of his The Union coalition over the House of Freedoms led by Silvio Berlusconi in the April 2006 Italian elections. Following a government crisis, Prodi submitted his resignation on 21 February 2007. Three days later, he was asked by President Giorgio Napolitano to stay on as prime minister, and he agreed to do so. On 28 February 2007, Prodi narrowly survived a senate no-confidence vote.
The election of 1984 was to be the only time the Christian Democrats did not emerge as the largest party in a nationwide election in which they participated. In 1987, one year after the Chernobyl disaster, a referendum set a nuclear phase-out in motion: Italy's four nuclear power plants were closed down, the last in 1990, and a moratorium on the construction of new plants, originally in effect from 1987 until 1993, has since been extended indefinitely. In these years, corruption became more extensive, a development that would be exposed in the early 1990s and nicknamed Tangentopoli. With the mani pulite investigation, which started just one year after the collapse of the Soviet Union, the whole power structure faltered: seemingly indestructible parties, such as the Christian Democrats and the Socialist Party, disbanded, whereas the Communist Party changed its name to the Democratic Party of the Left and took over the Socialist Party's role as the main social-democratic party in Italy. What followed came to be called the transition to the Second Republic. Second Republic: 1994–present From 1992 to 1997, Italy faced significant challenges as voters, disenchanted with past political paralysis, massive government debt, extensive corruption and organized crime's considerable influence (a system collectively called Tangentopoli after being uncovered by mani pulite), demanded political, economic and ethical reforms. In the Italian referendums of 1993, voters approved substantial changes, including a move from a proportional system to an Additional Member System largely dominated by a majoritarian electoral formula, as well as the abolition of some ministries, some of which have since been reintroduced under only partly modified names, such as the Ministry of Agriculture reincarnated as the Ministry of Agricultural Resources. Major political parties, beset by scandal and loss of voter confidence, underwent far-reaching changes.
New political forces and new alignments of power emerged in the March 1994 national elections. This election saw a major turnover in the new parliament, with 452 out of 630 deputies and 213 out of 315 senators elected for the first time. The 1994 elections also swept media magnate Silvio Berlusconi, leader of the Pole of Freedoms coalition, into office as prime minister. However, Berlusconi was forced to step down in December 1994 when the Lega Nord withdrew its support. The Berlusconi government was succeeded by a technical government headed by Prime Minister Lamberto Dini, which left office in early 1996. A series of center-left coalitions dominated Italy's political landscape between 1996 and 2001. In April 1996, national elections led to the victory of a center-left coalition, The Olive Tree, under the leadership of Romano Prodi. Prodi's government became the third-longest to stay in power before he narrowly lost a vote of confidence, by three votes, in October 1998. In May 1999, the Parliament selected Carlo Azeglio Ciampi as President of the Republic. Ciampi, a former prime minister and Minister of the Treasury who had also served as governor of the Bank of Italy before entering government, was elected on the first ballot, comfortably exceeding the required two-thirds of the votes. A new government was formed by the Democrats of the Left leader and former communist Massimo D'Alema, but in April 2000 he resigned following the poor performance of his coalition in regional elections. The succeeding center-left government, including most of the same parties, was headed by Giuliano Amato, a social democrat who had previously served as prime minister in 1992–1993 and had at the time sworn never to return to active politics.
National elections held on 13 May 2001 returned Berlusconi to power at the head of the five-party center-right House of Freedoms coalition, comprising the Prime Minister's own party, Forza Italia, the National Alliance, the Northern League, the Christian Democratic Center and the United Christian Democrats. Between 17 May 2006 and 21 February 2007, Romano Prodi served as prime minister of Italy following the narrow victory of his The Union coalition over the House of Freedoms led by Silvio Berlusconi in the April 2006 Italian elections. Following a government crisis, Prodi submitted his resignation on 21 February 2007. Three days later, he was asked by President Giorgio Napolitano to stay on as prime minister, and he agreed to do so. On 28 February 2007, Prodi narrowly survived a senate no-confidence vote. On 24 January 2008, the Prodi II Cabinet went through a new crisis because the Minister of Justice, Clemente Mastella, withdrew his support from the Cabinet. Consequently, the Prodi Cabinet lost a vote of confidence and President Giorgio Napolitano called a new general election. The election pitted two new parties against each other: the Democratic Party (founded in October 2007 by the union of the Democrats of the Left and Democracy is Freedom – The Daisy), led by Walter Veltroni, and The People of Freedom (a federation of Forza Italia, the National Alliance and other parties), led by Silvio Berlusconi. The Democratic Party allied with Italy of Values, while The People of Freedom forged an alliance with the Lega Nord and the Movement for Autonomy. The coalition led by Berlusconi won the election, and the centre-right leader formed the Berlusconi IV Cabinet. The Monti government had the highest average age in the western world (64 years), with its youngest members being 57.
The previous Italian prime minister, Mario Monti, was 70; his predecessor Silvio Berlusconi was 75 at the time of his resignation (2011); the previous head of government, Romano Prodi, was 70 when he stepped down (2008); the Italian President Giorgio Napolitano was 88, and his predecessor Carlo Azeglio Ciampi 86. In 2013, the youngest among the candidates for prime minister (Pier Luigi Bersani) was 62, the others being 70 and 78. At the time, the average age of Italian university professors was 63, of bank directors and CEOs 67, of members of parliament 56, and of labor union representatives 59. The new Italian government headed by Enrico Letta took two months to form and made international news when Luigi Preiti shot at policemen near the building where the new government was being sworn in on Sunday 28 April 2013. Matteo Renzi later became the youngest prime minister, at 39, and his government had the youngest average age in Europe. Grand coalition governments At different times since entering the Italian Parliament, Silvio Berlusconi, leader of the centre-right, had repeatedly vowed to stop the "communists", while leftist parties had insisted that they would oust Berlusconi. Thus, although the executive branch bears responsibility toward the Parliament, the governments led by Mario Monti (from 2011) and by Enrico Letta (from 2013) were called "unelected governments" because they won a vote of confidence from a parliamentary coalition formed by centre-right and centre-left parties that had obtained their seats by taking part in the elections as competitors rather than as allies. While formally complying with law and procedure, the creation of these governments did not reflect the choice made by the people through the election. Meanwhile, in 2013, a ruling by the Constitutional Court of Italy established that the Italian electoral system used to elect the Parliament breached a number of constitutional requirements.
Notably, the Court observed the following four facts: 1) "such a legislation deprives the elector of any margin of choice of its representatives"; 2) "all of the elected parliamentarians, with no exception, lack the support of a personal designation by the citizens"; 3) the electoral law has regulations which "exclude any ability on the part of the elector to have an influence on the election of his/her representatives"; 4) and contains conditions such that "they alter the representative relationship between electors and elected people...they coerce the electors' freedom of choice in the election of their representatives to the Parliament...and consequently they are at odds with the democratic principle, by affecting the very freedom of vote provided for by art. 48 of the Constitution". This implies that, despite being called, and acting as, a legitimate "parliament", the legislative assembly of Italy was chosen under a voting system in which the right to vote was not exercised in accordance with the Italian fundamental charter of citizens' rights and duties. The issue was a major one, to the extent that the Constitutional Court itself ruled that the Italian Parliament should remain in charge only to reform the electoral system and should then be dissolved. The new government led by Matteo Renzi proposed a new electoral law. The so-called Italicum was approved in 2015 and came into force on 1 July 2016. Since 2016 However, Renzi resigned after losing a constitutional referendum in December 2016 and was succeeded by Paolo Gentiloni. The centre-left cabinets were plagued by the aftermath of the European debt crisis and the European migrant crisis, which fueled support for populist and right-wing parties. The 2018 general election once again produced a hung parliament, resulting in an unlikely populist government, led by Giuseppe Conte, between the anti-system Five Star Movement and Salvini's far-right League.
However, after only fourteen months the League withdrew its support from Conte, who subsequently allied with the Democratic Party and other smaller left-wing parties to form a new cabinet. In 2020, Italy was severely hit by the COVID-19 pandemic. From March to May, Conte's government imposed a national quarantine as a measure to limit the spread of the pandemic. The measures, despite being widely approved by public opinion, were also described as the largest suppression of constitutional rights in the history of the republic. With more than 100,000 confirmed victims, Italy had one of the highest total death tolls of the worldwide coronavirus pandemic. The pandemic also caused severe economic disruption, in which Italy was among the most affected countries. In February 2021, these extraordinary circumstances led to the formation of a national coalition government led by the former President of the European Central Bank, Mario Draghi. On 13 February
had accrued significant industrial securities, establishing the Istituto per la Ricostruzione Industriale. A number of mixed entities were formed, whose purpose was to bring together representatives of the government and of the major businesses. These representatives discussed economic policy and manipulated prices and wages so as to satisfy both the wishes of the government and the wishes of business. This economic model, based on a partnership between government and business, was soon extended to the political sphere, in what came to be known as corporatism. At the same time, Mussolini's aggressive foreign policy led to increasing military expenditure. After the invasion of Ethiopia, Italy intervened to support Franco's nationalists in the Spanish Civil War. By 1939, Italy had the highest percentage of state-owned enterprises after the Soviet Union. Italy's involvement in World War II as a member of the Axis powers required the establishment of a war economy. The Allied invasion of Italy in 1943 eventually caused the Italian political structure, and the economy, to rapidly collapse. The Allies, on the one hand, and the Germans, on the other, took over the administration of the areas of Italy under their control. By the end of the war, Italian per capita income was at its lowest point since the beginning of the 20th century. Post-war economic miracle After the end of World War II, Italy lay in ruins and was occupied by foreign armies, a condition that widened its chronic development gap with the more advanced European economies.
However, the new geopolitical logic of the Cold War meant that Italy, a former enemy and a hinge country between Western Europe and the Mediterranean, and now a new, fragile democracy threatened by the proximity of the Iron Curtain and the presence of a strong Communist party, was considered by the United States an important ally for the Free World, and it received over US$1.2 billion under the Marshall Plan from 1947 to 1951. The end of aid through the Plan could have stopped the recovery, but it coincided with a crucial point in the Korean War, whose demand for metal and manufactured products was a further stimulus to Italian industrial production. In addition, the creation in 1957 of the European Common Market, with Italy as a founding member, provided more investment and eased exports. These favorable developments, combined with the presence of a large labour force, laid the foundation for spectacular economic growth that lasted almost uninterrupted until the massive strikes and social unrest of the "Hot Autumn" of 1969–70, which, combined with the later 1973 oil crisis, put an abrupt end to the prolonged boom. It has been calculated that the Italian economy experienced an average rate of GDP growth of 5.8% per year between 1951 and 1963, and 5% per year between 1964 and 1973. Italian growth rates were second in Europe only to the German ones, and very close to them; among the OEEC countries, only Japan had been doing better. The 1970s and 1980s: from stagflation to "il sorpasso" The 1970s were a period of economic and political turmoil and social unrest in Italy, known as the Years of Lead. Unemployment rose sharply, especially among the young, and by 1977 there were one million unemployed people under the age of 24. Inflation continued, aggravated by the increases in the price of oil in 1973 and 1979.
The budget deficit became permanent and intractable, averaging about 10 percent of gross domestic product (GDP), higher than in any other industrial country. The lira fell steadily, from 560 lire to the U.S. dollar in 1973 to 1,400 lire in 1982. The economic recession went on into the mid-1980s, until a set of reforms led to the independence of the Bank of Italy and a sharp reduction in wage indexation that strongly reduced inflation rates, from 20.6% in 1980 to 4.7% in 1987. The new macroeconomic and political stability resulted in a second, export-led "economic miracle", based on small and medium-sized enterprises producing clothing, leather products, shoes, furniture, textiles, jewelry, and machine tools. As a result of this rapid expansion, in 1987 Italy overtook the UK's economy (an event known as il sorpasso), becoming the fourth-richest nation in the world, after the US, Japan and West Germany. The Milan stock exchange increased its market capitalization more than fivefold in the space of a few years. However, the Italian economy of the 1980s presented a problem: it was booming, thanks to increased productivity and surging exports, but unsustainable fiscal deficits drove the growth. In the 1990s, the new Maastricht criteria increased pressure to curb the public debt, already at 104% of GDP in 1992. The consequent restrictive economic policies worsened the impact of the global recession already underway. After a brief recovery at the end of the 1990s, high tax rates and red tape caused the country to stagnate between 2000 and 2008. Great Recession Italy was among the countries hit hardest by the Great Recession of 2008–2009 and the subsequent European debt crisis. The national economy shrank by 6.76% over the whole period, across seven quarters of recession. In November 2011, the Italian 10-year bond yield was 6.74 percent, approaching the 7 percent level at which Italy was thought likely to lose access to financial markets.
According to Eurostat, in 2015 the Italian government debt stood at 128% of GDP, the second-biggest debt ratio after Greece (at 175%). However, the biggest share of Italian public debt is owned by Italian nationals, and relatively high levels of private savings and low levels of private indebtedness are seen as making it the safest among Europe's struggling economies. As shock therapy to avoid the debt crisis and kick-start growth, the national unity government led by the economist Mario Monti launched a program of massive austerity measures that brought down the deficit but precipitated the country into a double-dip recession in 2012 and 2013, drawing criticism from numerous economists. Economic recovery In the period 2014–2019, the economy partially recovered from the disastrous losses incurred during the Great Recession, primarily thanks to strong exports, but growth rates nonetheless remained well below the euro-area average, meaning that Italy's GDP in 2019 was still 5 per cent below its level in 2008. Impact of the COVID-19 pandemic Starting from February 2020, Italy was the first country in Europe to be severely affected by the COVID-19 pandemic, which eventually spread to the rest of the world. The economy suffered a massive shock as a result of the lockdown of most of the country's economic activity. After three months, at the end of May 2020, the epidemic was brought under control, and the economy started to recover, especially the manufacturing sector. Overall, it remained surprisingly resilient, although GDP plummeted as in most western countries. The Italian government issued special treasury bills, known as BTP Futura, as COVID-19 emergency funding while awaiting approval of the European Union's response to the pandemic. Eventually, in July 2020, the European Council approved the €750 billion Next Generation EU fund, of which €209 billion will go to Italy.
Overview Data The following table shows the main economic indicators in 1980–2020 (with IMF staff estimates for 2021–2026). Inflation below 2% is in green. Companies Of the world's 500 largest stock-market-listed companies measured by revenue in 2016, the Fortune Global 500, nine are headquartered in Italy. Figures are for 2016. Figures in italic = Q3 2017 Wealth Italy has over 1.4 million people with a net wealth greater than $1 million and a total national wealth of $11.857 trillion, representing the 5th-largest cumulative net wealth globally (it accounts for 4.92% of the net wealth in the world). According to Credit Suisse's Global Wealth Databook 2013, the median wealth per adult is $138,653 (5th in the world), while according to Allianz's Global Wealth Report 2013, the net financial wealth per capita is €45,770 (13th in the world). The following top-10 list of Italian billionaires is based on an annual assessment of wealth and assets compiled and published by Forbes in 2017. Regional data North–South divide Since the unification of Italy in 1861, a wide economic divide has been growing between the northern provinces and the southern half of the Italian state. This gap was mainly induced by the region-specific policies selected by the Piedmontese elite, who dominated the first post-unitary governments. To illustrate, the 1887 protectionist reform, instead of safeguarding the arboriculture sectors crushed by the 1880s fall in prices, shielded Po Valley wheat growing and those Northern textile and manufacturing industries that had survived the liberal years thanks to state intervention. Indeed, while the former dominated the allocation of military clothing contracts, the latter monopolized both coal mining permits and public contracts.
A similar logic guided the assignment of monopoly rights in the steamboat construction and navigation sectors and, above all, public spending in the railway sector, which represented 53% of the 1861–1911 total. To make things worse, the resources necessary to finance this public spending effort were obtained through highly unbalanced land property taxes, which drained the key source of savings available for investment in the growth sectors in the absence of a developed banking system. To elaborate, the 1864 reform fixed a target revenue of 125 million to be raised from 9 districts resembling the pre-unitary states. Given the inability of the government to estimate land profitability, especially because of the huge differences among the regional cadasters, this policy irreparably induced large regional discrepancies: the ex-Papal State (central Italy) bore 10%, the ex-Kingdom of the Two Sicilies (Southern Italy) 40%, and the rest of the state (the ex-Kingdom of Sardinia, Northern Italy) 21%. To add to this burden, a 20% surcharge was introduced by 1868. The 1886 cadastral reform opened the way to more egalitarian policies and, after the First World War, to the harmonization of tax rates, but the impact of extraction on the economies of the two blocs was by that point irreversible. Indeed, while a flourishing manufacturing sector was established in the North, the mix of low public spending and heavy taxation squeezed Southern investment to the point that local industry and export-oriented farming were wiped out. Moreover, extraction destroyed the relationship between the central state and the Southern population, first by unleashing a civil war known as Brigandage, which caused some 20,000 victims by 1864 and led to the militarization of the area, and then by fueling emigration, especially from 1892 to 1921.
After the rise of Benito Mussolini, the "Iron Prefect" Cesare Mori tried, with some degree of success, to defeat the already powerful criminal organizations flourishing in the South. Fascist policy aimed at the creation of an Italian empire, and Southern Italian ports were strategic for all commerce towards the colonies. With the invasion of Southern Italy, the Allies restored the authority of the mafia families, lost during the Fascist period, and used their influence to maintain public order. In the 1950s the Cassa per il Mezzogiorno was set up as a huge public master plan to help industrialize the South, aiming to do this in two ways: through land reforms creating 120,000 new smallholdings, and through the "Growth Pole Strategy" whereby 60% of all government investment would go to the South, thus boosting the Southern economy by attracting new capital, stimulating local firms, and providing employment. However, the objectives were largely missed, and as a result the South became increasingly subsidized and state-dependent, incapable of generating private growth itself. Even at present, huge regional disparities persist. Problems in Southern Italy still include widespread political corruption, pervasive organized crime, and very high unemployment rates. In 2007, it was estimated that about 80% of the businesses in the Sicilian cities of Catania and Palermo paid protection money; thanks to grassroots movements like Addiopizzo, the mafia racket is slowly but steadily losing its grip. The Italian Ministry of Interior reported that organized crime generated an estimated annual profit of €13 billion. Economic sectors Primary According to the last national agricultural census, there were
antennas - 3 for Atlantic Ocean and 2 for Indian Ocean), 1 Inmarsat (Atlantic Ocean region), and NA Eutelsat; 21 submarine cables. Radio broadcast stations: AM about 100, FM about 4,600, shortwave 9 (1998) Radios: 50.5 million (1997) Television broadcast stations: 358 (plus 4,728 repeaters) (1995) Televisions: 30.5 million (1997) Internet Hosts: 22.152 million (2009) Internet users: 24.992 million (2008) Country code (Top-level domain): .it See also Media of Italy
Italy's road network has a total length of about 487,700 km. It comprises both an extensive motorway network (6,400 km), mostly toll roads, and national and local roads. Because of its long seacoast, Italy also has many harbors for the transportation of both goods and passengers. Transport networks in Italy are integrated into the Trans-European Transport Networks. Railways The Italian railway system comprises 16,723 km of active lines, most of them standard gauge and electrified, along with a number of narrow-gauge lines, some of them electrified. The network has recently been growing with the construction of the new high-speed rail network. A major part of the Italian rail network is managed and operated by Ferrovie dello Stato Italiane, a state-owned company. Other regional agencies, mostly owned by public entities such as regional governments, operate on the Italian network. The Italian railways are subsidised by the government, receiving €8.1 billion in 2009. Travellers who make frequent use of the railway during their stay in Italy can use rail passes, such as the European Inter-Rail or Italy's national and regional passes. These rail passes allow travellers the freedom to use regional trains during the validity period, but all high-speed and intercity trains require a 10-euro reservation fee. Regional passes, such as "Io viaggio ovunque Lombardia", offer one-day, multiple-day and monthly periods of validity. There are also saver passes for adults travelling as a group, with savings of up to 20%. Foreign travellers should purchase these passes in advance, so that the passes can be delivered by post prior to the trip. When using the rail passes, the date of travel needs to be filled in before boarding the trains. High speed trains Major works to increase the commercial speed of the trains started as early as 1967: the Rome-Florence "super-direct" line was built for trains up to 230 km/h and reduced the journey time to less than two hours. It was the first high-speed line in Europe, entering service in 1977. In 2009 a new high-speed line linking Milan and Turin, operating at 300 km/h, opened to passenger traffic, reducing the journey time from two hours to one hour. In the same year the Milan-Bologna line opened, reducing the journey time to 55 minutes. The Bologna-Florence high-speed line was also upgraded to 300 km/h for a journey time of 35 minutes. Since then, it has been possible to travel from Turin to Salerno (ca. 950 km) in less than 5 hours. More than 100 trains per day are operated. The main public operator of high-speed trains (alta velocità AV, formerly Eurostar Italia) is Trenitalia, part of FSI. Trains are divided into three categories: Frecciarossa ("Red arrow") trains operate at a maximum of 300 km/h on dedicated high-speed tracks; Frecciargento ("Silver arrow") trains operate at a maximum of 250 km/h on both high-speed and mainline tracks; Frecciabianca ("White arrow") trains operate at a maximum of 200 km/h on mainline tracks only. Since 2012, Italy's first private train operator, NTV (branded as Italo), has run high-speed services in competition with Trenitalia; even today, Italy is the only country in Europe with a private high-speed train operator. Construction of the Milan-Venice high-speed line began in 2013, and in 2016 the Milan-Treviglio section was opened to passenger traffic; the Milan-Genoa high-speed line (Terzo Valico dei Giovi) is also under construction. Today it is possible to travel from Rome to Milan in less than 3 hours (2h 55') with the Frecciarossa 1000, the new high-speed train; on this route there is a train every 30 minutes. Intercity trains With the introduction of high-speed trains, intercity trains are limited to a few services per day on mainline and regional tracks. The daytime services (Intercity IC), while not frequent and limited to one or two trains per route, are essential in providing access to cities and towns off the railway's mainline network. The main routes are Trieste to Rome (stopping at Venice, Bologna, Prato, Florence and Arezzo), Milan to Rome (stopping at Genoa, La Spezia, Pisa and Livorno / stopping at Parma, Modena, Bologna, Prato, Florence and Arezzo), Bologna to Lecce (stopping at Rimini, Ancona, Pescara, Bari and Brindisi) and Rome to Reggio di Calabria (stopping at Latina and Naples). In addition, the Intercity trains provide a more economical means of long-distance rail travel within Italy. The night trains (Intercity Notte ICN) have sleeper compartments and washrooms, but no showers on board. Main routes are Rome to Bolzano/Bozen (calling at Florence, Bologna, Verona, Rovereto and Trento), Milan to Lecce (calling at Bologna, Rimini, Ancona, Pescara, Bari and Brindisi), Turin to Lecce (calling at Alessandria, Voghera, Piacenza, Parma, Bologna, Rimini, Pescara, Bari and Brindisi) and Reggio di Calabria to Turin (calling at Naples, Rome, Livorno, La Spezia and Genova). Most portions of these ICN services run during the night; since most services take 10 to 15 hours to complete a one-way journey, their daytime portions provide extra connections complementing the Intercity services. There are a total of 86 intercity trains running within Italy per day. Regional trains Trenitalia operates regional services (both fast veloce RGV and stopping REG) throughout Italy. Regional train agencies exist: their train schedules are largely connected to and shown on Trenitalia, and tickets for such train services can be purchased through Trenitalia's national network. Other regional agencies have separate ticket systems which are not mutually exchangeable with that of Trenitalia; these "regional" tickets can instead be purchased at local newsagents or tobacco stores. Trentino-Alto Adige / Trentino-Südtirol: Südtirol Bahn (South Tyrol Railway) runs regional services on Ala/Ahl-am-Etsch to Bolzano/Bozen (calling at Rovereto/Rofreit, Trento/Trient and Mezzocorona/Kronmetz), Bolzano/Bozen to Merano/Meran, Bressanone/Brixen to San Candido/Innichen, and a direct "Tirol regional express REX" service between Bolzano/Bozen in Italy and Innsbruck in Austria. Veneto: Sistemi Territoriali runs regional trains in the Veneto region. Lombardy: Trenord runs the Malpensa Express airport train, many of Milan's suburban lines and most regional train services in Lombardy. Trenord also co-operates with DB and ÖBB on the EuroCity Verona-Munich service, and with SBB CFF FFS (joint venture TiLo) on the regional Milan-Bellinzona service. Emilia-Romagna: Trasporto Passeggeri Emilia-Romagna provides vital connections across cities on different mainline networks, including Modena, Parma, Suzzara, Ferrara, Reggio Emilia and Bologna. Tuscany: La Ferroviaria Italiana operates in Arezzo province. Abruzzo: Sangritana runs daily services between Pescara and Lanciano.
The Army fought the Ottoman Empire in Libya (1911–1912), on the Alps against the Austro-Hungarian Empire during World War I, in Abyssinia during the interwar period, and in World War II in Albania, Greece, North Africa and Russia, as well as in the Italian Civil War. During the Cold War the Army prepared itself to defend against a Warsaw Pact invasion from the east. Since the dissolution of the Soviet Union, it has seen extensive peacekeeping service in Lebanon, Afghanistan, and Iraq. On 29 July 2004 it became a professional all-volunteer force when conscription was finally ended. Marina Militare The navy of Italy was created in 1861, following the proclamation of the Kingdom of Italy, as the Regia Marina. The new navy's baptism of fire came during the Third Italian War of Independence against the Austrian Empire. During the First World War, it spent its major efforts in the Adriatic Sea, fighting the Austro-Hungarian Navy. In the Second World War, it engaged the Royal Navy in a two-and-a-half-year struggle for control of the Mediterranean Sea. After the war, the new Marina Militare, as a member of the North Atlantic Treaty Organisation (NATO), has taken part in many coalition peacekeeping operations. It is a blue-water navy. The Guardia Costiera (Coast Guard) is a component of the navy. Aeronautica Militare The air force of Italy was founded as an independent service arm on 28 March 1923 by King Vittorio Emanuele III as the Regia Aeronautica (which equates to "Royal Air Force"). During the 1930s, it was involved in its first military operations, in Ethiopia in 1935 and later in the Spanish Civil War between 1936 and 1939. Eventually, Italy entered World War II alongside Germany. After the armistice of 8 September 1943, Italy was divided into two sides, and the same fate befell the Regia Aeronautica: it was split into the Italian Co-Belligerent Air Force in the south, aligned with the Allies, and the pro-Axis Aeronautica Nazionale Repubblicana in the north, until the end of the war. When Italy became a republic by referendum, the air force was given its current name, Aeronautica Militare. Carabinieri The Arma dei Carabinieri is the gendarmerie and military police of Italy. The corps was instituted in 1814 by King Victor Emmanuel I of Savoy with the aim of providing the Kingdom of Sardinia with a police corps; it is therefore older than Italy itself. The new force was divided into divisions on the scale of one division for each province of Italy. The divisions were further divided into companies and subdivided into lieutenancies, which commanded and coordinated the local police stations and were distributed throughout the national territory in direct contact with the public. Italian unification saw the number of divisions increased, and in 1861 the Carabinieri were appointed the "First Force" of the new national military organization. In recent years the Carabinieri became the fourth branch of the Italian Armed Forces. They primarily carry out law enforcement and military policing duties, as well as peacekeeping missions abroad, such as in Kosovo, Afghanistan, and Iraq. At the Sea Island Conference of the G8 in 2004, the Carabinieri were given the mandate to establish a Center of Excellence for Stability Police Units (CoESPU) to spearhead the development of training and doctrinal standards for civilian police units attached to international peacekeeping missions. International stance Italy takes part in many UN, NATO and EU operations and provides assistance to Russia and the other CIS nations, the Middle East peace process, peacekeeping, and the fight against the illegal drug trade, human trafficking, piracy and terrorism. Italy took part in the 1982 Multinational Force in Lebanon along with US, French and British troops. Italy also participated in the 1990–91 Gulf War, with the deployment of eight Panavia Tornado IDS bomber jets; Italian Army troops were subsequently deployed to assist Kurdish refugees in northern Iraq following the conflict. As part of Operation Enduring Freedom, Italy contributed to the international operation in Afghanistan, and Italian forces have contributed to ISAF, the NATO force in Afghanistan, and to the Provincial Reconstruction Teams.
terms of the alliance, and Italy decided to take part in World War I as a principal allied power with France and Great Britain. Two leaders, Prime Minister Antonio Salandra and Foreign Minister Sidney Sonnino, made the decision; their primary motivation was the seizure of territory from Austria, as secretly promised by Britain and France in the Treaty of London of 1915. Italy also occupied southern Albania and established a protectorate over Albania, which remained in place until 1920. The Allies defeated the Austrian Empire in 1918, and Italy emerged as one of the main victors of the war. At the Paris Peace Conference in 1919, Prime Minister Vittorio Emanuele Orlando focused almost exclusively on territorial gains, but he got far less than he wanted, and Italians were bitterly resentful when they were denied control of the city of Fiume. The conference, under the control of Britain, France and the United States, refused to assign Dalmatia and Albania to Italy as had been promised in the Treaty of London. Britain, France and Japan divided the German overseas colonies into mandates of their own, excluding Italy, and Italy gained no territory from the breakup of the Ottoman Empire. Civil unrest erupted in Italy between nationalists, who had supported the war effort and denounced what they called the "mutilated victory", and leftists, who had opposed the war. The Fascist government that came to power with Benito Mussolini in 1922 sought to increase the size of the Italian empire and to satisfy the claims of Italian irredentists. In 1935–36, Italy's second invasion of Ethiopia was successful, and it merged its new conquest with its older East African colonies. In 1939, Italy invaded Albania and incorporated it into the Fascist state. During the Second World War (1939–45), Italy formed the Axis alliance with Germany (and nominally also Japan) and seized several territories, including parts of France, Greece, Egypt and Tunisia.
By the war's end it was forced out of all its colonies and protectorates. Following the civil war of 1943–1945 and the resulting economic depression, Italy enjoyed an economic miracle, promoted European unity, joined NATO and became an active member of the European Union. Italy was granted a United Nations trusteeship to administer Somaliland in 1950. When Somalia became independent in 1960, Italy's eight-decade experience with colonialism ended. Relations by region and country Africa Americas Asia Europe Oceania International institutions Italy is part of the UN, EU, NATO, the OECD, the OSCE, the DAC, the WTO, the G7, the G20, the Union for the Mediterranean, the Latin Union, the Council of Europe, the Central European Initiative, the ASEM, the MEF, the ISA, the Uniting for Consensus and several Contact Groups. See also Diplomatic history of World War II#Italy International relations of the Great Powers (1814–1919) List of diplomatic missions in Italy List of diplomatic missions of Italy Treaty of Osimo, 1975 with Yugoslavia Treaty of Rapallo, 1920 Visa requirements for Italian citizens
Twelve groups are considered "historical language minorities" and are officially recognized as distinct minority languages by law. On the other hand, Corsican (a language spoken on the French island of Corsica) is closely related to medieval Tuscan, from which Standard Italian derives and evolved. The differences in the evolution of Latin in the different regions of Italy can be attributed to the natural changes to which all languages in regular use are subject, and to some extent to the presence of three other types of languages: substrata, superstrata, and adstrata. The most prevalent were substrata (the languages of the original inhabitants), as the Italian dialects were most likely simply Latin as spoken by native cultural groups. Superstrata and adstrata were both less important: foreign conquerors who dominated different regions of Italy at different times left behind little to no influence on the dialects, and foreign cultures with which Italy engaged in peaceful relations, such as trade, had no significant influence either. Throughout Italy, regional variations of Standard Italian, called Regional Italian, are spoken. Regional differences can be recognized by various factors: the openness of vowels, the length of consonants, and the influence of the local language (for example, in informal situations local forms replace the standard Italian infinitive andare, "to go", in Tuscany, Rome and Venice respectively). There is no definitive date when the various Italian variants of Latin—including varieties that contributed to modern Standard Italian—began to be distinct enough from Latin to be considered separate languages. One criterion for determining that two language variants are to be considered separate languages rather than variants of a single language is that they have evolved so that they are no longer mutually intelligible; this diagnostic is effective if mutual intelligibility is minimal or absent (e.g.
in Romance, Romanian and Portuguese), but it fails in cases such as Spanish-Portuguese or Spanish-Italian, as native speakers of either pairing can understand each other well if they choose to do so. Nevertheless, on the basis of accumulated differences in morphology, syntax, phonology, and to some extent lexicon, it is not difficult to identify that for the Romance varieties of Italy, the first extant written evidence of languages that can no longer be considered Latin comes from the ninth and tenth centuries C.E. These written sources demonstrate certain vernacular characteristics and sometimes explicitly mention the use of the vernacular in Italy. Full literary manifestations of the vernacular began to surface around the 13th century in the form of various religious texts and poetry. Although these are the first written records of Italian varieties separate from Latin, the spoken language had likely diverged long before the first written records appeared, since those who were literate generally wrote in Latin even if they spoke other Romance varieties in person. Throughout the 19th and 20th centuries, the use of Standard Italian became increasingly widespread and was mirrored by a decline in the use of the dialects. An increase in literacy was one of the main driving factors (one can assume that only literates were capable of learning Standard Italian, whereas those who were illiterate had access only to their native dialect). The percentage of literates rose from 25% in 1861 to 60% in 1911, and then on to 78.1% in 1951. Tullio De Mauro, an Italian linguist, has asserted that in 1861 only 2.5% of the population of Italy could speak Standard Italian. He reports that in 1951 that percentage had risen to 87%. The ability to speak Italian did not necessarily mean it was in everyday use, and most people (63.5%) still usually spoke their native dialects.
In addition, other factors such as mass emigration, industrialization, urbanization, and internal migration after World War II contributed to the proliferation of Standard Italian. The Italians who emigrated during the Italian diaspora beginning in 1861 were often of the uneducated lower class, and thus the emigration had the effect of increasing the percentage of literates, who often knew and understood the importance of Standard Italian, back home in Italy. A large percentage of those who had emigrated also eventually returned to Italy, often more educated than when they had left. The Italian dialects have declined in the modern era, as Italy unified under Standard Italian and continues to do so aided by mass media, from newspapers to radio to television. Phonology Italian has a seven-vowel system and 23 consonants. Compared with most other Romance languages, Italian phonology is conservative, preserving many words nearly unchanged from Vulgar Latin. Some examples: Italian quattordici "fourteen" < Latin quattuordecim (cf. Spanish catorce, French quatorze, Catalan and Portuguese catorze); Italian settimana "week" (cf. Romanian săptămână, Spanish and Portuguese semana, French semaine, Catalan setmana); Italian medesimo "same" (cf. Spanish mismo, Portuguese mesmo, French même, Catalan mateix; note that Italian usually prefers the shorter stesso); Italian guadagnare "to win, earn, gain", from a Germanic root (cf. Spanish ganar, Portuguese ganhar, French gagner, Catalan guanyar). The conservative nature of Italian phonology is partly explained by its origin. Italian stems from a literary language derived from the 13th-century speech of the city of Florence in the region of Tuscany, and has changed little in the last 700 years or so. Furthermore, the Tuscan dialect is the most conservative of all Italian dialects, radically different from the Gallo-Italian languages a short distance to the north (across the La Spezia–Rimini Line).
The following are some of the conservative phonological features of Italian, as compared with the common Western Romance languages (French, Spanish, Portuguese, Galician, Catalan). Some of these features are also present in Romanian. Little or no phonemic lenition of consonants between vowels, e.g. vita "life" (cf. Romanian viață, Spanish vida, French vie) and piede "foot" (cf. Spanish pie, French pied). Preservation of geminate consonants, e.g. anno "year" with /nn/ (cf. Spanish año, French an, Romanian an, Portuguese ano). Preservation of all Proto-Romance final vowels, e.g. pace "peace", otto "eight", feci "I did" (cf. Spanish paz, ocho, hice; French paix, huit, fis). Preservation of most intertonic vowels (those between the stressed syllable and either the beginning or ending syllable). This accounts for some of the most noticeable differences, as in the forms quattordici and settimana given above. Slower consonant development, e.g. Latin folia > Italo-Western foglia "leaf" (cf. Romanian foaie, Spanish hoja, French feuille; but note Portuguese folha). Compared with most other Romance languages, Italian has many inconsistent outcomes, where the same underlying sound produces different results in different words, e.g. the doublets lasciare and lassare, cacciare and cazzare, sdrucciolare and druzzolare beside ruzzolare, regina and reina. Although in all these examples the second form has fallen out of usage, the dimorphism is thought to reflect the several-hundred-year period during which Italian developed as a literary language divorced from any native-speaking population, with an origin in 12th/13th-century Tuscan but with many words borrowed from languages farther to the north, with different sound outcomes. (The La Spezia–Rimini Line, the most important isogloss in the entire Romance-language area, passes only a short distance north of Florence.)
Dual outcomes of Latin /p t k/ between vowels, such as > luogo but > fuoco, was once thought to be due to borrowing of northern voiced forms, but is now generally viewed as the result of early phonetic variation within Tuscany. Some other features that distinguish Italian from the Western Romance languages: Latin becomes rather than . Latin becomes rather than or : > otto "eight" (cf. Spanish ocho, French huit, Portuguese oito). Vulgar Latin becomes cchi rather than : > occhio "eye" (cf. Portuguese olho , French œil < ); but Romanian ochi . Final is not preserved, and vowel changes rather than are used to mark the plural: amico, amici "male friend(s)", amica, amiche "female friend(s)" (cf. Romanian amic, amici and amică, amice; Spanish amigo(s) "male friend(s)", amiga(s) "female friend(s)"); → tre, sei "three, six" (cf. Romanian trei, șase; Spanish tres, seis). Standard Italian also differs in some respects from most nearby Italian languages: Perhaps most noticeable is the total lack of metaphony, though metaphony is a feature characterizing nearly every other Italian language. No simplification of original , (which often became elsewhere). Assimilation Italian phonotactics do not usually permit verbs and polysyllabic nouns to end with consonants, except in poetry and song, so foreign words may receive extra terminal vowel sounds. Writing system Italian has a shallow orthography, meaning very regular spelling with an almost one-to-one correspondence between letters and sounds. In linguistic terms, the writing system is close to being a phonemic orthography. The most important of the few exceptions are the following (see below for more details): The letter c represents the sound at the end of words and before the letters a, o, and u but represents the sound (as the first sound in the English word chair) before the letters e and i. 
The letter g represents the sound at the end of words and before the letters a, o, and u but represents the sound (as the first sound in the English word gem) before the letters e and i. The letter n usually represents the sound , but it represents the sound (as in the English word sink) before the letter c and before the letter g when this is pronounced , and it represents the sound when the letter g is pronounced . So the combination of two letters ng represents either or (but never on its own, as in the English word singer). The letter h is always silent: hotel /oˈtɛl/; hanno 'they have' and anno 'year' both represent /ˈanno/. It is used to form a digraph with c or g to represent /k/ or /g/ before i or e: chi /ki/ 'who', che /ke/ 'what'; aghi /ˈagi/ 'needles', ghetto /ˈgetto/. The spellings ci and gi represent only /tʃ/ (as in English church) or /dʒ/ (as in English judge) with no /i/ sound before another vowel (ciuccio /ˈtʃuttʃo/ 'pacifier', Giorgio /ˈdʒɔrdʒo/) unless c or g precede stressed /i/ (farmacia /farmaˈtʃia/ 'pharmacy', biologia /bioloˈdʒia/ 'biology'). Elsewhere ci and gi represent /tʃ/ and /dʒ/ followed by /i/: cibo /ˈtʃibo/ 'food', baci /ˈbatʃi/ 'kisses'; gita /ˈdʒita/ 'trip', Tamigi /taˈmidʒi/ 'Thames'.* The Italian alphabet is typically considered to consist of 21 letters. The letters j, k, w, x, y are traditionally excluded, though they appear in loanwords such as jeans, whisky, taxi, xenofobo, xilofono. The letter has become common in standard Italian with the prefix extra-, although (e)stra- is traditionally used; it is also common to use the Latin particle ex(-) to mean "former(ly)" as in: la mia ex ("my ex-girlfriend"), "Ex-Jugoslavia" ("Former Yugoslavia"). The letter appears in the first name Jacopo and in some Italian place-names, such as Bajardo, Bojano, Joppolo, Jerzu, Jesolo, Jesi, Ajaccio, among others, and in Mar Jonio, an alternative spelling of Mar Ionio (the Ionian Sea). 
Written sources from the ninth and tenth centuries C.E. demonstrate certain vernacular characteristics and sometimes explicitly mention the use of the vernacular in Italy. Full literary manifestations of the vernacular began to surface around the 13th century in the form of various religious texts and poetry. Although these are the first written records of Italian varieties separate from Latin, the spoken language had likely diverged long before the first written records appeared, since those who were literate generally wrote in Latin even if they spoke other Romance varieties in person. Throughout the 19th and 20th centuries, the use of Standard Italian became increasingly widespread and was mirrored by a decline in the use of the dialects. An increase in literacy was one of the main driving factors (one can assume that only literates were capable of learning Standard Italian, whereas those who were illiterate had access only to their native dialect). The percentage of literates rose from 25% in 1861 to 60% in 1911, and then to 78.1% in 1951. Tullio De Mauro, an Italian linguist, has asserted that in 1861 only 2.5% of the population of Italy could speak Standard Italian; he reports that by 1951 that percentage had risen to 87%. The ability to speak Italian did not necessarily mean it was in everyday use, and most people (63.5%) still usually spoke their native dialects. In addition, other factors such as mass emigration, industrialization, urbanization, and internal migrations after World War II contributed to the proliferation of Standard Italian. 
The Italians who emigrated during the Italian diaspora beginning in 1861 were often of the uneducated lower class, and thus emigration had the effect of raising the percentage of literates back home in Italy, who often knew and understood the importance of Standard Italian. A large percentage of those who had emigrated also eventually returned to Italy, often more educated than when they had left. The Italian dialects have declined in the modern era, as Italy unified under Standard Italian and continues to do so, aided by mass media, from newspapers to radio to television.

Phonology

Italian has a seven-vowel system, consisting of /a, ɛ, e, i, ɔ, o, u/, as well as 23 consonants. Compared with most other Romance languages, Italian phonology is conservative, preserving many words nearly unchanged from Vulgar Latin. Some examples:

Italian quattordici "fourteen" < Latin (cf. Spanish , French , Catalan and Portuguese )
Italian settimana "week" < Latin (cf. Romanian săptămână, Spanish and Portuguese semana, French semaine, Catalan setmana)
Italian medesimo "same" < Vulgar Latin * (cf. Spanish mismo, Portuguese mesmo, French même, Catalan mateix; note that Italian usually prefers the shorter stesso)
Italian guadagnare "to win, earn, gain" < Vulgar Latin * < Germanic (cf. Spanish ganar, Portuguese ganhar, French gagner, Catalan guanyar)

The conservative nature of Italian phonology is partly explained by its origin. Italian stems from a literary language derived from the 13th-century speech of the city of Florence in the region of Tuscany, and has changed little in the last 700 years or so. Furthermore, the Tuscan dialect is the most conservative of all Italian dialects, radically different from the Gallo-Italian languages less than to the north (across the La Spezia–Rimini Line). 
Some of these features are also present in Romanian.

Little or no phonemic lenition of consonants between vowels, e.g. > vita "life" (cf. Romanian viață, Spanish vida, French vie), > piede "foot" (cf. Spanish pie, French pied).
Preservation of geminate consonants, e.g. > "year" (cf. Spanish , French , Romanian , Portuguese ).
Preservation of all Proto-Romance final vowels, e.g. > "peace" (cf. Romanian , Spanish , French ), > "eight" (cf. Romanian , Spanish , French ), > "I did" (cf. Romanian dialectal , Spanish , French ).
Preservation of most intertonic vowels (those between the stressed syllable and either the beginning or ending syllable); this accounts for some of the most noticeable differences, as in the forms quattordici and settimana given above.
Slower consonant development, e.g. > Italo-Western > foglia "leaf" (cf. Romanian foaie, Spanish hoja, French feuille; but note Portuguese folha).

Compared with most other Romance languages, Italian has many inconsistent outcomes, where the same underlying sound produces different results in different words, e.g. > lasciare and lassare, > cacciare and cazzare, > sdrucciolare, druzzolare and ruzzolare, > regina and reina. Although in all these examples the second form has fallen out of usage, the dimorphism is thought to reflect the several-hundred-year period during which Italian developed as a literary language divorced from any native-speaking population, with an origin in 12th/13th-century Tuscan but with many words borrowed from languages farther to the north, with different sound outcomes. (The La Spezia–Rimini Line, the most important isogloss in the entire Romance-language area, passes only about north of Florence.) Dual outcomes of Latin /p t k/ between vowels, such as > luogo but > fuoco, were once thought to be due to borrowing of northern voiced forms, but are now generally viewed as the result of early phonetic variation within Tuscany. 
Some other features distinguish Italian from the Western Romance languages:

Latin becomes rather than .
Latin becomes rather than or : > otto "eight" (cf. Spanish ocho, French huit, Portuguese oito).
Vulgar Latin becomes cchi rather than : > occhio "eye" (cf. Portuguese olho, French œil < ); but Romanian ochi.
Final is not preserved, and vowel changes rather than are used to mark the plural: amico, amici "male friend(s)", amica, amiche "female friend(s)" (cf. Romanian amic, amici and amică, amice; Spanish amigo(s) "male friend(s)", amiga(s) "female friend(s)"); → tre, sei "three, six" (cf. Romanian trei, șase; Spanish tres, seis).

Standard Italian also differs in some respects from most nearby Italian languages:

Perhaps most noticeable is the total lack of metaphony, though metaphony is a feature characterizing nearly every other Italian language.
No simplification of original , (which often became elsewhere).

Assimilation

Italian phonotactics do not usually permit verbs and polysyllabic nouns to end with consonants, except in poetry and song, so foreign words may receive extra terminal vowel sounds.

Writing system

Italian has a shallow orthography, meaning very regular spelling with an almost one-to-one correspondence between letters and sounds. In linguistic terms, the writing system is close to being a phonemic orthography. The most important of the few exceptions are the following (see below for more details):

The letter c represents the sound /k/ at the end of words and before the letters a, o, and u, but represents the sound /tʃ/ (as the first sound in the English word chair) before the letters e and i.
The letter g represents the sound /ɡ/ at the end of words and before the letters a, o, and u, but represents the sound /dʒ/ (as the first sound in the English word gem) before the letters e and i. 
The letter n usually represents the sound /n/, but it represents /ŋ/ (as in the English word sink) before the letter c and before the letter g when the g is pronounced /ɡ/, and it represents /n/ when the g is pronounced /dʒ/. So the two-letter combination ng represents either /ŋɡ/ or /ndʒ/ (but never /ŋ/ on its own, as in the English word singer). The letter h is always silent: hotel /oˈtɛl/; hanno 'they have' and anno 'year' both represent /ˈanno/. It is used to form a digraph with c or g to represent /k/ or /g/ before i or e: chi /ki/ 'who', che /ke/ 'what'; aghi /ˈagi/ 'needles', ghetto /ˈgetto/. The spellings ci and gi represent only /tʃ/ (as in English church) or /dʒ/ (as in English judge) with no /i/ sound before another vowel (ciuccio /ˈtʃuttʃo/ 'pacifier', Giorgio /ˈdʒɔrdʒo/) unless c or g precede stressed /i/ (farmacia /farmaˈtʃia/ 'pharmacy', biologia /bioloˈdʒia/ 'biology'). Elsewhere ci and gi represent /tʃ/ and /dʒ/ followed by /i/: cibo /ˈtʃibo/ 'food', baci /ˈbatʃi/ 'kisses'; gita /ˈdʒita/ 'trip', Tamigi /taˈmidʒi/ 'Thames'. The Italian alphabet is typically considered to consist of 21 letters. The letters j, k, w, x, y are traditionally excluded, though they appear in loanwords such as jeans, whisky, taxi, xenofobo, xilofono. The letter x has become common in standard Italian with the prefix extra-, although (e)stra- is traditionally used; it is also common to use the Latin particle ex(-) to mean "former(ly)", as in la mia ex ("my ex-girlfriend") and "Ex-Jugoslavia" ("Former Yugoslavia"). The letter j appears in the first name Jacopo and in some Italian place names, such as Bajardo, Bojano, Joppolo, Jerzu, Jesolo, Jesi, and Ajaccio, among others, and in Mar Jonio, an alternative spelling of Mar Ionio (the Ionian Sea). The letter j may also appear in dialectal words, but its use is discouraged in contemporary standard Italian. 
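The distribution of hard and soft c and g described above is regular enough to express as a small rule. The following Python sketch is purely illustrative (the function name and sound labels are this example's own); it deliberately ignores subtleties such as the silent i of ci/gi before another vowel, stress on i as in farmacia, and digraphs like sci and gli:

```python
# Illustrative sketch of the hard/soft behaviour of written c and g.
# Simplified: no handling of silent i (ciao), stressed i (farmacia),
# or the digraphs sci/gli.

HARD = {"c": "k", "g": "g"}    # word-finally and before a, o, u
SOFT = {"c": "tʃ", "g": "dʒ"}  # before e, i

def c_g_sound(word, i):
    """Return the sound of the c or g at index i of a lowercase word."""
    letter = word[i]
    nxt = word[i + 1] if i + 1 < len(word) else ""
    if nxt == "h":
        return HARD[letter]    # ch/gh keep the hard sound: chi, aghi
    if nxt in ("e", "i"):
        return SOFT[letter]    # Cina, giro
    return HARD[letter]        # caramella, gallo; also word-final c

assert c_g_sound("caramella", 0) == "k"   # hard c before a
assert c_g_sound("cina", 0) == "tʃ"       # soft c before i
assert c_g_sound("china", 0) == "k"       # ch restores the hard sound
assert c_g_sound("ghiro", 0) == "g"       # gh restores the hard sound
```

The h test comes first because ch and gh override the following front vowel, exactly as in chi 'who' versus ci.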
Letters used in foreign words can be replaced with phonetically equivalent native Italian letters and digraphs: , , or for ; or for (including in the standard prefix kilo-); , or for ; , , , or for ; and or for . The acute accent is used over word-final e to indicate a stressed front close-mid vowel, as in perché "why, because". In dictionaries, it is also used over o to indicate a stressed back close-mid vowel (azióne). The grave accent is used over word-final e and o to indicate a front open-mid vowel and a back open-mid vowel respectively, as in tè "tea" and può "(he) can". The grave accent is used over any vowel to indicate word-final stress, as in gioventù "youth". Unlike é, which marks a close-mid vowel, a stressed final o is almost always a back open-mid vowel (andrò), with a few exceptions such as metró, which has a stressed final back close-mid vowel; this makes ó for the most part unnecessary outside of dictionaries. Most of the time, the penultimate syllable is stressed; if the stressed vowel is the final letter of the word, the accent is mandatory, but otherwise it is virtually always omitted. Exceptions occur mainly in dictionaries, where all or most stressed vowels are commonly marked. Accents can optionally be used to disambiguate words that differ only by stress, as for prìncipi "princes" and princìpi "principles", or àncora "anchor" and ancóra "still/yet". For monosyllabic words, the rule is different: when two orthographically identical monosyllabic words with different meanings exist, one is accented and the other is not (for example, è "is" and e "and"). The letter h distinguishes ho, hai, ha, hanno (present indicative of avere "to have") from o ("or"), ai ("to the"), a ("to"), and anno ("year"). In the spoken language, the letter h is always silent. The h in ho additionally marks the contrasting open pronunciation of the o. The letter h is also used in combinations with other letters. No /h/ phoneme exists in Italian. In nativized foreign words, the h is silent. 
For example, hotel and hovercraft are pronounced and respectively. (Where /h/ existed in Latin, it either disappeared or, in a few cases before a back vowel, changed to : traggo "I pull" ← Lat. .) The letters s and z can symbolize voiced or voiceless consonants. z symbolizes or depending on context, with few minimal pairs, for example zanzara "mosquito" and nazione "nation". s symbolizes word-initially before a vowel, when clustered with a voiceless consonant (), and when doubled; it symbolizes when between vowels and when clustered with voiced consonants. Intervocalic s varies regionally between and , with being more dominant in northern Italy and in the south. The letters c and g vary in pronunciation between plosives and affricates depending on the following vowels. The letter c symbolizes /k/ when word-final and before the back vowels a, o, u; it symbolizes /tʃ/ as in chair before the front vowels e, i. The letter g symbolizes /ɡ/ when word-final and before the back vowels a, o, u; it symbolizes /dʒ/ as in gem before the front vowels e, i. Other Romance languages and, to an extent, English have similar variations for c and g; compare hard and soft C, hard and soft G. (See also palatalization.) The digraphs ch and gh indicate the plosives /k/ and /ɡ/ before i or e; the digraphs ci and gi indicate "softness" (/tʃ/ and /dʒ/, the affricate consonants of English church and judge) before a, o, or u. For example:

             Before back vowel (a, o, u)      Before front vowel (i, e)
Plosive      c  — caramella "candy"           ch — china "India ink"
             g  — gallo "rooster"             gh — ghiro "edible dormouse"
Affricate    ci — ciambella "donut"           c  — Cina "China"
             gi — giallo "yellow"             g  — giro "round, tour"

Note: h is silent in the digraphs ch and gh, and i is silent in the digraphs ci and gi before another vowel unless the i is stressed. For example, it is silent in ciao and cielo, but it is pronounced in farmacia and farmacie. Italian has geminate, or double, consonants, which are distinguished by length and intensity. 
Length is distinctive for all consonants except for , , , , , which are always geminate when between vowels, and , which is always single. Geminate plosives and affricates are realized as lengthened closures. Geminate fricatives, nasals, and are realized as lengthened continuants. There is only one vibrant phoneme, but its actual pronunciation depends on context and regional accent. Generally one finds a flap consonant in unstressed position, whereas is more common in stressed syllables, though there may be exceptions. Especially people from the northern part of Italy (Parma, Aosta Valley, South Tyrol) may pronounce as , , or . Of special interest to the linguistic study of Regional Italian is the gorgia toscana, or "Tuscan throat", the weakening or lenition of intervocalic , , and in the Tuscan language. The voiced postalveolar fricative is present as a phoneme only in loanwords: for example, garage . Phonetic is common in Central and Southern Italy as an intervocalic allophone of : gente 'people' but la gente 'the people', ragione 'reason'.

Grammar

Italian grammar is typical of the grammar of Romance languages in general. Cases exist for personal pronouns (nominative, oblique, accusative, dative), but not for nouns. There are two basic classes of nouns in Italian, referred to as genders, masculine and feminine. Gender may be natural (ragazzo 'boy', ragazza 'girl') or simply grammatical with no possible reference to biological gender (masculine costo 'cost', feminine costa 'coast'). Masculine nouns typically end in -o (ragazzo 'boy'), with plural marked by -i (ragazzi 'boys'), and feminine nouns typically end in -a, with plural marked by -e (ragazza 'girl', ragazze 'girls'). For a group composed of boys and girls, ragazzi is the plural, suggesting that -i is a general neutral plural. A third category of nouns is unmarked for gender, ending in -e in the singular and -i in the plural: legge 'law, f. sg.', leggi 'laws, f. pl.'; fiume 'river, m. sg.', fiumi 'rivers, m. 
pl.'. Assignment of gender is thus arbitrary in terms of form, enough so that terms may be identical but of distinct genders: fine meaning 'aim, purpose' is masculine, while fine meaning 'end, ending' (e.g. of a movie) is feminine, and both are fini in the plural, a clear instance of -i as a non-gendered default plural marker. These nouns often, but not always, denote inanimates. A number of nouns have a masculine singular and a feminine plural, most commonly of the pattern m. sg. -o, f. pl. -a (miglio 'mile, m. sg.', miglia 'miles, f. pl.'; paio 'pair, m. sg.', paia 'pairs, f. pl.'), and thus are sometimes considered neuter (these are usually derived from neuter Latin nouns). An instance of neuter gender also exists in pronouns of the third person singular. Nouns, adjectives, and articles inflect for gender and number (singular and plural). As in English, common nouns are capitalized when occurring at the beginning of a sentence. Unlike English, nouns referring to languages (e.g. Italian), speakers of languages, or inhabitants of an area (e.g. Italians) are not capitalized. There are three types of adjectives: descriptive, invariable, and form-changing. Descriptive adjectives are the most common, and their endings change to match the number and gender of the noun they modify. Invariable adjectives are adjectives whose endings do not change. The form-changing adjectives buono ('good'), bello ('beautiful'), grande ('big'), and santo ('saint') change in form when placed before different types of nouns. Italian has three degrees for comparison of adjectives: positive, comparative, and superlative. The order of words in the phrase is relatively free compared to most European languages. The position of the verb in the phrase is highly mobile. Word order often has a lesser grammatical function in Italian than in English. Adjectives are sometimes placed before their noun and sometimes after. Subject nouns generally come before the verb. 
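The regular plural patterns above lend themselves to a compact sketch. The following Python function is purely illustrative (its name and the tiny exception table are invented for this example), and it deliberately ignores spelling adjustments such as amica → amiche as well as the many irregular and invariable nouns:

```python
# Illustrative sketch of the regular Italian plural patterns.
# Simplifications: no spelling adjustments (amica -> amiche, not *amice),
# and only a token table of the m. sg. -o / f. pl. -a "neuter" nouns.

NEUTER_PLURALS = {"miglio": "miglia", "paio": "paia"}  # m. sg. -> f. pl.

def pluralize(noun):
    if noun in NEUTER_PLURALS:
        return NEUTER_PLURALS[noun]
    if noun.endswith("o"):        # masculine -o -> -i: ragazzo -> ragazzi
        return noun[:-1] + "i"
    if noun.endswith("a"):        # feminine -a -> -e: ragazza -> ragazze
        return noun[:-1] + "e"
    if noun.endswith("e"):        # unmarked -e -> -i: fiume -> fiumi
        return noun[:-1] + "i"
    return noun                   # consonant-final loanwords are invariable

assert pluralize("ragazzo") == "ragazzi"
assert pluralize("ragazza") == "ragazze"
assert pluralize("legge") == "leggi"
assert pluralize("miglio") == "miglia"
```

The exception table is checked first because the -o ending of miglio and paio would otherwise wrongly match the masculine -o → -i rule.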
Italian is a null-subject language, so that nominative pronouns are usually absent, with subject indicated by verbal inflections (e.g. amo 'I love', ama '(s)he loves', amano 'they love'). Noun objects normally come after the verb, as do pronoun objects after imperative verbs, infinitives and gerunds, but otherwise pronoun objects come before the verb. There are both indefinite and definite articles in Italian. There are four indefinite articles, selected by the gender of the noun they modify and by the phonological structure of the word that immediately follows the article. Uno is masculine singular, used before z ( or ), s+consonant, gn (), or ps, while masculine singular un is used before a word beginning with any other sound. The noun zio 'uncle' selects masculine singular, thus uno zio 'an uncle' or uno zio anziano 'an old uncle,' but un mio zio 'an uncle of mine'. The feminine singular indefinite articles are una, used before any consonant sound, and its abbreviated form, written un', used before vowels: una camicia 'a shirt', una camicia bianca 'a white shirt', un'altra camicia 'a different shirt'. There are seven forms for definite articles, both singular and plural. In the singular: lo, which corresponds to the uses of uno; il, which corresponds to the uses with consonant of un; la, which corresponds to the uses of una; l', used for both masculine and feminine singular before vowels. In the plural: gli is the masculine plural of lo and l'; i is the plural of il; and le is the plural of feminine la and l'. There are numerous contractions of prepositions with subsequent articles. There are |
crime) as a squad leader at Schofield Barracks, Marrow met a pimp named Mac. Mac admired that Marrow could quote Iceberg Slim, and he taught Marrow how to be a pimp himself. Marrow was also able to purchase stereo equipment cheaply in Hawaii, including two Technics turntables, a mixer, and large speakers. Once equipped, he began to learn turntablism and rapping. Marrow learned from his commanding officer that he could receive an early honorable discharge because he was a single father. Taking advantage of this, Marrow was discharged as a Private First Class (PFC, E-3) in December 1979 after serving for two years and two months. During an episode of The Adam Carolla Podcast that aired on June 6, 2012, Marrow claimed that after being discharged from the Army, he began a career as a bank robber. Marrow claimed he and some associates began conducting take-over bank robberies "like [in the film] Heat". Marrow then elaborated, explaining, "Only punks go for the drawer, we gotta go for the safe." Marrow also stated he was glad the United States justice system has statutes of limitations, which had likely expired by the time Marrow admitted to his involvement in multiple Class 1 felonies in the early-to-mid 1980s. In July 2010, Marrow was mistakenly arrested. A month later, when Marrow attended court, the charges were dropped, and the prosecution stated "there had been a clerical error when the rapper was arrested". Marrow gave some advice to young people who think going to jail is a mark of integrity, saying: "Street credibility has nothing to do with going to jail, it has everything to do with staying out."

Career

Music

Early career (1980–1981)

After leaving the Army, Marrow wanted to stay away from gang life and violence and instead make a name for himself as a DJ. As a tribute to Iceberg Slim, Marrow adopted the stage name Ice-T. While performing as a DJ at parties, he received more attention for his rapping, which led Ice-T to pursue a career as a rapper. 
After breaking up with his girlfriend Caitlin Boyd, he returned to a life of crime and robbed jewelry stores with his high school friends. Ice-T's raps later described how he and his friends pretended to be customers to gain access before smashing the display glass with baby sledgehammers. Two of Ice-T's friends, Al P. and Sean E. Sean, went to prison. Al P. was caught in 1982 and imprisoned for robbing a high-end jewelry store in Laguna Niguel of $2.5 million in jewelry. Sean was arrested for possession not only of cannabis, which he sold, but also of material stolen by Ice-T; Sean took the blame and served two years in prison. Ice-T stated that he owed a debt of gratitude to Sean, because Sean's prison time allowed him to pursue a career as a rapper. Around the same time, Ice-T was in a car accident and was hospitalized as a John Doe, because he did not carry any form of identification due to his criminal activities. After being discharged from the hospital, he decided to abandon the criminal lifestyle and pursue a professional rap career. Two weeks after being released from the hospital, he won an open mic competition judged by Kurtis Blow.

Professional career (1982–present)

In 1982, Ice-T met producer Willie Strong from Saturn Records. In 1983, Strong recorded Ice-T's first single, "Cold Wind Madness", also known as "The Coldest Rap", an electro hip-hop record that became an underground success, popular even though radio stations did not play it due to the song's hardcore lyrics. That same year, Ice-T released "Body Rock", another electro hip-hop single that found popularity in clubs. In 1984, Ice-T released the single "Killers", the first of his political raps, and then was a featured rapper on "Reckless", a single by DJ Chris "The Glove" Taylor and co-producer David Storrs. 
The song was almost immediately followed by a sequel entitled "Reckless Rivalry (Combat)", which was featured in the Breakin' sequel, Breakin' 2: Electric Boogaloo; however, it never appeared on the soundtrack album and, to this day, has never been released. Ice-T later recorded the songs "Ya Don't Quit" and "Dog'n the Wax (Ya Don't Quit-Part II)" with Unknown DJ, who provided a Run–D.M.C.-like sound for the songs. Ice-T received further inspiration as an artist from Schoolly D's gangsta rap single "P.S.K. What Does It Mean?", which he heard in a club. Ice-T enjoyed the single's sound and delivery, as well as its vague references to gang life, although the real-life gang, Park Side Killers, was not named in the song. Ice-T decided to adopt Schoolly D's style, wrote the lyrics to his first gangsta rap song, "6 in the Mornin'", in his Hollywood apartment, and created a minimal beat with a Roland TR-808. He compared the sound of the song, which was recorded as a B-side on the single "Dog'n The Wax", to that of the Beastie Boys. The single was released in 1986, and he learned that "6 in the Mornin'" was more popular in clubs than its A-side, leading Ice-T to rap about Los Angeles gang life, which he described more explicitly than any previous rapper. He intentionally did not represent any particular gang, and wore a mixture of red and blue clothing and shoes to avoid antagonizing gang-affiliated listeners, who debated his true affiliation. Ice-T finally landed a deal with a major label, Sire Records. When label founder and president Seymour Stein heard his demo, he said Ice-T sounded like Bob Dylan. Shortly after, Ice-T released his debut album Rhyme Pays in 1987, supported by DJ Evil E, DJ Aladdin, and producer Afrika Islam, who helped create the mainly party-oriented sound. The record was certified gold by the Recording Industry Association of America. 
That same year, he recorded the title theme song for Dennis Hopper's Colors, a film about inner-city gang life in Los Angeles. His next album Power was released in 1988, under his own label Rhyme Syndicate, and it was a more assured and impressive record, earning him strong reviews and his second gold record. Released in 1989, The Iceberg/Freedom of Speech... Just Watch What You Say! established his popularity by matching excellent abrasive music with narrative and commentative lyrics. In the same year, he appeared on Hugh Harris' single "Alice". In 1991, he released his album O.G. Original Gangster, which is regarded as one of the albums that defined gangsta rap. On OG, he introduced his heavy metal band Body Count in a track of the same name. Ice-T toured with Body Count on the first annual Lollapalooza concert tour in 1991, gaining him appeal among middle-class teenagers and fans of alternative music genres. The album Body Count was released in March 1992. For his appearance on the heavily collaborative track "Back on the Block", a composition by jazz musician Quincy Jones that "attempt[ed] to bring together black musical styles from jazz to soul to funk to rap", Ice-T won a Grammy Award for the Best Rap Performance by a Duo or Group, an award shared by others who worked on the track including Jones and fellow jazz musician Ray Charles. Controversy later surrounded Body Count over its song "Cop Killer". The rock song was intended to speak from the viewpoint of a criminal getting revenge on racist, brutal cops. Ice-T's rock song infuriated government officials, the National Rifle Association, and various police advocacy groups. Consequently, Time Warner Music refused to release Ice-T's upcoming album Home Invasion because of the controversy surrounding "Cop Killer". Ice-T suggested that the furor over the song was an overreaction, telling journalist Chuck Philips "...they've done movies about nurse killers and teacher killers and student killers. 
Arnold Schwarzenegger blew away dozens of cops as the Terminator. But I don't hear anybody complaining about that". In the same interview, Ice-T suggested to Philips that the misunderstanding of "Cop Killer", its misclassification as a rap song (rather than a rock song), and the attempts to censor it had racial overtones: "The Supreme Court says it's OK for a white man to burn a cross in public. But nobody wants a black man to write a record about a cop killer". Ice-T split amicably with Sire/Warner Bros. Records after a dispute over the artwork of the album Home Invasion. He then reactivated Rhyme Syndicate and formed a deal with Priority Records for distribution. Priority released Home Invasion in the spring of 1993. The album peaked at No. 9 on Billboard magazine's Top R&B/Hip-Hop Albums chart and at No. 14 on the Billboard 200, spawning several singles, including "Gotta Lotta Love", "I Ain't New Ta This" and "99 Problems" – which would later inspire Jay-Z to record a version with new lyrics in 2003. In 2003, he released the single "Beat of Life" with Sandra Nasić, Trigga tha Gambler and DJ Tomekk, which placed in the German charts (per GfK Entertainment's official charts). Ice-T also collaborated with certain other heavy metal bands during this time period. For the film Judgment Night, he did a duet with Slayer on the track "Disorder". In 1995, Ice-T made a guest performance on Forbidden by Black Sabbath. Another album of his, VI – Return of the Real, was released in 1996, followed by The Seventh Deadly Sin in 1999. His first rap album since 1999, Gangsta Rap, was released on October 31, 2006. The album's cover, which "shows [Ice-T] lying on his back in bed with his ravishing wife's ample posterior in full view and one of her legs coyly draped over his private parts", was considered too suggestive for most retailers, many of which were reluctant to stock the album. 
Some reviews of the album were unenthusiastic, as many had hoped for a return to the political raps of Ice-T's most successful albums. Ice-T appears in the film Gift. One of the last scenes includes Ice-T and Body Count playing with Jane's Addiction in a version of the Sly and the Family Stone song "Don't Call Me Nigger, Whitey". Besides fronting his own band and rap projects, Ice-T has also collaborated with other hard rock and metal bands, such as Icepick, Motörhead, Slayer, Pro-Pain, and Six Feet Under. He has also covered songs by hardcore punk bands such as The Exploited, Jello Biafra, and Black Flag. Ice-T made an appearance at the 2008 edition of Insane Clown Posse's Gathering of the Juggalos. Ice-T was also a judge for the 7th annual Independent Music Awards, supporting independent artists. His 2012 film Something from Nothing: The Art of Rap features a who's who of underground and mainstream rappers. In November 2011, Ice-T announced via Twitter that he was in the process of collecting beats for his next LP, which was expected sometime during 2012, but the album has not been released. A new Body Count album, Bloodlust, was released in 2017. After the release of the album, responding to an interview question asking if he's "done with rap", he answered "I don't know" and noted that he's "really leaning more toward EDM right now". In July 2019, Ice-T released his first solo hip hop track in 10 years, titled "Feds in My Rearview". The track is the first in a trilogy; the second track, "Too Old for the Dumb Shit", described as a prequel to "Feds in My Rearview", was released in September 2019. Ice-T was also featured on the 2020 hip hop posse cut "The Slayers Club" alongside R.A. the Rugged Man, Brand Nubian and others. Ice-T performed on Fox's New Year's Eve Toast & Roast 2021 broadcast.
Acting Television and film Ice-T was prominently featured as both a rapper and a breakdancer in Breakin' 'n' Enterin' (1983), a documentary about the early West Coast hip hop scene. Ice-T's first film appearances were in the motion pictures Breakin' (1984) and its sequel, Breakin' 2: Electric Boogaloo (1984). These films were released before Ice-T released his first LP, although he appears on the soundtrack to Breakin'. He has since stated he considers the films and his own performance in them to be "wack". In 1991, he embarked on a serious acting career, portraying police detective Scotty Appleton in Mario Van Peebles' action thriller New Jack City, gang leader Odessa (alongside Denzel Washington and John Lithgow) in Ricochet (1991), and gang leader King James in Trespass (1992), followed by a notable lead performance in Surviving the Game (1994), in addition to many supporting roles, such as J-Bone in Johnny Mnemonic (1995) and the marsupial mutant T-Saint in Tank Girl (1995). He was also interviewed in the Brent Owens documentary Pimps Up, Ho's Down, in which he claims to have had an extensive pimping background before getting into rap. He is quoted as saying "once you max something out, it ain't no fun no more. I couldn't really get no farther." He goes on to explain that his pimping experience gave him the ability to get into new businesses: "I can't act, I really can't act, I ain't no rapper, it's all game. I'm just working these niggas." Later he raps at the Players Ball. In 1993, Ice-T, along with other rappers and the three Yo! MTV Raps hosts Ed Lover, Doctor Dré and Fab 5 Freddy, starred in the comedy Who's the Man?, directed by Ted Demme. In the movie, he is a drug dealer who gets frustrated when someone calls him by his real name, "Chauncey", rather than his street name, "Nighttrain". In 1995, Ice-T had a recurring role as vengeful drug dealer Danny Cort on the television series New York Undercover, co-created by Dick Wolf.
His work on the series earned him the 1996 NAACP Image Award for Outstanding Supporting Actor in a Drama Series. In 1997, he co-created the short-lived series Players, produced by Wolf. This was followed by a role as pimp Seymour "Kingston" Stockton in Exiled: A Law & Order Movie (1998). These collaborations led Wolf to add Ice-T to the cast of Law & Order: Special Victims Unit. Since 2000 he has portrayed Odafin "Fin" Tutuola, a former undercover narcotics officer transferred to the Special Victims Unit. In 2002, the NAACP awarded Ice-T a second Image Award, again for Outstanding Supporting Actor in a Drama Series, for his work on Law & Order: SVU. Around 1995, Ice-T co-presented Baadasss TV, a UK-produced magazine television series on black culture. In 1997, Ice-T had a pay-per-view special titled Ice-T's Extreme Babes, which appeared on Action PPV, formerly owned by BET Networks. In 1999, Ice-T starred in the HBO movie Stealth Fighter as a United States Naval Aviator who fakes his own death, steals an F-117 stealth fighter, and threatens to destroy United States military bases. He also acted in the movie Sonic Impact, released the same year. Ice-T made an appearance on the comedy television series Chappelle's Show as himself, presenting the award for "Player Hater of the Year" at the "Player-Haters Ball", a parody of his own appearance at the Players Ball; he was dubbed the "Original Player Hater". Beyond Tough, a 2002 Discovery Channel documentary series about the world's most dangerous and intense professions, such as alligator wrestlers and Indy 500 pit crews, was hosted by Ice-T. In 2007, Ice-T appeared as a celebrity guest star on the MTV sketch comedy show Short Circuitz. Also in late 2007, he appeared in the short-music film Hands of Hatred, which can be found online. Ice-T won an open mic competition judged by Kurtis Blow. Professional career (1982–present) In 1982, Ice-T met producer Willie Strong from Saturn Records.
In 1983, Strong recorded Ice-T's first single, "Cold Wind Madness", also known as "The Coldest Rap", an electro hip-hop record that became an underground success, becoming popular even though radio stations did not play it due to the song's hardcore lyrics. That same year, Ice-T released "Body Rock", another electro hip-hop single that found popularity in clubs. In 1984, Ice-T released the single "Killers", the first of his political raps, and then was a featured rapper on "Reckless", a single by DJ Chris "The Glove" Taylor and (co-producer) David Storrs. This song was almost immediately followed up with a sequel entitled "Reckless Rivalry (Combat)", which was featured in the Breakin' sequel, Breakin' 2: Electric Boogaloo; however, it was never included on the soundtrack album and, to this day, has never been released. Ice later recorded the songs "Ya Don't Quit" and "Dog'n the Wax (Ya Don't Quit-Part II)" with Unknown DJ, who provided a Run–D.M.C.-like sound for the songs. Ice-T received further inspiration as an artist from Schoolly D's gangsta rap single "P.S.K. What Does It Mean?", which he heard in a club. Ice-T enjoyed the single's sound and delivery, as well as its vague references to gang life, although the real-life gang, Park Side Killers, was not named in the song. Ice-T decided to adopt Schoolly D's style, and wrote the lyrics to his first gangsta rap song, "6 in the Mornin'", in his Hollywood apartment, creating a minimal beat with a Roland TR-808. He compared the sound of the song, which was recorded as a B-side on the single "Dog'n The Wax", to that of the Beastie Boys. The single was released in 1986, and he learned that "6 in the Mornin'" was more popular in clubs than its A-side, leading Ice-T to rap about Los Angeles gang life, which he described more explicitly than any previous rapper.
He intentionally did not represent any particular gang, and wore a mixture of red and blue clothing and shoes to avoid antagonizing gang-affiliated listeners, who debated his true affiliation. Ice-T finally landed a deal with a major label, Sire Records. When label founder and president Seymour Stein heard his demo, he said Ice-T sounded like Bob Dylan. Shortly after, he released his debut album Rhyme Pays in 1987, supported by DJ Evil E, DJ Aladdin and producer Afrika Islam, who helped create the mainly party-oriented sound. The record wound up being certified gold by the Recording Industry Association of America.
In addition to specially designed furnaces, ancient iron production needed to develop complex procedures for the removal of impurities, the regulation of the admixture of carbon, and for hot-working to achieve a useful balance of hardness and strength in steel. The earliest tentative evidence for iron-making is a small number of iron fragments with the appropriate amounts of carbon admixture found in the Proto-Hittite layers at Kaman-Kalehöyük and dated to 2200–2000 BC. Akanuma (2008) concludes that "The combination of carbon dating, archaeological context, and archaeometallurgical examination indicates that it is likely that the use of ironware made of steel had already begun in the third millennium BC in Central Anatolia". Souckova-Siegolová (2001) shows that iron implements were made in Central Anatolia in very limited quantities around 1800 BC and were in general use by elites, though not by commoners, during the New Hittite Empire (∼1400–1200 BC). Similarly, recent archaeological remains of iron-working in the Ganges Valley in India have been tentatively dated to 1800 BC. Tewari (2003) concludes that "knowledge of iron smelting and manufacturing of iron artifacts was well known in the Eastern Vindhyas and iron had been in use in the Central Ganga Plain, at least from the early second millennium BC". By the Middle Bronze Age, increasing numbers of smelted iron objects (distinguishable from meteoric iron by the lack of nickel in the product) appeared in the Middle East, Southeast Asia and South Asia. African sites are turning up dates as early as 2000–1200 BC. Modern archaeological evidence identifies the start of large-scale iron production around 1200 BC, marking the end of the Bronze Age. Between 1200 BC and 1000 BC, diffusion in the understanding of iron metallurgy and the use of iron objects was fast and far-flung.
Anthony Snodgrass suggests that a shortage of tin, as a part of the Bronze Age Collapse and trade disruptions in the Mediterranean around 1300 BC, forced metalworkers to seek an alternative to bronze. As evidence, many bronze implements were recycled into weapons during that time. More widespread use of iron led to improved steel-making technology at a lower cost. Thus, even when tin became available again, iron was cheaper, stronger and lighter, and forged iron implements superseded cast bronze tools permanently. Ancient Near East The Iron Age in the Ancient Near East is believed to have begun with the discovery of iron smelting and smithing techniques in Anatolia or the Caucasus and Balkans in the late 2nd millennium BC (c. 1300 BC). The earliest bloomery smelting of iron is found at Tell Hammeh, Jordan, around 930 BC (14C dating). The Early Iron Age in the Caucasus area is conventionally divided into two periods: Early Iron I, dated to around 1100 BC, and the Early Iron II phase from the tenth to ninth centuries BC. Many of the material culture traditions of the Late Bronze Age continued into the Early Iron Age; thus there is sociocultural continuity through this transitional period. In Iran, actual iron artifacts remain unknown before the 9th century BC. The best-studied Iranian archaeological site from this period is Teppe Hasanlu. West Asia In the Mesopotamian states of Sumer, Akkad and Assyria, the initial use of iron reaches far back, to perhaps 3000 BC. One of the earliest smelted iron artifacts known was a dagger with an iron blade found in a Hattic tomb in Anatolia, dating from 2500 BC. Iron weapons, which replaced bronze weapons, were rapidly disseminated throughout the Near East (North Africa, southwest Asia) by the beginning of the 1st millennium BC. The development of iron smelting was once attributed to the Hittites of Anatolia during the Late Bronze Age.
As part of the Late Bronze Age–Early Iron Age transition, the Bronze Age collapse saw the slow, comparatively continuous spread of iron-working technology in the region. It was long held that the success of the Hittite Empire during the Late Bronze Age had been based on the advantages entailed by a "monopoly" on ironworking at the time. Accordingly, the invading Sea Peoples would have been responsible for spreading the knowledge through that region. The view of such a "Hittite monopoly" has come under scrutiny and no longer represents a scholarly consensus. While there are some iron objects from Bronze Age Anatolia, the number is comparable to iron objects found in Egypt and other places of the same time period, and only a small number of these objects are weapons. Egypt The Iron Age in Egyptian archaeology essentially corresponds to the Third Intermediate Period of Egypt. Iron metal is singularly scarce in collections of Egyptian antiquities. Bronze remained the primary material there until the conquest by the Neo-Assyrian Empire in 671 BC. The explanation would seem to be that the relics are in most cases the paraphernalia of tombs, the funeral vessels and vases, and iron, being considered an impure metal by the ancient Egyptians, was never used in the manufacture of these or for any religious purposes. It was attributed to Seth, the spirit of evil who according to Egyptian tradition governed the central deserts of Africa. In the Black Pyramid of Abusir, dating before 2000 BC, Gaston Maspero found some pieces of iron. In the funeral text of Pepi I, the metal is mentioned. A sword bearing the name of pharaoh Merneptah as well as a battle axe with an iron blade and gold-decorated bronze shaft were both found in the excavation of Ugarit.
A dagger with an iron blade found in Tutankhamun's tomb, 13th century BC, was recently examined and found to be of meteoric origin. Europe In Europe, the Iron Age is the last stage of prehistoric Europe and the first of the protohistoric periods, which initially means descriptions of a particular area by Greek and Roman writers. For much of Europe, the period came to an abrupt local end after conquest by the Romans, though ironworking remained the dominant technology until recent times. Elsewhere it may last until the early centuries AD, and either Christianization or a new conquest in the Migration Period. Iron working was introduced to Europe in the late 11th century BC, probably from the Caucasus, and slowly spread northwards and westwards over the succeeding 500 years. The Iron Age did not start when iron first appeared in Europe, but when iron began to replace bronze in the preparation of tools and weapons. This did not happen at the same time all around Europe; local cultural developments played a role in the transition to the Iron Age. For example, the Iron Age of Prehistoric Ireland begins around 500 BC (when the Greek Iron Age had already ended) and finishes around AD 400. The technology of iron came into widespread use in Europe at roughly the same time as in Asia. The prehistoric Iron Age in Central Europe is divided into two periods: the Hallstatt culture (early Iron Age) and the La Tène culture (late Iron Age). The material cultures of Hallstatt and La Tène each comprise four phases (A, B, C and D). The Iron Age in Europe is characterized by an elaboration of designs in weapons, implements, and utensils. These are no longer cast but hammered into shape, and decoration is elaborate and curvilinear rather than simple rectilinear; the forms and character of the ornamentation of the northern European weapons resemble in some respects Roman arms, while in other respects they are peculiar and evidently representative of northern art.
Citânia de Briteiros, located in Guimarães, Portugal, is one example of an Iron Age archaeological site. This fortified settlement served as a Celtiberian stronghold against Roman invasions and dates back more than 2,500 years. The site was researched by Francisco Martins Sarmento starting in 1874. A number of amphorae (containers usually for wine or olive oil), coins, fragments of pottery, weapons and pieces of jewelry, as well as the ruins of a bath, have been revealed here. Asia Central Asia The Iron Age in Central Asia began when iron objects appeared among the Indo-European Saka in present-day Xinjiang (China) between the 10th century BC and the 7th century BC, such as those found at the cemetery site of Chawuhukou. The Pazyryk culture is an Iron Age archaeological culture (c. 6th to 3rd centuries BC) identified by excavated artifacts and mummified humans found in the Siberian permafrost in the Altay Mountains. East Asia In China, Chinese bronze inscriptions are found around 1200 BC, preceding the development of iron metallurgy, which was known by the 9th century BC. By the start of iron use, prehistory in China had already given way to history periodized by ruling dynasties, so "Iron Age" is not typically used to describe a period in Chinese history. Iron metallurgy reached the Yangtse Valley toward the end of the 6th century BC. A few objects were found at Changsha and Nanjing. The mortuary evidence suggests that the initial use of iron in Lingnan belongs to the mid-to-late Warring States period (from about 350 BC). Important non-precious husi-style metal finds include iron tools found at the 4th-century BC tomb at Guwei-cun.
The techniques used in Lingnan are a combination of bivalve moulds of distinct southern tradition and the incorporation of piece-mould technology from the Zhongyuan. The products of combining these two traditions include bells, vessels, weapons and ornaments, along with sophisticated casting. An Iron Age culture of the Tibetan Plateau has tentatively been associated with the Zhang Zhung culture described in early Tibetan writings. Iron objects were introduced to the Korean peninsula through trade with chiefdoms and state-level societies in the Yellow Sea area in the 4th century BC, just at the end of the Warring States Period but before the Western Han Dynasty began. Yoon proposes that iron was first introduced to chiefdoms located along North Korean river valleys that flow into the Yellow Sea, such as the Cheongcheon and Taedong Rivers. Iron production quickly followed in the 2nd century BC, and iron implements came to be used by farmers by the 1st century in southern Korea. The earliest known cast-iron axes in southern Korea are found in the Geum River basin. Iron production began at the same time that the complex chiefdoms of Proto-historic Korea emerged; these complex chiefdoms were the precursors of early states such as Silla, Baekje, Goguryeo, and Gaya. Iron ingots were an important mortuary item and indicated the wealth or prestige of the deceased in this period. In Japan, iron items, such as tools, weapons, and decorative objects, are postulated to have entered Japan during the late Yayoi period (c. 300 BC–AD 300) or the succeeding Kofun period. The concept of an Iron Age recalls the mythological "Ages of Man" of Hesiod. As an archaeological era, it was first introduced for Scandinavia by Christian Jürgensen Thomsen in the 1830s. By the 1860s, it was embraced as a useful division of the "earliest history of mankind" in general and began to be applied in Assyriology.
The now-conventional periodization in the archaeology of the Ancient Near East was developed in the 1920s and 1930s. As its name suggests, Iron Age technology is characterized by the production of tools and weaponry by ferrous metallurgy (ironworking), more specifically from carbon steel. Chronology Increasingly, the Iron Age in Europe is being seen as a part of the Bronze Age collapse in the ancient Near East, in ancient India (with the post-Rigvedic Vedic civilization), ancient Iran, and ancient Greece (with the Greek Dark Ages). In other regions of Europe, the Iron Age began in the 8th century BC in Central Europe and the 6th century BC in Northern Europe. The Near Eastern Iron Age is divided into two subsections, Iron I and Iron II. Iron I (1200–1000 BC) illustrates both continuity and discontinuity with the previous Late Bronze Age. There is no definitive cultural break between the 13th and 12th centuries BC throughout the entire region, although certain new features in the hill country, Transjordan and coastal region may suggest the appearance of the Aramaean and Sea People groups. There is evidence, however, of strong continuity with Bronze Age culture, although as one moves later into the Iron Age the culture begins to diverge more significantly from that of the late 2nd millennium. The Iron Age as an archaeological period is roughly defined as that part of the prehistory of a culture or region during which ferrous metallurgy was the dominant technology of metalworking. The characteristic of an Iron Age culture is the mass production of tools and weapons made from steel, typically alloys with a carbon content between approximately 0.30% and 1.2% by weight. Only with the capability of the production of carbon steel does ferrous metallurgy result in tools or weapons that are equal or superior to bronze. The use of steel has been based as much on economics as on metallurgical advancements. Early steel was made by smelting iron.
By convention, the Iron Age in the Ancient Near East is taken to last from c. 1200 BC (the Bronze Age collapse) to c. 550 BC (or 539 BC), roughly the beginning of historiography with Herodotus and the end of the proto-historical period. In Central and Western Europe, the Iron Age is taken to last from c. 800 BC to c. 1 BC, in Northern Europe from c. 500 BC to AD 800. In China, there is no recognizable prehistoric period characterized by ironworking, as Bronze Age China transitions almost directly into the Qin dynasty of imperial China; "Iron Age" in the context of China is sometimes used for the transitional period of c. 900 BC to 100 BC during which ferrous metallurgy was present even if not dominant. Early ferrous metallurgy The earliest-known iron artifacts are nine small beads dated to 3200 BC, which were found in burials at Gerzeh, Lower Egypt. They have been identified as meteoric iron shaped by careful hammering. Meteoric iron, a characteristic iron–nickel alloy, was used by various ancient peoples thousands of years before the Iron Age. Such iron, being in its native metallic state, required no smelting of ores. Smelted iron appears sporadically in the archeological record from the middle Bronze Age. Whilst terrestrial iron is naturally abundant, its high melting point of about 1,538 °C placed it out of reach of common use until the end of the second millennium BC. Tin's low melting point of about 232 °C and copper's relatively moderate melting point of about 1,085 °C placed them within the capabilities of the Neolithic pottery kilns, which date back to 6000 BC and were able to produce temperatures greater than 900 °C.
The earliest tentative evidence for iron-making is a small number of iron fragments with the appropriate amounts of carbon admixture found in the Proto-Hittite layers at Kaman-Kalehöyük and dated to 2200–2000 BC. Akanuma (2008) concludes that "The combination of carbon dating, archaeological context, and archaeometallurgical examination indicates that it is likely that the use of ironware made of steel had already begun in the third millennium BC in Central Anatolia". Souckova-Siegolová (2001) shows that iron implements were made in Central Anatolia in very limited quantities around 1800 BC and were in general use by elites, though not by commoners, during the New Hittite Empire (∼1400–1200 BC). Similarly, recent archaeological remains of iron-working in the Ganges Valley in India have been tentatively dated to 1800 BC. Tewari (2003) concludes that "knowledge of iron smelting and manufacturing of iron artifacts was well known in the Eastern Vindhyas and iron had been in use in the Central Ganga Plain, at least from the early second millennium BC". By the Middle Bronze Age increasing numbers of smelted iron objects (distinguishable from meteoric iron by the lack of nickel in the product) appeared in the Middle East, Southeast Asia and South Asia. African sites are turning up dates as early as 2000-1200 BC. Modern archaeological evidence identifies the start of large-scale iron production in around 1200 BC, marking the end of the Bronze Age. Between 1200 BC and 1000 BC diffusion in the understanding of iron metallurgy and the use of iron objects was fast and far-flung. Anthony Snodgrass suggests that a shortage of tin, as a part of the Bronze Age Collapse and trade disruptions in the Mediterranean around 1300 BC, forced metalworkers to seek an alternative to bronze. As evidence, many bronze implements were recycled into weapons during that time. More widespread use of iron led to improved steel-making technology at a lower cost. 
Thus, even when tin became available again, iron was cheaper, stronger and lighter, and forged iron implements superseded cast bronze tools permanently. Ancient Near East The Iron Age in the Ancient Near East is believed to have begun with the discovery of iron smelting and smithing techniques in Anatolia or the Caucasus and Balkans in the late 2nd millennium BC (c. 1300 BC). The earliest bloomery smelting of iron is found at Tell Hammeh, Jordan around 930 BC (14C dating). The Early Iron Age in the Caucasus area is conventionally divided into two periods, Early Iron I, dated to around 1100 BC, and the Early Iron II phase from the tenth to ninth centuries BC. Many of the material culture traditions of the Late Bronze Age continued into the Early Iron Age. Thus, there's a
away from eris by disconnecting from any subset of the IRC network as soon as they saw eris there. For a few days, the entire IRC network suffered frequent netsplits, but eventually the majority of servers added the Q-line and effectively created a new separate IRC net called EFnet (Eris-Free Network); the remaining servers which stayed connected to eris (and thus were no longer able to connect to EFnet servers) were called A-net (Anarchy Network). A-net soon vanished, leaving EFnet as the only IRC network. Continuing problems with performance and abuse eventually led to the rise of another major IRC network, Undernet, which split off in October 1992. In July 1996, disagreement on policy caused EFnet to break in two: the slightly larger European half (including Australia and Japan) formed IRCnet, while the American servers continued as EFnet. This was known as The Great Split. In July 2001, after a string of DDoS attacks, a service called CHANFIX (originally JUPES) was created, designed to give ops back to channels which have lost their ops or been taken over. In 2007, various EFnet servers began implementing SSL. February 2009 saw the introduction of a new CHANFIX module called OPME, a mechanism for EFnet admins to restore ops in an opless channel. It was proposed by Douglas Boldt to provide a much cleaner alternative to masskill, which was unnecessarily invasive and disruptive to the network. Later in 2009, some major IRC servers were delinked: irc.vel.net, irc.dks.ca, irc.pte.hu, EFnet's only UK server efnet.demon.co.uk, and EFnet's only UK hub hub.uk, which were sponsored by Demon Internet. In September 2010, the two western regions of the network (United States and Canada) merged into the North American region; the North American and European regions remain technically independent. Eventually, eris.Berkeley.EDU was once again a valid IRC server on the "Eris Free" IRC network and accepted clients; at the same time, efnet.org began redirecting to erisnet.org.
eris.Berkeley.EDU delinked on 2 April 2018 at 19:50 UTC. Characteristics EFnet has large variations in rules and policy between different servers as well as the two major regions (EU and NA). Both have their own policy structure, and each region votes on its own server applications. However, central policies are voted upon by the server admin community, and the results are archived for reference. Due to EFnet's nature, it has gained recognition over the years for warez, hackers, and DoS attacks. EFnet has always been known for its lack of the IRC services that other IRC networks support (such as NickServ and ChanServ, although it had a NickServ until April 8, 1994). Instead, the CHANFIX service was introduced to fix "opless" channels. All servers on EFnet run ircd-ratbox. EFnet's channel operators are generally free to run their channels however they see fit without the intervention of IRC operators. IRC ops are primarily there to handle network and server related issues, and rarely get involved with channel-level issues.
wracked by an ongoing series of flame wars. Again in 2001, it was threatened by automated heavy spamming of its users for potential commercial gain. Undernet survived these periods relatively intact and its popularity continues to the present day. It is notable as being the first network to utilize timestamping, originally made by Carlo Wood, in the IRC server protocol as a means to curb abuse. Services Undernet uses GNUworld to provide X, its channel service bot. X operates on a username basis; a username is independent from a nickname, which cannot be registered on Undernet. As Undernet limits channel registration to "established channels" or channels with an active userbase, Undernet introduced a version of ChanFix (under the nickname C) designed to work like EFnet's CHANFIX. Its use is to protect unregistered channels. ChanFix tracks channel op usage on a username basis and restores ops if channels become opless or are taken over. Undernet also runs an open proxy scanner. This scans users currently connecting to the network for open WinGate, SOCKS version 4/5, and HTTP proxy servers. IP addresses hosting open proxy servers are automatically G-lined from the network. These changes were put in place after the 2001 denial-of-service attacks almost destroyed the network and left Undernet without the registered channel service bot for months. In 2010, Undernet also started to G-line Tor exit nodes, instead of assigning those users a cloak as, for example, QuakeNet does.
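ChanFix's bookkeeping, as described above, can be illustrated with a small sketch. This is a toy model only, not Undernet's actual GNUworld code; the class and data structures are invented for illustration. The idea is that the service periodically samples which accounts hold ops in each channel, and when a channel turns up opless it re-ops the historically highest-scoring accounts.

```python
from collections import defaultdict

class ChanFixSketch:
    """Toy model of ChanFix-style op tracking (illustrative only).

    Scores are counted per (channel, username) each sampling round.
    Usernames are account names, not nicknames, so scores survive
    nick changes and reconnects.
    """
    def __init__(self):
        self.scores = defaultdict(int)  # (channel, username) -> samples seen opped

    def sample(self, channel, opped_usernames):
        # Called periodically with the accounts currently holding +o.
        for user in opped_usernames:
            self.scores[(channel, user)] += 1

    def fix(self, channel, top_n=3):
        # Channel has gone opless (or been taken over): return the
        # historically most-opped accounts, which the service would re-op.
        ranked = sorted(
            (user for (chan, user) in self.scores if chan == channel),
            key=lambda u: self.scores[(channel, u)],
            reverse=True,
        )
        return ranked[:top_n]

cf = ChanFixSketch()
for _ in range(10):
    cf.sample("#demo", {"alice", "bob"})   # long-standing ops
cf.sample("#demo", {"mallory"})            # briefly opped during a takeover
# alice and bob outrank mallory when ops need restoring
```

Scoring by account rather than by nickname is what makes this robust against the nick-collision takeovers described elsewhere in this article: stealing a nickname does not transfer the account's accumulated score.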
DDoS issues was the fact that the owner of twisted.dal.net (the world's largest single IRC server, hosting more than 50,000 clients most of the time) delinked his servers (for personal reasons). The other servers on the network could not absorb the extra client load, leading to users' complete inability to connect to DALnet. The network was first crushed by attacks, and then by its own user base. It was around this time that DALnet closed many of their channels that were dedicated to serving content such as MP3 files and movies. File transfers were still allowed but not on a large scale. This raised suspicion as to whether DALnet was being targeted by the RIAA, although this was not the case; it was a precautionary measure. In 2003, DALnet put up their first anycast servers under the name "The IX Concept", and made irc.dal.net resolve to the anycast IP. Since then, most new client servers linked are anycast. Characteristics The main characteristic of DALnet is its ChanServ service, which was invented on DALnet in 1995. Along with NickServ, it gave a solid ground for usability and security on IRC, where users got the ability to register their nicknames and their channels. DALnet is also developing and running its own ircd software called Bahamut, which is based on ircd-hybrid and Dreamforge and was first live in the early 2000s. The name Bahamut comes from a silver-white dragon with blue eyes standing for
is written in C and is a TUI application utilizing ncurses. GTK+ toolkit support has been dropped. It works on all Unix-like operating systems, and is distributed under a BSD license. It was originally based on ircII-EPIC and eventually it was merged into the EPIC IRC client. It supports IPv6, multiple servers, SSL, and a subset of UTF-8 (characters contained in ISO-8859-1) with an unofficial patch. BitchX has frequently been noted to be a popular IRC client for Unix-like systems. The latest official release is version 1.2. BitchX does not yet support Unicode. Security It was known that early versions of BitchX were vulnerable to a denial-of-service attack in that they could be caused to crash by passing specially-crafted strings as arguments to certain IRC commands. This was before format string attacks became a well-known class of vulnerability. The previous version of BitchX, released in 2004, has security problems allowing remote IRC servers to execute arbitrary code on the client's machine (CVE-2007-3360, CVE-2007-4584). On April 26, 2009, Slackware
ten most popular Internet applications. History mIRC was created by Khaled Mardam-Bey, a British programmer of Palestinian and Syrian origin. He began developing the software in late 1994, and released its first version on 28 February 1995. Mardam-Bey states that he decided to create mIRC because he felt the first IRC clients for Windows lacked some basic IRC features. He then continued developing it due to the challenge and the fact that people appreciated his work. The author states that its subsequent popularity allowed him to make a living out of mIRC. mIRC is shareware and requires payment for registration after the 30-day evaluation period. The developer states that version 5.91 is the final one to support 16-bit Windows; 6.35 is the last to support Windows 95, NT 4.0, 98, and ME. The current version supports Windows XP and later. Main features mIRC has a number of distinguishing features. One is its scripting language, which is further developed with each version. The scripting language can be used to make minor changes to the program like custom commands (aliases), but can also be used to completely alter the behavior and appearance of mIRC. Another claimed feature is mIRC's file sharing abilities, via the DCC protocol, featuring a built-in file server. Starting with mIRC 7.1, released on 30 July 2010, Unicode and IPv6 are supported. mIRC scripting mIRC's abilities and behaviors can be altered and extended using the embedded mIRC scripting language. mIRC includes its own GUI scripting editor, with help that has been described as "extremely detailed". mIRC scripting is not limited to IRC related events and commands. It is Turing complete. There is support for COM objects, calling DLLs, sockets, canvas drawing, input device reading, regular expressions, and dialog boxes, among other things. This allows the client to be used in a variety of ways beyond chatting, for example as an IRC bot, a media player, a web HTML parser, or for other entertainment purposes such as mIRC games. Due to the level of access the language has to a user's computer — for example, being able to rename and delete files — a number of abusive scripts have
client that forked from XChat. It has a choice of a tabbed document interface or tree interface, support for multiple servers, and numerous configuration options. Both command-line and graphical versions were available. The client runs on Unix-like operating systems, and many Linux distributions include packages in their repositories. History The XChat-WDK (XChat Windows Driver Kit) project started in 2010 and was originally Windows-only. The project's focus gradually shifted from just fixing Windows bugs to adding new features, and it started to make sense to support more platforms than Windows. On July 6, 2012, XChat-WDK officially changed its name to HexChat.
channel may be left without users, allowing the first rejoining user to recreate the channel and gain operator status. When the servers merge, any pre-existing operators retain their status, allowing the new user to kick out the original operators and take over the channel. A simple prevention mechanism involves timestamping (abbreviated to TS), or checking the creation dates of the channels being merged. This was first implemented by Undernet (ircu) and is now common in many IRC servers. If both channels were created at the same time, all user statuses are retained when the two are combined; if one is newer than the other, special statuses are removed from those in the newer channel. Additionally, a newer timestamping-based protection is used when a server splits away from the main network (when it no longer detects that IRC services are available): it disallows anyone who creates a channel from being given operator privileges. Nick collision Another popular form of channel takeover abuses nickname collision protection, which keeps two users from having the same nickname at once. A user on one side of a netsplit takes the nickname of a target on the other side of the split; when the servers reconnect, the nicks collide and both users are kicked from the server. The attacker then reconnects or
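The timestamp comparison rule described above can be sketched as follows. This is a minimal illustration of the TS idea, not code from ircu or any other actual ircd; the function name and dict shapes are invented for the example.

```python
def merge_channels(chan_a, chan_b):
    """Merge two sides of a netsplit for the same channel name.

    Each side is a dict with a creation timestamp ('ts') and the set
    of nicks holding operator status ('ops'). Per the TS rule, the
    side with the older (smaller) timestamp keeps its ops and the
    newer side loses them; equal timestamps keep ops from both sides.
    """
    if chan_a["ts"] == chan_b["ts"]:
        ops = chan_a["ops"] | chan_b["ops"]   # same age: all statuses retained
    elif chan_a["ts"] < chan_b["ts"]:
        ops = chan_a["ops"]                   # a is older: b's ops are stripped
    else:
        ops = chan_b["ops"]
    return {"ts": min(chan_a["ts"], chan_b["ts"]), "ops": ops}

# The takeover scenario from the text: the attacker recreated the
# channel during the split, so their side carries a newer timestamp.
original = {"ts": 1000, "ops": {"alice"}}
attacker = {"ts": 2000, "ops": {"mallory"}}
merged = merge_channels(original, attacker)
# mallory's op status is removed on merge; alice keeps hers
```

The key design point is that the channel's creation time, not the order in which servers happen to reconnect, decides whose operator statuses survive, so recreating a channel during a split confers no lasting advantage.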
is an IRC client program for Linux, FreeBSD, macOS and Microsoft Windows. It was originally written by Timo Sirainen, and released under the terms of the GNU GPL-2.0-or-later in January 1999. Features Irssi is written in the C programming language and in normal operation uses a text-mode user interface. According to the developers, Irssi was written from scratch, not based on ircII (like BitchX and epic). This freed the developers from having to deal with the constraints of an existing codebase, allowing them to maintain tighter control over aspects such as security and customization. Numerous Perl scripts have been made available for Irssi to customise how it looks and operates. Plugins are available which add encryption and protocols such as ICQ and XMPP. Irssi may be configured by using its user interface or by manually editing its configuration files, which use a syntax resembling Perl data structures. Distributions Irssi was written primarily to run on Unix-like operating systems, and binaries and packages are available for Gentoo Linux, Debian, Slackware, SUSE (openSUSE), Frugalware, Fedora, FreeBSD, OpenBSD, NetBSD, DragonFly BSD, Solaris, Arch Linux, Ubuntu, NixOS, and others. Irssi builds and runs on Microsoft Windows under Cygwin,
be addressed by civil litigation and, in several jurisdictions, under criminal law. Trade secret misappropriation Trade secret misappropriation is different from violations of other intellectual property laws, since by definition trade secrets are secret, while patents and registered copyrights and trademarks are publicly available. In the United States, trade secrets are protected under state law, and states have nearly universally adopted the Uniform Trade Secrets Act. The United States also has federal law in the form of the Economic Espionage Act of 1996, which makes the theft or misappropriation of a trade secret a federal crime. This law contains two provisions criminalizing two sorts of activity: the first criminalizes the theft of trade secrets to benefit foreign powers; the second criminalizes their theft for commercial or economic purposes. (The statutory penalties are different for the two offenses.) In Commonwealth common law jurisdictions, confidentiality and trade secrets are regarded as an equitable right rather than a property right, but penalties for theft are roughly the same as in the United States. Criticisms The term "intellectual property" Criticism of the term intellectual property ranges from discussing its vagueness and abstract overreach to direct contention over the semantic validity of using words like property and rights in fashions that contradict practice and law. Many detractors think this term specially serves the doctrinal agenda of parties opposing reform in the public interest or otherwise abusing related legislation, and that it disallows intelligent discussion about specific and often unrelated aspects of copyright, patents, trademarks, etc.
Free Software Foundation founder Richard Stallman argues that, although the term intellectual property is in wide use, it should be rejected altogether, because it "systematically distorts and confuses these issues, and its use was and is promoted by those who gain from this confusion". He claims that the term "operates as a catch-all to lump together disparate laws [which] originated separately, evolved differently, cover different activities, have different rules, and raise different public policy issues" and that it creates a "bias" by confusing these monopolies with ownership of limited physical things, likening them to "property rights". Stallman advocates referring to copyrights, patents and trademarks in the singular and warns against abstracting disparate laws into a collective term. He argues that "to avoid spreading unnecessary bias and confusion, it is best to adopt a firm policy not to speak or even think in terms of 'intellectual property'." Similarly, economists Boldrin and Levine prefer to use the term "intellectual monopoly" as a more appropriate and clear definition of the concept, which, they argue, is very dissimilar from property rights. They further argued that "stronger patents do little or nothing to encourage innovation", mainly explained by their tendency to create market monopolies, thereby restricting further innovations and technology transfer. On the assumption that intellectual property rights are actual rights, Stallman says that this claim does not live up to the historical intentions behind these laws, which in the case of copyright served as a censorship system, and later on, a regulatory model for the printing press that may have benefited authors incidentally, but never interfered with the freedom of average readers.
Still referring to copyright, he cites legal literature such as the United States Constitution and case law to demonstrate that the law is meant to be an optional and experimental bargain to temporarily trade property rights and free speech for public, not private, benefits in the form of increased artistic production and knowledge. He mentions that "if copyright were a natural right nothing could justify terminating this right after a certain period of time". Law professor, writer and political activist Lawrence Lessig, along with many other copyleft and free software activists, has criticized the implied analogy with physical property (like land or an automobile). They argue such an analogy fails because physical property is generally rivalrous while intellectual works are non-rivalrous (that is, if one makes a copy of a work, the enjoyment of the copy does not prevent enjoyment of the original). Other arguments along these lines claim that unlike the situation with tangible property, there is no natural scarcity of a particular idea or information: once it exists at all, it can be re-used and duplicated indefinitely without such re-use diminishing the original. Stephan Kinsella has objected to intellectual property on the grounds that the word "property" implies scarcity, which may not be applicable to ideas. Entrepreneur and politician Rickard Falkvinge and hacker Alexandre Oliva have independently compared George Orwell's fictional dialect Newspeak to the terminology used by intellectual property supporters as a linguistic weapon to shape public opinion regarding copyright debate and DRM. Alternative terms In civil law jurisdictions, intellectual property has often been referred to as intellectual rights, traditionally a somewhat broader concept that has included moral rights and other personal protections that cannot be bought or sold. 
Use of the term intellectual rights has declined since the early 1980s, as use of the term intellectual property has increased. Alternative terms monopolies on information and intellectual monopoly have emerged among those who argue against the "property" or "intellect" or "rights" assumptions, notably Richard Stallman. The backronyms intellectual protectionism and intellectual poverty, whose initials are also IP, have found supporters as well, especially among those who have used the backronym digital restrictions management. The argument that an intellectual property right should (in the interests of better balancing of relevant private and public interests) be termed an intellectual monopoly privilege (IMP) has been advanced by several academics including Birgitte Andersen and Thomas Alured Faunce. Objections to overly broad intellectual property laws Some critics of intellectual property, such as those in the free culture movement, point at intellectual monopolies as harming health (in the case of pharmaceutical patents), preventing progress, and benefiting concentrated interests to the detriment of the masses, and argue that the public interest is harmed by ever-expansive monopolies in the form of copyright extensions, software patents, and business method patents. More recently, scientists and engineers have expressed concern that patent thickets are undermining technological development even in high-tech fields like nanotechnology. Petra Moser has asserted that historical analysis suggests that intellectual property laws may harm innovation: Overall, the weight of the existing historical evidence suggests that patent policies, which grant strong intellectual property rights to early generations of inventors, may discourage innovation. On the contrary, policies that encourage the diffusion of ideas and modify patent laws to facilitate entry and encourage competition may be an effective mechanism to encourage innovation.
In support of that argument, Jörg Baten, Nicola Bianchi and Petra Moser find historical evidence that especially compulsory licensing – which allows governments to license patents without the consent of patent-owners – encouraged invention in Germany in the early 20th century by increasing the threat of competition in fields with low pre-existing levels of competition. Peter Drahos notes, "Property rights confer authority over resources. When authority is granted to the few over resources on which the many depend, the few gain power over the goals of the many. This has consequences for both political and economic freedom within a society." The World Intellectual Property Organization (WIPO) recognizes that conflicts may exist between the respect for and implementation of current intellectual property systems and other human rights. In 2001 the UN Committee on Economic, Social and Cultural Rights issued a document called "Human rights and intellectual property" that argued that intellectual property tends to be governed by economic goals when it should be viewed primarily as a social product; in order to serve human well-being, intellectual property systems must respect and conform to human rights laws. According to the Committee, when systems fail to do so, they risk infringing upon the human right to food and health, and to cultural participation and scientific benefits. In 2004 the General Assembly of WIPO adopted The Geneva Declaration on the Future of the World Intellectual Property Organization which argues that WIPO should "focus more on the needs of developing countries, and to view IP as one of many tools for development—not as an end in itself". Ethical problems are most pertinent when socially valuable goods like life-saving medicines are given IP protection. 
While the application of IP rights can allow companies to charge higher than the marginal cost of production in order to recoup the costs of research and development, the price may exclude from the market anyone who cannot afford the cost of the product, in this case a life-saving drug. "An IPR driven regime is therefore not a regime that is conductive to the investment of R&D of products that are socially valuable to predominately poor populations". Libertarians have differing views on intellectual property. Stephan Kinsella, an anarcho-capitalist on the right-wing of libertarianism, argues against intellectual property because allowing property rights in ideas and information creates artificial scarcity and infringes on the right to own tangible property. Kinsella uses the following scenario to argue this point: [I]magine the time when men lived in caves. One bright guy—let's call him Galt-Magnon—decides to build a log cabin on an open field, near his crops. To be sure, this is a good idea, and others notice it. They naturally imitate Galt-Magnon, and they start building their own cabins. But the first man to invent a house, according to IP advocates, would have a right to prevent others from building houses on their own land, with their own logs, or to charge them a fee if they do build houses. It is plain that the innovator in these examples becomes a partial owner of the tangible property (e.g., land and logs) of others, due not to first occupation and use of that property (for it is already owned), but due to his coming up with an idea. Clearly, this rule flies in the face of the first-user homesteading rule, arbitrarily and groundlessly overriding the very homesteading rule that is at the foundation of all property rights. 
Thomas Jefferson once said in a letter to Isaac McPherson on 13 August 1813: If nature has made any one thing less susceptible than all others of exclusive property, it is the action of the thinking power called an idea, which an individual may exclusively possess as long as he keeps it to himself; but the moment it is divulged, it forces itself into the possession of every one, and the receiver cannot dispossess himself of it. Its peculiar character, too, is that no one possesses the less, because every other possesses the whole of it. He who receives an idea from me, receives instruction himself without lessening mine; as he who lights his taper at mine, receives light without darkening me. In 2005 the Royal Society of Arts launched the Adelphi Charter, aimed at creating an international policy statement
all industries and globally". Economists estimate that two-thirds of the value of large businesses in the United States can be traced to intangible assets. "IP-intensive industries" are estimated to generate 72% more value added (price minus material cost) per employee than "non-IP-intensive industries". A joint research project of the WIPO and the United Nations University measuring the impact of IP systems on six Asian countries found "a positive correlation between the strengthening of the IP system and subsequent economic growth." Morality According to Article 27 of the Universal Declaration of Human Rights, "everyone has the right to the protection of the moral and material interests resulting from any scientific, literary or artistic production of which he is the author". Although the relationship between intellectual property and human rights is a complex one, there are moral arguments for intellectual property. The arguments that justify intellectual property fall into three major categories. Personality theorists believe intellectual property is an extension of an individual.
Utilitarians believe that intellectual property stimulates social progress and pushes people to further innovation. Lockeans argue that intellectual property is justified based on deservedness and hard work. Various moral justifications for private property can be used to argue in favor of the morality of intellectual property, such as: Natural Rights/Justice Argument: this argument is based on Locke's idea that a person has a natural right over the labour and products which are produced by their body. Appropriating these products is viewed as unjust. Although Locke had never explicitly stated that natural right applied to products of the mind, it is possible to apply his argument to intellectual property rights, in which it would be unjust for people to misuse another's ideas. Locke's argument for intellectual property is based upon the idea that laborers have the right to control that which they create. They argue that because we own our bodies, which are the laborers, this right of ownership extends to what we create. Thus, intellectual property ensures this right when it comes to production. Utilitarian-Pragmatic Argument: according to this rationale, a society that protects private property is more effective and prosperous than societies that do not. Innovation and invention in 19th century America has been attributed to the development of the patent system. By providing innovators with "durable and tangible return on their investment of time, labor, and other resources", intellectual property rights seek to maximize social utility. The presumption is that they promote public welfare by encouraging the "creation, production, and distribution of intellectual works". Utilitarians argue that without intellectual property there would be a lack of incentive to produce new ideas. Systems of protection such as intellectual property optimize social utility.
"Personality" Argument: this argument is based on a quote from Hegel: "Every man has the right to turn his will upon a thing or make the thing an object of his will, that is to say, to set aside the mere thing and recreate it as his own". European intellectual property law is shaped by this notion that ideas are an "extension of oneself and of one's personality". Personality theorists argue that by being a creator of something one is inherently at risk of and vulnerable to having one's ideas and designs stolen and/or altered. Intellectual property protects these moral claims that have to do with personality. Lysander Spooner (1855) argues "that a man has a natural and absolute right—and if a natural and absolute, then necessarily a perpetual, right—of property, in the ideas, of which he is the discoverer or creator; that his right of property, in ideas, is intrinsically the same as, and stands on identically the same grounds with, his right of property in material things; that no distinction, of principle, exists between the two cases". Writer Ayn Rand argued in her book Capitalism: The Unknown Ideal that the protection of intellectual property is essentially a moral issue. The belief is that the human mind itself is the source of wealth and survival and that all property at its base is intellectual property. To violate intellectual property is therefore no different morally than violating other property rights, which compromises the very processes of survival and therefore constitutes an immoral act. Infringement, misappropriation, and enforcement Violation of intellectual property rights, called "infringement" with respect to patents, copyright, and trademarks, and "misappropriation" with respect to trade secrets, may be a breach of civil law or criminal law, depending on the type of intellectual property involved, jurisdiction, and the nature of the action.
As of 2011 trade in counterfeit copyrighted and trademarked works was a $600 billion industry worldwide and accounted for 5–7% of global trade. Patent infringement Patent infringement typically is caused by using or selling a patented invention without permission from the patent holder. The scope of the patented invention or the extent of protection is defined in the claims of the granted patent. There is safe harbor in many jurisdictions to use a patented invention for research. This safe harbor does not exist in the US unless the research is done for purely philosophical purposes, or in order to gather data in order to prepare an application for regulatory approval of a drug. In general, patent infringement cases are handled under civil law (e.g., in the United States), but several jurisdictions incorporate infringement in criminal law as well (for example, Argentina, China, France, Japan, Russia, South Korea). Copyright infringement Copyright infringement is reproducing, distributing, displaying or performing a work, or making derivative works, without permission from the copyright holder, which is typically a publisher or other business representing or assigned by the work's creator. It is often called "piracy". While copyright is created the instant a work is fixed, generally the copyright holder can only get money damages if the copyright is registered. Enforcement of copyright is generally the responsibility of the copyright holder. The ACTA trade agreement, signed in May 2011 by the United States, Japan, Switzerland, and the EU, and which has not entered into force, requires that its parties add criminal penalties, including incarceration and fines, for copyright and trademark infringement, and obligates the parties to actively police for infringement. There are limitations and exceptions to copyright, allowing limited use of copyrighted works which does not constitute infringement. Examples of such doctrines are the fair use and fair dealing doctrines.
Trademark infringement Trademark infringement occurs when one party uses a trademark that is identical or confusingly similar to a trademark owned by another party, in relation to products or services which are identical or similar to the products or services of the other party. In many countries, a trademark receives protection without registration, but registering a trademark provides legal advantages for enforcement. Infringement can be addressed by civil litigation and, in several jurisdictions, under criminal law. Trade secret misappropriation Trade secret misappropriation is different from violations of other intellectual property laws, since by definition trade secrets are secret, while patents and registered copyrights and trademarks are publicly available. In the United States, trade secrets are protected under state law, and states have nearly universally adopted the Uniform Trade Secrets Act. The United States also has federal law in the form of the Economic Espionage Act of 1996, which makes the theft or misappropriation of a trade secret a federal crime. This law contains two provisions criminalizing two sorts of activity: the first criminalizes the theft of trade secrets to benefit foreign powers; the second criminalizes their theft for commercial or economic purposes. (The statutory penalties are different for the two offenses.) In Commonwealth common law jurisdictions, confidentiality and trade secrets are regarded as an equitable right rather than a property right, but penalties for theft are roughly the same as in the United States. Criticisms The term "intellectual property" Criticism of the term intellectual property ranges from discussing its vagueness and abstract overreach to outright challenges to the semantic validity of using words like property and rights in ways that contradict practice and law.
Many detractors think this term specifically serves the doctrinal agenda of parties opposing reform in the public interest or otherwise abusing related legislation, and that it precludes intelligent discussion about specific and often unrelated aspects of copyright, patents, trademarks, etc. Free Software Foundation founder Richard Stallman argues that, although the term intellectual property is in wide use, it should be rejected altogether, because it "systematically distorts and confuses these issues, and its use was and is promoted by those who gain from this confusion". He claims that the term "operates as a catch-all to lump together disparate laws [which] originated separately, evolved differently, cover different activities, have different rules, and raise different public policy issues" and that it creates a "bias" by confusing these monopolies with ownership of limited physical things, likening them to "property rights". Stallman advocates referring to copyrights, patents and trademarks in the singular and warns against abstracting disparate laws into a collective term. He argues that "to avoid spreading unnecessary bias and confusion, it is best to adopt a firm policy not to speak or even think in terms of 'intellectual property'." Similarly, economists Boldrin and Levine prefer to use the term "intellectual monopoly" as a more appropriate and clear definition of the concept, which, they argue, is very dissimilar from property rights. They further argue that "stronger patents do little or nothing to encourage innovation", mainly explained by their tendency to create market monopolies, thereby restricting further innovations and technology transfer.
On the assumption that intellectual property rights are actual rights, Stallman says that this claim does not live up to the historical intentions behind these laws, which in the case of copyright served as a censorship system, and later on, a regulatory model for the printing press that may have benefited authors incidentally, but never interfered with the freedom of average readers. Still referring to copyright, he cites legal sources such as the United States Constitution and case law to demonstrate that the law is meant to be an optional and experimental bargain to temporarily trade property rights and free speech for public, not private, benefits in the form of increased artistic production and knowledge. He mentions that "if copyright were a natural right nothing could justify terminating this right after a certain period of time". Law professor, writer and political activist Lawrence Lessig, along with many other copyleft and free software activists, has criticized the implied analogy with physical property (like land or an automobile). They argue such an analogy fails because physical property is generally rivalrous while intellectual works are non-rivalrous (that is, if one makes a copy of a work, the enjoyment of the copy does not prevent enjoyment of the original). Other arguments along these lines claim that unlike the situation with tangible property, there is no natural scarcity of a particular idea or information: once it exists at all, it can be re-used and duplicated indefinitely without such re-use diminishing the original. Stephan Kinsella has objected to intellectual property on the grounds that the word "property" implies scarcity, which may not be applicable to ideas.
Entrepreneur and politician Rickard Falkvinge and hacker Alexandre Oliva have independently compared George Orwell's fictional dialect Newspeak to the terminology used by intellectual property supporters as a linguistic weapon to shape public opinion regarding copyright debate and DRM. Alternative terms In civil law jurisdictions, intellectual property has often been referred to as intellectual rights, traditionally a somewhat broader concept that has included moral rights and other personal protections that cannot be bought or sold. Use of the term intellectual rights has declined since the early 1980s, as use of the term intellectual property has increased. Alternative terms monopolies on information and intellectual monopoly have emerged among those who argue against the "property" or "intellect" or "rights" assumptions, notably Richard Stallman. The backronyms intellectual protectionism and intellectual poverty, whose initials are also IP, have found supporters as well, especially among those who have used the backronym digital restrictions management. The argument that an intellectual property right should (in the interests of better balancing of relevant private and public interests) be termed an intellectual monopoly privilege (IMP) has been advanced by several academics including Birgitte Andersen and Thomas Alured Faunce. Objections to overly broad intellectual property laws Some critics of intellectual property, such as those in the free culture movement, point at intellectual monopolies as harming health (in the case of pharmaceutical patents), preventing progress, and benefiting concentrated interests to the detriment of the masses, and argue that the public interest is harmed by ever-expansive monopolies in the form of copyright extensions, software patents, and business method patents. 
More recently, scientists and engineers have expressed concern that patent thickets are undermining technological development even in high-tech fields like nanotechnology. Petra Moser has asserted that historical analysis suggests that intellectual property laws may harm innovation: Overall, the weight of the existing historical evidence suggests that patent policies, which grant strong intellectual property rights to early generations of inventors, may discourage innovation. On the contrary, policies that encourage the diffusion of ideas and modify patent laws to facilitate entry and encourage competition may be an effective mechanism to encourage innovation. In support of that argument, Jörg Baten, Nicola Bianchi and Petra Moser find historical evidence that compulsory licensing in particular – which allows governments to license patents without the consent of patent-owners – encouraged invention in Germany in the early 20th century by increasing the threat of competition in fields with low pre-existing levels of competition. Peter Drahos notes, "Property rights confer authority over resources. When authority is granted to the few over resources on which the many depend, the few gain power over the goals of the many. This has consequences for both political and economic freedom within a society." The World Intellectual Property Organization (WIPO) recognizes that conflicts may exist between the respect for and implementation of current intellectual property systems and other human rights. In 2001 the UN Committee on Economic, Social and Cultural Rights issued a document called "Human rights and intellectual property" that argued that intellectual property tends to be governed by economic goals when it should be viewed primarily as a social product; in order to serve human well-being, intellectual property systems must respect and conform to human rights laws.
According to the Committee, when systems fail to do so, they risk infringing upon the human right to food and health, and to cultural participation and scientific benefits. In 2004 the General Assembly of WIPO adopted The Geneva Declaration on the Future of the World Intellectual Property Organization which argues that WIPO should "focus more on the needs of developing countries, and to view IP as one of many tools for development—not as an end in itself". Ethical problems are most pertinent when socially valuable goods like life-saving medicines are given IP protection. While the application of IP rights can allow companies to charge higher than the marginal cost of production in order to recoup the costs of research and development, the price may exclude from the market anyone who cannot afford the cost of the product, in this case a life-saving drug. "An IPR driven regime is therefore not a regime that is conducive to the investment of R&D of products that are socially valuable to predominantly poor populations". Libertarians have differing views on intellectual property. Stephan Kinsella, an anarcho-capitalist on the right wing of libertarianism, argues against intellectual property because allowing property rights in ideas and information creates artificial scarcity and infringes on the right to own tangible property. Kinsella uses the following scenario to argue this point: [I]magine the time when men lived in caves. One bright guy—let's call him Galt-Magnon—decides to build a log cabin on an open field, near his crops. To be sure, this is a good idea, and others notice it. They naturally imitate Galt-Magnon, and they start building their own cabins. But the first man to invent a house, according to IP advocates, would have a right to prevent others from building houses on their own land, with their own logs, or to charge them a fee if they do build houses.
It is plain that the innovator in these examples becomes a partial owner of the tangible property (e.g., land and logs) of others, due not to first occupation and use of that property (for it is already owned), but due to his coming up with an idea. Clearly, this rule flies in the face of the first-user homesteading rule, arbitrarily and groundlessly overriding the very homesteading rule that is at the foundation of all property rights. Thomas Jefferson once said in a letter to Isaac McPherson on 13 August 1813: If nature has made any one thing less susceptible than all others of exclusive property, it is the action of the thinking power called an idea, which an individual may exclusively possess as long as he keeps it to himself; but the moment it is divulged, it forces itself into the possession of every one, and the receiver cannot dispossess himself of it. Its peculiar character, too, is that no one possesses the less, because every other possesses the whole of it. He who receives an idea from me, receives instruction himself without lessening mine; as he who lights his taper at mine, receives light without darkening me. In 2005 the Royal Society of Arts launched the Adelphi Charter, aimed at creating an international policy statement to frame how governments should make balanced intellectual property law. Another aspect of current U.S. intellectual property legislation is its focus on individual and joint works; thus, copyright protection can only be obtained in 'original' works of authorship. Critics like Philip Bennet argue that this does not provide adequate protection against cultural appropriation of indigenous knowledge, for which a collective IP regime is needed. Intellectual property law has been criticized as not recognizing new forms of art such as the remix culture, whose participants often commit what technically constitutes violations of such laws, creating works such as anime music videos and
in 1847 Congressman Abraham Lincoln donated $10 ($307 in 2019 value). Pope Pius IX also made a personal contribution of 1,000 Scudi (approximately £213) for famine relief in Ireland and authorized collections in Rome. Most significantly, on 25 March 1847, Pius IX issued the encyclical Praedecessores nostros, which called on the whole Catholic world to contribute, both financially and spiritually, to Irish relief. Major figures behind international Catholic fundraising for Ireland were the rector of the Pontifical Irish College, Paul Cullen, and the President of the Society of Saint Vincent de Paul, Jules Gossin. International fundraising activities received donations from locations as diverse as Venezuela, Australia, South Africa, Mexico, Russia and Italy. In addition to the religious, non-religious organisations came to the assistance of famine victims. The British Relief Association was one such group. Founded on 1 January 1847 by Lionel de Rothschild, Abel Smith, and other prominent bankers and aristocrats, the Association raised money throughout England, America, and Australia; their funding drive was benefited by a "Queen's Letter", a letter from Queen Victoria appealing for money to relieve the distress in Ireland. With this initial letter, the Association raised £171,533. A second, somewhat less successful "Queen's Letter" was issued in late 1847. In total, the Association raised approximately £390,000 for Irish relief. Private initiatives such as the Central Relief Committee of the Society of Friends (Quakers) attempted to fill the gap caused by the end of government relief, and eventually, the government reinstated the relief works, although bureaucracy slowed the release of food supplies. Thousands of dollars were raised in the United States, including $170 ($5,218 in 2019 value) collected from a group of Native American Choctaws in 1847.
Judy Allen, editor of the Choctaw Nation of Oklahoma's newspaper Biskinik, wrote that "It had been just 16 years since the Choctaw people had experienced the Trail of Tears, and they had faced starvation ... It was an amazing gesture." To mark the 150th anniversary, eight Irish people retraced the Trail of Tears. Contributions by the United States during the famine were highlighted by Senator Henry Clay, who said: "No imagination can conceive—no tongue express—no brush paint—the horrors of the scenes which are daily exhibited in Ireland." He called upon Americans to remind them that the practice of charity was the greatest act of humanity they could do. In total, 118 vessels sailed from the US to Ireland with relief goods valued at $545,145. Notable contributions came from South Carolina and from Philadelphia, Pennsylvania. Pennsylvania was the second most important state for famine relief in the US, and Philadelphia was the second-largest shipping port for aid to Ireland. The state hosted the Philadelphia Irish Famine Relief Committee. Catholic, Methodist, Quaker, Presbyterian, Episcopalian, Lutheran, Moravian and Jewish groups put aside their differences in the name of humanity to help out the Irish. South Carolina rallied around the efforts to help those experiencing the famine. They raised donations of money, food and clothing to help the victims of the famine—Irish immigrants made up 39% of the white population in the southern cities. Historian Harvey Strum claims that "The states ignored all their racial, religious, and political differences to support the cause for relief." The total sum of voluntary contributions for famine relief in Ireland can be estimated at £1.5 million (the real price equivalent of £135 million in 2018), of which less than £1 million came from abroad. Eviction Landlords were responsible for paying the rates of every tenant whose yearly rent was £4 or less. Landlords whose land was crowded with poorer tenants were now faced with large bills.
Many began clearing the poor tenants from their small plots and letting the land in larger plots at rents over £4, which reduced their liability for rates. In 1846, there had been some clearances, but the great mass of evictions came in 1847. According to James S. Donnelly Jr., it is impossible to be sure how many people were evicted during the years of the famine and its immediate aftermath. It was only in 1849 that the police began to keep a count, and they recorded a total of almost 250,000 persons as officially evicted between 1849 and 1854. Donnelly considered this to be an underestimate, and if the figures were to include the number pressured into "voluntary" surrenders during the whole period (1846–1854), the figure would almost certainly exceed half a million persons. While Helen Litton says there were also thousands of "voluntary" surrenders, she notes also that there was "precious little voluntary about them". In some cases, tenants were persuaded to accept a small sum of money to leave their homes, "cheated into believing the workhouse would take them in". West Clare was one of the worst areas for evictions, where landlords turned thousands of families out and demolished their derisory cabins. Captain Kennedy in April 1848 estimated that 1,000 houses, with an average of six people to each, had been levelled since November. The Mahon family of Strokestown House evicted 3,000 people in 1847 and were still able to dine on lobster soup. After Clare, the worst area for evictions was County Mayo, accounting for 10% of all evictions between 1849 and 1854. George Bingham, 3rd Earl of Lucan, who owned over , was among the worst evicting landlords. He was quoted as saying that "he would not breed paupers to pay priests". Having turned out over 2,000 tenants in the parish of Ballinrobe alone, he then used the cleared land as grazing farms.
In 1848, the Marquis of Sligo owed £1,650 to Westport Union; he was also an evicting landlord, though he claimed to be selective, saying that he was only getting rid of the idle and dishonest. Altogether, he cleared about 25% of his tenants. The Bishop of Meath, Thomas Nulty, described his personal recollections of the 1847 evictions in a pastoral letter to his clergy. The population in Drumbaragh, a townland in County Meath, plummeted 67 per cent between 1841 and 1851; in neighbouring Springville, it fell 54 per cent. There were fifty houses in Springville in 1841 and only eleven left in 1871. According to Litton, evictions might have taken place earlier but for fear of the secret societies. However, these were now greatly weakened by the Famine. Revenge still occasionally took place, with seven landlords being shot, six fatally, during the autumn and winter of 1847. Ten other occupiers of land, though without tenants, were also murdered, she says. One such landlord reprisal occurred in West Roscommon. The "notorious" Major Denis Mahon forced the eviction of thousands of his tenants before the end of 1847, with an estimated 60 per cent decline in population in some parishes. He was shot dead in that year. In East Roscommon, "where conditions were more benign", the estimated decline in population was under 10 per cent. Lord Clarendon, alarmed at the number of landlords being shot and that this might mean rebellion, asked for special powers. Lord John Russell was not sympathetic to this appeal. Lord Clarendon believed that the landlords themselves were mostly responsible for the tragedy in the first place, saying that "It is quite true that landlords in England would not like to be shot like hares and partridges ... but neither does any landlord in England turn out fifty persons at once and burn their houses over their heads, giving them no provision for the future."
The Crime and Outrage Act was passed in December 1847 as a compromise, and additional troops were sent to Ireland. The "Gregory clause", described by Donnelly as a "vicious amendment to the Irish poor law", had been a successful Tory amendment to the Whig poor-relief bill which became law in early June 1847; its potential as an estate-clearing device was widely recognised in parliament, though only after its passage. At first, the poor law commissioners and inspectors viewed the clause as a valuable instrument for a more cost-effective administration of public relief, but the drawbacks soon became apparent, even from an administrative perspective; from a humanitarian perspective, they soon came to view it as little more than murderous. According to Donnelly, it became obvious that the quarter-acre clause was "indirectly a death-dealing instrument". Emigration At least a million people are thought to have emigrated as a result of the famine. There were about 1 million long-distance emigrants between 1846 and 1851, mainly to North America. The total given in the 1851 census is 967,908. Short-distance emigrants, mainly to Britain, may have numbered 200,000 or more. While the famine was responsible for a significant increase in emigration from Ireland, of anywhere from 45% to nearly 85% depending on the year and the county, it was not the sole cause. The beginning of mass emigration from Ireland can be traced to the mid-18th century, when some 250,000 people left Ireland over a period of 50 years to settle in the New World. Irish economist Cormac Ó Gráda estimates that between 1 million and 1.5 million people emigrated during the 30 years between 1815 (when Napoleon was defeated at Waterloo) and 1845 (when the Great Famine began). However, during the worst of the famine, emigration reached somewhere around 250,000 in one year alone, with western Ireland seeing the most emigrants.
Families did not migrate en masse, but younger members of families did, so much so that emigration almost became a rite of passage, as evidenced by the data that show that, unlike similar emigrations throughout world history, women emigrated just as often, just as early, and in the same numbers as men. The emigrants would send remittances (reaching a total of £1,404,000 by 1851) back to family in Ireland, which, in turn, allowed another member of their family to leave. Emigration during the famine years of 1845–1850 was primarily to England, Scotland, South Wales, North America, and Australia. Many of those fleeing to the Americas used the McCorkell Line. One city that experienced a particularly strong influx of Irish immigrants was Liverpool, with at least one-quarter of the city's population being Irish-born by 1851. This would heavily influence the city's identity and culture in the coming years, earning it the nickname of "Ireland's second capital". Liverpool became the only place outside of Ireland to elect an Irish nationalist to parliament when it elected T. P. O'Connor in 1885, and continuously re-elected him unopposed until his death in 1929. As of 2020, it is estimated that three quarters of people from the city have Irish ancestry. Of the more than 100,000 Irish that sailed to Canada in 1847, an estimated one out of five died from disease and malnutrition, including over 5,000 at Grosse Isle, Quebec, an island in the Saint Lawrence River used to quarantine ships near Quebec City. Overcrowded, poorly maintained, and badly provisioned vessels known as coffin ships sailed from small, unregulated harbours in the West of Ireland in contravention of British safety requirements, and mortality rates were high. The 1851 census reported that more than half the inhabitants of Toronto were Irish, and, in 1847 alone, 38,000 Irish flooded a city with fewer than 20,000 citizens. 
Other Canadian cities such as Quebec City, Montreal, Ottawa, Kingston, Hamilton, and Saint John also received large numbers. By 1871, 55% of Saint John residents were Irish natives or children of Irish-born parents. Unlike the United States, Canada could not close its ports to Irish ships because it was part of the British Empire, so emigrants could obtain cheap passage in returning empty lumber holds. In America, most Irish became city-dwellers; with little money, many had to settle in the cities where their ships landed. By 1850, the Irish made up a quarter of the population in Boston, New York City, Philadelphia, and Baltimore. The famine marked the beginning of the depopulation of Ireland in the 19th century. The population had increased by 13–14% in the first three decades of the 19th century; between 1831 and 1841, the population grew by 5%. Application of Thomas Malthus's idea of population expanding geometrically while resources increase arithmetically was popular during the famines of 1817 and 1822. By the 1830s, such explanations were seen as overly simplistic, and Ireland's problems were seen "less as an excess of population than as a lack of capital investment". The population of Ireland was increasing no faster than that of England, which suffered no equivalent catastrophe. By 1854, between 1.5 and 2 million Irish had left their country due to evictions, starvation, and harsh living conditions. Death toll It is not known exactly how many people died during the period of the famine, although it is believed that more died from disease than from starvation. State registration of births, marriages, or deaths had not yet begun, and records kept by the Catholic Church are incomplete. One possible estimate has been reached by comparing the expected population with the eventual numbers in the 1850s. A census taken in 1841 recorded a population of 8,175,124. A census immediately after the famine in 1851 counted 6,552,385, a drop of over 1.5 million in 10 years.
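The scale of the decline can be checked with simple arithmetic on the census figures above. A minimal sketch (the "expected" figure of 9 million is the census commissioners' round counterfactual, so the shortfall bundles excess deaths together with emigration):

```python
# Census figures cited in the text (1841 and 1851 censuses of Ireland).
pop_1841 = 8_175_124
pop_1851 = 6_552_385

# Observed ten-year drop: 1,622,739 — "over 1.5 million".
drop = pop_1841 - pop_1851
print(f"Recorded drop 1841-1851: {drop:,}")

# The commissioners estimated the 1851 population would have exceeded
# 9 million at the normal rate of increase; the shortfall against that
# counterfactual mixes excess mortality with famine-era emigration.
expected_1851 = 9_000_000  # commissioners' round estimate, as quoted
shortfall = expected_1851 - pop_1851
print(f"Shortfall vs. expected population: {shortfall:,}")
```

This is why the census drop alone understates the demographic loss: roughly a million long-distance emigrants also left in these years, and the counterfactual shortfall is correspondingly larger than the raw drop.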
The census commissioners estimated that, at the normal rate of population increase, the population in 1851 should have grown to just over 9 million if the famine had not occurred. The in-development Great Irish Famine Online resource, produced by the Geography Department of University College Cork, likewise regards the census figures as low, stating that "it is now generally believed" that over 8.75 million people populated the island of Ireland before the famine struck. In 1851, the census commissioners collected information on the number who died in each family since 1841, and the cause, season, and year of death. They recorded 21,770 total deaths from starvation in the previous decade and 400,720 deaths from disease. Listed diseases were fever, diphtheria, dysentery, cholera, smallpox, and influenza, with the first two being the main killers (222,021 and 93,232). The commissioners acknowledged that their figures were incomplete and that the true number of deaths was probably higher: The greater the amount of destitution of mortality ... the less will be the amount of recorded deaths derived through any household form;—for not only were whole families swept away by disease ... but whole villages were effaced from off the land. Later historians agree that the 1851 death tables "were flawed and probably under-estimated the level of mortality". The combination of institutional figures and figures provided by individuals gives "an incomplete and biased count" of fatalities during the famine. Cormac Ó Gráda, referencing the work of W. A. MacArthur, writes that specialists have long known that the Irish death tables were inaccurate, and undercounted the number of deaths. S. H. Cousens's estimate of 800,000 deaths relied heavily on retrospective information contained in the 1851 census and elsewhere, and is now regarded as too low. Modern historian Joseph Lee says "at least 800,000", and R. F.
Foster estimates that "at least 775,000 died, mostly through disease, including cholera in the latter stages of the holocaust". He further notes that "a recent sophisticated computation estimates excess deaths from 1846 to 1851 as between 1,000,000 and 1,500,000 ... after a careful critique of this, other statisticians arrive at a figure of 1,000,000". Joel Mokyr's estimates at an aggregated county level range from 1.1 million to 1.5 million deaths between 1846 and 1851. Mokyr produced two sets of data, an upper-bound and a lower-bound estimate, which showed little difference in regional patterns. The true figure is likely to lie between the two extremes of half and one and a half million, and the most widely accepted estimate is one million. Another area of uncertainty lies in the descriptions of disease given by tenants as to the cause of their relatives' deaths. Though the 1851 census has been rightly criticised as underestimating the true extent of mortality, it does provide a framework for the medical history of the Great Famine. The diseases that badly affected the population fell into two categories: famine-induced diseases and diseases of nutritional deficiency. Of the nutritional deficiency diseases, the most commonly experienced were starvation and marasmus, as well as a condition at the time called dropsy. Dropsy (oedema) was a popular name given for the symptoms of several diseases, one of which, kwashiorkor, is associated with starvation. However, the greatest mortality was not from nutritional deficiency diseases, but from famine-induced ailments. The malnourished are very vulnerable to infections; therefore, these were more severe when they occurred. Measles, diphtheria, diarrhoea, tuberculosis, most respiratory infections, whooping cough, many intestinal parasites, and cholera were all strongly conditioned by nutritional status.
Potentially lethal diseases, such as smallpox and influenza, were so virulent that their spread was independent of nutrition. The best example of this phenomenon was fever, which exacted the greatest death toll. In the popular mind, as well as medical opinion, fever and famine were closely related. Social dislocation—the congregation of the hungry at soup kitchens, food depots, and overcrowded workhouses—created conditions that were ideal for spreading infectious diseases such as typhus, typhoid, and relapsing fever. Diarrhoeal diseases were the result of poor hygiene, bad sanitation, and dietary changes. The concluding attack on a population incapacitated by famine was delivered by Asiatic cholera, which had visited Ireland briefly in the 1830s. In the following decade, it spread uncontrollably across Asia, through Europe, and into Britain, finally reaching Ireland in 1849. Some scholars estimate that the population of Ireland was reduced by 20–25%.

After the famine

Ireland's mean age of marriage in 1830 was 23.8 for women and 27.5 for men, up from earlier averages of 21 for women and 25 for men; by 1840, the ages had risen to 24.4 and 27.7 respectively, and those who never married numbered about 10% of the population. In the decades after the Famine, the age of marriage had risen to 28–29 for women and 33 for men, and as many as a third of Irishmen and a quarter of Irishwomen never married, due to low wages and chronic economic problems that discouraged early and universal marriage. One consequence of the increase in the number of orphaned children was that some young women turned to prostitution to provide for themselves. Some of the women who became Wrens of the Curragh were famine orphans. The potato blight would return to Ireland in 1879, though by then the rural cottier tenant farmers and labourers of Ireland had begun the "Land War", described as one of the largest agrarian movements to take place in nineteenth-century Europe.
By the time the potato blight returned in 1879, the Land League, led by Michael Davitt, who was born during the Great Famine and whose family had been evicted when he was only four years old, encouraged the mass boycott of "notorious landlords", with some members also physically blocking evictions. The policy, however, would soon be suppressed, with close to 1,000 people interned under the 1881 Coercion Act for suspected membership. With the reduction in homelessness and the growth of physical and political networks eroding the system of landlordism, the severity of the later, shorter famine was limited. According to the linguist Erick Falc'her-Poyroux, surprisingly for a country renowned for its rich musical heritage, only a small number of folk songs can be traced back to the demographic and cultural catastrophe brought about by the Great Famine, and he infers from this that the subject was generally avoided for decades among poorer people as it brought back too many sorrowful memories. Also, large areas of the country became uninhabited, and the folk song collectors of the eighteenth and nineteenth centuries did not collect the songs they heard in the Irish language, as the language of the peasantry was often regarded as dead, or "not delicate enough for educated ears". Of the songs that have survived, probably the best known is Skibbereen. Emigration has been an important source of inspiration for the songs of the Irish during the 20th century.

Analysis of the government's role

Contemporary analysis

Contemporary opinion was sharply critical of the Russell government's response to and management of the crisis. From the start, there were accusations that the government failed to grasp the magnitude of the disaster.
Sir James Graham, who had served as Home Secretary in Sir Robert Peel's late government, wrote to Peel that, in his opinion, "the real extent and magnitude of the Irish difficulty are underestimated by the Government, and cannot be met by measures within the strict rule of economical science". This criticism was not confined to outside critics. The Lord-Lieutenant of Ireland, Lord Clarendon, wrote a letter to Russell on 26 April 1849, urging that the government propose additional relief measures: "I don't think there is another legislature in Europe that would disregard such suffering as now exists in the west of Ireland, or coldly persist in a policy of extermination." Also in 1849, the Chief Poor Law Commissioner, Edward Twisleton, resigned in protest over the Rate-in-Aid Act, which provided additional funds for the Poor Law through a 6d in the pound levy on all rateable properties in Ireland. Twisleton testified that "comparatively trifling sums were required for Britain to spare itself the deep disgrace of permitting its miserable fellow-subjects to die of starvation". According to Peter Gray in his book The Irish Famine, the government spent £7 million for relief in Ireland between 1845 and 1850, "representing less than half of one percent of the British gross national product over five years. Contemporaries noted the sharp contrast with the £20 million compensation given to West Indian slave-owners in the 1830s." Other critics maintained that, even after the government recognised the scope of the crisis, it failed to take sufficient steps to address it. John Mitchel, one of the leaders of the Young Ireland Movement, wrote in 1860: I have called it an artificial famine: that is to say, it was a famine which desolated a rich and fertile island that produced every year abundance and superabundance to sustain all her people and many more. The English, indeed, call the famine a "dispensation of Providence"; and ascribe it entirely to the blight on potatoes. 
But potatoes failed in like manner all over Europe; yet there was no famine save in Ireland. The British account of the matter, then, is first, a fraud; second, a blasphemy. The Almighty, indeed, sent the potato blight, but the English created the famine. Still other critics saw reflected in the government's response its attitude to the so-called "Irish Question". Nassau Senior, an economics professor at Oxford University, wrote that the Famine "would not kill more than one million people, and that would scarcely be enough to do any good". In 1848, Denis Shine Lawlor suggested that Russell was a student of the Elizabethan poet Edmund Spenser, who had calculated "how far English colonisation and English policy might be most effectively carried out by Irish starvation". Charles Trevelyan, the civil servant with most direct responsibility for the government's handling of the famine, described it in 1848 as "a direct stroke of an all-wise and all-merciful Providence", which laid bare "the deep and inveterate root of social evil"; he affirmed that the Famine was "the sharp but effectual remedy by which the cure is likely to be effected. God grant that the generation to which this opportunity has been offered may rightly perform its part..."

Historical analysis

Christine Kinealy has written that "the major tragedy of the Irish Famine of 1845–52 marked a watershed in modern Irish history. Its occurrence, however, was neither inevitable nor unavoidable". The underlying factors which combined to cause the famine were aggravated by an inadequate government response. Kinealy notes that the "government had to do something to help alleviate the suffering" but that "it became apparent that the government was using its information not merely to help it formulate its relief policies, but also as an opportunity to facilitate various long-desired changes within Ireland". Some also pointed to the structure of the British Empire as a contributing factor.
James Anthony Froude wrote that "England governed Ireland for what she deemed her own interest, making her calculations on the gross balance of her trade ledgers, and leaving moral obligations aside, as if right and wrong had been blotted out of the statute book of the Universe." Dennis Clark, an Irish-American historian and critic of empire, claimed the famine was "the culmination of generations of neglect, misrule and repression. It was an epic of English colonial cruelty and inadequacy. For the landless cabin dwellers it meant emigration or extinction..."

Position of the British government

The British government has not expressly apologised for its role in the famine. However, in 1997, at a commemoration event in County Cork, the actor Gabriel Byrne read out a message from Prime Minister Tony Blair that acknowledged the inadequacy of the government response. It asserted that "those who governed in London at the time failed their people through standing by while a crop failure turned into a massive human tragedy". The message was well received in Ireland, where it was understood as the long sought-after British apology. Archive documents released in 2021 showed that the message was not in fact written or approved by Blair, who could not be reached by aides at the time; it was instead approved by Blair's principal private secretary, John Holmes, on his own initiative.

Genocide question

The famine remains a controversial event in Irish history. Whether the British government's response to the failure of the potato crop, and the continued exportation of food crops and livestock, constituted a genocide remains a subject of political debate. Most historians reject the claim that the famine constituted a genocide. In 1996, the U.S. state of New Jersey included the famine in the "Holocaust and Genocide Curriculum" of its secondary schools.
The curriculum was pushed by various Irish American political groups and drafted by the librarian James Mullin. Following criticism of the curriculum, the New Jersey Holocaust Commission requested statements from two academics that the Irish famine was genocide, which were eventually provided by the law professors Charles E. Rice and Francis Boyle, who had not previously been known for studying Irish history. They concluded that the British government deliberately pursued a race- and ethnicity-based policy aimed at destroying the Irish people and that the policy of mass starvation amounted to genocide per retrospective application of article 2 of the Genocide Convention of 1948. The Irish historian Cormac Ó Gráda rejected the claim that the famine was a genocide. He argued that "genocide includes murderous intent, and it must be said that not even the most bigoted and racist commentators of the day sought the extermination of the Irish",

and that she even craved alms from all mankind". He further suggested that in Ireland no one ever asked alms or favours of any kind from England or any other nation, but that it was England herself that begged for Ireland. He also claimed that it was England that "sent 'round the hat over all the globe, asking a penny for the love of God to relieve the poor Irish", and, constituting herself the agent of all that charity, took all the profit of it. Large sums of money were donated by charities; the first foreign campaign, in December 1845, included the Boston Repeal Association and the Catholic Church. Calcutta is credited with making the first larger donations in 1846, totalling around £14,000. The money raised included contributions by Irish soldiers serving there and Irish people employed by the East India Company. Russian Tsar Alexander II sent funds and Queen Victoria donated £2,000.
According to legend, Sultan Abdülmecid I of the Ottoman Empire originally offered to send £10,000 but was asked either by British diplomats or his own ministers to reduce it to £1,000 to avoid donating more than the Queen. U.S. President James K. Polk donated $50, and in 1847 Congressman Abraham Lincoln donated $10 ($307 in 2019 value). Pope Pius IX also made a personal contribution of 1,000 Scudi (approximately £213) for famine relief in Ireland and authorised collections in Rome. Most significantly, on 25 March 1847, Pius IX issued the encyclical Praedecessores nostros, which called on the whole Catholic world to contribute financially and spiritually to Irish relief. Major figures behind international Catholic fundraising for Ireland were the rector of the Pontifical Irish College, Paul Cullen, and the President of the Society of Saint Vincent de Paul, Jules Gossin. International fundraising activities received donations from locations as diverse as Venezuela, Australia, South Africa, Mexico, Russia and Italy. In addition to the religious, non-religious organisations came to the assistance of famine victims. The British Relief Association was one such group. Founded on 1 January 1847 by Lionel de Rothschild, Abel Smith, and other prominent bankers and aristocrats, the Association raised money throughout England, America, and Australia; their funding drive benefited from a "Queen's Letter", a letter from Queen Victoria appealing for money to relieve the distress in Ireland. With this initial letter, the Association raised £171,533. A second, somewhat less successful "Queen's Letter" was issued in late 1847. In total, the Association raised approximately £390,000 for Irish relief. Private initiatives such as the Central Relief Committee of the Society of Friends (Quakers) attempted to fill the gap caused by the end of government relief, and eventually the government reinstated the relief works, although bureaucracy slowed the release of food supplies.
Thousands of dollars were raised in the United States, including $170 ($5,218 in 2019 value) collected from a group of Native American Choctaws in 1847. Judy Allen, editor of the Choctaw Nation of Oklahoma's newspaper Biskinik, wrote that "It had been just 16 years since the Choctaw people had experienced the Trail of Tears, and they had faced starvation ... It was an amazing gesture." To mark the 150th anniversary, eight Irish people retraced the Trail of Tears. Contributions by the United States during the famine were highlighted by Senator Henry Clay, who said: "No imagination can conceive—no tongue express—no brush paint—the horrors of the scenes which are daily exhibited in Ireland." He called upon Americans to remember that the practice of charity was the greatest act of humanity they could perform. In total, 118 vessels sailed from the US to Ireland with relief goods valued at $545,145. Notable sources of aid included South Carolina and Philadelphia, Pennsylvania. Pennsylvania was the second most important state for famine relief in the US, and Philadelphia was the second-largest shipping port for aid to Ireland. The state hosted the Philadelphia Irish Famine Relief Committee. Catholics, Methodists, Quakers, Presbyterians, Episcopalians, Lutherans, Moravian and Jewish groups put aside their differences in the name of humanity to help the Irish. South Carolina rallied around the efforts to help those experiencing the famine, raising donations of money, food and clothing for the victims—Irish immigrants made up 39% of the white population in the southern cities. Historian Harvey Strum claims that "The states ignored all their racial, religious, and political differences to support the cause for relief." The total sum of voluntary contributions for famine relief in Ireland can be estimated at £1.5 million (the real price equivalent of £135 million in 2018), of which less than £1 million came from abroad.
Eviction

Landlords were responsible for paying the rates of every tenant whose yearly rent was £4 or less. Landlords whose land was crowded with poorer tenants were now faced with large bills. Many began clearing the poor tenants from their small plots and letting the land in larger plots at rents over £4, which reduced their debts. In 1846, there had been some clearances, but the great mass of evictions came in 1847. According to James S. Donnelly Jr., it is impossible to be sure how many people were evicted during the years of the famine and its immediate aftermath. It was only in 1849 that the police began to keep a count, and they recorded a total of almost 250,000 persons as officially evicted between 1849 and 1854. Donnelly considered this to be an underestimate, and if the figures were to include the number pressured into "voluntary" surrenders during the whole period (1846–1854), the figure would almost certainly exceed half a million persons. While Helen Litton says there were also thousands of "voluntary" surrenders, she notes also that there was "precious little voluntary about them". In some cases, tenants were persuaded to accept a small sum of money to leave their homes, "cheated into believing the workhouse would take them in". West Clare was one of the worst areas for evictions, where landlords turned thousands of families out and demolished their derisory cabins. Captain Kennedy in April 1848 estimated that 1,000 houses, with an average of six people to each, had been levelled since November. The Mahon family of Strokestown House evicted 3,000 people in 1847 and were still able to dine on lobster soup. After Clare, the worst area for evictions was County Mayo, accounting for 10% of all evictions between 1849 and 1854. George Bingham, 3rd Earl of Lucan, who owned vast estates in the county, was among the worst evicting landlords. He was quoted as saying that "he would not breed paupers to pay priests".
In the parish of Ballinrobe alone he turned out over 2,000 tenants, then used the cleared land as grazing farms. In 1848, the Marquis of Sligo owed £1,650 to Westport Union; he was also an evicting landlord, though he claimed to be selective, saying that he was only getting rid of the idle and dishonest. Altogether, he cleared about 25% of his tenants. In 1847, the Bishop of Meath, Thomas Nulty, described his personal recollection of the evictions in a pastoral letter to his clergy. The population in Drumbaragh, a townland in County Meath, plummeted 67 per cent between 1841 and 1851; in neighbouring Springville, it fell 54 per cent. There were fifty houses in Springville in 1841 and only eleven left in 1871. According to Litton, evictions might have taken place earlier but for fear of the secret societies. However, these were now greatly weakened by the Famine. Revenge still occasionally took place, with seven landlords being shot, six fatally, during the autumn and winter of 1847. Ten other occupiers of land, though without tenants, were also murdered, she says. One such landlord reprisal occurred in West Roscommon. The "notorious" Major Denis Mahon forced thousands of his tenants into eviction before the end of 1847, with an estimated 60 per cent decline in population in some parishes. He was shot dead that year. In East Roscommon, "where conditions were more benign", the estimated decline in population was under 10 per cent. Lord Clarendon, alarmed at the number of landlords being shot and fearing that this might mean rebellion, asked for special powers. Lord John Russell was not sympathetic to this appeal. Lord Clarendon believed that the landlords themselves were mostly responsible for the tragedy in the first place, saying that "It is quite true that landlords in England would not like to be shot like hares and partridges ...
but neither does any landlord in England turn out fifty persons at once and burn their houses over their heads, giving them no provision for the future." The Crime and Outrage Act was passed in December 1847 as a compromise, and additional troops were sent to Ireland. The "Gregory clause", described by Donnelly as a "vicious amendment to the Irish poor law", had been a successful Tory amendment to the Whig poor-relief bill which became law in early June 1847; its potential as an estate-clearing device was widely recognised in parliament only after its passage, not in advance. At first, the poor law commissioners and inspectors viewed the clause as a valuable instrument for a more cost-effective administration of public relief, but the drawbacks soon became apparent, even from an administrative perspective. From a humanitarian perspective, they soon came to view it as little more than murderous. According to Donnelly, it became obvious that the quarter-acre clause was "indirectly a death-dealing instrument".

Emigration

At least a million people are thought to have emigrated as a result of the famine. There were about 1 million long-distance emigrants between 1846 and 1851, mainly to North America. The total given in the 1851 census is 967,908. Short-distance emigrants, mainly to Britain, may have numbered 200,000 or more. While the famine was responsible for a significant increase in emigration from Ireland, of anywhere from 45% to nearly 85% depending on the year and the county, it was not the sole cause. The beginning of mass emigration from Ireland can be traced to the mid-18th century, when some 250,000 people left Ireland over a period of 50 years to settle in the New World. Irish economist Cormac Ó Gráda estimates that between 1 million and 1.5 million people emigrated during the 30 years between 1815 (when Napoleon was defeated at Waterloo) and 1845 (when the Great Famine began).
However, during the worst of the famine, emigration reached somewhere around 250,000 in one year alone, with western Ireland seeing the most emigrants. Families did not migrate en masse; rather, their younger members did, so much so that emigration almost became a rite of passage. Unlike similar emigrations throughout world history, the data show that women emigrated just as often, just as early, and in the same numbers as men. The emigrants would send remittances (reaching a total of £1,404,000 by 1851) back to family in Ireland, which, in turn, allowed another member of their family to leave. Emigration during the famine years of 1845–1850 was primarily to England, Scotland, South Wales, North America, and Australia. Many of those fleeing to the Americas used the McCorkell Line. One city that experienced a particularly strong influx of Irish immigrants was Liverpool, with at least one-quarter of the city's population being Irish-born by 1851. This would heavily influence the city's identity and culture in the coming years, earning it the nickname of "Ireland's second capital". Liverpool became the only place outside of Ireland to elect an Irish nationalist to parliament when it elected T. P. O'Connor in 1885, and it continuously re-elected him unopposed until his death in 1929. As of 2020, it is estimated that three quarters of people from the city have Irish ancestry. Of the more than 100,000 Irish who sailed to Canada in 1847, an estimated one in five died from disease and malnutrition, including over 5,000 at Grosse Isle, Quebec, an island in the Saint Lawrence River used to quarantine ships near Quebec City. Overcrowded, poorly maintained, and badly provisioned vessels known as coffin ships sailed from small, unregulated harbours in the West of Ireland in contravention of British safety requirements, and mortality rates were high.
The 1851 census reported that more than half the inhabitants of Toronto were Irish, and, in 1847 alone, 38,000 Irish flooded a city of fewer than 20,000 citizens. Other Canadian cities such as Quebec City, Montreal, Ottawa, Kingston, Hamilton, and Saint John also received large numbers. By 1871, 55% of Saint John residents were Irish natives or children of Irish-born parents. Unlike the United States, Canada could not close its ports to Irish ships because it was part of the British Empire, so emigrants could obtain cheap passage in returning empty lumber holds. In America, most Irish became city-dwellers; with little money, many had to settle in the cities where their ships landed. By 1850, the Irish made up a quarter of the population in Boston, New York City, Philadelphia, and Baltimore. The famine marked the beginning of the depopulation of Ireland in the 19th century. The population had increased by 13–14% in the first three decades of the 19th century; between 1831 and 1841, the population grew by 5%. Application of Thomas Malthus's idea of population expanding geometrically while resources increase arithmetically was popular during the famines of 1817 and 1822. By the 1830s, such ideas were seen as overly simplistic, and Ireland's problems were seen "less as an excess of population than as a lack of capital investment". The population of Ireland was increasing no faster than that of England, which suffered no equivalent catastrophe. By 1854, between 1.5 and 2 million Irish had left their country due to evictions, starvation, and harsh living conditions.

Death toll

It is not known exactly how many people died during the period of the famine, although it is believed that more died from disease than from starvation. State registration of births, marriages, or deaths had not yet begun, and records kept by the Catholic Church are incomplete. One possible estimate has been reached by comparing the expected population with the eventual numbers in the 1850s.
A census taken in 1841 recorded a population of 8,175,124. A census immediately after the famine in 1851 counted 6,552,385, a drop of over 1.5 million in 10 years.
Edwin of Northumbria, from which he launched raids into Ireland. How much influence the Northumbrians exerted on Mann is unknown, but very few place names on Mann are of Old English origin. Vikings arrived at the end of the 8th century. They established Tynwald and introduced many land divisions that still exist. In 1266, King Magnus VI of Norway ceded the islands to Scotland in the Treaty of Perth, but Scotland's rule over Mann did not become firmly established until 1275, when the Manx were defeated in the Battle of Ronaldsway, near Castletown. In 1290, King Edward I of England sent Walter de Huntercombe to take possession of Mann. It remained in English hands until 1313, when Robert Bruce took it after besieging Castle Rushen for five weeks. In 1314, it was retaken for the English by John Bacach of Argyll. In 1317, it was retaken for the Scots by Thomas Randolph, 1st Earl of Moray and Lord of the Isle of Man. It was held by the Scots until 1333. For some years thereafter control passed back and forth between the kingdoms until the English took it for the final time in 1346. The English Crown delegated its rule of the island to a series of lords and magnates. Tynwald passed laws concerning the government of the island in all respects and had control over its finances, but was subject to the approval of the Lord of Mann. In 1866, the Isle of Man obtained limited home rule, with partly democratic elections to the House of Keys, but the Legislative Council was appointed by the Crown. Since then, democratic government has been gradually extended. The Isle of Man has designated more than 250 historic sites as registered buildings.

Geography

The Isle of Man is an island located in the middle of the northern Irish Sea, almost equidistant from England to the east, Northern Ireland to the west, and Scotland (closest) to the north, while Wales to the south is almost the same distance away as the Republic of Ireland to the southwest.
Besides the island of Mann itself, the political unit of the Isle of Man includes some nearby small islands: the seasonally inhabited Calf of Man, Chicken Rock (on which stands an unstaffed lighthouse), St Patrick's Isle and St Michael's Isle. The last two of these are connected to the main island by permanent roads/causeways. Ranges of hills in the north and south are separated by a central valley. The northern plain, by contrast, is relatively flat, consisting mainly of deposits from glacial advances from western Scotland during colder times. There are more recently deposited shingle beaches at the northernmost point, the Point of Ayre. The island's highest point is the mountain Snaefell. According to an old saying, from the summit one can see six kingdoms: those of Mann, Scotland, England, Ireland, Wales, and Heaven. Some versions add a seventh kingdom, that of the sea, or Neptune. The peak of Snaefell is the only location in the British Isles from which one can see all the major constituents of the United Kingdom.

Population

At the 2021 census, the Isle of Man was home to 84,069 people, of whom 26,677 resided in the island's capital, Douglas. The population increased by 755 persons between the 2016 and 2021 censuses.

Census

The Isle of Man Full Census, last held in 2021, has been a decennial occurrence since 1821, with interim censuses being introduced from 1966. It is separate from, but similar to, the Census in the United Kingdom.

Government

The United Kingdom is responsible for the island's defence and ultimately for good governance, and for representing the island in international forums, while the island's own parliament and government have competence over all domestic matters.
Socio-political structure

The island's parliament, Tynwald, is claimed to have been in continuous existence since 979 or earlier, purportedly making it the oldest continuously governing body in the world, though evidence supports a much later date. Tynwald is a bicameral or tricameral legislature, comprising the House of Keys (directly elected by universal suffrage with a voting age of 16 years) and the Legislative Council (consisting of indirectly elected and ex-officio members). These two bodies also meet together in joint session as Tynwald Court. The executive branch of government is the Council of Ministers, which is composed of Members of Tynwald (usually Members of the House of Keys, though Members of the Legislative Council may also be appointed as Ministers). It is headed by the Chief Minister. Vice-regal functions of the head of state are performed by a lieutenant governor.

External relations and security

In various laws of the United Kingdom, "the United Kingdom" is defined to exclude the Isle of Man. Historically, the UK has taken care of the island's external and defence affairs, and retains paramount power to legislate for the island. However, in 2007, the Isle of Man and the UK signed an agreement that established frameworks for the development of the international identity of the Isle of Man. There is no separate Manx citizenship: citizenship is covered by UK law, and Manx people are classed as British citizens. There is a long history of relations and cultural exchange between the Isle of Man and Ireland. The Isle of Man's historic Manx language (and its modern revived variant) is closely related to both Scottish Gaelic and the Irish language, and in 1947, Irish Taoiseach Éamon de Valera spearheaded efforts to save the dying Manx language.

Defence

The Isle of Man is not part of the United Kingdom; however, the UK takes care of its external and defence affairs.
There are no independent military forces on the Isle of Man, although HMS Ramsey is affiliated with the town of the same name. From 1938 to 1955 there was the Manx Regiment of the British Territorial Army, which saw extensive action during the Second World War. In 1779, the Manx Fencible Corps, a fencible regiment of three companies, was raised; it was disbanded in 1783 at the end of the American War of Independence. Later, the Royal Manx Fencibles was raised at the time of the French Revolutionary Wars and Napoleonic Wars. The 1st Battalion (of 3 companies) was raised in 1793. A 2nd Battalion (of 10 companies) was raised in 1795, and it saw action during the Irish Rebellion of 1798. The regiment was disbanded in 1802. A third body of Manx Fencibles was raised in 1803 to defend the island during the Napoleonic Wars and to assist the Revenue. It was disbanded in 1811. The Isle of Man Home Guard was raised during the Second World War for home defence. In 2015 a multi-capability recruiting and training unit of the British Army Reserve was established in Douglas. Manxman status There is no citizenship of the Isle of Man as such under the British Nationality Acts 1948 and 1981. The Passport Office, Isle of Man, Douglas, accepts and processes applications for the Lieutenant Governor of the Isle of Man, who is formally responsible for issuing Isle of Man–issued British passports, titled "British Islands – Isle of Man". Isle of Man-issued British passports can presently be issued to any British citizen resident in the Isle of Man, and also to British citizens who have a qualifying close personal connection to the Isle of Man but are now resident either in the UK or in one of the other two Crown Dependencies. European Union The Isle of Man was never part of the European Union, nor did it have a special status, and thus it did not take part in the 2016 referendum on the UK's EU membership.
However, Protocol 3 of the UK's Act of Accession to the Treaty of Rome included the Isle of Man within the EU's customs area, allowing for trade in Manx goods without tariffs throughout the EU. As it was not part of the EU's internal market, there were still limitations on the movement of capital, services and labour. EU citizens were entitled to travel and reside, but not work, in the island without restriction. British citizens with Manxman status faced the same restrictions as citizens of any other non-EU country when seeking to work in the EU. The political and diplomatic impacts of Brexit on the island are still uncertain. The UK confirmed that the Crown Dependencies' positions were included in the Brexit negotiations. The Brexit withdrawal agreement explicitly included the Isle of Man in its territorial scope but otherwise made no mention of it. The island's government website stated that after the end of the implementation period, the Isle of Man's relationship with the EU would depend on the agreement reached between the UK and the EU on their future relationship. Commonwealth of Nations The Isle of Man is not a member of the Commonwealth of Nations. By virtue of its relationship with the United Kingdom, it takes part in several Commonwealth institutions, including the Commonwealth Parliamentary Association and the Commonwealth Games. The Government of the Isle of Man has made calls for a more integrated relationship with the Commonwealth, including more direct representation and enhanced participation in Commonwealth organisations and meetings, including Commonwealth Heads of Government Meetings. The Chief Minister of the Isle of Man has said: "A closer connection with the Commonwealth itself would be a welcome further development of the island's international relationships." Politics Most Manx politicians stand for election as independents rather than as representatives of political parties.
Although political parties do exist, their influence is not nearly as strong as in the United Kingdom. There are three political parties in the Isle of Man: The Liberal Vannin Party (established 2006) has one seat in the House of Keys; it promotes greater Manx autonomy and more accountability in government. The Manx Labour Party is active, and for much of the 20th century had several MHKs. Currently (since the 2021 general election) there are two MLP members in the House of Keys, both of whom are women. The Isle of Man Green Party was established in 2016, but currently only has representation at local government level. There are also a number of pressure groups on the island. Mec Vannin advocates the establishment of a sovereign republic. The Positive Action Group campaigns for three key elements to be introduced into the governance of the island: open accountable government, rigorous control of public finances, and a fairer society. Local government Local government on the Isle of Man is based partly on the island's 17 ancient parishes. There are four types of local authorities: a corporation for the Borough of Douglas, and bodies of commissioners for: the town districts of Castletown, Peel and Ramsey; the districts of Kirk Michael and Onchan; the village districts of Port Erin and Port St Mary; and the 13 parish districts (those historic parishes, or combinations or parts of them, which do not fall within the districts previously mentioned). Each of these districts has its own body of commissioners. Public services Education Public education is under the Department of Education, Sport & Culture. Thirty-two primary schools, five secondary schools and the University College Isle of Man function under the department. Health Two-thirds of residents of Mann are overweight or obese, four in ten are physically inactive, one-quarter are binge drinkers, one in twelve smoke cigarettes, and about 15% are in poor general health.
Healthcare is provided via a public health scheme by the Department of Health and Social Care for residents and visitors from the UK. Crime The Crime Severity Rate in Mann remains substantially lower than that in the rest of the United Kingdom, although the rate of violent crime has been increasing in recent years. Most violent crime is associated with the trade in illegal drugs. Emergency services The Isle of Man Government maintains five emergency services. These are: the Isle of Man Constabulary (police); the Isle of Man Coastguard; the Isle of Man Fire and Rescue Service; the Isle of Man Ambulance Service; and the Isle of Man Civil Defence Corps. All of these services are controlled directly by the Department of Home Affairs of the Isle of Man Government, and are independent of the United Kingdom. Nonetheless, the Isle of Man Constabulary voluntarily submits to inspection by the British inspectorate of police, and the Isle of Man Coastguard contracts Her Majesty's Coastguard (UK) for air-sea rescue operations. Crematorium The island's sole crematorium is in Glencrutchery Road, Douglas, and is operated by Douglas Borough Council. Usually staffed by four people, the crematorium had its staffing increased to 12 in March 2020, a measure announced by the council leader in response to the threat of coronavirus. Economy The Isle of Man is a low-tax economy with no capital gains tax, wealth tax, stamp duty, or inheritance tax, and a top rate of income tax of 20%. A tax cap is in force: the maximum amount of tax payable by an individual is £200,000, or £400,000 for couples if they choose to have their incomes jointly assessed. Personal income is assessed and taxed on a total worldwide income basis rather than a remittance basis. This means that all income earned throughout the world is assessable for Manx tax rather than only income earned in or brought into the island. The standard rate of corporation tax for residents and non-residents is 0%.
Retail business profits above £500,000 and banking business income are taxed at 10%, and rental (or other) income from land and buildings situated on the Isle of Man is taxed at 20%. Mann's low corporate tax burden and absence of public registries of corporate ownership provide tax avoidance and tax evasion strategies for individuals and corporations, resulting in a large influx of funds from those in pursuit of tax advantage and financial confidentiality. The relative importance of agriculture, fishing and tourism, the former mainstays of the Manx economy, has accordingly declined. As is typical of the low-tax crown dependencies, Mann's economy features financial services, shell corporations for high-technology companies, online gambling and online gaming, cinema production, and tax havens for high net worth individuals. These activities have brought some high-income jobs to Mann, as hundreds of local residents serve as "straw man" directors and shareholders of shell companies. Similar schemes provide a means for high net worth individuals to reduce their tax obligations and to shield their financial dealings from public scrutiny. As described in the Paradise Papers, the Isle of Man economy features extensive illegal economic activity, including tax evasion, money laundering from drug sales, money transfers from weapons sales, and looting of the public treasuries of other nation states (particularly Russia). These funds are mostly funnelled into the London financial markets. There has been an effort to regulate these activities, though the impact of legal measures instituted by the Isle of Man government remains uncertain. Online gambling sites provided about 10% of the island's income in 2014. The Isle of Man currently enjoys free access to EU markets and trade is mostly with the UK.
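The headline tax rules described above can be sketched as a toy calculation. This is an illustration of the figures given in the text only, not official Treasury logic: the function names are invented, and the flat-rate simplification (ignoring personal allowances and any lower-rate band, which the text does not cover) is an assumption.

```python
# Toy sketch of the Isle of Man tax figures quoted above.
# Assumptions: personal allowances and any lower-rate band are ignored,
# since the text only gives the 20% top rate, the tax cap, and the
# corporate rate schedule.

TAX_CAP_INDIVIDUAL = 200_000   # maximum personal tax payable (GBP)
TAX_CAP_JOINT = 400_000        # cap for jointly assessed couples

def personal_tax(worldwide_income: float, jointly_assessed: bool = False) -> float:
    """Flat 20% on total worldwide income, limited by the tax cap."""
    cap = TAX_CAP_JOINT if jointly_assessed else TAX_CAP_INDIVIDUAL
    return min(0.20 * worldwide_income, cap)

def corporate_rate(business_type: str, profit: float = 0.0) -> float:
    """Standard rate 0%; 10% for banking and for retail profits above
    £500,000; 20% for rental income from Manx land and buildings."""
    if business_type == "land_and_property":
        return 0.20
    if business_type == "banking":
        return 0.10
    if business_type == "retail" and profit > 500_000:
        return 0.10
    return 0.0

# A £2m earner would owe £400,000 at 20%, but the cap limits it to £200,000.
print(personal_tax(2_000_000))            # 200000
print(corporate_rate("retail", 400_000))  # 0.0 (below the £500,000 threshold)
```

The cap is what makes the regime attractive to high earners: past £1m of income (at a flat 20%), additional income is effectively untaxed.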
The Isle of Man's trade relationship with the EU derives from the United Kingdom's EU membership and will need to be renegotiated in light of the United Kingdom's decision to withdraw from the bloc. A transition period is expected to allow the free movement of goods and agricultural products to the EU until the end of 2020 or until a new settlement is negotiated. The Isle of Man Department for Enterprise manages the diversified economy in 12 key sectors. The largest sectors by GNP are insurance and eGambling with 17% of GNP each, followed by ICT and banking with 9% each. The 2016 census lists 41,636 total employed. The largest sectors by employment are … of warehouses for cannabis cultivation, and research facilities, and to develop the business. It was announced that zoning permits had been granted for development of the facility. Although the availability of medical cannabis is heavily restricted within the UK proper, there has been an effort to develop the cannabis industry on the Channel Islands of Jersey and Guernsey. Culture The Manx are a Celtic nation. The culture of the Isle of Man is often promoted as being influenced by its Celtic and, to a lesser extent, its Norse origins. Proximity to the UK, popularity as a UK tourist destination in Victorian times, and immigration from Britain have all meant that the cultures of Great Britain have been influential at least since Revestment. Revival campaigns have attempted to preserve the surviving vestiges of Manx culture after a long period of Anglicisation, and there has been significantly increased interest in the Manx language, history and musical tradition. Language The official language of the Isle of Man is English. Manx has traditionally been spoken but has been described as "critically endangered". However, it now has a growing number of young speakers. Manx is a Goidelic Celtic language and is one of a number of insular Celtic languages spoken in the British Isles.
Manx has been officially recognised as a legitimate autochthonous regional language under the European Charter for Regional or Minority Languages, ratified by the United Kingdom on 27 March 2001 on behalf of the Isle of Man government. Manx is closely related to Irish and Scottish Gaelic, but is orthographically sui generis. On the island, the Manx greetings (good morning) and (good afternoon) can often be heard. As in Irish and Scottish Gaelic, the concepts of "evening" and "afternoon" are referred to with one word. Two other Manx expressions often heard are Gura mie eu ("Thank you"; familiar 2nd person singular form Gura mie ayd) and , meaning "time enough", which represents a stereotypical view of the Manx attitude to life. In the 2011 Isle of Man census, approximately 1,800 residents could read, write, and speak the Manx language. Symbols For centuries, the island's symbol has been the so-called "three legs of Mann" (), a triskelion of three legs conjoined at the thigh. The Manx triskelion, which dates back with certainty to the late 13th century, is of uncertain origin. It has been suggested that its origin lies in Sicily, an island which has been associated with the triskelion since ancient times. The symbol appears in the island's official flag and official coat of arms, as well as its currency. The Manx triskelion may be reflected in the island's motto, Quocunque jeceris stabit, which appears as part of the island's coat of arms. The Latin motto translates as "whichever way you throw, it will stand" or "whithersoever you throw it, it will stand". It dates to the late 17th century when it is known to have appeared on the island's coinage. It has also been suggested that the motto originally referred to the poor quality of coinage which was common at the time—as in "however it is tested it will pass". The ragwort or cushag has been referred to as the Manx national flower. Religion The predominant religious tradition of the island is Christianity. 
Before the Protestant Reformation, the island had a long history as part of the unified Catholic Church, and in the years following the Reformation, the religious authorities on the island, and later the population of the island, accepted the religious authority of the British monarchy and the Church of England. It has also come under the influence of Irish religious tradition. The island forms a separate diocese called Sodor and Man, which in the distant past comprised the medieval kingdom of Man and the Scottish isles ("Suðreyjar" in Old Norse). It now consists of 16 parishes, and since 1541 has formed part of the Province of York. (These modern ecclesiastical parishes do not correspond to the island's ancient parishes mentioned in this article under "Local Government", but more closely reflect the current geographical distribution of population.) Other Christian churches also operate on the Isle of Man. The second largest denomination is the Methodist Church, whose Isle of Man District is close in numbers to the Anglican diocese. There are eight Catholic parish churches, included in the Catholic Archdiocese of Liverpool, as well as a presence of Eastern Orthodox Christians. Additionally there are five Baptist churches, four Pentecostal churches, the Salvation Army, a ward of The Church of Jesus Christ of Latter-day Saints, two congregations of Jehovah's Witnesses, two United Reformed churches, as well as other Christian churches. There is a small Muslim community, with its own mosque in Douglas, and there is also a small Jewish community; see history of the Jews in the Isle of Man. Myth, legend and folklore In Manx mythology, the island was ruled by the sea god Manannán, who would draw his misty cloak around the island to protect it from invaders. One of the principal folk theories about the origin of the name Mann is that it is named after Manannán. In the Manx tradition of folklore, there are many stories of mythical creatures and characters. 
These include the , a malevolent spirit which, according to legend, blew the roof off St Trinian's Church in a fit of rage; the ; the ; and the , a ghostly black dog which wandered the walls and corridors of Peel Castle. The Isle of Man is also said to be home to fairies, known locally as "the little folk" or "themselves". There is a famous Fairy Bridge, and it is said to be bad luck if one fails to wish the fairies good morning or afternoon when passing over it. It used to be a tradition to leave a coin on the bridge to ensure good luck. Other types of fairies are the and the . An old Irish story tells how Lough Neagh was formed when Ireland's legendary giant Fionn mac Cumhaill (commonly anglicised to Finn McCool) ripped up a portion of the land and tossed it at a Scottish rival. He missed, and the chunk of earth landed in the Irish Sea, thus creating the island. Peel Castle has been proposed as a possible location of the Arthurian Avalon or as the location of the Grail Castle, site of Lancelot's encounter with the sword bridge of King Maleagant. One of the most oft-repeated myths is that people found guilty of witchcraft were rolled down Slieau Whallian, a hill near St John's, in a barrel. However this is a 19th-century legend derived from a Scottish legend, which in turn comes from a German legend. Separately, a witchcraft museum was opened at the Witches Mill, Castletown in 1951. There has never actually been a witches' coven on that site; the myth was only created with the opening of the museum. However, there has been a strong tradition of herbalism and the use of charms to prevent and cure illness and disease in people and animals. Music The music of the Isle of Man reflects Celtic, Norse and other influences, including from its neighbours, Scotland, Ireland, England and Wales. A wide range of music is performed on the island, such as rock, blues, jazz and pop. 
Its traditional folk music has undergone a revival since the 1970s, starting with a music festival called in Ramsey. This was part of a general revival of the Manx language and culture after the death of the last native speaker of Manx in 1974. The Isle of Man was mentioned in the Who song "Happy Jack" as the homeland of the song's titular character, who is always in a state of ecstasy, no matter what happens to him. The song "The Craic was 90 in the Isle of Man" by Christy Moore describes a lively visit during the island's tourism heyday. The island is also the birthplace of Maurice, Robin and Barry Gibb of the Bee Gees; in July 2021, during a renovation of Douglas Promenade, the Isle of Man government unveiled a bronze statue of the trio there in commemoration of their birth and upbringing on the island. Food In the past, the basic national dish of the island was spuds and herrin, boiled potatoes and herring. This plain dish was the mainstay of the subsistence farmers of the island, who for centuries crofted the land and fished the sea. Chips, cheese and gravy, a dish similar to poutine, is found in most of the island's fast-food outlets, and consists of thick cut chips, covered in shredded Cheddar cheese and topped with a thick gravy. However, as of the Isle of Man Food & Drink Festival 2018, queenies have been crowned the Manx national dish. Seafood has traditionally accounted for a large proportion of the local diet. Although commercial fishing has declined in recent years, local delicacies include Manx kippers (smoked herring), which are produced by the smokeries in Peel on the west coast of the island, albeit mainly from North Sea herring these days. The smokeries also produce other specialities including smoked salmon and bacon. Crab, lobster and scallops are commercially fished, and the queen scallop (queenies) is regarded as a particular delicacy, with a light, sweet flavour.
Cod, ling and mackerel are often angled for the table, and freshwater trout and salmon can be taken from the local rivers and lakes, supported by the government fish hatchery at Cornaa on the east coast. Cattle, sheep, pigs and poultry are all commercially farmed; Manx lamb from the hill farms is a popular dish. The Loaghtan, the indigenous breed of Manx sheep, has a rich, dark meat that has found favour with chefs, featuring in dishes on the BBC's MasterChef series. Manx cheese has also found some success, featuring smoked and herb-flavoured varieties, and is stocked by many of the UK's supermarket chains. Manx cheese took bronze medals in the 2005 British Cheese Awards, and sold 578 tonnes over the year. Manx cheddar has been exported to Canada where it is available in some supermarkets. Beer is brewed on a commercial scale by Okells Brewery, which was established in 1850 and is the island's largest brewer; and also by Bushy's Brewery and the Hooded Ram Brewery. The Isle of Man's Pure Beer Act of 1874, which resembles the German Reinheitsgebot, is still in effect: under this Act, brewers may only use water, malt, sugar and hops in their brews. Sport The Isle of Man is represented as a nation in the Commonwealth Games and the Island Games and hosted the IV Commonwealth Youth Games in 2011. Manx athletes have won three gold medals at the Commonwealth Games, including the one by cyclist Mark Cavendish in 2006 in the Scratch race. The Island Games were first held on the island in 1985, and again in 2001. In 2019, FC Isle of Man was founded and is a North West Counties League team. Isle of Man teams and individuals participate in many sports both on and off the island including rugby union, football, gymnastics, field hockey, netball, taekwondo, bowling, obstacle course racing and cricket. It being an island, many types of watersports are also popular with residents. 
Motorcycle racing The main international event associated with the island is the Isle of Man Tourist Trophy race, colloquially known as "The TT", which began in 1907. It takes place in late May and early June. The TT is now an international road racing event for motorcycles, which used to be part of the World Championship, and has long been considered one of the "greatest motorcycle sporting events of the world". Taking place over a two-week period, it has become a festival for motorcycling culture, makes a huge contribution to the island's economy and has become part of Manx identity. For many, the Isle carries the title "road racing capital of the world". The Manx Grand Prix is a separate motorcycle racing event.
other branches of Indo-European, such as Greek, belonged to a single branch of the family, parallel for example to Celtic and Germanic. The founder of this theory is Antoine Meillet (1866–1936). This unitary theory has been criticized by, among others, Alois Walde, Vittore Pisani and Giacomo Devoto, who proposed that the Latino-Faliscan and Osco-Umbrian languages constituted two distinct branches of Indo-European. This view gained acceptance in the second half of the 20th century, though proponents such as Rix would later reject the idea, and the unitary theory remains dominant in contemporary scholarship. Classification The following classification, proposed by Michiel de Vaan (2008), is generally agreed on, although some scholars have recently rejected the position of Venetic within the Italic branch.

Proto-Italic (or Proto-Italo-Venetic)
  Proto-Venetic
    Venetic (550–100 BC)
  Proto-Latino-Sabellic
    Latino-Faliscan
      Early Faliscan (7th–5th c. BC)
      Middle Faliscan (5th–3rd c. BC)
      Late Faliscan (3rd–2nd c. BC), strongly influenced by Latin
      Old Latin (6th–1st c. BC)
        Classical Latin (1st c. BC–3rd c. AD)
        Late Latin (3rd–6th c. AD)
        Vulgar Latin (2nd c. BC–9th c. AD), evolved into Proto-Romance (the reconstructed Late Vulgar Latin ancestor of the Romance languages) between the 3rd and 8th c. AD
          Romance languages, non-mutually intelligible with Latin since at least the 9th c. AD; the only Italic languages still spoken today: Gallo-Romance (attested from 842 AD), Italo-Dalmatian (ca. 960), Occitano-Romance (ca. 1000), Ibero-Romance (ca. 1075), Rhaeto-Romance (ca. 1100), Sardinian (1102), African Romance (extinct; spoken at least until the 12th c. AD), Eastern Romance (1521)
    Sabellic (Osco-Umbrian)
      Umbrian (7th–1st c. BC), including dialects like Aequian, Marsian, or Volscian
      Oscan (5th–1st c. BC), including dialects like Hernican, North Oscan (Marrucinian, Paelignian, Vestinian), or Sabine (Samnite)
      Picene languages
        Pre-Samnite (6th–5th c. BC)
        South Picene (6th–4th c. BC) (?)
Sicel (?) Lusitanian (?) History Proto-Italic period Proto-Italic was probably originally spoken by Italic tribes north of the Alps. In particular, early contacts with Celtic and Germanic speakers are suggested by linguistic evidence. Bakkum defines Proto-Italic as a "chronological stage" without an independent development of its own, but extending over late Proto-Indo-European and the initial stages of Proto-Latin and Proto-Sabellic. Meiser's dates of 4000 BC to 1800 BC, well before Mycenaean Greek, are described by him as being "as good a guess as anyone's". Schrijver argues for a Proto-Italo-Celtic stage, which he suggests was spoken in "approximately the first half or the middle of the 2nd millennium BC", from which Celtic split off first, then Venetic, before the remainder, Italic, split into Latino-Faliscan and Sabellian. Italic peoples probably moved towards the Italian Peninsula during the second half of the 2nd millennium BC, gradually reaching the southern regions. Although an equation between archaeological and linguistic evidence cannot be established with certainty, the Proto-Italic language is generally associated with the Terramare (1700–1150 BC) and Proto-Villanovan culture (1200–900 BC). Languages of Italy in the Iron Age At the start of the Iron Age, around 700 BC, Ionian Greek settlers from Euboea established colonies along the coast of southern Italy. They brought with them the alphabet, which they had learned from the Phoenicians; specifically, what we now call the Western Greek alphabet. The invention quickly spread through the whole peninsula, across language and political barriers. Local adaptations (mainly minor letter shape changes and the dropping or addition of a few letters) yielded several Old Italic alphabets. The inscriptions show that, by 700 BC, many languages were spoken in the region, including members of several branches of Indo-European and several non-Indo-European languages.
The most important of the latter was Etruscan, attested by evidence from more than 10,000 inscriptions and some short texts. No relation has been found between Etruscan and any other known language, and there is still no clue about its possible origin (except for inscriptions on the island of Lemnos in the eastern Mediterranean). Other possibly non-Indo-European languages present at the time were Rhaetian in the Alpine region, Ligurian around present-day Genoa, and some unidentified language(s) in Sardinia. Those languages have left some detectable imprint in Latin. The largest language in southern Italy, except Ionic Greek spoken in the Greek colonies, was Messapian, known from some 260 inscriptions dating from the 6th and 5th centuries BC. Messapian has a historical connection with the Illyrian tribes, reinforced by archaeological parallels in ceramics and metalwork between the two peoples, which motivated the hypothesis of a linguistic connection. But the evidence of Illyrian inscriptions is reduced to personal names and places, which makes it difficult to support such a hypothesis. It has also been proposed that the Lusitanian language may have belonged to the Italic family. Timeline of Latin In the history of ancient Latin, there are several periods: From the archaic period, several inscriptions of the 6th to the 4th centuries BC, fragments of the oldest laws, fragments from the sacral anthem of the Salii, and the anthem of the Arval Brethren were preserved. In the pre-classical period (3rd and 2nd centuries BC), the literary Latin language (the comedies of Plautus and Terence, the agricultural treatise of Cato the Elder, fragments of works by a number of other authors) was based on the dialect of Rome.
The period of classical ("golden") Latin, lasting until the death of Ovid in AD 17, was particularly distinguished: vocabulary and terminology developed, old morphological doublets were eliminated, and literature flowered (Cicero, Caesar, Sallust, Virgil, Horace, Ovid). The period of classical ("silver") Latin, lasting until the death of the emperor Marcus Aurelius in AD 180, saw works by Juvenal, Tacitus, Suetonius and the Satyricon of Petronius; during this time the phonetic, morphological and spelling norms were finally formed. As the Roman Republic extended its political dominion over the whole of the Italian peninsula, Latin became dominant over the other Italic languages, which ceased to be spoken perhaps sometime in the 1st century AD. From Vulgar Latin, the Romance languages emerged. The Latin language gradually spread beyond Rome, along with the growth of the power of this state, displacing, beginning in the 4th and 3rd centuries BC, the languages of other Italic tribes, as well as Illyrian, Messapian and Venetic, etc. The Romanisation of the Italian Peninsula was basically complete by the 1st century BC, except for the south of Italy and Sicily, where the dominance of Greek was preserved. The attribution of Ligurian is controversial. Origin theories The main debate concerning the origin of the Italic languages mirrors that on the origins of the Greek ones, except that there is no record of any "early Italic" to play the role of Mycenaean Greek. All we know about the linguistic landscape of Italy is from inscriptions made after the introduction of the alphabet in the peninsula, around 700 BC onwards, and from Greek and Roman writers several centuries later. The oldest known samples come from Umbrian and Faliscan inscriptions from the 7th century BC. Their alphabets were clearly derived from the Etruscan alphabet, which was derived from the Western Greek alphabet not much earlier than that.
There is no reliable information about the languages spoken before that time. Some conjectures can be made based on toponyms, but they cannot be verified. There is no guarantee that the intermediate phases between those old Italic languages and Indo-European will ever be found. The question remains whether Italic originated outside Italy or developed by assimilation of Indo-European and other elements within Italy, approximately on or within its historically attested range. An extreme view of some linguists and historians is that there is no such thing as "the Italic branch" of Indo-European: that is, there never was a unique "Proto-Italic" whose diversification resulted in those languages. Some linguists, like Silvestri and Rix, further argue that no common Proto-Italic can be reconstructed such that (1) its phonological system may have developed into those of Latin and Osco-Umbrian through consistent phonetic changes, and (2) its phonology and morphology can be consistently derived from those of Proto-Indo-European. (Rix, however, later changed his mind and became an outspoken supporter of Italic as a family.) These linguists propose instead that the ancestors of the 1st-millennium Indo-European languages of Italy were two or more different languages that descended separately from Indo-European in a more remote past and entered Europe separately, possibly by different routes and/or in different epochs. That view stems in part from the difficulty of identifying a common Italic homeland in prehistory, or of reconstructing an ancestral "Common Italic" or "Proto-Italic" language from which those languages could have descended. Some common features that seem to connect the languages may be just a sprachbund phenomenon – a linguistic convergence due to contact over a long period, as in the most widely accepted version of the Italo-Celtic hypothesis.
Characteristics General and specific characteristics of the pre-Roman Italic languages: in phonetics: Oscan (in comparison with Latin and Umbrian) preserved the old diphthongs ai, oi, ei, ou in all positions, shows an absence of rhotacism and of sibilants, and developed kt > ht; Latin and Osco-Umbrian treat Indo-European kw and gw differently (Latin qu and v, Osco-Umbrian p and b); Osco-Umbrian preserved s before nasal sonorants and reflects Indo-European *dh and *bh as f; initial stress (replaced in Latin during the historical period) led to syncope and to the reduction of vowels in unstressed syllables; in syntax: many convergences; in Osco-Umbrian, impersonal constructions, parataxis, the partitive genitive, the temporal genitive and other genitive relationships are used more often. Phonology The most distinctive feature of the Italic languages is the development of the PIE voiced aspirated stops. In initial position, *bʰ-, *dʰ- and *gʷʰ- merged to /f-/, while *gʰ- became /h-/, although Latin also has *gʰ- > /v-/ and /g-/ in special environments. In medial position, all voiced aspirated stops have a distinct reflex in Latin, with a different outcome for *-gʰ- and *-gʷʰ- if preceded by a nasal. In Osco-Umbrian, they generally have the same reflexes as in initial position, although Umbrian shows a special development if preceded by a nasal, just as in Latin. Most probably, the voiced aspirated stops went through an intermediate stage *-β-, *-ð-, *-ɣ- and *-ɣʷ- in Proto-Italic. The voiceless and plain voiced stops (*p, *t, *k, *kʷ; *b, *d, *g, *gʷ) remained unchanged in Latin, except for the minor shift of *gʷ > /v/. In Osco-Umbrian, the labiovelars *kʷ and *gʷ became the labial stops /p/ and /b/, e.g. Oscan pis 'who?' (cf. Latin quis) and bivus 'alive (nom.pl.)' (cf. Latin vivus).
Grammar In grammar there are basically three innovations shared by the Osco-Umbrian and the Latino-Faliscan languages: A suffix in the imperfect subjunctive *-sē- (in Oscan the 3rd person singular of the imperfect subjunctive fusíd and Latin foret, both derivatives of *fusēd). A suffix in the imperfect indicative *-fā- (Oscan fufans 'they were'; in Latin this suffix became -bā- as in portabāmus 'we carried'). A suffix to derive gerundive adjectives from verbs *-ndo- (Latin operandam 'which will be built'; in Osco-Umbrian there is the additional reduction -nd- > -nn-, Oscan úpsannam 'which will be built', Umbrian pihaner 'which will be purified'). These shared innovations are, in turn, among the main arguments in favour of an Italic group, though they are questioned by other authors. Lexical comparison Among the Indo-European languages, the Italic languages share a higher percentage of lexicon with the Celtic and the Germanic ones, three of the four traditional "centum" branches of Indo-European.
The protocol specified that characters were 8-bit but did not specify the character encoding the text was supposed to use. This can cause problems when users on different clients and/or different platforms want to converse. All client-to-server IRC protocols in use today are descended from the protocol implemented in the irc2.4.0 version of the IRC2 server, and documented in RFC 1459. Since RFC 1459 was published, the new features in the irc2.10 implementation led to the publication of several revised protocol documents (RFC 2810, RFC 2811, RFC 2812 and RFC 2813); however, these protocol changes have not been widely adopted among other implementations. Although many specifications on the IRC protocol have been published, there is no official specification, as the protocol remains dynamic. Virtually no clients and very few servers rely strictly on the above RFCs as a reference. Microsoft made an extension for IRC in 1998 via the proprietary IRCX. They later stopped distributing software supporting IRCX, instead developing the proprietary MSNP. The standard structure of a network of IRC servers is a tree. Messages are routed along only necessary branches of the tree but network state is sent to every server and there is generally a high degree of implicit trust between servers. However, this architecture has a number of problems. A misbehaving or malicious server can cause major damage to the network and any changes in structure, whether intentional or a result of conditions on the underlying network, require a net-split and net-join. This results in a lot of network traffic and spurious quit/join messages to users and temporary loss of communication to users on the splitting servers. Adding a server to a large network means a large background bandwidth load on the network and a large memory load on the server.
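The RFC 1459 line format (an optional ':'-prefixed origin, a command, space-separated parameters, and an optional trailing parameter introduced by " :") can be sketched as a small parser. This is an illustrative sketch, not any particular client's code; the sample messages are invented. Note that, because the protocol only defines byte sequences, a real client would apply this to decoded text or operate on bytes directly:

```python
def parse_irc_line(line):
    """Split one raw IRC line into (prefix, command, params) per RFC 1459."""
    line = line.rstrip("\r\n")
    prefix = None
    if line.startswith(":"):                 # optional prefix names the origin
        prefix, _, line = line[1:].partition(" ")
    # A trailing parameter, introduced by " :", may contain spaces.
    body, sep, trailing = line.partition(" :")
    params = body.split()
    command = params.pop(0)
    if sep:
        params.append(trailing)
    return prefix, command, params

prefix, cmd, params = parse_irc_line(":alice!u@example.net PRIVMSG #chat :hello, world\r\n")
# prefix == "alice!u@example.net", cmd == "PRIVMSG", params == ["#chat", "hello, world"]
```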
Once established, however, each message to multiple recipients is delivered in a fashion similar to multicast, meaning each message travels a network link exactly once. This is a strength in comparison to non-multicasting protocols such as Simple Mail Transfer Protocol (SMTP) or Extensible Messaging and Presence Protocol (XMPP). An IRC daemon can also be used on a local area network (LAN). IRC can thus be used to facilitate communication between people within the local area network (internal communication). Commands and replies IRC has a line-based structure. Clients send single-line messages to the server, receive replies to those messages and receive copies of some messages sent by other clients. In most clients, users can enter commands by prefixing them with a '/'. Depending on the command, these may either be handled entirely by the client, or (generally for commands the client does not recognize) passed directly to the server, possibly with some modification. Due to the nature of the protocol, automated systems cannot always reliably pair a sent command with its reply, and must resort to guessing. Channels The basic means of communicating to a group of users in an established IRC session is through a channel. Channels on a network can be displayed using the IRC command LIST, which lists all currently available channels on that particular network that do not have the modes +s or +p set. Users can join a channel using the JOIN command, in most clients available as /join #channelname. Messages sent to the joined channels are then relayed to all other users on those channels. Channels that are available across an entire IRC network are prefixed with a '#', while those local to a server use '&'. Other less common channel types include '+' channels—'modeless' channels without operators—and '!' channels, a form of timestamped channel on normally non-timestamped networks.
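The '/'-command convention and the channel-name prefixes described above can be sketched as follows. This is a simplification (the nicknames, the raw command mapping, and the hypothetical '#chat' current target are invented; real clients track the active window and handle many commands locally):

```python
# Channel-name prefixes and their meanings, as described in the text.
CHANNEL_PREFIXES = {
    "#": "network-wide",
    "&": "server-local",
    "+": "modeless",
    "!": "timestamped",
}

def is_channel(target):
    """True if an IRC message target names a channel rather than a user."""
    return target[:1] in CHANNEL_PREFIXES

def user_input_to_message(text, current_target="#chat"):
    """Turn what a user types into the raw line a client might send.

    '/join #chat' becomes 'JOIN #chat'; plain text becomes a PRIVMSG to
    the current target ('#chat' here is a hypothetical example).
    """
    if text.startswith("/"):
        cmd, _, rest = text[1:].partition(" ")
        return f"{cmd.upper()} {rest}".strip()
    return f"PRIVMSG {current_target} :{text}"

assert is_channel("#linux") and is_channel("&local")
assert not is_channel("alice")
assert user_input_to_message("/join #linux") == "JOIN #linux"
```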
Modes Users and channels may have modes that are represented by single case-sensitive letters and are set using the MODE command. User modes and channel modes are separate and can use the same letter to mean different things (e.g. user mode "i" is invisible mode while channel mode "i" is invite only). Modes are usually set and unset using the mode command that takes a target (user or channel), a set of modes to set (+) or unset (-) and any parameters the modes need. Some channel modes take parameters and other channel modes apply to a user on a channel or add or remove a mask (e.g. a ban mask) from a list associated with the channel rather than applying to the channel as a whole. Modes that apply to users on a channel have an associated symbol that is used to represent the mode in names replies (sent to clients on first joining a channel and use of the names command) and in many clients also used to represent it in the client's displayed list of users in a channel or to display a separate indicator for a user's modes. In order to correctly parse incoming mode messages and track channel state, the client must know which mode is of which type and, for the modes that apply to a user on a channel, which symbol goes with which letter. In early implementations of IRC this had to be hard-coded in the client but there is now a de facto standard extension to the protocol called ISUPPORT that sends this information to the client at connect time using numeric 005. There is a small design fault in IRC regarding modes that apply to users on channels: the names message used to establish initial channel state can only send one such mode per user on the channel, but multiple such modes can be set on a single user. For example, if a user holds both operator status (+o) and voice status (+v) on a channel, a new client will be unable to see the mode with less priority (i.e. voice). Workarounds for this are possible on both the client and server side but none are widely implemented.
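The mode-type bookkeeping described above can be sketched as follows. The mode letters here are drawn from a typical ISUPPORT advertisement (something like CHANMODES=b,k,l,imnpst and PREFIX=(ov)@+); the exact sets vary by network, which is precisely why ISUPPORT exists:

```python
# Mode categories as a client might learn them from ISUPPORT (numeric 005).
LIST_MODES   = set("b")       # type A: always take a parameter; edit a list (e.g. bans)
PARAM_ALWAYS = set("k")       # type B: parameter both when set and when unset
PARAM_SET    = set("l")       # type C: parameter only when set
FLAG_MODES   = set("imnpst")  # type D: never take a parameter
PREFIX_MODES = set("ov")      # apply to a user on the channel (shown as @ and +)

def parse_mode_change(modestring, args):
    """Expand a MODE change like '+ov-b alice bob mask' into (op, letter, arg) triples."""
    changes, adding, args = [], True, list(args)
    for ch in modestring:
        if ch == "+":
            adding = True
        elif ch == "-":
            adding = False
        else:
            needs_arg = (ch in LIST_MODES or ch in PREFIX_MODES
                         or ch in PARAM_ALWAYS
                         or (ch in PARAM_SET and adding))
            arg = args.pop(0) if needs_arg and args else None
            changes.append(("+" if adding else "-", ch, arg))
    return changes

changes = parse_mode_change("+ov-b", ["alice", "bob", "*!*@spam.example"])
# [("+", "o", "alice"), ("+", "v", "bob"), ("-", "b", "*!*@spam.example")]
```

Without the category tables, the parser could not know whether a letter consumes a parameter, which is the tracking problem the text describes.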
Standard (RFC 1459) modes Many daemons and networks have added extra modes or modified the behavior of modes in the above list. Channel operators A channel operator is a client on an IRC channel that manages the channel. IRC channel operators can easily be recognized by a symbol or icon next to their name (varies by client implementation, commonly a "@" symbol prefix, a green circle, or a Latin letter "+o"/"o"). On most networks, an operator can: Kick a user. Ban a user. Give another user IRC Channel Operator Status or IRC Channel Voice Status. Change the IRC Channel topic while channel mode +t is set. Change the IRC Channel Mode locks. IRC operators There are also users who maintain elevated rights on their local server, or the entire network; these are called IRC operators, sometimes shortened to IRCops or Opers (not to be confused with channel operators). As the implementation of the IRCd varies, so do the privileges of the IRC operator on the given IRCd. RFC 1459 claims that IRC operators are "a necessary evil" to keep a clean state of the network, and as such they need to be able to disconnect and reconnect servers. Additionally, to prevent malicious users or even harmful automated programs from entering IRC, IRC operators are usually allowed to disconnect clients and completely ban IP addresses or complete subnets. Networks that carry services (NickServ et al.) usually allow their IRC operators also to handle basic "ownership" matters. Further privileged rights may include overriding channel bans (being able to join channels they would not be allowed to join, if they were not opered), being able to op themselves on channels where they would not be able without being opered, being auto-opped on channels always and so forth. Hostmasks A hostmask is a unique identifier of an IRC client connected to an IRC server. IRC servers, services, and other clients, including bots, can use it to identify a specific IRC session. The format of a hostmask is nick!user@host.
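The nick!user@host format can be illustrated with a small parser and a wildcard matcher of the kind used for ban masks. A sketch, not any server's actual implementation; IRC's network-specific case-mapping rules are simplified here to plain lowercasing, and Python's fnmatchcase additionally interprets '[...]', which real IRC masks do not:

```python
from fnmatch import fnmatchcase

def split_hostmask(hostmask):
    """Split 'nick!user@host' into its three parts."""
    nick, _, rest = hostmask.partition("!")
    user, _, host = rest.partition("@")
    return nick, user, host

def mask_matches(mask, hostmask):
    """Wildcard match of a ban-style mask ('*' and '?') against a hostmask."""
    return fnmatchcase(hostmask.lower(), mask.lower())

# The '~' prefix marks a username the server could not verify via ident.
assert split_hostmask("alice!~ada@host.example") == ("alice", "~ada", "host.example")
assert mask_matches("*!*@host.example", "alice!~ada@host.example")
assert not mask_matches("bob!*@*", "alice!~ada@host.example")
```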
The hostmask looks similar to, but should not be confused with, an e-mail address. The nick part is the nickname chosen by the user and may be changed while connected. The user part is the username reported by ident on the client. If ident is not available on the client, the username specified when the client connected is used after being prefixed with a tilde. The host part is the hostname the client is connecting from. If the IP address of the client cannot be resolved to a valid hostname by the server, the IP address is used instead of the hostname. Because of the privacy implications of exposing the IP address or hostname of a client, some IRC daemons also provide privacy features, such as InspIRCd or UnrealIRCd's "+x" mode. This hashes a client IP address or masks part of a client's hostname, making it unreadable to users other than IRCops. Users may also have the option of requesting a "virtual host" (or "vhost"), to be displayed in the hostmask to allow further anonymity. Some IRC networks, such as Libera Chat or Freenode, use these as "cloaks" to indicate that a user is affiliated with a group or project. URI scheme There are three recognized uniform resource identifier (URI) schemes for Internet Relay Chat: irc, ircs, and irc6. When supported, they allow hyperlinks of various forms, including irc://<host>[:<port>]/[<channel>[?<channel_keyword>]] ircs://<host>[:<port>]/[<channel>[?<channel_keyword>]] irc6://<host>[:<port>]/[<channel>[?<channel_keyword>]] (where items enclosed within brackets ([,]) are optional) to be used to (if necessary) connect to the specified host (or network, if known to the IRC client) and join the specified channel. (This can be used within the client itself, or from another application such as a Web browser). irc is the default URI, irc6 specifies a connection to be made using IPv6, and ircs specifies a secure connection.
Per the specification, the usual hash symbol (#) will be prepended to channel names that begin with an alphanumeric character—allowing it to be omitted. Some implementations (for example, mIRC) will do so unconditionally, resulting in a (usually unintended) extra hash (for example, ##channel) if one is included in the URL. Some implementations allow multiple channels to be specified, separated by commas. Challenges Issues in the original design of IRC were the amount of shared state data being a limitation on its scalability, the absence of unique user identifications leading to the nickname collision problem, lack of protection from netsplits by means of cyclic routing, the trade-off in scalability for the sake of real-time user presence information, protocol weaknesses providing a platform for abuse, no transparent and optimizable message passing, and no encryption. Some of these issues have been addressed in Modern IRC. Attacks Because IRC connections may be unencrypted and typically span long time periods, they are an attractive target for DoS/DDoS attackers and hackers. Because of this, careful security policy is necessary to ensure that an IRC network is not susceptible to an attack such as a takeover war. IRC networks may also K-line or G-line users or servers that have a harming effect. Some IRC servers support SSL/TLS connections for security purposes. This helps stop the use of packet sniffer programs to obtain the passwords of IRC users, but has little use beyond this scope due to the public nature of IRC channels. SSL connections require both client and server support (that may require the user to install SSL binaries and IRC client specific patches or modules on their computers). Some networks also use SSL for server-to-server connections, and provide a special channel flag (such as +S) to only allow SSL-connected users on the channel, while disallowing operator identification in clear text, to better utilize the advantages that SSL provides.
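Resolving such URIs, including the '#'-prepending rule, can be sketched as follows. The default port numbers are the conventional ones, not mandated by the schemes; note that a literal '#' inside a URL would start the fragment, which is one reason the scheme allows omitting it from channel names:

```python
from urllib.parse import urlsplit

DEFAULT_PORTS = {"irc": 6667, "irc6": 6667, "ircs": 6697}  # conventional, not mandated

def parse_irc_uri(uri):
    """Resolve an irc/ircs/irc6 URI to (host, port, channel, key, use_tls)."""
    parts = urlsplit(uri)
    if parts.scheme not in DEFAULT_PORTS:
        raise ValueError(f"not an IRC URI: {uri}")
    channel = parts.path.lstrip("/")
    if channel and channel[0].isalnum():   # per the scheme, '#' is implied
        channel = "#" + channel
    key = parts.query or None              # optional channel keyword
    return (parts.hostname,
            parts.port or DEFAULT_PORTS[parts.scheme],
            channel or None, key,
            parts.scheme == "ircs")

assert parse_irc_uri("irc://irc.example.net/chat") == \
    ("irc.example.net", 6667, "#chat", None, False)
assert parse_irc_uri("ircs://irc.example.net:7000/chat?hunter2") == \
    ("irc.example.net", 7000, "#chat", "hunter2", True)
```

A channel such as '&local' would keep its prefix unchanged, since '&' is not alphanumeric.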
IRC served as an early laboratory for many kinds of Internet attacks, such as using fake ICMP unreachable messages to break TCP-based IRC connections (nuking) to annoy users or facilitate takeovers. Abuse prevention One of the most contentious technical issues surrounding IRC implementations, which survives to this day, is the merit of "Nick/Channel Delay" vs. "Timestamp" protocols. Both methods exist to solve the problem of denial-of-service attacks, but take very different approaches. The problem with the original IRC protocol as implemented was that when two servers split and rejoined, the two sides of the network would simply merge their channels. If a user could join on a "split" server, where a channel that existed on the other side of the network was empty, and gain operator status, they would become a channel operator of the "combined" channel after the netsplit ended; if a user took a nickname that existed on the other side of the network, the server would kill both users when rejoining (a "nick collision"). This was often abused to "mass-kill" all users on a channel, thus creating "opless" channels where no operators were present to deal with abuse.
Apart from causing problems within IRC, this encouraged people to conduct denial-of-service attacks against IRC servers in order to cause netsplits, which they would then abuse. The nick delay (ND) and channel delay (CD) strategies aim to prevent abuse by delaying reconnections and renames. After a user signs off and the nickname becomes available, or a channel ceases to exist because all its users parted (as often happens during a netsplit), the server will not allow any user to use that nickname or join that channel until a certain period of time (the delay) has passed. The idea behind this is that even if a netsplit occurs, it is useless to an abuser because they cannot take the nickname or gain operator status on a channel, and thus no collision of a nickname or "merging" of a channel can occur. To some extent, this inconveniences legitimate users, who might be forced to briefly use a different name after rejoining (appending an underscore is popular). The timestamp protocol is an alternative to nick/channel delays which resolves collisions using timestamped priority. Every nickname and channel on the network is assigned a timestamp – the date and time when it was created. When a netsplit occurs, two users on each side are free to use the same nickname or channel, but when the two sides are joined, only one can survive. In the case of nicknames, the newer user, according to their TS, is killed; when a channel collides, the members (users on the channel) are merged, but the channel operators on the "losing" side of the split lose their channel operator status. TS is a much more complicated protocol than ND/CD, both in design and implementation, and despite having gone through several revisions, some implementations still have problems with "desyncs" (where two servers on the same network disagree about the current state of the network) and with being too lenient about what the "losing" side is allowed to do.
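The channel half of the timestamp rule can be sketched as a pure function. The data layout and the equal-timestamp handling are simplifying assumptions; real implementations also have to reconcile modes, ban lists and topics:

```python
def merge_channel(chan_a, chan_b):
    """Merge two sides of a split channel under the timestamp rule.

    Each side is (creation_ts, {nick: is_op}). Members are combined;
    operators on the side with the newer (larger) timestamp are demoted.
    Simplification: with equal timestamps, both sides keep their ops.
    """
    (ts_a, users_a), (ts_b, users_b) = chan_a, chan_b
    winner_ts = min(ts_a, ts_b)
    merged = {}
    for ts, users in ((ts_a, users_a), (ts_b, users_b)):
        losing = ts > winner_ts
        for nick, is_op in users.items():
            merged[nick] = merged.get(nick, False) or (is_op and not losing)
    return winner_ts, merged

ts, users = merge_channel(
    (100, {"alice": True}),                  # original channel, alice is op
    (250, {"mallory": True, "bob": False}),  # recreated during the split
)
# ts == 100; mallory loses operator status: {"alice": True, "mallory": False, "bob": False}
```

This illustrates why a split-side op gain is useless under TS: the recreated channel carries a newer timestamp, so its operators are demoted on rejoin.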
Under the original TS protocols, for example, there was no protection against users setting bans or other modes in the losing channel that would then be merged when the split rejoined, even though the users who had set those modes lost their channel operator status. Some modern TS-based IRC servers have also incorporated some form of ND and/or CD in addition to timestamping in an attempt to further curb abuse. Most networks today use the timestamping approach. The timestamp versus ND/CD disagreements caused several servers to split away from EFnet and form the newer IRCnet. After the split, EFnet moved to a TS protocol, while IRCnet used ND/CD. In recent versions of the IRCnet ircd, as well as ircds using the TS6 protocol (including Charybdis), ND has been extended/replaced by a mechanism called SAVE. This mechanism assigns every client a UID upon connecting to an IRC server. This ID starts with a number, which is forbidden in nicks (although some ircds, namely IRCnet and InspIRCd, allow clients to switch to their own UID as the nickname). If two clients with the same nickname join from different sides of a netsplit ("nick collision"), the first server to see this collision will force both clients to change their nick to their UID, thus saving both clients from being disconnected. On IRCnet, the nickname will also be locked for some time (ND) to prevent both clients from changing back to the original nickname, thus colliding again. Clients Client software Client software exists for various operating systems or software packages, as well as web-based or inside games. Many different clients are available for the various operating systems, including Windows, Unix and Linux, macOS and mobile operating systems (such as iOS and Android). On Windows, mIRC is one of the most popular clients. Some programs which are extensible through plug-ins also serve as platforms for IRC clients. 
For instance, a client called ERC, written entirely in Emacs Lisp, is included in v.22.3 of Emacs. Therefore, any platform that can run Emacs can run ERC. A number of web browsers have built-in IRC clients, such as Opera (version 12.18 and earlier) and the ChatZilla add-on for Mozilla Firefox (for Firefox 56 and earlier; included as a built-in component of SeaMonkey). Web-based clients, such as Mibbit and open source KiwiIRC, can run in most browsers. Games such as War§ow, Unreal Tournament (up to Unreal Tournament 2004), Uplink, Spring Engine-based games, 0 A.D. and ZDaemon have included IRC. Ustream's chat interface is IRC with custom authentication, as is Twitch's (formerly Justin.tv). Bots A typical use of bots in IRC is to provide IRC services or specific functionality within a channel such as to host a chat-based game or provide notifications of external events. However, some IRC bots are used to launch malicious attacks such as denial of service, spamming, or exploitation. Bouncer A program that runs as a daemon on a server and functions as a persistent proxy is known as a BNC or bouncer. The purpose is to maintain a connection to an IRC server, acting as a relay between the server and client, or simply to act as a proxy. Should the client lose network connectivity, the BNC may stay connected and archive all traffic for later delivery, allowing the user to resume their IRC session without disrupting their connection to the server. Furthermore, as a way of obtaining a bouncer-like effect, an IRC client (typically text-based, for example Irssi) may be run on an always-on server to which the user connects via ssh. This also allows devices that only have ssh functionality, but no actual IRC client installed themselves, to connect to the IRC, and it allows sharing of IRC sessions.
To keep the IRC client from quitting when the ssh connection closes, the client can be run inside a terminal multiplexer such as GNU Screen or tmux, thus staying connected to the IRC network(s) constantly and able to log conversation in channels that the user is interested in, or to maintain a channel's presence on the network. Modelled after this setup, in 2004 an IRC client following the client–server model, called Smuxi, was launched. Search engines There are numerous search engines available to aid the user in finding what they are looking for on IRC. Generally the search engine consists of two parts, a "back-end" (or "spider/crawler") and a front-end "search engine". The back-end (spider/webcrawler) is the workhorse of the search engine. It is responsible for crawling IRC servers to index the information being sent across them. The information that is indexed usually consists solely of channel text (text that is publicly displayed in public channels). The storage method is usually some sort of relational database, like MySQL or Oracle. The front-end "search engine" is the user interface to the database. It supplies users with a way to search the database of indexed information to retrieve the data they are looking for. These front-end search engines can also be coded in numerous programming languages. Most search engines have their own spider that is a single application responsible for crawling IRC and indexing data itself; however, others are "user based" indexers. The latter rely on users to install their "add-on" to their IRC client; the add-on is what sends the database the channel information of whatever channels the user happens to be on. Many users have implemented their own ad hoc search engines using the logging features built into many IRC clients. These search engines are usually implemented as bots and dedicated to a particular channel or group of associated channels.
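The two halves can be sketched with SQLite standing in for the relational store (the names, schema and sample messages are invented; production engines would use a full-text index rather than LIKE scans):

```python
import sqlite3

# Back-end ("spider") half: index public channel text into a relational store.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE messages (
                network TEXT, channel TEXT, nick TEXT,
                ts INTEGER, body TEXT)""")

def index_message(network, channel, nick, ts, body):
    """Store one line of publicly displayed channel text."""
    db.execute("INSERT INTO messages VALUES (?,?,?,?,?)",
               (network, channel, nick, ts, body))

# Front-end half: the user-facing query over the indexed data.
def search(term):
    """Naive substring search over the indexed channel text."""
    rows = db.execute(
        "SELECT channel, nick, body FROM messages WHERE body LIKE ? ORDER BY ts",
        (f"%{term}%",))
    return rows.fetchall()

index_message("ExampleNet", "#chat", "alice", 1, "anyone tried the new compiler?")
index_message("ExampleNet", "#chat", "bob", 2, "the compiler is much faster now")
# search("compiler") returns both rows in timestamp order.
```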
Character encoding IRC still lacks a single globally accepted standard convention for how to transmit characters outside the 7-bit ASCII repertoire. IRC servers normally transfer messages from a client to another client just as byte sequences, without any interpretation or recoding of characters. The IRC protocol (unlike e.g. MIME or HTTP) lacks mechanisms for announcing and negotiating character encoding options. This has put the responsibility for choosing the appropriate character encoding on the client.
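A common client-side heuristic, given the lack of negotiation, is to attempt strict UTF-8 and fall back to a single-byte encoding that can decode any byte sequence. A sketch of that approach:

```python
def decode_irc_bytes(raw, fallback="latin-1"):
    """Decode a raw IRC payload: try strict UTF-8 first, then a fallback.

    The protocol never announces an encoding, so clients must guess.
    latin-1 maps every byte to a character, so the fallback always
    succeeds (though possibly showing the wrong glyphs).
    """
    try:
        return raw.decode("utf-8")
    except UnicodeDecodeError:
        return raw.decode(fallback)

assert decode_irc_bytes("héllo".encode("utf-8")) == "héllo"
assert decode_irc_bytes("héllo".encode("latin-1")) == "héllo"
```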
Pictographic symbols depict the object referred to by the word, such as an icon of a bull denoting the Semitic word ʾālep "ox". Some words denoting abstract concepts may be represented iconically, but most other words are represented using the rebus principle, borrowing a symbol for a similarly-sounding word. Later systems used selected symbols to represent the sounds of the language, for example the adaptation of the logogram for ʾālep "ox" as the letter aleph representing the initial sound of the word, a glottal stop. Many signs in hieroglyphic as well as in cuneiform writing could be used either logographically or phonetically. For example, the Sumerian sign DIĜIR () could represent the word diĝir 'deity', the god An or the word an 'sky'. The Akkadian counterpart could represent the Akkadian stem il- 'deity', the Akkadian word šamu 'sky', or the syllable an. Although Chinese characters are logograms, two of the smaller classes in the traditional classification are ideographic in origin: Simple ideographs (指事字 zhǐshìzì) are abstract symbols such as 上 shàng "up" and 下 xià "down" or numerals such as 三 sān "three". Semantic compounds (会意字 huìyìzì) are semantic combinations of characters, such as 明 míng "bright", composed of 日 rì "sun" and 月 yuè "moon", or 休 xiū "rest", composed of 人 rén "person" and 木 mù "tree". In the light of the modern understanding of Old Chinese phonology, researchers now believe that most of the characters originally classified as semantic compounds have an at least partially phonetic nature. An example of ideograms is the collection of 50 signs developed in the 1970s by the American Institute of Graphic Arts at the request of the US Department of Transportation. The system was initially used to mark airports and gradually became more widespread. Mathematics Mathematical symbols are a type of ideogram. 
Proposed universal languages Inspired by inaccurate early descriptions of Chinese and Japanese characters as ideograms, many Western thinkers have sought to design universal written languages, in which symbols denote concepts rather than words. An early proposal was An Essay towards a Real Character, and a Philosophical Language (1668) by John Wilkins. A recent example is the system of Blissymbols, which was devised by Charles K. Bliss in 1949 and currently includes over 2,000 symbols. See also: Character (computing); Character (symbol); Emoji; Epigraphy, the study of inscriptions, or epigraphs.
The IRA fought a guerrilla war against the Crown forces in Ireland from 1919 to July 1921. The most intense period of the war was from November 1920 onwards. The IRA campaign can broadly be split into three phases. The first, in 1919, involved the re-organisation of the Irish Volunteers as a guerrilla army and only sporadic attacks. Organisers such as Ernie O'Malley were sent around the country to set up viable guerrilla units.

On paper, there were 100,000 or so Volunteers enrolled after the conscription crisis of 1918. However, only about 15,000 of these participated in the guerrilla war. In 1919, Collins, the IRA's Director of Intelligence, organised the "Squad"—an assassination unit based in Dublin which killed police involved in intelligence work (the Irish playwright Brendan Behan's father Stephen Behan was a member of the Squad). Typical of Collins's sardonic sense of humour, the Squad was often referred to as his "Twelve Apostles". In addition, there were some arms raids on RIC barracks. By the end of 1919, four Dublin Metropolitan Police and 11 RIC men had been killed. The RIC abandoned most of their smaller rural barracks in late 1919. Around 400 of these were burned in a co-ordinated IRA operation around the country in April 1920. The second phase of the IRA campaign, roughly from January to July 1920, involved attacks on the fortified police barracks located in the towns. Between January and June 1920, 16 of these were destroyed and 29 badly damaged. Several events of late 1920 greatly escalated the conflict. Firstly, the British declared martial law in parts of the country—allowing for internment and executions of IRA men. Secondly, they deployed paramilitary forces, the Black and Tans and Auxiliary Division, and more British Army personnel into the country. Thus, the third phase of the war (roughly August 1920 – July 1921) involved the IRA taking on a greatly expanded British force, moving away from attacking well-defended barracks and instead using ambush tactics. To this end the IRA was re-organised into "flying columns"—permanent guerrilla units, usually about 20 strong, although sometimes larger. In rural areas, the flying columns usually had bases in remote mountainous areas.

The most high-profile violence of the war took place in Dublin in November 1920 and is still known as Bloody Sunday. In the early hours of the morning, Collins' "Squad" killed fourteen British spies. In reprisal, that afternoon, British forces opened fire on a football crowd at Croke Park, killing 14 civilians. Towards the end of the day, two prominent Republicans and a friend of theirs were arrested and killed by Crown forces.

While most areas of the country saw some violence in 1919–1921, the brunt of the war was fought in Dublin and the southern province of Munster. In Munster, the IRA carried out a significant number of successful actions against British troops, for instance the ambushing and killing of 16 of 18 Auxiliaries by Tom Barry's column at Kilmichael in West Cork in November 1920, or Liam Lynch's men killing 13 British soldiers near Millstreet early in the next year. At the Crossbarry Ambush in March 1921, 100 or so of Barry's men fought a sizeable engagement with a British column of 1,200, escaping from the British encircling manoeuvre. In Dublin, the "Squad" and elements of the IRA Dublin Brigade were amalgamated into the "Active Service Unit", under Oscar Traynor, which tried to carry out at least three attacks on British troops a day. Usually, these consisted of shooting or grenade attacks on British patrols. Outside Dublin and Munster, there were only isolated areas of intense activity. For instance, the County Longford IRA under Seán Mac Eoin carried out a number of well-planned ambushes and successfully defended the village of Ballinalee against Black and Tan reprisals in a three-hour gun battle. In County Mayo, large-scale guerrilla action did not break out until spring 1921, when two British forces were ambushed at Carrowkennedy and Tourmakeady. Elsewhere, fighting was more sporadic and less intense.

In Belfast, the war had a character all of its own. The city had a Protestant and unionist majority, and IRA actions were met with reprisals against the Catholic population, including killings (such as the McMahon killings) and the burning of many homes – as on Belfast's Bloody Sunday. The IRA in Belfast and the North generally, although involved in protecting the Catholic community from loyalists and state forces, undertook a retaliatory arson campaign against factories and commercial premises. The violence in Belfast alone, which continued until October 1922 (long after the truce in the rest of the country), claimed the lives of between 400 and 500 people.

In April 1921, the IRA was again reorganised, in line with the Dáil's endorsement of its actions, along the lines of a regular army. Divisions were created based on region, with commanders being given responsibility, in theory, for large geographical areas. In practice, this had little effect on the localised nature of the guerrilla warfare. In May 1921, the IRA in Dublin attacked and burned the Custom House. The action was a serious setback, as five members were killed and eighty captured. By the end of the war in July 1921, the IRA was hard-pressed by the deployment of more British troops into the most active areas and by a chronic shortage of arms and ammunition. It has been estimated that the IRA had only about 3,000 rifles (mostly captured from the British) during the war, with a larger number of shotguns and pistols. An ambitious plan to buy arms from Italy in 1921 collapsed when the money did not reach the arms dealers. Towards the end of the war, some Thompson submachine guns were imported from the United States; however, 450 of these were intercepted by the American authorities and the remainder only reached Ireland shortly before the Truce. By June 1921, Collins' assessment was that the IRA was within weeks, possibly even days, of collapse. It had few weapons or ammunition left. Moreover, almost 5,000 IRA men had been imprisoned or interned and over 500 killed. Collins and Mulcahy estimated that the number of effective guerrilla fighters was down to 2,000–3,000. However, in the summer of 1921, the war was abruptly ended.

The British recruited hundreds of World War I veterans into the RIC and sent them to Ireland. Because there was initially a shortage of RIC uniforms, the veterans at first wore a combination of dark green RIC uniforms and khaki British Army uniforms, which inspired the nickname "Black and Tans". The brutality of the Black and Tans is now well known, although the greatest violence attributed to the Crown's forces was often that of the Auxiliary Division of the Constabulary. One of the strongest critics of the Black and Tans was King George V, who in May 1921 told Lady Margery Greenwood that "he hated the idea of the Black and Tans."

The IRA was also involved in the destruction of many stately homes in Munster. The Church of Ireland Gazette recorded numerous instances of Unionists and Loyalists being shot, burnt or forced from their homes during the early 1920s. In County Cork between 1920 and 1923, the IRA shot over 200 civilians, of whom over 70 (or 36%) were Protestants: five times the percentage of Protestants in the civilian population. This was due to the historical inclination of Protestants towards loyalty to the United Kingdom. A convention of Irish Protestant Churches in Dublin in May 1922 signed a resolution placing "on record" that "hostility to Protestants by reason of their religion has been almost, if not wholly, unknown in the twenty-six counties in which Protestants are in the minority." Many historic buildings in Ireland were destroyed during the war, most famously the Custom House in Dublin, which was disastrously attacked on de Valera's insistence, to the horror of the more militarily experienced Collins. As he had feared, the destruction proved a pyrrhic victory for the Republic, with so many IRA men killed or captured that the IRA in Dublin suffered a severe blow.

This was also a period of social upheaval in Ireland, with frequent strikes as well as other manifestations of class conflict. In this regard, the IRA acted to a large degree as an agent of social control and stability, driven by the need to preserve cross-class unity in the national struggle, and on occasion being used to break strikes. Assessments of the effectiveness of the IRA's campaign vary. Its units were never in a position to engage in conventional warfare, but the political, military and financial costs of remaining in Ireland were higher than the British government was prepared to pay, and this in a sense forced it into negotiations with the Irish political leaders. According to historian Michael Hopkinson, the guerrilla warfare "was often courageous and effective". Historian David Fitzpatrick observes, "The guerrilla fighters...were vastly outnumbered by the forces of the Crown... The success of the Irish Volunteers in surviving so long is therefore noteworthy."

Truce and treaty David Lloyd George, the British Prime Minister at the time, found himself under increasing pressure (both internationally and from within the British Isles) to try to salvage something from the situation. This was a complete reversal of his earlier position; he had consistently referred to the IRA as a "murder gang" up until then. An unexpected olive branch came from King George V, whose speech in Belfast calling for reconciliation on all sides changed the mood and enabled the British and Irish Republican governments to agree to a truce. The Truce was agreed on 11 July 1921. On 8 July, de Valera met General Nevil Macready, the British commander-in-chief in Ireland, and agreed terms: the IRA was to retain its arms and the British Army was to remain in barracks for the duration of peace negotiations. Many IRA officers interpreted the truce only as a temporary break in fighting. They continued to recruit and train volunteers, with the result that the IRA had increased its number to over 72,000 men by early 1922.

Negotiations on an Anglo-Irish Treaty took place in late 1921 in London. The Irish delegation was led by Arthur Griffith and Michael Collins. The most contentious areas of the Treaty for the IRA were the abolition of the Irish Republic declared in 1919, the status of the Irish Free State as a dominion in the British Commonwealth, and the British retention of the so-called Treaty Ports on Ireland's south coast. These issues were the cause of a split in the IRA and, ultimately, the Irish Civil War. Under the Government of Ireland Act 1920, Ireland was partitioned, creating Northern Ireland and Southern Ireland. Under the terms of the Anglo-Irish agreement of 6 December 1921, which ended the war (1919–21), Northern Ireland was given the option of withdrawing from the new state, the Irish Free State, and remaining part of the United Kingdom; the Northern Ireland parliament chose to do so. An Irish Boundary Commission was then set up to review the border. Irish leaders expected that it would so reduce Northern Ireland's size, by transferring nationalist areas to the Irish Free State, as to make it economically unviable. Partition was not by itself the key breaking point between pro- and anti-Treaty campaigners; both sides expected the Boundary Commission to greatly reduce Northern Ireland. Moreover, Michael Collins was planning a clandestine guerrilla campaign against the Northern state using the IRA. In early 1922, he sent IRA units to the border areas and sent arms to northern units. It was only afterwards, when partition was confirmed, that a united Ireland became the preserve of anti-Treaty Republicans.

IRA and the Anglo-Irish Treaty The IRA leadership was deeply divided over the decision by the Dáil to ratify the Treaty. Despite the fact that Michael Collins – the de facto leader of the IRA – had negotiated the Treaty, many IRA officers were against it. Of the General Headquarters (GHQ) staff, nine members were in favour of the Treaty while four opposed it. The majority of the IRA rank-and-file were against the Treaty; in January–June 1922, their discontent developed into open defiance of the elected civilian Provisional Government of Ireland. Both sides agreed that the IRA's allegiance was to the (elected) Dáil of the Irish Republic, but the anti-Treaty side argued that the decision of the Dáil to accept the Treaty (and set aside the Irish Republic) meant that the IRA no longer owed that body its allegiance. They called for the IRA to withdraw from the authority of the Dáil and to entrust the IRA Executive with control over the army.

On 16 January, the first IRA division – the 2nd Southern Division, led by Ernie O'Malley – repudiated the authority of the GHQ. A month later, on 18 February, Liam Forde, O/C of the IRA Mid-Limerick Brigade, issued a proclamation stating: "We no longer recognise the authority of the present head of the army, and renew our allegiance to the existing Irish Republic". This was the first unit of the IRA to break with the pro-Treaty government. On 22 March, Rory O'Connor held what was to become an infamous press conference and declared that the IRA would no longer obey the Dáil as (he said) it had violated its oath to uphold the Irish Republic. He went on to say that "we repudiate the Dáil ... We will set up an Executive which will issue orders to the IRA all over the country." In reply to the question of whether this meant they intended to create a military dictatorship, O'Connor said: "You can take it that way if you like." On 28 March, the (anti-Treaty) IRA Executive issued a statement declaring that the Minister of Defence (Richard Mulcahy) and the Chief of Staff (Eoin O'Duffy) no longer exercised any control over the IRA. In addition, it ordered an end to recruitment to the new military and police forces of the Provisional Government. Furthermore, it instructed all IRA units to reaffirm their allegiance to the Irish Republic on 2 April. The stage was set for civil war over the Treaty.

Civil War The pro-Treaty IRA soon became the nucleus of the new (regular) Irish National Army created by Collins.
Since iron was becoming cheaper and more plentiful, it also became a major structural material following the building of the innovative first iron bridge in 1778. This bridge still stands today as a monument to the role iron played in the Industrial Revolution. Following this, iron was used in rails, boats, ships, aqueducts, and buildings, as well as in iron cylinders in steam engines. Railways have been central to the formation of modernity and ideas of progress, and various languages (e.g. French, Spanish, Italian and German) refer to railways as the "iron road". Steel Steel (with smaller carbon content than pig iron but more than wrought iron) was first produced in antiquity by using a bloomery. Blacksmiths in Luristan in western Persia were making good steel by 1000 BC. Improved versions, Wootz steel in India and Damascus steel, were developed around 300 BC and AD 500 respectively. These methods were specialized, and so steel did not become a major commodity until the 1850s. New methods of producing it by carburizing bars of iron in the cementation process were devised in the 17th century. In the Industrial Revolution, new methods of producing bar iron without charcoal were devised and these were later applied to produce steel. In the late 1850s, Henry Bessemer invented a new steelmaking process, involving blowing air through molten pig iron, to produce mild steel. This made steel much more economical, thereby leading to wrought iron no longer being produced in large quantities. Foundations of modern chemistry In 1774, Antoine Lavoisier used the reaction of steam with metallic iron inside an incandescent iron tube to produce hydrogen in his experiments leading to the demonstration of the conservation of mass, which was instrumental in changing chemistry from a qualitative science to a quantitative one. Symbolic role Iron plays a certain role in mythology and has found various usage as a metaphor and in folklore.
The Greek poet Hesiod's Works and Days (lines 109–201) lists different ages of man named after metals like gold, silver, bronze and iron to account for successive ages of humanity. The Iron Age was closely associated with Rome and appears in Ovid's Metamorphoses. An example of the importance of iron's symbolic role may be found in the German Campaign of 1813, when Frederick William III commissioned the first Iron Cross as a military decoration. Berlin iron jewellery reached its peak production between 1813 and 1815, when the Prussian royal family urged citizens to donate gold and silver jewellery for military funding. The inscription Gold gab ich für Eisen ("I gave gold for iron") was used as well in later war efforts. Laboratory routes For a few limited purposes when it is needed, pure iron is produced in the laboratory in small quantities by reducing the pure oxide or hydroxide with hydrogen, or by forming iron pentacarbonyl and heating it to 250 °C so that it decomposes to form pure iron powder. Another method is electrolysis of ferrous chloride onto an iron cathode. Main industrial route Nowadays, the industrial production of iron or steel consists of two main stages. In the first stage, iron ore is reduced with coke in a blast furnace, and the molten metal is separated from gross impurities such as silicate minerals. This stage yields an alloy—pig iron—that contains relatively large amounts of carbon. In the second stage, the amount of carbon in the pig iron is lowered by oxidation to yield wrought iron, steel, or cast iron. Other metals can be added at this stage to form alloy steels. Blast furnace processing The blast furnace is loaded with iron ores, usually hematite (Fe2O3) or magnetite (Fe3O4), together with coke (coal that has been separately baked to remove volatile components). Air pre-heated to 900 °C is blown through the mixture, in sufficient quantity to turn the carbon into carbon monoxide: 2 C + O2 → 2 CO This reaction raises the temperature to about 2000 °C.
The carbon monoxide reduces the iron ore to metallic iron: Fe2O3 + 3 CO → 2 Fe + 3 CO2 Some iron ore in the high-temperature lower region of the furnace reacts directly with the coke: 2 Fe2O3 + 3 C → 4 Fe + 3 CO2 A flux such as limestone (calcium carbonate) or dolomite (calcium-magnesium carbonate) is also added to the furnace's load. Its purpose is to remove silicaceous minerals in the ore, which would otherwise clog the furnace. The heat of the furnace decomposes the carbonates to calcium oxide, which reacts with any excess silica to form a slag composed of calcium silicate or other products. At the furnace's temperature, the metal and the slag are both molten. They collect at the bottom as two immiscible liquid layers (with the slag on top), which are then easily separated. The slag can be used as a material in road construction or to improve mineral-poor soils for agriculture. Steelmaking The pig iron produced by the blast furnace process contains up to 4–5% carbon (by mass), with small amounts of other impurities like sulfur, silicon, phosphorus, and manganese. This high level of carbon makes it relatively weak and brittle. Reducing the amount of carbon to 0.002–2.1% produces steel, which may be up to 1000 times harder than pure iron. A great variety of steel articles can then be made by cold working, hot rolling, forging, machining, etc. Removing the impurities from pig iron, but leaving 2–4% carbon, results in cast iron, which is cast by foundries into articles such as stoves, pipes, radiators, lamp-posts, and rails. Steel products often undergo various heat treatments after they are forged to shape. Annealing consists of heating them to 700–800 °C for several hours and then cooling them gradually; it makes the steel softer and more workable. Direct iron reduction Owing to environmental concerns, alternative methods of processing iron have been developed.
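As an illustrative numerical check on the blast-furnace reduction Fe2O3 + 3 CO → 2 Fe + 3 CO2 above, the following Python sketch (the function name is my own; the molar masses are standard values) estimates the iron yield and carbon monoxide demand per tonne of pure hematite:

```python
# Molar masses in g/mol (standard atomic weights).
M_FE, M_O, M_C = 55.845, 15.999, 12.011
M_FE2O3 = 2 * M_FE + 3 * M_O   # hematite, ~159.69 g/mol
M_CO = M_C + M_O               # carbon monoxide, ~28.01 g/mol

def iron_from_hematite(mass_fe2o3_kg: float) -> tuple[float, float]:
    """Return (kg Fe produced, kg CO consumed) for Fe2O3 + 3 CO -> 2 Fe + 3 CO2."""
    mol_fe2o3 = mass_fe2o3_kg * 1000.0 / M_FE2O3
    mass_fe = 2 * mol_fe2o3 * M_FE / 1000.0   # 2 mol Fe per mol Fe2O3
    mass_co = 3 * mol_fe2o3 * M_CO / 1000.0   # 3 mol CO per mol Fe2O3
    return mass_fe, mass_co

fe_kg, co_kg = iron_from_hematite(1000.0)  # one tonne of pure hematite
# Hematite is ~69.9% iron by mass, so this yields roughly 699 kg of Fe.
```

Real furnace burdens contain gangue and are never pure hematite, so actual yields are lower; the calculation only shows the stoichiometric ceiling.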
"Direct iron reduction" reduces iron ore to a ferrous lump called "sponge" iron or "direct" iron that is suitable for steelmaking. Two main reactions comprise the direct reduction process: Natural gas is partially oxidized (with heat and a catalyst): 2 CH4 + O2 → 2 CO + 4 H2 Iron ore is then treated with these gases in a furnace, producing solid sponge iron: Fe2O3 + CO + 2 H2 → 2 Fe + CO2 + 2 H2O Silica is removed by adding a limestone flux as described above. Thermite process Ignition of a mixture of aluminium powder and iron oxide yields metallic iron via the thermite reaction: Fe2O3 + 2 Al → 2 Fe + Al2O3 Alternatively pig iron may be made into steel (with up to about 2% carbon) or wrought iron (commercially pure iron). Various processes have been used for this, including finery forges, puddling furnaces, Bessemer converters, open hearth furnaces, basic oxygen furnaces, and electric arc furnaces. In all cases, the objective is to oxidize some or all of the carbon, together with other impurities. On the other hand, other metals may be added to make alloy steels. Applications As structural material Iron is the most widely used of all the metals, accounting for over 90% of worldwide metal production. Its low cost and high strength often make it the material of choice material to withstand stress or transmit forces, such as the construction of machinery and machine tools, rails, automobiles, ship hulls, concrete reinforcing bars, and the load-carrying framework of buildings. Since pure iron is quite soft, it is most commonly combined with alloying elements to make steel. Mechanical properties The mechanical properties of iron and its alloys are extremely relevant to their structural applications. Those properties can be evaluated in various ways, including the Brinell test, the Rockwell test and the Vickers hardness test. The properties of pure iron are often used to calibrate measurements or to compare tests. 
However, the mechanical properties of iron are significantly affected by the sample's purity: pure, single crystals of iron are actually softer than aluminium, and the purest industrially produced iron (99.99%) has a hardness of 20–30 Brinell. The pure iron (99.9 %~99.999 %), especially called electrolytic iron, is industrially produced by electrolytic refining. An increase in the carbon content will cause a significant increase in the hardness and tensile strength of iron. Maximum hardness of 65 Rc is achieved with a 0.6% carbon content, although the alloy has low tensile strength. Because of the softness of iron, it is much easier to work with than its heavier congeners ruthenium and osmium. Types of steels and alloys α-Iron is a fairly soft metal that can dissolve only a small concentration of carbon (no more than 0.021% by mass at 910 °C). Austenite (γ-iron) is similarly soft and metallic but can dissolve considerably more carbon (as much as 2.04% by mass at 1146 °C). This form of iron is used in the type of stainless steel used for making cutlery, and hospital and food-service equipment. Commercially available iron is classified based on purity and the abundance of additives. Pig iron has 3.5–4.5% carbon and contains varying amounts of contaminants such as sulfur, silicon and phosphorus. Pig iron is not a saleable product, but rather an intermediate step in the production of cast iron and steel. The reduction of contaminants in pig iron that negatively affect material properties, such as sulfur and phosphorus, yields cast iron containing 2–4% carbon, 1–6% silicon, and small amounts of manganese. Pig iron has a melting point in the range of 1420–1470 K, which is lower than either of its two main components, and makes it the first product to be melted when carbon and iron are heated together. Its mechanical properties vary greatly and depend on the form the carbon takes in the alloy. 
"White" cast irons contain their carbon in the form of cementite, or iron carbide (Fe3C). This hard, brittle compound dominates the mechanical properties of white cast irons, rendering them hard but not resistant to shock. The broken surface of a white cast iron is full of fine facets of the broken iron carbide, a very pale, silvery, shiny material, hence the appellation. Cooling a mixture of iron with 0.8% carbon slowly below 723 °C to room temperature results in separate, alternating layers of cementite and α-iron, which is soft and malleable and is called pearlite for its appearance. Rapid cooling, on the other hand, does not allow time for this separation and creates hard and brittle martensite. The steel can then be tempered by reheating to a temperature in between, changing the proportions of pearlite and martensite. The end product below 0.8% carbon content is a pearlite-αFe mixture, and that above 0.8% carbon content is a pearlite-cementite mixture. In gray iron the carbon exists as separate, fine flakes of graphite, which also render the material brittle, because the sharp-edged flakes of graphite produce stress concentration sites within the material. A newer variant of gray iron, referred to as ductile iron, is specially treated with trace amounts of magnesium to alter the shape of graphite to spheroids, or nodules, reducing the stress concentrations and vastly increasing the toughness and strength of the material. Wrought iron contains less than 0.25% carbon but large amounts of slag that give it a fibrous characteristic. It is a tough, malleable product, but not as fusible as pig iron. If honed to an edge, it loses it quickly. Wrought iron is characterized by the presence of fine fibers of slag entrapped within the metal. Wrought iron is more corrosion resistant than steel. It has been almost completely replaced by mild steel for traditional "wrought iron" products and blacksmithing.
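The carbon ranges quoted in this section (wrought iron below 0.25%, steel up to about 2.1%, cast iron at 2–4%, pig iron at 3.5–4.5%) can be collected into a rough classifier. This is only a sketch: the ranges overlap, and real ferrous alloys are distinguished by much more than carbon content (silicon, slag, heat treatment).

```python
def classify_by_carbon(pct_c: float) -> str:
    """Rough classification of ferrous alloys by carbon content (mass %),
    using only the ranges quoted in the text. Illustrative: the real
    categories also depend on silicon, slag content, and processing."""
    if pct_c < 0.25:
        return "wrought iron"     # <0.25% C plus slag fibers
    if pct_c <= 2.1:
        return "steel"            # 0.002-2.1% C
    if pct_c <= 4.5:
        return "cast/pig iron"    # 2-4% C (cast), 3.5-4.5% C (pig)
    return "outside typical range"

print(classify_by_carbon(0.8))   # steel (the eutectoid composition)
print(classify_by_carbon(3.8))   # cast/pig iron
```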
Mild steel corrodes more readily than wrought iron, but is cheaper and more widely available. Carbon steel contains 2.0% carbon or less, with small amounts of manganese, sulfur, phosphorus, and silicon. Alloy steels contain varying amounts of carbon as well as other metals, such as chromium, vanadium, molybdenum, nickel, tungsten, etc. Their alloy content raises their cost, and so they are usually only employed for specialist uses. One common alloy steel, though, is stainless steel. Recent developments in ferrous metallurgy have produced a growing range of microalloyed steels, also termed 'HSLA' or high-strength, low-alloy steels, containing tiny additions to produce high strengths and often spectacular toughness at minimal cost. Alloys with high-purity elemental makeups (such as alloys of electrolytic iron) have specifically enhanced properties such as ductility, tensile strength, toughness, fatigue strength, heat resistance, and corrosion resistance. Apart from traditional applications, iron is also used for protection from ionizing radiation. Although it is lighter than lead, the other traditional shielding material, it is much stronger mechanically; how strongly it attenuates radiation depends on the radiation's energy. The main disadvantage of iron and steel is that pure iron, and most of its alloys, suffer badly from rust if not protected in some way, a cost amounting to over 1% of the world's economy. Painting, galvanization, passivation, plastic coating and bluing are all used to protect iron from rust by excluding water and oxygen or by cathodic protection. The mechanism of the rusting of iron is as follows: Cathode: 3 O2 + 6 H2O + 12 e− → 12 OH− Anode: 4 Fe → 4 Fe2+ + 8 e−; 4 Fe2+ → 4 Fe3+ + 4 e− Overall: 4 Fe + 3 O2 + 6 H2O → 4 Fe3+ + 12 OH− → 4 Fe(OH)3 or 4 FeO(OH) + 4 H2O The electrolyte is usually iron(II) sulfate in urban areas (formed when atmospheric sulfur dioxide attacks iron), and salt particles in the atmosphere in seaside areas.
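Iron's use as a radiation shield, mentioned above, follows the usual narrow-beam exponential attenuation law I = I0·e^(−μx). The sketch below uses an assumed, purely illustrative linear attenuation coefficient; the real value for iron depends strongly on photon energy.

```python
import math

def transmitted_fraction(mu_per_cm: float, thickness_cm: float) -> float:
    """Narrow-beam exponential attenuation: I/I0 = exp(-mu * x)."""
    return math.exp(-mu_per_cm * thickness_cm)

# mu = 0.5 /cm is a placeholder coefficient chosen for illustration
# only; real coefficients for iron vary strongly with photon energy.
for x_cm in (1, 5, 10):
    print(x_cm, transmitted_fraction(0.5, x_cm))
```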
Iron compounds Although the dominant use of iron is in metallurgy, iron compounds are also pervasive in industry. Iron catalysts are traditionally used in the Haber–Bosch process for the production of ammonia and the Fischer–Tropsch process for conversion of carbon monoxide to hydrocarbons for fuels and lubricants. Powdered iron in an acidic solvent was used in the Béchamp reduction, the reduction of nitrobenzene to aniline. Iron-based catalysts play a crucial role in converting biobased raw materials into valuable bulk and fine chemicals, in fuel cells, and in the removal of hazardous chemicals. Iron(III) oxide mixed with aluminium powder can be ignited to create a thermite reaction, used in welding large iron parts (like rails) and purifying ores. Iron(III) oxide and oxyhydroxide are used as reddish and ocher pigments. Iron(III) chloride finds use in water purification and sewage treatment, in the dyeing of cloth, as a coloring agent in paints, as an additive in animal feed, and as an etchant for copper in the manufacture of printed circuit boards. It can also be dissolved in alcohol to form tincture of iron, which is used as a medicine to stop bleeding in canaries. Iron(II) sulfate is used as a precursor to other iron compounds. It is also used to reduce chromate in cement. It is used to fortify foods and treat iron deficiency anemia. Iron(III) sulfate is used in settling minute sewage particles in tank water. Iron(II) chloride is used as a reducing flocculating agent, in the formation of iron complexes and magnetic iron oxides, and as a reducing agent in organic synthesis. Biological and pathological role Iron is required for life. The iron–sulfur clusters are pervasive and include nitrogenase, the enzyme responsible for biological nitrogen fixation. Iron-containing proteins participate in the transport, storage, and use of oxygen. Iron proteins are involved in electron transfer.
Examples of iron-containing proteins in higher organisms include hemoglobin, cytochrome (see high-valent iron), and catalase. The average adult human contains about 0.005% body weight of iron, or about four grams, of which three quarters is in hemoglobin – a level that remains constant despite only about one milligram of iron being absorbed each day, because the human body recycles its hemoglobin for the iron content. Microbial growth may be assisted by oxidation of iron(II) or by reduction of iron(III). Biochemistry Iron acquisition poses a problem for aerobic organisms because ferric iron is poorly soluble near neutral pH. Thus, these organisms have developed means to absorb iron as complexes, sometimes taking up ferrous iron before oxidising it back to ferric iron. In particular, bacteria have evolved very high-affinity sequestering agents called siderophores. After uptake in human cells, iron storage is precisely regulated. A major component of this regulation is the protein transferrin, which binds iron ions absorbed from the duodenum and carries them in the blood to cells. Transferrin contains Fe3+ in the middle of a distorted octahedron, bonded to one nitrogen, three oxygens and a chelating carbonate anion that traps the Fe3+ ion: it has such a high stability constant that it is very effective at taking up Fe3+ ions even from the most stable complexes. At the bone marrow, transferrin is reduced from Fe3+ to Fe2+ and stored as ferritin to be incorporated into hemoglobin. The most commonly known and studied bioinorganic iron compounds (biological iron molecules) are the heme proteins: examples are hemoglobin, myoglobin, and cytochrome P450. These compounds participate in transporting gases, building enzymes, and transferring electrons. Metalloproteins are a group of proteins with metal ion cofactors. Some examples of iron metalloproteins are ferritin and rubredoxin. Many enzymes vital to life contain iron, such as catalase, lipoxygenases, and IRE-BP.
Hemoglobin is an oxygen carrier that occurs in red blood cells and gives them their color, transporting oxygen in the arteries from the lungs to the muscles, where it is transferred to myoglobin, which stores it until it is needed for the metabolic oxidation of glucose, generating energy. Here the hemoglobin binds to carbon dioxide, produced when glucose is oxidized, which is transported back through the veins (predominantly as bicarbonate anions) to the lungs, where it is exhaled. In hemoglobin, the iron is in one of four heme groups and has six possible coordination sites; four are occupied by nitrogen atoms in a porphyrin ring, the fifth by an imidazole nitrogen in a histidine residue of one of the protein chains attached to the heme group, and the sixth is reserved for the oxygen molecule it can reversibly bind to. When hemoglobin is not attached to oxygen (and is then called deoxyhemoglobin), the Fe2+ ion at the center of the heme group (in the hydrophobic protein interior) is in a high-spin configuration. It is thus too large to fit inside the porphyrin ring, which bends instead into a dome with the Fe2+ ion about 55 picometers above it. In this configuration, the sixth coordination site reserved for the oxygen is blocked by another histidine residue. When deoxyhemoglobin picks up an oxygen molecule, this histidine residue moves away and returns once the oxygen is securely attached to form a hydrogen bond with it. This results in the Fe2+ ion switching to a low-spin configuration, resulting in a 20% decrease in ionic radius so that now it can fit into the porphyrin ring, which becomes planar. (Additionally, this hydrogen bonding results in the tilting of the oxygen molecule, resulting in a Fe–O–O bond angle of around 120° that avoids the formation of Fe–O–Fe or Fe–O2–Fe bridges that would lead to electron transfer, the oxidation of Fe2+ to Fe3+, and the destruction of hemoglobin.)
This results in a movement of all the protein chains that leads to the other subunits of hemoglobin changing shape to a form with higher oxygen affinity. Thus, when deoxyhemoglobin takes up oxygen, its affinity for more oxygen increases, and vice versa. Myoglobin, on the other hand, contains only one heme group, and hence this cooperative effect cannot occur. Thus, while hemoglobin is almost saturated with oxygen at the high partial pressures of oxygen found in the lungs, its affinity for oxygen is much lower than that of myoglobin, which oxygenates even at the low partial pressures of oxygen found in muscle tissue. As described by the Bohr effect (named after Christian Bohr, the father of Niels Bohr), the oxygen affinity of hemoglobin diminishes in the presence of carbon dioxide. Carbon monoxide and phosphorus trifluoride are poisonous to humans because they bind to hemoglobin similarly to oxygen, but much more strongly, so that oxygen can no longer be transported throughout the body. Hemoglobin bound to carbon monoxide is known as carboxyhemoglobin. This effect also plays a minor role in the toxicity of cyanide, but there the major effect is by far its interference with the proper functioning of the electron transport protein cytochrome a. The cytochrome proteins also involve heme groups and are involved in the metabolic oxidation of glucose by oxygen. The sixth coordination site is then occupied by either another imidazole nitrogen or a methionine sulfur, so that these proteins are largely inert to oxygen – with the exception of cytochrome a, which bonds directly to oxygen and thus is very easily poisoned by cyanide. Here, the electron transfer takes place as the iron remains in low spin but changes between the +2 and +3 oxidation states. Since the reduction potential of each step is slightly greater than the previous one, the energy is released step-by-step and can thus be stored in adenosine triphosphate.
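The cooperative oxygen binding described above is commonly modeled with the Hill equation. The sketch below compares hemoglobin and myoglobin saturation curves; the Hill coefficients and P50 values are typical textbook figures assumed for illustration, not numbers taken from this text.

```python
def saturation(pO2: float, p50: float, n: float) -> float:
    """Hill equation: fractional O2 saturation Y = pO2^n / (p50^n + pO2^n).
    n > 1 models cooperative binding; n = 1 is simple (non-cooperative)."""
    return pO2**n / (p50**n + pO2**n)

# Assumed textbook parameters: hemoglobin n ~ 2.8, P50 ~ 26 torr;
# myoglobin n = 1, P50 ~ 2.8 torr.
for pO2 in (100, 40, 20):   # roughly: lungs, venous blood, working muscle
    hb = saturation(pO2, 26, 2.8)
    mb = saturation(pO2, 2.8, 1.0)
    print(pO2, round(hb, 2), round(mb, 2))
```

The output illustrates the point made in the text: hemoglobin loads almost fully in the lungs but unloads at muscle-tissue oxygen pressures, while myoglobin stays nearly saturated there.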
Cytochrome a is slightly distinct, as it occurs at the mitochondrial membrane, binds directly to oxygen, and transports protons as well as electrons, as follows: 4 Cyt c2+ + O2 + 8 H+ (inside) → 4 Cyt c3+ + 2 H2O + 4 H+ (outside) Although the heme proteins are the most important class of iron-containing proteins, the iron–sulfur proteins are also very important, being involved in electron transfer, which is possible since iron can exist stably in either the +2 or +3 oxidation states. These have one, two, four, or eight iron atoms that are each approximately tetrahedrally coordinated to four sulfur atoms; because of this tetrahedral coordination, they always have high-spin iron. The simplest of such compounds is rubredoxin, which has only one iron atom coordinated to four sulfur atoms from cysteine residues in the surrounding peptide chains. Another important class of iron–sulfur proteins is the ferredoxins, which have multiple iron atoms. Transferrin does not belong to either of these classes. The ability of sea mussels to maintain their grip on rocks in the ocean is facilitated by their use of organometallic iron-based bonds in their protein-rich cuticles. Based on synthetic replicas, the presence of iron in these structures increased elastic modulus 770 times, tensile strength 58 times, and toughness 92 times. The amount of stress required to permanently damage them increased 76 times. Nutrition Diet Iron is pervasive, but particularly rich sources of dietary iron include red meat, oysters, lentils, beans, poultry, fish, leaf vegetables, watercress, tofu, chickpeas, black-eyed peas, and blackstrap molasses. Bread and breakfast cereals are sometimes specifically fortified with iron. Iron provided by dietary supplements is often found as iron(II) fumarate, although iron(II) sulfate is cheaper and is absorbed equally well.
Elemental iron, or reduced iron, despite being absorbed at only one-third to two-thirds the efficiency (relative to iron sulfate), is often added to foods such as breakfast cereals or enriched wheat flour. Iron is most available to the body when chelated to amino acids and is also available for use as a common iron supplement. Glycine, the least expensive amino acid, is most often used to produce iron glycinate supplements. Dietary recommendations The U.S. Institute of Medicine (IOM) updated Estimated Average Requirements (EARs) and Recommended Dietary Allowances (RDAs) for iron in 2001. The current EAR for iron for women ages 14–18 is 7.9 mg/day, 8.1 for ages 19–50 and 5.0 thereafter (post menopause). For men the EAR is 6.0 mg/day for ages 19 and up. The RDA is 15.0 mg/day for women ages 15–18, 18.0 for 19–50 and 8.0 thereafter. For men, it is 8.0 mg/day for ages 19 and up. RDAs are higher than EARs so as to identify amounts that will cover people with higher than average requirements. The RDA for pregnancy is 27 mg/day and, for lactation, 9 mg/day. For children it is 7 mg/day for ages 1–3 years, 10 for ages 4–8 and 8 for ages 9–13. As for safety, the IOM also sets Tolerable Upper Intake Levels (ULs) for vitamins and minerals when evidence is sufficient. In the case of iron the UL is set at 45 mg/day. Collectively the EARs, RDAs and ULs are referred to as Dietary Reference Intakes. The European Food Safety Authority (EFSA) refers to the collective set of information as Dietary Reference Values, with Population Reference Intake (PRI) instead of RDA, and Average Requirement instead of EAR. AI and UL are defined the same as in the United States. For women the PRI is 13 mg/day for ages 15–17 years and 16 mg/day for women ages 18. High-purity iron, called electrolytic iron, is considered to be resistant to rust, owing to its oxide layer.
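The IOM figures quoted above can be collected into a small lookup table. This is a minimal sketch using only the RDA and UL values given in this section; it is an illustration of the data, not a nutrition tool.

```python
# RDA values (mg/day) for iron, as quoted from the IOM (2001) figures
# in the text; a minimal lookup sketch only.
RDA = {
    ("female", "15-18"): 15.0,
    ("female", "19-50"): 18.0,
    ("female", "51+"): 8.0,
    ("male", "19+"): 8.0,
    ("pregnancy", "any"): 27.0,
    ("lactation", "any"): 9.0,
}

UL_MG_PER_DAY = 45.0  # IOM tolerable upper intake level for iron

def within_guidelines(group: str, age_band: str, intake_mg: float) -> bool:
    """True if intake meets the RDA without exceeding the UL."""
    return RDA[(group, age_band)] <= intake_mg <= UL_MG_PER_DAY

print(within_guidelines("female", "19-50", 20))  # True
print(within_guidelines("male", "19+", 50))      # False: exceeds the UL
```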
Binary compounds Oxides and hydroxides Iron forms various oxide and hydroxide compounds; the most common are iron(II,III) oxide (Fe3O4) and iron(III) oxide (Fe2O3). Iron(II) oxide also exists, though it is unstable at room temperature. Despite their names, they are actually all non-stoichiometric compounds whose compositions may vary. These oxides are the principal ores for the production of iron (see bloomery and blast furnace). They are also used in the production of ferrites, useful magnetic storage media in computers, and pigments. Sulfides The best known sulfide is iron pyrite (FeS2), also known as fool's gold owing to its golden luster. It is not an iron(IV) compound, but is actually an iron(II) polysulfide containing Fe2+ and S22− (disulfide) ions in a distorted sodium chloride structure. Halides The binary ferrous and ferric halides are well known. The ferrous halides typically arise from treating iron metal with the corresponding hydrohalic acid to give the corresponding hydrated salts: Fe + 2 HX → FeX2 + H2 (X = F, Cl, Br, I) Iron reacts with fluorine, chlorine, and bromine to give the corresponding ferric halides, ferric chloride being the most common: 2 Fe + 3 X2 → 2 FeX3 (X = F, Cl, Br) Ferric iodide is an exception, being thermodynamically unstable due to the oxidizing power of Fe3+ and the high reducing power of I−: 2 I− + 2 Fe3+ → I2 + 2 Fe2+ (E0 = +0.23 V) Ferric iodide, a black solid, is not stable under ordinary conditions, but can be prepared through the reaction of iron pentacarbonyl with iodine and carbon monoxide in the presence of hexane and light at −20 °C, with oxygen and water excluded. Complexes of ferric iodide with some soft bases are known to be stable compounds.
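The quoted potential E0 = +0.23 V for 2 I− + 2 Fe3+ → I2 + 2 Fe2+ implies the reaction is spontaneous, which can be made quantitative with ΔG° = −nFE°. A short sketch:

```python
# Spontaneity of 2 I- + 2 Fe3+ -> I2 + 2 Fe2+ from the quoted cell
# potential E0 = +0.23 V, via Delta-G0 = -n * F * E0 (n = 2 electrons).
F = 96485.0   # Faraday constant, C/mol

def gibbs_kj(n_electrons: int, e_volts: float) -> float:
    """Standard Gibbs free energy change in kJ/mol."""
    return -n_electrons * F * e_volts / 1000.0

dg = gibbs_kj(2, 0.23)
print(round(dg, 1))  # ~ -44.4 kJ/mol: negative, so ferric iodide decomposes
```

The negative ΔG° quantifies why ferric iodide is thermodynamically unstable under ordinary conditions, as the text states.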
Solution chemistry The standard reduction potentials in acidic aqueous solution for some common iron ions are: Fe2+ + 2 e− → Fe (E0 = −0.447 V); Fe3+ + e− → Fe2+ (E0 = +0.771 V); FeO42− + 8 H+ + 3 e− → Fe3+ + 4 H2O (E0 = +2.20 V). The red-purple tetrahedral ferrate(VI) anion is such a strong oxidizing agent that it oxidizes nitrogen and ammonia at room temperature, and even water itself in acidic or neutral solutions: 4 FeO42− + 10 H2O → 4 Fe3+ + 20 OH− + 3 O2 The Fe3+ ion has a large simple cationic chemistry, although the pale-violet hexaquo ion [Fe(H2O)6]3+ is very readily hydrolyzed as the pH increases above 0: yellow hydrolyzed species such as [Fe(H2O)5(OH)]2+ form, and as the pH rises above 2–3, reddish-brown hydrous iron(III) oxide precipitates out of solution. Although Fe3+ has a d5 configuration, its absorption spectrum is not like that of Mn2+ with its weak, spin-forbidden d–d bands, because Fe3+ has a higher positive charge and is more polarizing, lowering the energy of its ligand-to-metal charge transfer absorptions. Thus, all the above complexes are rather strongly colored, with the single exception of the hexaquo ion – and even that has a spectrum dominated by charge transfer in the near ultraviolet region. On the other hand, the pale green iron(II) hexaquo ion does not undergo appreciable hydrolysis. Carbon dioxide is not evolved when carbonate anions are added, which instead results in white iron(II) carbonate being precipitated out. In excess carbon dioxide this forms the slightly soluble bicarbonate, which occurs commonly in groundwater, but it oxidises quickly in air to form iron(III) oxide that accounts for the brown deposits present in a sizeable number of streams. Coordination compounds Due to its electronic structure, iron has a very large coordination and organometallic chemistry. Many coordination compounds of iron are known. A typical six-coordinate anion is hexachloroferrate(III), [FeCl6]3−, found in the mixed salt tetrakis(methylammonium) hexachloroferrate(III) chloride. Complexes with multiple bidentate ligands have geometric isomers.
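The concentration dependence of the Fe3+/Fe2+ couple follows the Nernst equation. The sketch below assumes the standard textbook value E0 = +0.771 V for this one-electron couple, an assumption rather than a figure from this text.

```python
import math

R, T, F = 8.314, 298.15, 96485.0   # gas constant, 25 C, Faraday constant
E0_FE3_FE2 = 0.771                 # V; textbook Fe3+ + e- -> Fe2+ potential

def nernst_potential(ratio_fe2_to_fe3: float) -> float:
    """Nernst equation for the one-electron couple:
    E = E0 - (RT/F) * ln([Fe2+]/[Fe3+])."""
    return E0_FE3_FE2 - (R * T / F) * math.log(ratio_fe2_to_fe3)

print(round(nernst_potential(1.0), 3))    # 0.771 V at equal concentrations
print(round(nernst_potential(100.0), 3))  # lower potential when Fe2+ dominates
```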
For example, the trans-chlorohydridobis(bis-1,2-(diphenylphosphino)ethane)iron(II) complex is used as a starting material for compounds with the Fe(dppe)2 moiety. The ferrioxalate ion with three oxalate ligands displays helical chirality with its two non-superposable geometries labelled Λ (lambda) for the left-handed screw axis and Δ (delta) for the right-handed screw axis, in line with IUPAC conventions. Potassium ferrioxalate is used in chemical actinometry and, along with its sodium salt, undergoes photoreduction applied in old-style photographic processes. The dihydrate of iron(II) oxalate has a polymeric structure with co-planar oxalate ions bridging between iron centres, with the water of crystallisation forming the caps of each octahedron. Iron(III) complexes are quite similar to those of chromium(III), with the exception of iron(III)'s preference for O-donor instead of N-donor ligands. The latter tend to be rather more unstable than iron(II) complexes and often dissociate in water. Many Fe–O complexes show intense colors and are used as tests for phenols or enols. For example, in the ferric chloride test, used to determine the presence of phenols, iron(III) chloride reacts with a phenol to form a deep violet complex: 3 ArOH + FeCl3 → Fe(OAr)3 + 3 HCl (Ar = aryl) Among the halide and pseudohalide complexes, fluoro complexes of iron(III) are the most stable, with the colorless [FeF5(H2O)]2− being the most stable in aqueous solution. Chloro complexes are less stable and favor tetrahedral coordination as in [FeCl4]−; [FeBr4]− and [FeI4]− are reduced easily to iron(II). Thiocyanate is a common test for the presence of iron(III), as it forms the blood-red [Fe(SCN)(H2O)5]2+. Like manganese(II), most iron(III) complexes are high-spin, the exceptions being those with ligands that are high in the spectrochemical series, such as cyanide. An example of a low-spin iron(III) complex is [Fe(CN)6]3−.
The cyanide ligands may easily be detached in [Fe(CN)6]3−, and hence this complex is poisonous, unlike the iron(II) complex [Fe(CN)6]4− found in Prussian blue, which does not release hydrogen cyanide except when dilute acids are added. Iron shows a great variety of electronic spin states, including every possible spin quantum number value for a d-block element from 0 (diamagnetic) to 5/2 (5 unpaired electrons). This value is always half the number of unpaired electrons. Complexes with zero to two unpaired electrons are considered low-spin and those with four or five are considered high-spin. Iron(II) complexes are less stable than iron(III) complexes, but the preference for O-donor ligands is less marked, so that, for example, [Fe(NH3)6]2+ is known while [Fe(NH3)6]3+ is not. They have a tendency to be oxidized to iron(III) but this can be moderated by low pH and the specific ligands used. Organometallic compounds Organoiron chemistry is the study of organometallic compounds of iron, where carbon atoms are covalently bound to the metal atom. They are many and varied, including cyanide complexes, carbonyl complexes, sandwich and half-sandwich compounds. Prussian blue or "ferric ferrocyanide", Fe4[Fe(CN)6]3, is an old and well-known iron-cyanide complex, extensively used as a pigment and in several other applications. Its formation can be used as a simple wet chemistry test to distinguish between aqueous solutions of Fe2+ and Fe3+ as they react (respectively) with potassium ferricyanide and potassium ferrocyanide to form Prussian blue. Another old example of an organoiron compound is iron pentacarbonyl, Fe(CO)5, in which a neutral iron atom is bound to the carbon atoms of five carbon monoxide molecules. The compound can be used to make carbonyl iron powder, a highly reactive form of metallic iron. Thermolysis of iron pentacarbonyl gives triiron dodecacarbonyl, Fe3(CO)12, a complex with a cluster of three iron atoms at its core.
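The relation stated above (the spin quantum number is half the number of unpaired electrons) pairs naturally with the standard spin-only estimate of the magnetic moment, μ = √(n(n+2)) in Bohr magnetons. A brief sketch:

```python
import math

def spin_quantum_number(unpaired: int) -> float:
    """S = n/2: half the number of unpaired electrons."""
    return unpaired / 2

def spin_only_moment(unpaired: int) -> float:
    """Spin-only magnetic moment in Bohr magnetons: sqrt(n * (n + 2))."""
    return math.sqrt(unpaired * (unpaired + 2))

# High-spin Fe3+ (d5) has 5 unpaired electrons; low-spin [Fe(CN)6]3- has 1.
print(spin_quantum_number(5))         # 2.5
print(round(spin_only_moment(5), 2))  # 5.92
print(round(spin_only_moment(1), 2))  # 1.73
```

Measured moments distinguish high-spin from low-spin complexes in practice, which is why the spin-only formula is a standard first check.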
Collman's reagent, disodium tetracarbonylferrate, is a useful reagent for organic chemistry; it contains iron in the −2 oxidation state. Cyclopentadienyliron dicarbonyl dimer contains iron in the rare +1 oxidation state. A landmark in this field was the discovery in 1951 of the remarkably stable sandwich compound ferrocene, Fe(C5H5)2, by Pauson and Kealy and independently by Miller and others, whose surprising molecular structure was determined only a year later by Woodward and Wilkinson and, independently, by Fischer. Ferrocene is still one of the most important tools and models in this class. Iron-centered organometallic species are used as catalysts. The Knölker complex, for example, is a transfer hydrogenation catalyst for ketones. Industrial uses The iron compounds produced on the largest scale in industry are iron(II) sulfate (FeSO4·7H2O) and iron(III) chloride (FeCl3). The former is one of the most readily available sources of iron(II), but is less stable to aerial oxidation than Mohr's salt ((NH4)2Fe(SO4)2·6H2O). Iron(II) compounds tend to be oxidized to iron(III) compounds in the air. History Development of iron metallurgy Iron is one of the elements undoubtedly known to the ancient world. It has been worked, or wrought, for millennia. However, iron objects of great age are much rarer than objects made of gold or silver due to the ease with which iron corrodes. The technology developed slowly, and even after the discovery of smelting it took many centuries for iron to replace bronze as the metal of choice for tools and weapons. Meteoritic iron Beads made from meteoric iron in 3500 BC or earlier were found in Gerzeh, Egypt by G.A. Wainwright. The beads contain 7.5% nickel, which is a signature of meteoric origin, since iron found in the Earth's crust generally has only minuscule nickel impurities. Meteoric iron was highly regarded due to its origin in the heavens and was often used to forge weapons and tools.
For example, a dagger made of meteoric iron was found in the tomb of Tutankhamun, containing similar proportions of iron, cobalt, and nickel to a meteorite discovered in the area, deposited by an ancient meteor shower. Items that were likely made of iron by Egyptians date from 3000 to 2500 BC. Meteoritic iron is comparatively soft and ductile and easily cold forged, but may become brittle when heated, because of the nickel content. Wrought iron The first iron production started in the Middle Bronze Age, but it took several centuries before iron displaced bronze. Samples of smelted iron from Asmar, Mesopotamia and Tall Chagar Bazaar in northern Syria were made sometime between 3000 and 2700 BC. The Hittites established an empire in north-central Anatolia around 1600 BC. They appear to be the first to understand the production of iron from its ores and to regard it highly in their society. The Hittites began to smelt iron between 1500 and 1200 BC, and the practice spread to the rest of the Near East after their empire fell in 1180 BC. The subsequent period is called the Iron Age. Artifacts of smelted iron are found in India dating from 1800 to 1200 BC, and in the Levant from about 1500 BC (suggesting smelting in Anatolia or the Caucasus). Alleged references (compare the history of metallurgy in South Asia) to iron in the Indian Vedas have been used to claim very early usage of iron in India, and correspondingly to date the texts themselves. The Rigvedic term ayas (metal) refers to copper, while iron, called śyāma ayas (literally "black copper"), is first mentioned in the post-Rigvedic Atharvaveda. Some archaeological evidence suggests iron was smelted in Zimbabwe and southeast Africa as early as the eighth century BC. Iron working was introduced to Greece in the late 11th century BC, from which it spread quickly throughout Europe. The spread of ironworking in Central and Western Europe is associated with Celtic expansion.
According to Pliny the Elder, iron use was common in the Roman era. In the lands of what is now considered China, iron appears in approximately 700–500 BC. Iron smelting may have been introduced into China through Central Asia. The earliest evidence of the use of a blast furnace in China dates to the 1st century AD, and cupola furnaces were used as early as the Warring States period (403–221 BC). Usage of the blast and cupola furnace remained widespread during the Song and Tang Dynasties. During the Industrial Revolution in Britain, Henry Cort began refining iron from pig iron to wrought iron (or bar iron) using innovative production systems. In 1783 he patented the puddling process for refining pig iron. It was later improved by others, including Joseph Hall. Cast iron Cast iron was first produced in China during the 5th century BC, but was scarcely made in Europe until the medieval period. The earliest cast iron artifacts were discovered by archaeologists in what is now modern Luhe County, Jiangsu in China. Cast iron was used in ancient China for warfare, agriculture, and architecture. During the medieval period, means were found in Europe of producing wrought iron from cast iron (in this context known as pig iron) using finery forges. For all these processes, charcoal was required as fuel. Medieval blast furnaces were built of fireproof brick; forced air was usually provided by hand-operated bellows. Modern blast furnaces have grown much bigger, with hearths fourteen meters in diameter that allow them to produce thousands of tons of iron each day, but they essentially operate in much the same way as they did during medieval times. In 1709, Abraham Darby I established a coke-fired blast furnace to produce cast iron, replacing charcoal as the fuel while retaining the blast furnace itself. The ensuing availability of inexpensive iron was one of the factors leading to the Industrial Revolution.
Toward the end of the 18th century, cast iron began to replace wrought iron for certain purposes, because it was cheaper. Carbon content in iron was not implicated as the reason for the differences in properties of wrought iron, cast iron, and steel until the 18th century. Since iron was becoming cheaper and more plentiful, it also became a major structural material following the building of the innovative first iron bridge in 1778. This bridge still stands today as a monument to the role iron played in the Industrial Revolution. Following this, iron was used in rails, boats, ships, aqueducts, and buildings, as well as in iron cylinders in steam engines. Railways have been central to the formation of modernity and ideas of progress, and various languages (e.g. French, Spanish, Italian and German) refer to railways as the "iron road". Steel Steel (with a smaller carbon content than pig iron but more than wrought iron) was first produced in antiquity by using a bloomery. Blacksmiths in Luristan in western Persia were making good steel by 1000 BC. Improved versions, Wootz steel in India and Damascus steel, were developed around 300 BC and AD 500 respectively. These methods were specialized, and so steel did not become a major commodity until the 1850s. New methods of producing it by carburizing bars of iron in the cementation process were devised in the 17th century. In the Industrial Revolution, new methods of producing bar iron without charcoal were devised and these were later applied to produce steel. In the late 1850s, Henry Bessemer invented a new steelmaking process, involving blowing air through molten pig iron, to produce mild steel. This made steel much more economical, thereby leading to wrought iron no longer being produced in large quantities.
Foundations of modern chemistry In 1774, Antoine Lavoisier used the reaction of steam with metallic iron inside an incandescent iron tube to produce hydrogen in his experiments leading to the demonstration of the conservation of mass, which was instrumental in changing chemistry from a qualitative science to a quantitative one. Symbolic role Iron plays a role in mythology and has found various uses as a metaphor and in folklore. The Greek poet Hesiod's Works and Days (lines 109–201) lists different ages of man named after metals like gold, silver, bronze and iron to account for successive ages of humanity. The Iron Age was closely associated with Rome and appears in Ovid's Metamorphoses. An example of the importance of iron's symbolic role may be found in the German Campaign of 1813, when Frederick William III commissioned the first Iron Cross as a military decoration. Berlin iron jewellery reached its peak production between 1813 and 1815, when the Prussian royal family urged citizens to donate gold and silver jewellery for military funding. The inscription Gold gab ich für Eisen (I gave gold for iron) was also used in later war efforts. Laboratory routes For a few limited purposes where it is needed, pure iron is produced in the laboratory in small quantities by reducing the pure oxide or hydroxide with hydrogen, or by forming iron pentacarbonyl and heating it to 250 °C so that it decomposes to form pure iron powder. Another method is electrolysis of ferrous chloride onto an iron cathode. Main industrial route Nowadays, the industrial production of iron or steel consists of two main stages. In the first stage, iron ore is reduced with coke in a blast furnace, and the molten metal is separated from gross impurities such as silicate minerals. This stage yields an alloy—pig iron—that contains relatively large amounts of carbon. In the second stage, the amount of carbon in the pig iron is lowered by oxidation to yield wrought iron, steel, or cast iron.
Other metals can be added at this stage to form alloy steels. Blast furnace processing The blast furnace is loaded with iron ores, usually hematite or magnetite, together with coke (coal that has been separately baked to remove volatile components). Air pre-heated to 900 °C is blown through the mixture, in sufficient amount to turn the carbon into carbon monoxide: 2 C + O2 → 2 CO This reaction raises the temperature to about 2000 °C. The carbon monoxide reduces the iron ore to metallic iron: Fe2O3 + 3 CO → 2 Fe + 3 CO2 Some iron oxide in the high-temperature lower region of the furnace reacts directly with the coke: 2 Fe2O3 + 3 C → 4 Fe + 3 CO2 A flux such as limestone (calcium carbonate) or dolomite (calcium-magnesium carbonate) is also added to the furnace's load. Its purpose is to remove silicaceous minerals in the ore, which would otherwise clog the furnace. The heat of the furnace decomposes the carbonates to calcium oxide, which reacts with any excess silica to form a slag composed of calcium silicate or other products. At the furnace's temperature, the metal and the slag are both molten. They collect at the bottom as two immiscible liquid layers (with the slag on top), which are then easily separated. The slag can be used as a material in road construction or to improve mineral-poor soils for agriculture. Steelmaking The pig iron produced by the blast furnace process contains up to 4–5% carbon (by mass), with small amounts of other impurities like sulfur, silicon, phosphorus, and manganese. This high level of carbon makes it relatively weak and brittle. Reducing the amount of carbon to 0.002–2.1% produces steel, which may be up to 1000 times harder than pure iron. A great variety of steel articles can then be made by cold working, hot rolling, forging, machining, etc. Removing the impurities from pig iron, but leaving 2–4% carbon, results in cast iron, which is cast by foundries into articles such as stoves, pipes, radiators, lamp-posts, and rails.
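As a rough check on the furnace chemistry above, the overall reduction Fe2O3 + 3 CO → 2 Fe + 3 CO2 fixes how much hematite and carbon monoxide are consumed per tonne of iron. The sketch below is illustrative only (it assumes pure hematite and complete reduction by CO, and the function name is invented for this example):

```python
# Stoichiometry of Fe2O3 + 3 CO -> 2 Fe + 3 CO2, using approximate molar masses (g/mol).
M_FE = 55.85
M_O = 16.00
M_C = 12.01

M_FE2O3 = 2 * M_FE + 3 * M_O   # ~159.70 g/mol
M_CO = M_C + M_O               # ~28.01 g/mol

def ore_and_co_per_tonne_iron(tonnes_fe=1.0):
    """Return (tonnes Fe2O3, tonnes CO) consumed per tonne of iron produced,
    assuming pure hematite and complete reduction by carbon monoxide."""
    mol_fe = tonnes_fe * 1e6 / M_FE   # grams of Fe -> moles of Fe
    mol_fe2o3 = mol_fe / 2            # 2 Fe per Fe2O3
    mol_co = 3 * mol_fe2o3            # 3 CO per Fe2O3
    return mol_fe2o3 * M_FE2O3 / 1e6, mol_co * M_CO / 1e6

ore, co = ore_and_co_per_tonne_iron()
print(f"{ore:.2f} t Fe2O3 and {co:.2f} t CO per tonne of Fe")
```

Roughly 1.4 tonnes of pure hematite are needed per tonne of iron; real ores contain gangue, so actual ore consumption is higher.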
Steel products often undergo various heat treatments after they are forged to shape. Annealing consists of heating them to 700–800 °C for several hours and then cooling them gradually. It makes the steel softer and more workable. Direct iron reduction Owing to environmental concerns, alternative methods of processing iron have been developed. "Direct iron reduction" reduces iron ore to a ferrous lump called "sponge" iron or "direct" iron that is suitable for steelmaking. Two main reactions comprise the direct reduction process: Natural gas is partially oxidized (with heat and a catalyst): 2 CH4 + O2 → 2 CO + 4 H2 Iron ore is then treated with these gases in a furnace, producing solid sponge iron: Fe2O3 + CO + 2 H2 → 2 Fe + CO2 + 2 H2O Silica is removed by adding a limestone flux as described above. Thermite process Ignition of a mixture of aluminium powder and iron oxide yields metallic iron via the thermite reaction: Fe2O3 + 2 Al → 2 Fe + Al2O3 Alternatively, pig iron may be made into steel (with up to about 2% carbon) or wrought iron (commercially pure iron). Various processes have been used for this, including finery forges, puddling furnaces, Bessemer converters, open hearth furnaces, basic oxygen furnaces, and electric arc furnaces. In all cases, the objective is to oxidize some or all of the carbon, together with other impurities. On the other hand, other metals may be added to make alloy steels. Applications As structural material Iron is the most widely used of all the metals, accounting for over 90% of worldwide metal production. Its low cost and high strength often make it the material of choice to withstand stress or transmit forces, as in the construction of machinery and machine tools, rails, automobiles, ship hulls, concrete reinforcing bars, and the load-carrying framework of buildings. Since pure iron is quite soft, it is most commonly combined with alloying elements to make steel.
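The thermite equation above conserves mass, and the stoichiometric mixing ratio of iron oxide to aluminium follows directly from molar masses. A minimal check (molar masses approximate; variable names invented for this example):

```python
# Mass balance for the thermite reaction: Fe2O3 + 2 Al -> 2 Fe + Al2O3.
M_FE, M_O, M_AL = 55.85, 16.00, 26.98
M_FE2O3 = 2 * M_FE + 3 * M_O   # ~159.70 g/mol
M_AL2O3 = 2 * M_AL + 3 * M_O   # ~101.96 g/mol

reactants = M_FE2O3 + 2 * M_AL   # total mass in, per mole of Fe2O3
products = 2 * M_FE + M_AL2O3    # total mass out

ratio = M_FE2O3 / (2 * M_AL)     # oxide-to-aluminium mass ratio in the mixture
print(f"mix ratio Fe2O3:Al = {ratio:.2f}:1")
```

The stoichiometric mixture is roughly three parts iron oxide to one part aluminium by mass, and the reactant and product masses balance exactly, as they must.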
Mechanical properties The mechanical properties of iron and its alloys are extremely relevant to their structural applications. Those properties |
PHY Amendment for Smart Utility Networks (4g) The IEEE 802.15.4g Smart Utility Networks (SUN) Task Group is chartered to create a PHY amendment to 802.15.4 to provide a standard that facilitates very large scale process control applications such as the utility smart grid network, capable of supporting large, geographically diverse networks with minimal infrastructure and potentially millions of fixed endpoints. In 2012 it released the 802.15.4g radio standard. The Telecommunications Industry Association TR-51 committee develops standards for similar applications. Enhanced Ultra Wideband (UWB) Physical Layers (PHYs) and Associated Ranging Techniques (4z) Approved in 2020, this amendment enhances the UWB PHYs (e.g. with additional coding options) to increase accuracy and to exchange ranging-related information between the participating devices. IEEE 802.15.5: Mesh Networking IEEE 802.15.5 provides the architectural framework enabling WPAN devices to promote interoperable, stable, and scalable wireless mesh networking. This standard is composed of two parts: low-rate WPAN mesh and high-rate WPAN mesh networks. The low-rate mesh is built on the IEEE 802.15.4-2006 MAC, while the high-rate mesh utilizes the IEEE 802.15.3/3b MAC. The common features of both meshes include network initialization, addressing, and multihop unicasting. In addition, the low-rate mesh supports multicasting, reliable broadcasting, portability support, trace route and an energy-saving function, while the high-rate mesh supports multihop time-guaranteed service. Mesh networking for IEEE 802.15.1 networks is beyond the scope of IEEE 802.15.5 and is carried out within the Bluetooth mesh working group. IEEE 802.15.6: Body Area Networks In December 2011, the IEEE 802.15.6 task group approved a draft of a standard for Body Area Network (BAN) technologies. The draft was approved on 22 July 2011 by Letter Ballot to start the Sponsor Ballot process.
Task Group 6 was formed in November 2007 to focus on a low-power and short-range wireless standard to be optimized for devices and operation on, in, or around the human body (but not limited to humans) to serve a variety of applications including medical, consumer electronics, and personal entertainment. IEEE 802.15.7: Visible Light Communication As of December 2011, the IEEE 802.15.7 Visible Light Communication Task Group had completed draft 5c of a PHY and MAC standard for Visible Light Communication (VLC). The inaugural meeting for Task Group 7 was held during January 2009, where it was chartered to write standards for free-space optical communication using visible light. IEEE P802.15.8: Peer Aware Communications IEEE P802.15.8 received IEEE Standards Board approval on 29 March 2012 to form a Task Group to develop a standard for Peer Aware Communications (PAC) optimized for peer-to-peer and infrastructureless communications with fully distributed coordination, operating in bands below 11 GHz. The proposed standard is targeting data rates greater than 100 kbit/s with scalable data rates up to 10 Mbit/s. Features of the proposed standard include: discovery of peer information without association; discovery of the number of devices in the network; group communications with simultaneous membership in multiple groups (typically up to 10); relative positioning; multi-hop relay; and security. The draft standard is under development; more information can be found on the IEEE 802.15 Task Group 8 web page. IEEE P802.15.9: Key Management Protocol IEEE P802.15.9 received IEEE Standards Board approval on 7 December 2011 to form a Task Group to develop a recommended practice for the transport of Key Management Protocol (KMP) datagrams. The recommended practice will define a message framework based on Information Elements as a transport method for KMP datagrams and guidelines for the use of some existing KMPs. Millimeter Wave Alternative PHY (3c) The IEEE 802.15.3 Task Group 3c (TG3c) was formed in March 2005.
This mmWave WPAN is defined to operate in the 57–66 GHz range. Depending on the geographical region, anywhere from 2 to 9 GHz of bandwidth is available (for example, 57–64 GHz is available as an unlicensed band defined by FCC 47 CFR 15.255 in North America). The millimeter-wave WPAN allows very high data rates over short range (10 m) for applications including high speed internet access, streaming content download (video on demand, HDTV, home theater, etc.), real-time streaming, and wireless data bus for cable replacement. A total of three PHY modes were defined in the standard: single carrier (SC) mode (up to 5.3 Gbit/s); high speed interface (HSI) mode (single carrier, up to 5 Gbit/s); and audio/visual (AV) mode (OFDM, up to 3.8 Gbit/s). IEEE 802.15.4: Low Rate WPAN IEEE 802.15.4-2003 (Low Rate WPAN) deals with low data rates but very long battery life (months or even years) and very low complexity. The standard defines both the physical (Layer 1) and data-link (Layer 2) layers of the OSI model. The first edition of the 802.15.4 standard was released in May 2003. Several standardized and proprietary network (or mesh) layer protocols run over 802.15.4-based networks, including IEEE 802.15.5, ZigBee, Thread, 6LoWPAN, WirelessHART, and ISA100.11a. WPAN Low Rate Alternative PHY (4a) IEEE 802.15.4a (formally called IEEE 802.15.4a-2007) is an amendment to IEEE 802.15.4 specifying additional physical layers (PHYs) to the original standard. The principal interest was in providing higher-precision ranging and localization capability (1 meter accuracy and better), higher aggregate throughput, adding scalability to data rates, longer range, and lower power consumption and cost. The selected baselines are two optional PHYs consisting of a UWB pulse radio (operating in unlicensed UWB spectrum) and a chirp spread spectrum (operating in the unlicensed 2.4 GHz spectrum).
The pulsed UWB radio is based on continuous pulsed UWB technology (see C-UWB) and will be able to deliver communications and high-precision ranging. Revision and Enhancement (4b) IEEE 802.15.4b was approved in June 2006 and was published in September 2006 as IEEE 802.15.4-2006. The IEEE 802.15 task group 4b was chartered to create a project for specific enhancements and clarifications to the IEEE 802.15.4-2003 standard, such as resolving ambiguities, reducing unnecessary complexity, increasing flexibility in security key usage, considerations for newly available frequency allocations, and others. PHY Amendment for China (4c) IEEE 802.15.4c was approved in 2008 and was published in January 2009. This PHY amendment adds new RF spectrum specifications to address the Chinese regulatory changes which opened the 314–316 MHz, 430–434 MHz, and 779–787 MHz bands for wireless PAN use within China. PHY and MAC Amendment for Japan (4d) The IEEE 802.15 Task Group 4d was chartered to define an amendment to the 802.15.4-2006 standard. The amendment defines a new PHY and such changes to the MAC as are necessary to support a new frequency allocation (950–956 MHz) in Japan while coexisting with passive tag systems in the band. MAC Amendment for Industrial Applications (4e) The IEEE 802.15 Task Group 4e is chartered to define a MAC amendment to the existing standard 802.15.4-2006. The intent of this amendment is to enhance and add functionality to the 802.15.4-2006 MAC to a) better support the industrial markets and b) permit compatibility with modifications being proposed within the Chinese WPAN. Specific enhancements were made to add channel hopping and a variable time slot option compatible with ISA100.11a. These changes were approved in 2011.
PHY and MAC Amendment for Active RFID (4f) The IEEE 802.15.4f Active RFID System Task Group is chartered to define new wireless physical (PHY) layer(s) and enhancements to the 802.15.4-2006 standard MAC layer which are required to support new PHY(s) for active RFID system bi-directional and location determination applications.
outside the scope of the IEEE 802 standards. The number 802 has no significance: it was simply the next number in the sequence that the IEEE used for standards projects. The services and protocols specified in IEEE 802 map to the lower two layers (data link and physical) of the seven-layer Open Systems Interconnection (OSI) networking reference model. IEEE 802 divides the OSI data link layer into two sub-layers: logical link control (LLC) and medium access control (MAC), as follows: Data link layer LLC sublayer MAC sublayer Physical layer The most widely used of these standards are for the Ethernet family, Token Ring, wireless
802.11ax focuses on overall improvements to clients in dense environments. For an individual client, the maximum improvement in data rate (PHY speed) over its predecessor (802.11ac) is only 39% (for comparison, this improvement was nearly 500% for the predecessors). Yet, even with this comparatively minor 39% figure, the goal was to provide four times the throughput-per-area of 802.11ac (hence "High Efficiency"). The motivation behind this goal was the deployment of WLANs in dense environments such as corporate offices, shopping malls and dense residential apartments. This is achieved by means of a technique called OFDMA, which is basically multiplexing in the frequency domain (as opposed to spatial multiplexing, as in 802.11ac). This is equivalent to cellular technology applied to Wi-Fi. The IEEE 802.11ax-2021 standard was approved on February 9, 2021. 802.11ay IEEE 802.11ay is a standard that is being developed, also called EDMG: Enhanced Directional MultiGigabit PHY. It is an amendment that defines a new physical layer for 802.11 networks to operate in the 60 GHz millimeter wave spectrum. It will be an extension of the existing 11ad, aimed at extending the throughput, range, and use cases. The main use cases include indoor operation and short-range communications due to atmospheric oxygen absorption and inability to penetrate walls. The peak transmission rate of 802.11ay is 40 Gbit/s. The main extensions include: channel bonding (2, 3 and 4 channels), MIMO (up to 4 streams) and higher modulation schemes. The expected range is 300–500 m. 802.11ba IEEE 802.11ba Wake-up Radio (WUR) Operation is an amendment to the IEEE 802.11 standard that enables energy-efficient operation for data reception without increasing latency. The target active power consumption to receive a WUR packet is less than 1 milliwatt, with support for data rates of 62.5 kbit/s and 250 kbit/s. The WUR PHY uses MC-OOK (multicarrier OOK) to achieve extremely low power consumption. 802.11be IEEE 802.11be Extremely High Throughput (EHT) is the potential next amendment to the 802.11 IEEE standard, and will likely be designated as Wi-Fi 7. It will build upon 802.11ax, focusing on WLAN indoor and outdoor operation with stationary and pedestrian speeds in the 2.4 GHz, 5 GHz, and 6 GHz frequency bands.
Common misunderstandings about achievable throughput Across all variations of 802.11, maximum achievable throughputs are given either based on measurements under ideal conditions or in the layer-2 data rates. However, this does not apply to typical deployments in which data is being transferred between two endpoints, of which at least one is typically connected to a wired infrastructure and the other endpoint is connected to an infrastructure via a wireless link. This means that, typically, data frames pass an 802.11 (WLAN) medium and are being converted to 802.3 (Ethernet) or vice versa. Due to the difference in the frame (header) lengths of these two media, the application's packet size determines the speed of the data transfer. This means applications that use small packets (e.g., VoIP) create dataflows with high-overhead traffic (i.e., a low goodput). Other factors that contribute to the overall application data rate are the speed with which the application transmits the packets (i.e., the data rate) and, of course, the energy with which the wireless signal is received. The latter is determined by distance and by the configured output power of the communicating devices. The same references apply to the attached graphs that show measurements of UDP throughput. Each represents an average (UDP) throughput (please note that the error bars are there but barely visible due to the small variation) of 25 measurements. Each is with a specific packet size (small or large) and with a specific data rate (10 kbit/s – 100 Mbit/s). Markers for traffic profiles of common applications are included as well. These figures assume there are no packet errors, which, if occurring, will lower the transmission rate further. Channels and frequencies 802.11b, 802.11g, and 802.11n-2.4 utilize the 2.4 GHz spectrum, one of the ISM bands. 802.11a, 802.11n, and 802.11ac use the more heavily regulated 5 GHz band. These are commonly referred to as the "2.4 GHz and 5 GHz bands" in most sales literature.
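The effect of per-frame overhead on goodput described above can be illustrated with a toy model: the useful fraction of each frame is payload / (payload + overhead). The 80-byte overhead figure below is a hypothetical round number standing in for MAC header, encapsulation, and FCS bytes, not a measured value:

```python
# Toy per-frame efficiency model: small packets waste a larger fraction
# of each frame on fixed headers than large packets do.
OVERHEAD_BYTES = 80  # hypothetical fixed per-frame overhead, for illustration only

def goodput_fraction(payload_bytes):
    """Fraction of transmitted bytes that is useful application payload."""
    return payload_bytes / (payload_bytes + OVERHEAD_BYTES)

for payload in (160, 1460):  # small VoIP-like packet vs large bulk-transfer packet
    print(f"{payload:5d} B payload -> {goodput_fraction(payload):.0%} efficiency")
```

Under this model a small VoIP-style packet spends about a third of its airtime on overhead, while a near-maximum Ethernet payload wastes only a few percent, which is why small-packet applications see a goodput far below the advertised layer-2 rate.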
Each spectrum is sub-divided into channels with a center frequency and bandwidth, analogous to how radio and TV broadcast bands are sub-divided. The 2.4 GHz band is divided into 14 channels spaced 5 MHz apart, beginning with channel 1, which is centered on 2.412 GHz. The latter channels have additional restrictions or are unavailable for use in some regulatory domains. The channel numbering of the 5 GHz spectrum is less intuitive due to the differences in regulations between countries. These are discussed in greater detail on the list of WLAN channels. Channel spacing within the 2.4 GHz band In addition to specifying the channel center frequency, 802.11 also specifies (in Clause 17) a spectral mask defining the permitted power distribution across each channel. The mask requires the signal to be attenuated a minimum of 20 dB from its peak amplitude at ±11 MHz from the center frequency, the point at which a channel is effectively 22 MHz wide. One consequence is that stations can use only every fourth or fifth channel without overlap. Availability of channels is regulated by country, constrained in part by how each country allocates radio spectrum to various services. At one extreme, Japan permits the use of all 14 channels for 802.11b, and channels 1 through 13 for 802.11g/n-2.4. Other countries such as Spain initially allowed only channels 10 and 11, and France allowed only 10, 11, 12, and 13; however, Europe now allows channels 1 through 13. North America and some Central and South American countries allow only channels 1 through 11. Since the spectral mask defines only power output restrictions up to ±11 MHz from the center frequency to be attenuated by −50 dBr, it is often assumed that the energy of the channel extends no further than these limits. It is more correct to say that the overlapping signal on any channel should be sufficiently attenuated to interfere only minimally with a transmitter on any other channel, given the separation between channels.
Due to the near–far problem, a transmitter can impact (desensitize) a receiver on a "non-overlapping" channel, but only if it is close to the victim receiver (within a meter) or operating above allowed power levels. Conversely, a sufficiently distant transmitter on an overlapping channel can have little to no significant effect. Confusion often arises over the amount of channel separation required between transmitting devices. 802.11b was based on direct-sequence spread spectrum (DSSS) modulation and utilized a channel bandwidth of 22 MHz, resulting in three "non-overlapping" channels (1, 6, and 11). 802.11g was based on OFDM modulation and utilized a channel bandwidth of 20 MHz. This occasionally leads to the belief that four "non-overlapping" channels (1, 5, 9, and 13) exist under 802.11g. However, this is not the case as per 17.4.6.3 Channel Numbering of operating channels of IEEE Std 802.11 (2012), which states, "In a multiple cell network topology, overlapping and/or adjacent cells using different channels can operate simultaneously without interference if the distance between the center frequencies is at least 25 MHz." and section 18.3.9.3 and Figure 18-13. This does not mean, however, that the technical overlap of the channels rules out the use of overlapping channels. The amount of inter-channel interference seen on a configuration using channels 1, 5, 9, and 13 (which is permitted in Europe, but not in North America) is barely different from that of a three-channel configuration, but with an entire extra channel available. However, overlap between channels with narrower spacing (e.g. 1, 4, 7, 11 in North America) may cause unacceptable degradation of signal quality and throughput, particularly when users transmit near the boundaries of AP cells. Regulatory domains and legal compliance IEEE uses the phrase regdomain to refer to a legal regulatory region.
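The channel-spacing arithmetic above (5 MHz steps from channel 1 at 2412 MHz, and the 25 MHz separation rule of IEEE 802.11-2012 17.4.6.3) can be sketched in a few lines; the function names here are illustrative, not from any standard API:

```python
# 2.4 GHz channel centers: channel 1 is 2412 MHz and spacing is 5 MHz;
# channel 14 is a special case at 2484 MHz.
def channel_center_mhz(ch):
    return 2484 if ch == 14 else 2407 + 5 * ch

def non_interfering(ch_a, ch_b):
    # Per IEEE 802.11-2012 17.4.6.3: cells on different channels can operate
    # simultaneously without interference if their center frequencies are
    # at least 25 MHz apart.
    return abs(channel_center_mhz(ch_a) - channel_center_mhz(ch_b)) >= 25

# Greedily pick mutually non-interfering channels from 1..13:
picked = []
for ch in range(1, 14):
    if all(non_interfering(ch, c) for c in picked):
        picked.append(ch)
print(picked)  # the classic 1/6/11 set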
Different countries define different levels of allowable transmitter power, time that a channel can be occupied, and different available channels. Domain codes are specified for the United States, Canada, ETSI (Europe), Spain, France, Japan, and China. Most Wi-Fi certified devices default to regdomain 0, which means least common denominator settings, i.e., the device will not transmit at a power above the allowable power in any nation, nor will it use frequencies that are not permitted in any nation. The regdomain setting is often made difficult or impossible to change so that the end-users do not conflict with local regulatory agencies such as the United States' Federal Communications Commission. Layer 2 – Datagrams The datagrams are called frames. Current 802.11 standards specify frame types for use in the transmission of data as well as management and control of wireless links. Frames are divided into very specific and standardized sections. Each frame consists of a MAC header, payload, and frame check sequence (FCS). Some frames may not have a payload. The first two bytes of the MAC header form a frame control field specifying the form and function of the frame. This frame control field is subdivided into the following sub-fields: Protocol Version: Two bits representing the protocol version. The currently used protocol version is zero. Other values are reserved for future use. Type: Two bits identifying the type of WLAN frame. Control, Data, and Management are various frame types defined in IEEE 802.11. Subtype: Four bits providing additional discrimination between frames. Type and Subtype are used together to identify the exact frame. ToDS and FromDS: Each is one bit in size. They indicate whether a data frame is headed for a distribution system. Control and management frames set these values to zero. All the data frames will have one of these bits set. 
However, communication within an independent basic service set (IBSS) network always sets these bits to zero. More Fragments: The More Fragments bit is set when a packet is divided into multiple frames for transmission. Every frame except the last frame of a packet will have this bit set. Retry: Sometimes frames require retransmission, and for this there is a Retry bit that is set to one when a frame is resent. This aids in the elimination of duplicate frames. Power Management: This bit indicates the power management state of the sender after the completion of a frame exchange. Access points are required to manage the connection and will never set the power-saver bit. More Data: The More Data bit is used to buffer frames received in a distribution system. The access point uses this bit to facilitate stations in power-saver mode. It indicates that at least one frame is available, and it addresses all connected stations. Protected Frame: The Protected Frame bit is set to the value of one if the frame body is encrypted by a protection mechanism such as Wired Equivalent Privacy (WEP), Wi-Fi Protected Access (WPA), or Wi-Fi Protected Access II (WPA2). Order: This bit is set only when the "strict ordering" delivery method is employed. Frames and fragments are not always sent in order, as strict ordering causes a transmission performance penalty. The next two bytes are reserved for the Duration ID field, indicating how long the frame's transmission will take so other devices know when the channel will be available again. This field can take one of three forms: Duration, Contention-Free Period (CFP), and Association ID (AID). An 802.11 frame can have up to four address fields. Each field can carry a MAC address. Address 1 is the receiver, Address 2 is the transmitter, and Address 3 is used for filtering purposes by the receiver. Address 4 is only present in data frames transmitted between access points in an Extended Service Set or between intermediate nodes in a mesh network.
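The frame control subfields listed above pack into the first two bytes of the MAC header. A minimal decoder, as a sketch (the dictionary keys are illustrative names, not taken from the standard):

```python
# Decode the 2-byte 802.11 frame control field. The first byte carries
# protocol version (2 bits), type (2 bits) and subtype (4 bits); the second
# byte carries the eight single-bit flags listed above.
def parse_frame_control(b0, b1):
    return {
        "protocol_version": b0 & 0b11,
        "type":            (b0 >> 2) & 0b11,   # 0=management, 1=control, 2=data
        "subtype":         (b0 >> 4) & 0b1111,
        "to_ds":           b1 & 0b1,
        "from_ds":         (b1 >> 1) & 0b1,
        "more_fragments":  (b1 >> 2) & 0b1,
        "retry":           (b1 >> 3) & 0b1,
        "power_mgmt":      (b1 >> 4) & 0b1,
        "more_data":       (b1 >> 5) & 0b1,
        "protected":       (b1 >> 6) & 0b1,
        "order":           (b1 >> 7) & 0b1,
    }

# 0x08, 0x02: a data frame (type 2, subtype 0) with FromDS set,
# i.e. travelling from an access point toward a station.
fc = parse_frame_control(0x08, 0x02)
print(fc["type"], fc["from_ds"])
```

Note how the ToDS/FromDS combination (here 0/1) distinguishes AP-to-station traffic, matching the description above that all data frames set one of these bits.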
The remaining fields of the header are: The Sequence Control field is a two-byte section used to identify message order and eliminate duplicate frames. The first 4 bits are used for the fragmentation number, and the last 12 bits are the sequence number. An optional two-byte Quality of Service control field, present in QoS Data frames; it was added with 802.11e. The payload or frame body field is variable in size, from 0 to 2304 bytes plus any overhead from security encapsulation, and contains information from higher layers. The Frame Check Sequence (FCS) is the last four bytes in the standard 802.11 frame. Often referred to as the Cyclic Redundancy Check (CRC), it allows for integrity checks of retrieved frames. As frames are about to be sent, the FCS is calculated and appended. When a station receives a frame, it can calculate the FCS of the frame and compare it to the one received. If they match, it is assumed that the frame was not distorted during transmission. Management frames Management frames are not always authenticated, and allow for the maintenance, or discontinuance, of communication. Some common 802.11 subtypes include: Authentication frame: 802.11 authentication begins with the wireless network interface card (WNIC) sending an authentication frame to the access point containing its identity. When open system authentication is being used, the WNIC sends only a single authentication frame, and the access point responds with an authentication frame of its own indicating acceptance or rejection. When shared key authentication is being used, the WNIC sends an initial authentication request, and the access point responds with an authentication frame containing challenge text. The WNIC then sends an authentication frame containing the encrypted version of the challenge text to the access point. The access point ensures the text was encrypted with the correct key by decrypting it with its own key. 
The result of this process determines the WNIC's authentication status. Association request frame: Sent from a station, it enables the access point to allocate resources and synchronize. The frame carries information about the WNIC, including supported data rates and the SSID of the network the station wishes to associate with. If the request is accepted, the access point reserves memory and establishes an association ID for the WNIC. Association response frame: Sent from an access point to a station containing the acceptance or rejection of an association request. If it is an acceptance, the frame will contain information such as an association ID and supported data rates. Beacon frame: Sent periodically from an access point to announce its presence and provide the SSID and other parameters for WNICs within range. Deauthentication frame: Sent from a station wishing to terminate the connection with another station. Disassociation frame: Sent from a station wishing to terminate the connection. It is an elegant way to allow the access point to relinquish memory allocation and remove the WNIC from the association table. Probe request frame: Sent from a station when it requires information from another station. Probe response frame: Sent from an access point containing capability information, supported data rates, etc., after receiving a probe request frame. Reassociation request frame: A WNIC sends a reassociation request when it drops out of range of the currently associated access point and finds another access point with a stronger signal. The new access point coordinates the forwarding of any information that may still be contained in the buffer of the previous access point. Reassociation response frame: Sent from an access point containing the acceptance or rejection of a WNIC reassociation request frame. The frame includes information required for association, such as the association ID and supported data rates. Action frame: Extends the management frame to control a certain action.
Some of the action categories are Block Ack, Radio Measurement, Fast BSS Transition, etc. These frames are sent by a station when it needs to tell its peer that a certain action is to be taken. For example, a station can tell another station to set up a block acknowledgement by sending an ADDBA Request action frame. The other station would then respond with an ADDBA Response action frame. The body of a management frame consists of frame-subtype-dependent fixed fields followed by a sequence of information elements (IEs), each consisting of an element ID, a length field, and a variable-length element body. Control frames Control frames facilitate the exchange of data frames between stations. Some common 802.11 control frames include: Acknowledgement (ACK) frame: After receiving a data frame, the receiving station will send an ACK frame to the sending station if no errors are found.
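The Sequence Control split and the FCS check described earlier can be sketched as follows. Python's `zlib.crc32` uses the same CRC-32 polynomial as the 802.11 FCS, though on-air bit and byte ordering details are glossed over here, and the example frame bytes are arbitrary stand-ins:

```python
import zlib

# Sequence Control field: low 4 bits = fragment number, high 12 bits = sequence number.
def parse_sequence_control(value16):
    return value16 & 0xF, value16 >> 4

frag, seq = parse_sequence_control(0x0A35)   # fragment 5 of sequence 163

# FCS: CRC-32 computed over MAC header + frame body; a receiver recomputes it
# and compares with the value carried in the frame.
def fcs_ok(frame_bytes, received_fcs):
    return (zlib.crc32(frame_bytes) & 0xFFFFFFFF) == received_fcs

frame = b"\x08\x02" + bytes(22) + b"payload"   # arbitrary stand-in frame bytes
fcs = zlib.crc32(frame) & 0xFFFFFFFF
print(frag, seq, fcs_ok(frame, fcs))
```

If any bit of the frame is flipped in transit, the recomputed CRC no longer matches and the frame is discarded (and, for data frames, no ACK is returned, triggering the Retry mechanism described above).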
The main use-cases include indoor operation and short-range communications due to atmospheric oxygen absorption and inability to penetrate walls. The peak transmission rate of 802.11ay is 40 Gbit/s. The main extensions include: channel bonding (2, 3 and 4 channels), MIMO (up to 4 streams) and higher modulation schemes. The expected range is 300–500 m. 802.11ba IEEE 802.11ba Wake-up Radio (WUR) Operation is an amendment to the IEEE 802.11 standard that enables energy-efficient operation for data reception without increasing latency. The target active power consumption to receive a WUR packet is less than 1 milliwatt, and supported data rates are 62.5 kbit/s and 250 kbit/s. The WUR PHY uses MC-OOK (multicarrier OOK) to achieve extremely low power consumption. 802.11be IEEE 802.11be Extremely High Throughput (EHT) is the potential next amendment to the IEEE 802.11 standard, and will likely be designated as Wi-Fi 7. It will build upon 802.11ax, focusing on WLAN indoor and outdoor operation with stationary and pedestrian speeds in the 2.4 GHz, 5 GHz, and 6 GHz frequency bands. Common misunderstandings about achievable throughput Across all variations of 802.11, maximum achievable throughputs are stated either as measurements under ideal conditions or as layer-2 data rates. However, this does not apply to typical deployments in which data is transferred between two endpoints, of which at least one is typically connected to a wired infrastructure and the other is connected to that infrastructure via a wireless link. This means that, typically, data frames pass an 802.11 (WLAN) medium and are converted to 802.3 (Ethernet) or vice versa. Due to the difference in the frame (header) lengths of these two media, the application's packet size determines the speed of the data transfer. This means applications that use small packets (e.g., VoIP) create dataflows with high-overhead traffic (i.e., a low goodput).
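The packet-size effect described above can be sketched numerically. The per-frame overhead constant below is an illustrative assumption for the combined header cost, not a figure taken from the 802.11 or 802.3 specifications:

```python
# Sketch: how application packet size affects goodput on a WLAN link.
# OVERHEAD_BYTES is an assumed combined per-frame MAC/PHY/LLC overhead,
# chosen only for illustration.

OVERHEAD_BYTES = 90

def efficiency(payload_bytes: int, overhead_bytes: int = OVERHEAD_BYTES) -> float:
    """Fraction of transmitted bytes that carry application data."""
    return payload_bytes / (payload_bytes + overhead_bytes)

# Small VoIP-like packets waste far more airtime than large bulk-transfer packets.
print(f"{efficiency(160):.0%}")   # ~160-byte VoIP payload -> 64%
print(f"{efficiency(1460):.0%}")  # ~1460-byte bulk TCP payload -> 94%
```

With these assumed numbers, a VoIP-sized payload spends over a third of the transmitted bytes on headers, while a bulk-transfer payload loses only a few percent, which is why small-packet applications see a low goodput.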
Other factors that contribute to the overall application data rate are the speed with which the application transmits the packets (i.e., the data rate) and, of course, the energy with which the wireless signal is received. The latter is determined by distance and by the configured output power of the communicating devices. The same references apply to the attached graphs that show measurements of UDP throughput. Each represents an average (UDP) throughput (the error bars are present but barely visible due to the small variation) of 25 measurements. Each is with a specific packet size (small or large) and with a specific data rate (10 kbit/s – 100 Mbit/s). Markers for traffic profiles of common applications are included as well. These figures assume there are no packet errors, which, if occurring, will lower the transmission rate further. Channels and frequencies 802.11b, 802.11g, and 802.11n-2.4 utilize the 2.4 GHz spectrum, one of the ISM bands. 802.11a, 802.11n, and 802.11ac use the more heavily regulated 5 GHz band. These are commonly referred to as the "2.4 GHz and 5 GHz bands" in most sales literature. Each spectrum is sub-divided into channels with a center frequency and bandwidth, analogous to how radio and TV broadcast bands are sub-divided. The 2.4 GHz band is divided into 14 channels spaced 5 MHz apart, beginning with channel 1, which is centered on 2.412 GHz. The latter channels have additional restrictions or are unavailable for use in some regulatory domains. The channel numbering of the 5 GHz spectrum is less intuitive due to the differences in regulations between countries. These are discussed in greater detail in the list of WLAN channels. Channel spacing within the 2.4 GHz band In addition to specifying the channel center frequency, 802.11 also specifies (in Clause 17) a spectral mask defining the permitted power distribution across each channel.
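The channel arithmetic just described (5 MHz spacing starting from channel 1 at 2.412 GHz, with an effective 22 MHz channel width) can be expressed as a short sketch. Channel 14, which does not follow the 5 MHz spacing rule, is deliberately excluded:

```python
# Sketch of 2.4 GHz channel arithmetic: channels 1-13 are spaced 5 MHz
# apart starting at 2412 MHz, and each signal is effectively 22 MHz wide.

def center_mhz(channel: int) -> int:
    """Center frequency in MHz for 2.4 GHz channels 1-13."""
    if not 1 <= channel <= 13:
        raise ValueError("only channels 1-13 follow the 5 MHz spacing rule")
    return 2412 + 5 * (channel - 1)

def overlaps(a: int, b: int, width_mhz: int = 22) -> bool:
    """True if the two channels' 22 MHz-wide signals overlap."""
    return abs(center_mhz(a) - center_mhz(b)) < width_mhz

print(center_mhz(6))   # 2437
print(overlaps(1, 6))  # False: centers are 25 MHz apart
print(overlaps(1, 4))  # True: centers are only 15 MHz apart
```

This reproduces the familiar result that channels 1, 6, and 11 are mutually non-overlapping, while adjacent channels are not.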
The mask requires the signal to be attenuated a minimum of 20 dB from its peak amplitude at ±11 MHz from the center frequency, the point at which a channel is effectively 22 MHz wide. One consequence is that stations can use only every fourth or fifth channel without overlap. Availability of channels is regulated by country, constrained in part by how each country allocates radio spectrum to various services. At one extreme, Japan permits the use of all 14 channels for 802.11b, and channels 1 through 13 for 802.11g/n-2.4. Other countries such as Spain initially allowed only channels 10 and 11, and France allowed only 10, 11, 12, and 13; however, Europe now allows channels 1 through 13. North America and some Central and South American countries allow only channels 1 through 11. Since the spectral mask defines only power output restrictions up to ±22 MHz from the center frequency to be attenuated by −50 dBr, it is often assumed that the energy of the channel extends no further than these limits. It is more correct to say that the overlapping signal on any channel should be sufficiently attenuated to interfere minimally with a transmitter on any other channel, given the separation between channels. Due to the near–far problem, a transmitter can impact (desensitize) a receiver on a "non-overlapping" channel, but only if it is close to the victim receiver (within a meter) or operating above allowed power levels. Conversely, a sufficiently distant transmitter on an overlapping channel can have little to no significant effect. Confusion often arises over the amount of channel separation required between transmitting devices. 802.11b was based on direct-sequence spread spectrum (DSSS) modulation and utilized a channel bandwidth of 22 MHz, resulting in three "non-overlapping" channels (1, 6, and 11). 802.11g was based on OFDM modulation and utilized a channel bandwidth of 20 MHz. This occasionally leads to the belief that four "non-overlapping" channels (1, 5, 9, and 13) exist under 802.11g.
However, this is not the case as per 17.4.6.3 Channel Numbering of operating channels of the IEEE Std 802.11 (2012), which states, "In a multiple cell network topology, overlapping and/or adjacent cells using different channels can operate simultaneously without interference if the distance between the center frequencies is at least 25 MHz." and section 18.3.9.3 and Figure 18-13. The fact that the channels technically overlap does not, however, mean that overlapping channels should never be used. The amount of inter-channel interference seen on a configuration using channels 1, 5, 9, and 13 (which is permitted in Europe, but not in North America) is barely different from that of a three-channel configuration, but with an entire extra channel. However, overlap between channels with narrower spacing (e.g. 1, 4, 7, 11 in North America) may cause unacceptable degradation of signal quality and throughput, particularly when users transmit near the boundaries of AP cells. Regulatory domains and legal compliance IEEE uses the phrase regdomain to refer to a legal regulatory region. Different countries define different levels of allowable transmitter power, time that a channel can be occupied, and different available channels. Domain codes are specified for the United States, Canada, ETSI (Europe), Spain, France, Japan, and China. Most Wi-Fi certified devices default to regdomain 0, which means least-common-denominator settings, i.e., the device will not transmit at a power above the allowable power in any nation, nor will it use frequencies that are not permitted in any nation. The regdomain setting is often made difficult or impossible to change so that end-users do not conflict with local regulatory agencies such as the United States' Federal Communications Commission. Layer 2 – Datagrams The datagrams are called frames. Current 802.11 standards specify frame types for use in the transmission of data as well as management and control of wireless links.
Frames are divided into very specific and standardized sections. Each frame consists of a MAC header, payload, and frame check sequence (FCS). Some frames may not have a payload. The first two bytes of the MAC header form a frame control field specifying the form and function of the frame. This frame control field is subdivided into the following sub-fields: Protocol Version: Two bits representing the protocol version. The currently used protocol version is zero. Other values are reserved for future use. Type: Two bits identifying the type of WLAN frame. Control, Data, and Management are various frame types defined in IEEE 802.11. Subtype: Four bits providing additional discrimination between frames. Type and Subtype are used together to identify the exact frame. ToDS and FromDS: Each is one bit in size. They indicate whether a data frame is headed for a distribution system. Control and management frames set these values to zero. All the data frames will have one of these bits set. However, communication within an independent basic service set (IBSS) network always sets these bits to zero. More Fragments: The More Fragments bit is set when a packet is divided into multiple frames for transmission. Every frame except the last frame of a packet will have this bit set. Retry: Sometimes frames require retransmission, and for this there is a Retry bit that is set to one when a frame is resent. This aids in the elimination of duplicate frames. Power Management: This bit indicates the power management state of the sender after the completion of a frame exchange. Access points are required to manage the connection and will never set the power-saver bit. More Data: The More Data bit is used to buffer frames received in a distribution system. The access point uses this bit to facilitate stations in power-saver mode. It indicates that at least one frame is available, and addresses all stations connected.
Protected Frame: The Protected Frame bit is set to the value of one if the frame body is encrypted by a protection mechanism such as Wired Equivalent Privacy (WEP), Wi-Fi Protected Access (WPA), or Wi-Fi Protected Access II (WPA2). Order: This bit is set only when the "strict ordering" delivery method is employed. Frames and fragments are not always sent in order as it causes a transmission performance penalty. The next two bytes are reserved for the Duration ID field, indicating how long the field's transmission will take so other devices know when the channel will be available again. This field can take one of three forms: Duration, Contention-Free Period (CFP), and Association ID (AID). An 802.11 frame can have up to four address fields. Each field can carry a MAC address. Address 1 is the receiver, Address 2 is the transmitter, Address 3 is used for filtering purposes by the receiver. Address 4 is only present in data frames transmitted between access points in an Extended Service Set or between intermediate nodes in a mesh network. The remaining fields of the header are: The Sequence Control field is a two-byte section used to identify message order and eliminate duplicate frames. The first 4 bits are used for the fragmentation number, and the last 12 bits are the sequence number. An optional two-byte Quality of Service control field, present in QoS Data frames; it was added with 802.11e. The payload or frame body field is variable in size, from 0 to 2304 bytes plus any overhead from security encapsulation, and contains information from higher layers. The Frame Check Sequence (FCS) is the last four bytes in the standard 802.11 frame. Often referred to as the Cyclic Redundancy Check (CRC), it allows for integrity checks of retrieved frames. As frames are about to be sent, the FCS is calculated and appended. When a station receives a frame, it can calculate the FCS of the frame and compare it to the one received. 
If they match, it is assumed that the frame was not distorted during transmission. Management frames Management frames are not always authenticated, and allow for the maintenance, or discontinuance, of communication. Some common 802.11 subtypes include: Authentication frame: 802.11 authentication begins with the wireless network interface card (WNIC) sending an authentication frame to the access point containing its identity. When open system authentication is being used, the WNIC sends only a single authentication frame, and the access point responds with an authentication frame of its own indicating acceptance or rejection. When shared key authentication is being used, the WNIC sends an initial authentication request, and the access point responds with an authentication frame containing challenge text. The WNIC then sends an authentication frame containing the encrypted version of the challenge text to the access point. The access point ensures the text was encrypted with the correct key by decrypting it with its own key. The result of this process determines the WNIC's authentication status. Association request frame: Sent from a station, it enables the access point to allocate resources and synchronize. The frame carries information about the WNIC, including supported data rates and the SSID of the network the station wishes to associate with. If the request is accepted, the access point reserves memory and establishes an association ID for the WNIC. Association response frame: Sent from an access point to a station containing the acceptance or rejection to an association request. If it is an acceptance, the frame will contain information such as an association ID and supported data rates. Beacon frame: Sent periodically from an access point to announce its presence and provide the SSID and other parameters for WNICs within range. Deauthentication frame: Sent from a station wishing to terminate connection from another station.
Disassociation frame: Sent from a station wishing to terminate the connection. It is an elegant way to allow the access point to relinquish memory allocation and remove the WNIC from the association table. Probe request frame: Sent from a station when it requires information from another station. Probe response frame: Sent from an access point containing capability information, supported data rates, etc., after receiving a probe request frame. Reassociation request frame: A WNIC sends a reassociation request when it drops out of range of the currently associated access point and finds another access point with a stronger signal. The new access point coordinates the forwarding of any information that may still be contained in the buffer of the previous access point. Reassociation response frame: Sent from an access point containing the acceptance or rejection to a WNIC reassociation request frame. The frame includes information required for association such as the association ID and supported data rates. Action frame: Extends the management frame to control a certain action. Some of the action categories are Block Ack, Radio Measurement, Fast BSS Transition, etc. These frames are sent by a station when it needs to tell its peer that a certain action is to be taken. For example, a station can tell another station to set up a block acknowledgement by sending an ADDBA Request action frame. The other station would then respond with an ADDBA Response action frame. The body of a management frame consists of frame-subtype-dependent fixed fields followed by a sequence of information elements (IEs). The common structure of an IE is a one-octet Element ID, a one-octet Length field, and a variable-length, element-specific information field. Control frames Control frames facilitate the exchange of data frames between stations. Some common 802.11 control frames include: Acknowledgement (ACK) frame: After receiving a data frame, the receiving station will send an ACK frame to the sending station if no errors are found.
If the sending station doesn't receive an ACK frame within a predetermined period of time, the sending station will resend the frame. Request to Send (RTS) frame: The RTS and CTS frames provide an optional collision reduction scheme for access points with hidden stations. A station sends an RTS frame as the first step in a two-way handshake required before sending data frames. Clear to Send (CTS) frame: A station responds to an RTS frame with a CTS frame. It provides clearance for the requesting station to send a data frame. The CTS provides collision control management by including a time value for which all other stations are to hold off transmission while the requesting station transmits. Data frames Data frames carry packets from web pages, files, etc. within the body. The body begins with an IEEE 802.2 header, with the Destination Service Access Point (DSAP) specifying the protocol, followed by a Subnetwork Access Protocol (SNAP) header if the DSAP is hex AA, with the organizationally unique identifier (OUI) and protocol ID (PID) fields specifying the protocol. If the OUI is all zeroes, the protocol ID field is an EtherType value. Almost all 802.11 data frames use 802.2 and SNAP headers, and most use an OUI of 00:00:00 and an EtherType value. Similar to TCP congestion control on the internet, frame loss is built into the operation of 802.11. To select the correct transmission speed or Modulation and Coding Scheme, a rate control algorithm may test different speeds. The actual packet loss rate of Access points varies widely for different link conditions. There are variations in the loss rate experienced on production Access points, between 10% and 80%, with 30% being a common average. It is important to be aware that the link layer should recover these lost frames. If the sender does not receive an Acknowledgement (ACK) frame, then it will be resent. 
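As a rough illustration of the frame control layout described earlier (two bits of protocol version, two of type, four of subtype, then the eight one-bit flags), the sketch below unpacks the two-byte field. The helper name and dict layout are my own, not from any library:

```python
# Sketch: unpacking the 2-byte 802.11 Frame Control field into its
# sub-fields, following the bit ordering described in the text.

def parse_frame_control(raw: bytes) -> dict:
    b0, b1 = raw[0], raw[1]
    return {
        "protocol_version": b0 & 0b11,
        "type": (b0 >> 2) & 0b11,        # 0=management, 1=control, 2=data
        "subtype": (b0 >> 4) & 0b1111,
        "to_ds": bool(b1 & 0x01),
        "from_ds": bool(b1 & 0x02),
        "more_fragments": bool(b1 & 0x04),
        "retry": bool(b1 & 0x08),
        "power_management": bool(b1 & 0x10),
        "more_data": bool(b1 & 0x20),
        "protected": bool(b1 & 0x40),
        "order": bool(b1 & 0x80),
    }

# 0x80 0x00 decodes as a management frame (type 0) of subtype 8 — a beacon.
fc = parse_frame_control(bytes([0x80, 0x00]))
print(fc["type"], fc["subtype"])  # 0 8
```

Decoding Type and Subtype together in this way is exactly how the "exact frame" (beacon, probe request, ACK, and so on) is identified.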
Standards and amendments Within the IEEE 802.11 Working Group, the following IEEE Standards Association Standard and Amendments exist:
IEEE 802.11-1997: The original WLAN standard, with 1 Mbit/s and 2 Mbit/s, 2.4 GHz RF and infrared (IR) operation (1997); all the others listed below are Amendments to this standard, except for Recommended Practices 802.11F and 802.11T.
IEEE 802.11a: 54 Mbit/s, 5 GHz standard (1999, shipping products in 2001)
IEEE 802.11b: 5.5 Mbit/s and 11 Mbit/s, 2.4 GHz standard (1999)
IEEE 802.11c: Bridge operation procedures; included in the IEEE 802.1D standard (2001)
IEEE 802.11d: International (country-to-country) roaming extensions (2001)
IEEE 802.11e: Enhancements: QoS, including packet bursting (2005)
IEEE 802.11F: Inter-Access Point Protocol (2003); withdrawn February 2006
IEEE 802.11g: 54 Mbit/s, 2.4 GHz standard (backwards compatible with b) (2003)
IEEE 802.11h: Spectrum Managed 802.11a (5 GHz) for European compatibility (2004)
IEEE 802.11i: Enhanced security (2004)
IEEE 802.11j: Extensions for Japan (4.9–5.0 GHz) (2004)
IEEE 802.11-2007: A new release of the standard that includes amendments a, b, d, e, g, h, i, and j (July 2007)
IEEE 802.11k: Radio resource measurement enhancements (2008)
IEEE 802.11n: Higher Throughput WLAN at 2.4 and 5 GHz; 20 and 40 MHz channels; introduces MIMO (September 2009)
IEEE 802.11p: WAVE—Wireless Access for the Vehicular Environment (such as ambulances and passenger cars) (July 2010)
IEEE 802.11r: Fast BSS transition (FT) (2008)
IEEE 802.11s: Mesh Networking, Extended Service Set (ESS) (July 2011)
IEEE 802.11T: Wireless Performance Prediction (WPP)—test methods and metrics; Recommendation cancelled
IEEE 802.11u: Improvements related to HotSpots and 3rd-party authorization of clients, e.g., cellular network offload (February 2011)
IEEE 802.11v: Wireless network management (February 2011)
IEEE 802.11w: Protected Management Frames (September 2009)
IEEE 802.11y: 3650–3700 MHz Operation in the U.S. (2008)
IEEE 802.11z: Extensions to Direct Link Setup (DLS) (September 2010)
IEEE 802.11-2012: A new release of the standard that includes amendments k, n, p, r, s, u, v, w, y, and z (March 2012)
IEEE 802.11aa: Robust streaming of Audio Video Transport Streams (June 2012) - see Stream Reservation Protocol
IEEE 802.11ac: Very High Throughput WLAN at 5 GHz; wider channels (80 and 160 MHz); Multi-user MIMO (down-link only) (December 2013)
IEEE 802.11ad: Very High Throughput 60 GHz (December 2012) - see WiGig
IEEE 802.11ae: Prioritization of Management Frames (March 2012)
IEEE 802.11af: TV Whitespace (February 2014)
IEEE 802.11-2016: A new release of the standard that includes amendments aa, ac, ad, ae, and af (December 2016)
IEEE 802.11ah: Sub-1 GHz license-exempt operation (e.g., sensor network, smart metering) (December 2016)
IEEE 802.11ai: Fast Initial Link Setup (December 2016)
IEEE 802.11aj: China Millimeter Wave (February 2018)
IEEE 802.11ak: Transit Links within Bridged Networks (June 2018)
IEEE 802.11aq: Pre-association Discovery (July 2018)
Member of the US House of Representatives Alexandria Ocasio-Cortez tried Irn-Bru at COP26 and said she loved it, and that it tasted just like the Latino soda Kola Champagne. The response from others at the conference ranged from strong dislike to strong like. The volume of editorial and opinion publicity the drink gained on social and print media was described as "the summit's surprise", coverage worth millions. However, AG Barr's share price remained relatively flat at the time. Production Irn-Bru has been produced in Westfield, Cumbernauld, North Lanarkshire, since Barr's moved out of their Parkhead, Glasgow factory in the mid-2000s. In 2011, Irn-Bru closed their factory in Mansfield, making the Westfield plant in Cumbernauld the main location for production. Other manufacturing locations include the English city of Sheffield. Packaging Irn-Bru and other Barr brands including Pineappleade, Cream Soda, Tizer, Red Kola, Barr Cola, and Limeade are still available in 750 ml reusable glass bottles. The most popular plastic bottle size is 500 ml. Irn-Bru and Diet Irn-Bru are available in the following sizes: 150 ml can 250 ml plastic bottle 330 ml can 330 ml glass bottle 500 ml Value Can (formerly the big summer can) 500 ml plastic bottle (UK, Canada) 600 ml plastic bottle (Russia) 1 litre plastic bottle 1.25 litre bottle (Australia, New Zealand, Russia, UK) 2 litre plastic bottle 2.25 litre plastic bottle (Russia) 2.5 litre bottle (UK "Big Bru") 3 litre plastic bottle 355 ml glass bottle (in Canada) 750 ml glass bottle 5 litre syrup containers. In May 2007, A.G. Barr re-designed the Irn-Bru can and bottle logos; in April 2016, A.G. Barr released redesigned can and bottle logos again. Marketing Advertising campaigns Barr's actively promoted Irn-Bru from the outset, with some of their earliest ads featuring world champion wrestlers and Highland Games athletes Donald Dinnie and Alex Munro, who endorsed the drink by means of personal testimonials.
In the 1930s, the firm began a long-running series of comic strip ads entitled "The Adventures of Ba-Bru", which ran in various local papers from April 1939 until October 1970. The last trace of this campaign, a large neon sign featuring Ba-Bru which stood in Union St above Glasgow Central railway station, was removed in 1983 and replaced with an illuminated display featuring the tagline "Your Other National Drink". Barr has a long-established gimmick associating Irn-Bru with Scottishness, stemming from the claim of its being Scotland's most popular soft drink. A tagline, "Made in Scotland from girders", was used for several years from the 1980s, with adverts usually featuring Irn-Bru drinkers becoming unusually strong, durable or magnetic. An advertising campaign launched in Spring 2000 aimed to "dramatise the extraordinary appeal of Irn-Bru in a likeably maverick style". David Amers, Planning Director, said: "Irn-Bru is the likeable maverick of the soft drinks market and these ads perfectly capture the brand's spirit." One involved a grandfather (played by actor Robert Wilson) who removed his false teeth to spoil his grandson's interest in his can of Irn-Bru. A further TV advertisement featured a senior citizen in a motorised wheelchair robbing a local shopping market of a supply of Irn-Bru. In 2004 Irn-Bru created a new concept, "Phenomenal". In 2006 the company launched its first Christmas adverts. This campaign consisted of a parody of the popular Christmas cartoon The Snowman, and was effective in interesting American audiences in the Irn-Bru brand. Further advertising campaigns for Irn-Bru appeared in conjunction with the release of Irn-Bru 32 in 2006. A 2009 advertisement for the product featured a group of high school pupils performing a musical number, with the refrain "It's fizzy, it's ginger, it's phenomenal!" It was a parody of High School Musical, and starred Jack Lowden.
In 2012 the company changed its slogan to "gets you through", with adverts showing a number of people drinking Irn-Bru to get through tough situations. In response to the Coca-Cola 'Share a Coke' campaign, Barr decided to produce thousands of limited edition 750 ml bottles of Irn-Bru with the names 'Fanny', 'Senga', 'Rab' and 'Tam' on the label, mimicking the Coca-Cola campaign. The use of the name 'Fanny' ties in with one of Irn-Bru's controversial marketing advertisements. In December 2018, 12 years after the original Christmas advert left off with the child in a snow bank in Glasgow, a sequel returned to the screens in which the child takes a seaplane to chase down the Snowman, who has his Irn-Bru. In the end, the drink is stolen by Santa. Controversy One of the most controversial Irn-Bru television adverts evoked 1950s entertainment. A mother plays the piano, while the father and two children deliver a song which ends with the mother singing: "...even though I used to be a man". This advertisement was broadcast in 2000, but when it was repeated in 2003, it led to seventeen complaints about it being offensive to members of the transgender community. Issue A14 of the Ofcom Advertising Complaints bulletin reports that the children's response to their mother's claim was not offensive. According to the advertising agency Leith, the advertisement was meant to "create a sense of humour while confirming the maverick nature of the brand". However, the scene involving the mother shaving at the end of the advertisement was deemed by Ofcom to be "capable of causing offence by strongly reinforcing negative stereotypes", and so it was taken off the air. In 2003, an Irn-Bru commercial which showed a midwife trying to entice a baby from its mother's womb during a difficult delivery sparked fifty complaints. Some saw it as upsetting to women who had suffered miscarriages.
One billboard that drew criticism featured a young woman in a bikini along with the slogan "Diet Irn-Bru. I never knew could give so much pleasure". The drink was relaunched under the Irn-Bru name once government SDI consolidation of the soft drinks industry had ended. The name change followed the introduction of new labelling restrictions which cracked down on spurious health claims and introduced minimum standards for drinks claiming to contain minerals such as iron. However, according to Robert Barr OBE (chairman 1947–1978), there was also a commercial rationale behind the unusual spelling. "Iron Brew" had come to be understood as a generic product category in the UK, whereas adopting the name "Irn-Bru" allowed the firm to have a legally protected brand identity that would enable the firm to benefit from the popularity of their wartime "Adventures of Ba-Bru" comic strip advertising. (The "Iron Brew" name has continued to be used for many versions of the drink sold by rival manufacturers.) 1980 saw the introduction of Low Calorie Irn-Bru: this was re-launched in 1991 as Diet Irn-Bru and again in 2011 as Irn-Bru Sugar Free. The Irn-Bru 32 energy drink variant was launched in 2006. Irn-Bru has long been the most popular soft drink in Scotland, with Coca-Cola second, but competition between the two brands brought their sales to roughly equal levels by 2003. It is also the third best selling soft drink in the UK, after Coca-Cola and Pepsi, outselling high-profile brands such as Fanta, Dr Pepper, Sprite and 7 Up. This success in defending its home market (a feat claimed only by Irn-Bru, Inca Kola and Thums Up; Thums Up sold out to Coca-Cola in 1993, and Inca Kola owners Corporación Lindley S.A. entered into a joint venture with Coca-Cola in 1999, giving up all rights to the name outside Peru) led to ongoing speculation that Coca-Cola, PepsiCo, Inc. or its UK brand franchisee Britvic would attempt to buy A.G. Barr.
In November 2012 AG Barr and Britvic announced a merger proposal, but in July 2013 the merger collapsed when terms could not be agreed. Irn-Bru's advertising slogans used to be 'Scotland's other National Drink', referring to whisky, and 'Made in Scotland from girders', a reference to the rusty colour of the drink; though the closest one can come to substantiating this claim is the 0.002% ammonium ferric citrate listed in the ingredients. Fiery Irn-Bru, a limited edition variant, was released in autumn 2011. Packaged with a black and orange design, and with the signature man icon with an added image of a fire, it produced a warm, tingly feeling in the mouth when drunk. It featured the traditional Irn-Bru flavour with an aftertaste similar to ginger. Irn-Bru was also sold in reusable 750 ml glass bottles which, like other Barr's drinks, could be returned to the manufacturer in exchange for a 30 pence (previously 20p) deposit paid on purchase. This scheme was widely available in shops across Scotland and led to the colloquial term for an empty: a "glass cheque". As a result of a 40% drop in returned bottles since the 1990s, Barr deemed the washing and re-filling process uneconomical, and on 1 January 2016 ceased the scheme. 2016 saw the introduction of the current logo, conveying "strength" and an "industrial feel", and a new diet variant called Irn-Bru Xtra, in different branding to the existing sugar-free variety in a similar fashion to Coca-Cola Zero and Pepsi Max. Barr changed the formula of Irn-Bru in January 2018 in response to a sugar tax implemented in the UK in April 2018, intended to combat obesity. By reducing the sugar content to less than 5g per 100ml, Barr has made Irn-Bru exempt from the tax.
The manufacturer asserts that "most people will not be able to tell the difference in flavour between the old and new formulas", but fans of the drink have started the 'Save Real Irn-Bru' campaign to stop or reverse this change, and have been stocking up on the more sugary formula. In May 2019, Barr announced a new energy drink variant of Irn-Bru called Irn-Bru Energy, which was released on 1 July 2019. In October 2019, Barr announced the launch of the "Irn-Bru 1901". The drink would be available for a limited time and use the original recipe from 1901. In March 2021, Barr announced the relaunch of "IRN-BRU 1901" as a permanent addition to the IRN-BRU lineup. Irn-Bru was the only soft drink on sale at the 2021 United Nations Climate Change Conference (COP26) in Glasgow, Scotland, due to a sponsorship arrangement. Member of the US House of Representatives Alexandria Ocasio-Cortez tried Irn-Bru at COP26 and said she loved it, and that it tasted just like the Latino soda Kola Champagne. The response from others at the conference ranged from strong dislike to strong like. The volume of editorial and opinion publicity the drink gained on social and print media was described as "the summit's surprise", coverage worth millions. However, AG Barr's share price remained relatively flat at the time. Production It is produced in Westfield, Cumbernauld, North Lanarkshire, since Barr's moved out of their Parkhead, Glasgow factory in the mid-2000s. In 2011, Irn-Bru closed their factory in Mansfield, making the Westfield plant in Cumbernauld the main location for production. Other manufacturing locations include the English city of Sheffield. Packaging Irn-Bru and other Barr brands including Pineappleade, Cream Soda, Tizer, Red Kola, Barr Cola, and Limeade are still available in 750 ml reusable glass bottles. The most popular plastic bottle size is 500 ml. 
Irn-Bru and Diet Irn-Bru are available in the following sizes: 150 ml can 250 ml plastic bottle 330 ml can 330 ml glass bottle 500 ml Value Can (formerly the big summer can) 500 ml plastic bottle (UK, Canada) 600 ml plastic bottle (Russia) 1 litre plastic bottle 1.25 litre bottle (Australia, New Zealand, Russia, UK) 2 litre plastic bottle 2.25 litre plastic bottle (Russia) 2.5 litre bottle (UK "Big Bru") 3 litre plastic bottle 355 ml glass bottle (in Canada) 750 ml glass bottle 5 litre Syrup containers. In May 2007, A.G Barr re-designed the Irn-Bru Can and Bottle Logos. In April 2016, A.G Barr released the redesigned Irn-Bru Can and Bottle Logos. Marketing Advertising campaigns Barr's actively promoted their Irn-Bru from the outset, with some of their earliest ads featuring world champion wrestlers and Highland Games athletes Donald Dinnie and Alex Munro who endorsed the drink by means of personal testimonials. In the 1930s, the firm began a long-running series of comic strip ads entitled "The Adventures of Ba-Bru" which ran in various local papers from April 1939 until October 1970. The last traces of this campaign, a large neon sign featuring Ba-Bru which stood in Union St above Glasgow Central railway station, was removed in 1983 and replaced with an illuminated display featuring the tagline "Your Other National Drink". Barr has a long-established gimmick associating Irn-Bru with Scottishness, stemming from the claim of its being Scotland's most popular soft drink. A tagline, "Made in Scotland from girders", was used for several years from the 1980s, usually featuring Irn-Bru drinkers becoming unusually strong, durable or magnetic. An advertising campaign launched in Spring 2000 aimed to "dramatise the extraordinary appeal of Irn-Bru in a likeably maverick style". David Amers, Planning Director, said: "Irn-Bru is the likeable maverick of the soft drinks market and these ads perfectly capture the brand's spirit." 
One involved a grandfather (played by actor Robert Wilson) who removed his false teeth to spoil his grandson's interest in his can of Irn-Bru. A further TV advertisement featured a senior citizen in a motorised wheelchair robbing a local shopping market of a supply of Irn-Bru. In 2004 Irn-Bru introduced a new concept, "Phenomenal". In 2006 the company launched its first Christmas adverts. This campaign consisted of a parody of a popular Christmas cartoon, The Snowman, and was effective in raising interest in the Irn-Bru brand among American audiences. Further advertising campaigns for Irn-Bru appeared in conjunction with the release of Irn-Bru 32 in 2006. A 2009 advertisement for the product featured a group of high school pupils performing a musical number, with the refrain "It's fizzy, it's ginger, it's phenomenal!" It was a parody of High School Musical, and starred Jack Lowden. In 2012 the company changed its slogan to "gets you through", which saw a number of people drinking Irn-Bru to get through tough situations. In response to the Coca-Cola 'Share a Coke' 
that are written in HyperText Markup Language (HTML), are exchanged via networks. This protocol is the backbone of the Web, allowing the whole hypertext system to exist in practice. It was created by a team of developers spearheaded by Tim Berners-Lee, who proposed its creation in 1989. On August 6, 1991, he published the first complete version of HTTP on a public forum, a date some consider the official birth of the World Wide Web. HTTP has continually evolved since its creation, growing more complex with time and with the progress of networking technology. By default HTTP is not encrypted, so in practice HTTPS, which stands for HTTP Secure, is used. TLS/SSL TLS, which stands for Transport Layer Security, is a standard that enables two endpoints to communicate robustly and privately. TLS came as a replacement for SSL. Secure Sockets Layer (SSL) was created by Netscape and was first introduced before the creation of HTTPS; in fact, HTTPS was based on SSL when it first came out. It became apparent that one common way of encrypting data was needed, so the IETF specified TLS 1.0 in RFC 2246 in January 1999. It has been upgraded since; the latest version of TLS is 1.3, from RFC 8446, published in August 2018. OSI Model The Open Systems Interconnection model began its development in 1977. It was created by the International Organization for Standardization and was officially published and adopted as a standard in 1979. It was then updated several times, and it took a few years for the model to be presented in its final form: ISO 7498 was published in 1984. Lastly, in 1995, the OSI model was revised again to satisfy the urgent needs of ongoing development in the field of computer networking. UDP The goal of User Datagram Protocol was to find a way to communicate between two computers as quickly and efficiently as possible. 
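That datagram exchange can be sketched with Python's standard socket module. This is a minimal illustration only; the loopback address, OS-chosen port, and payload are chosen here for demonstration, and on a real network (unlike loopback) delivery of a datagram is not guaranteed:

```python
import socket

# Receiver: a UDP socket bound to the loopback interface.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))   # port 0 = let the OS pick a free port
addr = receiver.getsockname()     # (host, port) the sender will target

# Sender: no connection setup — the datagram is simply addressed and sent.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello", addr)

# Each recvfrom() yields one whole datagram (delivery/order not guaranteed
# in general, though loopback is reliable in practice).
data, _ = receiver.recvfrom(1024)
print(data)                       # b'hello'

sender.close()
receiver.close()
```

The absence of any handshake before `sendto` is exactly the speed/efficiency trade-off described above: less overhead than a connection-oriented protocol, at the cost of reliability guarantees.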
UDP was conceived and realized by David P. Reed in 1980. The way it works is by encapsulation: data is packaged into a datagram and sent point to point, without any prior connection setup. This proved to be a fast, low-overhead way to transmit information, and despite the drawback that datagrams can be lost, UDP is still in use. Standardization process Becoming a standard is a two-step process within the Internet Standards Process: Proposed Standard and Internet Standard. These are called maturity levels and the process is called the Standards Track. If an RFC is part of a proposal that is on the Standards Track, then at the first stage, the standard is proposed and subsequently organizations decide whether to implement this Proposed Standard. After the criteria in RFC 6410 are met (two separate implementations, widespread use, no errata, etc.), the RFC can advance to Internet Standard. The Internet Standards Process is defined in several "Best Current Practice" documents, notably BCP 9 (RFC 2026 and RFC 6410). There were previously three standard maturity levels: Proposed Standard, Draft Standard and Internet Standard. RFC 6410 reduced this to two maturity levels. Proposed Standard RFC 2026 originally characterized Proposed Standards as immature specifications, but this stance was annulled by RFC 7127. A Proposed Standard specification is stable, has resolved known design choices, has received significant community review, and appears to enjoy enough community interest to be considered valuable. Usually, neither implementation nor operational experience is required for the designation of a specification as a Proposed Standard. Proposed Standards are of such quality that implementations can be deployed in the Internet. However, as with all technical specifications, Proposed Standards may be revised if problems are found or better solutions are identified, as experience with deploying such technologies at scale is gathered. 
Many Proposed Standards are actually deployed on the Internet and used extensively, as stable protocols. Actual practice has been that full progression through the sequence of standards levels is typically quite rare, and most popular IETF protocols remain at Proposed Standard. Draft Standard In October 2011, RFC 6410 merged the second and third maturity levels into one Draft Standard. Existing older Draft Standards retain that classification. The IESG can reclassify an old Draft Standard as Proposed Standard after two years (October 2013). Internet Standard An Internet Standard is characterized by a high degree of technical maturity and by a generally held belief that the specified protocol or service provides significant benefit to the Internet community. Generally Internet Standards cover interoperability of systems on the Internet through defining protocols, message formats, schemas, and languages. The most fundamental of the Internet Standards are the ones defining the Internet Protocol. An Internet Standard ensures that hardware and software produced by different vendors can work together. Having a standard makes it much easier to develop software and hardware that link different networks because software and hardware can be developed one layer at a time. Normally, the standards used in data communication are called protocols. All Internet Standards are given a number in the STD series. The series was summarized in its first document, STD 1 (RFC 5000), until 2013, but this practice was retired in RFC 7100. The definitive list of Internet Standards is now maintained by the RFC Editor. Documents submitted to the IETF editor and accepted as an RFC are not revised; if the document has to be changed, it is submitted again and assigned a new RFC number. When an RFC becomes an Internet Standard (STD), it is assigned an STD number but retains its RFC number. When an Internet Standard is updated, its number is unchanged but refers to a different RFC or set of RFCs. 
For example, in 2007 RFC 3700 was an Internet Standard (STD 1) and in May 2008 it was replaced with RFC 5000: RFC 3700 received Historic status, and RFC 5000 became STD 1. The list of Internet standards was originally published as STD 1, but this practice has been abandoned in favor of an online list maintained by the RFC Editor. Organizations of Internet Standards The standardization process is divided into three steps: Proposed Standards are standards to be implemented and can be changed at any time; Draft Standards were carefully tested in preparation for becoming future Internet Standards; Internet Standards are mature standards. There are five Internet standards organizations: the Internet Engineering Task Force (IETF), Internet Society (ISOC), Internet Architecture Board (IAB), Internet Research Task Force (IRTF), and World Wide Web Consortium (W3C). All of these organizations work with the goal of making the Internet work better. A working group operates under the direction of an Area Director and works toward consensus. After the proposed charter is circulated to the IESG and IAB mailing lists and approved, it is forwarded to the public IETF. Complete agreement of all working group participants is not essential to adopt a proposal; IETF working groups are only required to check that the consensus is strong. The working groups produce documents in the form of RFCs, which are memoranda containing methods, behaviors, research, or innovations applicable to the working of the Internet and Internet-connected systems. In other words, Requests for Comments (RFCs) are primarily used to develop standard network protocols. Some RFCs are purely informational, while others publish Internet standards. The final form of an RFC becomes the standard and is issued with a number. 
After that, no further comments or changes are accepted for the final form. This process is followed in every area to build consensus on a problem related to the Internet and to develop Internet standards as solutions. There are eight common areas on which the IETF focuses, each with various working groups and an area director. In the "general" area it works on the Internet standards process itself. In the "applications" area it concentrates on Internet applications such as Web-related protocols. Furthermore, it also works on the development of Internet infrastructure, for example PPP extensions. The IETF also establishes specifications and guidelines for network operations such as remote network monitoring. For example, the IETF emphasizes the development of technical standards that comprise the Internet protocol suite (TCP/IP). The Internet Architecture Board (IAB) and the Internet Research Task Force (IRTF) complement the work of the IETF on emerging technologies. The IETF is the standards-making organization concentrating on the production of "standard" specifications of technology and their intended usage. The IETF focuses on matters associated with the evolution of the current Internet and TCP/IP technology. It is divided into numerous working groups (WGs), each of which is responsible for developing standards and technologies in a specific area, for example routing or security. Working group participants are volunteers drawn from fields such as equipment vendors, network operators and research institutions. First, the IETF works on reaching a common understanding of the requirements the effort should address. Then an IETF Working Group is formed, and requirements are aired in the influential Birds of a Feather (BoF) sessions at IETF meetings. Internet Engineering Task Force The Internet Engineering Task Force (IETF) is the premier internet standards organization. 
It follows an open and well-documented process for setting internet standards. The resources that the IETF offers include RFCs, Internet-Drafts, IANA functions, intellectual property rights, the standards process, and publishing and accessing RFCs. RFCs Documents that contain technical specifications and notes for the Internet. The acronym RFC came from the phrase "Request For Comments"; the full phrase is no longer used today, and the documents are simply referred to as RFCs. The RFC Editor website is an official archive of internet standards, draft standards, and proposed standards. Internet Drafts Working documents of the IETF and its working groups. Other groups may also distribute working documents as Internet-Drafts. Intellectual property rights All IETF standards are freely available to view and read, and generally free to implement by anyone without permission or payment. Standards Process The process of creating a standard is straightforward: a specification goes through an extensive review process by the Internet community and is revised through experience. Publishing and accessing RFCs Internet-Drafts that have successfully completed the review process are submitted to the RFC Editor for publication. Types of Internet Standards There are two ways in which an Internet Standard is formed, and it can be categorized as one of the following: "de jure" standards and "de facto" standards. A de facto standard becomes a standard through widespread use within the tech community. A de jure standard is formally created by official standard-developing organizations; these standards undergo the Internet Standards Process. Common de jure standards include ASCII, SCSI, and the Internet protocol suite. 
A Technical Specification is a statement describing all relevant aspects of a protocol, service, procedure, convention, or format. This includes its scope and its intent for use, or "domain of applicability". However, a TS's use within the Internet is defined by an Applicability Statement. An AS specifies how, and under what circumstances, TSs may be applied to support a particular Internet capability. An AS identifies the ways in which relevant TSs are combined and specifies the parameters or sub-functions of TS protocols. An AS also describes the domains of applicability of TSs, such as Internet routers, terminal servers, or datagram-based database servers. An AS also applies one of the following "requirement levels" to each of the TSs to which it refers: Required: Implementation of the referenced TS is required to achieve interoperability. For example, Internet systems using the Internet Protocol Suite are required to implement IP and ICMP. Recommended: Implementation of the referenced TS is not required, but is desirable in the domain of applicability of the AS. Inclusion of the functions, features, and protocols of Recommended TSs in the development of systems is encouraged. For example, the TELNET protocol should be implemented by all systems that intend to use remote access. Elective: Implementation of the referenced TS is optional. The TS is only necessary in a specific environment. For example, the DECNET MIB could be seen as valuable in an environment where the DECNET protocol is used. Common Standards Web Standards Web standards are a type of internet standard which define aspects of the World Wide Web. They allow for the building and rendering of websites. The three key standards used by the World Wide Web are Hypertext Transfer Protocol, HTML, and URL. Respectively, they specify the content and layout of 
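The first of those three key standards, HTTP, is a plain-text request/response protocol: a request line and headers separated by CRLF, with a blank line ending the header section. A minimal sketch of this framing in Python (the host name and the response bytes below are illustrative stand-ins, not traffic from a real server):

```python
# Compose a minimal HTTP/1.1 GET request. "example.com" is an illustrative host.
request = (
    "GET /index.html HTTP/1.1\r\n"
    "Host: example.com\r\n"
    "Connection: close\r\n"
    "\r\n"                     # the blank line ends the header section
).encode("ascii")

# A response a server might return (shortened for illustration).
response = b"HTTP/1.1 200 OK\r\nContent-Type: text/html\r\n\r\n<html>...</html>"

# The status line is the first CRLF-delimited line of the response.
status_line, _, rest = response.partition(b"\r\n")
version, code, reason = status_line.split(b" ", 2)
print(code.decode())           # 200
```

Because the framing is just delimited text, the same bytes could be written to a TCP socket (or, for HTTPS, a TLS-wrapped socket) to talk to an actual server.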
that promotes Internet use and access Internal Security Operations Command, a unit of the Thai military devoted to national security issues In Service Open Challenge, a Walt Disney World lifeguarding competition Islamic Society, various Islamic-based groups International Series of Champions, professional snowmobile snowcross racing organization. Introduction of Super Office 
Information Communication Technology (ICTs) on a worldwide basis, as well as defining tariff and accounting principles for international telecommunication services. The international standards that are produced by the ITU-T are referred to as "Recommendations" (with the word capitalized to distinguish its meaning from the common parlance sense of the word "recommendation"), as they become mandatory only when adopted as part of a national law. Since the ITU-T is part of the ITU, which is a United Nations specialized agency, its standards carry more formal international weight than those of most other standards development organizations that publish technical specifications of a similar form. History At the initiative of Napoleon III, the French government invited international participants to a conference in Paris in 1865 to facilitate and regulate international telegraph services. A result of the conference was the founding of the forerunner of the modern ITU. At the 1925 Paris conference, the ITU created two consultative committees to deal with the complexities of the international telephone services, known as CCIF, as the French acronym, and with long-distance telegraphy (CCIT). In view of the basic similarity of many of the technical problems faced by the CCIF and CCIT, a decision was taken in 1956 to merge them into a single entity, the International Telegraph and Telephone Consultative Committee (CCITT, in the French acronym). The first Plenary Assembly of the new organization was held in Geneva, Switzerland in December 1956. In 1992, the Plenipotentiary Conference (the top policy-making conference of ITU) saw a reform of ITU, giving the Union greater flexibility to adapt to an increasingly complex, interactive and competitive environment. The CCITT was renamed the Telecommunication Standardization Sector (ITU-T), as one of three Sectors of the Union alongside the Radiocommunication Sector (ITU-R) and the Telecommunication Development Sector (ITU-D). 
Historically, the Recommendations of the CCITT were presented at plenary assemblies for endorsement, held every four years, and the full set of Recommendations were published after each plenary assembly. However, the delays in producing texts, and translating them into other working languages, did not suit the fast pace of change in the telecommunications industry. "Real time" standardization The rise of the personal computer industry in the early 1980s created a new common practice among both consumers and businesses of adopting "bleeding edge" communications technology even if it was not yet standardized. Thus, standards organizations had to put forth standards much faster, or find themselves ratifying de facto standards after the fact. One of the most prominent examples of this was the Open Document Architecture project, which began in 1985 when a profusion of software firms around the world were still furiously competing to shape the future of the electronic office, and was completed in 1999 long after Microsoft Office's then-secret binary file formats had become established as the global de facto standard. The ITU-T now operates under much more streamlined processes. The time between an initial proposal of a draft document by a member company and the final approval of a full-status ITU-T Recommendation can now be as short as a few months (or less in some cases). This makes the standardization approval process in the ITU-T much more responsive to the needs of rapid technology development than in the ITU's historical past. New and updated Recommendations are published on an almost daily basis, and nearly all of the library of over 3,270 Recommendations is now free of charge online. (About 30 specifications jointly maintained by the ITU-T and ISO/IEC are not available for free to the public.) ITU-T has moreover tried to facilitate cooperation between the various forums and standard-developing organizations (SDOs). 
This collaboration is necessary to avoid duplication of work and the consequent risk of conflicting standards in the market place. In the work of standardization, ITU-T cooperates with other SDOs, e.g., the International Organization for Standardization (ISO) and the Internet Engineering Task Force (IETF). Development of Recommendations Most of the work of ITU-T is carried out by its Sector Members and Associates, while the Telecommunication Standardization Bureau (TSB) is the executive arm of ITU-T and coordinator for a number of workshops and seminars to progress existing work areas and explore new ones. The events cover a wide array of topics in the field of information and communication technologies (ICT) and attract high-ranking experts as speakers, and attendees from engineers to high-level management from all industry sectors. The technical work, the development of Recommendations, of ITU-T is managed by Study Groups (SGs), such as Study Group 13 for network standards, Study Group 16 for multimedia standards, and Study Group 17 for security standards, which are created by the World Telecommunication Standardization Assembly (WTSA) which is held every four years. As part of the deliberations, WTSA has instructed ITU to hold the Global Standards Symposium, which unlike WTSA is open to public for participation. The people involved in these SGs are experts in telecommunications from all over the world. There are currently 11 SGs. Study groups meet face to face (or virtually under exceptional circumstances) according to a calendar issued by the TSB. SGs are augmented by Focus Groups (FGs), an instrument created by ITU-T, providing a way to quickly react to ICT standardization needs and allowing great flexibility in terms of participation and working methods. 
The key difference between SGs and FGs is that the latter have greater freedom to organize and finance themselves, and to involve non-members in their work, but they do not have the authority to approve Recommendations. Focus Groups can be created very quickly, are usually short-lived and can choose their own working methods, leadership, financing, and types of deliverables. Current Focus Groups include the ITU-WHO Focus Group on Artificial Intelligence for Health (FG-AI4H) as well as Machine Learning for 5G (which developed Y.3172), Quantum Information Technologies for Networks, and Artificial Intelligence for Assisted and Autonomous Driving. Alternative Approval Process The Alternative Approval Process (AAP) is a fast-track approval | Technology such as X.509 for cybersecurity, Y.3172 and Y.3173 for machine learning, and H.264/MPEG-4 AVC for video compression, between its Member States, Private Sector Members, and Academia Members. ITU-T is one of the three Sectors (divisions or units) of the International Telecommunication Union (ITU). The standardization efforts of ITU started in 1865 with the formation of the International Telegraph Union (ITU). ITU became a Specialized agency of the United Nations in 1947. The International Telegraph and Telephone Consultative Committee (, CCITT) was created in 1956, and was renamed ITU-T in 1993. ITU-T has a permanent secretariat called the Telecommunication Standardization Bureau (TSB), which is based at the ITU headquarters in Geneva, Switzerland. The current director of the TSB is Chaesub Lee, whose first 4-year term commenced on 1 January 2015, and whose second 4-year term commenced on 1 January 2019. Chaesub Lee succeeded Malcolm Johnson of the United Kingdom, who was director from 1 January 2007 until 31 December 2014. 
temporarily emigrated to another country South Asian ethnic groups, referring to people of the Indian subcontinent, as well as the greater South Asia region prior to the 1947 partition of India Anglo-Indians, people with mixed Indian and British ancestry, or people of British descent born or living in the Indian subcontinent East Indians, a Christian community in India Europe British Indians, British people of Indian origin The Americas Indigenous peoples of the Americas, the pre-Columbian inhabitants of North and South America and their descendants Plains Indians, the common name for the Native Americans who lived on the Great Plains of North America Native Americans in the United States, the indigenous people in the United States Native American tribes, specific groups of Native Americans Indigenous peoples in Canada First Nations in Canada, the various Aboriginal peoples in Canada who are neither Inuit nor Métis Indigenous peoples of South America, peoples living in South America in the pre-Columbian era and their descendants Native Mexicans, indigenous people of Mexico Indigenous peoples of Central America Indigenous peoples of the Caribbean West Indians, people from the Caribbean region and the Lucayan Archipelago Mardi Gras Indians, African-American Carnival revelers in New Orleans, Louisiana, whose suits are influenced by Native American ceremonial apparel Australia Aboriginal Australians, called "Indians" until the 19th century Languages Indian English, a dialect of the English language used in India Indigenous languages of the Americas, spoken by indigenous peoples from Alaska and Greenland to the southern tip of South America Languages of India, including Indo-Aryan languages and Dravidian languages Places Indian, West Virginia, a former unincorporated community in Kanawha County The Indians, an islet group in the British Virgin Islands Indian Creek (disambiguation) Indian Island (disambiguation) Indian River (disambiguation), several rivers and 
communities Indian Run (disambiguation), streams in the U.S. states of Pennsylvania and West Virginia Indian subcontinent Indian Ocean Arts, entertainment, and media Indian cinema Films Indian (1996 film), a Tamil film Indian (2001 film), a Hindi film Music Indians (musician), moniker of Danish singer Søren Løkke Juul accompanied by some musicians also collectively known as Indians "Indian" (song), by Sturm und Drang Indian (soundtrack), an album from the 1996 film "Indians" (song), by Anthrax Other arts, entertainment, and media Indian (card game), a simple card game that involves strategy Indian soap opera, soap operas written, produced, and filmed in India Indians (play), a 1968 play by Arthur Kopit Indians (sculpture), a name for The Bowman and The Spearman, sculptures by Ivan Meštrović Businesses Indian (airline), a now-defunct state-owned airline of India, merged with Air India Indian Motocycle Manufacturing 
is the opposite of externalization.

Psychology and sociology
In psychology, internalization is the outcome of a conscious mind reasoning about a specific subject; the subject is internalized, and the consideration of the subject is internal. Internalization of ideals might take place following religious conversion, or in the process of, more generally, moral conversion. Internalization is directly associated with learning within an organism (or business) and recalling what has been learned. In psychology and sociology, internalization involves the integration of attitudes, values, standards and the opinions of others into one's own identity or sense of self. In psychoanalytic theory, internalization is a process involving the formation of the superego. Many theorists believe that the internalized values of behavior implemented during early socialization are key factors in predicting a child's future moral character. Self-determination theory proposes a motivational continuum from extrinsic to intrinsic motivation and autonomous self-regulation. Some research suggests a child's moral self starts to develop around age three. These early years of socialization may be the underpinnings of moral development in later childhood. Proponents of this theory suggest that children whose view of self is "good and moral" tend to have a developmental trajectory toward pro-social behavior and few signs of anti-social behavior.

In one child developmental study, researchers examined two key dimensions of early conscience – internalization of rules of conduct and empathic affect toward others – as factors that may predict future social, adaptive and competent behavior. Data was collected in a longitudinal study of children, from two-parent families, at ages 25, 38, 52, 67 and 80 months. Children's internalization of each parent's rules and empathy

months. Third, the children that showed stronger internalization from 25 to 52 months came to see themselves as more moral and "good". These self-perceptions, in turn, predicted the way parents and teachers would rate their competent and adaptive functioning at 80 months.

As a symptom
In behavioral psychology, the concept of internalization may also refer to disorders and behaviors in which a person deals with stressors in ways that are not externally evident. Such disorders and behaviors include depression, anxiety disorder, bulimia and anorexia.

Biology
In sciences such as biology, internalization is another term for endocytosis, in which molecules such as proteins are engulfed by the cell membrane and drawn into the cell.

Economics and management
In economics, internalization theory explains the practice of multinational enterprises (MNEs) of executing transactions within their organization rather than relying on an outside market. It must be cheaper for an MNE to internalize the transfer of its unique ownership advantages between countries than to do so through markets. In other words, the alternative to internalization through direct investment is some form of licensing of the firm's know-how to a firm in the target economy.

Finance
In finance, internalization can refer to several concepts. "When you place an order to buy or sell a stock, your broker has choices on where to execute your order. Instead of routing your order to a market or market-makers for execution, your broker may fill the order from the firm's own inventory – this is called 'internalization'. In this way, your broker's firm may make money on the 'spread' – which is the difference between the purchase price and the sale price."
for the now-defined Chibanian stage in stratigraphy.

Ionic, of or relating to an ion, an atom or molecule with a net electric charge
Ionic (mobile app framework), a software development kit
Ionic bonding, a type of chemical bonding
Ionic compound, a chemical compound involving ionic bonding

Other uses
Ionian Technologies, an American biotechnology company
Hull Ionians, an English rugby club
Ionic order, one of the orders of classical architecture
Ionic No. 5, a typeface
, the name of two ships of the White Star
isotope, and indium-115, which has a half-life of 4.41 × 10^14 years, four orders of magnitude greater than the age of the Universe and nearly 30,000 times greater than that of natural thorium. The half-life of 115In is very long because the beta decay to 115Sn is spin-forbidden. Indium-115 makes up 95.7% of all indium. Indium is one of three known elements (the others being tellurium and rhenium) of which the stable isotope is less abundant in nature than the long-lived primordial radioisotopes. The most stable artificial isotope is indium-111, with a half-life of approximately 2.8 days. All other isotopes have half-lives shorter than 5 hours. Indium also has 47 meta states, among which indium-114m1 (half-life about 49.51 days) is the most stable, more stable than the ground state of any indium isotope other than the primordial. All decay by isomeric transition. The indium isotopes lighter than 115In predominantly decay through electron capture or positron emission to form cadmium isotopes, while 115In and the heavier isotopes predominantly decay through beta-minus decay to form tin isotopes.

Compounds

Indium(III)
Indium(III) oxide, In2O3, forms when indium metal is burned in air or when the hydroxide or nitrate is heated. In2O3 adopts a structure like alumina and is amphoteric, that is, able to react with both acids and bases. It reacts with water to produce indium(III) hydroxide, which is also amphoteric; with alkalis to produce indates(III); and with acids to produce indium(III) salts:

In(OH)3 + 3 HCl → InCl3 + 3 H2O

The analogous sesquichalcogenides with sulfur, selenium, and tellurium are also known. Indium forms the expected trihalides. Chlorination, bromination, and iodination of In produce colorless InCl3, InBr3, and yellow InI3. The compounds are Lewis acids, somewhat akin to the better-known aluminium trihalides. Again like the related aluminium compound, InF3 is polymeric.
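The half-life comparisons above can be sanity-checked with a few lines of arithmetic; a minimal sketch, where the universe-age and thorium-232 half-life figures are standard literature values rather than numbers taken from this article:

```python
# Quick check of the half-life comparisons in the text (all values in years).
T_IN115 = 4.41e14     # indium-115 half-life
T_UNIVERSE = 1.38e10  # approximate age of the Universe (literature value)
T_TH232 = 1.405e10    # half-life of natural thorium-232 (literature value)

print(T_IN115 / T_UNIVERSE)  # ~3.2e4 -> about four orders of magnitude
print(T_IN115 / T_TH232)     # ~3.1e4 -> roughly 30,000 times
```

Both ratios come out near 3 × 10^4, consistent with the "four orders of magnitude" and "nearly 30,000 times" statements.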
Direct reaction of indium with the pnictogens produces the gray or semimetallic III–V semiconductors. Many of them slowly decompose in moist air, necessitating careful storage of semiconductor compounds to prevent contact with the atmosphere. Indium nitride is readily attacked by acids and alkalis.

Indium(I)
Indium(I) compounds are not common. The chloride, bromide, and iodide are deeply colored, unlike the parent trihalides from which they are prepared. The fluoride is known only as an unstable gaseous compound. Indium(I) oxide, a black powder, is produced when indium(III) oxide decomposes upon heating to 700 °C.

Other oxidation states
Less frequently, indium forms compounds in oxidation state +2 and even fractional oxidation states. Usually such materials feature In–In bonding, most notably in the halides In2X4 and [In2X6]2−, and various subchalcogenides such as In4Se3. Several other compounds are known to combine indium(I) and indium(III), such as InI6(InIIICl6)Cl3, InI5(InIIIBr4)2(InIIIBr6), and InIInIIIBr4.

Organoindium compounds
Organoindium compounds feature In–C bonds. Most are In(III) derivatives, but cyclopentadienylindium(I) is an exception. It was the first known organoindium(I) compound, and is polymeric, consisting of zigzag chains of alternating indium atoms and cyclopentadienyl complexes. Perhaps the best-known organoindium compound is trimethylindium, In(CH3)3, used to prepare certain semiconducting materials.

History
In 1863, the German chemists Ferdinand Reich and Hieronymous Theodor Richter were testing ores from the mines around Freiberg, Saxony. They dissolved the minerals pyrite, arsenopyrite, galena and sphalerite in hydrochloric acid and distilled raw zinc chloride. Reich, who was color-blind, employed Richter as an assistant for detecting the colored spectral lines. Knowing that ores from that region sometimes contain thallium, they searched for the green thallium emission spectrum lines. Instead, they found a bright blue line.
Because that blue line did not match any known element, they hypothesized a new element was present in the minerals. They named the element indium, from the indigo color seen in its spectrum, after the Latin indicum, meaning 'of India'. Richter went on to isolate the metal in 1864. An ingot was presented at the World Fair of 1867. Reich and Richter later fell out when the latter claimed to be the sole discoverer.

Occurrence
Indium is created by the long-lasting (up to thousands of years) s-process (slow neutron capture) in low-to-medium-mass stars (between 0.6 and 10 solar masses). When a silver-109 atom captures a neutron, it transmutes into silver-110, which then undergoes beta decay to become cadmium-110. Capturing further neutrons, it becomes cadmium-115, which decays to indium-115 by another beta decay. This explains why the radioactive isotope is more abundant than the stable one. The stable indium isotope, indium-113, is one of the p-nuclei, the origin of which is not fully understood; although indium-113 is known to be made directly in the s- and r-processes (rapid neutron capture), and also as the daughter of very long-lived cadmium-113, which has a half-life of about eight quadrillion years, this cannot account for all indium-113. Indium is the 68th most abundant element in Earth's crust, at approximately 50 ppb. This is similar to the crustal abundance of silver, bismuth and mercury. It very rarely forms its own minerals, or occurs in elemental form. Fewer than 10 indium minerals, such as roquesite (CuInS2), are known, and none occur at sufficient concentrations for economic extraction. Instead, indium is usually a trace constituent of more common ore minerals, such as sphalerite and chalcopyrite. From these, it can be extracted as a by-product during smelting. While the enrichment of indium in these deposits is high relative to its crustal abundance, it is insufficient, at current prices, to support extraction of indium as the main product.
Different estimates exist of the amounts of indium contained within the ores of other metals. However, these amounts are not extractable without mining of the host materials (see Production and availability). Thus, the availability of indium is fundamentally determined by the rate at which these ores are extracted, not by their absolute amount. This aspect is often overlooked in the current debate, e.g. by the Graedel group at Yale in their criticality assessments, explaining the paradoxically low depletion times some studies cite.

Production and availability
Indium is produced exclusively as a by-product during the processing of the ores of other metals. Its main source materials are sulfidic zinc ores, where it is mostly hosted by sphalerite. Minor amounts are probably also extracted from sulfidic copper ores. During the roast-leach-electrowinning process of zinc smelting, indium accumulates in the iron-rich residues. From these, it can be extracted in different ways. It may also be recovered directly from the process solutions. Further purification is done by electrolysis. The exact process varies with the mode of operation of the smelter. Its by-product status means that indium production is constrained by the amount of sulfidic zinc (and copper) ores extracted each year. Therefore, its availability needs to be discussed in terms of supply potential. The supply potential of a by-product is defined as that amount which is economically extractable from its host materials per year under current market conditions (i.e. technology and price). Reserves and resources are not relevant for by-products, since they cannot be extracted independently from the main products. Recent estimates put the supply potential of indium at a minimum of 1,300 t/yr from sulfidic zinc ores and 20 t/yr from sulfidic copper ores. These figures are significantly greater than current production (655 t in 2016).
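The supply-potential headroom described above is simple arithmetic; a minimal sketch using only the figures quoted in the text:

```python
# Supply potential vs. actual production, in tonnes per year (figures from the text).
zinc_potential = 1300    # minimum supply potential from sulfidic zinc ores
copper_potential = 20    # supply potential from sulfidic copper ores
production_2016 = 655    # refined indium production in 2016

total_potential = zinc_potential + copper_potential
print(total_potential)   # 1320 t/yr
print(total_potential / production_2016)  # roughly 2x current production
```

The estimated supply potential is about twice 2016 production, which is the basis for the claim that output could rise substantially without large cost or price increases.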
Thus, major future increases in the by-product production of indium will be possible without significant increases in production costs or price. The average indium price in 2016 was US$240/kg, down from US$705/kg in 2014. China is a leading producer of indium (290 tonnes in 2016), followed by South Korea (195 t), Japan (70 t) and Canada (65 t). The Teck Resources refinery in Trail, British Columbia, is a large single-source indium producer, with an output of 32.5 tonnes in 2005, 41.8 tonnes in 2004 and 36.1 tonnes in 2003. The primary consumption of indium worldwide is LCD production. Demand rose rapidly from the late 1990s to 2010 with the popularity of LCD computer monitors and television sets, which now account for 50% of indium consumption. Increased manufacturing efficiency and recycling (especially in Japan) maintain a balance between demand and supply. According to the UNEP, indium's end-of-life recycling rate is less than 1%.

Applications
In 1924, indium was found to have a valued property of stabilizing non-ferrous metals, and that became the first significant use for the element. The first large-scale application for indium was coating bearings in high-performance aircraft engines during World War II, to protect against damage and corrosion; this is no longer a major use of the element. New uses were found in fusible alloys, solders, and electronics. In the 1950s, tiny beads of indium were used for the emitters and collectors of PNP alloy-junction transistors. In the middle and late 1980s, the development of indium phosphide semiconductors and indium tin oxide thin films for liquid-crystal displays (LCDs) aroused much interest. By 1992, the thin-film application had become the largest end use. Indium(III) oxide and indium tin oxide (ITO) are used as a transparent conductive coating on glass substrates in electroluminescent panels. Indium tin oxide is used as a light filter in low-pressure sodium-vapor lamps.
The infrared radiation is reflected back into the lamp, which increases the temperature within the tube and improves the performance of the lamp. Indium has many semiconductor-related applications. Some indium compounds, such as indium antimonide and indium phosphide, are semiconductors with useful properties: one precursor is usually trimethylindium (TMI), which is also used as the semiconductor dopant in II–VI compound semiconductors. InAs and InSb are used for low-temperature transistors and InP for high-temperature transistors. The compound semiconductors InGaN and InGaP are used in light-emitting diodes (LEDs) and laser diodes. Indium is used in photovoltaics as the semiconductor copper indium gallium selenide (CIGS), also called CIGS solar cells, a type of second-generation thin-film solar cell. Indium is used in PNP bipolar junction transistors with germanium: when soldered at low temperature, indium does not stress the germanium. Indium wire is used as a vacuum seal and a thermal conductor in cryogenics and ultra-high-vacuum applications, in such manufacturing applications as gaskets that deform to fill gaps. Owing to its great plasticity and adhesion to metals, indium sheets are sometimes used for cold-soldering in microwave circuits and waveguide joints, where direct soldering is complicated. Indium is an ingredient in the gallium–indium–tin alloy galinstan, which is liquid at room temperature and replaces mercury in some thermometers. Other alloys of indium with bismuth, cadmium, lead, and tin, which have higher but still low melting points (between 50 and 100 °C), are used in fire sprinkler systems and heat regulators. Indium is one of many substitutes for mercury in alkaline batteries to prevent the zinc from corroding and releasing hydrogen gas. Indium is added to some dental amalgam alloys to decrease the surface tension of the mercury and allow for less mercury and easier amalgamation.
Indium's high neutron-capture cross-section for thermal neutrons makes it suitable for use in control rods for nuclear reactors, typically in an alloy of 80% silver, 15% indium, and 5% cadmium. In nuclear engineering, the (n,n') reactions of 113In and 115In are used to determine magnitudes of neutron fluxes. In 2009, Professor Mas Subramanian and associates at Oregon State University discovered that indium can be combined with yttrium and manganese to form an intensely blue, non-toxic, inert, fade-resistant pigment, YInMn blue, the first new inorganic blue pigment discovered in 200 years.

Biological role and precautions
Indium has no metabolic role in any organism. In a similar way to aluminium salts, indium(III) ions can be toxic to
to give molybdenum(II) iodide. An example involving halogen exchange is the reaction of tantalum(V) chloride with excess aluminium(III) iodide at 400 °C to give tantalum(V) iodide:

3 TaCl5 + 5 AlI3 (excess) → 3 TaI5 + 5 AlCl3

Lower iodides may be produced either through thermal decomposition or disproportionation, or by reducing the higher iodide with hydrogen or a metal, for example:

TaI5 + Ta → Ta6I14 (thermal gradient, 630 °C → 575 °C)

The iodides of groups 1, 2, and 3, along with those of the lanthanides and actinides in the +2 and +3 oxidation states, are mostly ionic, while nonmetals tend to form covalent molecular iodides, as do metals in high oxidation states of +3 and above. Ionic iodides MIn tend to have the lowest melting and boiling points among the halides MXn of the same element, because the electrostatic forces of attraction between the cations and anions are weakest for the large iodide anion. In contrast, covalent iodides tend to have the highest melting and boiling points among the halides of the same element, since iodine is the most polarisable of the halogens and, having the most electrons among them, can contribute the most to van der Waals forces. Naturally, exceptions abound in intermediate iodides where one trend gives way to the other. Similarly, solubilities in water of predominantly ionic iodides (e.g. potassium and calcium) are the greatest among the halides of that element, while those of covalent iodides (e.g. silver) are the lowest. In particular, silver iodide is very insoluble in water and its formation is often used as a qualitative test for iodine.

Iodine halides
The halogens form many binary, diamagnetic interhalogen compounds with stoichiometries XY, XY3, XY5, and XY7 (where X is heavier than Y), and iodine is no exception.
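The stoichiometry of the halogen-exchange reaction above can be verified mechanically by counting atoms on each side; a minimal sketch whose parser handles only simple formulas like TaCl5 (no parentheses or hydrates):

```python
import re
from collections import Counter

def atoms(formula, count=1):
    """Count atoms in a simple formula like 'TaCl5' (no parentheses)."""
    c = Counter()
    for elem, n in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        c[elem] += int(n or 1) * count
    return c

def side(terms):
    """Sum atom counts over (coefficient, formula) pairs."""
    total = Counter()
    for coeff, formula in terms:
        total += atoms(formula, coeff)
    return total

# 3 TaCl5 + 5 AlI3 -> 3 TaI5 + 5 AlCl3
lhs = side([(3, "TaCl5"), (5, "AlI3")])
rhs = side([(3, "TaI5"), (5, "AlCl3")])
print(lhs == rhs)  # True: the equation is balanced
```

Each side carries 3 Ta, 5 Al, 15 Cl and 15 I, confirming the coefficients quoted in the text.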
Iodine forms all three possible diatomic interhalogens, a trifluoride and a trichloride, as well as a pentafluoride and, exceptionally among the halogens, a heptafluoride. Numerous cationic and anionic derivatives are also characterised, such as the wine-red or bright orange compounds of ICl2+ and the dark brown or purplish black compounds of I2Cl+. Apart from these, some pseudohalides are also known, such as cyanogen iodide (ICN), iodine thiocyanate (ISCN), and iodine azide (IN3). Iodine monofluoride (IF) is unstable at room temperature and disproportionates very readily and irreversibly to iodine and iodine pentafluoride, and thus cannot be obtained pure. It can be synthesised from the reaction of iodine with fluorine gas in trichlorofluoromethane at −45 °C, with iodine trifluoride in trichlorofluoromethane at −78 °C, or with silver(I) fluoride at 0 °C. Iodine monochloride (ICl) and iodine monobromide (IBr), on the other hand, are moderately stable. The former, a volatile red-brown compound, was discovered independently by Joseph Louis Gay-Lussac and Humphry Davy in 1813–1814, not long after the discoveries of chlorine and iodine, and it mimics the intermediate halogen bromine so well that Justus von Liebig was misled into mistaking bromine (which he had found) for iodine monochloride. Iodine monochloride and iodine monobromide may be prepared simply by reacting iodine with chlorine or bromine at room temperature and purified by fractional crystallisation. Both are quite reactive and attack even platinum and gold, though not boron, carbon, cadmium, lead, zirconium, niobium, molybdenum, and tungsten. Their reaction with organic compounds depends on conditions. Iodine chloride vapour tends to chlorinate phenol and salicylic acid, since when iodine chloride undergoes homolytic dissociation, chlorine and iodine are produced and the former is more reactive.
However, iodine chloride in tetrachloromethane solution results in iodination being the main reaction, since now heterolytic fission of the I–Cl bond occurs and I+ attacks phenol as an electrophile. Iodine monobromide, however, tends to brominate phenol even in tetrachloromethane solution because it tends to dissociate into its elements in solution, and bromine is more reactive than iodine. When liquid, iodine monochloride and iodine monobromide dissociate into I2X+ cations and IX2− anions (X = Cl, Br); thus they are significant conductors of electricity and can be used as ionising solvents.

Iodine trifluoride (IF3) is an unstable yellow solid that decomposes above −28 °C, and is thus little known. It is difficult to produce because fluorine gas would tend to oxidise iodine all the way to the pentafluoride; reaction at low temperature with xenon difluoride is necessary. Iodine trichloride, which exists in the solid state as the planar dimer I2Cl6, is a bright yellow solid, synthesised by reacting iodine with liquid chlorine at −80 °C; caution is necessary during purification because it easily dissociates to iodine monochloride and chlorine and hence can act as a strong chlorinating agent. Liquid iodine trichloride conducts electricity, possibly indicating dissociation to ICl2+ and ICl4− ions. Iodine pentafluoride (IF5), a colourless, volatile liquid, is the most thermodynamically stable iodine fluoride, and can be made by reacting iodine with fluorine gas at room temperature. It is a fluorinating agent, but is mild enough to store in glass apparatus. Again, slight electrical conductivity is present in the liquid state because of dissociation to IF4+ and IF6−.
The pentagonal bipyramidal iodine heptafluoride (IF7) is an extremely powerful fluorinating agent, behind only chlorine trifluoride, chlorine pentafluoride, and bromine pentafluoride among the interhalogens: it reacts with almost all the elements even at low temperatures, fluorinates Pyrex glass to form iodine(VII) oxyfluoride (IOF5), and sets carbon monoxide on fire.

Iodine oxides and oxoacids
Iodine oxides are the most stable of all the halogen oxides, because of the strong I–O bonds resulting from the large electronegativity difference between iodine and oxygen, and they have been known for the longest time. The stable, white, hygroscopic iodine pentoxide (I2O5) has been known since its formation in 1813 by Gay-Lussac and Davy. It is most easily made by the dehydration of iodic acid (HIO3), of which it is the anhydride. It will quickly oxidise carbon monoxide completely to carbon dioxide at room temperature, and is thus a useful reagent for determining carbon monoxide concentration. It also oxidises nitrogen oxide, ethylene, and hydrogen sulfide. It reacts with sulfur trioxide and peroxydisulfuryl difluoride (S2O6F2) to form salts of the iodyl cation, [IO2]+, and is reduced by concentrated sulfuric acid to iodosyl salts involving [IO]+. It may be fluorinated by fluorine, bromine trifluoride, sulfur tetrafluoride, or chloryl fluoride, resulting in iodine pentafluoride, which also reacts with iodine pentoxide, giving iodine(V) oxyfluoride, IOF3. A few other less stable oxides are known, notably I4O9 and I2O4; their structures have not been determined, but reasonable guesses are IIII(IVO3)3 (i.e. iodine(III) iodate) and [IO]+[IO3]− respectively. More important are the four oxoacids: hypoiodous acid (HIO), iodous acid (HIO2), iodic acid (HIO3), and periodic acid (HIO4 or H5IO6).
When iodine dissolves in aqueous solution, the following reactions occur:

I2 + H2O ⇌ HIO + H+ + I− (Kac = 2.0 × 10^−13 mol^2 l^−2)
I2 + 2 OH− ⇌ IO− + H2O + I− (Kalk = 30 mol^−1 l)

Hypoiodous acid is unstable to disproportionation. The hypoiodite ions thus formed disproportionate immediately to give iodide and iodate:

3 IO− → 2 I− + IO3− (K = 10^20)

Iodous acid and iodite are even less stable and exist only as fleeting intermediates in the oxidation of iodide to iodate, if at all. Iodates are by far the most important of these compounds; they can be made by oxidising alkali metal iodides with oxygen at 600 °C and high pressure, or by oxidising iodine with chlorates. Unlike chlorates, which disproportionate very slowly to form chloride and perchlorate, iodates are stable to disproportionation in both acidic and alkaline solutions. From these, salts of most metals can be obtained. Iodic acid is most easily made by oxidation of an aqueous iodine suspension by electrolysis or fuming nitric acid. Iodate has the weakest oxidising power of the halates, but reacts the quickest.

Many periodates are known, including not only the expected tetrahedral IO4−, but also square-pyramidal IO5^3−, octahedral orthoperiodate IO6^5−, [IO3(OH)3]2−, and [I2O8(OH2)]4−. They are usually made by oxidising alkaline sodium iodate electrochemically (with lead(IV) oxide as the anode) or by chlorine gas:

IO3− + 6 OH− → IO6^5− + 3 H2O + 2 e−
IO3− + 6 OH− + Cl2 → IO6^5− + 2 Cl− + 3 H2O

They are thermodynamically and kinetically powerful oxidising agents, quickly oxidising Mn2+ to MnO4−, and cleaving glycols, α-diketones, α-ketols, α-aminoalcohols, and α-diamines. Orthoperiodate especially stabilises high oxidation states among metals because of its very high negative charge of −5. Orthoperiodic acid, H5IO6, is stable, and dehydrates at 100 °C in a vacuum to metaperiodic acid, HIO4. Attempting to go further does not result in the nonexistent iodine heptoxide (I2O7), but rather iodine pentoxide and oxygen.
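The hydrolysis constant quoted above implies that very little iodine actually disproportionates in pure water; a minimal numerical sketch, assuming a hypothetical 1 mM iodine solution and taking x = [HIO] = [H+] = [I−]:

```python
# Estimate the extent of iodine hydrolysis, I2 + H2O <=> HIO + H+ + I-,
# using Kac = 2.0e-13 mol^2 L^-2 from the text. With x = [HIO] = [H+] = [I-]
# and total iodine c, the equilibrium condition is x^3 / (c - x) = Kac.
KAC = 2.0e-13
c = 1.0e-3  # total I2 concentration in mol/L (illustrative choice)

# Solve x^3 / (c - x) = KAC by bisection; the root lies between 0 and c.
lo, hi = 0.0, c
for _ in range(200):
    mid = (lo + hi) / 2
    if mid**3 / (c - mid) < KAC:
        lo = mid
    else:
        hi = mid

x = (lo + hi) / 2
print(f"hydrolysed fraction: {x / c:.2e}")  # only ~0.6% of the iodine reacts
```

The left-hand side of the equilibrium condition grows monotonically on (0, c), so bisection converges reliably; for a 1 mM solution only a fraction of a percent of the dissolved iodine is hydrolysed, consistent with the small value of Kac.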
Periodic acid may be protonated by sulfuric acid to give the I(OH)6+ cation, isoelectronic to Te(OH)6 and Sb(OH)6−, and giving salts with bisulfate and sulfate.

Polyiodine compounds
When iodine dissolves in strong acids, such as fuming sulfuric acid, a bright blue paramagnetic solution containing I2+ cations is formed. A solid salt of the diiodine cation may be obtained by oxidising iodine with antimony pentafluoride:

2 I2 + 5 SbF5 → 2 I2Sb2F11 + SbF3

The salt I2Sb2F11 is dark blue, and the blue tantalum analogue I2Ta2F11 is also known. Whereas the I–I bond length in I2 is 267 pm, that in I2+ is only 256 pm, as the missing electron in the latter has been removed from an antibonding orbital, making the bond stronger and hence shorter. In fluorosulfuric acid solution, deep-blue I2+ reversibly dimerises below −60 °C, forming red rectangular diamagnetic I4^2+. Other polyiodine cations are less well characterised, including the bent dark-brown or black I3+ and the centrosymmetric C2h green or black I5+, known in the AsF6− and AlCl4− salts among others.

The only important polyiodide anion in aqueous solution is linear triiodide, I3−. Its formation explains why the solubility of iodine in water may be increased by the addition of potassium iodide solution:

I2 + I− ⇌ I3− (Keq = ~700 at 20 °C)

Many other polyiodides may be found when solutions containing iodine and iodide crystallise; their salts with large, weakly polarising cations such as Cs+ may be isolated.

Organoiodine compounds
Organoiodine compounds have been fundamental in the development of organic synthesis, as in the Hofmann elimination of amines, the Williamson ether synthesis, the Wurtz coupling reaction, and in Grignard reagents. The carbon–iodine bond is a common functional group that forms part of core organic chemistry; formally, these compounds may be thought of as organic derivatives of the iodide anion.
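The triiodide equilibrium mentioned above shows directly how added iodide boosts iodine solubility; a minimal sketch, assuming the quoted Keq of ~700 (units of L/mol are implied by the equilibrium expression) and a few illustrative iodide concentrations:

```python
# How added iodide boosts iodine solubility via I2 + I- <=> I3-.
# Keq ~ 700 at 20 degC (from the text); then [I3-]/[I2] = Keq * [I-].
KEQ = 700.0  # L/mol, implied by the equilibrium expression

for iodide in (0.01, 0.1, 1.0):  # illustrative free iodide concentrations, mol/L
    ratio = KEQ * iodide
    print(f"[I-] = {iodide:5.2f} M -> [I3-]/[I2] = {ratio:g}")
```

Even a 0.01 M iodide solution holds seven times as much iodine as triiodide than remains as free I2, which is why potassium iodide is added when preparing aqueous iodine solutions.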
The simplest organoiodine compounds, alkyl iodides, may be synthesised by the reaction of alcohols with phosphorus triiodide; these may then be used in nucleophilic substitution reactions, or for preparing Grignard reagents. The C–I bond is the weakest of all the carbon–halogen bonds due to the minuscule difference in electronegativity between carbon (2.55) and iodine (2.66). As such, iodide is the best leaving group among the halogens, to such an extent that many organoiodine compounds turn yellow when stored over time owing to decomposition into elemental iodine; because the C–I bond forms and cleaves so easily, these compounds are widely used in organic synthesis. They are also significantly denser than the other organohalogen compounds, thanks to the high atomic weight of iodine. A few organic oxidising agents like the iodanes contain iodine in a higher oxidation state than −1, such as 2-iodoxybenzoic acid, a common reagent for the oxidation of alcohols to aldehydes, and iodobenzene dichloride (PhICl2), used for the selective chlorination of alkenes and alkynes. One of the better-known uses of organoiodine compounds is the iodoform test, in which iodoform (CHI3) is produced by the exhaustive iodination of a methyl ketone (or another compound capable of being oxidised to a methyl ketone). Some drawbacks of using organoiodine compounds, as compared to organochlorine or organobromine compounds, are the greater expense and toxicity of the iodine derivatives, since iodine is expensive and organoiodine compounds are stronger alkylating agents. For example, iodoacetamide and iodoacetic acid denature proteins by irreversibly alkylating cysteine residues and preventing the reformation of disulfide linkages. Halogen exchange to produce iodoalkanes by the Finkelstein reaction is slightly complicated by the fact that iodide is a better leaving group than chloride or bromide.
The difference is nevertheless small enough that the reaction can be driven to completion by exploiting the differential solubility of halide salts, or by using a large excess of the halide salt. In the classic Finkelstein reaction, an alkyl chloride or an alkyl bromide is converted to an alkyl iodide by treatment with a solution of sodium iodide in acetone. Sodium iodide is soluble in acetone while sodium chloride and sodium bromide are not. The reaction is driven toward products by mass action due to the precipitation of the insoluble salt. Occurrence and production Iodine is the least abundant of the stable halogens, comprising only 0.46 parts per million of Earth's crustal rocks (compare: fluorine 544 ppm, chlorine 126 ppm, bromine 2.5 ppm). Among the 84 elements which occur in significant quantities (elements 1–42, 44–60, 62–83, 90 and 92), it ranks 61st in abundance. Iodide minerals are rare, and most deposits that are concentrated enough for economical extraction are iodate minerals instead. Examples include lautarite, Ca(IO3)2, and dietzeite, 7Ca(IO3)2·8CaCrO4. These are the minerals that occur as trace impurities in the caliche, found in Chile, whose main product is sodium nitrate. In total, they can contain at least 0.02% and at most 1% iodine by mass. Sodium iodate is extracted from the caliche and reduced to iodide by sodium bisulfite. This solution is then reacted with freshly extracted iodate, resulting in comproportionation to iodine, which may be filtered off. The caliche was the main source of iodine in the 19th century and continues to be important today, replacing kelp (which is no longer an economically viable source), but in the late 20th century brines emerged as a comparable source. The Japanese Minami Kanto gas field east of Tokyo and the American Anadarko Basin gas field in northwest Oklahoma are the two largest such sources. The brine emerges at over 60 °C because of the depth of its source.
The brine is first purified and acidified using sulfuric acid, then the iodide present is oxidised to iodine with chlorine. An iodine solution is produced, but is dilute and must be concentrated. Air is blown into the solution to evaporate the iodine, which is passed into an absorbing tower, where sulfur dioxide reduces the iodine. The hydrogen iodide (HI) is reacted with chlorine to precipitate the iodine. After filtering and purification the iodine is packed. 2 HI + Cl2 → I2↑ + 2 HCl I2 + 2 H2O + SO2 → 2 HI + H2SO4 2 HI + Cl2 → I2↓ + 2 HCl These sources ensure that Chile and Japan are the largest producers of iodine today. Alternatively, the brine may be treated with silver nitrate to precipitate out iodine as silver iodide, which is then decomposed by reaction with iron to form metallic silver and a solution of iron(II) iodide. The iodine may then be liberated by displacement with chlorine. Applications About half of all produced iodine goes into various organoiodine compounds, another 15% remains as the pure element, another 15% is used to form potassium iodide, and another 15% for other inorganic iodine compounds. Among the major uses of iodine compounds are catalysts, animal feed supplements, stabilisers, dyes, colourants and pigments, pharmaceuticals, sanitation (from tincture of iodine), and photography; minor uses include smog inhibition, cloud seeding, and various uses in analytical chemistry. Chemical analysis The iodide and iodate anions are often used for quantitative volumetric analysis, for example in iodometry. Solutions of iodine in nonpolar solvents are violet, the colour of iodine vapour; charge-transfer complexes form when iodine is dissolved in polar solvents, hence changing the colour. Iodine is violet when dissolved in carbon tetrachloride and saturated hydrocarbons but deep brown in alcohols and amines, solvents that form charge-transfer adducts.
The melting and boiling points of iodine are the highest among the halogens, conforming to the increasing trend down the group, since iodine has the largest electron cloud among them that is the most easily polarised, resulting in its molecules having the strongest van der Waals interactions among the halogens. Similarly, iodine is the least volatile of the halogens, though the solid still can be observed to give off purple vapour. Because of this property, iodine is commonly used to demonstrate sublimation directly from solid to gas, which has given rise to the misconception that it does not melt at atmospheric pressure. Because it has the largest atomic radius among the halogens, iodine has the lowest first ionisation energy, lowest electron affinity, lowest electronegativity and lowest reactivity of the halogens. The halogen–halogen bond in diiodine is the weakest of all the halogens. As such, 1% of a sample of gaseous iodine at atmospheric pressure is dissociated into iodine atoms at 575 °C. Temperatures greater than 750 °C are required for fluorine, chlorine, and bromine to dissociate to a similar extent. Most bonds to iodine are weaker than the analogous bonds to the lighter halogens. Gaseous iodine is composed of I2 molecules with an I–I bond length of 266.6 pm. The I–I bond is one of the longest single bonds known. It is even longer (271.5 pm) in solid orthorhombic crystalline iodine, which has the same crystal structure as chlorine and bromine. (The record is held by iodine's neighbour xenon: the Xe–Xe bond length is 308.71 pm.) As such, within the iodine molecule, significant electronic interactions occur with the two next-nearest neighbours of each atom, and these interactions give rise, in bulk iodine, to a shiny appearance and semiconducting properties. Iodine is a two-dimensional semiconductor with a band gap of 1.3 eV (125 kJ/mol): it is a semiconductor in the plane of its crystalline layers and an insulator in the perpendicular direction.
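The band gap above is quoted in both per-particle (eV) and molar (kJ/mol) units; the conversion is a one-liner worth checking, using the exact SI values of the constants:

```python
# Unit check for the band gap quoted above: 1.3 eV per particle is about
# 125 kJ/mol. Both constants are exact by the SI definition.
EV_IN_JOULES = 1.602176634e-19  # J per eV
AVOGADRO = 6.02214076e23        # particles per mole

def ev_to_kj_per_mol(ev):
    # energy per particle (J) times particles per mole, expressed in kJ/mol
    return ev * EV_IN_JOULES * AVOGADRO / 1000.0

print(round(ev_to_kj_per_mol(1.3), 1))  # 125.4
```

The product of the two constants, about 96.49 kJ/mol per eV, is the familiar Faraday-derived conversion factor.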
Isotopes Of the thirty-seven known isotopes of iodine, only one occurs in nature, iodine-127. The others are radioactive and have half-lives too short to be primordial. As such, iodine is both monoisotopic and mononuclidic and its atomic weight is known to great precision, as it is a constant of nature. The longest-lived of the radioactive isotopes of iodine is iodine-129, which has a half-life of 15.7 million years, decaying via beta decay to stable xenon-129. Some iodine-129 was formed along with iodine-127 before the formation of the Solar System, but it has by now completely decayed away, making it an extinct radionuclide that is nevertheless still useful in dating the history of the early Solar System or very old groundwaters, due to its mobility in the environment. Its former presence may be determined from an excess of its daughter xenon-129. Traces of iodine-129 still exist today, as it is also a cosmogenic nuclide, formed from cosmic ray spallation of atmospheric xenon: these traces make up 10−14 to 10−10 of all terrestrial iodine. It also occurs from open-air nuclear testing, and is not hazardous because of its incredibly long half-life, the longest of all fission products. At the peak of thermonuclear testing in the 1960s and 1970s, iodine-129 still made up only about 10−7 of all terrestrial iodine. Excited states of iodine-127 and iodine-129 are often used in Mössbauer spectroscopy. The other iodine radioisotopes have much shorter half-lives, no longer than days. Some of them have medical applications involving the thyroid gland, where the iodine that enters the body is stored and concentrated. Iodine-123 has a half-life of thirteen hours and decays by electron capture to tellurium-123, emitting gamma radiation; it is used in nuclear medicine imaging, including single photon emission computed tomography (SPECT) and X-ray computed tomography (X-Ray CT) scans. 
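The claim above that primordial iodine-129 has "completely decayed away" follows from simple half-life arithmetic; a minimal sketch, with the Solar System age taken as an assumed round figure of 4570 Myr for illustration:

```python
# Half-life arithmetic behind "completely decayed away": fraction of
# iodine-129 (half-life 15.7 Myr, from the text) remaining after t Myr.
# The 4570 Myr Solar System age is an assumed round figure.
HALF_LIFE_MYR = 15.7

def fraction_remaining(t_myr):
    return 0.5 ** (t_myr / HALF_LIFE_MYR)

print(fraction_remaining(15.7))  # 0.5 after one half-life
print(fraction_remaining(4570))  # ~2e-88: essentially nothing survives
```

After roughly 290 half-lives, the surviving fraction is vanishingly small, which is why the iodine-129 found today must be cosmogenic or anthropogenic rather than primordial.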
Iodine-125 has a half-life of fifty-nine days, decaying by electron capture to tellurium-125 and emitting low-energy gamma radiation; the second-longest-lived iodine radioisotope, it has uses in biological assays, nuclear medicine imaging and in radiation therapy as brachytherapy to treat a number of conditions, including prostate cancer, uveal melanomas, and brain tumours. Finally, iodine-131, with a half-life of eight days, beta decays to an excited state of stable xenon-131 that then converts to the ground state by emitting gamma radiation. It is a common fission product and thus is present in high levels in radioactive fallout. It may then be absorbed through contaminated food, and will also accumulate in the thyroid. As it decays, it may cause damage to the thyroid. The primary risk from exposure to high levels of iodine-131 is the chance occurrence of radiogenic thyroid cancer in later life. Other risks include the possibility of non-cancerous growths and thyroiditis. The usual means of protection against the negative effects of iodine-131 is by saturating the thyroid gland with stable iodine-127 in the form of potassium iodide tablets, taken daily for optimal prophylaxis. However, iodine-131 may also be used for medicinal purposes in radiation therapy for this very reason, when tissue destruction is desired after iodine uptake by the tissue. Iodine-131 is also used as a radioactive tracer. Chemistry and compounds Iodine is quite reactive, but it is much less reactive than the other halogens. For example, while chlorine gas will halogenate carbon monoxide, nitric oxide, and sulfur dioxide (to phosgene, nitrosyl chloride, and sulfuryl chloride respectively), iodine will not do so. 
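The same exponential-decay arithmetic shows why iodine-131 fallout, though intense, fades quickly; a minimal sketch using the 8-day half-life from the text:

```python
# Iodine-131 decays with an 8-day half-life (from the text), so its
# activity collapses within weeks of release.
HALF_LIFE_DAYS = 8.0

def activity_fraction(days):
    return 0.5 ** (days / HALF_LIFE_DAYS)

print(activity_fraction(8))   # 0.5 after one half-life
print(activity_fraction(80))  # ~0.001: about 0.1% left after ten half-lives
```

This steep decay is also what makes iodine-131 useful therapeutically: a high initial dose delivers its radiation to the thyroid over days rather than years.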
Furthermore, iodination of metals tends to result in lower oxidation states than chlorination or bromination; for example, rhenium metal reacts with chlorine to form rhenium hexachloride, but with bromine it forms only rhenium pentabromide and iodine can achieve only rhenium tetraiodide. By the same token, however, since iodine has the lowest ionisation energy among the halogens and is the most easily oxidised of them, it has a more significant cationic chemistry and its higher oxidation states are rather more stable than those of bromine and chlorine, for example in iodine heptafluoride. I2 dissociates in light, with an absorbance maximum at 578 nm. Charge-transfer complexes The iodine molecule, I2, dissolves in CCl4 and aliphatic hydrocarbons to give bright violet solutions. In these solvents the absorption band maximum occurs in the 520–540 nm region and is assigned to a π* to σ* transition. When I2 reacts with Lewis bases in these solvents, a blue shift of the I2 peak is seen and a new peak (230–330 nm) arises due to the formation of adducts, which are referred to as charge-transfer complexes. Hydrogen iodide The simplest compound of iodine is hydrogen iodide, HI. It is a colourless gas that reacts with oxygen to give water and iodine. Although it is useful in iodination reactions in the laboratory, it does not have large-scale industrial uses, unlike the other hydrogen halides. Commercially, it is usually made by reacting iodine with hydrogen sulfide or hydrazine: 2 I2 + N2H4 → 4 HI + N2 At room temperature, it is a colourless gas, like all of the hydrogen halides except hydrogen fluoride, since hydrogen cannot form strong hydrogen bonds to the large and only mildly electronegative iodine atom. It melts at −51.0 °C and boils at −35.1 °C.
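The absorption bands mentioned above (the 520–540 nm maximum of violet solutions and the 578 nm dissociation band) correspond to photon energies that can be checked with E = hc/λ; a quick sketch using exact SI constants:

```python
# Converting the quoted absorption wavelengths to photon energies via
# E = hc/lambda, using exact SI constants.
PLANCK = 6.62607015e-34     # J s
LIGHT_SPEED = 2.99792458e8  # m/s
EV = 1.602176634e-19        # J per eV

def photon_energy_ev(wavelength_nm):
    return PLANCK * LIGHT_SPEED / (wavelength_nm * 1e-9) / EV

print(round(photon_energy_ev(578), 2))  # ~2.15 eV (I2 dissociation band)
print(round(photon_energy_ev(530), 2))  # ~2.34 eV (violet-solution band centre)
```

Both energies sit in the green-yellow part of the visible spectrum, consistent with solutions that absorb there appearing violet.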
It is an endothermic compound that can exothermically dissociate at room temperature, although the process is very slow unless a catalyst is present: the reaction between hydrogen and iodine at room temperature to give hydrogen iodide does not proceed to completion. The H–I bond dissociation energy is likewise the smallest of the hydrogen halides, at 295 kJ/mol. Aqueous hydrogen iodide is known as hydroiodic acid, which is a strong acid. Hydrogen iodide is exceptionally soluble in water: one litre of water will dissolve 425 litres of hydrogen iodide, and the saturated solution has only four water molecules per molecule of hydrogen iodide. Commercial so-called "concentrated" hydroiodic acid usually contains 48–57% HI by mass; the solution forms an azeotrope with boiling point 126.7 °C at 56.7 g HI per 100 g solution. Hence hydroiodic acid cannot be concentrated past this point by evaporation of water. Unlike hydrogen fluoride, anhydrous liquid hydrogen iodide is difficult to work with as a solvent, because its boiling point is low, it has a small liquid range, its dielectric constant is low and it does not dissociate appreciably into H2I+ and HI2− ions – the latter, in any case, are much less stable than the bifluoride ions (HF2−) due to the very weak hydrogen bonding between hydrogen and iodine, though its salts with very large and weakly polarising cations such as Cs+ and NR4+ (R = Me, Et, Bun) may still be isolated. Anhydrous hydrogen iodide is a poor solvent, able to dissolve only small molecular compounds such as nitrosyl chloride and phenol, or salts with very low lattice energies such as tetraalkylammonium halides. Other binary iodides Nearly all elements in the periodic table form binary iodides.
The exceptions are decidedly in the minority and stem in each case from one of three causes: extreme inertness and reluctance to participate in chemical reactions (the noble gases); extreme nuclear instability hampering chemical investigation before decay and transmutation (many of the heaviest elements beyond bismuth); and having an electronegativity higher than iodine's (oxygen, nitrogen, and the first three halogens), so that the resultant binary compounds are formally not iodides but rather oxides, nitrides, or halides of iodine. (Nonetheless, nitrogen triiodide is named as an iodide as it is analogous to the other nitrogen trihalides.) Given the large size of the iodide anion and iodine's weak oxidising power, high oxidation states are difficult to achieve in binary iodides, the maximum known being in the pentaiodides of niobium, tantalum, and protactinium. Iodides can be made by reaction of an element or its oxide, hydroxide, or carbonate with hydroiodic acid, and then dehydrated by mildly high temperatures combined with either low pressure or anhydrous hydrogen iodide gas. These methods work best when the iodide product is stable to hydrolysis; otherwise, the possibilities include high-temperature oxidative iodination of the element with iodine or hydrogen iodide, high-temperature iodination of a metal oxide or other halide by iodine, a volatile metal halide, carbon tetraiodide, or an organic iodide. For example, molybdenum(IV) oxide reacts with aluminium(III) iodide at 230 °C to give molybdenum(II) iodide. 
An example involving halogen exchange is given below, involving the reaction of tantalum(V) chloride with excess aluminium(III) iodide at 400 °C to give tantalum(V) iodide: 3 TaCl5 + 5 AlI3 (excess) → 3 TaI5 + 5 AlCl3 Lower iodides may be produced either through thermal decomposition or disproportionation, or by reducing the higher iodide with hydrogen or a metal, for example: Ta + TaI5 → Ta6I14 (thermal gradient, 630 °C → 575 °C) Most of the iodides of groups 1, 2, and 3, along with the lanthanides and actinides in the +2 and +3 oxidation states, are mostly ionic, while nonmetals tend to form covalent molecular iodides, as do metals in high oxidation states from +3 and above. Ionic iodides MIn tend to have the lowest melting and boiling points among the halides MXn of the same element, because the electrostatic forces of attraction between the cations and anions are weakest for the large iodide anion. In contrast, covalent iodides tend to instead have the highest melting and boiling points among the halides of the same element, since iodine is the most polarisable of the halogens and, having the most electrons among them, can contribute the most to van der Waals forces. Naturally, exceptions abound in intermediate iodides where one trend gives way to the other. Similarly, solubilities in water of predominantly ionic iodides (e.g. potassium and calcium) are the greatest among ionic halides of that element, while those of covalent iodides (e.g. silver) are the lowest of that element. In particular, silver iodide is very insoluble in water and its formation is often used as a qualitative test for iodine. Iodine halides The halogens form many binary, diamagnetic interhalogen compounds with stoichiometries XY, XY3, XY5, and XY7 (where X is heavier than Y), and iodine is no exception.
Iodine forms all three possible diatomic interhalogens, a trifluoride and trichloride, as well as a pentafluoride and, exceptionally among the halogens, a heptafluoride. Numerous cationic and anionic derivatives are also characterised, such as the wine-red or bright orange compounds of ICl2+ and the dark brown or purplish black compounds of I2Cl+. Apart from these, some pseudohalides are also known, such as cyanogen iodide (ICN), iodine thiocyanate (ISCN), and iodine azide (IN3). Iodine monofluoride (IF) is unstable at room temperature and disproportionates very readily and irreversibly to iodine and iodine pentafluoride, and thus cannot be obtained pure. It can be synthesised from the reaction of iodine with fluorine gas in trichlorofluoromethane at −45 °C, with iodine trifluoride in trichlorofluoromethane at −78 °C, or with silver(I) fluoride at 0 °C. Iodine monochloride (ICl) and iodine monobromide (IBr), on the other hand, are moderately stable. The former, a volatile red-brown compound, was discovered independently by Joseph Louis Gay-Lussac and Humphry Davy in 1813–1814 not long after the discoveries of chlorine and iodine, and it mimics the intermediate halogen bromine so well that Justus von Liebig was misled into mistaking bromine (which he had found) for iodine monochloride. Iodine monochloride and iodine monobromide may be prepared simply by reacting iodine with chlorine or bromine at room temperature and purified by fractional crystallisation. Both are quite reactive and attack even platinum and gold, though not boron, carbon, cadmium, lead, zirconium, niobium, molybdenum, and tungsten. Their reaction with organic compounds depends on conditions. Iodine chloride vapour tends to chlorinate phenol and salicylic acid, since when iodine chloride undergoes homolytic dissociation, chlorine and iodine are produced and the former is more reactive.
However, iodine chloride in tetrachloromethane solution results in iodination being the main reaction, since now heterolytic fission of the I–Cl bond occurs and I+ attacks phenol as an electrophile. Iodine monobromide, meanwhile, tends to brominate phenol even in tetrachloromethane solution because it tends to dissociate into its elements in solution, and bromine is more reactive than iodine. When liquid, iodine monochloride and iodine monobromide dissociate into I2X+ cations and IX2− anions (X = Cl, Br); thus they are significant conductors of electricity and can be used as ionising solvents. Iodine trifluoride (IF3) is an unstable yellow solid that decomposes above −28 °C. It is thus little-known. It is difficult to produce because fluorine gas would tend to oxidise iodine all the way to the pentafluoride; reaction at low temperature with xenon difluoride is necessary. Iodine trichloride, which exists in the solid state as the planar dimer I2Cl6, is a bright yellow solid, synthesised by reacting iodine with liquid chlorine at −80 °C; caution is necessary during purification because it easily dissociates to iodine monochloride and chlorine and hence can act as a strong chlorinating agent. Liquid iodine trichloride conducts electricity, possibly indicating dissociation to ICl2+ and ICl4− ions. Iodine pentafluoride (IF5), a colourless, volatile liquid, is the most thermodynamically stable iodine fluoride, and can be made by reacting iodine with fluorine gas at room temperature. It is a fluorinating agent, but is mild enough to store in glass apparatus. Again, slight electrical conductivity is present in the liquid state because of dissociation to IF4+ and IF6−.
The pentagonal bipyramidal iodine heptafluoride (IF7) is an extremely powerful fluorinating agent, behind only chlorine trifluoride, chlorine pentafluoride, and bromine pentafluoride among the interhalogens: it reacts with almost all the elements even at low temperatures, fluorinates Pyrex glass to form iodine(VII) oxyfluoride (IOF5), and sets carbon monoxide on fire. Iodine oxides and oxoacids Iodine oxides are the most stable of all the halogen oxides, because of the strong I–O bonds resulting from the large electronegativity difference between iodine and oxygen, and they have been known for the longest time. The stable, white, hygroscopic iodine pentoxide (I2O5) has been known since its formation in 1813 by Gay-Lussac and Davy. It is most easily made by the dehydration of iodic acid (HIO3), of which it is the anhydride. It will quickly oxidise carbon monoxide completely to carbon dioxide at room temperature, and is thus a useful reagent in determining carbon monoxide concentration. It also oxidises nitrogen oxide, ethylene, and hydrogen sulfide. It reacts with sulfur trioxide and peroxydisulfuryl difluoride (S2O6F2) to form salts of the iodyl cation, [IO2]+, and is reduced by concentrated sulfuric acid to iodosyl salts involving [IO]+. It may be fluorinated by fluorine, bromine trifluoride, sulfur tetrafluoride, or chloryl fluoride, resulting in iodine pentafluoride, which also reacts with iodine pentoxide, giving iodine(V) oxyfluoride, IOF3. A few other less stable oxides are known, notably I4O9 and I2O4; their structures have not been determined, but reasonable guesses are IIII(IVO3)3 and [IO]+[IO3]− respectively. More important are the four oxoacids: hypoiodous acid (HIO), iodous acid (HIO2), iodic acid (HIO3), and periodic acid (HIO4 or H5IO6).
When iodine dissolves in aqueous solution, the following reactions occur: I2 + H2O ⇌ HIO + H+ + I− (Kac = 2.0 × 10−13 mol2 l−2) I2 + 2 OH− ⇌ IO− + H2O + I− (Kalk = 30 mol−1 l) Hypoiodous acid is unstable to disproportionation. The hypoiodite ions thus formed disproportionate immediately to give iodide and iodate: 3 IO− ⇌ 2 I− + IO3− (K = 1020) Iodous acid and iodite are even less stable and exist only as a fleeting intermediate in the oxidation of iodide to iodate, if at all. Iodates are by far the most important of these compounds, which can be made by oxidising alkali metal iodides with oxygen at 600 °C and high pressure, or by oxidising iodine with chlorates. Unlike chlorates, which disproportionate very slowly to form chloride and perchlorate, iodates are stable to disproportionation in both acidic and alkaline solutions. From these, salts of most metals can be obtained. Iodic acid is most easily made by oxidation of an aqueous iodine suspension by electrolysis or fuming nitric acid. Iodate has the weakest oxidising power of the halates, but reacts the quickest. Many periodates are known, including not only the expected tetrahedral IO4−, but also square-pyramidal IO53−, octahedral orthoperiodate IO65−, [IO3(OH)3]2−, [I2O8(OH2)]4−, and I2O94−. They are usually made by oxidising alkaline sodium iodate electrochemically (with lead(IV) oxide as the anode) or by chlorine gas: IO3− + 6 OH− → IO65− + 3 H2O + 2 e− IO3− + 6 OH− + Cl2 → IO65− + 2 Cl− + 3 H2O They are thermodynamically and kinetically powerful oxidising agents, quickly oxidising Mn2+ to MnO4−, and cleaving glycols, α-diketones, α-ketols, α-aminoalcohols, and α-diamines. Orthoperiodate especially stabilises high oxidation states among metals because of its very high negative charge of −5. Orthoperiodic acid, H5IO6, is stable, and dehydrates at 100 °C in a vacuum to metaperiodic acid, HIO4. Attempting to go further does not result in the nonexistent iodine heptoxide (I2O7), but rather iodine pentoxide and oxygen.
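The disproportionation constant K = 1020 quoted above implies that hypoiodite is almost entirely consumed at equilibrium; a back-of-the-envelope estimate, assuming an illustrative 0.1 mol/L starting concentration (not a figure from the text):

```python
# With K = 1e20 for 3 IO- <=> 2 I- + IO3- (value from the text),
# disproportionation is essentially complete. Assuming near-total
# conversion of an illustrative C0 = 0.1 mol/L of hypoiodite,
# [I-] ~ 2*C0/3 and [IO3-] ~ C0/3, so the tiny residual follows from
# K = [I-]^2 [IO3-] / [IO-]^3  =>  [IO-] = ((4/27) * C0^3 / K)^(1/3).
K = 1e20
C0 = 0.1  # mol/L, assumed starting hypoiodite concentration

residual_hypoiodite = ((4 / 27) * C0 ** 3 / K) ** (1 / 3)
print(f"{residual_hypoiodite:.1e} mol/L")  # ~1.1e-08 mol/L left
```

A residual on the order of 10−8 mol/L out of 0.1 mol/L confirms that the disproportionation is effectively quantitative, as the text states.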
store sizes for specific countries, IKEA also alters the sizes of their products in order to accommodate cultural differences. In 2015, IKEA announced that it would be attempting a smaller store design at several locations in Canada. This modified store will feature only a display gallery and a small warehouse. One location planned for Kitchener is in the place formerly occupied by a Sears Home store. The warehouses will not keep furniture stocked, and so customers will not be able to drop in to purchase and leave with furniture the same day. Instead, they will purchase the furniture in advance online or in-store and order the furniture delivered to one of the new stores, for a greatly reduced rate. IKEA claims that this new model will allow them to expand quickly into new markets rather than spending years opening a full-size store. In 2020, IKEA opened at Al Wahda Mall in Abu Dhabi, UAE, which at 2,137 sqm was one of the smallest IKEA stores in the world. In the same year, it also opened at 360 Mall in Kuwait; that store, located in an extension of the mall, followed a similar concept but was slightly larger than Al Wahda's, at 3,000 sqm. In 2021, IKEA opened another of its smallest stores at JEM Mall in Jurong East. Replacing the liquidated department store Robinsons, IKEA Jurong is only 6,500 sqm across three levels and the first in Southeast Asia without a "Market Hall" warehouse in its store. Products and services Furniture and homeware Rather than being sold pre-assembled, much of IKEA's furniture is designed to be assembled by the customer. The company claims that this helps reduce costs and use of packaging by not shipping air; the volume of a bookcase, for example, is considerably less if it is shipped unassembled rather than assembled. This is also more practical for European customers using public transport, because flat packs can be more easily carried.
IKEA contends that it has been a pioneering force in sustainable approaches to mass consumer culture. Kamprad calls this "democratic design", meaning that the company applies an integrated approach to manufacturing and design (see also environmental design). In response to the explosion of human population and material expectations in the 20th and 21st centuries, the company implements economies of scale, capturing material streams and creating manufacturing processes that hold costs and resource use down, such as the extensive use of medium-density fibreboard (MDF). Notable items of IKEA furniture include the Poäng armchair, the Billy bookcase and the Klippan sofa, all of which have sold by the tens of millions since the late 1970s. The IKEA and LEGO brands teamed up to create a range of simple storage solutions for children and adults. In June 2021, IKEA Canada unveiled a series of 10 "Love Seats" inspired by different Pride flags, created by four LGBTQ designers. Furniture and product naming IKEA products are identified by one-word (rarely two-word) names. Most of the names are Scandinavian in origin. Although there are some exceptions, most product names are based on a special naming system developed by IKEA. Company founder Kamprad was dyslexic and found that naming the furniture with proper names and words, rather than a product code, made the names easier to remember. Some of IKEA's Swedish product names have amusing or unfortunate connotations in other languages, sometimes resulting in the names being withdrawn in certain countries. Notable examples in English include the "Jerker" computer desk (discontinued several years ago), "Fukta" plant spray, "Fartfull" workbench, and "Lyckhem" (meaning bliss).
Because several products are named after real locations, some places share names with objects considered generally unpleasant, such as the lake of Bolmen lending its name to a toilet brush and the village of Toften to a trash can. In November 2021, Visit Sweden launched a jocular campaign named "Discover the Originals", inviting tourists to visit the locations that have acquired these unfortunate associations. Design services During the COVID-19 pandemic in 2020, to facilitate social distancing between customers and accommodate the increased volume of customers booking IKEA design consultation services, IKEA stores in Saudi Arabia and Bahrain improved their design consulting process by piloting Ombori's paperless queue management system for the brand. In March 2021, IKEA announced the launch of IKEA Studio, an Apple-exclusive project enabling customers to design full-scale rooms with IKEA furniture on an iPhone. Smart home In 2016, IKEA started a move into the smart home business. The IKEA TRÅDFRI smart lighting kit was one of the first ranges signalling this change. IKEA's media team has confirmed that the smart home project will be a big move. They have also started a partnership with Philips Hue. The wireless charging furniture, integrating wireless Qi charging into everyday furniture, is another strategy for the smart home business. A collaboration to build Sonos smart speaker technology into furniture sold by IKEA was announced in December 2017. The first products resulting from the collaboration launched in August 2019. Under the product name SYMFONISK, IKEA and Sonos have made two distinct wireless speakers that integrate with existing Sonos households or can serve as an entry point into the Sonos ecosystem: one that is also a lamp, and another that is a more traditional-looking bookshelf speaker.
Both products, as well as accessories for mounting the bookshelf speakers, went on sale worldwide on 1 August. At launch, IKEA SYMFONISK could only be controlled from the Sonos app, but IKEA planned to add support for the speakers in its own Home Smart app in October, allowing them to be paired with scenes that control the lights and smart blinds together with the speakers. Houses and flats IKEA has also expanded its product base to include flat-pack houses and apartments, in an effort to cut the prices involved in a first-time buyer's home. The product, named BoKlok, was launched in Sweden in 1996 in a joint venture with Skanska. Now operating in the Nordic countries and in the UK, the venture's confirmed sites in England include London, Ashton-under-Lyne, Leeds, Gateshead, Warrington and Liverpool. Solar PV systems At the end of September 2013, the company announced that solar panel packages, so-called "residential kits", for houses would be sold at 17 UK stores by mid-2014. The decision followed a successful pilot project at the Lakeside IKEA store, whereby one photovoltaic system was sold almost every day. The solar CIGS panels are manufactured by Solibro, a German-based subsidiary of the Chinese company Hanergy. By the end of 2014, IKEA began to sell Solibro's solar residential kits in the Netherlands and in Switzerland. In November 2015, IKEA ended its contract with Hanergy and in April 2016 started working with Solarcentury to sell solar panels in the United Kingdom. The deal would allow customers to order panels online and at three stores before being expanded to all United Kingdom stores by the end of summer. Furniture rental In April 2019, the company announced that it would begin test marketing a new concept, renting furniture to customers. One of the motivating factors was the fact that inexpensive IKEA products were viewed as "disposable" and often ended up being scrapped after a few years of use.
This came at a time when younger buyers in particular said they wanted to minimize their impact on the environment, a view the company acknowledged. In an interview, Jesper Brodin, the chief executive of Ingka Group (the largest franchisee of IKEA stores), commented that "climate change and unsustainable consumption are among the biggest challenges we face in society". The other strategic objectives of the plan were to be more affordable and more convenient. The company said it would test the rental concept in all 30 of its markets by 2020, expecting it to increase the number of times a piece of furniture would be used before recycling. Restaurant and food markets Since 1958, every IKEA store has included a café that, until 2011, sold branded Swedish prepared specialist foods, such as meatballs, packages of gravy, lingonberry jam, various biscuits and crackers, and salmon and fish roe spread. The new label offers a variety of items including chocolates, meatballs, jams, pancakes, salmon and various drinks. Although the cafés primarily serve Swedish food, the menu varies based on the culture, food and location of each store. With restaurants in 38 countries, menus incorporate local dishes, including shawarma in Saudi Arabia, poutine in Canada, macarons in France, and gelato in Italy. In Indonesia, the Swedish meatball recipe is adapted to meet the country's halal requirements. Stores in Israel sell kosher food under rabbinical supervision; the kosher restaurants are separated into dairy and meat areas. In many locations, the IKEA restaurants open daily before the rest of the store and serve breakfast. All food products are based on Swedish recipes and traditions. Food accounts for 5% of IKEA's sales. Since August 2020, IKEA has offered plant-based meatballs, made from potatoes, apples, pea protein, and oats, in all of its European stores.
Småland Every store has a children's play area named Småland (Swedish for "small lands"; it is also the name of the Swedish province where founder Kamprad was born). Parents drop off their children at a gate to the playground and pick them up at another entrance. In some stores, parents are given free pagers by the on-site staff, which the staff can use to summon parents whose children need them earlier than expected; in others, staff summon parents through announcements over the in-store public address system or by calling them on their cellphones. The largest Småland play area is located at the IKEA store in Navi Mumbai, India. Ventures beyond furniture, homeware and Swedish food IKEA owns and operates the MEGA Family Shopping Centre chain in Russia. On 8 August 2008, IKEA UK launched a virtual mobile phone network called IKEA Family Mobile, which ran on T-Mobile. At launch it was the cheapest pay-as-you-go network in the UK. In June 2015 the network announced that its services would cease to operate from 31 August 2015. IKEA is in a joint venture with TCL to provide the Uppleva integrated HDTV and entertainment system product. In mid-August 2012, the company announced that it would establish a chain of 100 economy hotels in Europe but, unlike its few existing hotels in Scandinavia, they would not carry the IKEA name, nor would they use IKEA furniture and furnishings – they would be operated by an unnamed international group of hoteliers. As of 30 April 2018, however, the company owned only a single hotel, the IKEA Hotell in Älmhult, Sweden, but was planning to open another one, in New Haven, Connecticut, United States, after converting the historic Pirelli Building. The company received approval for the concept from the city's planning commission in mid-November 2018; the building was to include 165 rooms and the property would offer 129 dedicated parking spaces.
Research in April 2019 provided no indication that the hotel had been completed as of that time. In September 2017, IKEA announced it would acquire San Francisco-based TaskRabbit. The deal, completed later that year, left TaskRabbit operating as an independent company. In March 2020, IKEA announced that it had partnered with Pizza Hut Hong Kong on a joint venture. IKEA launched a new side table called SÄVA. The table, designed to resemble a pizza saver, would be boxed in packaging resembling a pizza box, and the building instructions included a suggestion to order a Swedish meatball pizza from Pizza Hut, which would contain the same meatballs served in IKEA restaurants. In April 2020, IKEA acquired AI imaging startup Geomagical Labs. In July 2020, IKEA opened a concept store in the Harajuku district of Tokyo, Japan, where it launched its first ever apparel line. In September 2017, the Apple-only IKEA Place iPhone app was launched, allowing customers to use augmented reality to envision true-to-scale furniture in their living spaces by placing it there virtually. Ingka Centres, IKEA's malls division, announced in December 2021 that it would open two malls, anchored by IKEA stores, in Gurugram and Noida in India at a cost of around . Both malls are expected to open by 2025. Corporate structure IKEA is owned and operated by a complicated array of not-for-profit and for-profit corporations. The corporate structure is divided into two main parts: operations and franchising. Inter IKEA Systems is owned by Inter IKEA Holding BV, a company registered in the Netherlands, formerly registered in Luxembourg (under the name Inter IKEA Holding SA). Inter IKEA Holding, in turn, is owned by the Interogo Foundation, based in Liechtenstein. In 2016, INGKA Holding sold its design, manufacturing and logistics subsidiaries to Inter IKEA Holding.
In June 2013, Ingvar Kamprad resigned from the board of Inter IKEA Holding SA, and his youngest son, Mathias Kamprad, replaced Per Ludvigsson as the chairman of the holding company. Following his decision to step down, the 87-year-old founder explained, "I see this as a good time for me to leave the board of Inter IKEA Group. By that we are also taking another step in the generation shift that has been ongoing for some years." After the 2016 company restructure, Inter IKEA Holding SA no longer exists, having reincorporated in the Netherlands. Mathias Kamprad became a board member of the Inter IKEA Group and the Interogo Foundation. Mathias and his two older brothers, who also have leadership roles at IKEA, work on the corporation's overall vision and long-term strategy. Control by Kamprad Along with helping IKEA make a non-taxable profit, IKEA's complicated corporate structure allowed Kamprad to maintain tight control over the operations of INGKA Holding, and thus the operation of most IKEA stores. The INGKA Foundation's five-person executive committee was chaired by Kamprad. It appoints the board of INGKA Holding, approves any changes to INGKA Holding's bylaws, and has the right to preempt new share issues. If a member of the executive committee quits or dies, the other four members appoint his or her replacement. In Kamprad's absence, the foundation's bylaws include specific provisions requiring it to continue operating the INGKA Holding group and specifying that shares can be sold only to another foundation with the same objectives as the INGKA Foundation. Financial information The net profit of IKEA Group (which does not include Inter IKEA Systems) in fiscal year 2009 (after paying franchise fees to Inter IKEA Systems) was €2.538 billion on sales of €21.846 billion. Because INGKA Holding is owned by the non-profit INGKA Foundation, none of this profit is taxed.
The foundation's nonprofit status also means that the Kamprad family cannot reap these profits directly, but the Kamprads do collect a portion of IKEA sales profits through the franchising relationship between INGKA Holding and Inter IKEA Systems. Inter IKEA Systems collected €631 million in franchise fees in 2004 but reported pre-tax profits of only €225 million that year. One of the major pre-tax expenses that Inter IKEA Systems reported was €590 million of "other operating charges". IKEA has refused to explain these charges, but Inter IKEA Systems appears to make large payments to I.I. Holding, another Luxembourg-registered group that, according to The Economist, "is almost certain to be controlled by the Kamprad family." I.I. Holding made a profit of €328 million in 2004. In 2004, the Inter IKEA group of companies and I.I. Holding reported combined profits of €553m and paid €19m in taxes, or approximately 3.5 percent. Public Eye (formerly known as Erklärung von Bern, literally The Berne Declaration), a non-profit organisation in Switzerland that promotes corporate responsibility, has formally criticised IKEA for its tax avoidance strategies. In 2007, the organisation nominated IKEA for one of its Public Eye "awards", which highlight corporate irresponsibility and are announced during the World Economic Forum in Davos, Switzerland.
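As a quick sanity check, the effective tax rate implied by the 2004 figures above can be computed directly. This is a minimal sketch using only the numbers reported in this section:

```python
# Figures reported above for 2004: combined pre-tax profits of €553 million
# across the Inter IKEA group of companies and I.I. Holding, and €19 million
# paid in taxes.
combined_profit_eur_m = 553
taxes_paid_eur_m = 19

effective_rate = taxes_paid_eur_m / combined_profit_eur_m
print(f"Effective tax rate: {effective_rate:.1%}")  # prints "Effective tax rate: 3.4%"
```

The exact quotient is about 3.44 percent, which the sources round to the "approximately 3.5 percent" cited above.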
Manufacturing Although IKEA household products and furniture are designed in Sweden, they are largely manufactured in developing countries to keep costs down. For most of its products, the final assembly is performed by the end-user (consumer). Swedwood, an IKEA subsidiary, handles production of all of the company's wood-based products, with the largest Swedwood factory located in southern Poland. According to the subsidiary, over 16,000 employees across 50 sites in 10 countries manufacture the 100 million pieces of furniture that IKEA sells annually. IKEA furniture uses the hardwood alternative particle board, for which Hultsfred, a factory in southern Sweden, is the company's sole supplier. Logistics Distribution center efficiency and flexibility have been among IKEA's ongoing priorities, and the company has therefore implemented automated, robotic warehouse systems and warehouse management systems (WMS). Such systems facilitate a merger of the traditional retail and mail order sales channels into an omni-channel fulfillment model. In 2020, IKEA was noted by Supply Chain magazine as having one of the most automated warehouse systems in the world. 2021 Supply Chain issues Due to the COVID-19 pandemic, IKEA has been facing major supply chain issues since 2021, which could extend into 2022. Jon Abrahamsson, the chief executive of Inter IKEA, has stated that the main issue is shipping products from China, as a "quarter" of IKEA products are made there.
Labour practices During the 1980s, IKEA kept its costs down by using production facilities in East Germany. A portion of the workforce at those factories consisted of political prisoners. This fact, revealed in a report by Ernst & Young commissioned by the company, resulted from the intermingling of criminals and political dissidents in the state-owned production facilities IKEA contracted with, a practice that was generally known in West Germany. IKEA was one of a number of companies, including West German firms, that benefited from this practice. The investigation resulted from attempts by former political prisoners to obtain compensation. In November 2012, IKEA admitted being aware at the time of the possibility that forced labor was being used, and of failing to exercise sufficient control to identify and avoid it. A summary of the Ernst & Young report was released on 16 November 2012. IKEA was named one of the 100 Best Companies for Working Mothers in 2004 and 2005 by Working Mother magazine. It ranked 80th in Fortune's 200 Best Companies to Work For in 2006, and in October 2008, IKEA Canada LP was named one of "Canada's Top 100 Employers" by Mediacorp Canada Inc. Environmental initiatives Umbrella initiatives After initial environmental issues like the highly publicized formaldehyde scandals in the early 1980s and 1992, IKEA took a proactive stance on environmental issues and tried to prevent future incidents through a variety of measures. In 1990, IKEA invited Karl-Henrik Robèrt, founder of the Natural Step, to address its board of directors. Robèrt's system conditions for sustainability provided a strategic approach to improving the company's environmental performance. That year, IKEA adopted the Natural Step framework as the basis for its environmental plan, which led to the development of an Environmental Action Plan, adopted in 1992.
The plan focused on structural change, allowing IKEA to "maximize the impact of resources invested and reduce the energy necessary to address isolated issues." The environmental measures taken include the following: replacing polyvinyl chloride (PVC) in wallpapers, home textiles, shower curtains, lampshades and furniture (PVC has been eliminated from packaging and is being phased out in electric cables); minimizing the use of formaldehyde in its products, including textiles; eliminating acid-curing lacquers; producing a model of chair (OGLA) made from 100% post-consumer plastic waste; introducing a series of air-inflatable furniture products into the product line, which reduce the use of raw materials for framing and stuffing and cut transportation weight and volume to about 15% of that of conventional furniture; reducing the use of chromium for metal surface treatment; limiting the use of substances such as cadmium, lead, PCB, PCP, and azo pigments; using wood from responsibly managed forests that replant and maintain biological diversity; using only recyclable materials for flat packaging and "pure" (non-mixed) materials for packaging to assist in recycling; and introducing rental bicycles with trailers for customers in Denmark. In 2000, IKEA introduced its code of conduct for suppliers, covering social, safety, and environmental questions. Today IKEA has around 60 auditors who perform hundreds of supplier audits every year, mainly to make sure that IKEA suppliers follow the law in each country where they are based. Most IKEA suppliers comply with the law today, with exceptions for some specific issues, one being excessive working hours in Asia, in countries such as China and India. IKEA has signed on with 25 other companies to participate in the British Retail Consortium's Better Retail Better World initiative, which challenges companies to meet objectives outlined by the United Nations Sustainable Development Goals.
Product life cycle To make IKEA a more sustainable company, a product life cycle was created. At the idea stage, products should be flat-packed so that more items can be shipped at once, and they should also be easy to dismantle and recycle. At the raw-materials stage, since wood and cotton are two of IKEA's most important inputs, the company works with environmentally friendly forests and cotton suppliers, avoiding the excessive use of chemicals and water. IKEA stores recycle waste, and many run on renewable energy. All employees are trained in environmental and social responsibility, and access to public transit is one of the priorities when store locations are considered. Also, the coffee and chocolate served at IKEA stores are UTZ Certified. The last stage of the life cycle is the end of life: most IKEA stores recycle light bulbs and drained batteries, and the company is also exploring the recycling of sofas and other home furnishing products. Energy sources On 17 February 2011, IKEA announced its plans to develop a wind farm in Dalarna County, Sweden, furthering its goal of using only renewable energy to fuel its operations. Seventeen IKEA stores in the United States are powered by solar panels, with 22 additional installations in progress, and IKEA owns the 165 MW Cameron Wind farm in Cameron County on the South Texas coast and a 42 MW coastal wind farm in Finland. In September 2019, IKEA announced that it would invest $2.8 billion in renewable energy infrastructure. The company aims to make its entire supply chain climate positive by 2030. Sourcing of wood According to IKEA's 2012 "Sustainability Report", 23% of all wood that the company uses meets the standards of the Forest Stewardship Council, and the report states that IKEA aims to double this percentage by 2017. The report also states that IKEA does not accept illegally logged wood and supports 13 World Wide Fund for Nature (WWF) projects.
IKEA owns about 136,000 acres of forest in the United States and about 450,000 acres in Europe. The IKEA sustainability strategy, People & Planet Positive, launched in 2012 with ambitious goals to transform the IKEA business, the industries in the IKEA value chain, and life at home for people across the world. On 14 January 2021, IKEA announced that Ingka Investments had acquired approximately 10,840 acres (4,386 hectares) near the Altamaha River Basin in Georgia from The Conservation Fund. The acquisition comes with an agreement "to protect the land from fragmentation, restore the longleaf pine forest, and safe-guard the habitat of the gopher tortoise." Use of wood In 2011, the company examined its wood consumption and noticed that almost half of its global pine and spruce consumption went into the fabrication of pallets. The company consequently started a transition to the use of paper pallets and the "OptiLedge system". The OptiLedge product is totally recyclable, made from 100% virgin high-impact copolymer polypropylene (PP). The system is a "unit load alternative to the use of a pallet. The system consists of the OptiLedge (usually used in pairs), aligned and strapped to the bottom carton to form a base layer upon which to stack more products. Corner boards are used when strapping to minimize the potential for package compression." The conversion began in Germany and Japan, before its introduction into the rest of Europe and North America. The system has been marketed to other companies, and IKEA has formed the OptiLedge company to manage and sell the product. Packaging and bags Since March 2013, IKEA has stopped providing plastic bags to customers, but offers reusable bags for sale. The IKEA restaurants also offer only reusable plates, knives, forks, spoons, etc. Toilets in some IKEA restrooms have been outfitted with dual-function flushers. IKEA has recycling bins for compact fluorescent lamps (CFLs), energy-saving bulbs, and batteries.
In 2001, IKEA was one of the first companies to operate its own cross-border goods trains through several countries in Europe. Electric vehicles IKEA has expanded its sustainability plan in the UK to include electric car charge points for customers at all locations by the end of 2013. The effort, involving Nissan and Ecotricity, promises to deliver an 80% charge in 30 minutes. Since 2016, IKEA has sold only energy-efficient LED lightbulbs, lamps and light fixtures. LED lightbulbs use as little as 15% of the power of a regular incandescent light bulb. Investments In August 2008, IKEA announced that it had created IKEA GreenTech, a €50 million venture capital fund. Located in Lund (a university town in Sweden), it will invest in 8–10 companies over the coming five years, with a focus on solar panels, alternative light sources, product materials, energy efficiency, and water saving and purification. The aim is to commercialise green technologies for sale in IKEA stores within 3–4 years. Donations made by IKEA The INGKA Foundation is officially dedicated to promoting "innovations in architecture and interior design." The net worth of the foundation exceeded the net worth of the much better known Bill & Melinda Gates Foundation (now the largest private foundation in the world) for a period. However, most of the Group's profit is spent on investment.
masses below 191 decay by some combination of β+ decay, α decay, and (rare) proton emission, with the exception of 189Ir, which decays by electron capture. Synthetic isotopes heavier than 191 decay by β− decay, although 192Ir also has a minor electron capture decay path. All known isotopes of iridium were discovered between 1934 and 2008, with the most recent discoveries being 200–202Ir. At least 32 metastable isomers have been characterized, ranging in mass number from 164 to 197. The most stable of these is 192m2Ir, which decays by isomeric transition with a half-life of 241 years, making it more stable than any of iridium's synthetic isotopes in their ground states. The least stable isomer is 190m3Ir, with a half-life of only 2 μs. The isotope 191Ir was the first of any element shown to exhibit the Mössbauer effect. This renders it useful for Mössbauer spectroscopy for research in physics, chemistry, biochemistry, metallurgy, and mineralogy. History Platinum group The discovery of iridium is intertwined with that of platinum and the other metals of the platinum group. Native platinum used by ancient Ethiopians and by South American cultures always contained a small amount of the other platinum group metals, including iridium. Platinum reached Europe as platina ("little silver"), found in the 17th century by the Spanish conquerors in a region today known as the department of Chocó in Colombia. The discovery that this metal was not an alloy of known elements, but instead a distinct new element, did not occur until 1748. Discovery Chemists who studied platinum dissolved it in aqua regia (a mixture of hydrochloric and nitric acids) to create soluble salts. They always observed a small amount of a dark, insoluble residue. Joseph Louis Proust thought that the residue was graphite.
The French chemists Victor Collet-Descotils, Antoine François, comte de Fourcroy, and Louis Nicolas Vauquelin also observed the black residue in 1803, but did not obtain enough for further experiments. That same year, British scientist Smithson Tennant (1761–1815) analyzed the insoluble residue and concluded that it must contain a new metal. Vauquelin treated the powder alternately with alkali and acids and obtained a volatile new oxide, which he believed to be of this new metal; he named it ptene, from the Greek word ptēnós, "winged". Tennant, who had the advantage of a much greater amount of residue, continued his research and identified the two previously undiscovered elements in the black residue, iridium and osmium. He obtained dark red crystals (probably of Na2[IrCl6]·nH2O) by a sequence of reactions with sodium hydroxide and hydrochloric acid. He named iridium after Iris, the Greek winged goddess of the rainbow and the messenger of the Olympian gods, because many of the salts he obtained were strongly colored. Discovery of the new elements was documented in a letter to the Royal Society on June 21, 1804. Metalworking and applications British scientist John George Children was the first to melt a sample of iridium, in 1813, with the aid of "the greatest galvanic battery that has ever been constructed" (at that time). The first to obtain high-purity iridium was Robert Hare, in 1842. He found it had a density of around and noted the metal is nearly immalleable and very hard. The first melting in appreciable quantity was done by Henri Sainte-Claire Deville and Jules Henri Debray in 1860. They required burning more than of pure and gas for each of iridium. These extreme difficulties in melting the metal limited the possibilities for handling iridium. John Isaac Hawkins was looking to obtain a fine and hard point for fountain pen nibs, and in 1834 managed to create an iridium-pointed gold pen.
In 1880, John Holland and William Lofland Dudley were able to melt iridium by adding phosphorus, and patented the process in the United States; the British company Johnson Matthey later stated they had been using a similar process since 1837 and had already presented fused iridium at a number of World Fairs. The first use of an alloy of iridium with ruthenium in thermocouples was made by Otto Feussner in 1933. These allowed for the measurement of high temperatures in air up to . In Munich, Germany, in 1957, Rudolf Mössbauer, in what has been called one of the "landmark experiments in twentieth-century physics", discovered the resonant and recoil-free emission and absorption of gamma rays by atoms in a solid metal sample containing only 191Ir. This phenomenon, known as the Mössbauer effect (which has since been observed for other nuclei, such as 57Fe) and developed as Mössbauer spectroscopy, has made important contributions to research in physics, chemistry, biochemistry, metallurgy, and mineralogy. Mössbauer received the Nobel Prize in Physics in 1961, at the age of 32, just three years after he published his discovery. In 1986, Rudolf Mössbauer was honored for his achievements with the Albert Einstein Medal and the Elliot Cresson Medal. Occurrence Iridium is one of the nine least abundant stable elements in Earth's crust, having an average mass fraction of 0.001 ppm in crustal rock; platinum is 10 times more abundant, gold is 40 times more abundant, and silver and mercury are 80 times more abundant. Tellurium is about as abundant as iridium. In contrast to its low abundance in crustal rock, iridium is relatively common in meteorites, with concentrations of 0.5 ppm or more. The overall concentration of iridium on Earth is thought to be much higher than what is observed in crustal rocks, but because of the density and siderophilic ("iron-loving") character of iridium, it descended below the crust and into Earth's core when the planet was still molten.
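The relative abundances quoted above can be turned into rough absolute crustal concentrations. The following is a minimal sketch using only the figures in the text (0.001 ppm iridium, expressed here as 1 ppb so the arithmetic stays in exact integers):

```python
# Average crustal mass fraction of iridium, from the text: 0.001 ppm = 1 ppb.
iridium_ppb = 1

# Relative abundances quoted above, as multiples of iridium's abundance.
relative_abundance = {"platinum": 10, "gold": 40, "silver": 80, "mercury": 80}

for element, factor in relative_abundance.items():
    print(f"{element}: ~{iridium_ppb * factor} ppb in crustal rock")
```

So the text's multiples imply roughly 10 ppb platinum, 40 ppb gold, and 80 ppb each of silver and mercury in crustal rock.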
Iridium is found in nature as an uncombined element or in natural alloys, especially the iridium–osmium alloys osmiridium (osmium-rich) and iridosmium (iridium-rich). In nickel and copper deposits, the platinum group metals occur as sulfides, tellurides (e.g. PtBiTe), antimonides (e.g. PdSb), and arsenides. In all of these compounds, a small amount of the platinum is replaced by iridium and osmium. As with all of the platinum group metals, iridium can be found naturally in alloys with raw nickel or raw copper. A number of iridium-dominant minerals, with iridium as the species-forming element, are known. They are exceedingly rare and often represent the iridium analogues of the minerals given above. Examples include irarsite and cuproiridsite. Within Earth's crust, iridium is found at highest concentrations in three types of geologic structure: igneous deposits (crustal intrusions from below), impact craters, and deposits reworked from one of the former structures. The largest known primary reserves are in the Bushveld igneous complex in South Africa (near the largest known impact crater, the Vredefort crater), though the large copper–nickel deposits near Norilsk in Russia and the Sudbury Basin (also an impact crater) in Canada are also significant sources of iridium. Smaller reserves are found in the United States. Iridium is also found in secondary deposits, combined with platinum and other platinum group metals in alluvial deposits. The alluvial deposits used by pre-Columbian people in the Chocó Department of Colombia are still a source for platinum-group metals. As of 2003, world reserves had not been estimated. Marine Oceanography Iridium is found within marine organisms, sediments, and the water column. In organisms, iridium is present at less than 20 parts per trillion on average. This is most likely due to the "weaker ability of Ir to form stable chloro-metal complexes in seawater".
This is more than five orders of magnitude less than the concentrations found in the Cretaceous/Tertiary (K-T) boundary sediments, which preserve remnants of the biosphere of the Cretaceous-Paleogene time. Iridium is found in the water column in low concentrations (100 times less than platinum). These low concentrations, together with iridium's weaker complexing with halides, make its dissolved species more prone to hydrolysis. Temperature, anoxia or hypoxia, and pressure, along with geologic and biologic processes, can affect the ratios of iridium in the water column and in sediment composition. Iridium can be used to determine the origin of sediment components, such as extraterrestrial deposits, volcanic activity, seawater deposition, microbial processing, and hydrothermal vent exhalations. Most of these sources contribute iridium only in extremely small quantities, so more substantial findings lead scientists to infer a sub-tectonic or extraterrestrial origin. Iridium is oxidized in some marine minerals, and its tendency to mineralize in ferromanganese deposits, at concentrations approaching the "seawater ratio", enhances their value as a heavy-metal ore. The concentration of iridium relative to lead or gold in these sediments indicates whether they derive from terrestrial weathering, sub-tectonic activity, or a cosmic source. For example, volcanic exhalation yields higher ratios of lead and gold with unchanged iridium, whereas high gold, lead, and platinum with low iridium is characteristic of hydrothermal exhalation. One notable origin of iridium in marine sediments is extraterrestrial matter, which makes iridium a useful tracer because it is more sensitive and less volatile than other cosmic elements.
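The tracer idea above can be made quantitative with a simple two-component mixing model. The sketch below (Python) uses the crustal (0.001 ppm) and meteoritic (0.5 ppm) iridium concentrations given earlier in this article; the "observed" sediment value is a hypothetical number chosen for illustration, not a measurement.

```python
# Two-component mixing: a sediment is modeled as a blend of ordinary
# crustal material and meteoritic material. Solving
#   c_obs = f * c_met + (1 - f) * c_crust
# for f gives the meteoritic mass fraction implied by an Ir anomaly.
c_crust = 0.001  # ppm Ir, average crustal rock (from the text)
c_met = 0.5      # ppm Ir, typical meteorite (from the text)
c_obs = 0.006    # ppm Ir, hypothetical enriched boundary clay

f = (c_obs - c_crust) / (c_met - c_crust)
print(f"implied meteoritic mass fraction = {f:.2%}")
```

Under these assumed numbers, roughly one percent admixed meteoritic material would account for a sixfold iridium enrichment over crustal background.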
Iridium has been used as a baseline indicator for quantifying the deposition of extraterrestrial matter, such as asteroids and meteoroids, that passes through Earth's atmosphere and settles in sediments. Iridium can be linked to some of the major global extinctions by tracing its origin through isotope ratios with other elements such as ruthenium or osmium. Sediment layers associated with mass extinctions, such as the K-T boundary sediments, show iridium spikes that resemble the concentrations found in meteorites. The low-temperature geochemistry of iridium is not well understood and could affect these quantities to a degree. However, scientists have concluded that such changes would not be significant enough to discount the highest concentrations, though they may make the less substantial spikes less conclusive evidence of extraterrestrial impacts. Cretaceous–Paleogene boundary presence The Cretaceous–Paleogene boundary of 66 million years ago, marking the temporal border between the Cretaceous and Paleogene periods of geological time, was identified by a thin stratum of iridium-rich clay. A team led by Luis Alvarez proposed in 1980 an extraterrestrial origin for this iridium, attributing it to an asteroid or comet impact. Their theory, known as the Alvarez hypothesis, is now widely accepted as the explanation for the extinction of the non-avian dinosaurs. A large buried impact crater with an estimated age of about 66 million years was later identified under what is now the Yucatán Peninsula (the Chicxulub crater). Dewey M. McLean and others argue that the iridium may instead have been of volcanic origin, because Earth's core is rich in iridium and active volcanoes such as Piton de la Fournaise, on the island of Réunion, still release iridium. Production In 2018, worldwide production of iridium totaled 7300 kg.
In mid April 2021, iridium reached a price of US$6,400 per troy ounce on Metals Daily (a precious metals commodity listing). Iridium is also obtained commercially as a by-product from nickel and copper mining and processing. During electrorefining of copper and nickel, noble metals such as silver, gold and the platinum group metals as well as selenium and tellurium settle to the bottom of the cell as anode mud, which forms the starting point for their extraction. To separate the metals, they must first be brought into solution. Several separation methods are available depending on the nature of the mixture; two representative methods are fusion with sodium peroxide followed by dissolution in aqua regia, and dissolution in a mixture of chlorine with hydrochloric acid. After the mixture is dissolved, iridium is separated from the other platinum group metals by precipitating ammonium hexachloroiridate () or by extracting with organic amines. The first method is similar to the procedure Tennant and Wollaston used for their separation. The second method can be planned as continuous liquid–liquid extraction and is therefore more suitable for industrial scale production. In either case, the product is reduced using hydrogen, yielding the metal as a powder or sponge that can be treated using powder metallurgy techniques. Iridium prices have fluctuated over a considerable range. With a relatively small volume in the world market (compared to other industrial metals like aluminium or copper), the iridium price reacts strongly to instabilities in production, demand, speculation, hoarding, and politics in the producing countries. As a substance with rare properties, its price has been particularly influenced by changes in modern technology: The gradual decrease between 2001 and 2003 has been related to an oversupply of Ir crucibles used for industrial growth of large single crystals. 
Likewise, the elevated prices between 2010 and 2014 have been explained by the installation of production facilities for single-crystal sapphire used in LED backlights for TVs. Applications The demand for iridium surged from in 2009 to in 2010, mostly because of electronics-related applications; iridium crucibles are commonly used for growing large high-quality single crystals, demand for which has increased sharply. This increase in iridium consumption is predicted to saturate due to accumulating stocks of crucibles, as happened earlier in the 2000s. Other major applications include spark plugs that consumed of iridium in 2007, electrodes for the chloralkali process ( in 2007), and chemical catalysts ( in 2007). Industrial and medical The high melting point, hardness, and corrosion resistance of iridium and its alloys determine most of its applications. Iridium, and especially its alloys with platinum or osmium, have low wear and are used, for example, for multi-pored spinnerets, through which a molten plastic polymer is extruded to form fibers such as rayon. Osmium–iridium is used for compass bearings and for balances. Their resistance to arc erosion makes iridium alloys ideal for electrical contacts for spark plugs, and iridium-based spark plugs are particularly used in aviation. Pure iridium is extremely brittle, to the point of being hard to weld because the heat-affected zone cracks, but it can be made more ductile by the addition of small quantities of titanium and zirconium (0.2% of each apparently works well). Resistance to heat and corrosion makes iridium an important alloying agent. Certain long-life aircraft engine parts are made of an iridium alloy, and an iridium–titanium alloy is used for deep-water pipes because of its corrosion resistance. Iridium is also used as a hardening agent in platinum alloys.
The Vickers hardness of pure platinum is 56 HV, whereas platinum with 50% of iridium can reach over 500 HV.
Devices that must withstand extremely high temperatures are often made from iridium. For example, high-temperature crucibles made of iridium are used in the Czochralski process to produce oxide single-crystals (such as sapphires) for use in computer memory devices and in solid state lasers. The crystals, such as gadolinium gallium garnet and yttrium gallium garnet, are grown by melting pre-sintered charges of mixed oxides under oxidizing conditions at temperatures up to . Iridium compounds are used as catalysts in the Cativa process for carbonylation of methanol to produce acetic acid. Iridium is a good catalyst for the decomposition of hydrazine (into hot nitrogen and ammonia), and this is used in practice in low-thrust rocket engines; there are more details in the monopropellant rocket article. The radioisotope iridium-192 is one of the two most important sources of energy for use in industrial γ-radiography for non-destructive testing of metals. Additionally, 192Ir is used as a source of gamma radiation for the treatment of cancer using brachytherapy, a form of radiotherapy where a sealed radioactive source is placed inside or next to the area requiring treatment. Specific treatments include high-dose-rate prostate brachytherapy, biliary duct brachytherapy, and intracavitary cervix brachytherapy. The use of iridium(III) complexes for imaging of mitochondria has been reviewed. When iridium(III) is attached to albumin, a photosensitized molecule that can penetrate cancer cells is created. This molecule can be used in a process known as photodynamic therapy to destroy cancer cells. Scientific An alloy of 90% platinum and 10% iridium was used in 1889 to construct the International Prototype Metre and kilogram mass, kept by the International Bureau of Weights and Measures near Paris. 
The metre bar was replaced as the definition of the fundamental unit of length in 1960 by a line in the atomic spectrum of krypton, but the kilogram prototype remained the international standard of mass until 20 May 2019, when the kilogram was redefined in terms of the Planck constant. Iridium is often used as a coating for non-conductive materials in preparation for observation in scanning electron microscopes (SEM). The addition of a layer of iridium helps organic materials in particular survive electron beam damage, and reduces static charge build-up within the target area of the SEM beam's focal point. A coating of iridium also increases the signal-to-noise ratio associated with secondary electron emission, which is essential to using SEMs for X-ray spectrographic composition analysis. While other metals can be used for coating objects for SEM use, iridium is the preferred coating when samples will be studied with a wide variety of imaging parameters. Iridium has been used in the radioisotope thermoelectric generators of unmanned spacecraft such as the Voyager, Viking, Pioneer, Cassini, Galileo, and New Horizons. Iridium was chosen to encapsulate the plutonium-238 fuel in the generator because it can withstand the operating temperatures of up to and for its great strength. Another use concerns X-ray optics, especially X-ray telescopes. The mirrors of the Chandra X-ray Observatory are coated with a layer of iridium thick. Iridium proved to be the best choice for reflecting X-rays after nickel, gold, and platinum were also tested. The iridium layer, which had to be smooth to within a few atoms, was applied by depositing iridium vapor under high vacuum on a base layer of chromium. Iridium is used in particle physics for the production of antiprotons, a form of antimatter. Antiprotons are made by shooting a high-intensity proton beam at a conversion target, which needs to be made from a very high density material.
Although tungsten may be used instead, iridium has the advantage of better stability under the shock waves induced by the temperature rise due to the incident beam. Carbon–hydrogen bond activation (C–H activation) is an area of research on reactions that cleave carbon–hydrogen bonds, which were traditionally regarded as unreactive. The first reported successes at activating C–H bonds in saturated hydrocarbons, published |
IOC commonly refers to the International Olympic Committee. IOC may also refer to: Computing: IBM Open Class, an IBM C++ product; Indicator of compromise (IoC), an artifact likely indicating a computer intrusion; Inversion of control (IoC), a software design pattern. Other: Icon of Coil, a Norwegian electronic music band; Immediate or cancel, a type of order used on some stock exchanges; Index of coincidence, in cryptography, the technique of counting the number of times that identical letters appear; Indian Ocean Commission, generally COI, but according to the World Bank also IOC; Indian Oil Corporation, a large Indian oil and gas company; Indian Orthodox Church, Kerala, India; International Olive Council; Initial operating capability, a minimum level of deployment, especially in the US.
recent change in 2005, there are 107 segmental letters, an indefinitely large number of suprasegmental letters, 44 diacritics (not counting composites) and four extra-lexical prosodic marks in the IPA. Most of these are shown in the current IPA chart, posted below in this article and at the website of the IPA. History In 1886, a group of French and British language teachers, led by the French linguist Paul Passy, formed what would be known from 1897 onwards as the International Phonetic Association (in French, ). Their original alphabet was based on a spelling reform for English known as the Romic alphabet, but to make it usable for other languages the values of the symbols were allowed to vary from language to language. For example, the sound (the sh in shoe) was originally represented with the letter in English, but with the digraph in French. In 1888, the alphabet was revised so as to be uniform across languages, thus providing the base for all future revisions. The idea of making the IPA was first suggested by Otto Jespersen in a letter to Paul Passy. It was developed by Alexander John Ellis, Henry Sweet, Daniel Jones, and Passy. Since its creation, the IPA has undergone a number of revisions. After revisions and expansions from the 1890s to the 1940s, the IPA remained primarily unchanged until the Kiel Convention in 1989. A minor revision took place in 1993 with the addition of four letters for mid central vowels and the removal of letters for voiceless implosives. The alphabet was last revised in May 2005 with the addition of a letter for a labiodental flap. Apart from the addition and removal of symbols, changes to the IPA have consisted largely of renaming symbols and categories and in modifying typefaces. Extensions to the International Phonetic Alphabet for speech pathology (extIPA) were created in 1990 and were officially adopted by the International Clinical Phonetics and Linguistics Association in 1994. 
Description The general principle of the IPA is to provide one letter for each distinctive sound (speech segment). This means that: It does not normally use combinations of letters to represent single sounds, the way English does with , and , or single letters to represent multiple sounds, the way represents or in English. There are no letters that have context-dependent sound values, the way and in several European languages have a "hard" or "soft" pronunciation. The IPA does not usually have separate letters for two sounds if no known language makes a distinction between them, a property known as "selectiveness". However, if a large number of phonemically distinct letters can be derived with a diacritic, that may be used instead. The alphabet is designed for transcribing sounds (phones), not phonemes, though it is used for phonemic transcription as well. A few letters that did not indicate specific sounds have been retired (, once used for the "compound" tone of Swedish and Norwegian, and , once used for the moraic nasal of Japanese), though one remains: , used for the sj-sound of Swedish. When the IPA is used for phonemic transcription, the letter–sound correspondence can be rather loose. For example, and are used in the IPA Handbook for and . Among the symbols of the IPA, 107 letters represent consonants and vowels, 31 diacritics are used to modify these, and 17 additional signs indicate suprasegmental qualities such as length, tone, stress, and intonation. These are organized into a chart; the chart displayed here is the official chart as posted at the website of the IPA. Letter forms The letters chosen for the IPA are meant to harmonize with the Latin alphabet. For this reason, most letters are either Latin or Greek, or modifications thereof. Some letters are neither: for example, the letter denoting the glottal stop, , originally had the form of a dotless question mark, and derives from an apostrophe. 
A few letters, such as that of the voiced pharyngeal fricative, , were inspired by other writing systems (in this case, the Arabic letter ⟨⟩, , via the reversed apostrophe). Some letter forms derive from existing letters: The right-swinging tail, as in , marks retroflex articulation. It derives from the hook of an r. The top hook, as in , marks implosion. Several nasal consonants are based on the form : . and derive from ligatures of gn and ng, and is an ad hoc imitation of . Letters turned 180 degrees, such as (from ), when either the original letter (e.g., ) or the turned one (e.g., ) is reminiscent of the target sound. This was easily done in the era of mechanical typesetting, and had the advantage of not requiring the casting of special type for IPA symbols, much as the same type had often been used for b and q, d and p, n and u, 6 and 9 to reduce costs. The small capital letters are more guttural than their base letters. is an exception. Typography and iconicity The International Phonetic Alphabet is based on the Latin alphabet, using as few non-Latin forms as possible. The Association created the IPA so that the sound values of most consonant letters taken from the Latin alphabet would correspond to "international usage" (approximately Classical Latin). Hence, the letters , , , (hard) , (non-silent) , (unaspirated) , , , , (unaspirated) , (voiceless) , (unaspirated) , , , and have the values used in English; and the vowel letters from the Latin alphabet (, , , , ) correspond to the (long) sound values of Latin: is like the vowel in machine, is as in rule, etc. Other letters may differ from English, but are used with these values in other European languages, such as , , and . This inventory was extended by using small-capital and cursive forms, diacritics and rotation. There are also several symbols derived or taken from the Greek alphabet, though the sound values may differ. For example, is a vowel in Greek, but an only indirectly related consonant in the IPA.
For most of these, subtly different glyph shapes have been devised for the IPA, namely , , , , , , and , which are encoded in Unicode separately from their parent Greek letters, though one of them – – is not, while both Latin , and Greek , are in common use. The sound values of modified Latin letters can often be derived from those of the original letters. For example, letters with a rightward-facing hook at the bottom represent retroflex consonants; and small capital letters usually represent uvular consonants. Apart from the fact that certain kinds of modification to the shape of a letter generally correspond to certain kinds of modification to the sound represented, there is no way to deduce the sound represented by a symbol from its shape (as for example in Visible Speech) nor even any systematic relation between signs and the sounds they represent (as in Hangul). Beyond the letters themselves, there are a variety of secondary symbols which aid in transcription. Diacritic marks can be combined with IPA letters to transcribe modified phonetic values or secondary articulations. There are also special symbols for suprasegmental features such as stress and tone that are often employed. Brackets and transcription delimiters There are two principal types of brackets used to set off (delimit) IPA transcriptions: Other conventions are less commonly seen: All three of the above are provided by the IPA Handbook. The following are not, but may be seen in IPA transcription or in associated material (especially angle brackets): Cursive forms IPA letters have cursive forms designed for use in manuscripts and when taking field notes, but the 1999 Handbook of the International Phonetic Association recommended against their use, as cursive IPA is "harder for most people to decipher." Braille representation Several Braille adaptations of the IPA have seen use, the most recent published in 2008 and widely accepted since 2011.
It does not have complete support for tone. Letter g In the early stages of the alphabet, the typographic variants of g, opentail () and looptail (), represented different values, but are now regarded as equivalents. Opentail has always represented a voiced velar plosive, while was distinguished from and represented a voiced velar fricative from 1895 to 1900. Subsequently, represented the fricative, until 1931 when it was replaced again by . In 1948, the Council of the Association recognized and as typographic equivalents, a decision reaffirmed in 1993. Braille IPA does not make the distinction. Modifying the IPA chart The International Phonetic Alphabet is occasionally modified by the Association. After each modification, the Association provides an updated simplified presentation of the alphabet in the form of a chart. (See History of the IPA.) Not all aspects of the alphabet can be accommodated in a chart of the size published by the IPA. The alveolo-palatal and epiglottal consonants, for example, are not included in the consonant chart for reasons of space rather than of theory (two additional columns would be required, one between the retroflex and palatal columns and the other between the pharyngeal and glottal columns), and the lateral flap would require an additional row for that single consonant, so they are listed instead under the catchall block of "other symbols". The indefinitely large number of tone letters would make a full accounting impractical even on a larger page, and only a few examples are shown, and even the tone diacritics are not complete; the reversed tone letters are not illustrated at all. The procedure for modifying the alphabet or the chart is to propose the change in the Journal of the IPA. (See, for example, August 2008 on an open central unrounded vowel and August 2011 on central approximants.) Reactions to the proposal may be published in the same or subsequent issues of the Journal (as in August 2009 on the open central vowel). 
A formal proposal is then put to the Council of the IPA – which is elected by the membership – for further discussion and a formal vote. Nonetheless, many users of the alphabet, including the leadership of the Association itself, deviate from this norm. The Journal of the IPA finds it acceptable to mix IPA and extIPA symbols in consonant charts in their articles. (For instance, including the extIPA letter , rather than , in an illustration of the IPA.) Usage Of more than 160 IPA symbols, relatively few will be used to transcribe speech in any one language, with various levels of precision. A precise phonetic transcription, in which sounds are specified in detail, is known as a narrow transcription. A coarser transcription with less detail is called a broad transcription. Both are relative terms, and both are generally enclosed in square brackets. Broad phonetic transcriptions may restrict themselves to easily heard details, or only to details that are relevant to the discussion at hand, and may differ little if at all from phonemic transcriptions, but they make no theoretical claim that all the distinctions transcribed are necessarily meaningful in the language. For example, the English word little may be transcribed broadly as , approximately describing many pronunciations. A narrower transcription may focus on individual or dialectal details: in General American, in Cockney, or in Southern US English. Phonemic transcriptions, which express the conceptual counterparts of spoken sounds, are usually enclosed in slashes (/ /) and tend to use simpler letters with few diacritics. The choice of IPA letters may reflect theoretical claims of how speakers conceptualize sounds as phonemes or they may be merely a convenience for typesetting. Phonemic approximations between slashes do not have absolute sound values. 
For instance, in English, either the vowel of pick or the vowel of peak may be transcribed as , so that pick, peak would be transcribed as or as ; and neither is identical to the vowel of the French which would also be transcribed . By contrast, a narrow phonetic transcription of pick, peak, pique could be: , , . Linguists IPA is popular for transcription by linguists. Some American linguists, however, use a mix of IPA with Americanist phonetic notation or use some nonstandard symbols for various reasons. Authors who employ such nonstandard use are encouraged to include a chart or other explanation of their choices, which is good practice in general, as linguists differ in their understanding of the exact meaning of IPA symbols and common conventions change over time. Dictionaries English Many British dictionaries, including the Oxford English Dictionary and some learner's dictionaries such as the Oxford Advanced Learner's Dictionary and the Cambridge Advanced Learner's Dictionary, now use the International Phonetic Alphabet to represent the pronunciation of words. However, most American (and some British) volumes use one of a variety of pronunciation respelling systems, intended to be more comfortable for readers of English. For example, the respelling systems in many American dictionaries (such as Merriam-Webster) use for IPA and for IPA , reflecting common representations of those sounds in written English, using only letters of the English Roman alphabet and variations of them. (In IPA, represents the sound of the French (as in ), and represents the pair of sounds in grasshopper.) Other languages The IPA is also not universal among dictionaries in languages other than English. Monolingual dictionaries of languages with phonemic orthographies generally do not bother with indicating the pronunciation of most words, and tend to use respelling systems for words with unexpected pronunciations. 
Dictionaries produced in Israel use the IPA rarely and sometimes use the Hebrew alphabet for transcription of foreign words. Bilingual dictionaries that translate from foreign languages into Russian usually employ the IPA, but monolingual Russian dictionaries occasionally use pronunciation respelling for foreign words. The IPA is more common in bilingual dictionaries, but there are exceptions here too. Mass-market bilingual Czech dictionaries, for instance, tend to use the IPA only for sounds not found in Czech. Standard orthographies and case variants IPA letters have been incorporated into the alphabets of various languages, notably via the Africa Alphabet in many sub-Saharan languages such as Hausa, Fula, Akan, Gbe languages, Manding languages, Lingala, etc. This has created the need for capital variants. For example, Kabiyè of northern Togo has Ɖ ɖ, Ŋ ŋ, Ɣ ɣ, Ɔ ɔ, Ɛ ɛ, Ʋ ʋ. These, and others, are supported by Unicode, but appear in Latin ranges other than the IPA extensions. In the IPA itself, however, only lower-case letters are used. The 1949 edition of the IPA handbook indicated that an asterisk may be prefixed to indicate that a word is a proper name, but this convention was not included in the 1999 Handbook, which notes instead extIPA use of the asterisk as a placeholder for a sound that does not have a symbol. Classical singing The IPA has widespread use among classical singers during preparation as they are frequently required to sing in a variety of foreign languages. They are also taught by vocal coaches to perfect diction and improve tone quality and tuning. Opera librettos are authoritatively transcribed in IPA, such as Nico Castel's volumes and Timothy Cheek's book Singing in Czech. 
Opera singers' ability to read IPA was used by the site Visual Thesaurus, which employed several opera singers "to make recordings for the 150,000 words and phrases in VT's …". If a doubly articulated stop is , then a prenasalized doubly articulated stop would be . If a diacritic needs to be placed on or under a tie bar, the combining grapheme joiner (U+034F) needs to be used, as in 'chewed' (Margi). Font support is spotty, however. Vowels The IPA defines a vowel as a sound which occurs at a syllable center. Below is a chart depicting the vowels of the IPA. The IPA maps the vowels according to the position of the tongue. The vertical axis of the chart is mapped by vowel height. Vowels pronounced with the tongue lowered are at the bottom, and vowels pronounced with the tongue raised are at the top. For example, (the first vowel in father) is at the bottom because the tongue is lowered in this position. (the vowel in "meet") is at the top because the sound is said with the tongue raised to the roof of the mouth. In a similar fashion, the horizontal axis of the chart is determined by vowel backness. Vowels with the tongue moved towards the front of the mouth (such as , the vowel in "met") are to the left in the chart, while those in which it is moved to the back (such as , the vowel in "but") are placed to the right in the chart. In places where vowels are paired, the right represents a rounded vowel (in which the lips are rounded) while the left is its unrounded counterpart. Diphthongs Diphthongs are typically specified with a non-syllabic diacritic, as in or , or with a superscript for the on- or off-glide, as in or . Sometimes a tie bar is used: , especially if it is difficult to tell if the diphthong is characterized by an on-glide, an off-glide or is variable. Notes officially represents a front vowel, but there is little if any distinction between front and central open vowels (see ), and is frequently used for an open central vowel. 
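The chart organization described above (height on the vertical axis, backness on the horizontal, the rounded member of each pair on the right) can be sketched as a small lookup table. The symbols below are the standard IPA cardinal vowels; this is a partial illustrative sample, not the complete chart.

```python
# A partial sketch of the IPA vowel space: (height, backness) maps to an
# (unrounded, rounded) pair, mirroring the chart's left/right convention.
VOWELS = {
    ("close", "front"):     ("i", "y"),
    ("close", "back"):      ("ɯ", "u"),
    ("close-mid", "front"): ("e", "ø"),
    ("close-mid", "back"):  ("ɤ", "o"),
    ("open-mid", "front"):  ("ɛ", "œ"),
    ("open-mid", "back"):   ("ʌ", "ɔ"),
    ("open", "front"):      ("a", "ɶ"),
    ("open", "back"):       ("ɑ", "ɒ"),
}

def vowel(height, backness, rounded=False):
    """Look up the IPA letter for a vowel by articulatory position."""
    pair = VOWELS[(height, backness)]
    return pair[1] if rounded else pair[0]

print(vowel("close", "front"))               # i, the vowel of "meet"
print(vowel("close", "back", rounded=True))  # u
```

The (unrounded, rounded) pairing makes the chart's left/right convention explicit: asking for the rounded counterpart of any position simply selects the second member of the pair.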
If disambiguation is required, the retraction diacritic or the centralized diacritic may be added to indicate an open central vowel, as in or . Diacritics and prosodic notation Diacritics are used for phonetic detail. They are added to IPA letters to indicate a modification or specification of that letter's normal pronunciation. By being made superscript, any IPA letter may function as a diacritic, conferring elements of its articulation to the base letter. Those superscript letters listed below are specifically provided for by the IPA Handbook; other uses can be illustrated with ( with fricative release), ( with affricate onset), (prenasalized ), ( with breathy voice), (glottalized ), ( with a flavor of ), ( with diphthongization), (compressed ). Superscript diacritics placed after a letter are ambiguous between simultaneous modification of the sound and phonetic detail at the end of the sound. For example, labialized may mean either simultaneous and or else with a labialized release. Superscript diacritics placed before a letter, on the other hand, normally indicate a modification of the onset of the sound ( glottalized , with a glottal onset). (See .) Notes With aspirated voiced consonants, the aspiration is usually also voiced (voiced aspirated – but see voiced consonants with voiceless aspiration). Many linguists prefer one of the diacritics dedicated to breathy voice over simple aspiration, such as . Some linguists restrict that diacritic to sonorants, such as breathy-voice , and transcribe voiced-aspirated obstruents as e.g. . Care must be taken that a superscript retraction sign is not mistaken for mid tone. These are relative to the cardinal value of the letter. They can also apply to unrounded vowels: is more spread (less rounded) than cardinal , and is less spread than cardinal . Since can mean that the is labialized (rounded) throughout its articulation, and makes no sense ( is already completely unrounded), can only mean a less-labialized/rounded . 
However, readers might mistake for "" with a labialized off-glide, or might wonder if the two diacritics cancel each other out. Placing the 'less rounded' diacritic under the labialization diacritic, , makes it clear that it is the labialization that is 'less rounded' than its cardinal IPA value. Subdiacritics (diacritics normally placed below a letter) may be moved above a letter to avoid conflict with a descender, as in voiceless . The raising and lowering diacritics have optional spacing forms , that avoid descenders. The state of the glottis can be finely transcribed with diacritics. A series of alveolar plosives ranging from open-glottis to closed-glottis phonation is: Additional diacritics are provided by the Extensions to the IPA for speech pathology. Suprasegmentals These symbols describe the features of a language above the level of individual consonants and vowels, that is, at the level of syllable, word or phrase. These include prosody, pitch, length, stress, intensity, tone and gemination of the sounds of a language, as well as the rhythm and intonation of speech. Various ligatures of pitch/tone letters and diacritics are provided for by the Kiel convention and used in the IPA Handbook despite not being found in the summary of the IPA alphabet found on the one-page chart. Under capital letters below we will see how a carrier letter may be used to indicate suprasegmental features such as labialization or nasalization. Some authors omit the carrier letter, for e.g. suffixed or prefixed , or place a spacing diacritic such as at the beginning of a word to indicate that the quality applies to the entire word. Stress Officially, the stress marks appear before the stressed syllable, and thus mark the syllable boundary as well as stress (though the syllable boundary may still be explicitly marked with a period). Occasionally the stress mark is placed immediately before the nucleus of the syllable, after any consonantal onset. 
In such transcriptions, the stress mark does not mark a syllable boundary. The primary stress mark may be doubled for extra stress (such as prosodic stress). The secondary stress mark is sometimes seen doubled for extra-weak stress, but this convention has not been adopted by the IPA. Some dictionaries place both stress marks before a syllable, , to indicate that pronunciations with either primary or secondary stress are heard, though this is not IPA usage. Boundary markers There are three boundary markers: for a syllable break, for a minor prosodic break and for a major prosodic break. The tags 'minor' and 'major' are intentionally ambiguous. Depending on need, 'minor' may vary from a foot break to a break in list-intonation to a continuing–prosodic-unit boundary (equivalent to a comma), and while 'major' is often any intonation break, it may be restricted to a final–prosodic-unit boundary (equivalent to a period). The 'major' symbol may also be doubled, , for a stronger break. Although not part of the IPA, the following additional boundary markers are often used in conjunction with the IPA: for a mora or mora boundary, for a syllable or syllable boundary, for a morpheme boundary, for a word boundary (may be doubled, , for e.g. a breath-group boundary), for a phrase or intermediate boundary and for a prosodic boundary. For example, C# is a word-final consonant, %V a post-pausa vowel, and T% an IU-final tone (edge tone). Pitch and tone The Handbook defines marks for upstep and downstep, concepts from tonal languages. However, the 'upstep' could also be used for pitch reset, and the IPA Handbook illustration for Portuguese uses it for prosody in a non-tonal language. Phonetic pitch and phonemic tone may be indicated by either diacritics placed over the nucleus of the syllable (e.g. high-pitch ) or by Chao tone letters placed either before or after the word or syllable. 
There are three graphic variants of the tone letters: with or without a stave, and facing left or facing right from the stave. The stave was introduced with the 1989 Kiel Convention, as was the option of placing a staved letter after the word or syllable, while retaining the older conventions. There are therefore six ways to transcribe pitch/tone in the IPA: i.e. , , , , and for a high pitch/tone. Of the tone letters, only left-facing staved letters and a few representative combinations are shown in the summary on the Chart, and in practice it is currently more common for tone letters to occur after the syllable/word than before, as in the Chao tradition. Placement before the word is a carry-over from the pre-Kiel IPA convention, as is still the case for the stress and upstep/downstep marks. The IPA endorses the Chao tradition of using the left-facing tone letters, , for underlying tone, and the right-facing letters, , for surface tone, as occurs in tone sandhi, and for the intonation of non-tonal languages. In the Portuguese illustration in the 1999 Handbook, tone letters are placed before a word or syllable to indicate prosodic pitch (equivalent to global rise and global fall, but allowing more precision), and in the Cantonese illustration they are placed after a word/syllable to indicate lexical tone. Theoretically therefore prosodic pitch and lexical tone could be simultaneously transcribed in a single text, though this is not a formalized distinction. Rising and falling pitch, as in contour tones, are indicated by combining the pitch diacritics and letters in the table, such as grave plus acute for rising and acute plus grave for falling . Only six combinations of two diacritics are supported, and only across three levels (high, mid, low), despite the diacritics supporting five levels of pitch in isolation. The four other explicitly approved rising and falling diacritic combinations are high/mid rising , low rising , high falling , and low/mid falling . 
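The staved Chao tone letters discussed above occupy a contiguous run of standard Unicode codepoints, U+02E5 through U+02E9, running from extra-high down to extra-low. A quick check with Python's unicodedata module:

```python
import unicodedata

# The five staved Chao tone letters form a contiguous Unicode run,
# U+02E5..U+02E9, ordered from extra-high tone down to extra-low tone.
tone_letters = [chr(cp) for cp in range(0x02E5, 0x02EA)]

for ch in tone_letters:
    print(f"U+{ord(ch):04X}  {unicodedata.name(ch)}")
```

This prints the five official character names, from MODIFIER LETTER EXTRA-HIGH TONE BAR down to MODIFIER LETTER EXTRA-LOW TONE BAR, confirming the high-to-low ordering of the run.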
The Chao tone letters, on the other hand, may be combined in any pattern, and are therefore used for more complex contours and finer distinctions than the diacritics allow, such as mid-rising , extra-high falling , etc. There are 20 such possibilities. However, in Chao's original proposal, which was adopted by the IPA in 1989, he stipulated that the half-high and half-low letters may be combined with each other, but not with the other three tone letters, so as not to create spuriously precise distinctions. With this restriction, there are 8 possibilities. The old staveless tone letters tend to be more restricted than the staved letters, though not as restricted as the diacritics. Officially, they support as many distinctions as the staved letters, but typically only three pitch levels are distinguished. Unicode supports default or high-pitch and low-pitch . Only a few mid-pitch tones are supported (such as ), and then only accidentally. Although tone diacritics and tone letters are presented as equivalent on the chart, "this was done only to simplify the layout of the chart. The two sets of symbols are not comparable in this way." Using diacritics, a high tone is and a low tone is ; in tone letters, these are and . One can double the diacritics for extra-high and extra-low ; there is no parallel to this using tone letters. Instead, tone letters have mid-high and mid-low ; again, there is no equivalent among the diacritics. The correspondence breaks down even further once they start combining. For more complex tones, one may combine three or four tone diacritics in any permutation, though in practice only generic peaking (rising-falling) and dipping (falling-rising) combinations are used. Chao tone letters are required for finer detail (, etc.). Although only 10 peaking and dipping tones were proposed in Chao's original, limited set of tone letters, phoneticians often make finer distinctions, and indeed an example is found on the IPA Chart. 
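The two counts given above — 20 unrestricted two-letter contours, and 8 under Chao's original restriction that the half-high and half-low letters combine only with each other — can be verified by enumerating ordered pairs of the five tone levels:

```python
from itertools import permutations

# Five tone levels: 5 = extra-high .. 1 = extra-low.
LEVELS = [5, 4, 3, 2, 1]

# Unrestricted two-letter contours: any ordered pair of distinct levels.
unrestricted = list(permutations(LEVELS, 2))
print(len(unrestricted))  # 20

# Chao's restriction: the half-high (4) and half-low (2) letters may
# combine with each other, but not with the other three levels.
def allowed(a, b):
    halves = {4, 2}
    return (a in halves) == (b in halves)

restricted = [(a, b) for a, b in unrestricted if allowed(a, b)]
print(len(restricted))  # 8
```

The unrestricted count is simply 5 × 4 ordered pairs; under the restriction, the three full levels pair among themselves (6 contours) and the two half levels pair with each other (2 contours), giving 8.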
The system allows the transcription of 112 peaking and dipping pitch contours, including tones that are level for part of their length. More complex contours are possible. Chao gave an example of (mid-high-low-mid) from English prosody. Chao tone letters generally appear after each syllable, for a language with syllable tone (), or after the phonological word, for a language with word tone (). The IPA gives the option of placing the tone letters before the word or syllable (, ), but this is rare for lexical tone. (And indeed reversed tone letters may be used to clarify that they apply to the following rather than to the preceding syllable: , .) The staveless letters are not directly supported by Unicode, but some fonts allow the stave in Chao tone letters to be suppressed. Comparative degree IPA diacritics may be doubled to indicate an extra degree of the feature indicated. This is a productive process, but apart from extra-high and extra-low tones being marked by doubled high- and low-tone diacritics, and the major prosodic break being marked as a double minor break , it is not specifically regulated by the IPA. (Note that transcription marks are similar: double slashes indicate extra (morpho)-phonemic, double square brackets especially precise, and double parentheses especially unintelligible.) For example, the stress mark may be doubled to indicate an extra degree of stress, such as prosodic stress in English. An example in French, with a single stress mark for normal prosodic stress at the end of each prosodic unit (marked as a minor prosodic break), and a double stress mark for contrastive/emphatic stress: . Similarly, a doubled secondary stress mark is commonly used for tertiary (extra-light) stress. In a similar vein, the effectively obsolete (though still official) staveless tone letters were once doubled for an emphatic rising intonation and an emphatic falling intonation . Length is commonly extended by repeating the length mark, as in English shhh! 
, or for "overlong" segments in Estonian: vere 'blood [gen.sg.]', veere 'edge [gen.sg.]', veere 'roll [imp. 2nd sg.]' lina 'sheet', linna 'town [gen. sg.]', linna 'town [ine. sg.]' (Normally additional degrees of length are handled by the extra-short or half-long diacritic, but the first two words in each of the Estonian examples are analyzed as simply short and long, requiring a different remedy for the final words.) Occasionally other diacritics are doubled: Rhoticity in Badaga "mouth", "bangle", and "crop". Mild and strong aspirations, , . Nasalization, as in Palantla Chinantec lightly nasalized vs heavily nasalized , though in extIPA the latter indicates velopharyngeal frication. Weak vs strong ejectives, , . Especially lowered, e.g. (or , if the former symbol does not display properly) for as a weak fricative in some pronunciations of register. Especially retracted, e.g. or , though some care might be needed to distinguish this from indications of alveolar or alveolarized articulation in extIPA, e.g. . The transcription of strident and harsh voice as extra-creaky may be motivated by the similarities of these phonations. Ambiguous characters A number of IPA characters are not consistently used for their official values. A distinction between voiced fricatives and approximants is only partially implemented, for example. Even with the relatively recent addition of the palatal fricative and the velar approximant to the alphabet, other letters, though defined as fricatives, are often ambiguous between fricative and approximant. For forward places, and can generally be assumed to be fricatives unless they carry a lowering diacritic. Rearward, however, and are perhaps more commonly intended to be approximants even without a lowering diacritic. and are similarly either fricatives or approximants, depending on the language, or even glottal "transitions", without that often being specified in the transcription. Another common ambiguity is among the palatal consonants. 
and are not uncommonly used as a typographic convenience for affricates, typically and , while and are commonly used for palatalized alveolar and . To some extent this may be an effect of analysis, but it is common for people to match up available letters to the sounds of a language, without overly worrying whether they are phonetically accurate. It has been argued that the lower-pharyngeal (epiglottal) fricatives and are better characterized as trills, rather than as fricatives that have incidental trilling. This has the advantage of merging the upper-pharyngeal fricatives together with the epiglottal plosive and trills into a single pharyngeal column in the consonant chart. However, in Shilha Berber the epiglottal fricatives are not trilled. Although they might be transcribed to indicate this, the far more common transcription is , which is therefore ambiguous between languages. Among vowels, is officially a front vowel, but is more commonly treated as a central vowel. The difference, to the extent it is even possible, is not phonemic in any language. Three letters are not needed, but are retained due to inertia and would be hard to justify today by the standards of the modern IPA. appears because it is found in English; officially it is a fricative, with terminology dating to the days before 'fricative' and 'approximant' were distinguished. Based on how all other fricatives and approximants are transcribed, one would expect either for a fricative (not how it is actually used) or for an approximant. Indeed, outside of English transcription, that is what is more commonly found in the literature. is another historic remnant. Although a common allophone of [m] in particular, it is only phonemically distinct in a single language (Kukuya), a fact that was discovered after it was standardized in the IPA. 
A number of consonants without dedicated IPA letters are found in many more languages than that; is retained because of its historical use for European languages, where it could easily be normalized to . There have been several votes to retire from the IPA, but so far they have failed. Finally, is officially a simultaneous postalveolar and velar fricative, a realization that does not appear to exist in any language. It is retained because it is convenient for the transcription of Swedish, where it is used for a consonant that has various realizations in different dialects. That is, it is not actually a phonetic character at all, but a phonemic one, which is officially beyond the purview of the IPA alphabet. For all phonetic notation, it is good practice for an author to specify exactly what they mean by the symbols that they use. Superscript IPA Superscript IPA letters may be used to indicate secondary articulation, releases and other transitions, shades of sound, epenthetic and incompletely articulated sounds. In 2020, the International Phonetic Association endorsed the encoding of superscript IPA letters in a proposal to the Unicode Consortium for broader coverage of the IPA alphabet. The proposal covered all IPA letters (apart from the tone letters) that were not yet supported, including the implicit retroflex letters , as well as the two length marks and old-style affricate ligatures. A separate request by the International Clinical Phonetics and Linguistics Association for an expansion of extIPA coverage endorsed superscript variants of all extIPA fricative letters, specifically for the fricative release of consonants. Unicode placed the new superscript ("modifier") letters in a new Latin Extended-F block. The Unicode characters for superscript (modifier) IPA and extIPA letters are as follows: The spacing diacritic for ejective consonants, U+02BC, works with superscript letters despite not being superscript itself: . 
If a distinction needs to be made, the combining apostrophe U+0315 may be used: . The spacing diacritic should be used for a baseline letter with a superscript release, such as or , where the scope of the apostrophe includes the non-superscript letter, but the combining apostrophe U+0315 might be used to indicate a weakly articulated ejective consonant, where the whole consonant is written as a superscript, or together with U+02BC when separate apostrophes have scope over the base and modifier letters, as in . In addition, the old alternative near-close vowel letters and are supported at U+1DA5 and U+107A4 . The para-IPA letter for a central reduced vowel, , is supported at U+1DA7 ; its rounded equivalent, , is not supported by Unicode. The precomposed rhotic vowel letters are not supported, as the rhotic diacritic should be used instead: ; similarly with other rhotic vowels. Superscript length marks can be used for indicating the length of aspiration of a consonant, e.g. . Another option is to double the diacritic: . Superscript letters can be meaningfully modified by combining diacritics, just as baseline letters are. For example, a superscript dental nasal is , a superscript voiceless velar nasal is , and labial-velar prenasalization is . Although the diacritic may seem a bit oversized compared to the superscript letter it modifies, as with the composite superscript c-cedilla and the rhotic vowels this can be an aid to legibility: . Spacing diacritics, however, as in , cannot be secondarily superscripted in plain text: . Superscript wildcards are partially supported: e.g. (prenasalized consonant), (prestopped nasal), (fricative release), (tone-bearing syllable), (glide/diphthong), and (liquid or lateral and rhotic or resonant release), (epenthetic plosive), (fleeting vowel). However, superscript S and Ʞ for sibilant release and fleeting/epenthetic click release are not supported as of Unicode 15. 
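The contrast drawn above between the spacing ejective apostrophe (U+02BC) and the combining apostrophe (U+0315) can be inspected programmatically. This minimal check with Python's unicodedata module shows that one is a spacing modifier letter and the other a true combining mark:

```python
import unicodedata

# The spacing ejective apostrophe vs. the combining mark used for
# weakly articulated ejectives, as discussed above.
spacing   = "\u02BC"  # MODIFIER LETTER APOSTROPHE (spacing, letter-like)
combining = "\u0315"  # COMBINING COMMA ABOVE RIGHT (attaches to the preceding letter)

print(unicodedata.name(spacing))
print(unicodedata.name(combining))

# A nonzero canonical combining class marks a combining character.
print(unicodedata.combining(combining) != 0)  # True
print(unicodedata.combining(spacing) != 0)    # False
```

This is why the two behave differently in transcription: U+0315 visually attaches to whatever letter precedes it, while U+02BC stands on the baseline as its own character and can follow a superscript letter without merging into it.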
Obsolete and nonstandard symbols A number of IPA letters and diacritics have been retired or replaced over the years. This number includes duplicate symbols, symbols that were replaced due to user preference, and unitary symbols that were rendered with diacritics or digraphs to reduce the inventory of the IPA. The rejected symbols are now considered obsolete, though some are still seen in the literature. The IPA once had several pairs of duplicate symbols from alternative proposals, but eventually settled on one or the other. An example is the vowel letter , rejected in favor of . Affricates were once transcribed with ligatures, such as (and others not found in Unicode). These have been officially retired but are still used. Letters for specific combinations of primary and secondary articulation have also been mostly retired, with the idea that such features should be indicated with tie bars or diacritics: for is one. In addition, the rare voiceless implosives, , were dropped soon after their introduction and are now usually written . The original set of click letters, , was retired but is still sometimes seen, as the current pipe letters can cause problems with legibility, especially when used with brackets ([ ] or / /), the letter , or the prosodic marks . (For this reason, some publications which use the current IPA pipe letters disallow IPA brackets.) Individual non-IPA letters may find their way into publications that otherwise use the standard IPA. This is especially common with: Affricates, such as the Americanist barred lambda for or for . The Karlgren letters for Chinese vowels, Digits for tonal phonemes that have conventional numbers in a local tradition, such as the four tones of Standard Chinese. This may be more convenient for comparison between related languages and dialects than a phonetic transcription would be, because tones vary more unpredictably than segmental phonemes do. 
Digits for tone levels, which are simpler to typeset, though the lack of standardization can cause confusion (e.g. is high tone in some languages but low tone in others; may be high, medium or low tone, depending on the local convention). Iconic extensions of standard IPA letters that can be readily understood, such as retroflex and . These are referred to in the Handbook and have been included in IPA requests for Unicode support. In addition, it is common to see ad hoc typewriter substitutions, generally capital letters, for when IPA support is not available, e.g. A for , B for or , D for , or , E for , F or P for , G , I , L , N , O , S , T or , U , V , X , Z , as well as @ for and 7 or ? for . (See also SAMPA and X-SAMPA substitute notation.) Extensions The Extensions to the International Phonetic Alphabet for Disordered Speech, commonly abbreviated "extIPA" and sometimes called "Extended IPA", are symbols whose original purpose was to accurately transcribe disordered speech. At the Kiel Convention in 1989, a group of linguists drew up the initial extensions, which were based on the previous work of the PRDS (Phonetic Representation of Disordered Speech) Group in the early 1980s. The extensions were first published in 1990, then modified, and published again in 1994 in the Journal of the International Phonetic Association, when they were officially adopted by the ICPLA. While the original purpose was to transcribe disordered speech, linguists have used the extensions to designate a number of sounds within standard communication, such as hushing, gnashing teeth, and smacking lips, as well as regular lexical sounds such as lateral fricatives that do not have standard IPA symbols. In addition to the Extensions to the IPA for disordered speech, there are the conventions of the Voice Quality Symbols, which include a number of symbols for additional airstream mechanisms and secondary articulations in what they call "voice quality". 
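The ad hoc typewriter substitutions mentioned above are systematized in the SAMPA and X-SAMPA conventions. A sketch of such a mapping follows; the selection of correspondences is a small illustrative subset drawn from X-SAMPA, not a complete standard:

```python
# A sketch of ASCII-for-IPA substitutions in the spirit of X-SAMPA.
# Only a small illustrative subset of the convention is shown here.
ASCII_TO_IPA = {
    "A": "ɑ",  # open back unrounded vowel
    "E": "ɛ",  # open-mid front unrounded vowel
    "I": "ɪ",  # near-close front vowel
    "N": "ŋ",  # velar nasal
    "S": "ʃ",  # voiceless postalveolar fricative
    "Z": "ʒ",  # voiced postalveolar fricative
    "T": "θ",  # voiceless dental fricative
    "D": "ð",  # voiced dental fricative
    "@": "ə",  # schwa
    "?": "ʔ",  # glottal stop
}

def asciify_to_ipa(text):
    """Replace ASCII substitutes character by character; pass others through."""
    return "".join(ASCII_TO_IPA.get(ch, ch) for ch in text)

print(asciify_to_ipa("DIs"))  # ðɪs ('this')
print(asciify_to_ipa("sIN"))  # sɪŋ ('sing')
```

A character-by-character substitution like this only works because the substitutes are chosen to be unambiguous single characters, which is exactly what makes such schemes convenient when IPA input is unavailable.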
Associated notation Capital letters and various characters on the number row of the keyboard are commonly used to extend the alphabet in various ways. Associated symbols There are various punctuation-like conventions for linguistic transcription that are commonly used together with IPA. Some of the more common are: (a) A reconstructed form. (b) An ungrammatical form (including an unphonemic form). (a) A reconstructed form, deeper (more ancient) than a single , used when reconstructing even further back from already-starred forms. (b) An ungrammatical form. A less common convention than (b), this is sometimes used when reconstructed and ungrammatical forms occur in the same text. An ungrammatical form. A less common convention than (b), this is sometimes used when reconstructed and ungrammatical forms occur in the same text. A doubtfully grammatical form. A generalized form, such as a typical shape of a wanderwort that has not actually been reconstructed. A word boundary – e.g. for a word-initial vowel. A phonological word boundary; e.g. for a high tone that occurs in such a position. Full capital letters are not used as IPA symbols, except as typewriter substitutes (e.g. N for , S for , O for – see SAMPA). They are, however, often used in conjunction with the IPA in two cases: for archiphonemes and for natural classes of sounds (that is, as wildcards), and as Voice Quality Symbols. The extIPA chart, for example, uses wildcards in its illustrations. Wildcards are commonly used in phonology to summarize syllable or word shapes, or to show the evolution of classes of sounds. For example, the possible syllable shapes of Mandarin can be abstracted as ranging from (an atonic vowel) to (a consonant-glide-vowel-nasal syllable with tone), and word-final devoicing may be schematized as → /_#. In speech pathology, capital letters represent indeterminate sounds, and may be superscripted to indicate they are weakly articulated: e.g.
is a weak indeterminate alveolar, a weak indeterminate velar. There is a degree of variation between authors as to the capital letters used, but for {consonant}, for {vowel} and for {nasal} are ubiquitous. Other common conventions are for {tone/accent} (tonicity), for {plosive}, for {fricative}, for {sibilant}, for {glide/semivowel}, for {lateral} or {liquid}, for {rhotic} or {resonant/sonorant}, for {obstruent}, for {click}, for {open, front, back, close, rounded vowel} and for {labial, alveolar, post-alveolar/palatal, velar, uvular, pharyngeal, glottal consonant}, respectively, and for any sound. The letters can be modified with IPA diacritics, for example for {ejective}, for {implosive}, or for {prenasalized consonant}, for {nasal vowel}, for {aspirated CV syllable with high tone}, for {voiced sibilant}, for {voiceless nasal}, or for {affricate}, for {palatalized consonant} and for {dental consonant}. , , are also commonly used for high, mid and low tone, with for falling tone (also , , occasionally ), for rising tone (also , , occasionally ), etc., rather than transcribing them overly precisely with IPA tone letters or with ambiguous digits. Typical examples of archiphonemic use of capital letters are for the Turkish harmonic vowel set }, for the conflated flapped middle consonant of American English writer and rider, and for the homorganic syllable-coda nasal of languages such as |
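The wildcard conventions above lend themselves to a trivial classifier. A minimal sketch, assuming the ubiquitous C {consonant}, V {vowel} and N {nasal} letters; the segment inventories below are toy assumptions for illustration, not a standard list:

```python
# Sketch: summarize a broadly transcribed word as a C/V/N wildcard template.
# The vowel and nasal inventories are illustrative toy assumptions only.
VOWELS = set("aeiou" "\u0259\u025b\u026a\u028a\u0254\u0251")
NASALS = set("mn\u014b")

def shape(word: str) -> str:
    """Return the wildcard template: V for vowels, N for nasals, else C."""
    return "".join(
        "V" if ch in VOWELS else "N" if ch in NASALS else "C"
        for ch in word
    )

print(shape("man"))        # prints "NVN"
print(shape("t\u025bst"))  # tɛst, prints "CVCC"
```

Note that some authors subsume nasals under C; distinguishing N here simply mirrors the convention named in the text.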
won a scholarship to study at St John's College, Oxford. He lost the scholarship as the result of poor academic performance stemming from a failed love affair, which is mentioned in the second episode of the third series, "The Last Enemy", and recounted in detail in the novel The Riddle of the Third Mile, Chapter 7. Further details are revealed piece-by-piece in the prequel series. He often reflects on such renowned scholars as A. E. Housman, who, like himself, failed to get an academic degree from Oxford. Career After university, he entered the army on National Service. This included serving in West Germany with the Royal Corps of Signals as a cipher clerk. Upon leaving, he joined the police at Carshall-Newtown, before being posted to Oxford with the Oxford City Police. He was awarded the George Medal in the last episode of Endeavour Series 4. Habits and personality Morse is ostensibly the embodiment of white, male, middle-class Englishness, with a set of prejudices and assumptions to match (even though, as the son of a taxi driver, his background was thoroughly working class). As a result, he may be considered a late example of the gentleman detective, a staple of British detective fiction. This is in sharp contrast to the working-class lifestyle of his assistant Lewis (named after another rival clue-writer, Mrs. B. Lewis); in the novels, Lewis is Welsh, but in the TV series this is altered to a Tyneside (Geordie) background, appropriately for the actor Kevin Whately. Morse is in his forties at the start of the books (Service of All the Dead, Chapter Six: "… a bachelor still, forty-seven years old …"), and Lewis slightly younger (e.g. The Secret of Annexe 3, Chapter Twenty-Six: "a slightly younger man – another policeman, and one also in plain clothes"). John Thaw was 45 at the beginning of shooting the TV series and Kevin Whately was 36.
Morse's relationships with authority, the establishment, bastions of power and the status quo, are markedly ambiguous, as are some of his relations with women. He is frequently portrayed as patronising female characters, and once stereotyped the female sex as not naturally prone to crime, being caring and non-violent, but also often empathises with women. He is not shy to show his liking for attractive women and often dates those involved in cases. Indeed, a woman he falls in love with sometimes turns out to be the culprit. Morse is highly intelligent. He is a crossword addict and dislikes grammatical and spelling errors; in every personal or private document that he receives, he manages to point out at least one mistake. He claims that his approach to crime-solving is deductive, and one of his key tenets is that "there is a 50 per cent chance that the person who finds the body is the murderer". Morse uses immense intuition and his fantastic memory to apprehend the perpetrator. Among Morse's conservative tastes are that he likes to drink real ale and whisky, and in the early novels, drives a Lancia. In the television and radio productions, this is altered to a suitably British classic Jaguar Mark 2. His favourite music is opera, which is echoed in the soundtracks to the television series, along with original music by Barrington Pheloung. Morse is portrayed as being an atheist. Novels The novels in the series are: Last Bus to Woodstock (1975) Last Seen Wearing (1976) The Silent World of Nicholas Quinn (1977) Service of All the Dead (1979) The Dead of Jericho (1981) The Riddle of the Third Mile (1983) The Secret of Annexe 3 (1986) The Wench is Dead (1989) The Jewel That Was Ours (1991) The Way Through the Woods (1992) The Daughters of Cain (1994) Death Is Now My Neighbour (1996) The Remorseful Day (1999) Inspector Morse also appears in several stories in Dexter's short story collection, Morse's Greatest Mystery and Other Stories (1993, expanded edition 1994). 
In other media Television The Inspector Morse novels were made into a TV series (also called Inspector Morse) for the British commercial TV network ITV. The series was made by Zenith Productions for Central (a company later acquired by Carlton) and comprises 33 two-hour episodes (100 minutes excluding commercials)—20 more episodes than there are novels—produced between 1987 and 2000. The last episode was adapted from the final novel The Remorseful Day, in which Morse dies. A spin-off series, similarly comprising 33 two-hour episodes and based on the television incarnation of Lewis, was titled Lewis; it first aired in 2006 and last showed in 2015. In August 2011, ITV announced plans to film a prequel drama called Endeavour, with author Colin Dexter's participation. English actor Shaun Evans was cast as a young Morse in his early career. The drama was broadcast on 2 January 2012 on ITV 1. Four new episodes were televised from 14 April 2013, showing Morse's early cases working for DI Fred Thursday and with Jim Strange, his later boss, and pathologist Max De Bryn. A second series of four episodes followed, screening in March and April 2014. In January 2016, the third series aired, also containing four episodes. A fourth series was aired, with four episodes, in January 2017. Filming of a fifth series of six episodes began in early 2017 with the first episode
One can speculate that when Ecgfrið's Northumbrians laid Ireland waste from Dublin to Drogheda in 684, they temporarily occupied Mann. Viking Age and Norse kingdom The period of Scandinavian domination is divided into two main epochs – before and after the conquest of Mann by Godred Crovan in 1079. Warfare and unsettled rule characterise the earlier epoch; the later saw comparatively more peace. Between about AD 800 and 815 the Vikings came to Mann chiefly for plunder. Between about 850 and 990, when they settled, the island fell under the rule of the Scandinavian Kings of Dublin, and between 990 and 1079 it became subject to the powerful Earls of Orkney. There was a mint producing coins on Mann between c. 1025 and c. 1065. These Manx coins were minted from an imported type 2 Hiberno-Norse penny die from Dublin. Hiberno-Norse coins were first minted under Sihtric, King of Dublin. This suggests that Mann may have been under the thumb of Dublin at this time. Little is known about the conqueror, Godred Crovan. According to the Chronicon Manniae he subdued Dublin and a great part of Leinster, and held the Scots in such subjection that supposedly no one who set out to build a vessel dared to insert more than three bolts. The memory of such a ruler would be likely to survive in tradition, and it seems probable therefore that he is the person commemorated in Manx legend under the name of King Gorse or Orry. In around 1079 he created the Kingdom of Mann and the Isles, which included the south-western islands of Scotland until 1164, when two separate kingdoms were formed from it. In 1154 a diocese, later known as the Diocese of Sodor and Man, was formed by the Catholic Church. The islands under his rule were called the Suðr-eyjar ("South isles", in contrast to the Norðr-eyjar, or "North isles", i.e. Orkney and Shetland), consisting of the Hebrides, all the smaller western islands of Scotland, and Mann. At a later date his successors took the title of King of Mann and of the Isles. The kingdom's capital was on St Patrick's Isle, where Peel Castle was built on the site of a Celtic monastery.
Olaf, Godred's son, exercised considerable power and, according to the Chronicle, maintained such close alliance with the kings of Ireland and Scotland that no one ventured to disturb the Isles during his time (1113–1152). In 1156 his son Godred (reigned 1153–1158), who for a short period also ruled over Dublin, lost the smaller islands off the coast of Argyll as a result of a quarrel with Somerled (the ruler of Argyll). An independent sovereignty thus appeared between the two divisions of his kingdom. In the 1130s the Catholic Church had sent a small mission to establish the first bishopric on the Isle of Man, and appointed Wimund as the first bishop. He soon afterwards embarked with a band of followers on a career of murder and looting throughout Scotland and the surrounding islands. During the whole of the Scandinavian period the Isles remained nominally under the suzerainty of the Kings of Norway, but the Norwegians only occasionally asserted it with any vigour. The first such king to assert control over the region was likely Magnus Barelegs, at the turn of the 12th century. It was not until Haakon Haakonsson's 1263 expedition that another king returned to the Isles. Decline of Norse rule From the middle of the 12th century until 1217 the suzerainty had remained of a very shadowy character; Norway had become a prey to civil dissensions. But after that date it became a reality, and Norway consequently came into collision with the growing power of the kingdom of Scotland. Early in the 13th century, when Ragnald (reigned 1187–1229) paid homage to King John of England (reigned 1199–1216), we hear for the first time of English intervention in the affairs of Mann. But a period of Scots domination would precede the establishment of full English control. Finally, in 1261, Alexander III of Scotland sent envoys to Norway to negotiate for the cession of the isles, but their efforts led to no result.
He therefore initiated a war, which ended in the indecisive Battle of Largs against the Norwegian fleet in 1263. However, the Norwegian king Haakon Haakonsson died the following winter, and this allowed King Alexander to bring the war to a successful conclusion. Magnus Olafsson, King of Mann and the Isles (reigned 1252–1265), who had campaigned on the Norwegian side, had to surrender all the islands over which he had ruled, except Mann, for which he did homage. Two years later Magnus died and in 1266 King Magnus VI of Norway ceded the islands, including Mann, to Scotland in the Treaty of Perth in consideration of the sum of 4,000 marks (known as in Scotland) and an annuity of 100 marks. But Scotland's rule over Mann did not become firmly established till 1275, when the Manx suffered defeat in the decisive Battle of Ronaldsway, near Castletown. English dominance In 1290 King Edward I of England sent Walter de Huntercombe to seize possession of Mann, and it remained in English hands until 1313, when Robert Bruce took it after besieging Castle Rushen for five weeks. In about 1333 King Edward III of England granted Mann to William de Montacute, 3rd Baron Montacute (later the 1st Earl of Salisbury), as his absolute possession, without reserving any service to be rendered to him. Then, in 1346, the Battle of Neville's Cross decided the long struggle between England and Scotland in England's favour. King David II of Scotland, Robert Bruce's last male heir, had been captured in the Battle of Neville's Cross and ransomed; however, when Scotland was unable to raise one of the ransom instalments, David made a secret agreement with King Edward III of England to cancel it, in return for transferring the Scottish kingdom to an English prince. There followed a confused period when Mann sometimes experienced English rule and sometimes Scottish.
In 1388 the island was "ravaged" by Sir William Douglas of Nithsdale on his way home from the destruction of the town of Carlingford. In 1392 William de Montacute's son sold the island, including sovereignty, to Sir William le Scrope. In 1399 Henry Bolingbroke brought about the beheading of Le Scrope, who had taken the side of Richard II when Bolingbroke usurped the throne as Henry IV. The island then came into the de facto possession of Henry, who granted it to Henry Percy, 1st Earl of Northumberland; but following the latter's later attainder, Henry IV, in 1405, made a lifetime grant of it, with the patronage of the bishopric, to Sir John Stanley. In 1406 this grant was extended – on a feudatory basis under the English Crown – to Sir John's heirs and assigns, the feudal fee being the service of rendering homage and two falcons to all future Kings of England on their coronations. Early Modern period With the accession of the Stanleys there begins a more settled epoch in Manx history. Though the island's new rulers rarely visited its shores, they placed it under governors, who, in the main, seem to have treated it with the justice of the time. Of the thirteen members of the family who ruled in Mann, the second Sir John Stanley (1414–1432), James, the 7th Earl (1627–1651), and the 10th Earl of the same name (1702–1736) had the most important influence on it. The first of these curbed the power of the spiritual barons, introduced trial by jury, which superseded trial by battle, and ordered the laws to be written. The second, known as the Great Stanley, and his wife, Charlotte de la Tremoille (or Tremouille), are probably the most striking figures in Manx history. Wars of the Three Kingdoms and Interregnum; 1642 to 1660 Shortly after the outbreak of the Wars of the Three Kingdoms, James Stanley, 7th Earl of Derby, returned to Mann in June 1643 to find the island on the brink of rebellion.
Among the causes were complaints at the level of tithes payable to the Church of England, and Derby's attempts to replace the Manx ‘tenure of straw’ by which many of his tenants held their lands, a customary tenure akin to freehold, with commercial leases. He managed to restore the situation through a series of meetings, but made minimal concessions. Six months after Charles I was executed on 30 January 1649, Derby received a summons from General Ireton to surrender the island, but declined to do so. In August 1651, he and 300 Manxmen landed in Lancashire to take part in the Third English Civil War; defeated at Wigan Lane on 25 August 1651, Derby escaped with only 30 troops to join Charles II. Captured after the Battle of Worcester in September, he was imprisoned in Chester Castle, tried by court-martial and executed at Bolton on 15 October. Soon after Stanley's death, the Manx Militia, under the command of William Christian (known by his Manx name of Illiam Dhone), rose against the Countess and captured all the insular forts except Rushen and Peel.
They were then joined by a Parliamentarian force sent from the mainland, led by Colonels Thomas Birch and Robert Duckenfield, to whom the Countess surrendered after a brief resistance. Oliver Cromwell had appointed Thomas Fairfax "Lord of Mann and the Isles" in September 1651, so that Mann continued under a monarchical government and remained in the same relation to England as before. 1660 Restoration The restoration of Stanley government in 1660 therefore caused as little friction and alteration as its temporary cessation had. One of the first acts of the new Lord, Charles Stanley, 8th Earl of Derby, was to order Christian to be tried. He was found guilty and executed. Of the other persons implicated in the rebellion only three were excepted from the general amnesty. But by Order in Council, Charles II pardoned them, and the judges responsible for the sentence on Christian were punished.
Charles Stanley's next act was to dispute the permanency of the tenants' holdings, which they had not at first regarded as being affected by the acceptance of leases, a proceeding which led to an almost open rebellion against his authority and to the neglect of agriculture, in lieu of which the people devoted themselves to the fisheries and to contraband trade. Charles Stanley, who died in 1672, was succeeded by his son William Richard George Stanley, 9th Earl of Derby, who held the lordship until his death in 1702. The agrarian question subsided only in 1704, when James Stanley, 10th Earl of Derby, William's brother and successor, largely through the influence of Bishop Wilson, entered into a compact with his tenants, which became embodied in an Act called the Act of Settlement.
rough seas and dense fog. In recent years there has been a marked increase in the frequency of high winds, heavy rains, summer droughts and flooding, both from heavy rain and from high seas. Snowfall has decreased significantly over the past century, while temperatures are increasing year-round and rainfall is decreasing. Air pollution, marine pollution and waste disposal are issues in the Isle of Man.
Protected sites for nature conservation
In order of importance, international first, non-statutory last. Note that ASSIs and MNRs have equal levels of statutory protection under the Wildlife Act 1990.
UNESCO Biosphere Reserves
The entire territory of the Isle of Man, including all land, sea, freshwater, airspace and seabed, is a UNESCO Biosphere Reserve.
Ramsar sites
Ballaugh Curraghs (2006). Shares an identical boundary to the Ballaugh Curraghs ASSI.
Important Bird Areas
The UK RSPB and UK JNCC have designated five areas of the Isle of Man which are of global significance to birdlife:
Isle of Man Sea Cliffs - 97 km of the east and west coasts
Calf of Man - 250 ha
The Ayres - c. 800 ha
Ballaugh Curraghs - 374 ha
Isle of Man Hills - 8,650 ha
National nature reserves
The Ayres (2000, 272 ha)
Areas of Special Scientific Importance
There are 22 ASSIs on the Isle of Man as of April 2021. One additional ASSI has been designated but later rescinded (Ramsey Harbour). Dates below refer to the year of formal confirmation; area is in hectares.
Ballachurry Meadows (2010, 11.9 ha)
Ballacrye Meadow (2005, 0.55 ha)
Ballateare Meadow (2014, 0.96 ha)
Ballaugh Curraghs (2005, 193.4 ha)
Central Ayres (1996, 259.66 ha, extended 2008 by 98.68 ha, total 358.35 ha)
Cronk y Bing (2006, 17.71 ha)
Cronk y King (2014, 3.02 ha)
Dalby Coast (2010, 62.1 ha)
Dhoon Glen (2007, 20.92 ha)
Eary Vane (2007, 3.96 ha)
Glen Maye (2008, 15.92 ha)
Glen Rushen (2007, 12.27 ha)
Greeba Mountain & Central Hills (2009, 1,080.95 ha)
Grenaby Garey (2021, 74.82 ha)
Jurby Airfield (2005, 63.04 ha)
Langness, Derbyhaven & Sandwick (2001, 310 ha)
Maughold Cliffs & Brooghs (2011, 53.63 ha)
Port St Mary Ledges & Kallow Point (2011, 14.79 ha)
Poyll Vaaish Coast (2007, 44.76 ha)
Ramsey Harbour (designated but later rescinded in 2010)
Ramsey Mooragh Shore (2006, 2.65 ha)
Rosehill Quarry, Billown (2006, 1.37 ha)
Santon Gorge & Port Soldrick (2012, 24.35 ha)
Marine nature reserves
A marine nature reserve was designated in Ramsey Bay in October 2011. In 2018 nine further Marine Nature Reserves were given statutory protection. The ten Marine Nature Reserves found around the Isle of Man cover over 10% of the country's territorial waters, in accordance with international requirements.
Ramsey Bay 2011
Baie ny Carrickey 2018
Calf and Wart Bank 2018
Douglas Bay 2018
Langness 2018
Laxey Bay 2018
Little Ness 2018
Niarbyl Bay 2018
Port Erin Bay 2018
West Coast 2018
Areas of Special Protection
Ayres Gravel Pit, designated 2001, 4 hectares. In 2019 this became a nature reserve managed by Manx BirdLife.
Bird sanctuaries
Bird Sanctuaries were formerly designated under the Wild Birds Protection Act 1932.
This designation was superseded by Areas of Special Protection for Birds under the Wildlife Act 1990; however, the following formerly designated Bird Sanctuaries remain protected:
'Barnell Reservoir (Patrick)' (1979) 0.02 km2
'Tynwald National Park and Arboretum' (1982)
'Langness, Derbyhaven, Langness and Fort Island and foreshores adjoining' (1936)
'Renscault and Ballachrink (West Baldwin)' (1978) 0.18 km2
'The Willows (Ballamodha, Malew)' (1984) 0.01 km2
Nature reserves and wildlife sites
The Isle of Man had 45 non-statutory wildlife sites as of 30 January 2009, covering about 195 ha of land and an additional stretch of inter-tidal coast. By August 2015 this had increased to 67 sites covering 1229.65 ha, in addition to 10.5 km of coastline.

The Isle of Man is an island in the Irish Sea, between Great Britain and Ireland in Northwestern Europe, with a population of almost 85,000. It is a British Crown dependency. It has a small islet, the Calf of Man, to its south.
Dimensions
Area: Land: Water: (100 ha) Total:
This makes it:
slightly more than three times the size of Washington, DC
slightly more than one third the size of Hertfordshire
slightly smaller than Saint Lucia
Coast and Territorial Sea
The Isle of Man has a coastline of , and a territorial sea extending to a maximum of 12 nm from the coast, or the midpoint between other countries. The total territorial sea area is about 4000 km2 or 1500 sq miles, which is about 87% of the total area of the jurisdiction of the Isle of Man. The Isle of Man only holds exclusive fishing rights in the first 3 nm. The territorial sea is managed by the Isle of Man Government Department of Infrastructure. The Raad ny Foillan long-distance footpath runs around the Manx coast.
Climate
The Isle of Man enjoys a temperate climate, with cool summers and mild winters.
Average rainfall is high compared to the majority of the British Isles, due to the island's location on the western side of Great Britain and at sufficient distance from Ireland for moisture to be accumulated by the prevailing south-westerly winds. Average rainfall is highest at Snaefell, where it is around a year. At lower levels it can fall to around a year. Temperatures remain fairly cool, with the recorded maximum being at Ronaldsway.
Terrain
The island's terrain is varied. There are two mountainous areas divided by a central valley which runs between Douglas and Peel. The highest point in the Isle of Man, Snaefell, is in the northern area and reaches above sea level. The northern end of the island is a flat plain, consisting of glacial tills and marine sediments. To the south the island is more hilly, with distinct valleys. There is no land below sea level.
Land use
Arable land: 43.86%
Permanent crops: 0%
Other: 56.14% (includes permanent pastures, forests, mountain and heathland) (2011)
Natural hazards and environmental issues
There are few severe natural hazards, the most common being high winds, rough seas and dense fog.
External links
2001 Manx Census, overview and details (archived from the original on 7 June 2011)
Legislation of the Isle of Man defines "the Crown in right of the Isle of Man" as separate from the "Crown in right of the United Kingdom". Her representative on the island is the Lieutenant Governor of the Isle of Man, but his role is mostly ceremonial, though he does have the power to grant Royal Assent (the withholding of which is the same as a veto). Although the Isle of Man is not part of the United Kingdom, its people are British citizens under UK law — there is no separate Manx citizenship. The United Kingdom is responsible for all the island's external affairs, including citizenship, defence, good governance, and foreign relations. The island has no representation in the UK parliament. The legislative power of the government is vested in a bicameral (sometimes called tricameral) parliament called Tynwald (said to be the world's oldest continuously existing parliament), which consists of the directly-elected House of Keys and the indirectly chosen Legislative Council. After every House of Keys general election, the members of Tynwald elect from amongst themselves the Chief Minister of the Isle of Man, who serves as the head of government for five years (until the next general election). Executive power is vested in the Lieutenant Governor (as Governor-in-Council), the Chief Minister, and the Isle of Man's Council of Ministers. The judiciary is independent of the executive and the legislature. Douglas, the largest town on the Isle of Man, is its capital and seat of government, where the Government offices and the parliament chambers (Tynwald) are located.
Executive branch
The Head of State is the Lord of Mann, which is a hereditary position held by the British monarch (currently Queen Elizabeth II). The Lieutenant Governor is appointed by the Queen, on the advice of the UK's Secretary of State for Justice, for a five-year term and nominally exercises executive power on behalf of the Queen.
The Chief Minister is elected by the House of Keys (formerly by Tynwald) following every House of Keys general election and serves for five years until the next general election. When acting as Lord of Mann, the Queen acts on the advice of the Secretary of State for Justice and Lord Chancellor of the United Kingdom, who has prime responsibility as Privy Counsellor for Manx affairs. The executive branch under the Chief Minister is referred to as "the Government" or the "Civil Service", and consists of the Council of Ministers, nine Departments, ten Statutory Boards and three Offices. Each Department is run by a Minister who reports directly to the Council of Ministers. The Civil Service has more than 2000 employees, and the total number of public sector employees including the Civil Service, teachers, nurses, police, etc. is about 9000 people. This is somewhat more than 10% of the population of the island, and a full 23% of the working population. This does not include any military forces, as defence is the responsibility of the United Kingdom.
Legislative branch
The Manx legislature is Tynwald, which consists of two chambers or "branches". The House of Keys has 24 members, elected for a five-year term in two-seat constituencies by the whole island. The minimum voting age is 16. The Legislative Council has eleven members: the President of Tynwald, the Bishop of Sodor and Man, the Attorney General (non-voting) and eight other members elected by the House of Keys for a five-year term, with four retiring at a time. (In the past they have often already been Members of the House of Keys, but must leave the Keys if elected to the Council.) There are also joint sittings of the Tynwald Court (the two branches together).

The United Kingdom may intervene in the affairs of the island under its residual responsibilities to guarantee "good government" in all Crown dependencies. The Monarch of the United Kingdom is also the head of state of the Isle of Man, and is generally referred to as "The Queen, Lord of Mann".
The Isle of Man Government Lottery operated from 1986 to 1997. Since 2 December 1999 the island has participated in the United Kingdom National Lottery. The island is the only jurisdiction outside the United Kingdom where it is possible to play the UK National Lottery. Since 2010 it has also been possible for projects in the Isle of Man to receive National Lottery Good Causes Funding, which is distributed by the Manx Lottery Trust. Tynwald receives the 12p lottery duty for tickets sold in the Island. The shortage of workers with ICT skills is being tackled by several initiatives, such as an IT and education campus, a new cyber security degree at the University College of Man, a Code Club, and a work permit waiver for skilled immigrants.
Filmmaking and Digital Media
Since 1995 Isle of Man Film has co-financed and co-produced over 100 feature film and television dramas, all of which were filmed on the Island. Among the most successful productions funded in part by the Isle of Man Film agency were Waking Ned, where the Manx countryside stood in for rural Ireland, and films such as Stormbreaker, Shergar, Tom Brown's Schooldays, I Capture the Castle, The Libertine, Island at War (TV series), Five Children and It, Colour Me Kubrick and Sparkle. Other films that have been filmed on the Isle of Man include Thomas and the Magic Railroad, Harry Potter and the Chamber of Secrets, Keeping Mum and Mindhorn. In 2011, Oxford Economics was commissioned by Isle of Man Film Ltd to conduct a study into the economic impact of the film industry on the Isle of Man. The report recommended that Isle of Man Film partner with a more established film institution in the UK to source more Isle of Man film production opportunities.
This led the Isle of Man Government to invest in shares in Pinewood Shepperton Plc, which were later sold at a profit. Once one of the busiest areas of film production in the British Isles, the Isle of Man hopes to use its strong foundation in film to grow its television and new digital media industry. A recent Isle of Man Department of Economic Development strategic review features 'digital media' and the creative industries within the Island's digital sector, which counts over 2,000 jobs, and embraces partnerships with the industry and its individual sector bodies, such as the Isle of Media, a new media cluster.
Motorsports
Hosting of motorsports events, like the Isle of Man Car Rally and the more prominent TT motorcycle races, contributes to the tourism economy.
Tourism
Tourism in the Isle of Man developed from advances in transport to the island. In 1819 the first steamship, Robert Bruce, came to the island, only seven years after the first steam vessel in the UK. In the 1820s, tourism grew due to improved transport. The island government's own report for the financial years 2014/15-2015/16 shows tourist accommodation to be in the lowest sector at 0.3%, ranking slightly above 'mining and quarrying' (0.1%).
Infrastructure
Electricity
Since 1999, the Isle of Man has received electricity through the world's longest submarine AC cable, the 90 kV Isle of Man to England Interconnector.

without real economic activity" by 2018.
Sectors
The Isle of Man's Department for Enterprise manages the diversified economy in twelve key sectors. The largest individual sectors by GNI are insurance and eGaming with 17% of GNI each, followed by ICT and banking with 9% each. The 2016 census lists 41,636 total employed. The largest sectors by employment are "medical and health", "financial and business services", construction, retail and public administration.
Manufacturing, focused on aerospace and the food and drink industry, employs almost 2000 workers and contributes about 5% of GDP. The sector provides laser optics, industrial diamonds, electronics, plastics and aerospace precision engineering.
Finance Sector
Insurance, banking (including retail banking, offshore banking and other banking services), other finance and business services, and corporate service providers together contribute the most to GNI and most of the jobs, with 10,057 people employed in 2016.
eGaming & ICT
Among the largest employers in the Island's private sector are eGaming (online gambling) companies such as The Stars Group, Microgaming, Newfield, and Playtech. The Manx eGaming Association (MEGA) represents the sector. Licences are issued by the Gambling Supervision Commission. In 2005 PokerStars, one of the world's largest online poker sites, relocated its headquarters to the Isle of Man from Costa Rica. In 2006 RNG Gaming, a large gaming software developer of P2P tournaments, and Get21, a multiplayer online blackjack site, based their corporate offices on the island.
by the GPO. The internal telegraph system was extended within a year to Castletown and Peel; by then, however, the previous lack of modern communications in Castletown had already started the Isle of Man Government on its move to Douglas. Due to increasing usage in the years following nationalisation, further cables between Port Cornaa and St Bees were laid in 1875 and 1885. By 1883 Smith's Directory listed several telegraph offices operated by the Post Office: in addition to those at Douglas, Ramsey, Castletown and Peel, the telegraph was also available at Laxey, Ballaugh, and Port St. Mary. Throughout the First World War, the cable landing station at Port Cornaa was guarded by the Isle of Man Volunteer Corps. The undersea telegraph cables have been disused since the 1950s, but remain in place.
Teleport
A teleport, with several earth stations, is currently under construction on the Isle of Man by SES Satellite Leasing, the entrepreneurial investment arm of SES. The teleport is expected to enter service in 2017. It will be a state-of-the-art facility providing satellite telemetry, tracking and commanding (TT&C) facilities and capacity management, together with a wide range of teleport services such as uplink, downlink, and contribution services for broadcasters and data centres.
Telephones
The main telephone provider on the Isle of Man today is Manx Telecom. In 1889 George Gillmore, formerly an electrician for the GPO's Manx telegraph operations, was granted a licence by the Postmaster General to operate the Isle of Man's first telephone service. Based in an exchange in Athol Street, early customers of Gillmore's telephone service included the Isle of Man Steam Packet Company and the Isle of Man Railway. Not having the resources to fund expansion or a link to England, Gillmore sold his licence to the National Telephone Company and stayed on as their manager on the island.
By 1901 there were 600 subscribers, and the telephone system had been extended to Ramsey, Castletown, Peel, Port Erin, Port St. Mary and Onchan. On 1 January 1912 the National Telephone Company was nationalised and merged into the General Post Office by the Telephone Transfer Act 1911. Only Guernsey, Portsmouth and Hull remained outside of the GPO. In 1922, the General Post Office offered to sell the island's telephone service to the Manx government, but the offer was not taken up. A similar arrangement in Jersey for that island's telephone service was concluded in 1923. The first off-island telephone link was established in 1929, with the laying of a cable by the CS Faraday between Port Erin and Ballyhornan in Northern Ireland, a distance of 57 km, and then between Port Grenaugh and Blackpool, primarily to provide a link to Northern Ireland. The cable was completed on 6 June 1929 and the first call between the Isle of Man and the outside world was made on 28 June 1929 by Lieutenant Governor Sir Claude Hill in Douglas to the Postmaster General in Liverpool. The cable initially carried only two trunk circuits. In 1942, a pioneering VHF frequency-modulated radio-link was established between Creg-na-Baa and the UK to provide an alternative to the sub-sea cable. This has since been discontinued. This was augmented on 24 June 1943 by a long cable between Cemaes Bay in Anglesey and Port Erin, which had the world's first submerged repeater, laid by HMCS Iris. The repeater doubled the possible number of circuits on the cable, and although it failed after only five months, its replacement worked for seven years. In 1962 a further undersea cable was laid by HMTS Ariel between Colwyn Bay and the Island. Historically, the telephone system on the Isle of Man had been run as a monopoly by the British General Post Office, and later British Telecommunications, and operated as part of the Liverpool telephone district. 
By 1985 the privatised British Telecom had inherited the telephone operations of the GPO, including those on the Isle of Man. At this time the Manx Government announced that it would award a 20-year licence to operate the telephone system in a tender process. As part of this process, in 1986 British Telecom created a Manx-registered subsidiary company, Manx Telecom, to bid for the tender. It was believed that a local identity and management would be more politically acceptable in the tendering process as they competed with Cable & Wireless to win the licence. Manx Telecom won the tender, and commenced operations under the new identity from 1 January 1987. On 28 March 1988 an 8,000 telephone circuit fibre optic cable, the longest unregenerated system in Europe, was inaugurated. It links Port Grenaugh to Silecroft in Cumbria, and was laid in September 1987. The cable was buried in the seabed along its entire length. A further fibre optic cable, known as BT-MT1, was laid in October 1990 between Millom in Cumbria and Douglas. Jointly operated by BT and Manx Telecom, it provides six channels each with a bandwidth of 140 Mbit/s. This cable remains in use today. In July 1992, Mercury Communications laid the LANIS fibre-optic cables. LANIS-1 runs between Port Grenaugh and Blackpool, and LANIS-2 runs between the Isle of Man and Northern Ireland. They have six channels each with a bandwidth of 565 Mbit/s. The LANIS cables are now operated by Cable & Wireless. The LANIS-1 cable was damaged 600 m off Port Grenaugh on 27 November 2006, causing loss of the link and resulting in temporary Internet access issues for some Manx customers whilst it was awaiting repair. On 17 November 2001 Manx Telecom became part of mmO2 following the demerger of BT Wireless's operations from BT Group, and the company was owned by Telefónica.
On 4 June 2010 Manx Telecom was sold by Telefónica to UK private equity investor HgCapital (who were buying the majority stake), alongside telecoms management company CPS Partners. In December 2007, the Manx Electricity Authority and its telecoms subsidiary, e-llan Communications, commissioned the lighting of a new undersea fibre-optic link. It was laid in 1999 between Blackpool and Douglas as part of the Isle of Man to England Interconnector, which connects the Manx electricity system to the UK's National Grid. In December 2017, Horizon Electronics Isle of Man (formerly Horizon Electro) helped with the online TV services of the Isle of Man. According to the CIA World Factbook, in 1999 there were 51,000 fixed telephone lines in use in the Isle of Man. The Isle of Man is included within the UK telephone numbering system, and is accessed externally via UK area codes, rather than by its own country calling code. The area codes currently in use are: +44 1624 (landlines) and +44 7425 / +44 7624 / +44 7924 (mobiles).
Submarine communications cables in service
BT-MT1 (BT/Manx Telecom, 1990 - UK)
BT-MT1-NI (BT/Manx Telecom, 2000 - Northern Ireland (UK))
LANIS-1 (Cable & Wireless, 1992 - UK)
LANIS-2 (Cable & Wireless, 1992 - Northern Ireland (UK))
Isle of Man to England Interconnector (Manx Electricity Authority, 2007 - UK)
Submarine cables in Manx waters are governed by the Submarine Cables Act 2003 (an Act of Tynwald).
Telecoms service providers
Manx Telecom - The incumbent provider, offering all types of telecoms and owner of the national network.
Sure - The island's second full-service provider, offering all types of telecommunications: mobiles, broadband, home phone, private circuits, dedicated Internet access, data centre hosting, LAN/WAN/PABX consultancy etc.
Wi-Manx - VoIP and internet services provider since 2007. In 2014 Wi-Manx was granted a Full Telecoms Operator licence.
Opti-Fi Limited - A fast-growing ISP, delivering super-fast fibre, wireless technology, IoT, and networking services throughout the Isle of Man. In 2020 Opti-Fi gained its ISP licence.
Mantis - A provider of satellite broadband services and IT support on the Isle of Man.
Manx Technology Group - An IT support, managed IT solutions, infrastructure management, device-as-a-service, 24×7 help-desk, reporting and IT security solutions company.
Domicilium - Born out of Advanced Systems (one of the original service providers in Europe), Domicilium is primarily a business ISP providing network and hosting services. Domicilium was the first IOM provider to offer MPLS services to the UK.
Continent 8 - Hosting provider with locations all over the world. Continent 8 have a specific focus on the gaming industry but are a registered internet provider on the IOM.
Netcetera - Offers hosting and co-location in its Ballasalla data centre.
BlueWave Communications - A provider of ISP and 4G services to businesses and consumers, located in Douglas on the Isle of Man. It was founded in 2007 by Stuart Baggs and was granted its Full Telecoms Operator licence in 2018.
It is also rumoured that various online gaming companies operate their own networks outside of these providers, although they do not resell that service.
Mobile telephones
The mobile phone network operated by Manx Telecom has been used by O2 as an environment for developing and testing new products and services prior to wider rollout. In December 2001, the company became the first telecommunications operator in Europe to launch a live 3G network. In November 2005, the company became the first in Europe to offer its customers an HSDPA (3.5G) service. Sure built its own mobile network on the island in 2007 and, following various upgrades, now delivers 2G, 3G and 4G services.
Internet
In 1996 the Isle of Man government obtained permission to use the .im national top level domain (TLD) and has ultimate responsibility for its use.
The domain is managed on a daily basis by Domicilium (IOM) Limited, an island-based Internet service provider. Broadband Internet services are available through six local providers: Manx Telecom, Sure, Wi-Manx, Domicilium, Opti-Fi Limited and BlueWave Communications. In 2021 it was revealed that BlueWave hosts a ground station for the Starlink satellite Internet system.
Broadcasting
Radio
The public-service commercial radio station for the island is Manx Radio, funded partly by a government grant and partly by advertising. There are two other Manx-based FM radio stations, Energy FM and 3 FM. BBC national radio stations are also relayed locally via a transmitter located to the south of Douglas, relayed from the Sandale transmitting station in Cumbria, as well as a signal feed from the Holme Moss transmitting station in West Yorkshire. The Douglas transmitter also broadcasts the BBC's DAB digital radio services and Classic FM. Manx Radio is the only local service to broadcast on AM medium wave. No UK services are relayed via local AM transmitters. No longwave stations operate from the Island, although one (MusicMann 279) was proposed. There are currently no proposals to broadcast any of the three insular FM stations on DAB.
Transmitters
Snaefell - Manx Radio, Energy FM, 3FM
Foxdale - Manx Radio (AM)
Mull Hill (near Port St. Mary) - Energy FM, 3FM
Jurby - Energy FM, Manx Radio
Ramsey - Manx Radio, Energy FM, 3FM, Horizon Pulse (a nearby site also used for television broadcasts the BBC DAB multiplex)
Ballasaig (Maughold) - Energy FM
Carnane (Douglas) - Manx Radio, Energy FM, 3FM, Horizon Pulse, Radio 1, Radio 2, Radio 3, Radio 4, Classic FM, BBC DAB multiplex
Port St Mary - 3FM, BBC DAB multiplex
Beary Peark - Energy FM, 3FM
Peel - Manx Radio
Cronk ny Arrey - 3FM
Television
There is no island-specific television service. Local transmitters retransmit UK Freeview broadcasts. The BBC region is BBC North West and the ITV region is Granada Television.
Many television services are available by satellite, such as Sky and Freesat from the Astra 2/Eurobird 1 group, as well as services from a range of other satellites around Europe such as Astra 1 and Hot Bird. The Manx companies ViaSat-IOM, ManSat and Telesat-IOM use the first communications satellite, ViaSat-1, launched in 2011 and positioned at the Isle of Man-registered 115.1 degrees west longitude geostationary orbit point. In some areas, terrestrial television directly from the United Kingdom or Republic of Ireland can also be received. Analogue television transmission ceased between 2008 and 2009, when limited local transmission of digital terrestrial television commenced. The UK's television licence regime extends to the island. There is no island-specific opt-out of the BBC regional news programme North West Tonight, in the way that the Channel Islands get their own version of Spotlight.

Television was first received on the Isle of Man from the Holme Moss transmitter, which started broadcasting BBC Television (later BBC One) on 12 October 1951. Signals from Holme Moss were easily received on the Isle of Man. ITV television has been available on parts of the east of the Isle of Man since 3 May 1956, when Granada Television (and ABC Television from 5 May 1956 to 28 July 1968) transmissions started from the Winter Hill transmitting station, and on parts of the west of the island since 31 October 1959 from the Black Mountain transmitting station in Northern Ireland, which broadcast Ulster Television. Parts of the north of the island have received Border Television since 1 September 1961, initially directly from the Caldbeck transmitting station in Cumberland (which became part of Cumbria in 1974). On 26 March 1965, Border Television commenced relaying its signal through a local transmitter on Richmond Hill, above sea level and from the centre of Douglas. The site allowed reliable reception of the Caldbeck signal, which is rebroadcast on a different frequency.
The high transmission tower was re-sited from London, where it had been used for early ITV transmissions. Richmond Hill was decommissioned after the close of 405-line broadcasts, although the 200 ft tower remained in use for radio, with Manx Radio transmitting on 96.9 MHz and then 97.3 MHz until 1989, when Manx Radio moved its FM service to the Carnane site and the frequency changed to the current 97.2 MHz. The television broadcasts are now transmitted from a high transmitter on a hill to the south of Douglas. The transmitter is operated by Arqiva.
to in the first full year after passing their driving test (Isle of Man citizens are permitted to start driving at the age of sixteen) and some are not used to having to make progress in the same way as on a larger road network such as that in the UK: even a cautious driver can get from anywhere in the island to anywhere else in no more than sixty minutes. Set against this is a strong culture of motor sport enthusiasm (pinnacled in the TT, but there are many events during the year) and many residents familiar with the roads are well used to traversing country roads at speeds illegal on similar roads elsewhere. This leads to a very diverse spread of both driving competence and speed. In an official survey in 2006 the introduction of blanket speed limits was refused by the population, suggesting that a large number appreciate the freedom. There is a comprehensive bus network, operated by Bus Vannin, a department of the Isle of Man Government, with most routes originating or terminating in Douglas. An organisation on the Isle of Man called the Fare Free Campaign supports making bus and tram travel on the island free of charge for all routes.
One of the reasons the campaign gives for supporting this is to encourage people to change their transportation habits to help mitigate climate change.

Railways

The island has a total of of railway. There are seven separate public rail or tram systems on the island. (a: reduced in 2019 due to works on the promenade. These works have overrun badly, and as at October 2019 the situation with the horse trams in the 2020 season is uncertain.) The last three are short-distance tourist rides which cannot be said to be transport services. All of these routes are seasonal.

Airports

The only commercial airport on the island is the Isle of Man Airport at Ronaldsway. Scheduled services operate to and from various cities in the United Kingdom and Ireland, operated by several different airlines. The island's other paved runways are at Jurby and Andreas. Jurby remains in Isle of Man Government ownership and is used for
2, and this base-2 measure of entropy has sometimes been called the shannon in his honor. Entropy is also commonly computed using the natural logarithm (base e, where e is Euler's number), which produces a measurement of entropy in nats per symbol and sometimes simplifies the analysis by avoiding the need to include extra constants in the formulas. Other bases are also possible, but less commonly used. For example, a logarithm of base 2⁸ = 256 will produce a measurement in bytes per symbol, and a logarithm of base 10 will produce a measurement in decimal digits (or hartleys) per symbol.

Intuitively, the entropy H(X) of a discrete random variable X is a measure of the amount of uncertainty associated with the value of X when only its distribution is known. The entropy of a source that emits a sequence of N symbols that are independent and identically distributed (iid) is N·H bits (per message of N symbols). If the source data symbols are identically distributed but not independent, the entropy of a message of length N will be less than N·H.

If one transmits 1000 bits (0s and 1s), and the value of each of these bits is known to the receiver (has a specific value with certainty) ahead of transmission, it is clear that no information is transmitted. If, however, each bit is independently equally likely to be 0 or 1, 1000 shannons of information (more often called bits) have been transmitted. Between these two extremes, information can be quantified as follows. If X is the set of all messages {x₁, ..., xₙ} that a message x could be, and p(x) is the probability of some x ∈ X, then the entropy, H, of X is defined:

H(X) = E_X[I(x)] = −Σ_{x∈X} p(x) log p(x)

(Here, I(x) is the self-information, which is the entropy contribution of an individual message, and E_X is the expected value.) A property of entropy is that it is maximized when all the messages in the message space are equiprobable, p(x) = 1/n; i.e., most unpredictable, in which case H(X) = log n.
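These properties can be checked with a short numeric sketch (plain Python; the helper name `entropy` is ours, not from any library):

```python
from math import log2

def entropy(probs):
    # Shannon entropy in bits: H = -sum p * log2(p), with 0 log 0 taken as 0.
    return -sum(p * log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))            # 1.0 bit: a fair coin
print(entropy([1.0]))                 # 0 bits: a certain outcome
print(entropy([0.25] * 4))            # 2.0 bits: maximal for 4 outcomes
print(entropy([0.7, 0.1, 0.1, 0.1]))  # below 2 bits: non-uniform
```

The uniform case reproduces H = log n: four equiprobable messages give log₂ 4 = 2 bits.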
The special case of information entropy for a random variable with two outcomes is the binary entropy function, usually taken to the logarithmic base 2, thus having the shannon (Sh) as unit:

H_b(p) = −p log₂ p − (1 − p) log₂(1 − p)

Joint entropy

The joint entropy of two discrete random variables X and Y is merely the entropy of their pairing: (X, Y). This implies that if X and Y are independent, then their joint entropy is the sum of their individual entropies. For example, if (X, Y) represents the position of a chess piece, with X the row and Y the column, then the joint entropy of the row of the piece and the column of the piece will be the entropy of the position of the piece. Despite similar notation, joint entropy should not be confused with cross-entropy.

Conditional entropy (equivocation)

The conditional entropy or conditional uncertainty of X given random variable Y (also called the equivocation of X about Y) is the average conditional entropy over Y:

H(X|Y) = E_Y[H(X|y)] = −Σ_{y∈Y} p(y) Σ_{x∈X} p(x|y) log p(x|y) = −Σ_{x,y} p(x, y) log p(x|y)

Because entropy can be conditioned on a random variable or on that random variable being a certain value, care should be taken not to confuse these two definitions of conditional entropy, the former of which is in more common use. A basic property of this form of conditional entropy is that:

H(X|Y) = H(X, Y) − H(Y)

Mutual information (transinformation)

Mutual information measures the amount of information that can be obtained about one random variable by observing another. It is important in communication, where it can be used to maximize the amount of information shared between sent and received signals. The mutual information of X relative to Y is given by:

I(X; Y) = Σ_{x,y} p(x, y) log [ p(x, y) / (p(x) p(y)) ]

where SI (Specific mutual Information) is the pointwise mutual information. A basic property of the mutual information is that

I(X; Y) = H(X) − H(X|Y).

That is, knowing Y, we can save an average of I(X; Y) bits in encoding X compared to not knowing Y.
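The identity I(X;Y) = H(X) − H(X|Y) can be verified numerically on a toy joint distribution (the distribution and helper names below are invented for illustration):

```python
from math import log2

def H(probs):
    # Entropy in bits of a collection of probabilities (0 log 0 taken as 0).
    return -sum(p * log2(p) for p in probs if p > 0)

# Toy joint distribution p(x, y) over two binary variables.
joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}

# Marginal distributions p(x) and p(y):
px, py = {}, {}
for (x, y), p in joint.items():
    px[x] = px.get(x, 0.0) + p
    py[y] = py.get(y, 0.0) + p

# I(X;Y) = H(X) + H(Y) - H(X,Y)
mi = H(px.values()) + H(py.values()) - H(joint.values())

# Equivalent form from the text: I(X;Y) = H(X) - H(X|Y),
# using H(X|Y) = H(X,Y) - H(Y).
h_x_given_y = H(joint.values()) - H(py.values())
assert abs(mi - (H(px.values()) - h_x_given_y)) < 1e-12

print(round(mi, 4))  # 0.2781 bits shared between X and Y
```

For this distribution the two marginals are uniform (1 bit each), and the correlation between X and Y yields about 0.28 bits of mutual information.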
Mutual information is symmetric:

I(X; Y) = I(Y; X) = H(X) + H(Y) − H(X, Y)

Mutual information can be expressed as the average Kullback–Leibler divergence (information gain) between the posterior probability distribution of X given the value of Y and the prior distribution on X:

I(X; Y) = E_{p(y)}[ D_KL( p(X|Y=y) ‖ p(X) ) ]

In other words, this is a measure of how much, on the average, the probability distribution on X will change if we are given the value of Y. This is often recalculated as the divergence from the product of the marginal distributions to the actual joint distribution:

I(X; Y) = D_KL( p(X, Y) ‖ p(X) p(Y) )

Mutual information is closely related to the log-likelihood ratio test in the context of contingency tables and the multinomial distribution and to Pearson's χ² test: mutual information can be considered a statistic for assessing independence between a pair of variables, and it has a well-specified asymptotic distribution.

Kullback–Leibler divergence (information gain)

The Kullback–Leibler divergence (or information divergence, information gain, or relative entropy) is a way of comparing two distributions: a "true" probability distribution p(X), and an arbitrary probability distribution q(X). If we compress data in a manner that assumes q(X) is the distribution underlying some data, when, in reality, p(X) is the correct distribution, the Kullback–Leibler divergence is the average number of additional bits per datum necessary for compression. It is thus defined:

D_KL( p(X) ‖ q(X) ) = Σ_{x∈X} p(x) log [ p(x) / q(x) ]

Although it is sometimes used as a 'distance metric', KL divergence is not a true metric since it is not symmetric and does not satisfy the triangle inequality (making it a semi-quasimetric). Another interpretation of the KL divergence is the "unnecessary surprise" introduced by a prior from the truth: suppose a number X is about to be drawn randomly from a discrete set with probability distribution p(x). If Alice knows the true distribution p(x), while Bob believes (has a prior) that the distribution is q(x), then Bob will be more surprised than Alice, on average, upon seeing the value of X.
The KL divergence is the (objective) expected value of Bob's (subjective) surprisal minus Alice's surprisal, measured in bits if the log is in base 2. In this way, the extent to which Bob's prior is "wrong" can be quantified in terms of how "unnecessarily surprised" it is expected to make him. Other quantities Other important information theoretic quantities include Rényi entropy (a generalization of entropy), differential entropy (a generalization of quantities of information to continuous distributions), and the conditional mutual information. Coding theory Coding theory is one of the most important and direct applications of information theory. It can be subdivided into source coding theory and channel coding theory. Using a statistical description for data, information theory quantifies the number of bits needed to describe the data, which is the information entropy of the source. Data compression (source coding): There are two formulations for the compression problem: lossless data compression: the data must be reconstructed exactly; lossy data compression: allocates bits needed to reconstruct the data, within a specified fidelity level measured by a distortion function. This subset of information theory is called rate–distortion theory. Error-correcting codes (channel coding): While data compression removes as much redundancy as possible, an error-correcting code adds just the right kind of redundancy (i.e., error correction) needed to transmit the data efficiently and faithfully across a noisy channel. This division of coding theory into compression and transmission is justified by the information transmission theorems, or source–channel separation theorems that justify the use of bits as the universal currency for information in many contexts. However, these theorems only hold in the situation where one transmitting user wishes to communicate to one receiving user. 
In scenarios with more than one transmitter (the multiple-access channel), more than one receiver (the broadcast channel) or intermediary "helpers" (the relay channel), or more general networks, compression followed by transmission may no longer be optimal. Network information theory refers to these multi-agent communication models.

Source theory

Any process that generates successive messages can be considered a source of information. A memoryless source is one in which each message is an independent identically distributed random variable, whereas the properties of ergodicity and stationarity impose less restrictive constraints. All such sources are stochastic. These terms are well studied in their own right outside information theory.

Rate

Information rate is the average entropy per symbol. For memoryless sources, this is merely the entropy of each symbol, while, in the case of a stationary stochastic process, it is

r = lim_{n→∞} H(X_n | X_{n−1}, X_{n−2}, ..., X_1);

that is, the conditional entropy of a symbol given all the previous symbols generated. For the more general case of a process that is not necessarily stationary, the average rate is

r = lim_{n→∞} (1/n) H(X_1, X_2, ..., X_n);

that is, the limit of the joint entropy per symbol. For stationary sources, these two expressions give the same result. It is common in information theory to speak of the "rate" or "entropy" of a language. This is appropriate, for example, when the source of information is English prose. The rate of a source of information is related to its redundancy and how well it can be compressed, the subject of source coding.

Concepts from information theory are also used in cryptography and cryptanalysis. See the article ban (unit) for a historical application.

Historical background

The landmark event establishing the discipline of information theory and bringing it to immediate worldwide attention was the publication of Claude E. Shannon's classic paper "A Mathematical Theory of Communication" in the Bell System Technical Journal in July and October 1948.
Prior to this paper, limited information-theoretic ideas had been developed at Bell Labs, all implicitly assuming events of equal probability. Harry Nyquist's 1924 paper, Certain Factors Affecting Telegraph Speed, contains a theoretical section quantifying "intelligence" and the "line speed" at which it can be transmitted by a communication system, giving the relation W = K log m (recalling the Boltzmann constant), where W is the speed of transmission of intelligence, m is the number of different voltage levels to choose from at each time step, and K is a constant. Ralph Hartley's 1928 paper, Transmission of Information, uses the word information as a measurable quantity, reflecting the receiver's ability to distinguish one sequence of symbols from any other, thus quantifying information as H = log Sⁿ = n log S, where S was the number of possible symbols, and n the number of symbols in a transmission. The unit of information was therefore the decimal digit, which has since sometimes been called the hartley in his honor as a unit or scale or measure of information. Alan Turing in 1940 used similar ideas as part of the statistical analysis of the breaking of the German Second World War Enigma ciphers. Much of the mathematics behind information theory with events of different probabilities was developed for the field of thermodynamics by Ludwig Boltzmann and J. Willard Gibbs. Connections between information-theoretic entropy and thermodynamic entropy, including the important contributions by Rolf Landauer in the 1960s, are explored in Entropy in thermodynamics and information theory.
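Hartley's logarithmic measure H = n log S is easy to reproduce; a minimal sketch (the function name is ours):

```python
from math import log10

def hartley_info(s, n):
    # Hartley's 1928 measure: H = log10(s**n) = n * log10(s),
    # in decimal digits (hartleys), for n symbols from an alphabet of size s.
    return n * log10(s)

print(hartley_info(10, 3))  # 3.0: three decimal digits carry 3 hartleys
print(hartley_info(2, 10))  # about 3.01 hartleys for ten binary symbols
```

Using base 10 makes the decimal digit the natural unit, which is exactly why the hartley came to name that unit.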
In Shannon's revolutionary and groundbreaking paper, the work for which had been substantially completed at Bell Labs by the end of 1944, Shannon for the first time introduced the qualitative and quantitative model of communication as a statistical process underlying information theory, opening with the assertion: "The fundamental problem of communication is that of reproducing at one point, either exactly or approximately, a message selected at another point." With it came the ideas of the information entropy and redundancy of a source, and its relevance through the source coding theorem; the mutual information, and the channel capacity of a noisy channel, including the promise of perfect loss-free communication given by the noisy-channel coding theorem; the practical result of the Shannon–Hartley law for the channel capacity of a Gaussian channel; as well as the bit—a new way of seeing the most fundamental unit of information. Quantities of information Information theory is based on probability theory and statistics. Information theory often concerns itself with measures of information of the distributions associated with random variables. Important quantities of information are entropy, a measure of information in a single random variable, and mutual information, a measure of information in common between two random variables. The former quantity is a property of the probability distribution of a random variable and gives a limit on the rate at which data generated by independent samples with the given distribution can be reliably compressed. The latter is a property of the joint distribution of two random variables, and is the maximum rate of reliable communication across a noisy channel in the limit of long block lengths, when the channel statistics are determined by the joint distribution. The choice of logarithmic base in the following formulae determines the unit of information entropy that is used. 
A common unit of information is the bit, based on the binary logarithm. Other units include the nat, which is based on the natural logarithm, and the decimal digit, which is based on the common logarithm.

In what follows, an expression of the form p log p is considered by convention to be equal to zero whenever p = 0. This is justified because lim_{p→0+} p log p = 0 for any logarithmic base.

Entropy of an information source

Based on the probability mass function of each source symbol to be communicated, the Shannon entropy H, in units of bits (per symbol), is given by

H = −Σ_i p_i log₂(p_i),

where p_i is the probability of occurrence of the i-th possible value of the source symbol. This equation gives the entropy in the units of "bits" (per symbol) because it uses a logarithm of base 2, and this base-2 measure of entropy has sometimes been called the shannon in his honor.
affected by this phenomenon is journalism. Such a profession, which in the past was responsible for the dissemination of information, may be suppressed by the overabundance of information today. Techniques to gather knowledge from an overabundance of electronic information (e.g., data fusion may help in data mining) have existed since the 1970s. Another common technique for dealing with such amounts of information is qualitative research. Such approaches aim to organize the information, synthesizing, categorizing and systematizing it in order to make it more usable and easier to search.

Growth patterns

The world's technological capacity to store information grew from 2.6 (optimally compressed) exabytes in 1986 to 15.8 in 1993, over 54.5 in 2000, and to 295 (optimally compressed) exabytes in 2007. This is equivalent to less than one 730-MB CD-ROM per person in 1986 (539 MB per person), roughly 4 CD-ROMs per person in 1993, 12 CD-ROMs per person in the year 2000, and almost 61 CD-ROMs per person in 2007. Piling up the imagined 404 billion CD-ROMs from 2007 would create a stack from the Earth to the Moon and a quarter of this distance beyond (with 1.2 mm thickness per CD). The world's technological capacity to receive information through one-way broadcast networks was 432 exabytes of (optimally compressed) information in 1986, 715 (optimally compressed) exabytes in 1993, 1,200 (optimally compressed) exabytes in 2000, and 1,900 in 2007. The world's effective capacity to exchange information through two-way telecommunication networks was 0.281 exabytes of (optimally compressed) information in 1986, 0.471 in 1993, 2.2 in 2000, and 65 (optimally compressed) exabytes in 2007. A new metric that is being used in an attempt to characterize the growth in person-specific information is the disk storage per person (DSP), which is measured in megabytes per person (where a megabyte is 10⁶ bytes, abbreviated MB).
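The CD-ROM stack claim above is a quick piece of arithmetic to check (the Earth-Moon distance of roughly 384,400 km is an added assumption, not from the text):

```python
cds = 404e9              # imagined CD-ROMs holding the 2007 total
thickness_mm = 1.2       # per-disc thickness given in the text
earth_moon_km = 384_400  # assumed mean Earth-Moon distance

stack_km = cds * thickness_mm / 1e6  # millimetres -> kilometres
print(round(stack_km))               # 484800 km
print(stack_km / earth_moon_km)      # about 1.26 Earth-Moon distances
```

A ratio of about 1.26 matches the text's "from the Earth to the Moon and a quarter of this distance beyond".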
Global DSP (GDSP) is the total rigid disk drive space (in MB) of new units sold in a year divided by the world population in that year. The GDSP metric is a crude measure of how much disk storage could possibly be used to collect person-specific data on the world population. In 1983, one million fixed drives with an estimated total of 90 terabytes were sold worldwide; 30MB drives had the largest market segment. In 1996, 105 million drives, totaling 160,623 terabytes, were sold, with 1 and 2 gigabyte drives leading the industry. By the year 2000, with 20GB drives leading the industry, rigid drives sold for the year were projected to total 2,829,288 terabytes, and rigid disk drive sales were projected to top $34 billion in 1997.

According to Latanya Sweeney, there are three trends in data gathering today:

Type 1. Expansion of the number of fields being collected, known as the "collect more" trend.
Type 2. Replacement of an existing aggregate data collection with a person-specific one, known as the "collect specifically" trend.
Type 3. Gathering of information by starting a new person-specific data collection, known as the "collect it if you can" trend.

Related terms

Since "information" in electronic media is often used synonymously with "data", the term information explosion is closely related to the concept of data flood (also dubbed data deluge). Sometimes the term information flood is used as well. All of these basically boil down to the ever-increasing amount of electronic data exchanged per unit of time. Awareness of non-manageable amounts of data grew along with the advent of ever more powerful data processing since the mid-1960s.

Challenges
grains of barley, dry and round, placed end to end, lengthwise". Similar definitions are recorded in both English and Welsh medieval law tracts. One, dating from the first half of the 10th century, is contained in the Laws of Hywel Dda which superseded those of Dyfnwal, an even earlier definition of the inch in Wales. Both definitions, as recorded in Ancient Laws and Institutes of Wales (vol i., pp. 184, 187, 189), are that "three lengths of a barleycorn is the inch". King David I of Scotland in his Assize of Weights and Measures (c. 1150) is said to have defined the Scottish inch as the width of an average man's thumb at the base of the nail, even including the requirement to calculate the average of a small, a medium, and a large man's measures. However, the oldest surviving manuscripts date from the early 14th century and appear to have been altered with the inclusion of newer material.

In 1814, Charles Butler, a mathematics teacher at Cheam School, recorded the old legal definition of the inch to be "three grains of sound ripe barley being taken out the middle of the ear, well dried, and laid end to end in a row", and placed the barleycorn, not the inch, as the base unit of the English Long Measure system, from which all other units were derived. John Bouvier similarly recorded in his 1843 law dictionary that the barleycorn was the fundamental measure. Butler observed, however, that "[a]s the length of the barley-corn cannot be fixed, so the inch according to this method will be uncertain", noting that a standard inch measure was now [i.e. by 1843] kept in the Exchequer chamber, Guildhall, and that was the legal definition of the inch. This was a point also made by George Long in his 1842 Penny Cyclopædia, observing that standard measures had since surpassed the barleycorn definition of the inch, and that to recover the inch measure from its original definition, in case the standard measure were destroyed, would involve the measurement of large numbers of barleycorns and taking their average lengths. He noted that this process would not perfectly recover the standard, since it might introduce errors of anywhere between one hundredth and one tenth of an inch in the definition of a yard.

Before the adoption of the international yard and pound, various definitions were in use. In the United Kingdom and most countries of the British Commonwealth, the inch was defined in terms of the Imperial Standard Yard. The United States adopted the conversion factor 1 metre = 39.37 inches by an act in 1866. In 1893, Mendenhall ordered the physical realization of the inch to be based on the international prototype metres numbers 21 and 27, which had been received from the CGPM, together with the previously adopted conversion factor. As a result of the definitions above, the U.S. inch was effectively defined as 25.4000508 mm (with a reference temperature of 68 degrees Fahrenheit) and the UK inch as 25.399977 mm (with a reference temperature of 62 degrees Fahrenheit).

When Carl Edvard Johansson started manufacturing gauge blocks in inch sizes in 1912, Johansson's compromise was to manufacture gauge blocks with a nominal size of 25.4 mm, with a reference temperature of 20 degrees Celsius, accurate to within a few parts per million of both official definitions. Because Johansson's blocks were so popular, his blocks became the de facto standard for manufacturers internationally, with other manufacturers of gauge blocks following Johansson's definition by producing blocks designed to be equivalent to his. In 1930, the British Standards Institution adopted an inch of exactly 25.4 mm. The American Standards Association followed suit in 1933. By 1935, industry in 16 countries had adopted the "industrial inch" as it came to be known, effectively endorsing Johansson's pragmatic choice of conversion ratio.

In 1946, the Commonwealth Science Congress recommended a yard of exactly 0.9144 metres for adoption throughout the British Commonwealth. This was adopted by Canada in 1951; the United States on 1 July 1959; Australia in 1961, effective 1 January 1964; and the United Kingdom in 1963, effective on 1 January 1964. The new standards gave an inch of exactly 25.4 mm, 1.7 millionths of an inch longer than the old imperial inch and 2 millionths of an inch shorter than the old US inch.

Related units

US survey inches

The United States retains the -metre definition for surveying, producing a 2 millionth part difference between standard and US survey inches. This is approximately inch per mile; 12.7 kilometres is exactly standard inches and exactly survey inches. This difference is substantial
The American Standards Association followed suit in 1933. By 1935, industry in 16 countries had adopted the "industrial inch" as it came to be known, effectively endorsing Johansson's pragmatic choice of conversion ratio. In 1946, the Commonwealth Science Congress recommended a yard of exactly 0.9144 metres for adoption throughout the British Commonwealth. This was adopted by Canada in 1951; the United States on 1 July 1959; Australia in 1961, effective 1 January 1964; and the United Kingdom in 1963, effective on 1 January 1964. The new standards gave an inch of exactly 25.4 mm, 1.7 millionths of an inch longer than the old imperial inch and 2 millionths of an inch shorter than the old US inch. Related units US survey inches The United States retains the -metre definition for surveying, producing a 2 millionth part difference between standard and US survey inches. This is approximately inch per mile; 12.7 kilometres is exactly standard inches and exactly survey inches. This difference is substantial when doing calculations in State Plane Coordinate Systems with coordinate values in the hundreds of thousands or millions of feet. In 2020, the U.S. NIST announced that the U.S. survey foot would "be phased out" on 1 January 2023 and be superseded by the International foot (also known as the |
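As a cross-check of the figures above, the old US inch implied by the 1866 conversion factor (1 metre = 39.37 inches) can be computed and compared with the international inch of exactly 25.4 mm; a small sketch:

```python
# Cross-check of the conversion factors quoted above:
# the 1866 US definition (1 metre = 39.37 inches) versus the
# international inch of exactly 25.4 mm (adopted 1959).
from decimal import Decimal, getcontext

getcontext().prec = 12

us_inch_mm = Decimal(1000) / Decimal("39.37")  # old US inch, in mm
intl_inch_mm = Decimal("25.4")                 # international inch, in mm

print(us_inch_mm)  # 25.4000508001 -- matches the 25.4000508 mm quoted above

# The relative difference is about 2 parts per million, consistent with
# the new inch being "2 millionths of an inch shorter than the old US inch".
ppm = (us_inch_mm - intl_inch_mm) / intl_inch_mm * Decimal(10) ** 6
print(round(ppm, 1))  # 2.0
```

Using `Decimal` rather than binary floats keeps the arithmetic exact enough to reproduce the quoted millimetre values to the last digit.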
inns for water, food, and horses. A passenger train stopped only at designated stations in the city centre, around which were built grand railway hotels. Motorcar traffic on old-style two-lane highways might have paused at any camp, cabin court, or motel along the way, while freeway traffic was restricted to access from designated off-ramps to side roads which quickly became crowded with hotel chain operators. The original functions of an inn are now usually split among separate establishments. For example, hotels, lodges and motels might provide the traditional functions of an inn but focus more on lodging customers than on other services; public houses (pubs) are primarily alcohol-serving establishments; and restaurants and taverns serve food and drink. (Hotels often contain restaurants serving full breakfasts and meals, thus providing all of the functions of traditional inns. Economy, limited-service properties, however, lack a kitchen and bar, and therefore claim at most an included continental breakfast.) The lodging aspect of the word inn lives on in some hotel brand names, like Holiday Inn, and the Inns of Court in London were once accommodations for members of the legal profession. Some laws refer to lodging operators as innkeepers.

Forms

Other forms of inns exist throughout the world. Among them are the honjin and ryokan of Japan, caravanserai of Central Asia and the Middle East, and jiuguan in ancient China. In Asia Minor, during the periods of rule by the Seljuq and Ottoman Turks, impressive structures functioning as inns were built because inns were considered socially significant. These inns provided accommodations for people and either their vehicles or animals, and served as a resting place for those travelling on foot or by other means. They were built between towns if the distance between municipalities was too far for one day's travel. These structures, called caravansarais, were inns with large courtyards and ample supplies of water for drinking and other uses. They typically contained a café, in addition to supplies of food and fodder. After travelling for a while, caravans would take a break at these caravansarais, often spending the night to rest both the travellers and their animals.

Usage of the term

The term "inn" historically characterized a rural hotel which provided lodging, food and refreshments, and accommodations for travelers' horses. To capitalize on this nostalgic image, many typically lower-end and middling modern motor hotel operators seek to distance themselves from similar motels by styling themselves "inns", regardless of services and accommodations provided. Examples are Comfort Inn, Days Inn,
a live web scoreboard with real-time provisional results. Submissions will be scored as soon as possible during the contest, and the results posted. Contestants will be aware of their own scores, but not others', and may resubmit to improve their scores. Starting from 2012, IOI has been using the Contest Management System (CMS) for developing and monitoring the contest. The scores from the two competition days and all problems are summed up separately for each contestant. At the awarding ceremony, contestants are awarded medals depending on their relative total score. The top 50% of the contestants are awarded medals, such that the relative number of gold : silver : bronze : no medal is approximately 1 : 2 : 3 : 6 (thus 1/12 of the contestants get a gold medal). Prior to IOI 2010, students who did not receive medals did not have their scores published, making it impossible for a country to be ranked by adding together the scores of its competitors unless each won a medal. From IOI 2010, although the scores of students who did not receive medals are still not available in the official results, they are known from the live web scoreboard. At IOI 2012, the top three nations ranked by aggregate score (Russia, China and the USA) were awarded during the closing ceremony.

Analysis of female performance shows 77.9% of women obtain no medal, while 49.2% of men obtain no medal. "The average female participation was 4.4% in 1989–1994 and 2.2% in 1996–2014." It also suggests much higher participation of women at the national level, with sometimes double-digit percentages in total participation at the first stage. The president of the IOI, Richard Forster, says the competition has difficulty attracting women and that, in spite of trying to solve it, "none of us have hit on quite what the problem is, let alone the solution."

At IOI 2017, held in Iran, Israeli students could not participate in Iran and instead took part in an offsite competition organized by the IOI in Russia. Due to visa issues, the full USA team was unable to attend, although one contestant, Zhezheng Luo, was able to attend by traveling with the Chinese team, winning a gold medal and third place in the standings. At IOI 2019, held in Azerbaijan, the Armenian team did not participate due to the dispute between the two countries, despite the guarantees provided and the official invitation letter sent by the host Azerbaijan. Due to the COVID-19 pandemic, both IOI 2020 and IOI 2021, originally scheduled to be hosted by Singapore, would be
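The 1 : 2 : 3 : 6 medal ratio described above implies simple fractions of the contestant pool; a minimal sketch (not the official IOI allocation algorithm, which works from score boundaries):

```python
# Illustrative only: the approximate gold : silver : bronze : no-medal
# ratio of 1 : 2 : 3 : 6 described above, under which half of all
# contestants receive a medal and 1/12 receive gold. The real
# allocation is determined by score cutoffs, not by exact fractions.
def medal_counts(n_contestants: int) -> dict:
    gold = n_contestants // 12    # 1/12 of contestants
    silver = n_contestants // 6   # 2/12
    bronze = n_contestants // 4   # 3/12
    return {
        "gold": gold,
        "silver": silver,
        "bronze": bronze,
        "none": n_contestants - gold - silver - bronze,
    }

print(medal_counts(312))
# {'gold': 26, 'silver': 52, 'bronze': 78, 'none': 156}
```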
Iota (uppercase: Ι, lowercase: ι) is the ninth letter of the Greek alphabet. It was derived from the Phoenician letter Yodh. Letters that arose from this letter include the Latin I and J, the Cyrillic І (І, і), Yi (Ї, ї), and Je (Ј, ј), and iotated letters (e.g. Yu (Ю, ю)). In the system of Greek numerals, iota has a value of 10. Iota represents the sound . In early forms of ancient Greek, it occurred in both long and short versions, but this distinction was lost in Koine Greek. Iota participated as the second element in falling diphthongs, with both long and short vowels as the first element. Where the first element was long, the iota was lost in pronunciation at an early date, and was written in polytonic orthography as iota subscript, in other words as a very small ι under the main vowel. Examples include ᾼ ᾳ ῌ ῃ ῼ ῳ. The former diphthongs became
Airport code ISP), New York, US

Education
Information Society Project, at Yale Law School
Institute of Southern Punjab, a university in Pakistan
Instituto Superior Politécnico, a university in São Tomé and Príncipe
Institut supérieur de philosophie, the Higher Institute of Philosophy in Louvain-la-Neuve, Belgium
Integrated science program, an honors program at Northwestern University
International School of Pakistan, Kuwait
International School of Panama
International School of Paris, France
International School of Prague, Czech Republic
Petnica Science Center (Istraživačka Stanica Petnica)

Law enforcement
Idaho State Police
Illinois State Police
Indiana State Police
Iowa State Patrol
Iowa State Penitentiary, Fort Madison, Iowa, US

Organizations
Independent Socialist Party (disambiguation), in several countries
Institute of Sales Promotion, Thailand
Integrated Service Provider, a type of logistics services firm
Intesa Sanpaolo, Italian bank
ISP Sports, US marketing and broadcast company
Independence for Scotland Party

Science and technology
Computing
Image signal processor
In-system programming, of programmable
alpha-2 adrenergic receptor agonists, thiazides, hormone modulators, and 5α-reductase inhibitors)
Neurogenic disorders (e.g., diabetic neuropathy, temporal lobe epilepsy, multiple sclerosis, Parkinson's disease, multiple system atrophy)
Cavernosal disorders (e.g., Peyronie's disease)
Hyperprolactinemia (e.g., due to a prolactinoma)
Psychological causes: performance anxiety, stress, and mental disorders
Surgery (e.g., radical prostatectomy)
Aging: after age 40 years, aging itself is a risk factor for ED, although numerous other pathologies that may occur with aging, such as testosterone deficiency, cardiovascular diseases, or diabetes, among others, appear to have interacting effects
Kidney disease: ED and chronic kidney disease have pathological mechanisms in common, including vascular and hormonal dysfunction, and may share other comorbidities, such as hypertension and diabetes mellitus, that can contribute to ED
Lifestyle habits, particularly smoking, which is a key risk factor for ED as it promotes arterial narrowing
COVID-19: preliminary research indicates that COVID-19 viral infection may affect sexual and reproductive health

Surgical intervention for a number of conditions may remove anatomical structures necessary to erection, damage nerves, or impair blood supply. ED is a common complication of treatments for prostate cancer, including prostatectomy and destruction of the prostate by external beam radiation, although the prostate gland itself is not necessary to achieve an erection. As far as inguinal hernia surgery is concerned, in most cases, and in the absence of postoperative complications, the operative repair can lead to a recovery of the sexual life of people with preoperative sexual dysfunction, while, in most cases, it does not affect people with a preoperative normal sexual life. ED can also be associated with bicycling due to both neurological and vascular problems due to compression. The increased risk appears to be about 1.7-fold.
Concerns that use of pornography can cause ED have little support in epidemiological studies, according to a 2015 literature review. According to Gunter de Win, a Belgian professor and sex researcher, "Put simply, respondents who watch 60 minutes a week and think they're addicted were more likely to report sexual dysfunction than those who watch a care-free 160 minutes weekly."

Pathophysiology

Penile erection is managed by two mechanisms: the reflex erection, which is achieved by directly touching the penile shaft, and the psychogenic erection, which is achieved by erotic or emotional stimuli. The former involves the peripheral nerves and the lower parts of the spinal cord, whereas the latter involves the limbic system of the brain. In both cases, an intact neural system is required for a successful and complete erection. Stimulation of the penile shaft by the nervous system leads to the secretion of nitric oxide (NO), which causes the relaxation of the smooth muscles of the corpora cavernosa (the main erectile tissue of the penis), and subsequently penile erection. Additionally, adequate levels of testosterone (produced by the testes) and an intact pituitary gland are required for the development of a healthy erectile system. As can be understood from the mechanisms of a normal erection, impotence may develop due to hormonal deficiency, disorders of the neural system, lack of adequate penile blood supply or psychological problems. Spinal cord injury causes sexual dysfunction, including ED. Restriction of blood flow can arise from impaired endothelial function due to the usual causes associated with coronary artery disease, but can also be caused by prolonged exposure to bright light.

Diagnosis

In many cases, the diagnosis can be made based on the person's history of symptoms. In other cases, a physical examination and laboratory investigations are done to rule out more serious causes such as hypogonadism or prolactinoma.
One of the first steps is to distinguish between physiological and psychological ED. Determining whether involuntary erections are present is important in eliminating the possibility of psychogenic causes for ED. Obtaining full erections occasionally, such as nocturnal penile tumescence when asleep (that is, when the mind and psychological issues, if any, are less present), tends to suggest that the physical structures are functionally working. Similarly, performance with manual stimulation, as well as any performance anxiety or acute situational ED, may indicate a psychogenic component to ED. Another factor leading to ED is diabetes mellitus, a well-known cause of neuropathy. ED is also related to generally poor physical health, poor dietary habits, obesity, and most specifically cardiovascular disease, such as coronary artery disease and peripheral vascular disease. Screening for cardiovascular risk factors, such as smoking, dyslipidemia, hypertension, and alcoholism, is helpful. In some cases, the simple search for a previously undetected groin hernia can prove useful, since it can affect sexual function in men and is relatively easily curable. The Diagnostic and Statistical Manual of Mental Disorders (DSM-IV) lists ED.

Ultrasonography

Penile ultrasonography with Doppler can be used to examine the erect penis.
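The Doppler evaluation below interprets post-injection peak systolic velocity against simple cutoffs (above 35 cm/s, below 25 cm/s, and an indeterminate 25–35 cm/s band); as a sketch of that decision rule only (reference values vary across studies, and this is illustrative, not clinical guidance):

```python
# Illustrative decision rule for post-injection peak systolic velocity
# (PSV) on penile Doppler, using the cutoffs quoted in this section.
# Reference values vary across studies; this is not clinical guidance.
def classify_psv(psv_cm_s: float) -> str:
    if psv_cm_s > 35:
        return "no arterial disease"
    if psv_cm_s < 25:
        return "arterial insufficiency"
    return "indeterminate (25-35 cm/s band)"

print(classify_psv(40.0))  # no arterial disease
print(classify_psv(20.0))  # arterial insufficiency
print(classify_psv(30.0))  # indeterminate (25-35 cm/s band)
```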
Erection can be induced by injecting 10–20 µg of prostaglandin E1, with evaluations of the arterial flow every five minutes for 25–30 min (see image). The use of prostaglandin E1 is contraindicated in patients with a predisposition to priapism (e.g., those with sickle cell anemia), anatomical deformity of the penis, or penile implants. Phentolamine (2 mg) is often added. Visual and tactile stimulation produces better results. Some authors recommend the use of sildenafil by mouth to replace the injectable drugs in cases of contraindications, although the efficacy of such medication is controversial. Before the injection of the chosen drug, the flow pattern is monophasic, with low systolic velocities and an absence of diastolic flow. After injection, systolic and diastolic peak velocities should increase, decreasing progressively with vein occlusion and becoming negative when the penis becomes rigid (see image below). The reference values vary across studies, ranging from > 25 cm/s to > 35 cm/s. Values above 35 cm/s indicate the absence of arterial disease, values below 25 cm/s indicate arterial insufficiency, and values of 25–35 cm/s are indeterminate because they are less specific (see image below). The data obtained should be correlated with the degree of erection observed. If the peak systolic velocities are normal, the final diastolic velocities should be evaluated, those above 5 cm/s being associated with venogenic ED.

Other workup methods

Penile nerves function: Tests such as the bulbocavernosus reflex test are used to ascertain whether there is enough nerve sensation in the penis. The physician squeezes the glans (head) of the penis, which immediately causes the anus to contract if nerve function is normal. A physician measures the latency between squeeze and contraction by observing the anal sphincter or by feeling it with a gloved finger in the anus.

Nocturnal penile tumescence (NPT): It is normal for a man to have five to six erections during sleep, especially during rapid eye movement (REM). Their absence may indicate a problem with nerve function or blood supply in the penis. There are two methods for measuring changes in penile rigidity and circumference during nocturnal erection: snap gauge and strain gauge. A significant proportion of men who have no sexual dysfunction nonetheless do not have regular nocturnal erections.

Penile biothesiometry: This test uses electromagnetic vibration to evaluate sensitivity and nerve function in the glans and shaft of the penis.

Dynamic infusion cavernosometry (DICC): A technique in which fluid is pumped into the penis at a known rate and pressure. It gives a measurement of the vascular pressure in the corpus cavernosum during an erection.

Corpus cavernosometry: Cavernosography measurement of the vascular pressure in the corpus cavernosum. Saline is infused under pressure into the corpus cavernosum with a butterfly needle, and the flow rate needed to maintain an erection indicates the degree of venous leakage. The leaking veins responsible may be visualized by infusing a mixture of saline and x-ray contrast medium and performing a cavernosogram. In Digital Subtraction Angiography (DSA), the images are acquired digitally.

Magnetic resonance angiography (MRA): This is similar to magnetic resonance imaging. Magnetic resonance angiography uses magnetic fields and radio waves to provide detailed images of the blood vessels. The doctor may inject into the patient's bloodstream a contrast agent, which causes vascular tissues to stand out against other tissues, so that information about blood supply and vascular anomalies is easier to gather.

Treatment

Treatment depends on the underlying cause. In general, exercise, particularly of the aerobic type, is effective for preventing ED during midlife.
Counseling can be used if the underlying cause is psychological, including how to lower stress or anxiety related to sex. Medications by mouth and vacuum erection devices are first-line treatments, followed by injections of drugs into the penis, as well as penile implants. Vascular reconstructive surgeries are beneficial in certain groups. Treatments, other than surgery, do not fix the underlying physiological problem, but are used as needed before sex.

Medications

The PDE5 inhibitors sildenafil (Viagra), vardenafil (Levitra) and tadalafil (Cialis) are prescription drugs which are taken by mouth. As of 2018, sildenafil is available in the UK without a prescription. Additionally, a cream combining alprostadil with the permeation enhancer DDAIP has been approved in Canada as a first line treatment for ED. Penile injections, on the other hand, can involve one of the following medications: papaverine, phentolamine, and prostaglandin E1, also known as alprostadil. In addition to injections,
As far as inguinal hernia surgery is concerned, in most cases, and in the absence of postoperative complications, the operative repair can lead to a recovery of the sexual life of people with preoperative sexual dysfunction, while, in most cases, it does not affect people with a preoperative normal sexual life. ED can also be associated with bicycling due to both neurological and vascular problems due to compression. The increased risk appears to be about 1.7-fold. Concerns that use of pornography can cause ED have little support in epidemiological studies, according to a 2015 literature review. According to Gunter de Win, a Belgian professor and sex researcher, "Put simply, respondents who watch 60 minutes a week and think they're addicted were more likely to report sexual dysfunction than those who watch a care-free 160 minutes weekly." Pathophysiology Penile erection is managed by two mechanisms: the reflex erection, which is achieved by directly touching the penile shaft, and the psychogenic erection, which is achieved by erotic or emotional stimuli. The former involves the peripheral nerves and the lower parts of the spinal cord, whereas the latter involves the limbic system of the brain. In both cases, an intact neural system is required for a successful and complete erection. Stimulation of the penile shaft by the nervous system leads to the secretion of nitric oxide (NO), which causes the relaxation of the smooth muscles of the corpora cavernosa (the main erectile tissue of the penis), and subsequently penile erection. Additionally, adequate levels of testosterone (produced by the testes) and an intact pituitary gland are required for the development of a healthy erectile system. As can be understood from the mechanisms of a normal erection, impotence may develop due to hormonal deficiency, disorders of the neural system, lack of adequate penile blood supply or psychological problems. Spinal cord injury causes sexual dysfunction, including ED. 
Restriction of blood flow can arise from impaired endothelial function due to the usual causes associated with coronary artery disease, but can also be caused by prolonged exposure to bright light. Diagnosis In many cases, the diagnosis can be made based on the person's history of symptoms. In other cases, a physical examination and laboratory investigations are done to rule out more serious causes such as hypogonadism or prolactinoma. One of the first steps is to distinguish between physiological and psychological ED. Determining whether involuntary erections are present is important in eliminating the possibility of psychogenic causes for ED. Obtaining full erections occasionally, such as nocturnal penile tumescence when asleep (that is, when the mind and psychological issues, if any, are less present), tends to suggest that the physical structures are functionally working. Similarly, performance with manual stimulation, as well as any performance anxiety or acute situational ED, may indicate a psychogenic component to ED. Another factor leading to ED is diabetes mellitus, a well known cause of neuropathy). ED is also related to generally poor physical health, poor dietary habits, obesity, and most specifically cardiovascular disease, such as coronary artery disease and peripheral vascular disease. Screening for cardiovascular risk factors, such as smoking, dyslipidemia, hypertension, and alcoholism, is helpful. In some cases, the simple search for a previously undetected groin hernia can prove useful since it can affect sexual functions in men and is relatively easily curable. The current diagnostic and statistical manual of mental diseases (DSM-IV) lists ED. Ultrasonography Penile ultrasonography with doppler can be used to examine the erect penis. 
Most cases of ED of organic causes are related to changes in blood flow in the corpora cavernosa, represented by occlusive artery disease (in which less blood is allowed to enter the penis), most often of atherosclerotic origin, or due to failure of the veno-occlusive mechanism (in which too much blood circulates back out of the penis). Before the Doppler sonogram, the penis should be examined in B mode, in order to identify possible tumors, fibrotic plaques, calcifications, or hematomas, and to evaluate the appearance of the cavernous arteries, which can be tortuous or atheromatous. Erection can be induced by injecting 10–20 µg of prostaglandin E1, with evaluations of the arterial flow every five minutes for 25–30 min (see image). The use of prostaglandin E1 is contraindicated in patients with predisposition to priapism (e.g., those with sickle cell anemia), anatomical deformity of the penis, or penile implants. Phentolamine (2 mg) is often added. Visual and tactile stimulation produces better results. Some authors recommend the use of sildenafil by mouth to replace the injectable drugs in cases of contraindications, although the efficacy of such medication is controversial. Before the injection of the chosen drug, the flow pattern is monophasic, with low systolic velocities and an absence of diastolic flow. After injection, systolic and diastolic peak velocities should increase, decreasing progressively with vein occlusion and becoming negative when the penis becomes rigid (see image below). The reference values vary across studies, ranging from > 25 cm/s to > 35 cm/s. Values above 35 cm/s indicate the absence of arterial disease, values below 25 cm/s indicate arterial insufficiency, and values of 25–35 cm/s are indeterminate because they are less specific (see image below). The data obtained should be correlated with the degree of erection observed. 
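For illustration only (this is not a clinical tool), the velocity cut-offs just described amount to a simple three-way classification. A minimal sketch in Python, using the commonly cited thresholds; as noted above, the exact reference values vary across studies:

```python
def classify_psv(psv_cm_s: float) -> str:
    """Interpret cavernosal artery peak systolic velocity (PSV, in cm/s)
    measured after intracavernosal prostaglandin E1 injection.

    Uses the commonly cited cut-offs described in the text; exact
    reference values vary across studies, so this is illustrative only.
    """
    if psv_cm_s > 35:
        return "no arterial disease"
    if psv_cm_s < 25:
        return "arterial insufficiency"
    # Values of 25-35 cm/s are less specific and must be correlated
    # with the degree of erection observed.
    return "indeterminate"

print(classify_psv(40))  # no arterial disease
print(classify_psv(20))  # arterial insufficiency
print(classify_psv(30))  # indeterminate
```

As the text stresses, such a value in isolation is not diagnostic: the result must be correlated with the degree of erection observed and with the diastolic velocities.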
If the peak systolic velocities are normal, the final diastolic velocities should be evaluated, those above 5 cm/s being associated with venogenic ED. Other workup methods Penile nerves |
"moderates" in Iran, with the United States reimbursing Israel. In his 1990 autobiography An American Life, Reagan claimed that he was deeply committed to securing the release of the hostages; it was this compassion that supposedly motivated his support for the arms initiatives. The president requested that the "moderate" Iranians do everything in their capability to free the hostages held by Hezbollah. Reagan always publicly insisted after the scandal broke in late 1986 that the purpose behind the arms-for-hostages trade was to establish a working relationship with the "moderate" faction associated with Rafsanjani to facilitate the reestablishment of the American–Iranian alliance after the expected death of Khomeini, to end the Iran–Iraq war and end Iranian support for Islamic terrorism, while downplaying the freeing of the hostages in Lebanon as a secondary issue. By contrast, when testifying before the Tower Commission, Reagan declared that the hostage issue was the main reason for selling arms to Iran. The following arms were supplied to Iran:
First arms sales in 1981 (see above)
20 August 1985 – 96 TOW anti-tank missiles
14 September 1985 – 408 more TOWs
24 November 1985 – 18 Hawk anti-aircraft missiles
17 February 1986 – 500 TOWs
27 February 1986 – 500 TOWs
24 May 1986 – 508 TOWs, 240 Hawk spare parts
4 August 1986 – More Hawk spares
28 October 1986 – 500 TOWs
First arms sale The first arms sales to Iran began in 1981, though the official paper trail has them beginning in 1985 (see above). On 20 August 1985, Israel sent 96 American-made TOW missiles to Iran through the arms dealer Manucher Ghorbanifar. Subsequently, on 14 September 1985, 408 more TOW missiles were delivered. On 15 September 1985, following the second delivery, Reverend Benjamin Weir was released by his captors, the Islamic Jihad Organization. On 24 November 1985, 18 Hawk anti-aircraft missiles were delivered.
Modifications in plans Robert McFarlane resigned on 4 December 1985, stating that he wanted to spend more time with his family, and was replaced by Admiral John Poindexter. Two days later, Reagan met with his advisors at the White House, where a new plan was introduced. This called for a slight change in the arms transactions: instead of the weapons going to the "moderate" Iranian group, they would go to "moderate" Iranian army leaders. As each weapons delivery was made from Israel by air, hostages held by Hezbollah would be released. Israel would continue to be reimbursed by the United States for the weapons. Though staunchly opposed by Secretary of State George Shultz and Secretary of Defense Caspar Weinberger, the plan was authorized by Reagan, who stated that "We were not trading arms for hostages, nor were we negotiating with terrorists". In his notes of a meeting held in the White House on 7 December 1985, Weinberger wrote that he told Reagan the plan was illegal: I argued strongly that we have an embargo that makes arms sales to Iran illegal and President couldn't violate it and that 'washing' transactions through Israel wouldn't make it legal. Shultz, Don Regan agreed. Weinberger's notes have Reagan saying he "could answer charges of illegality but he couldn't answer charge that 'big strong President Reagan' passed up a chance to free hostages." The now-retired McFarlane flew to London to meet with Israelis and Ghorbanifar in an attempt to persuade the Iranian to use his influence to release the hostages before any arms transactions occurred; this plan was rejected by Ghorbanifar.
On the day of McFarlane's resignation, Oliver North, a military aide to the United States National Security Council (NSC), proposed a new plan for selling arms to Iran, which included two major adjustments: instead of selling arms through Israel, the sale was to be direct at a markup; and a portion of the proceeds would go to the Contras, the Nicaraguan paramilitary fighters waging guerrilla warfare against the Sandinista government, which had claimed power after an election marked by irregularities. The dealings with the Iranians were conducted via the NSC with Admiral Poindexter and his deputy Colonel North, with the American historians Malcolm Byrne and Peter Kornbluh writing that Poindexter granted much power to North "...who made the most of the situation, often deciding important matters on his own, striking outlandish deals with the Iranians, and acting in the name of the president on issues that were far beyond his competence. All of these activities continued to take place within the framework of the president's broad authorization. Until the press reported on the existence of the operation, nobody in the administration questioned the authority of Poindexter's and North's team to implement the president's decisions". North proposed a $15 million markup, while contracted arms broker Ghorbanifar added a 41% markup of his own. Other members of the NSC were in favor of North's plan; with large support, Poindexter authorized it without notifying President Reagan, and it went into effect. At first, the Iranians refused to buy the arms at the inflated price because of the excessive markup imposed by North and Ghorbanifar. They eventually relented, and in February 1986, 1,000 TOW missiles were shipped to the country. From May to November 1986, there were additional shipments of miscellaneous weapons and parts.
Both the sale of weapons to Iran and the funding of the Contras attempted to circumvent not only stated administration policy, but also the Boland Amendment. Administration officials argued that even though Congress had restricted funds for the Contras, the President (or in this case the administration) could carry on by seeking alternative means of funding, such as private entities and foreign governments. Funding from one foreign country, Brunei, was botched when North's secretary, Fawn Hall, transposed the digits of North's Swiss bank account number. A Swiss businessman, suddenly $10 million richer, alerted the authorities of the mistake. The money was eventually returned to the Sultan of Brunei, with interest. On 7 January 1986, John Poindexter proposed to Reagan a modification of the approved plan: instead of negotiating with the "moderate" Iranian political group, the United States would negotiate with "moderate" members of the Iranian government. Poindexter told Reagan that Ghorbanifar had important connections within the Iranian government, so with the hope of the release of the hostages, Reagan approved this plan as well. Throughout February 1986, weapons were shipped directly to Iran by the United States (as part of Oliver North's plan), but none of the hostages were released. Retired National Security Advisor McFarlane conducted another international voyage, this one to Tehran – bringing with him a gift of a Bible with a handwritten inscription by Ronald Reagan and, according to George Cave, a cake baked in the shape of a key. Howard Teicher described the cake as a joke between North and Ghorbanifar. McFarlane met directly with Iranian officials associated with Rafsanjani, who sought to establish U.S.-Iranian relations in an attempt to free the four remaining hostages.
The American delegation comprised McFarlane, North, Cave (a retired CIA officer who worked in Iran in the 1960s–70s), Teicher, Israeli diplomat Amiram Nir and a CIA translator. They arrived in Tehran in an Israeli plane carrying forged Irish passports on 25 May 1986. This meeting also failed. Much to McFarlane's disgust, he did not meet ministers, and instead met, in his words, "third and fourth level officials". At one point, an angry McFarlane shouted: "As I am a Minister, I expect to meet with decision-makers. Otherwise, you can work with my staff." The Iranians requested concessions such as Israel's withdrawal from the Golan Heights, which the United States rejected. More importantly, McFarlane refused to ship spare parts for the Hawk missiles until the Iranians had Hezbollah release the American hostages, whereas the Iranians wanted to reverse that sequence, with the spare parts being shipped before the hostages were freed. The impasse between these negotiating positions led McFarlane's delegation to go home after four days. After the failure of the secret visit to Tehran, McFarlane advised Reagan not to talk to the Iranians anymore, advice that was disregarded.
By this point, the Americans had grown tired of Ghorbanifar, who had proven himself a dishonest intermediary who played off both sides to his own commercial advantage. In August 1986, the Americans had established a new contact in the Iranian government, Ali Hashemi Bahramani, the nephew of Rafsanjani and an officer in the Revolutionary Guard. The fact that the Revolutionary Guard was deeply involved in international terrorism seemed only to attract the Americans more to Bahramani, who was seen as someone with the influence to change Iran's policies. Richard Secord, an American arms dealer who was being used as a contact with Iran, wrote to North: "My judgment is that we have opened up a new and probably better channel into Iran". North was so impressed with Bahramani that he arranged for him to secretly visit Washington, D.C., and gave him a midnight guided tour of the White House. North frequently met with Bahramani in the summer and fall of 1986 in West Germany, discussing arms sales to Iran, the freeing of hostages held by Hezbollah, and how best to overthrow President Saddam Hussein of Iraq and establish "a non-hostile regime in Baghdad". In September and October 1986 three more Americans – Frank Reed, Joseph Cicippio, and Edward Tracy – were abducted in Lebanon by a separate terrorist group, who referred to them simply as "G.I. Joe," after the popular American toy. The reasons for their abduction are unknown, although it is speculated that they were kidnapped to replace the freed Americans. One more original hostage, David Jacobsen, was later released. The captors promised to release the remaining two, but the release never happened. During a secret meeting in Frankfurt in October 1986, North told Bahramani that "Saddam Hussein must go". North also claimed that Reagan had told him to tell Bahramani that "Saddam Hussein is an asshole." During a secret meeting in Mainz, Bahramani informed North that Rafsanjani "for his own politics ...
decided to get all the groups involved and give them a role to play." Thus, all the factions in the Iranian government would be jointly responsible for the talks with the Americans and "there would not be an internal war". This demand of Bahramani's caused much dismay on the American side, as it made clear to them that they would not be dealing solely with a "moderate" faction in the Islamic Republic, as the Americans liked to pretend to themselves, but rather with all the factions in the Iranian government – including those who were very much involved in terrorism. Despite this, the talks were not broken off. Discovery and scandal After a leak by Mehdi Hashemi, a senior official in the Islamic Revolutionary Guard Corps, the Lebanese magazine Ash-Shiraa exposed the arrangement on 3 November 1986. The leak may have been orchestrated by a covert team led by Arthur S. Moreau Jr., assistant to the chairman of the United States Joint Chiefs of Staff, due to fears the scheme had grown out of control. This was the first public report of the weapons-for-hostages deal. The operation was discovered only after an airlift of guns (Corporate Air Services HPF821) was downed over Nicaragua. Eugene Hasenfus, who was captured by Nicaraguan authorities after surviving the plane crash, initially alleged in a press conference on Nicaraguan soil that two of his coworkers, Max Gomez and Ramon Medina, worked for the Central Intelligence Agency. He later said he did not know whether they did or not. The Iranian government confirmed the Ash-Shiraa story, and ten days after the story was first published, President Reagan appeared on national television from the Oval Office on 13 November, stating: My purpose was ... to send a signal that the United States was prepared to replace the animosity between [the U.S. and Iran] with a new relationship ...
At the same time we undertook this initiative, we made clear that Iran must oppose all forms of international terrorism as a condition of progress in our relationship. The most significant step which Iran could take, we indicated, would be to use its influence in Lebanon to secure the release of all hostages held there. The scandal was compounded when Oliver North destroyed or hid pertinent documents between 21 November and 25 November 1986. During North's trial in 1989, his secretary, Fawn Hall, testified extensively about helping North alter and shred official United States National Security Council (NSC) documents from the White House. According to The New York Times, enough documents were put into a government shredder to jam it. Hall also testified that she smuggled classified documents out of the Old Executive Office Building by concealing them in her boots and dress. North's explanation for destroying some documents was to protect the lives of individuals involved in Iran and Contra operations. It was not until 1993, years after the trial, that North's notebooks were made public, and only after the National Security Archive and Public Citizen sued the Office of the Independent Counsel under the Freedom of Information Act. During the trial, North testified that on 21, 22 or 24 November, he witnessed Poindexter destroy what may have been the only signed copy of a presidential covert-action finding that sought to authorize CIA participation in the November 1985 Hawk missile shipment to Iran. U.S. Attorney General Edwin Meese admitted on 25 November that profits from weapons sales to Iran were made available to assist the Contra rebels in Nicaragua. On the same day, John Poindexter resigned, and President Reagan fired Oliver North. Poindexter was replaced by Frank Carlucci on 2 December 1986. 
When the story broke, many legal and constitutional scholars expressed dismay that the NSC, which was supposed to be just an advisory body to assist the President with formulating foreign policy, had "gone operational" by becoming an executive body covertly executing foreign policy on its own. The National Security Act of 1947, which created the NSC, gave it the vague right to perform "such other functions and duties related to intelligence as the National Security Council may from time to time direct." However, the NSC had usually, although not always, acted as an advisory agency until the Reagan administration, when the NSC "went operational", a situation that was condemned by both the Tower Commission and by Congress as a departure from the norm. The American historian James Canham-Clyne asserted that the Iran–Contra affair and the NSC "going operational" were not departures from the norm, but the logical and natural consequence of the existence of the "national security state", the plethora of shadowy government agencies with multi-million-dollar budgets operating with little oversight from Congress, the courts or the media, and for whom upholding national security justified almost everything. Canham-Clyne argued that for the "national security state", the law was an obstacle to be surmounted rather than something to uphold, and that the Iran–Contra affair was just "business as usual", something he asserted that the media missed by focusing on the NSC having "gone operational." In Veil: The Secret Wars of the CIA 1981–1987, journalist Bob Woodward chronicled the role of the CIA in facilitating the transfer of funds from the Iran arms sales to the Nicaraguan Contras spearheaded by Oliver North. According to Woodward, then-Director of the CIA William J. Casey admitted to him in February 1987 that he was aware of the diversion of funds to the Contras.
The controversial admission occurred while Casey was hospitalized for a stroke and, according to his wife, unable to communicate. Casey died on 6 May 1987, the day after Congress began public hearings on Iran–Contra. Independent Counsel Lawrence Walsh later wrote: "Independent Counsel obtained no documentary evidence showing Casey knew about or approved the diversion. The only direct testimony linking Casey to early knowledge of the diversion came from [Oliver] North." Gust Avrakotos, who was responsible for the arms supplies to the Afghans at this time, was aware of the operation as well and strongly opposed it, in particular the diversion of funds allotted to the Afghan operation. According to his Middle Eastern experts, the operation was pointless because the moderates in Iran were not in a position to challenge the fundamentalists. However, he was overruled by Clair George. Tower Commission On 25 November 1986, President Reagan announced the creation of a Special Review Board to look into the matter; the following day, he appointed former Senator John Tower, former Secretary of State Edmund Muskie, and former National Security Adviser Brent Scowcroft to serve as members. This Presidential Commission took effect on 1 December and became known as the Tower Commission. The main objectives of the commission were to inquire into "the circumstances surrounding the Iran–Contra matter, other case studies that might reveal strengths and weaknesses in the operation of the National Security Council system under stress, and the manner in which that system has served eight different presidents since its inception in 1947". The Tower Commission was the first presidential commission to review and evaluate the National Security Council. President Reagan appeared before the Tower Commission on 2 December 1986, to answer questions regarding his involvement in the affair.
When asked about his role in authorizing the arms deals, he first stated that he had; later, he appeared to contradict himself by stating that he had no recollection of doing so. In his 1990 autobiography, An American Life, Reagan acknowledges authorizing the shipments to Israel. The report published by the Tower Commission was delivered to the president on 26 February 1987. The Commission had interviewed 80 witnesses to the scheme, including Reagan, and two of the arms trade middlemen: Manucher Ghorbanifar and Adnan Khashoggi. The 200-page report was the most comprehensive of any released, criticizing the actions of Oliver North, John Poindexter, Caspar Weinberger, and others. It determined that President Reagan did not have knowledge of the extent of the program, especially about the diversion of funds to the Contras, although it argued that the president ought to have had better control of the National Security Council staff. The report heavily criticized Reagan for not properly supervising his subordinates or being aware of their actions. A major result of the Tower Commission was the consensus that Reagan should have listened to his National Security Advisor more, thereby placing more power in the hands of that chair. Congressional committees investigating the affair In January 1987, Congress announced it was opening an investigation into the Iran–Contra | of the presidency of George H. W. Bush, who had been Vice President at the time of the affair. Former Independent Counsel Walsh noted that in issuing the pardons, Bush appeared to have been preempting being implicated himself by evidence that came to light during the Weinberger trial, and noted that there was a pattern of "deception and obstruction" by Bush, Weinberger and other senior Reagan administration officials. Walsh submitted his final report on August 4, 1993, and later wrote an account of his experiences as counsel, Firewall: The Iran-Contra Conspiracy and Cover-Up.
Background The United States was the largest seller of arms to Iran under Mohammad Reza Pahlavi, and the vast majority of the weapons that the Islamic Republic of Iran inherited in January 1979 were American-made. To maintain this arsenal, Iran required a steady supply of spare parts to replace those broken and worn out. After Iranian students stormed the American embassy in Tehran in November 1979 and took 52 Americans hostage, U.S. President Jimmy Carter imposed an arms embargo on Iran. After Iraq invaded Iran in September 1980, Iran desperately needed weapons and spare parts for its current weapons. After Ronald Reagan took office as President on 20 January 1981, he vowed to continue Carter's policy of blocking arms sales to Iran on the grounds that Iran supported terrorism. A group of senior Reagan administration officials in the Senior Interdepartmental Group conducted a secret study on 21 July 1981, and concluded that the arms embargo was ineffective because Iran could always buy arms and spare parts for its American weapons elsewhere, while at the same time the arms embargo opened the door for Iran to fall into the Soviet sphere of influence as the Kremlin could sell Iran weapons if the United States would not. The conclusion was that the United States should start selling Iran arms as soon as it was politically possible to keep Iran from falling into the Soviet sphere of influence. At the same time, the openly declared goal of Ayatollah Khomeini to export his Islamic revolution all over the Middle East and overthrow the governments of Iraq, Kuwait, Saudi Arabia, and the other states around the Persian Gulf led to the Americans perceiving Khomeini as a major threat to the United States. In the spring of 1983, the United States launched Operation Staunch, a wide-ranging diplomatic effort to persuade other nations all over the world not to sell arms or spare parts for weapons to Iran. 
This was at least part of the reason the Iran–Contra affair proved so humiliating for the United States when the story first broke in November 1986 that the US itself was selling arms to Iran. At the same time that the American government was considering its options on selling arms to Iran, Contra militants based in Honduras were waging a guerrilla war to topple the Sandinista National Liberation Front (FSLN) revolutionary government of Nicaragua. Almost from the time he took office in 1981, a major goal of the Reagan administration was to overthrow the left-wing Sandinista government in Nicaragua and to support the Contra rebels. The Reagan administration's policy towards Nicaragua produced a major clash between the executive and legislative branches as Congress sought to limit, if not curb altogether, the ability of the White House to support the Contras. Direct U.S. funding of the Contra insurgency was made illegal through the Boland Amendment, the name given to three U.S. legislative amendments between 1982 and 1984 aimed at limiting U.S. government assistance to Contra militants. By 1984, funding for the Contras had run out; and, in October of that year, a total ban came into effect. The second Boland Amendment, in effect from 3 October 1984 to 3 December 1985, stated: During the fiscal year 1985 no funds available to the Central Intelligence Agency, the Department of Defense or any other agency or entity of the United States involved in intelligence activities may be obligated or expended for the purpose of or which may have the effect of supporting directly or indirectly military or paramilitary operations in Nicaragua by any nation, organization, group, movement, or individual. In violation of the Boland Amendment, senior officials of the Reagan administration continued to secretly arm and train the Contras and provide arms to Iran, an operation they called "the Enterprise". Given the Contras' heavy dependence on U.S.
military and financial support, the second Boland Amendment threatened to break the Contra movement, and led to President Reagan ordering in 1984 that the National Security Council (NSC) "keep the Contras together 'body and soul'", no matter what Congress voted for. A major legal debate at the center of the Iran–Contra affair concerned the question of whether the NSC was one of the "any other agency or entity of the United States involved in intelligence activities" covered by the Boland Amendment. The Reagan administration argued it was not, and many in Congress argued that it was. The majority of constitutional scholars have asserted the NSC did indeed fall within the purview of the second Boland Amendment, though the amendment did not mention the NSC by name. The broader constitutional question at stake was the power of Congress versus the power of the presidency. The Reagan administration argued that because the constitution assigned the right to conduct foreign policy to the executive, its efforts to overthrow the government of Nicaragua were a presidential prerogative that Congress had no right to try to halt via the Boland Amendments. By contrast, congressional leaders argued that the constitution had assigned Congress control of the budget, and that Congress had every right to use that power not to fund projects, such as attempting to overthrow the government of Nicaragua, that they disapproved of. As part of the effort to circumvent the Boland Amendment, the NSC established "the Enterprise", an arms-smuggling network headed by retired U.S. Air Force officer turned arms dealer Richard Secord that supplied arms to the Contras. It was ostensibly a private-sector operation, but in fact was controlled by the NSC. To fund "the Enterprise", the Reagan administration was constantly on the lookout for funds that came from outside the U.S.
government in order not to explicitly violate the letter of the Boland Amendment, though the efforts to find alternative funding for the Contras violated its spirit. Ironically, military aid to the Contras was reinstated with Congressional consent in October 1986, a month before the scandal broke. In 1985, Manuel Noriega offered to help the United States by making Panama available as a staging ground for operations against the Sandinistas and by offering to train Contras in Panama, but this would later be overshadowed by the Iran–Contra affair itself. At around the same time, the Soviet bloc also engaged in arms deals with ideologically opposed buyers, possibly involving some of the same players as the Iran–Contra affair. In 1986, a complex operation involving East Germany's Stasi and the Danish-registered ship Pia Vesta ultimately aimed to sell Soviet arms and military vehicles to South Africa's Armscor, using various intermediaries to distance themselves from the deal. Manuel Noriega of Panama was apparently one of these intermediaries but backed out on the deal as the ship and weapons were seized at a Panamanian port. The Pia Vesta affair led to a small controversy, as the governments of Panama and Peru in 1986 accused the United States and each other of being involved in the East Germany-originated shipment. Arms sales to Iran As reported in The New York Times in 1991, "continuing allegations that Reagan campaign officials made a deal with the Iranian Government of Ayatollah Ruhollah Khomeini in the fall of 1980" led to "limited investigations." However "limited," those investigations established that "Soon after taking office in 1981, the Reagan Administration secretly and abruptly changed United States policy." Secret Israeli arms sales and shipments to Iran began in that year, even as, in public, "the Reagan Administration" presented a different face, and "aggressively promoted a public campaign... to stop worldwide transfers of military goods to Iran."
The New York Times explains: "Iran at that time was in dire need of arms and spare parts for its American-made arsenal to defend itself against Iraq, which had attacked it in September 1980," while "Israel [a U.S. ally] was interested in keeping the war between Iran and Iraq going to ensure that these two potential enemies remained preoccupied with each other." Maj. Gen. Avraham Tamir, a high-ranking Israeli Defense Ministry official in 1981, said there was an "oral agreement" to allow the sale of "spare parts" to Iran. This was based on an "understanding" with Secretary of State Alexander Haig (which a Haig adviser denied). This account was confirmed by a former senior American diplomat with a few modifications. The diplomat claimed that "[Ariel] Sharon violated it, and Haig backed away...". A former "high-level" CIA official who saw reports of arms sales to Iran by Israel in the early 1980s estimated that the total was about $2 billion a year, but also said, "The degree to which it was sanctioned I don't know." On 17 June 1985, National Security Adviser Robert McFarlane wrote a National Security Decision Directive which called for the United States of America to begin a rapprochement with the Islamic Republic of Iran. The paper read: Dynamic political evolution is taking place inside Iran. Instability caused by the pressures of the Iraq-Iran war, economic deterioration and regime in-fighting create the potential for major changes inside Iran. The Soviet Union is better positioned than the U.S. to exploit and benefit from any power struggle that results in changes from the Iranian regime ... The U.S. should encourage Western allies and friends to help Iran meet its import requirements so as to reduce the attractiveness of Soviet assistance ... This includes provision of selected military equipment. Defense Secretary Caspar Weinberger was highly negative, writing on his copy of McFarlane's paper: "This is almost too absurd to comment on ...
like asking Qaddafi to Washington for a cozy chat." Secretary of State George Shultz was also opposed, asking how the United States could possibly sell arms to Iran after designating it a State Sponsor of Terrorism in January 1984. Only William Casey, Director of the Central Intelligence Agency, supported McFarlane's plan to start selling arms to Iran. In early July 1985, the historian Michael Ledeen, a consultant of National Security Adviser Robert McFarlane, asked Israeli Prime Minister Shimon Peres for help in arranging the sale of arms to Iran. Having talked to the Israeli diplomat David Kimche and to Ledeen, McFarlane learned that the Iranians were prepared to have Hezbollah release American hostages in Lebanon in exchange for Israel shipping American weapons to Iran. Designated a State Sponsor of Terrorism since January 1984, Iran was in the midst of the Iran–Iraq War and could find few Western nations willing to supply it with weapons. The idea behind the plan was for Israel to ship weapons through an intermediary (identified as Manucher Ghorbanifar) to the Islamic republic as a way of aiding a supposedly moderate, politically influential faction within the regime of Ayatollah Khomeini that was believed to be seeking a rapprochement with the United States; after the transaction, the United States would reimburse Israel with the same weapons, while receiving monetary benefits. McFarlane in a memo to Shultz and Weinberger wrote: The short term dimension concerns the seven hostages; the long term dimension involves the establishment of a private dialogue with Iranian officials on the broader relations ... They sought specifically the delivery from Israel of 100 TOW missiles ... The plan was discussed with President Reagan on 18 July 1985 and again on 6 August 1985. Shultz at the latter meeting warned Reagan that "we were just falling into the arms-for-hostages business and we shouldn't do it."
The Americans believed that there was a moderate faction in the Islamic republic headed by Akbar Hashemi Rafsanjani, the powerful speaker of the Majlis, who was seen as a leading potential successor to Khomeini and who was alleged to want a rapprochement with the United States. The Americans believed that Rafsanjani had the power to order Hezbollah to free the American hostages, and that establishing a relationship with him by selling arms to Iran would ultimately place Iran back within the American sphere of influence. It remains unclear whether Rafsanjani really wanted a rapprochement with the United States or was just deceiving Reagan administration officials who were willing to believe that he was a moderate who would effect a rapprochement. Rafsanjani, whose nickname was "the Shark", was described by the British journalist Patrick Brogan as a man of great charm and formidable intelligence, known for his subtlety and ruthlessness, whose motives in the Iran–Contra affair remain completely mysterious. The Israeli government required that the sale of arms meet high-level approval from the United States government, and when McFarlane convinced them that the U.S. government approved the sale, Israel obliged by agreeing to sell the arms. In July 1985, President Reagan entered Bethesda Naval Hospital for colon cancer surgery. Reagan's recovery was miserable: the 74-year-old president admitted to having had little sleep for days, in addition to immense physical discomfort. While doctors seemed confident that the surgery had been successful, the discovery of his localized cancer was a daunting realization for Reagan. Seeing the recovery process of other patients, as well as medical "experts" on television predicting his imminent death, dampened Reagan's typically optimistic outlook. These factors were bound to contribute to psychological distress in the midst of an already distressing situation.
Additionally, Reagan's invocation of the 25th Amendment prior to the surgery was a risky and unprecedented decision that largely escaped public notice at the time. While it lasted only slightly longer than the procedure itself (approximately seven hours and 54 minutes), this temporary transfer of power was never formally recognized by the White House. It was later revealed that this decision was made on the grounds that "Mr. Reagan and his advisors did not want his actions to establish a definition of incapacitation that would bind future presidents." Reagan expressed this transfer of power in two identical letters that were sent to the Speaker of the House of Representatives, Rep. Thomas P. "Tip" O'Neill, and the president pro tempore of the Senate, Sen. Strom Thurmond. While the President was recovering in the hospital, McFarlane met with him and told him that representatives from Israel had contacted the National Security Agency to pass on confidential information from what Reagan later described as the "moderate" Iranian faction headed by Rafsanjani, which opposed the Ayatollah's hardline anti-American policies. McFarlane's visit to Reagan's hospital room was the first by an administration official other than Donald Regan since the surgery. The meeting took place five days after the surgery and only three days after doctors gave the news that his polyp had been malignant. The three participants of this meeting had very different recollections of what was discussed during its 23-minute duration. Months later, Reagan even stated that he "had no recollection of a meeting in the hospital in July with McFarlane and that he had no notes which would show such a meeting." This is not surprising considering the possible short- and long-term effects of anesthesia on patients over the age of 60, in addition to his already weakened physical and mental state.
According to Reagan, these Iranians sought to establish a quiet relationship with the United States before establishing formal relations upon the death of the aging Ayatollah. In Reagan's account, McFarlane told Reagan that the Iranians, to demonstrate their seriousness, offered to persuade the Hezbollah militants to release the seven U.S. hostages. McFarlane met with the Israeli intermediaries; Reagan claimed that he allowed this because he believed that establishing relations with a strategically located country, and preventing the Soviet Union from doing the same, was a beneficial move. Although Reagan claimed that the arms sales were to a "moderate" faction of Iranians, the Walsh Iran/Contra Report states that the arms sales were "to Iran" itself, which was under the control of the Ayatollah. Following the Israeli–U.S. meeting, Israel requested permission from the United States to sell a small number of BGM-71 TOW antitank missiles to Iran, claiming that this would aid the "moderate" Iranian faction by demonstrating that the group actually had high-level connections to the U.S. government. Reagan initially rejected the plan, until Israel sent information to the United States showing that the "moderate" Iranians were opposed to terrorism and had fought against it. Now having a reason to trust the "moderates", Reagan approved the transaction, which was meant to be between Israel and the "moderates" in Iran, with the United States reimbursing Israel. In his 1990 autobiography An American Life, Reagan claimed that he was deeply committed to securing the release of the hostages; it was this compassion that supposedly motivated his support for the arms initiatives. The president requested that the "moderate" Iranians do everything in their capability to free the hostages held by Hezbollah.
Reagan always publicly insisted after the scandal broke in late 1986 that the purpose behind the arms-for-hostages trade was to establish a working relationship with the "moderate" faction associated with Rafsanjani in order to facilitate the reestablishment of the American–Iranian alliance after the soon-expected death of Khomeini, to end the Iran–Iraq War, and to end Iranian support for Islamic terrorism, while downplaying the freeing of the hostages in Lebanon as a secondary issue. By contrast, when testifying before the Tower Commission, Reagan declared that the hostage issue was the main reason for selling arms to Iran. The following arms were supplied to Iran:
First arms sales in 1981 (see above)
20 August 1985 – 86 TOW anti-tank missiles
14 September 1985 – 408 more TOWs
24 November 1985 – 18 Hawk anti-aircraft missiles
17 February 1986 – 500 TOWs
27 February 1986 – 500 TOWs
24 May 1986 – 508 TOWs, 240 Hawk spare parts
4 August 1986 – More Hawk spares
28 October 1986 – 500 TOWs
First arms sale The first arms sales to Iran began in 1981, though the official paper trail has them beginning in 1985 (see above). On 20 August 1985, Israel sent 96 American-made TOW missiles to Iran through the arms dealer Manucher Ghorbanifar. Subsequently, on 14 September 1985, 408 more TOW missiles were delivered. On 15 September 1985, following the second delivery, Reverend Benjamin Weir was released by his captors, the Islamic Jihad Organization. On 24 November 1985, 18 Hawk anti-aircraft missiles were delivered. Modifications in plans Robert McFarlane resigned on 4 December 1985, stating that he wanted to spend more time with his family, and was replaced by Admiral John Poindexter. Two days later, Reagan met with his advisors at the White House, where a new plan was introduced. This called for a slight change in the arms transactions: instead of the weapons going to the "moderate" Iranian group, they would go to "moderate" Iranian army leaders.
As each weapons delivery was made from Israel by air, hostages held by Hezbollah would be released. Israel would continue to be reimbursed by the United States for the weapons. Though staunchly opposed by Secretary of State George Shultz and Secretary of Defense Caspar Weinberger, the plan was authorized by Reagan, who stated that, "We were not trading arms for hostages, nor were we negotiating with terrorists". In his notes of a meeting held in the White House on 7 December 1985, Weinberger wrote that he told Reagan the plan was illegal: I argued strongly that we have an embargo that makes arms sales to Iran illegal and President couldn't violate it and that 'washing' transactions through Israel wouldn't make it legal. Shultz, Don Regan agreed. Weinberger's notes have Reagan saying he "could answer charges of illegality but he couldn't answer charge that 'big strong President Reagan' passed up a chance to free hostages." The now-retired McFarlane flew to London to meet with the Israelis and Ghorbanifar in an attempt to persuade Ghorbanifar to use his influence to secure the hostages' release before any arms transactions occurred; this plan was rejected by Ghorbanifar. On the day of McFarlane's resignation, Oliver North, a military aide to the United States National Security Council (NSC), proposed a new plan for selling arms to Iran, which included two major adjustments: instead of selling arms through Israel, the sale was to be direct at a markup; and a portion of the proceeds would go to the Contras, Nicaraguan paramilitary fighters waging guerrilla warfare against the Sandinista government, which had taken power after an election marked by irregularities.
The dealings with the Iranians were conducted via the NSC by Admiral Poindexter and his deputy Colonel North, with the American historians Malcolm Byrne and Peter Kornbluh writing that Poindexter granted much power to North "...who made the most of the situation, often deciding important matters on his own, striking outlandish deals with the Iranians, and acting in the name of the president on issues that were far beyond his competence. All of these activities continued to take place within the framework of the president's broad authorization. Until the press reported on the existence of the operation, nobody in the administration questioned the authority of Poindexter's and North's team to implement the president's decisions". North proposed a $15 million markup, while the contracted arms broker Ghorbanifar added a 41% markup of his own. Other members of the NSC were in favor of North's plan; with broad support, Poindexter authorized it without notifying President Reagan, and it went into effect. At first, the Iranians refused to buy the arms at the inflated price because of the excessive markup imposed by North and Ghorbanifar. They eventually relented, and in February 1986, 1,000 TOW missiles were shipped to the country. From May to November 1986, there were additional shipments of miscellaneous weapons and parts. Both the sale of weapons to Iran and the funding of the Contras circumvented not only stated administration policy but also the Boland Amendment. Administration officials argued that even though Congress had restricted funds for the Contras, the President (or in this case the administration) could carry on by seeking alternative means of funding, such as private entities and foreign governments. Funding from one foreign country, Brunei, was botched when North's secretary, Fawn Hall, transposed the digits of North's Swiss bank account number.
A Swiss businessman, suddenly $10 million richer, alerted the authorities of the mistake. The money was eventually returned to the Sultan of Brunei, with interest. On 7 January 1986, John Poindexter proposed to Reagan a modification of the approved plan: instead of negotiating with the "moderate" Iranian political group, the United States would negotiate with "moderate" members of the Iranian government. Poindexter told Reagan that Ghorbanifar had important connections within the Iranian government, so with the hope of the release of the hostages, Reagan approved this plan as well. Throughout February 1986, weapons |
sequels and spinoff games in the Zork series, The Hitchhiker's Guide to the Galaxy by Douglas Adams, and A Mind Forever Voyaging. In its first few years of operation, text adventures proved to be a huge revenue stream for the company. Whereas most computer games of the era would achieve initial success and then suffer a significant drop-off in sales, Infocom titles continued to sell for years. Employee Tim Anderson said of their situation, "It was phenomenal – we had a basement that just printed money." By 1983 Infocom was perhaps the dominant computer-game company; for example, all ten of its games were on the Softsel top 40 list of best-selling computer games for the week of December 12, 1983, with Zork in first place and two others in the top ten. In late 1984, management declined an offer by publisher Simon & Schuster to acquire Infocom for $28 million, far more than the board of directors' valuation of $10–12 million. In 1993, Computer Gaming World described this era as the "Cambridge Camelot, where the Great Underground Empire was formed". As an in-joke, the number 69,105 made a number of appearances in Infocom games. Reception Infocom games were popular, InfoWorld said, in part because "in offices all over America (more than anyone realizes) executives and managers are playing games on their computers". An estimated 25% had a computer game "hidden somewhere in their drawers", Inc. reported, and they preferred Infocom adventures to arcade games. The company stated that year that 75% of players were over 25 years old and that 80% were men; more women played its games than those of other companies, especially the mysteries. Most players enjoyed reading books; in 1987 president Joel Berez stated, "[Infocom's] audience tends to be composed of heavy readers. We sell to the minority that does read". A 1996 article in Next Generation said Infocom's "games were noted for having more depth than any other adventure games, before or since."
Three components proved key to Infocom's success: marketing strategy, rich storytelling and feelies. First, whereas most game developers sold their games mainly in software stores, Infocom also distributed their games via bookstores. Infocom's products appealed more to those with expensive computers, such as the Apple Macintosh, IBM PC, and Commodore Amiga. Berez stated that "there is no noticeable correlation between graphics machines and our penetration. There is a high correlation between the price of the machine and our sales ... people who are putting more money into their machines tend to buy more of our software". Since their games were text-based, patrons of bookstores were drawn to the Infocom games as they were already interested in reading. Unlike most computer software, Infocom titles were distributed under a no-returns policy, which allowed the company to earn money from a single game over a longer period of time. Next, Infocom titles featured strong storytelling and rich descriptions, eschewing the inherent restrictions of graphic displays and allowing users to imagine the lavish and exotic locations the games described. Infocom's puzzles were unique in that they were usually tightly integrated into the storyline, and rarely did gamers feel like they were being made to jump through one arbitrary hoop after another, as was the case in many of the competitors' games. The puzzles were generally logical but also required close attention to the clues and hints given in the story, causing many gamers to keep copious notes as they went along. Sometimes, though, Infocom threw in puzzles just for the humor of it—if the user never ran into these, they could still finish the game. Discovering these Easter eggs was satisfying for some fans of the games. For example, one popular Easter egg was in the Enchanter game, which involves collecting magic spells to use in accomplishing the quest.
One of these is a summoning spell, which the player needs to use to summon certain characters at different parts of the game. At one point the game mentions the "Implementers" who were responsible for creating the land of Zork. If the player tries to summon the Implementers, the game produces a vision of Dave Lebling and Marc Blank at their computers, surprised at this "bug" in the game and working feverishly to fix it. Third, the inclusion of "feelies"—imaginative props and extras tied to the game's theme—also served as a form of copy protection against copyright infringement. Some games were unsolvable without the extra content provided with the boxed game. And because of the cleverness and uniqueness of the feelies, users rarely felt that they were an intrusion or inconvenience, as was the case with most of the other copy-protection schemes of the time. Although Infocom started out with Zork, and although the Zork world was the centerpiece of their product line throughout the Zork and Enchanter series, the company quickly branched out into a wide variety of story lines: fantasy, science fiction, mystery, horror, historical adventure, children's stories, and others that defied easy categorization. In an attempt to reach out to female customers, Infocom also produced Plundered Hearts, which cast the gamer in the role of the heroine of a swashbuckling adventure on the high seas, and which required the heroine to use more feminine tactics to win the game, since hacking-and-slashing was not a very ladylike way to behave. To compete with the Leisure Suit Larry-style games that were also appearing, Infocom came out with Leather Goddesses of Phobos in 1986, which featured "tame", "suggestive", and "lewd" playing modes. It included among its "feelies" a "scratch-and-sniff" card with six odors that corresponded to cues given to the player during the game.
Invisiclues Originally, hints for the games were provided as a "pay-per-hint" service created by Mike Dornbrook, called the Zork Users Group (ZUG). Dornbrook also started Infocom's customer newsletter, The New Zork Times, which discussed game hints and previewed and showcased new products. The pay-per-hint service eventually led to the development of InvisiClues: books with hints, maps, clues, and solutions for puzzles in the games. The answers to the puzzles were printed in invisible ink that became visible only when rubbed with a special marker provided with each book. Usually, two or more answers were given for each question a gamer might have. The first answer would provide a subtle hint, the second a less subtle hint, and so forth until the last one gave an explicit walkthrough. Gamers could thus reveal only the hints they needed to play the game. To prevent the questions themselves (printed in normal ink) from giving away too much information about the game, a number of misleading fake questions were included in every InvisiClues book. Answers to these questions would begin with misleading or impossible-to-follow suggestions before the final answer revealed that the question was a fake (and usually admonished the player that revealing random clues from the book would spoil their enjoyment of the game). The InvisiClues books regularly ranked at or near the top of best-seller lists for computer books. In the Solid Gold line of re-releases, InvisiClues were integrated into the game. By typing "HINT" twice, the player would open a screen of possible topics and could then reveal one hint at a time for each puzzle, just like in the books. Interactive fiction Infocom also released a small number of "interactive fiction paperbacks" (gamebooks), which were based on the games and featured the ability to choose a different path through the story.
Similar to the Choose Your Own Adventure series, every couple of pages the book would give the reader the chance to make a choice, such as which direction they wanted to go or how they wanted to respond to another character. The reader would then choose one of the given answers and turn to the appropriate page. These books, however, never sold particularly well and quickly disappeared from the bookshelves. Cornerstone Despite their success with computer games, Vezza and other company founders hoped to produce successful business software like that of Lotus Development, which was also founded by people from MIT and located in the same building as Infocom. Lotus released its first product, 1-2-3, in January 1983; within a year it had earned $53 million, compared to Infocom's $6 million. In 1982 Infocom started putting resources into a new division to produce business products. In 1985 they released a database product, Cornerstone, aimed at capturing the then-booming database market for small businesses. Though this application was hailed upon its release for its ease of use, it sold only 10,000 copies, not enough to cover the development expenses. The program failed for a number of reasons. Although it was packaged in a slick hard plastic carrying case and was a very good database for personal and home use, it was originally priced at US$495 per copy and used copy-protected disks. Another serious miscalculation was that the program did not include any kind of scripting language, so it was not promoted by any of the database consultants that small businesses typically hired to create and maintain their database applications.
Reviewers were also consistently disappointed that Infocom—noted for the natural language syntax of its games—did not include a natural language query ability, which had been the most anticipated feature for this database application. In a final disappointment, Cornerstone was available only for IBM PCs; while Cornerstone had been programmed with its own virtual machine for maximum portability, it was never ported to any of the other platforms that Infocom supported for its games, so that feature had become essentially irrelevant. And because Cornerstone used this virtual machine for its processing, it suffered from slow, lackluster performance. Changing marketplace Sales of Infocom's games benefited significantly from the portability offered by running on top of a virtual machine. InfoWorld wrote in 1984 that "the company always sells games for computers you don't normally think of as game machines, such as the DEC Rainbow or the Texas Instruments Professional Computer. This is one of the key reasons for the continued success of old titles such as Zork." Dornbrook estimated that year that of the 1.8 million home computers in America, half a million homes had Infocom games ("all, if you count the pirated games"). Computer companies sent prototypes of new systems to encourage Infocom to port the Z-machine to them; the virtual machine supported more than 20 different systems, including orphaned computers for which Infocom games were among the only commercial products. The company produced the only third-party games available for the Macintosh at launch, and Michael Berlyn promised that all 13 of its games would be available for the Atari ST within one month of its release. The virtual machine significantly slowed Cornerstone's execution speed, however. Businesses were moving en masse to the IBM PC platform by that time, so portability was no longer a significant differentiator.
Infocom had sunk much of the money from game sales into Cornerstone; this, in addition to a slump in computer game sales, left the company in a very precarious financial position. By the time Infocom removed the copy protection and reduced the price to less than $100, it was too late, and the market had moved on to other database solutions. By 1982 the market was moving to graphic adventures. Infocom was interested in producing them, that year proposing to Penguin Software that Antonio Antiochia, author of its Transylvania, provide artwork. Within Infocom the game designers tended to oppose graphics, while marketing and business employees supported using them for the company to remain competitive. The partnership negotiations failed, in part because of the difficulty of adding graphics to the Z-machine, and Infocom instead began a series of advertisements mocking graphical games as "graffiti" compared to the human imagination. The marketing campaign was very successful, and Infocom's success led to other companies like Broderbund and Electronic Arts also releasing their own
(later Sierra Entertainment); Ken and Roberta Williams played the game and decided to design one of their own, but with graphics. Commercial era Adventure International was founded by Scott Adams (not to be confused with the creator of Dilbert). In 1978, Adams wrote Adventureland, which was loosely patterned after (the original) Colossal Cave Adventure. He took out a small ad in a computer magazine to promote and sell Adventureland, thus creating the first commercial adventure game. In 1979 he founded Adventure International, the first commercial publisher of interactive fiction. That same year, Dog Star Adventure was published in source code form in SoftSide, spawning legions of similar games in BASIC. The largest company producing works of interactive fiction was Infocom, which created the Zork series and many other titles, among them Trinity, The Hitchhiker's Guide to the Galaxy and A Mind Forever Voyaging. In June 1977, Marc Blank, Bruce K. Daniels, Tim Anderson, and Dave Lebling began writing the mainframe version of Zork (also known as Dungeon) at the MIT Laboratory for Computer Science, directly inspired by Colossal Cave Adventure. The game was programmed in a computer language called MDL, a variant of LISP. Implementer was the self-given name of the creators of the Zork series of text adventures; for this reason, game designers and programmers in interactive fiction are sometimes referred to as implementers, often shortened to Imps, rather than writers. In early 1979, the game was completed. Ten members of the MIT Dynamic Modeling Group went on to join Infocom when it was incorporated later that year. In order to make its games as portable as possible, Infocom developed the Z-machine, a custom virtual machine that could be implemented on a large number of platforms and took standardized "story files" as input. In a non-technical sense, Infocom was responsible for developing the interactive style that would be emulated by many later interpreters.
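The portability idea behind the Z-machine can be sketched loosely as follows. This is an illustrative toy, not the real Z-machine: the opcodes, story format, and function names here are invented. The point it demonstrates is that only the small interpreter is platform-specific, while the same "story file" runs unchanged on every machine that has an interpreter.

```python
# Toy story-file interpreter (invented; the real Z-machine uses a binary
# bytecode format with a far richer instruction set). A "story" here is
# just a list of (opcode, argument) pairs.

def run_story(story):
    """Interpret a story file and return the text it prints."""
    output = []
    pc = 0  # program counter into the story file
    while pc < len(story):
        op, arg = story[pc]
        if op == "PRINT":
            output.append(arg)
        elif op == "JUMP":
            pc = arg  # transfer control to another instruction
            continue
        elif op == "QUIT":
            break
        pc += 1
    return " ".join(output)

# The same story file runs anywhere this interpreter has been ported.
story_file = [
    ("PRINT", "West of House"),
    ("PRINT", "You are standing in an open field."),
    ("QUIT", None),
]
```

Porting a game to a new computer then means reimplementing only `run_story` for that platform; the story files themselves need no changes, which is why Infocom could support more than 20 systems.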
The Infocom parser was widely regarded as the best of its era. It accepted complex, complete-sentence commands like "put the blue book on the writing desk" at a time when most of its competitors' parsers were restricted to simple two-word verb-noun combinations such as "put book". The parser was actively upgraded with new features like undo and error correction, and later games would 'understand' multiple-sentence input: 'pick up the gem and put it in my bag. take the newspaper clipping out of my bag then burn it with the book of matches'. Infocom and other companies offered optional commercial feelies (physical props associated with a game). The tradition of 'feelies' (and the term itself) is believed to have originated with Deadline (1982), the third Infocom title after Zork I and II. When the game was being written, it was not possible to include all of the information in the limited (80KB) disk space, so Infocom created the first feelies for this game: extra items that gave more information than could be included within the digital game itself. These included police interviews, the coroner's findings, letters, crime scene evidence and photos of the murder scene. These materials were very difficult for others to copy or otherwise reproduce, and many included information that was essential to completing the game. Seeing the potential benefits of aiding game-play immersion and providing a measure of creative copy protection that doubled as a deterrent to software piracy, Infocom and, later, other companies began creating feelies for numerous titles. In 1987, Infocom released a special version of the first three Zork titles together with plot-specific coins and other trinkets. This concept would be expanded as time went on, such that later game feelies would contain passwords, coded instructions, page numbers, or other information required to successfully complete the game.
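The gap between the era's two-word parsers and Infocom-style full-sentence commands can be illustrated with a toy sketch. This code is invented for illustration; Infocom's actual parser ran inside the Z-machine and was far more sophisticated.

```python
# Contrast of the two parsing styles described above (all names invented).

ARTICLES = {"the", "a", "an"}
PREPOSITIONS = {"on", "in", "under", "with", "to"}

def parse_two_word(command):
    """Era-typical parser: keeps only the first verb and first noun-ish word."""
    words = [w for w in command.lower().split() if w not in ARTICLES]
    if not words:
        return None
    verb = words[0]
    noun = words[1] if len(words) > 1 else None
    return (verb, noun)

def parse_sentence(command):
    """Fuller parser: verb, direct object, preposition, indirect object."""
    words = [w for w in command.lower().split() if w not in ARTICLES]
    verb, rest = words[0], words[1:]
    direct, indirect, prep = [], [], None
    target = direct
    for w in rest:
        if prep is None and w in PREPOSITIONS:
            prep = w
            target = indirect  # subsequent words belong to the indirect object
        else:
            target.append(w)
    return {
        "verb": verb,
        "direct_object": " ".join(direct) or None,
        "preposition": prep,
        "indirect_object": " ".join(indirect) or None,
    }
```

On "put the blue book on the writing desk", the two-word parser is left with just a verb and one stray adjective, while the sentence parser recovers the verb, both multi-word objects, and the preposition relating them.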
1980s United States Interactive fiction became a standard product for many software companies. By 1982 Softline wrote that "the demands of the market are weighted heavily toward hi-res graphics" in games like Sierra's The Wizard and the Princess and its imitators. Such graphic adventures became the dominant form of the genre on computers with graphics, like the Apple II. By 1982 Adventure International began releasing versions of its games with graphics. The company went bankrupt in 1985. Synapse Software and Acornsoft also closed in 1985, leaving Infocom as the leading company producing text-only adventure games on the Apple II, with sophisticated parsers and writing, and still advertising its lack of graphics as a virtue. The company was bought by Activision in 1986 after the failure of Cornerstone, Infocom's database software program, and stopped producing text adventures a few years later. Soon after, Telaium/Trillium also closed. Outside the United States Probably the first commercial work of interactive fiction produced outside the U.S. was the dungeon crawl game Acheton, produced in Cambridge, England, and first commercially released by Acornsoft (later expanded and reissued by Topologika). Other leading companies in the UK were Magnetic Scrolls and Level 9 Computing. Also worthy of mention are Delta 4, Melbourne House, and the homebrew company Zenobi. In the early 1980s Edu-Ware also produced interactive fiction for the Apple II, identified by the "if" graphic displayed on startup. Their titles included the Prisoner and Empire series (Empire I: World Builders, Empire II: Interstellar Sharks, Empire III: Armageddon). In 1981, CE Software published SwordThrust as a commercial successor to the Eamon gaming system for the Apple II. SwordThrust and Eamon were simple two-word parser games with many role-playing elements not available in other interactive fiction.
While SwordThrust published seven different titles, it was vastly overshadowed by the non-commercial Eamon system, which allowed private authors to publish their own titles in the series. By March 1984, there were 48 titles published for the Eamon system (and over 270 titles in total as of March 2013). In Italy, interactive fiction games were mainly published and distributed on tapes included with various magazines. The largest number of games were published in the two magazines Viking and Explorer, with versions for the main 8-bit home computers (Sinclair ZX Spectrum, Commodore 64 and MSX). The software house producing those games was Brainstorm Enterprise, and the most prolific IF author was Bonaventura Di Bello, who produced 70 games in the Italian language. The wave of interactive fiction in Italy lasted for a couple of years thanks to the various magazines promoting the genre, then faded; it remains a topic of interest today for a small group of fans and lesser-known developers, celebrated on Web sites and in related newsgroups. In Spain, interactive fiction was considered a minority genre, and was not very successful. The first Spanish interactive fiction commercially released was Yenght in 1983, by Dinamic Software, for the ZX Spectrum. Later on, in 1987, the same company produced an interactive fiction about Don Quijote. After several other attempts, the company Aventuras AD, which emerged from Dinamic, became the main interactive fiction publisher in Spain, with titles including a Spanish adaptation of Colossal Cave Adventure, an adaptation of the Spanish comic El Jabato, and mainly the Ci-U-Than trilogy, composed of La diosa de Cozumel (1990), Los templos sagrados (1991) and Chichen Itzá (1992).
During this period, the Club de Aventuras AD (CAAD), the world's main Spanish-speaking community around interactive fiction, was founded; after the end of Aventuras AD in 1992, the CAAD continued on its own, first with its own magazine, and then, with the advent of the Internet, with the launch of an active online community that still produces non-commercial interactive fiction today. During the 1990s Legend Entertainment was founded by Bob Bates and Mike Verdu in 1989. It started out from the ashes of Infocom. The text adventures produced by Legend Entertainment used (high-resolution) graphics as well as sound. Some of their titles include Eric the Unready, the Spellcasting series and Gateway (based on Frederik Pohl's novels). The last text adventure created by Legend Entertainment was Gateway II (1992), while the last game ever created by Legend was Unreal II: The Awakening (2003), a well-known first-person shooter action game using the Unreal Engine for both impressive graphics and realistic physics. In 2004, Legend Entertainment was acquired by Atari, which published Unreal II and released it for both Microsoft Windows and Microsoft's Xbox. Many other companies such as Level 9 Computing, Magnetic Scrolls, Delta 4 and Zenobi had closed by 1992. In 1991 and 1992, Activision released The Lost Treasures of Infocom in two volumes, a collection containing most of Infocom's games, followed in 1996 by Classic Text Adventure Masterpieces of Infocom. Modern era After the decline of the commercial interactive fiction market in the 1990s, an online community eventually formed around the medium. In 1987, the Usenet newsgroup rec.arts.int-fiction was created, and was soon followed by rec.games.int-fiction. By custom, the topic of rec.arts.int-fiction is interactive fiction authorship and programming, while rec.games.int-fiction encompasses topics related to playing interactive fiction games, such as hint requests and game reviews.
As of late 2011, discussions between writers have mostly moved from rec.arts.int-fiction to the Interactive Fiction Community Forum. One of the most important early developments was the reverse-engineering of Infocom's Z-Code format and Z-Machine virtual machine in 1987 by a group of enthusiasts called the InfoTaskForce, and the subsequent development of an interpreter for Z-Code story files. As a result, it became possible to play Infocom's work on modern computers. For years, amateurs within the IF community produced interactive fiction works of relatively limited scope using the Adventure Game Toolkit and similar tools. The breakthrough that allowed the interactive fiction community to truly prosper, however, was the creation and distribution of two sophisticated development systems. In 1987, Michael J. Roberts released TADS, a programming language designed to produce works of interactive fiction. In 1993, Graham Nelson released Inform, a programming language and set of libraries which compiled to a Z-Code story file. Each of these systems allowed anyone with sufficient time and dedication to create a game, and caused a growth boom in the online interactive fiction community. Despite the lack of commercial support, the availability of high-quality tools allowed enthusiasts of the genre to develop new high-quality games. Competitions such as the annual Interactive Fiction Competition for short works, the Spring Thing for longer works, and the XYZZY Awards further helped to improve the quality and complexity of the games. Modern games go much further than the original "Adventure" style, improving upon Infocom games (which relied extensively on puzzle solving, and to a lesser extent on communication with non-player characters) to include experimentation with writing and story-telling techniques. While the majority of modern interactive fiction is developed and distributed for free, there are some commercial endeavors.
In 1998, Michael Berlyn, a former Implementor at Infocom, started a new game company, Cascade Mountain Publishing, whose goal was to publish interactive fiction. Despite the interactive fiction community providing social and financial backing, Cascade Mountain Publishing went out of business in 2000. Other commercial endeavours include Peter Nepstad's 1893: A World's Fair Mystery, several games by Howard Sherman published as Malinche Entertainment, The General Coffee Company's Future Boy!, Cypher, a graphically enhanced cyberpunk game, and various titles by Textfyre. Emily Short was commissioned to develop the game City of Secrets, but the project fell through and she ended up releasing it herself. Artificial Intelligence The increased effectiveness of natural language generation in artificial intelligence (AI) has led to instances of interactive fiction which use AI to dynamically generate new, open-ended content, instead of being constrained to pre-written material. The most notable example of this is AI Dungeon, released in 2019, which generates content using the GPT-3 (previously GPT-2) natural-language-generating neural network created by OpenAI. Notable works 1970s Colossal Cave Adventure, by Will Crowther and Don Woods, was the first text adventure ever made. Adventureland, by Scott Adams, is considered one of the defining works of interactive fiction. The Zork series by Infocom (1979 onwards) was the first text adventure to see widespread commercial release. 1980s Softporn Adventure, by Chuck Benton, a popular adult game that inspired the Leisure Suit Larry video game series. The Hobbit, by Philip Mitchell and Veronika Megler of Beam Software (1982), was an early reinterpretation of an existing novel into interactive fiction, with several independent non-player characters. Planetfall, by Steve Meretzky of Infocom (1983), featured Floyd the robot, whom Allen Varney claimed to be the first game character who evoked a strong emotional commitment from players.
Suspended by Michael Berlyn was an Infocom game with a large vocabulary and unique character personalities. The Hitchhiker's Guide to the Galaxy, by Douglas Adams and Steve Meretzky of Infocom (1984), involved the author of the original work in the reinterpretation. A Mind Forever Voyaging, by Steve Meretzky of Infocom (1985), a story-heavy, puzzle-light game often touted as Infocom's first serious work of science fiction. The Pawn, by Magnetic Scrolls, was known for understanding complex instructions like 'PLANT THE POT PLANT IN THE PLANT POT WITH THE TROWEL'. Silicon Dreams, by Level 9 Computing (1986), a trilogy of interactive science fiction games. Leather Goddesses of Phobos by Steve Meretzky, a risqué sci-fi parody from Infocom. Amnesia (1987), by Hugo Award and Nebula Award winning science fiction and fantasy author Thomas M. Disch, a text-only adventure published by Electronic Arts. 1990s Curses, by Graham Nelson (1993), the first game written in the Inform programming language. Considered one of the first "modern" games to meet the high standards set by Infocom's best titles. DUNNET, by Ron Schnell (1992 eLisp port from the 1983 MacLisp original), a surreal text adventure that has shipped with GNU Emacs since 1994, and thus comes with Mac OS X and most Linux distributions; often mistaken for an easter egg. Anchorhead, by Michael S. Gentry (1998), is a highly rated horror story inspired by H. P. Lovecraft's Cthulhu Mythos. Photopia, by Adam Cadre (1998), one of the first almost entirely puzzle-free games. It won the annual Interactive Fiction Competition in 1998. Spider and Web, by Andrew Plotkin (1998), an award-winning espionage story with many twists and turns. Varicella by Adam Cadre (1999). It won four XYZZY Awards in 1999, including the XYZZY Award for Best Game, and had a scholarly essay written about it. 2000s Galatea, by Emily Short (2000). Galatea is focused entirely on interaction with the animated statue of the same name. Galatea has one of the most complex interaction systems for a non-player character in an interactive fiction game. Adam Cadre called Galatea "the best NPC ever". 9:05 by Adam Cadre. It is commonly seen as an easy gateway for people to get involved with interactive fiction. Slouching Towards Bedlam, by Star C. Foster and Daniel Ravipinto (2003). Set in a steampunk setting, the game integrates meta-game functionality (saving, restoring, restarting) into the game world itself. The game won four XYZZY Awards. The Dreamhold, by Andrew Plotkin (2004).
Designed as a tutorial game for those new to IF, it provides an extensive help section. Façade by Michael Mateas, Andrew Stern and John Grieve (2005). An interactive drama using natural language processing. Fallen London, also known as Echo Bazaar, an open-world work of interactive fiction by Failbetter Games. Lost Pig by Admiral Jota (2007). A comedic interactive fiction about an orc finding a pig that escaped from his farm. It won best game, best writing, best individual non-player character, and best individual player character in the 2007 XYZZY Awards. 2010s Howling Dogs by Porpentine (2012), a hypertext fiction that explores escapism. It is considered one of the most prominent Twine games and was in the 2017 Whitney Biennial. A Dark Room by Michael Townsend (2013), a text-based mystery story and idle game. The story is told only through environmental cues, rather than dialogue or exposition. 80 Days by inkle (2014). An interactive adventure based on the novel by Jules Verne, it was nominated by TIME as its Game of the Year for 2014. Depression Quest by Zoë Quinn (2014). A text-based game in which players take the place of a person living with depression.
and official scorers. The most widespread system is the "three-man system", which uses one referee and two linesmen. A less commonly used system is the two-referee and one-linesman system. This system is close to the regular three-man system except for a few procedural changes. Beginning with the National Hockey League, a number of leagues have implemented the "four-official system", where an additional referee is added to aid in the calling of penalties normally difficult to assess by one referee. The system has been used in every NHL game since 2001, at IIHF World Championships, the Olympics and in many professional and high-level amateur leagues in North America and Europe. Officials are selected by the league they work for. Amateur hockey leagues use guidelines established by national organizing bodies as a basis for choosing their officiating staffs. In North America, the national organizing bodies Hockey Canada and USA Hockey approve officials according to their experience level as well as their ability to pass rules knowledge and skating ability tests. Hockey Canada has officiating levels I through VI. USA Hockey has officiating levels 1 through 4. Equipment Since men's ice hockey is a full-contact sport, body checks are allowed, so injuries are a common occurrence. Protective equipment is mandatory and is enforced in all competitive situations. This includes a helmet with either a visor or a full face mask, shoulder pads, elbow pads, mouth guard, protective gloves, heavily padded shorts (also known as hockey pants) or a girdle, athletic cup (also known as a jock, for males; and jill, for females), shin pads, skates, and (optionally) a neck protector. Goaltenders use different equipment. With hockey pucks approaching them at speeds of up to 100 mph (160 km/h), they must wear equipment with more protection.
Goaltenders wear specialized goalie skates (built more for side-to-side movement than for skating forwards and backwards), a jock or jill, large leg pads (there are size restrictions in certain leagues), a blocking glove, a catching glove, a chest protector, a goalie mask, and a large jersey. Goaltenders' equipment has continually become larger, leading to fewer goals in each game and many official rule changes. Hockey skates are optimized for physical acceleration, speed and manoeuvrability. This includes rapid starts, stops, turns, and changes in skating direction. In addition, they must be rigid and tough to protect the skater's feet from contact with other skaters, sticks, pucks, the boards, and the ice itself. Rigidity also improves the overall manoeuvrability of the skate. Blade length, thickness (width), and curvature (rocker/radius, front to back; and radius of hollow, across the blade width) are quite different from those of speed or figure skates. Hockey players usually adjust these parameters based on their skill level, position, and body type. The blade of most skates is only a few millimetres thick. The hockey stick consists of a long, relatively wide, and slightly curved flat blade attached to a shaft. The curve itself has a big impact on performance. A deep curve makes it easier to lift the puck, while a shallow curve allows for easier backhand shots. The flex of the stick also affects performance. Typically, a less flexible stick is meant for a stronger player, since the player is looking for a balanced flex that allows the stick to bend easily while still having a strong "whip-back" that sends the puck flying at high speed. The stick is quite distinct from sticks in other sports and is most suited to hitting and controlling the flat puck. Its unique shape contributed to the early development of the game. Injury Ice hockey is a full-contact sport and carries a high risk of injury.
Players move at high speed, and much of the game revolves around physical contact between the players. Skate blades, hockey sticks, shoulder contact, hip contact, and hockey pucks can all potentially cause injuries. Compared to athletes who play other sports, ice hockey players are at higher risk of overuse injuries and injuries caused by early sports specialization by teenagers. According to the Hughston Health Alert, "Lacerations to the head, scalp, and face are the most frequent types of injury [in hockey]." One of the leading causes of head injury is body checking from behind. Due to the danger of delivering a check from behind, many leagues, including the NHL, have made this a major and game misconduct penalty (called "boarding"). Another type of check that accounts for many of the player-to-player contact concussions is a check to the head resulting in a misconduct penalty (called "head contact"). In recent years, the NHL has implemented new rules which penalize and suspend players for illegal checks to the head, as well as checks to unsuspecting players. Studies show that ice hockey causes 44.3% of all traumatic brain injuries among Canadian children. Tactics Checking An important defensive tactic is checking: attempting to take the puck from an opponent or to remove the opponent from play. Stick checking, sweep checking, and poke checking are legal uses of the stick to obtain possession of the puck. The neutral zone trap is designed to isolate the puck carrier in the neutral zone, preventing him from entering the offensive zone. Body checking is using one's shoulder or hip to strike an opponent who has the puck or who is the last to have touched it (the last player to have touched the puck is still legally "in possession" of it, although a penalty is generally called if he is checked more than two seconds after his last touch). Body checking is also a penalty in certain leagues in order to reduce the chance of injury to players.
Often the term checking is used to refer to body checking, with its true definition generally only propagated among fans of the game. Offensive tactics Offensive tactics include improving a team's position on the ice by advancing the puck out of one's zone towards the opponent's zone, progressively gaining lines: first one's own blue line, then the red line, and finally the opponent's blue line. NHL rules instituted for the 2006 season redefined the offside rule to make the two-line pass legal; a player may pass the puck from behind his own blue line, past both that blue line and the centre red line, to a player on the near side of the opponents' blue line. Offensive tactics are designed ultimately to score a goal by taking a shot. When a player purposely directs the puck towards the opponent's goal, he or she is said to "shoot" the puck. A deflection is a shot that redirects another player's shot or pass towards the goal by allowing the puck to strike the stick and carom towards the goal. A one-timer is a shot struck directly off a pass, without receiving the pass and shooting in two separate actions. Headmanning the puck, also known as breaking out, is the tactic of rapidly passing to the player farthest down the ice. Loafing, also known as cherry-picking, is when a player, usually a forward, skates behind an attacking team, instead of playing defence, in an attempt to create an easy scoring chance. A team that is losing by one or two goals in the last few minutes of play will often elect to pull the goalie; that is, remove the goaltender and replace him or her with an extra attacker on the ice in the hope of gaining enough advantage to score a goal. However, this is an act of desperation, as it sometimes leads to the opposing team extending their lead by scoring a goal in the empty net. One of the most important strategies for a team is their forecheck. Forechecking is the act of attacking the opposition in their defensive zone.
Forechecking is an important part of the dump and chase strategy (i.e. shooting the puck into the offensive zone and then chasing after it). Each team uses its own unique system, but the main ones are the 2–1–2, the 1–2–2, and the 1–4. The 2–1–2 is the most basic forecheck system, in which two forwards go in deep and pressure the opposition's defencemen, the third forward stays high, and the two defencemen stay at the blue line. The 1–2–2 is a more conservative system in which one forward pressures the puck carrier and the other two forwards cover the opposition's wingers, with the two defencemen staying at the blue line. The 1–4 is the most defensive forecheck system, referred to as the neutral zone trap, in which one forward applies pressure to the puck carrier around the opposition's blue line and the other four players stand roughly in a line by their own blue line, in the hope that the opposition will skate into one of them. Another strategy is the left wing lock, in which two forwards pressure the puck, and the left wing and the two defencemen stay at the blue line. There are many other smaller tactics used in the game of hockey. Cycling moves the puck along the boards in the offensive zone to create a scoring chance by making defenders tired or moving them out of position. Pinching is when a defenceman pressures the opposition's winger in the offensive zone when they are breaking out, attempting to stop their attack and keep the puck in the offensive zone. A saucer pass is a pass used when an opponent's stick or body is in the passing lane: the act of raising the puck over the obstruction and having it land on a teammate's stick. A deke, short for "decoy", is a feint with the body or stick to fool a defender or the goalie. Many modern players, such as Pavel Datsyuk, Sidney Crosby and Patrick Kane, have picked up the skill of "dangling", which is fancier deking and requires more stick-handling skill.
Fights Although fighting is officially prohibited in the rules, it is not an uncommon occurrence at the professional level, and its prevalence has been both a target of criticism and a considerable draw for the sport. At the professional level in North America fights are unofficially condoned. Enforcers and other players fight to demoralize the opposing players while exciting their own, as well as settling personal scores. A fight will also break out if one of the team's skilled players gets hit hard or someone receives what the team perceives as a dirty hit. The amateur game penalizes fisticuffs more harshly, as a player who receives a fighting major is also assessed at least a 10-minute misconduct penalty (NCAA and some Junior leagues) or a game misconduct penalty and suspension (high school and younger, as well as some casual adult leagues). Women's ice hockey The International Ice Hockey Federation (IIHF) holds the IIHF World Women's Championships tournaments in several divisions; championships are held annually, except that the top flight does not play in Olympic years. Body checking Body checking has been prohibited in women's ice hockey since the mid-1980s in Canada, a ban that subsequently spread internationally. Canada's Rhonda Leeman Taylor was responsible for banning body contact from all national women's ice hockey tournaments in Canada in 1983. Body checking was removed entirely from some of the women's hockey leagues in Canada in 1986, resulting in a substantial increase in female participation in ice hockey in Canada. Prior to this point, bodychecking had been a part of the women's game in most cases, including in Europe. It was not until after the 1990 Women's World Championship (sanctioned by the International Ice Hockey Federation) that body checking was eliminated from the women's ice hockey format internationally.
In current IIHF women's competition, body checking is considered an "illegal hit" and, at the referee's discretion, is punishable by a minor penalty, a major penalty and game misconduct, or a match penalty. Equipment Players in women's competition are required to wear protective full-face masks. At all levels, players must wear a pelvic protector, essentially the female equivalent of a jockstrap, known colloquially as a "jill" or "jillstrap". Other protective equipment for girls and women in ice hockey is sometimes specifically designed for the female body, such as shoulder pads designed to protect a woman's breast area without reducing mobility. History Women began playing the game of ice hockey in the late 19th century. Several games were recorded in the 1890s in Ottawa, Ontario, Canada. The women of Lord Stanley's family were known to participate in the game of ice hockey on the outdoor ice rink at Rideau Hall, the residence of Canada's Governor-General. The earliest available records of women's ice hockey were in the late 19th century in Canada. Much like the men's game, women had previously been playing a conglomeration of stick-and-ball ice games. As with men's hockey, the women's game developed at first without an organizing body. A tournament in 1902 between Montreal and Trois-Rivieres was billed as the first women's ice hockey championship tournament. Several tournaments, such as at the Banff Winter Carnival, were held in the early 20th century with numerous women's teams such as the Seattle Vamps and Vancouver Amazons. Organizations started to develop in the 1920s, such as the Ladies Ontario Hockey Association in Canada, and later, the Dominion Women's Amateur Hockey Association.
Starting in Canada in 1961, the women's game spread to more universities after the Fitness and Amateur Sport Act came into force, whereby the Government of Canada made an official commitment to "encourage, promote and develop fitness and amateur sport in Canada." Today, the women's game is played from youth through adult leagues, and at the university level in North America and internationally. There are two major professional women's hockey leagues: the Premier Hockey Federation (formerly the National Women's Hockey League), with teams in the United States and Canada, and the Zhenskaya Hockey League, with teams in Russia and China. In 2019, the Professional Women's Hockey Players Association was formed by over 150 players with the goal of creating a sustainable professional league for women's ice hockey in North America. Between 1995 and 2005 the number of participants increased by 400 percent. In 2011, Canada had 85,827 women players, the United States had 65,609, Finland 4,760, Sweden 3,075 and Switzerland 1,172. Women's ice hockey was added as a medal sport at the 1998 Winter Olympics in Nagano, Japan, eight years after the first world women's ice hockey championship in 1990. Prior to the professionalization of women's ice hockey in the 21st century, almost all professional women hockey players who played against men were goaltenders. No woman has ever played a full season in top-tier men's professional ice hockey. The United States Hockey League (USHL) welcomed the first female professional ice hockey player in 1969–70, when the Marquette Iron Rangers signed 18-year-old goaltender Karen Koch. Only one woman has ever played in the National Hockey League (NHL), goaltender Manon Rhéaume. Rhéaume played in NHL pre-season games as a goaltender for the Tampa Bay Lightning against the St. Louis Blues and the Boston Bruins. In 2003, Hayley Wickenheiser played with the Kirkkonummi Salamat in the Finnish men's Suomi-sarja league.
Women have occasionally competed in North American minor leagues: among them Rhéaume, and fellow goaltenders Kelly Dyer and Erin Whitten. Defenceman Angela Ruggiero became the first woman to actively play in a regular season professional hockey game in North America at a position other than goalie, playing in a single game for the Tulsa Oilers of the Central Hockey League. Women's World Championship The 1989 IIHF European Women Championships in West Germany was the first European championship held in women's ice hockey and preceded the eventual International Ice Hockey Federation-sanctioned Women's World Championship for ice hockey. The first world ice hockey championship for women was the 1990 IIHF World Women's Championship. Leagues and championships The following is a list of professional ice hockey leagues by attendance: Club competition North America The NHL is the best attended and most popular ice hockey league in the world, and is among the major professional sports leagues in the United States and Canada. The league's history began after Canada's National Hockey Association decided to disband in 1917; the result was the creation of the National Hockey League with four teams. The league expanded to the United States beginning in 1924 and had as many as 10 teams before contracting to six teams by 1942–43. In 1967, the NHL doubled in size to 12 teams, undertaking one of the greatest expansions in professional sports history. A few years later, in 1972, a new 12-team league, the World Hockey Association (WHA), was formed, and its ensuing rivalry with the NHL caused an escalation in players' salaries. In 1979, the 17-team NHL merged with the WHA, creating a 21-team league. By 2017, the NHL had expanded to 31 teams, and after a realignment in 2013, these teams were divided into two conferences and four divisions. The league expanded to 32 teams in 2021.
The American Hockey League (AHL) is the primary developmental professional league for players aspiring to enter the NHL. It comprises 31 teams from the United States and Canada. It is run as a farm league to the NHL, with the vast majority of AHL players under contract to an NHL team. The ECHL (called the East Coast Hockey League before the 2003–04 season) is a mid-level minor league in the United States with a few players under contract to NHL or AHL teams. As of 2019, there are three minor professional leagues with no NHL affiliations: the Federal Prospects Hockey League (FPHL), the Ligue Nord-Américaine de Hockey (LNAH), and the Southern Professional Hockey League (SPHL). U Sports ice hockey is the highest level of play at the Canadian university level under the auspices of U Sports, Canada's governing body for university sports. As these players compete at the university level, they are subject to a standard five-year eligibility rule. In the United States especially, college hockey is popular and the best university teams compete in the annual NCAA Men's Ice Hockey Championship. The American Collegiate Hockey Association is composed of college teams at the club level. In Canada, the Canadian Hockey League is an umbrella organization comprising three major junior leagues: the Ontario Hockey League, the Western Hockey League, and the Quebec Major Junior Hockey League. It attracts players from Canada, the United States and Europe. The major junior players are considered amateurs as they […] organized game with codified rules which today is ice hockey. Name In England, field hockey has historically been called simply hockey, and it was field hockey that the word's first appearances in print referred to. The first known mention spelled as hockey occurred in the 1773 book Juvenile Sports and Pastimes, to Which Are Prefixed, Memoirs of the Author: Including a New Mode of Infant Education, by Richard Johnson (Pseud.
Master Michel Angelo), whose chapter XI was titled "New Improvements on the Game of Hockey". The 1527 Statute of Galway banned a sport called "hokie", described as "the hurling of a little ball with sticks or staves". A form of this word was thus being used in the 16th century, though much removed from its current usage. The belief that hockey was mentioned in a 1363 proclamation by King Edward III of England is based on modern translations of the proclamation, which was originally in Latin and explicitly forbade the games. According to the Austin Hockey Association, the word puck derives from the Scottish Gaelic or the Irish ('to poke, punch or deliver a blow'): "... The blow given by a hurler to the ball with his hurley is always called a puck." Precursors Stick-and-ball games date back to pre-Christian times. In Europe, these games included the Irish game of hurling, the closely related Scottish game of shinty and versions of field hockey (including bandy ball, played in England). IJscolf, a game resembling colf on an ice-covered surface, was popular in the Low Countries between the Middle Ages and the Dutch Golden Age. It was played with a wooden curved bat (called a colf or kolf), a wooden or leather ball and two poles (or nearby landmarks), with the objective to hit the chosen point using the fewest strokes. A similar game (knattleikr) had been played for a thousand years or more by the Scandinavian peoples, as documented in the Icelandic sagas. Polo has been referred to as "hockey on horseback". In England, field hockey developed in the late 17th century, and there is evidence that some games of field hockey took place on the ice. These games of "hockey on ice" were sometimes played with a bung (a plug of cork or oak used as a stopper on a barrel). William Pierre Le Cocq stated, in a 1799 letter written in Chesham, England: I must now describe to you the game of Hockey; we have each a stick turning up at the end. We get a bung.
There are two sides one of them knocks one way and the other side the other way. If any one of the sides makes the bung reach that end of the churchyard it is victorious. A 1797 engraving unearthed by Swedish sport historians Carl Gidén and Patrick Houda shows a person on skates with a stick and bung on the River Thames, probably in December 1796. British soldiers and immigrants to Canada and the United States brought their stick-and-ball games with them and played them on the ice and snow of winter. In 1825, John Franklin wrote "The game of hockey played on the ice was the morning sport" on Great Bear Lake near the town of Déline during one of his Arctic expeditions. A mid-1830s watercolour portrays New Brunswick lieutenant-governor Archibald Campbell and his family with British soldiers on skates playing a stick-on-ice sport. Captain R.G.A. Levinge, a British Army officer in New Brunswick during Campbell's time, wrote about "hockey on ice" on Chippewa Creek (a tributary of the Niagara River) in 1839. In 1843 another British Army officer in Kingston, Ontario wrote, "Began to skate this year, improved quickly and had great fun at hockey on the ice." An 1859 Boston Evening Gazette article referred to an early game of hockey on ice in Halifax that year. An 1835 painting by John O'Toole depicts skaters with sticks and bung on a frozen stream in the American state of West Virginia, at that time still part of Virginia. In the same era, the Mi'kmaq, a First Nations people of the Canadian Maritimes, also had a stick-and-ball game. Canadian oral histories describe a traditional stick-and-ball game played by the Mi'kmaq, and Silas Tertius Rand (in his 1894 Legends of the Micmacs) describes a Mi'kmaq ball game known as tooadijik. Rand also describes a game played (probably after European contact) with hurleys, known as wolchamaadijik. Sticks made by the Mi'kmaq were used by the British for their games. 
Early 19th-century paintings depict shinney (or "shinny"), an early form of hockey with no standard rules which was played in Nova Scotia. Many of these early games absorbed the physical aggression of what the Onondaga called dehuntshigwa'es (lacrosse). Shinney was played on the St. Lawrence River at Montreal and Quebec City, and in Kingston and Ottawa. The number of players was often large. To this day, shinney (derived from "shinty") is a popular Canadian term for an informal type of hockey, either ice or street hockey. Thomas Chandler Haliburton, in The Attache: Second Series (published in 1844) imagined a dialogue, between two of the novel's characters, which mentions playing "hurly on the long pond on the ice". This has been interpreted by some historians from Windsor, Nova Scotia as reminiscent of the days when the author was a student at King's College School in that town in 1810 and earlier. Based on Haliburton's quote, claims were made that modern hockey was invented in Windsor, Nova Scotia, by King's College students and perhaps named after an individual ("Colonel Hockey's game"). Others claim that the origins of hockey come from games played in the area of Dartmouth and Halifax in Nova Scotia. However, several references have been found to hurling and shinty being played on the ice long before the earliest references from both Windsor and Dartmouth/Halifax, and the word "hockey" was used to designate a stick-and-ball game at least as far back as 1773, as it was mentioned in the book Juvenile Sports and Pastimes, to Which Are Prefixed, Memoirs of the Author: Including a New Mode of Infant Education by Richard Johnson (Pseud. Master Michel Angelo), whose chapter XI was titled "New Improvements on the Game of Hockey". Initial development The Canadian city of Montreal, Quebec, became the centre of the development of contemporary ice hockey, and is recognized as the birthplace of organized ice hockey. 
On March 3, 1875, the first organized indoor game was played at Montreal's Victoria Skating Rink between two nine-player teams, including James Creighton and several McGill University students. Instead of a ball or bung, the game featured a "flat circular piece of wood" (to keep it in the rink and to protect spectators). The goal posts were apart (today's goals are six feet wide). Some observers of the game at McGill made quick note of its surprisingly aggressive and violent nature. In 1876, games played in Montreal were "conducted under the 'Hockey Association' rules"; the Hockey Association was England's field hockey organization. In 1877, The Gazette (Montreal) published a list of seven rules, six of which were largely based on six of the Hockey Association's twelve rules, with only minor differences (even the word "ball" was kept); the one added rule explained how disputes should be settled. The McGill University Hockey Club, the first ice hockey club, was founded in 1877 (followed by the Quebec Hockey Club in 1878 and the Montreal Victorias in 1881). In 1880, the number of players per side was reduced from nine to seven. The number of teams grew, enough to hold the first "world championship" of ice hockey at Montreal's annual Winter Carnival in 1883. The McGill team won the tournament and was awarded the Carnival Cup. The game was divided into thirty-minute halves. The positions were now named: left and right wing, centre, rover, point and cover-point, and goaltender. In 1886, the teams competing at the Winter Carnival organized the Amateur Hockey Association of Canada (AHAC), and played a season comprising "challenges" to the existing champion. In Europe, it was previously believed that in 1885 the Oxford University Ice Hockey Club was formed to play the first Ice Hockey Varsity Match against traditional rival Cambridge in St. Moritz, Switzerland; however, this is now considered to have been a game of bandy. 
A similar claim which turned out to be accurate is that the oldest rivalry in ice hockey history is between Queen's University at Kingston and Royal Military College of Kingston, Ontario, with the first known match taking place in 1886. In 1888, the Governor General of Canada, Lord Stanley of Preston (whose sons and daughter were hockey enthusiasts), first attended the Montreal Winter Carnival tournament and was impressed with the game. In 1892, realizing that there was no recognition for the best team in Canada (although a number of leagues had championship trophies), he purchased a silver bowl for use as a trophy. The Dominion Hockey Challenge Cup (which later became known as the Stanley Cup) was first awarded in 1893 to the Montreal Hockey Club, champions of the AHAC; it continues to be awarded annually to the National Hockey League's championship team. Stanley's son Arthur helped organize the Ontario Hockey Association, and Stanley's daughter Isobel was one of the first women to play ice hockey. By 1893, there were almost a hundred teams in Montreal alone; in addition, there were leagues throughout Canada. Winnipeg hockey players used cricket pads to better protect the goaltender's legs; they also introduced the "scoop" shot, or what is now known as the wrist shot. William Fairbrother, from Ontario, Canada is credited with inventing the ice hockey net in the 1890s. Goal nets became a standard feature of the Canadian Amateur Hockey League (CAHL) in 1900. Left and right defence began to replace the point and cover-point positions in the OHA in 1906. American financier Malcolm Greene Chace is credited with being the father of hockey in the United States. In 1892, Chace put together a team of men from Yale, Brown, and Harvard, and toured across Canada as captain of this team. The first collegiate hockey match in the United States was played between Yale and Johns Hopkins in Baltimore in 1893. In 1896, the first ice hockey league in the US was formed. 
The US Amateur Hockey League was founded in New York City, shortly after the opening of the artificial-ice St. Nicholas Rink. By 1898 the following leagues had already formed: the Amateur Hockey League of New York, the Amateur Hockey Association of Canada, and the Ontario Hockey Association. The 1898 Spalding Athletic Library book includes rules and results for each league. Lord Stanley's five sons were instrumental in bringing ice hockey to Europe, defeating a court team (which included the future Edward VII and George V) at Buckingham Palace in 1895. By 1903, a five-team league had been founded. The Ligue Internationale de Hockey sur Glace was founded in 1908 to govern international competition, and the first European championship was won by Great Britain in 1910. The sport grew further in Europe in the 1920s, after ice hockey became an Olympic sport. Many bandy players switched to hockey so as to be able to compete in the Olympics. In the mid-20th century, the Ligue became the International Ice Hockey Federation. As the popularity of ice hockey as a spectator sport grew, earlier rinks were replaced by larger rinks. Most of the early indoor ice rinks have been demolished; Montreal's Victoria Rink, built in 1862, was demolished in 1925. Many older rinks succumbed to fire, such as Denman Arena, Dey's Arena, Quebec Skating Rink and Montreal Arena, a hazard of the buildings' wood construction. The Stannus Street Rink in Windsor, Nova Scotia (built in 1897) may be the oldest still in existence; however, it is no longer used for hockey. The Aberdeen Pavilion (built in 1898) in Ottawa was used for hockey in 1904 and is the oldest existing facility that has hosted Stanley Cup games. The oldest indoor ice hockey arena still in use today for hockey is Boston's Matthews Arena, which was built in 1910. It has been modified extensively several times in its history and is used today by Northeastern University for hockey and other sports. 
It was the original home rink of the Boston Bruins professional team, itself the oldest United States-based team in the NHL, starting play in the league in what was then called Boston Arena on December 1, 1924. Madison Square Garden in New York City, built in 1968, is the oldest continuously-operating arena in the NHL. Professional era While scattered incidents of players taking pay to play hockey occurred as early as the 1890s, those found to have done so were banned from playing in the amateur leagues which dominated the sport. By 1902, the Western Pennsylvania Hockey League was the first to employ professionals. The league joined with teams in Michigan and Ontario to form the first fully professional league—the International Professional Hockey League (IPHL)—in 1904. The WPHL and IPHL hired players from Canada; in response, Canadian leagues began to pay players (who played with amateurs). The IPHL, cut off from its largest source of players, disbanded in 1907. By then, several professional hockey leagues were operating in Canada (with leagues in Manitoba, Ontario and Quebec). In 1910, the National Hockey Association (NHA) was formed in Montreal. The NHA would further refine the rules: dropping the rover position, dividing the game into three 20-minute periods and introducing minor and major penalties. After re-organizing as the National Hockey League in 1917, the league expanded into the United States, starting with the Boston Bruins in 1924. Professional hockey leagues developed later in Europe, but amateur leagues leading to national championships were in place. One of the first was the Swiss National League A, founded in 1916. Today, professional leagues have been introduced in most countries of Europe. Top European leagues include the Kontinental Hockey League, the Czech Extraliga, the Finnish Liiga and the Swedish Hockey League. 
Game While the general characteristics of the game remain constant, the exact rules depend on the particular code of play being used. The two most important codes are those of the IIHF and the NHL. Both of these codes, and others, originated from Canadian rules of ice hockey of the early 20th century. Ice hockey is played on a hockey rink. During normal play, there are six players on ice skates on the ice per side, one of them being the goaltender. The objective of the game is to score goals by shooting a hard vulcanized rubber disc, the puck, into the opponent's goal net at the opposite end of the rink. The players use their sticks to pass or shoot the puck. With certain restrictions, players may redirect the puck with any part of their body. Players may not hold the puck in their hand and are prohibited from using their hands to pass the puck to their teammates unless they are in the defensive zone. Players may, however, knock the puck out of the air to themselves with an open hand. Players are prohibited from kicking the puck into the opponent's goal, though unintentional redirections off the skate are permitted. Players may not intentionally bat the puck into the net with their hands. Hockey is an off-side game, meaning that forward passes are allowed, unlike in rugby. Before the 1930s, hockey was an on-side game, meaning that only backward passes were allowed. Those rules emphasized individual stick-handling to drive the puck forward. With the arrival of offside rules, the forward pass transformed hockey into a true team sport, where individual performance diminished in importance relative to team play, which could now be coordinated over the entire surface of the ice rather than only among rearward players. The six players on each team are typically divided into three forwards, two defencemen, and a goaltender. The term skaters typically applies to all players except goaltenders. The forward positions consist of a centre and two wingers: a left wing and a right wing.
Forwards often play together as units or lines, with the same three forwards always playing together. The defencemen usually stay together as a pair generally divided between left and right. Left and right side wingers or defencemen are generally positioned on the side on which they carry their stick. A substitution of an entire unit at once is called a line change. Teams typically employ alternate sets of forward lines and defensive pairings when short-handed or on a power play. The goaltender stands in a semi-circle called the crease, usually marked in blue, in the defensive zone, keeping pucks out of the goal. Substitutions are permitted at any time during the game, although during a stoppage of play the home team is permitted the final change. When players are substituted during play, it is called changing on the fly. An NHL rule added in the 2005–06 season prevents a team from changing their line after they ice the puck. The boards surrounding the ice help keep the puck in play and they can also be used as tools to play the puck. Players are permitted to bodycheck opponents into the boards to stop progress. The referees, linesmen and the outsides of the goal are "in play" and do not stop the game when the puck or players either bounce into or collide with them. Play can be stopped if the goal is knocked out of position. Play often proceeds for minutes without interruption. After a stoppage, play is restarted with a faceoff. Two players face each other and an official drops the puck to the ice, where the two players attempt to gain control of the puck. Markings (circles) on the ice indicate the locations for the faceoff and guide the positioning of players. Three major rules of play in ice hockey limit the movement of the puck: offside, icing, and the puck going out of play. A player is offside if he enters his opponent's zone before the puck itself.
Under many situations, a player may not "ice the puck", which means shooting the puck all the way across both the centre line and the opponent's goal line. The puck goes out of play whenever it goes past the perimeter of the ice rink (onto the player benches, over the glass, or onto the protective netting above the glass), and a stoppage of play is called by the officials using whistles. It does not matter if the puck comes back onto the ice surface from outside of the rink, because the puck is considered dead once it leaves the perimeter of the rink. The referee may also blow the whistle for a stoppage in play if the puck is jammed along the boards while two or more players battle for it for a prolonged time, or if the puck becomes stuck on the back of either net for a period of time. Under IIHF rules, each team may carry a maximum of 20 players and two goaltenders on their roster. NHL rules restrict the total number of players per game to 18, plus two goaltenders. In the NHL, the players are usually divided into four lines of three forwards, and into three pairs of defencemen. On occasion, teams may elect to substitute an extra defenceman for a forward. The seventh defenceman may play as a substitute defenceman, spend the game on the bench, or if a team chooses to play four lines then this seventh defenceman may see ice-time on the fourth line as a forward. Periods and overtime A professional game consists of three periods of twenty minutes, the clock running only when the puck is in play. The teams change ends after each period of play, including overtime. Recreational leagues and children's leagues often play shorter games, generally with three shorter periods of play. If a tie occurs in tournament play, as well as in the NHL playoffs, North Americans favour sudden death overtime, in which the teams continue to play twenty-minute periods until a goal is scored.
Up until the 1999–2000 season, regular-season NHL games were settled with a single five-minute sudden death period with five players (plus a goalie) per side, with both teams awarded one point in the standings in the event of a tie. With a goal, the winning team would be awarded two points and the losing team none (just as if they had lost in regulation). The total elapsed time from when the puck first drops is about 2 hours and 20 minutes for a 60-minute game. From the 1999–2000 until the 2003–04 seasons, the National Hockey League decided ties by playing a single five-minute sudden-death overtime period with each team having four skaters per side (plus the goalie). In the event of a tie, each team would still receive one point in the standings, but in the event of a victory the winning team would be awarded two points in the standings and the losing team one point. The idea was to discourage teams from playing for a tie, since previously some teams might have preferred a tie and one point to risking a loss and zero points. The exception to this rule is if a team opts to pull their goalie in exchange for an extra skater during overtime and is subsequently scored upon (an empty-net goal), in which case the losing team receives no points for the overtime loss. Since the 2015–16 season, the single five-minute sudden-death overtime session involves three skaters on each side. Since three skaters must always be on the ice in an NHL game, the consequences of penalties are slightly different from those during regulation play; any penalty during overtime that would result in a team losing a skater during regulation instead causes the other side to add a skater. Once the penalized team's penalty ends, the penalized skater exits the penalty box and the teams continue at 4-on-4 until the next stoppage of play, at which point the teams return to three skaters per side.
International play and several North American professional leagues, including the NHL (in the regular season), now use an overtime period identical to that from 1999–2000 to 2003–04, followed by a penalty shootout. If the score remains tied after the extra overtime period, the subsequent shootout consists of three players from each team taking penalty shots. After these six total shots, the team with the most goals is awarded the victory. If the score is still tied, the shootout then proceeds to sudden death. Regardless of the number of goals scored by either team during the shootout, the final score recorded will award the winning team one more goal than the score at the end of regulation time. In the NHL, if a game is decided in overtime or by a shootout, the winning team is awarded two points in the standings and the losing team is awarded one point. Ties no longer occur in the NHL. Overtime in the NHL playoffs differs from the regular season. In the playoffs there are no shootouts. If a game is tied after regulation, then a 20-minute period of 5-on-5 sudden-death overtime is added. If the game is still tied after the overtime, another period is added until a team scores, which wins the match. Since 2019, the IIHF World Championships and the gold medal game in the Olympics have used the same format, but with 3-on-3 overtime. Penalties In ice hockey, infractions of the rules lead to a play stoppage whereby the play is restarted at a faceoff. Some infractions result in a penalty on a player or team. In the simplest case, the offending player is sent to the penalty box and their team must play with one less player on the ice for a designated time. Minor penalties last for two minutes, major penalties last for five minutes, and a double minor penalty is two consecutive penalties of two minutes duration. A single minor penalty may be extended by two minutes for causing visible injury to the victimized player. This is usually when blood is drawn during high sticking.
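The regular-season points allocation described above lends itself to a compact summary. The sketch below is purely illustrative, assuming the current NHL system (two points for any win, one "loser point" for an overtime or shootout loss, none for a regulation loss); the function name is invented for this example.

```python
def standings_points(decided_in: str) -> tuple[int, int]:
    """Return (winner_points, loser_points) for one NHL regular-season game.

    decided_in: "regulation", "overtime", or "shootout".
    """
    if decided_in == "regulation":
        return (2, 0)  # a regulation loss earns no points
    if decided_in in ("overtime", "shootout"):
        return (2, 1)  # the losing team keeps one point for reaching overtime
    raise ValueError(f"unknown game result: {decided_in}")
```

So, for example, a team losing in a shootout still banks a point, which is exactly the incentive structure (and the end of ties) that the rule change was designed to create.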
Players may also be assessed personal extended penalties or game expulsions for misconduct in addition to the penalty or penalties their team must serve. The team that has been given a penalty is said to be playing short-handed while the opposing team is on a power play. A two-minute minor penalty is often charged for lesser infractions such as tripping, elbowing, roughing, high-sticking, delay of game, too many players on the ice, boarding, illegal equipment, charging (leaping into an opponent or body-checking him after taking more than two strides), holding, holding the stick (grabbing an opponent's stick), interference, hooking, slashing, kneeing, unsportsmanlike conduct (arguing a penalty call with the referee, extremely vulgar or inappropriate verbal comments), "butt-ending" (striking an opponent with the knob of the stick), "spearing" (jabbing an opponent with the blade of the stick), or cross-checking. As of the 2005–06 season, a minor penalty is also assessed for diving, where a player embellishes or simulates an offence. More egregious fouls may be penalized by a four-minute double-minor penalty, particularly those that injure the victimized player. Minor and double-minor penalties end either when the time runs out or when the other team scores during the power play. In the case of a goal scored during the first two minutes of a double-minor, the penalty clock is reset to two minutes upon the score, effectively expiring the first minor penalty. Five-minute major penalties are called for especially violent instances of most minor infractions that result in intentional injury to an opponent, or when a minor penalty results in visible injury (such as bleeding), as well as for fighting. Major penalties are always served in full; they do not terminate on a goal scored by the other team.
Major penalties assessed for fighting are typically offsetting, meaning neither team is short-handed and the players exit the penalty box upon a stoppage of play following the expiration of their respective penalties. The foul of boarding (defined as "check[ing] an opponent in such a manner that causes the opponent to be thrown violently in the boards") is penalized either by a minor or major penalty at the discretion of the referee, based on the severity of the hit. A minor or major penalty for boarding is often assessed when a player checks an opponent from behind and into the boards. Some varieties of penalty do not require the offending team to play a man short. Concurrent five-minute major penalties in the NHL usually result from fighting. In the case of two players being assessed five-minute fighting majors, both players serve five minutes without their team incurring a loss of player (both teams still have a full complement of players on the ice). This differs from the case of two players from opposing sides receiving minor penalties, at the same time or at any intersecting moment, for more common infractions. In this case, both teams will have only four skating players (not counting the goaltender) until one or both penalties expire (if one penalty expires before the other, the opposing team gets a power play for the remainder of the time); this applies regardless of current pending penalties. However, in the NHL, a team always has at least three skaters on the ice. Thus, ten-minute misconduct penalties are served in full by the penalized player, but his team may immediately substitute another player on the ice unless a minor or major penalty is assessed in conjunction with the misconduct (a two-and-ten or five-and-ten).
In this case, the team designates another player to serve the minor or major; both players go to the penalty box, but only the designee may not be replaced on the ice, and he is released upon the expiration of the two or five minutes, at which point the ten-minute misconduct begins. In addition, game misconducts are assessed for deliberate intent to inflict severe injury on an opponent (at the officials' discretion), or for a major penalty for a stick infraction or repeated major penalties. The offending player is ejected from the game and must immediately leave the playing surface (he does not sit in the penalty box); meanwhile, if an additional minor or major penalty is assessed, a designated player must serve out that segment of the penalty in the box (similar to the above-mentioned "two-and-ten"). In some rare cases, a player may receive up to nineteen minutes in penalties for one string of plays. This could involve receiving a four-minute double-minor penalty, getting in a fight with an opposing player who retaliates, and then receiving a game misconduct after the fight. In this case, the player is ejected and two teammates must serve the double-minor and major penalties. A penalty shot is awarded to a player when the illegal actions of another player stop a clear scoring opportunity, most commonly when the player is on a breakaway. A penalty shot allows the obstructed player to pick up the puck on the centre red line and attempt to score on the goalie with no other players on the ice, to compensate for the earlier missed scoring opportunity.
A penalty shot is also awarded for a defender other than the goaltender covering the puck in the goal crease, a goaltender intentionally displacing his own goal posts during a breakaway to avoid a goal, a defender intentionally displacing his own goal posts when there is less than two minutes to play in regulation time or at any point during overtime, or a player or coach intentionally throwing a stick or other object at the puck or the puck carrier in a way that disrupts a shot or pass play. Officials also stop play for puck movement violations, such as using one's hands to pass the puck in the offensive end, but no players are penalized for these offences. The sole exceptions are deliberately falling on or gathering the puck to the body, carrying the puck in the hand, and shooting the puck out of play in one's defensive zone (all penalized two minutes for delay of game). In the NHL, a unique penalty applies to goaltenders, who are forbidden to play the puck in the "corners" of the rink near their own net; doing so results in a two-minute penalty against the goaltender's team. The goaltender may play the puck only in the area in front of the goal line and immediately behind the net (marked by two red lines on either side of the net). An additional rule that has never been a penalty, but was an infraction in the NHL before recent rule changes, is the two-line offside pass. Prior to the 2005–06 NHL season, play was stopped when a pass from inside a team's defending zone crossed the centre line, with a face-off held in the defending zone of the offending team. The centre line is no longer used in the NHL to determine a two-line pass infraction, a change that the IIHF had adopted in 1998. Players are now able to make passes to teammates across both their own blue line and the centre red line.
The NHL has taken steps to speed up the game of hockey and create a game of finesse by reducing the number of illegal hits, fights, and "clutching and grabbing" that occurred in the past. Rules are now more strictly enforced, resulting in more penalties, which provides more protection to the players and facilitates more goals being scored. The governing body for amateur hockey in the United States has implemented many new rules to reduce the number of stick-on-body occurrences, as well as other detrimental and illegal facets of the game ("zero tolerance"). In men's hockey, but not in women's, a player may use his hip or shoulder to hit another player if the player has the puck or is the last to have touched it. This use of the hip and shoulder is called body checking. Not all physical contact is legal—in particular, hits from behind, hits to the head and most types of forceful stick-on-body contact are illegal. A delayed penalty call occurs when an offence is committed by the team that does not have possession of the puck. In this circumstance the team with possession of the puck is allowed to complete the play; that is, play continues until a goal is scored, a player on the opposing team gains control of the puck, or the team in possession commits an infraction or penalty of their own. Because the team on which the penalty was called cannot control the puck without stopping play, it is impossible for them to score a goal. In these cases, the team in possession of the puck can pull the goalie for an extra attacker without fear of being scored on. However, it is possible for the controlling team to mishandle the puck into their own net. If a delayed penalty is signalled and the team in possession scores, the penalty is still assessed to the offending player, but not served. In 2012, this rule was changed by the National Collegiate Athletic Association (NCAA) for college-level hockey in the United States.
In college games, the penalty is still enforced even if the team in possession scores. Officials A typical game of hockey is governed by two to four officials on the ice, charged with enforcing the rules of the game. There are typically two linesmen who are mainly responsible for calling infractions such as icing and offside.
devices (hubs, switches, routers) by various types of copper or fiber cable. 802.3 is a technology that supports the IEEE 802.1 network architecture. 802.3 also defines the LAN access method using CSMA/CD. Communication standards See also IEEE 802 IEEE 802.11, a set of wireless networking standards IEEE 802.16, a set of WiMAX standards IEEE Standards Association
to store values greater than 2^15−1, the program will fail on computers with 16-bit integers. That variable should have been declared as long, which has at least 32 bits on any computer. Programmers may also incorrectly assume that a pointer can be converted to an integer without loss of information, which may work on (some) 32-bit computers, but fail on 64-bit computers with 64-bit pointers and 32-bit integers. This issue is resolved by C99 in stdint.h in the form of intptr_t. Short integer A short integer can represent a whole number that may take less storage, while having a smaller range, compared with a standard integer on the same machine. In C, it is denoted by short. It is required to be at least 16 bits, and is often smaller than a standard integer, but this is not required. A conforming program can assume that it can safely store values between −(2^15−1) and 2^15−1, but it may not assume that the range isn't larger. In Java, a short is always a 16-bit integer. In the Windows API, the datatype SHORT is defined as a 16-bit signed integer on all machines. Long integer A long integer can represent a whole integer whose range is greater than or equal to that of a standard integer on the same machine. In C, it is denoted by long. It is required to be at least 32 bits, and may or may not be larger than a standard integer. A conforming program can assume that it can safely store values between −(2^31−1) and 2^31−1, but it may not assume that the range isn't larger. Long long In the C99 version of the C programming language and the C++11 version of C++, a long long type is supported that has double the minimum capacity of the standard long. This type is not supported by compilers that require C code to be compliant with the previous C++ standard, C++03, because the type did not exist in C++03.
For an ANSI/ISO compliant compiler, the minimum requirements for the specified ranges, that is, −(2^63−1) to 2^63−1 for signed and 0 to 2^64−1 for unsigned, must be fulfilled; however, extending this range is permitted. This can be an issue when exchanging code and data between platforms, or doing direct hardware access. Thus, there are several sets of headers providing platform-independent exact-width types. The C standard library provides stdint.h; this was introduced in C99 and C++11. Syntax Literals for integers can be written as regular Arabic numerals, consisting of a sequence of digits and with negation indicated by a minus sign before the value. However, most programming languages disallow use of commas or spaces for digit grouping. Examples of integer literals are: 42 10000 -233000 There are several alternate methods for writing integer literals in many programming languages: Most programming languages, especially those influenced by C, prefix an integer literal with 0X or 0x to represent a hexadecimal value, e.g. 0xDEADBEEF. Other languages may use a different notation, e.g. some assembly languages append an H or h to the end of a hexadecimal value. Perl, Ruby, Java, Julia, D, Rust and Python (starting from version 3.6) allow embedded underscores for clarity, e.g. 10_000_000, and fixed-form Fortran ignores embedded spaces in integer literals. In C and C++, a leading zero indicates an octal value, e.g. 0755. This was primarily intended to be used with Unix modes; however, it has been criticized because normal integers may also lead with zero. As such, Python, Ruby, Haskell, and OCaml prefix octal values with 0O or 0o, following the layout used by hexadecimal values. Several languages, including Java, C#, Scala, Python, Ruby, and OCaml, can represent binary values by prefixing a number with 0B or 0b.
See also Arbitrary-precision arithmetic Binary-coded decimal (BCD) C data types Integer overflow Signed number representations Notes
(base 16) or octal (base 8). Some programming languages also permit digit group separators. The internal representation of this datum is the way the value is stored in the computer's memory. Unlike mathematical integers, a typical datum in a computer has some minimum and maximum possible value. The most common representation of a positive integer is a string of bits, using the binary numeral system. The order of the memory bytes storing the bits varies; see endianness. The width or precision of an integral type is the number of bits in its representation. An integral type with n bits can encode 2^n numbers; for example an unsigned type typically represents the non-negative values 0 through 2^n−1. Other encodings of integer values to bit patterns are sometimes used, for example binary-coded decimal or Gray code, or as printed character codes such as ASCII. There are four well-known ways to represent signed numbers in a binary computing system. The most common is two's complement, which allows a signed integral type with n bits to represent numbers from −2^(n−1) through 2^(n−1)−1. Two's complement arithmetic is convenient because there is a perfect one-to-one correspondence between representations and values (in particular, no separate +0 and −0), and because addition, subtraction and multiplication do not need to distinguish between signed and unsigned types. Other possibilities include offset binary, sign-magnitude, and ones' complement. Some computer languages define integer sizes in a machine-independent way; others have varying definitions depending on the underlying processor word size. Not all language implementations define variables of all integer sizes, and defined sizes may not even be distinct in a particular implementation.
An integer in one programming language may be a different size in a different language or on a different processor. Common integral data types Different CPUs support different integral data types. Typically, hardware will support both signed and unsigned types, but only a small, fixed set of widths. The table above lists integral type widths that are supported in hardware by common processors. High-level programming languages provide more possibilities. It is common to have a 'double width' integral type that has twice as many bits as the biggest hardware-supported type. Many languages also have bit-field types (a specified number of bits, usually constrained to be less than the maximum hardware-supported width) and range types (that can represent only the integers in a specified range). Some languages, such as Lisp, Smalltalk, REXX, Haskell, Python, and Raku, support arbitrary-precision integers (also known as infinite-precision integers or bignums). Other languages that do not support this concept as a top-level construct may have libraries available to represent very large numbers using arrays of smaller variables, such as Java's BigInteger class or Perl's "bigint" package. These use as much of the computer's memory as is necessary to store the numbers; however, a computer has only a finite amount of storage, so they too can only represent a finite subset of the mathematical integers. These schemes support very large numbers; for example, one kilobyte of memory could be used to store numbers up to 2466 decimal digits long. A Boolean or Flag type is a type that can represent only two values: 0 and 1, usually identified with false and true respectively. This type can be stored in memory using a single bit, but is often given a full byte for convenience of addressing and speed of access. A four-bit quantity is known as a nibble (when eating, being smaller than a bite) or nybble (being a pun on the form of the word byte).
One nibble corresponds to one digit in hexadecimal and holds one digit or a sign code in binary-coded decimal. Bytes and octets The term byte initially meant 'the smallest addressable unit of memory'. In the past, 5-, 6-, 7-, 8-, and 9-bit bytes have all been used. There have also been computers that could address individual bits ('bit-addressed machines'), or that could only address 16- or 32-bit quantities ('word-addressed machines'). The term byte was usually not used at all in connection with bit- and word-addressed machines. The term octet always refers to an 8-bit quantity. It is mostly used in the field of computer networking, where computers with different byte widths might have to communicate. In modern usage byte almost invariably means eight bits, since all other sizes have fallen into disuse; thus byte has come to be synonymous with octet. Words The term 'word' is used for a small group of bits that are handled simultaneously by processors of a particular architecture. The size of a word is thus CPU-specific.
period There was a continuing opposition to images and their misuse within Christianity from very early times. "Whenever images threatened to gain undue influence within the church, theologians have sought to strip them of their power". Further, "there is no century between the fourth and the eighth in which there is not some evidence of opposition to images even within the Church". Nonetheless, popular favor for icons guaranteed their continued existence, while no systematic apologia for or against icons, or doctrinal authorization or condemnation of icons, yet existed. The use of icons was seriously challenged by Byzantine Imperial authority in the 8th century. Though by this time opposition to images was strongly entrenched in Judaism and Islam, attribution of the impetus toward an iconoclastic movement in Eastern Orthodoxy to Muslims or Jews "seems to have been highly exaggerated, both by contemporaries and by modern scholars". Though significant in the history of religious doctrine, the Byzantine controversy over images is not seen as of primary importance in Byzantine history. "Few historians still hold it to have been the greatest issue of the period..." The Iconoclastic Period began when images were banned by Emperor Leo III the Isaurian sometime between 726 and 730. Under his son Constantine V, a council forbidding image veneration was held at Hieria near Constantinople in 754. Image veneration was later reinstated by the Empress Regent Irene, under whom another council was held, reversing the decisions of the previous iconoclast council and taking its title as the Seventh Ecumenical Council. The council anathematized all who held to iconoclasm, i.e. those who held that veneration of images constitutes idolatry. The ban was then enforced again by Leo V in 815, and icon veneration was finally and decisively restored by Empress Regent Theodora in 843.
From then on all Byzantine coins had a religious image or symbol on the reverse, usually an image of Christ for larger denominations, with the head of the Emperor on the obverse, reinforcing the bond of the state and the divine order. Acheiropoieta The tradition of acheiropoieta (literally "not-made-by-hand") accrued to icons that are alleged to have come into existence miraculously, not by a human painter. Such images functioned as powerful relics as well as icons, and were naturally seen as especially authoritative as to the true appearance of the subject: naturally and especially because of the reluctance to accept mere human productions as embodying anything of the divine, a commonplace of Christian deprecation of man-made "idols". Like icons believed to be painted directly from the live subject, they therefore acted as important references for other images in the tradition. Besides the developed legend of the mandylion or Image of Edessa was the tale of the Veil of Veronica, whose very name signifies "true icon" or "true image", the fear of a "false image" remaining strong. Stylistic developments Although there are earlier records of their use, no panel icons earlier than the few from the 6th century preserved at the Greek Orthodox Saint Catherine's Monastery in Egypt survive, as the other examples in Rome have all been drastically over-painted. The surviving evidence for the earliest depictions of Christ, Mary and saints therefore comes from wall-paintings, mosaics and some carvings. They are realistic in appearance, in contrast to the later stylization. They are broadly similar in style, though often much superior in quality, to the mummy portraits done in wax (encaustic) and found at Fayyum in Egypt. As we may judge from such items, the first depictions of Jesus were generic rather than portrait images, generally representing him as a beardless young man.
It was some time before the earliest examples of the long-haired, bearded face that was later to become standardized as the image of Jesus appeared. When they did begin to appear there was still variation. Augustine of Hippo (354–430) said that no one knew the appearance of Jesus or that of Mary. However, Augustine was not a resident of the Holy Land and therefore was not familiar with the local populations and their oral traditions. Gradually, paintings of Jesus took on characteristics of portrait images. At this time the manner of depicting Jesus was not yet uniform, and there was some controversy over which of the two most common icons was to be favored. The first or "Semitic" form showed Jesus with short and "frizzy" hair; the second showed a bearded Jesus with hair parted in the middle, the manner in which the god Zeus was depicted. Theodorus Lector remarked that of the two, the one with short and frizzy hair was "more authentic". To support his assertion, he relates a story (excerpted by John of Damascus) that a pagan commissioned to paint an image of Jesus used the "Zeus" form instead of the "Semitic" form, and that as punishment his hands withered. Though their development was gradual, we can date the full-blown appearance and general ecclesiastical (as opposed to simply popular or local) acceptance of Christian images as venerated and miracle-working objects to the 6th century, when, as Hans Belting writes, "we first hear of the church's use of religious images". "As we reach the second half of the sixth century, we find that images are attracting direct veneration and some of them are credited with the performance of miracles". Cyril Mango writes, "In the post-Justinianic period the icon assumes an ever increasing role in popular devotion, and there is a proliferation of miracle stories connected with icons, some of them rather shocking to our eyes". 
However, the earlier references by Eusebius and Irenaeus indicate veneration of images, and reported miracles associated with them, as early as the 2nd century. Symbolism In the icons of Eastern Orthodoxy, and of the Early Medieval West, very little room is made for artistic license. Almost everything within the image has a symbolic aspect. Christ, the saints, and the angels all have halos. Angels (and often John the Baptist) have wings because they are messengers. Figures have consistent facial appearances, hold attributes personal to them, and use a few conventional poses. Colour plays an important role as well. Gold represents the radiance of Heaven; red, divine life. Blue is the color of human life; white is the Uncreated Light of God, used only for the resurrection and transfiguration of Christ. In icons of Jesus and Mary, Jesus wears a red undergarment with a blue outer garment (God become human) and Mary wears a blue undergarment with a red overgarment (a human granted gifts by God); thus the doctrine of deification is conveyed by icons. Letters are symbols too. Most icons incorporate some calligraphic text naming the person or event depicted. Even this is often presented in a stylized manner. Miracles In the Eastern Orthodox Christian tradition there are reports of particular, wonderworking icons that exude myrrh (fragrant, healing oil) or perform miracles upon petition by believers. When such reports are verified by the Orthodox hierarchy, they are understood as miracles performed by God through the prayers of the saint, rather than being magical properties of the painted wood itself. Theologically, all icons are considered to be sacred, and are miraculous by nature, being a means of spiritual communion between the heavenly and earthly realms. However, it is not uncommon for specific icons to be characterised as "miracle-working", meaning that God has chosen to glorify them by working miracles through them.
Such icons are often given particular names (especially those of the Virgin Mary), and even taken from city to city where believers gather to venerate them and pray before them. Islands like that of Tinos are renowned for possessing such "miraculous" icons, and are visited every year by thousands of pilgrims. Eastern Orthodox teaching The Eastern Orthodox view of the origin of icons is generally quite different from that of most secular scholars and from some in contemporary Roman Catholic circles: "The Orthodox Church maintains and teaches that the sacred image has existed from the beginning of Christianity", Léonid Ouspensky has written. Accounts that some non-Orthodox writers consider legendary are accepted as history within Eastern Orthodoxy, because they are a part of church tradition. Thus accounts such as that of the miraculous "Image Not Made by Hands", and the weeping and moving "Mother of God of the Sign" of Novgorod are accepted as fact: "Church Tradition tells us, for example, of the existence of an Icon of the Savior during His lifetime (the 'Icon-Made-Without-Hands') and of Icons of the Most-Holy Theotokos [Mary] immediately after Him." Eastern Orthodoxy further teaches that "a clear understanding of the importance of Icons" was part of the church from its very beginning, and has never changed, although explanations of their importance may have developed over time. This is because icon painting is rooted in the theology of the Incarnation (Christ being the eikon of God) which did not change, though its subsequent clarification within the Church occurred over the period of the first seven Ecumenical Councils. Also, icons served as tools of edification for the illiterate faithful during most of the history of Christendom. Thus, icons are words in painting; they refer to the history of salvation and to its manifestation in concrete persons. 
In the Orthodox Church "icons have always been understood as a visible gospel, as a testimony to the great things given man by God the incarnate Logos". In the Council of 860 it was stated that "all that is uttered in words written in syllables is also proclaimed in the language of colors". Eastern Orthodox find the first instance of an image or icon in the Bible when God made man in His own image (Septuagint Greek eikona), in Genesis 1:26–27. In Exodus, God commanded that the Israelites not make any graven image; but soon afterwards, he commanded that they make graven images of cherubim and other like things, both as statues and woven on tapestries. Later, Solomon included still more such imagery when he built the first temple. Eastern Orthodox believe these qualify as icons, in that they were visible images depicting heavenly beings and, in the case of the cherubim, used to indirectly indicate God's presence above the Ark. In the Book of Numbers it is written that God told Moses to make a bronze serpent, Nehushtan, and hold it up, so that anyone looking at the snake would be healed of their snake bites. In John 3, Jesus refers to the same serpent, saying that he must be lifted up in the same way that the serpent was. John of Damascus also regarded the brazen serpent as an icon. Further, Jesus Christ himself is called the "image of the invisible God" in Colossians 1:15, and is therefore in one sense an icon. As people are also made in God's image, people are also considered to be living icons, and are therefore "censed" along with painted icons during Orthodox prayer services. According to John of Damascus, anyone who tries to destroy icons "is the enemy of Christ, the Holy Mother of God and the saints, and is the defender of the Devil and his demons".
This is because the theology behind icons is closely tied to the Incarnational theology of the humanity and divinity of Jesus, so that attacks on icons typically have the effect of undermining or attacking the Incarnation of Jesus himself as elucidated in the Ecumenical Councils. Basil of Caesarea, in his writing On the Holy Spirit, says: "The honor paid to the image passes to the prototype". He also illustrates the concept by saying, "If I point to a statue of Caesar and ask you 'Who is that?', your answer would properly be, 'It is Caesar.' When you say such you do not mean that the stone itself is Caesar, but rather, the name and honor you ascribe to the statue passes over to the original, the archetype, Caesar himself." So it is with an icon. Thus to kiss an icon of Christ, in the Eastern Orthodox view, is to show love towards Christ Jesus himself, not mere wood and paint making up the physical substance of the icon. Worship of the icon as somehow entirely separate from its prototype is expressly forbidden by the Seventh Ecumenical Council. Icons are often illuminated with a candle or jar of oil with a wick. (Beeswax for candles and olive oil for oil lamps are preferred because they burn very cleanly, although other materials are sometimes used.) The illumination of religious images with lamps or candles is an ancient practice pre-dating Christianity. Icon painting tradition by region Byzantine Empire Of the icon painting tradition that developed in Byzantium, with Constantinople as the chief city, we have only a few icons from the 11th century and none preceding them, in part because of the Iconoclastic reforms during which many were destroyed or lost, and also because of plundering by the Republic of Venice in 1204 during the Fourth Crusade, and finally the Fall of Constantinople in 1453. 
It was only in the Komnenian period (1081–1185) that the cult of the icon became widespread in the Byzantine world, partly on account of the dearth of richer materials (such as mosaics, ivory, and vitreous enamels), but also because an iconostasis, a special screen for icons, was introduced then in ecclesiastical practice. The style of the time was severe, hieratic and distant.
Miracles

In the Eastern Orthodox Christian tradition there are reports of particular, wonderworking icons that exude myrrh (fragrant, healing oil), or perform miracles upon petition by believers. When such reports are verified by the Orthodox hierarchy, they are understood as miracles performed by God through the prayers of the saint, rather than being magical properties of the painted wood itself. Theologically, all icons are considered to be sacred, and are miraculous by nature, being a means of spiritual communion between the heavenly and earthly realms. However, it is not uncommon for specific icons to be characterised as "miracle-working", meaning that God has chosen to glorify them by working miracles through them. Such icons are often given particular names (especially those of the Virgin Mary), and even taken from city to city where believers gather to venerate them and pray before them. Islands like that of Tinos are renowned for possessing such "miraculous" icons, and are visited every year by thousands of pilgrims.

Eastern Orthodox teaching

The Eastern Orthodox view of the origin of icons is generally quite different from that of most secular scholars and from some in contemporary Roman Catholic circles: "The Orthodox Church maintains and teaches that the sacred image has existed from the beginning of Christianity", Léonid Ouspensky has written. Accounts that some non-Orthodox writers consider legendary are accepted as history within Eastern Orthodoxy, because they are a part of church tradition. Thus accounts such as that of the miraculous "Image Not Made by Hands", and the weeping and moving "Mother of God of the Sign" of Novgorod are accepted as fact: "Church Tradition tells us, for example, of the existence of an Icon of the Savior during His lifetime (the 'Icon-Made-Without-Hands') and of Icons of the Most-Holy Theotokos [Mary] immediately after Him."
Eastern Orthodoxy further teaches that "a clear understanding of the importance of Icons" was part of the church from its very beginning, and has never changed, although explanations of their importance may have developed over time. This is because icon painting is rooted in the theology of the Incarnation (Christ being the eikon of God) which did not change, though its subsequent clarification within the Church occurred over the period of the first seven Ecumenical Councils. Also, icons served as tools of edification for the illiterate faithful during most of the history of Christendom. Thus, icons are words in painting; they refer to the history of salvation and to its manifestation in concrete persons. In the Orthodox Church "icons have always been understood as a visible gospel, as a testimony to the great things given man by God the incarnate Logos". In the Council of 860 it was stated that "all that is uttered in words written in syllables is also proclaimed in the language of colors". Eastern Orthodox find the first instance of an image or icon in the Bible when God made man in His own image (Septuagint Greek eikona), in Genesis 1:26–27. In Exodus, God commanded that the Israelites not make any graven image; but soon afterwards, he commanded that they make graven images of cherubim and other like things, both as statues and woven on tapestries. Later, Solomon included still more such imagery when he built the first temple. Eastern Orthodox believe these qualify as icons, in that they were visible images depicting heavenly beings and, in the case of the cherubim, used to indirectly indicate God's presence above the Ark. In the Book of Numbers it is written that God told Moses to make a bronze serpent, Nehushtan, and hold it up, so that anyone looking at the snake would be healed of their snake bites. In John 3, Jesus refers to the same serpent, saying that he must be lifted up in the same way that the serpent was. 
John of Damascus also regarded the brazen serpent as an icon. Further, Jesus Christ himself is called the "image of the invisible God" in Colossians 1:15, and is therefore in one sense an icon. As people are also made in God's image, they too are considered to be living icons, and are therefore "censed" along with painted icons during Orthodox prayer services. According to John of Damascus, anyone who tries to destroy icons "is the enemy of Christ, the Holy Mother of God and the saints, and is the defender of the Devil and his demons". This is because the theology behind icons is closely tied to the Incarnational theology of the humanity and divinity of Jesus, so that attacks on icons typically have the effect of undermining or attacking the Incarnation of Jesus himself as elucidated in the Ecumenical Councils. Basil of Caesarea, in his writing On the Holy Spirit, says: "The honor paid to the image passes to the prototype". He also illustrates the concept by saying, "If I point to a statue of Caesar and ask you 'Who is that?', your answer would properly be, 'It is Caesar.' When you say such you do not mean that the stone itself is Caesar, but rather, the name and honor you ascribe to the statue passes over to the original, the archetype, Caesar himself." So it is with an icon. Thus to kiss an icon of Christ, in the Eastern Orthodox view, is to show love towards Christ Jesus himself, not mere wood and paint making up the physical substance of the icon. Worship of the icon as somehow entirely separate from its prototype is expressly forbidden by the Seventh Ecumenical Council. Icons are often illuminated with a candle or jar of oil with a wick. (Beeswax for candles and olive oil for oil lamps are preferred because they burn very cleanly, although other materials are sometimes used.) The illumination of religious images with lamps or candles is an ancient practice pre-dating Christianity.
Icon painting tradition by region

Byzantine Empire

Of the icon painting tradition that developed in Byzantium, with Constantinople as the chief city, we have only a few icons from the 11th century and none preceding them, in part because of the Iconoclastic reforms during which many were destroyed or lost, and also because of plundering by the Republic of Venice in 1204 during the Fourth Crusade, and finally the Fall of Constantinople in 1453. It was only in the Komnenian period (1081–1185) that the cult of the icon became widespread in the Byzantine world, partly on account of the dearth of richer materials (such as mosaics, ivory, and vitreous enamels), but also because an iconostasis, a special screen for icons, was introduced then in ecclesiastical practice. The style of the time was severe, hieratic and distant. In the late Komnenian period this severity softened, and emotion, formerly avoided, entered icon painting. Major monuments of this change include the murals at Daphni Monastery (c. 1100) and the Church of St. Panteleimon near Skopje (1164). The Theotokos of Vladimir (c. 1115) is probably the most representative example of the new trend towards spirituality and emotion. The tendency toward emotionalism in icons continued in the Palaiologan period, which began in 1261. Palaiologan art reached its pinnacle in mosaics such as those of Chora Church. In the last half of the 14th century, Palaiologan saints were painted in an exaggerated manner, very slim and in contorted positions, in a style known as Palaiologan Mannerism, of which Ochrid's Annunciation is a superb example. After 1453, the Byzantine tradition was carried on in regions previously influenced by its religion and culture: in the Balkans, Russia, and other Slavic countries, Georgia and Armenia in the Caucasus, and among Eastern Orthodox minorities in the Islamic world.
In the Greek-speaking world Crete, ruled by Venice until the mid-17th century, was an important centre of painted icons, as home of the Cretan School, exporting many to Europe.

Crete

Crete was under Venetian control from 1204 and became a thriving center of art, eventually with a Scuola di San Luca, or organized painters' guild, the Guild of Saint Luke, on Western lines. Cretan painting was heavily patronized both by Catholics of Venetian territories and by Eastern Orthodox. For ease of transport, Cretan painters specialized in panel paintings, and developed the ability to work in many styles to fit the taste of various patrons. El Greco, who moved to Venice after establishing his reputation in Crete, is the most famous artist of the school; he continued to use many Byzantine conventions in his works. In 1669 the city of Heraklion, on Crete, which at one time boasted at least 120 painters, finally fell to the Turks, and from that time Greek icon painting went into a decline, with a revival attempted in the 20th century by art reformers such as Photis Kontoglou, who emphasized a return to earlier styles.

Russia

Russian icons are typically paintings on wood, often small, though some in churches and monasteries may be as large as a table top. Many religious homes in Russia have icons hanging on the wall in the krasny ugol, the "red" corner (see Icon corner). There is a rich history and elaborate religious symbolism associated with icons. In Russian churches, the nave is typically separated from the sanctuary by an iconostasis, a wall of icons. The use and making of icons entered Kievan Rus' following its conversion to Orthodox Christianity from the Eastern Roman (Byzantine) Empire in 988 AD. As a general rule, these icons strictly followed models and formulas hallowed by usage, some of which had originated in Constantinople.
As time passed, the Russians, notably Andrei Rublev and Dionisius, widened the vocabulary of iconic types and styles far beyond anything found elsewhere. The personal, improvisatory and creative traditions of Western European religious art are largely lacking in Russia before the 17th century, when Simon Ushakov's painting became strongly influenced by religious paintings and engravings from Protestant as well as Catholic Europe. In the mid-17th century, changes in liturgy and practice instituted by Patriarch Nikon of Moscow resulted in a split in the Russian Orthodox Church. The traditionalists, the persecuted "Old Ritualists" or "Old Believers", continued the traditional stylization of icons, while the State Church modified its practice. From that time icons began to be painted not only in the traditional stylized and nonrealistic mode, but also in a mixture of Russian stylization and Western European realism, and in a Western European manner very much like that of Catholic religious art of the time. The Stroganov School and the icons from Nevyansk rank among the last important schools of Russian icon-painting.

Romania

In Romania, icons painted as reversed images behind glass and set in frames were common in the 19th century and are still made. The process is known as reverse glass painting. "In the Transylvanian countryside, the expensive icons on panels imported from Moldavia, Wallachia, and Mt. Athos were gradually replaced by small, locally produced icons on glass, which were much less expensive and thus accessible to the Transylvanian peasants[.]"

Serbia

The earliest historical records about icons in Serbia date back to the period of the Nemanjić dynasty. One of the notable schools of Serb icons was active in the Bay of Kotor from the 17th century to the 19th century. Trojeručica, meaning "Three-handed Theotokos", is the most important icon of the Serbian Orthodox Church and the main icon of Mount Athos.
Egypt and Ethiopia

The Coptic Orthodox Church of Alexandria and Oriental Orthodoxy also have distinctive, living icon painting traditions. Coptic icons have their origin in the Hellenistic art of Egyptian Late Antiquity, as exemplified by the Fayum mummy portraits. Beginning in the 4th century, churches painted their walls and made icons to reflect an authentic expression of their faith.

Aleppo

The Aleppo School was a school of icon-painting, founded by the priest Yusuf al-Musawwir (also known as Joseph the Painter) and active in Aleppo, then part of the Ottoman Empire, between at least 1645 and 1777.

Western Christianity

Although the word "icon" is not generally used in Western Christianity, there are religious works of art which were largely patterned on Byzantine works, and equally conventional in composition and depiction. Until the 13th century, icon-like depictions of sacred figures followed Eastern patterns, although very few survive from this early period. Italian examples are in a style known as Italo-Byzantine. From the 13th century, the western
SL5 with a simpler concept of suspend/resume and developed a new concept for the natural successor to SNOBOL4 with the following principles: SNOBOL4's philosophic and semantic basis; SL5's syntactic basis; and SL5's features, excluding the generalized procedure mechanism. The new language was initially known as SNOBOL5, but as it was significantly different from SNOBOL in all but the underlying concept, a new name was ultimately desired. The name "s" was considered as a sort of homage to "C", but was ultimately abandoned due to the problems with typesetting documents using that name. A series of new names were proposed and abandoned: Irving, bard, and "TL" for "The Language". It was at this time that Xerox PARC began publishing about their work on graphical user interfaces and the term "icon" began to enter the computer lexicon. The decision was made to change the name initially to "icon" before finally settling on "Icon".

Language

Basic syntax

The Icon language is derived from the ALGOL class of structured programming languages, and thus has syntax similar to C or Pascal. Icon is most similar to Pascal, using the := syntax for assignments, the procedure keyword and similar syntax. On the other hand, Icon uses C-style braces for structuring execution groups, and programs start by running a procedure called main. In many ways Icon also shares features with most scripting languages (as well as SNOBOL and SL5, from which they were taken): variables do not have to be declared, types are cast automatically, and numbers can be converted to strings and back automatically. Another feature common to many scripting languages, but not all, is the lack of a line-ending character; in Icon, lines that do not end with a semicolon get ended by an implied semicolon if it makes sense. Procedures are the basic building blocks of Icon programs. Although they use Pascal naming, they work more like C functions and can return values; there is no function keyword in Icon.
procedure doSomething(aString)
   write(aString)
end

Goal-directed execution

One of the key concepts in SNOBOL was that its functions returned "success" or "failure" as primitives of the language rather than using magic numbers or other techniques. For instance, a function that returns the position of a substring within another string is a common routine found in most language runtime systems; in JavaScript one might want to find the position of the word "World" within "Hello, World!", which would be accomplished with "Hello, World!".indexOf("World"), which would return 7. If one instead asks for the position of a word that does not appear in the string, the code will "fail". In JavaScript, as in most languages, this will be indicated by returning a magic number, in this case -1. In SNOBOL a failure of this sort causes the statement itself to fail, rather than producing a value. SNOBOL's syntax operates directly on the success or failure of the operation, jumping to labelled sections of the code without having to write a separate test. For instance, the following code prints "Hello, world!" five times:

* SNOBOL program to print Hello World
          I = 1
LOOP      OUTPUT = "Hello, world!"
          I = I + 1
          LE(I, 5)        :S(LOOP)
END

To perform the loop, the less-than-or-equal function, LE, is called on the index variable I, and if it succeeds, meaning I is less than or equal to 5, the :S(LOOP) branches to the label LOOP and continues. Icon retained the basic concept of flow control based on success or failure but developed the language further. One change was the replacement of the labelled goto-like branching with block-oriented structures in keeping with the structured programming style that was sweeping the computer industry in the late 1960s. The second was to allow "failure" to be passed along the call chain so that entire blocks will succeed or fail as a whole. This is a key concept of the Icon language.
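The magic-number convention described above can be seen directly in Python as well, whose str.find uses the same -1 sentinel as JavaScript's indexOf (the search terms here are my own illustrations):

```python
s = "Hello, World!"

# "World" begins at index 7 ("Hello, " occupies indices 0-6)
print(s.find("World"))

# A term that does not appear: the caller gets the magic number -1
# and must remember to test for it explicitly.
print(s.find("Goodbye"))
```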
Whereas in traditional languages one would have to include code to test for success or failure based on boolean logic and then branch based on the outcome, such tests and branches are inherent to Icon code and do not have to be explicitly written. For instance, consider this bit of code written in the Java programming language. It calls the read function to read a character from a (previously opened) file, assigns the result to the variable a, and then writes the value of a to another file. The result is to copy one file to another. read will eventually run out of characters to read from the file, potentially on its very first call, which would leave a in an undetermined state and potentially cause write to throw a null pointer exception. To avoid this, read returns the special value EOF (end-of-file) in this situation, which requires an explicit test to avoid writing it:

while ((a = read()) != EOF) {
    write(a);
}

In contrast, in Icon the read function returns a line of text, or fail. fail is not simply an analog of EOF, as it is explicitly understood by the language to mean "stop processing" or "do the fail case" depending on the context. The equivalent code in Icon is:

while a := read() do write(a)

This means, "as long as read does not return fail, call write, otherwise stop". There is no need to specify a test against the magic number as in the Java example; the test is implicit, and the resulting code is simplified. Because success and failure are passed up through the call chain, one can embed functions within others and they stop when the nested function fails. For instance, the code above can be reduced to:

while write(read())

In this version, if read fails, write fails, and the while stops. Icon's branching and looping constructs are all based on the success or failure of the code inside them, not on an arbitrary boolean test provided by the programmer. if performs the then block if its "test" returns a value, and performs the else block or moves to the next line if it returns fail. Likewise, while continues calling its block until it receives a fail.
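The sentinel-testing style that Icon eliminates can be sketched in Python, where None plays the role of EOF and the read helper is a hypothetical stand-in for the Java routine:

```python
import io

def read(f):
    """Stand-in for the Java read(): next line, or the sentinel None at EOF."""
    line = f.readline()
    return line.rstrip("\n") if line else None

src = io.StringIO("one\ntwo\nthree\n")
out = []

# Java-style loop: every caller must remember the explicit sentinel test.
while (a := read(src)) is not None:
    out.append(a)

print(out)  # ['one', 'two', 'three']
```

In Icon the test disappears entirely, because read's failure propagates through write and terminates the while on its own.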
Icon refers to this concept as goal-directed execution. It is important to contrast the concept of success and failure with the concept of an exception; exceptions are unusual situations, not expected outcomes. Fails in Icon are expected outcomes; reaching the end of a file is an expected situation and not an exception. Icon does not have exception handling in the traditional sense, although fail is often used in exception-like situations. For instance, if the file being read does not exist, read fails without a special situation being indicated. In traditional languages, these "other conditions" have no natural way of being indicated; additional magic numbers may be used, but more typically exception handling is used to "throw" a value. For instance, to handle a missing file in the Java code, one might see:

try {
    while ((a = read()) != EOF) {
        write(a);
    }
} catch (Exception e) {
    // something else went wrong, use this catch to exit the loop
}

This case needs two comparisons: one for EOF and another for all other errors. Since Java does not allow exceptions to be compared as logic elements, as under Icon, the lengthy try/catch syntax must be used instead. Try blocks also impose a performance penalty even if no exception is thrown, a distributed cost that Icon normally avoids. Icon uses this same goal-directed mechanism to perform traditional boolean tests, although with subtle differences. A simple comparison like if a < b then ... does not mean "if the operations to the right evaluate to true" as it would in most languages; instead, it means something more like "if the operations to the right succeed". In this case the < operator succeeds if the comparison is true. The if calls its then clause if the expression succeeds, or the else clause or the next line if it fails. The result is similar to the traditional if/then seen in other languages: the if performs its then clause if a is less than b.
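The two separate failure paths of the Java version (the EOF sentinel plus the exception) can be collapsed into a single fail-like result in Python too; a rough sketch, with an illustrative helper name and file path of my own:

```python
def read_all(path):
    # Both "file missing" and any other I/O problem collapse into one
    # fail-like result (None), instead of a sentinel plus a try/catch.
    try:
        with open(path) as f:
            return f.read().splitlines()
    except OSError:
        return None

print(read_all("/no/such/file.txt"))  # None
```

This mirrors how Icon's read simply fails for a nonexistent file, letting the caller treat all failures through one mechanism.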
The subtlety is that the same comparison expression can be placed anywhere, for instance:

write(a < b)

Another difference is that the < operator returns its second argument if it succeeds, which in this example will result in the value of b being written if it is larger than a; otherwise nothing is written. As this is not a test per se, but an operator that returns a value, comparisons can be strung together, allowing things like if a < b < c, a common type of comparison that in most languages must be written as a conjunction of two inequalities, like (a < b) && (b < c). A key aspect of goal-directed execution is that the program may have to rewind to an earlier state if a procedure fails, a task known as backtracking. For instance, consider code that sets a variable to a starting location and then performs operations that may change the value; this is common in string-scanning operations, which advance a cursor through the string as they scan. If the procedure fails, it is important that any subsequent reads of that variable return the original state, not the state as it was being internally manipulated. For this task, Icon has the reversible assignment operator, <-, and the reversible exchange, <->. For instance, consider some code that is attempting to find a pattern string within a larger string:

{
    (i := 10) & (j := (i < find(pattern, inString)))
}

This code begins by moving i to 10, the starting location for the search. However, if the find fails, the block will fail as a whole, which results in the value of i being left at 10 as an undesirable side effect. Replacing i := 10 with i <- 10 indicates that i should be reset to its previous value if the block fails. This provides an analog of atomicity in the execution.

Generators

Expressions in Icon may return a single value; for instance, 5 > x will evaluate and return x if the value of x is less than 5, or else fail.
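Icon's reversible assignment has no direct Python equivalent, but the rollback it provides can be emulated by saving the variable and restoring it when the enclosing block fails. A rough sketch, with illustrative names; the find helper mirrors Icon's find by producing None on failure:

```python
def find(pattern, s, start=0):
    # Mirror Icon's find(): a position on success, None on failure.
    pos = s.find(pattern, start)
    return pos if pos >= 0 else None

state = {"i": 3}

def search_block(pattern, in_string):
    saved = state["i"]          # emulate reversible assignment: i <- 10
    state["i"] = 10
    j = find(pattern, in_string, state["i"])
    if j is None:               # the block fails as a whole...
        state["i"] = saved      # ...so roll i back to its previous value
        return None
    return j

print(search_block("zzz", "hello world"))  # None
print(state["i"])                          # 3: restored, not left at 10
```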
However, Icon also includes the concept of procedures that do not immediately return success or failure, and instead return new values every time they are called. These are known as generators, and are a key part of the Icon language. Within the parlance of Icon, the evaluation of an expression or function produces a result sequence. A result sequence contains all the possible values that can be generated by the expression or function.
When the result sequence is exhausted, the expression or function fails. Icon allows any procedure to return a single value or multiple values, controlled using the return, suspend and fail keywords. A procedure that lacks any of these keywords fails, which occurs whenever execution runs to the end of a procedure. For instance:

procedure f(x)
   if x > 0 then {
      return 1
   }
end

Calling f(1) will return 1, but calling f(-1) will fail. This can lead to non-obvious behavior; for instance, write(f(-1)) will output nothing, because f(-1) fails and suspends operation of write. Converting a procedure to be a generator uses the suspend keyword, which means "return this value, and when called again, start execution at this point". In this respect it combines returning a value with preserving the procedure's state for resumption, much like the yield found in later languages. For instance:

procedure ItoJ(i, j)
   while i <= j do {
      suspend i
      i +:= 1
   }
   fail
end

This creates a generator that returns a series of numbers starting at i and ending at j, and then fails after that. The suspend stops execution and returns the value of i without resetting any of the state. When another call is made to the same function, execution picks up at that point with the previous values. In this case, that causes it to perform i +:= 1, loop back to the start of the while block, and then return the next value and suspend again. This continues until i <= j fails, at which point it exits the block and calls fail. This allows iterators to be constructed with ease. Another type of generator-builder is the alternator, which looks and operates like the boolean or operator. For instance:

if y < (x | 5) then write("y=", y)

This appears to say "if y is smaller than x or 5 then...", but is actually a short form for a generator that returns values until it falls off the end of the list. The values of the list are "injected" into the surrounding operations, in this case the y < comparison. So in this example, the system first tests y < x; if x is indeed larger than y it returns the value of x, the test passes, and the value of y is written out in the then clause.
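Icon's suspend maps almost directly onto Python's yield, which may make the control flow above easier to see. A Python analogue of the ItoJ generator (the name i_to_j is mine):

```python
def i_to_j(i, j):
    # suspend i  ->  yield i: hand back a value but keep the procedure's
    # state, resuming just after the yield on the next request.
    while i <= j:
        yield i
        i += 1
    # Falling off the end corresponds to Icon's fail
    # (in Python, the generator raises StopIteration).

print(list(i_to_j(3, 6)))  # [3, 4, 5, 6]
```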
However, if x is not larger than y the test fails, and the alternator continues, performing y < 5. If that test passes, y is written. If y is smaller than neither x nor 5, the alternator runs out of values and fails, the if fails, and the write is not performed. Thus, the value of y will appear on the console if it is smaller than x or 5, thereby fulfilling the purpose of a boolean or. Functions will not be called unless evaluating their parameters succeeds, so this example can be shortened to: write("y=", (x | 5) > y) Internally, the alternator is not simply an or; one can also use it to construct arbitrary lists of values. This can be used to iterate over arbitrary values, like: every i := (1|3|4|5|10|11|23) do write(i) As lists of integers are commonly found in many programming contexts, Icon also includes the to keyword to construct ad hoc integer generators: every k := i to j do write(k) which can be shortened: every write(1 to 10) Icon is not strongly typed, so the alternator lists can contain different types of items: every i := (1 | "hello" | x < 5) do write(i) This writes 1, "hello" and maybe 5 depending on the value of x. Likewise the conjunction operator, &, is used in a fashion similar to a boolean and: every x := ItoJ(0,10) & x % 2 == 0 do write(x) This code calls ItoJ(0,10), which returns an initial value of 0 that is assigned to x. It then performs the right-hand side of the conjunction, and since 0 % 2 does equal 0, it writes out the value. It then calls the generator again, which assigns 1 to x; this fails the right-hand side and prints nothing. The result is a list of every even integer from 0 to 10. The concept of generators is particularly useful and powerful when used with string operations, and is a
all iconographers and iconologists". Few 21st-century authors continue to use the term "iconology" consistently, and instead use iconography to cover both areas of scholarship. To those who use the term, iconology is derived from synthesis rather than scattered analysis, and examines symbolic meaning on more than its face value by reconciling it with its historical context and with the artist's body of work. This contrasts with the widely descriptive iconography, which, as described by Panofsky, is an approach to studying the content and meaning of works of art that is primarily focused on classification, dating, provenance and other fundamental knowledge concerning the subject matter of an artwork that is needed for further interpretation. Panofsky's "use of iconology as the principle tool of art analysis brought him critics." For instance, in 1946, Jan Gerrit Van Gelder "criticized Panofsky's iconology as putting too much emphasis on the symbolic content of the work of art, neglecting its formal aspects and the work as a unity of form and content." Furthermore, iconology is mostly avoided by social historians who do not accept the theoretical dogmatism in Panofsky's work. In contrast to iconography Erwin Panofsky defines iconography as "a known principle in the known world", while iconology is "an iconography turned interpretive". According to his view, iconology tries to reveal the underlying principles that form the basic attitude of a nation, a period, a class, or a religious or philosophical perspective, which is modulated by one personality and condensed into one work. According to Roelof van Straten, iconology "can explain why an artist or patron chose a particular subject at a specific location and time and represented it in a certain way.
An iconological investigation should concentrate on the social-historical, not art-historical, influences and values that the artist might not have consciously brought into play but are nevertheless present. The artwork is primarily seen as a document of its time." Warburg used the term "iconography" in his early research, replacing it in 1908 with "iconology" in his particular method of visual interpretation called "critical iconology", which focused on the tracing of motifs through different cultures and visual forms. In 1932, Panofsky published a seminal article, introducing a three-step method of visual interpretation dealing with (1) primary or natural subject matter; (2) secondary or conventional subject matter, i.e. iconography; (3) tertiary or intrinsic meaning or content, i.e. iconology. Whereas iconography analyses the world of images, stories and allegories and requires knowledge of literary sources, an understanding of the history of types and how themes and concepts were expressed by objects and events under different historical conditions, iconology interprets intrinsic meaning or content and the world of symbolical values by using "synthetic intuition". The interpreter is aware of the essential tendencies of the human mind as conditioned by psychology and world view; he analyses the history of cultural symptoms or symbols, or how tendencies of the human mind were expressed by specific themes due to different | the description and interpretation of visual art, and also a study of "what images say" – the ways in which they seem to speak for themselves by persuading, telling stories, or describing. He pleads for a postlinguistic, postsemiotic "iconic turn", emphasizing the role of "non-linguistic symbol systems". Instead of just pointing out the difference between the material (pictorial or artistic) images, "he pays attention to the dialectic relationship between material images and mental images". 
According to Dennise Bartelo and Robert Morton, the term "iconology" can also be used for characterizing "a movement toward seeing connections across all the language processes" and the idea of "multiple levels and forms used to communicate meaning" in order to get "the total picture" of learning. "Being both literate in the traditional sense and visually literate are the true mark of a well-educated human." In recent years, new approaches to iconology have developed in the theory of images. One example is what Jean-Michel Durafour, French philosopher and theorist of cinema, proposed to call "econology", a biological approach to images as forms of life, crossing iconology, ecology and the natural sciences. In an econological regime, the image (eikon) self-speciates, that is to say, it self-iconicizes with others and eco-iconicizes with them its iconic habitat (oikos). Iconology, mainly Warburgian iconology, is thus merged with a conception of the relations between the beings of nature inherited from the writings of Kinji Imanishi, among others (e.g., Arne Næss). For Imanishi, living beings are subjects. Or, more precisely, the environment and the living being are just one. One of the main consequences is that the "specity", the living individual, "self-eco-speciates its place of life" (Freedom in Evolution). As far as the images are concerned: "If the living species self-specify, the images self-iconicize. This is not a tautology. The images update some of their iconic virtualities. They live in the midst of other images, past or present, but also future (those are only human classifications), which they have relations with. They self-iconicize in an iconic environment which they interact with, and which in particular makes them the images they are. Or more precisely, insofar as images have an active part: the images self-eco-iconicize their iconic environment."
Studies in iconology Studies in Iconology is the title of a book by Erwin Panofsky on humanistic themes in the art of the Renaissance, which was first published in 1939. It is also the name of a peer-reviewed series of books started in 2014 under the editorship |
stories in popular literature and newspapers.
Emphasis was placed on the depredations of "murderous savages" in their information about Indians, and as the migrants headed further west, they frequently feared the Indians they would encounter. The phrase eventually became commonly used to also describe mass killings of American Indians. Killings described as "massacres" often had an element of indiscriminate targeting, barbarism, or genocidal intent. According to historian Jeffrey Ostler, "Any discussion of genocide must, of course, eventually consider the so-called Indian Wars, the term commonly used for U.S. Army campaigns to subjugate Indian nations of the American West beginning in the 1860s. In an older historiography, key events in this history were narrated as battles. It is now more common for scholars to refer to these events as massacres. This is especially so of a Colorado territorial militia’s slaughter of Cheyennes at Sand Creek (1864) and the army’s slaughter of Shoshones at Bear River (1863), Blackfeet on the Marias River (1870), and Lakotas at Wounded Knee (1890). Some scholars have begun referring to these events as “genocidal massacres,” defined as the annihilation of a portion of a larger group, sometimes to provide a lesson to the larger group." It is difficult to determine the total number of people who died as a result of "Indian massacres". In The Wild Frontier: Atrocities during the American-Indian War from Jamestown Colony to Wounded Knee, lawyer William M. Osborn compiled a list of alleged and actual atrocities in what would eventually become the continental United States, from first contact in 1511 until 1890. His parameters for inclusion included the intentional and indiscriminate murder, torture, or mutilation of civilians, the wounded, and prisoners. His list included 7,193 people who died from atrocities perpetrated by those of European descent, and 9,156 people who died from atrocities perpetrated by Native Americans. 
In An American Genocide, The United States and the California Catastrophe, 1846–1873, historian Benjamin Madley recorded the numbers of killings of California Indians between 1846 and 1873. He found evidence that during this period, at least 9,400 to 16,000 California Indians were killed by non-Indians. Most of these killings occurred in what he said were more than 370 massacres (defined by him as the "intentional killing of five or more disarmed combatants or largely unarmed noncombatants, including women, children, and prisoners, whether in the context of a battle or otherwise").
was not intended to establish a fixed calendar to be generally observed." The term "fixed calendar" is generally understood to refer to the non-intercalated calendar. Others concur that it was originally a lunar calendar, but suggest that about 200 years before the Hijra it was transformed into a lunisolar calendar containing an intercalary month added from time to time to keep the pilgrimage within the season of the year when merchandise was most abundant. This interpretation was first proposed by the medieval Muslim astrologer and astronomer Abu Ma'shar al-Balkhi, and later by al-Biruni, al-Mas'udi, and some western scholars. This interpretation considers Nasī to be a synonym to the Arabic word for "intercalation" (kabīsa). The Arabs, according to one explanation mentioned by Abu Ma'shar, learned of this type of intercalation from the Jews. The Jewish Nasi was the official who decided when to intercalate the Jewish calendar. Some sources say that the Arabs followed the Jewish practice and intercalated seven months over nineteen years, or else that they intercalated nine months over 24 years; there is, however, no consensus among scholars on this issue. Prohibiting Nasī' In the tenth year of the Hijra, as documented in the Qur'an (Surah At-Tawbah (9):36–37), Muslims believe God revealed the "prohibition of the Nasī'". The prohibition of Nasī' would presumably have been announced when the intercalated month had returned to its position just before the month of Nasi' began. If Nasī' meant intercalation, then the number and the position of the intercalary months between AH 1 and AH 10 are uncertain; western calendar dates commonly cited for key events in early Islam such as the Hijra, the Battle of Badr, the Battle of Uhud and the Battle of the Trench should be viewed with caution as they might be in error by one, two, three or even four lunar months. 
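The seven-intercalary-months-in-nineteen-years scheme mentioned above reflects the Metonic relation: nineteen solar years contain very nearly 235 (= 19 × 12 + 7) synodic months. A quick numerical check, for illustration only, using the mean-year and mean-month values cited later in this article:

```python
TROPICAL_YEAR = 365.24219     # mean tropical year, in days
SYNODIC_MONTH = 29.530587981  # long-term average synodic month, in days

# How many synodic months fit into nineteen solar years?
months_in_19_years = 19 * TROPICAL_YEAR / SYNODIC_MONTH
print(round(months_in_19_years, 3))  # 234.997 -- about 235 = 19*12 + 7
```

So adding seven intercalary months over nineteen twelve-month lunar years keeps the calendar aligned with the seasons to within a fraction of a month.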
This prohibition was mentioned by Muhammad during the farewell sermon which was delivered on 9 Dhu al-Hijjah AH 10 (Julian date Friday 6 March 632 CE) on Mount Arafat during the farewell pilgrimage to Mecca. The three successive sacred (forbidden) months mentioned by Prophet Muhammad (months in which battles are forbidden) are Dhu al-Qa'dah, Dhu al-Hijjah, and Muharram, months 11, 12, and 1 respectively. The single forbidden month is Rajab, month 7. These months were considered forbidden both within the new Islamic calendar and within the old pagan Meccan calendar. Days of the week The Islamic day begins at sunset. Muslims gather for prayer at a mosque at noon on "gathering day" (Friday). Since the Islamic day begins at sunset, "gathering day" starts at maghrib on Thursday evening, the moment when the sun has completely set. "Gathering day" is often regarded as the weekly day off. This is frequently made official, with many Muslim countries adopting Friday and Saturday (e.g., Egypt, Saudi Arabia) or Thursday and Friday as official weekends, during which offices are closed; other countries (e.g., Iran) choose to make Friday alone a day of rest. A few others (e.g., Turkey, Pakistan, Morocco, Nigeria, Malaysia) have adopted the Saturday-Sunday weekend while making Friday a working day with a long midday break to allow time off for worship. Months Four of the twelve Hijri months are considered sacred: Rajab (7), and the three consecutive months of Dhū al-Qa'dah (11), Dhu al-Ḥijjah (12) and Muḥarram (1). As the mean duration of a tropical year is 365.24219 days, while the long-term average duration of a synodic month is 29.530587981 days, the average lunar year is (365.24219 − 12 × 29.530587981 ≈) 10.87513 days shorter than the average solar year, causing months of the Hijri calendar to fall about eleven days earlier each year relative to dates in the Gregorian calendar.
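The arithmetic in the preceding sentence can be checked directly (an illustrative computation using the values given above):

```python
TROPICAL_YEAR = 365.24219     # mean tropical year, in days
SYNODIC_MONTH = 29.530587981  # long-term average synodic month, in days

lunar_year = 12 * SYNODIC_MONTH           # average lunar year of twelve months
shortfall = TROPICAL_YEAR - lunar_year    # how far the lunar year falls short
print(round(lunar_year, 3))   # 354.367
print(round(shortfall, 5))    # 10.87513 -- the ~11-day annual drift
```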
"As a result, the cycle of twelve lunar months regresses through the seasons over a period of about 33 [solar] years". Length of months Each month of the Islamic calendar commences with the birth of the new lunar cycle. Traditionally, this is based on actual observation of the moon's crescent (hilal) marking the end of the previous lunar cycle and hence the previous month, thereby beginning the new month. Consequently, each month can have 29 or 30 days depending on the visibility of the moon, the astronomical position of the Earth and weather conditions. However, certain sects and groups, most notably the Bohra Muslims (Alavis, Dawoodis and Sulaymanis) and Shia Ismaili Muslims, use a tabular Islamic calendar (see section below) in which odd-numbered months have thirty days (as does the twelfth month in a leap year) and even-numbered months have 29. Year numbering In pre-Islamic Arabia, it was customary to identify a year after a major event which took place in it. Thus, according to Islamic tradition, Abraha, governor of Yemen, then a province of the Christian Kingdom of Aksum (Ethiopia), attempted to destroy the Kaaba with an army which included several elephants. The raid was unsuccessful, but that year became known as the Year of the Elephant, during which Muhammad was born (sura al-Fil). Most equate this to the year 570 CE, but a minority use 571 CE. The first ten years of the Hijra were not numbered, but were named after events in the life of Muhammad according to Abū Rayḥān al-Bīrūnī: The year of permission. The year of the order of fighting. The year of the trial. The year of congratulation on marriage. The year of the earthquake. The year of enquiring. The year of gaining victory. The year of equality. The year of exemption. The year of farewell.
In AH 17 (638 CE), Abu Musa Ashaari, one of the officials of the Caliph Umar in Basrah, complained about the absence of any years on the correspondence he received from Umar, making it difficult for him to determine which instructions were most recent. This report convinced Umar of the need to introduce an era for Muslims. After debating the issue with his counsellors, he decided that the first year should be the year of Muhammad's arrival at Medina (known as Yathrib, before Muhammad's arrival). Uthman ibn Affan then suggested that the months begin with Muharram, in line with the established custom of the Arabs at that time. The years of the Islamic calendar thus began with the month of Muharram in the year of Muhammad's arrival at the city of Medina, even though the actual emigration took place in Safar and Rabi' I of the intercalated calendar, two months before the commencement of Muharram in the new fixed calendar. Because of the Hijra, the calendar was named the Hijri calendar. F A Shamsi (1984) postulated that the Arabic calendar was never intercalated. According to him, the first day of the first month of the new fixed Islamic calendar (1 Muharram AH 1) was no different from what was observed at the time. The day the Prophet moved from Quba' to Medina was originally 26 Rabi' I on the pre-Islamic calendar. 1 Muharram of the new fixed calendar corresponded to Friday, 16 July 622 CE, the equivalent civil tabular date (same daylight period) in the Julian calendar. The Islamic day began at the preceding sunset on the evening of 15 July. This Julian date (16 July) was determined by medieval Muslim astronomers by projecting back in time their own tabular Islamic calendar, which had alternating 30- and 29-day months in each lunar year plus eleven leap days every 30 years. For example, al-Biruni mentioned this Julian date in the year 1000 CE. 
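The tabular scheme used for this back-projection (alternating 30- and 29-day months plus eleven leap days every 30 years) yields a mean month very close to the synodic month. The sketch below is an illustrative reconstruction of that rule, not a specific historical implementation; which years of the 30-year cycle receive the leap day varied between conventions, so leap status is simply a parameter here:

```python
def tabular_year_length(leap):
    """A tabular lunar year: six 30-day and six 29-day months,
    plus one leap day appended to the final month in a leap year."""
    return 6 * 30 + 6 * 29 + (1 if leap else 0)

# Eleven leap years per 30-year cycle gives the cycle length in days:
cycle_days = 19 * tabular_year_length(False) + 11 * tabular_year_length(True)
mean_month = cycle_days / (30 * 12)
print(cycle_days)            # 10631
print(round(mean_month, 6))  # 29.530556 -- close to the synodic 29.530588
```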
Although not used by either medieval Muslim astronomers or modern scholars to determine the Islamic epoch, the thin crescent moon would have also first become visible (assuming clouds did not obscure it) shortly after the preceding sunset on the evening of 15 July, 1.5 days after the associated dark moon (astronomical new moon) on the morning of 14 July. Though Cook and Crone in Hagarism: The Making of the Islamic World cite a coin from AH 17, the first surviving attested use of a Hijri calendar date alongside a date in another calendar (Coptic) is on a papyrus from Egypt in AH 22, PERF 558.
Astronomical considerations Due to the Islamic calendar's reliance on certain variable methods of observation to determine its month-start-dates, these dates sometimes vary slightly from the month-start-dates of the astronomical lunar calendar, which are based directly on astronomical calculations. Still, the Islamic calendar seldom varies by more than three days from the astronomical-lunar-calendar system, and roughly approximates it. Both the Islamic calendar and the astronomical-lunar-calendar take no account of the solar year in their calculations, and thus both of these strictly lunar based calendar systems have no ability to reckon the timing of the four seasons of the year. In the astronomical-lunar-calendar system, a year of 12 lunar months is 354.37 days long. In this calendar system, lunar months begin precisely at the time of the monthly "conjunction", when the Moon is located most directly between the Earth and the Sun. The month is defined as the average duration of a revolution of the Moon around the Earth (29.53 days). By convention, months of 30 days and 29 days succeed each other, adding up over two successive months to 59 full days. This leaves only a small monthly variation of 44 minutes to account for, which adds up to a total of 24 hours (i.e., the equivalent of one full day) in 2.73 years. To settle accounts, it is sufficient to add one day every three years to the lunar calendar, in the same way that one adds one day to the Gregorian calendar every four years. The technical details of the adjustment are described in Tabular Islamic calendar. The Islamic calendar, however, is based on a different set of conventions being used for the determination of the month-start-dates. Each month still has either 29 or 30 days, but due to the variable method of observations employed, there is usually no discernible order in the sequencing of either 29 or 30-day month lengths. 
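The 44-minute residual and the roughly-three-year correction quoted above follow from comparing the 59-day two-month convention with two mean synodic months; a rough check (using the synodic-month value cited earlier in the article):

```python
SYNODIC_MONTH = 29.530587981  # days

# Two calendar months of 30 + 29 = 59 days vs. two mean synodic months
# leaves a small excess per calendar month:
excess_per_month = SYNODIC_MONTH - 29.5            # days unaccounted per month
excess_minutes = excess_per_month * 24 * 60        # ~44 minutes per month
years_to_full_day = (1 / excess_per_month) / 12    # years until excess reaches one day
print(round(excess_minutes, 1))      # 44.0
print(round(years_to_full_day, 2))   # 2.72 (about 2.73 with the rounded 44-minute figure)
```

Hence one extra day added roughly every three years keeps the tabular months in step with the moon.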
Traditionally, the first day of each month is the day (beginning at sunset) of the first sighting of the hilal (crescent moon) shortly after sunset. If the hilal is not observed immediately after the 29th day of a month (either because clouds block its view or because the western sky is still too bright when the moon sets), then the day that begins at that sunset is the 30th. Such a sighting has to be made by one or more trustworthy men testifying before a committee of Muslim leaders. Determining the most likely day that the hilal could be observed was a motivation for Muslim interest in astronomy, which put Islam in the forefront of that science for many centuries. Still, due to the fact that both lunar reckoning systems are ultimately based on the lunar cycle itself, both systems still do roughly correspond to one another, never being more than three days out of synchronisation with one another. This traditional practice for the determination of the start-date of the month is still followed in the overwhelming majority of Muslim countries. Each Islamic state proceeds with its own monthly observation of the new moon (or, failing that, awaits the completion of 30 days) before declaring the beginning of a new month on its territory. But, the lunar crescent becomes visible only some 17 hours after the conjunction, and only subject to the existence of a number of favourable conditions relative to weather, time, geographic location, as well as various astronomical parameters. Given the fact that the moon sets progressively later than the sun as one goes west, with a corresponding increase in its "age" since conjunction, Western Muslim countries may, under favorable conditions, observe the new moon one day earlier than eastern Muslim countries. Due to the interplay of all these factors, the beginning of each month differs from one Muslim country to another, during the 48-hour period following the conjunction. 
The information provided by the calendar in any country does not extend beyond the current month. A number of Muslim countries try to overcome some of these difficulties by applying different astronomy-related rules to determine the beginning of months. Thus, Malaysia, Indonesia, and a few others begin each month at sunset on the first day that the moon sets after the sun (moonset after sunset). In Egypt, the month begins at sunset on the first day that the moon sets at least five minutes after the sun. A detailed analysis of the available data shows, however, that there are major discrepancies between what countries say they do on this subject, and what they actually do. In some instances, what a country says it does is impossible. Due to the somewhat variable nature of the Islamic calendar, in most Muslim countries, the Islamic calendar is used primarily for religious purposes, while the Solar-based Gregorian calendar is still used primarily for matters of commerce and agriculture. Theological considerations If the Islamic calendar were prepared using astronomical calculations, Muslims throughout the Muslim world could use it to meet all their needs, the way they use the Gregorian calendar today. But, there are divergent views on whether it is licit to do so. A majority of theologians oppose the use of calculations (beyond the constraint that each month must be not less than 29 nor more than 30 days) on the grounds that the latter would not conform with Muhammad's recommendation to observe the new moon of Ramadan and Shawal in order to determine the beginning of these months. However, some Islamic jurists see no contradiction between Muhammad's teachings and the use of calculations to determine the beginnings of lunar months. They consider that Muhammad's recommendation was adapted to the culture of the times, and should not be confused with the acts of worship. 
Thus the jurists Ahmad Muhammad Shakir and Yusuf al-Qaradawi both endorsed the use of calculations to determine the beginning of all months of the Islamic calendar, in 1939 and 2004 respectively. So did the Fiqh Council of North America (FCNA) in 2006 and the European Council for Fatwa and Research (ECFR) in 2007. The major Muslim associations of France also announced in 2012 that they would henceforth use a calendar based on astronomical calculations, taking into account the criteria of the possibility of crescent sighting in any place on Earth. But, shortly after the official adoption of this rule by the French Council of the Muslim Faith (CFCM) in 2013, the new leadership of the association decided, on the eve of Ramadan 2013, to follow the Saudi announcement rather than to apply the rule just adopted. This resulted in a division of the Muslim community of France, with some members following the new rule, and others following the Saudi announcement. Isma'ili-Taiyebi Bohras having the institution of da'i al-mutlaq follow the tabular Islamic calendar (see section below) prepared on the basis of astronomical calculations from the days of Fatimid imams. Astronomical 12-moon calendars Islamic calendar of Turkey Turkish Muslims use an Islamic calendar which is calculated several years in advance by the Turkish Presidency of Religious Affairs (Diyanet İşleri Başkanlığı). From 1 Muharrem 1400 AH (21 November 1979) until 29 Zilhicce 1435 (24 October 2014) the computed Turkish lunar calendar was based on the following rule: "The lunar month is assumed to begin on the evening when, within some region of the terrestrial globe, the computed centre of the lunar crescent at local sunset is more than 5° above the local horizon and (geocentrically) more than 8° from the Sun." In the current rule the (computed) lunar crescent has to be above the local horizon of Ankara at sunset. 
Saudi Arabia's Umm al-Qura calendar Saudi Arabia uses the sighting method to determine the beginning of each month of the Hijri calendar. Since AH 1419 (1998/99), several official hilal sighting committees have been set up by the government to determine the first visual sighting of the lunar crescent at the beginning of each lunar month. Nevertheless, the religious authorities also allow the testimony of less experienced observers and thus often announce the sighting of the lunar crescent on a date when none of the official committees could see it. The country also uses the Umm al-Qura calendar, based on astronomical calculations, but this is restricted to administrative purposes. The parameters used in the establishment of this calendar underwent significant changes during the decade to AH 1423. Before AH 1420 (before 18 April 1999), if the moon's age at sunset in Riyadh was at least 12 hours, then the day ending at that sunset was the first day of the month. This often caused the Saudis to celebrate holy days one or even two days before other predominantly Muslim countries, including the dates for the Hajj, which can only be dated using Saudi dates because it is performed in Mecca. For AH 1420–22, if moonset occurred after sunset at Mecca, then the day beginning at that sunset was the first day of a Saudi month, essentially the same rule used by Malaysia, Indonesia, and |
IQR, the data set is divided into quartiles, or four rank-ordered even parts via linear interpolation. These quartiles are denoted by Q1 (also called the lower quartile), Q2 (the median), and Q3 (also called the upper quartile). The lower quartile corresponds with the 25th percentile and the upper quartile corresponds with the 75th percentile, so IQR = Q3 − Q1. The IQR is an example of a trimmed estimator, defined as the 25% trimmed range, which enhances the accuracy of dataset statistics by dropping outlying, low-contribution points. It is also used as a robust measure of scale. The IQR can be clearly visualized as the box on a box plot. Use The primary use of the IQR is to represent the spread between the upper and lower quartiles of a data set. This can be used as an indicator of the variability of the dataset. It is also used to build box plots, which are a graphical representation of a probability distribution. In a box plot, the IQR is the height of the box itself, and the whiskers have a length of up to 1.5 × IQR. Any data point located outside the whiskers is referred to as an outlier (see below). The median absolute deviation (MAD) is often preferred to the IQR as a measure of variability because the IQR has a lower breakdown point: 25%, compared to the MAD's 50%, which is the optimum. The IQR has been used practically in a number of recent studies, including sampling for design space exploration, predicting stock returns, and image denoising. Discrete Variables The IQR of a set of values is calculated as the difference between the upper and lower quartiles, Q3 and Q1. Each quartile is a median calculated as follows. Given an even 2n or odd 2n+1 number of values: first quartile
Q1 = median of the n smallest values; third quartile Q3 = median of the n largest values. The second quartile Q2 is the same as the ordinary median. Continuous Variables The interquartile range of a continuous distribution can be calculated by
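The discrete-variable rule above, in which each outer quartile is the median of the n smallest or n largest values, can be sketched directly in Python, together with the 1.5 × IQR whisker fences used for flagging outliers (function names are illustrative):

```python
def median(values):
    """Median of a non-empty sequence."""
    xs = sorted(values)
    n = len(xs)
    mid = n // 2
    return xs[mid] if n % 2 else (xs[mid - 1] + xs[mid]) / 2

def quartiles(values):
    """Method of medians: with 2n or 2n+1 values, Q1 is the median of the
    n smallest values and Q3 the median of the n largest."""
    xs = sorted(values)
    n = len(xs) // 2
    return median(xs[:n]), median(xs), median(xs[-n:])  # (Q1, Q2, Q3)

def iqr_outliers(values):
    """Flag points outside the 1.5 * IQR whisker fences of a box plot."""
    q1, _, q3 = quartiles(values)
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return [x for x in values if x < lo or x > hi]
```

For the six values 7, 15, 36, 39, 40, 41 this gives Q1 = 15 and Q3 = 40, so IQR = 25; a point such as 200 added to that set would fall above the upper fence and be flagged.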
Nazi mystics, this time trying to find the Holy Grail. The film's introduction, set in 1912, provided some backstory to the character, specifically the origin of his fear of snakes, his use of a bullwhip, the scar on his chin, and his hat; the film's epilogue also reveals that "Indiana" is not Jones' first name, but a nickname he took from the family dog. The film was a buddy movie of sorts, teaming Jones with his father, Henry Jones Sr., often to comical effect. Although Lucas intended to make five Indiana Jones films, Indiana Jones and the Last Crusade was the last for over 18 years, as he could not think of a good plot element to drive the next installment. From 1992 to 1996, George Lucas wrote stories for and executive-produced a television series named The Young Indiana Jones Chronicles, aimed mainly at teenagers and children, which showed many of the important events and historical figures of the early 20th century through the prism of Indiana Jones' life. The show initially featured the formula of an elderly (93 to 94 years of age) Indiana Jones played by George Hall introducing a story from his youth by way of an anecdote: the main part of the episode then featured an adventure with either a young adult Indy (16 to 21 years of age) played by Sean Patrick Flanery or a child Indy (8 to 10 years of age) played by Corey Carrier. One episode, "Young Indiana Jones and the Mystery of the Blues", is bookended by Harrison Ford as Indiana Jones, rather than Hall. Later episodes and telemovies did not have this bookend format. The bulk of the series centers on the young adult Indiana Jones and his activities during World War I as a 16- to 17-year-old soldier in the Belgian Army and then as an intelligence officer and spy seconded to French intelligence. The child Indy episodes follow the boy's travels around the globe as he accompanies his parents on his father's worldwide lecture tour from 1908 to 1910.
The show provided some backstory for the films, as well as new information regarding the character. Indiana Jones was born July 1, 1899, and his middle name is Walton (Lucas's middle name). It is also mentioned that he had a sister called Suzie who died of fever as an infant, and that he eventually has a daughter and grandchildren who appear in some episode introductions and epilogues. His relationship with his father, first introduced in Indiana Jones and the Last Crusade, was further fleshed out with stories about his travels with his father as a young boy. Indy damages or loses his right eye sometime between the events in 1957 and the early 1990s, when the "Old Indy" segments take place, as the elderly Indiana Jones wears an eyepatch. In 1999, Lucas removed the episode introductions and epilogues by George Hall for the VHS and DVD releases, and re-edited the episodes into chronologically ordered feature-length stories. The series title was also changed to The Adventures of Young Indiana Jones. The 2008 film, Indiana Jones and the Kingdom of the Crystal Skull, is the latest film in the series. Set in 1957, 19 years after the third film, it pits an older, wiser Indiana Jones against Soviet agents bent on harnessing the power of an extraterrestrial device discovered in South America. Jones is aided in his adventure by his former lover, Marion Ravenwood (Karen Allen), and her son, a young greaser named Henry "Mutt" Williams (Shia LaBeouf), later revealed to be Jones' unknown child. There were rumors that Harrison Ford would not return for any future installments and that LaBeouf would take over the Indy franchise. This film also reveals that Jones was recruited by the Office of Strategic Services during World War II, attaining the rank of colonel in the United States Army, and implies very strongly that in 1947 he was forced to investigate the Roswell UFO incident, an investigation that involved him in affairs related to Hangar 51.
He is tasked with conducting covert operations with MI6 agent George McHale against the Soviet Union. In March 2016, Disney announced a fifth Indiana Jones film to be in active development, with Ford and Spielberg set to return to the franchise. Initially set for release on July 10, 2020, the film's release date was pushed back to July 9, 2021, due to production issues, then further pushed back to July 29, 2022, due to a reshuffle in Disney's release schedule amid the COVID-19 pandemic. In December 2020, Disney announced that James Mangold would be directing the film and that this would be the final time Harrison Ford would appear in the franchise. Attractions Indiana Jones is featured at several Walt Disney theme park attractions. The Indiana Jones Adventure attractions at Disneyland and Tokyo DisneySea ("Temple of the Forbidden Eye" and "Temple of the Crystal Skull," respectively) place Indy at the forefront of two similar archaeological discoveries. These two temples each contain a wrathful deity who threatens the guests who ride through in World War II troop transports. The attractions, some of the most expensive of their kind at the time, opened in 1995 and 2001, respectively, with sole design credit attributed to Walt Disney Imagineering. Ford was approached to reprise his role as Indiana Jones, but negotiations to secure his participation ultimately broke down in December 1994, for reasons that remain unknown. Instead, Dave Temple provided the voice of Jones. Ford's physical likeness has nonetheless been used in subsequent audio-animatronic figures for the attractions. Disneyland Paris also features an Indiana Jones-titled ride where people speed through ancient ruins in a runaway mine wagon similar to that found in Indiana Jones and the Temple of Doom. Indiana Jones and the Temple of Peril is a looping roller coaster engineered by Intamin, designed by Walt Disney Imagineering, and opened in 1993. The Indiana Jones Epic Stunt Spectacular!
is a live show that has been presented in the Disney's Hollywood Studios theme park of the Walt Disney World Resort with few changes since the park's 1989 opening (as Disney-MGM Studios). The 25-minute show presents various stunts framed in the context of a feature film production, and recruits members of the audience to participate in the show. Stunt artists in the show re-create and ultimately reveal some of the secrets of the stunts of the Raiders of the Lost Ark film, including the well-known "running-from-the-boulder" scene. Stunt performer Anislav Varbanov was fatally injured in August 2009 while rehearsing the popular show. Also formerly at Disney's Hollywood Studios, an audio-animatronic Indiana Jones appeared in another attraction: The Great Movie Ride's Raiders of the Lost Ark segment. Literature Graphic novels Indy also appears in the 2004 Dark Horse Comics story Into the Great Unknown, collected in Star Wars Tales Volume 5. In this non-canon story bringing together two of Harrison Ford's best-known roles, Indy and Short Round discover a crash-landed Millennium Falcon in the Pacific Northwest, along with Han Solo's skeleton and the realization that a rumored nearby Sasquatch is in fact Chewbacca. Indy also appears in a series of Marvel Comics. Movie tie-in novelizations The four Indiana Jones film scripts were novelized and published in the time-frame of the films' initial releases.
Raiders of the Lost Ark was novelized by Campbell Black based on the script by Lawrence Kasdan that was based on the story by George Lucas and Philip Kaufman and published in April 1981 by Ballantine Books; Indiana Jones and the Temple of Doom was novelized by James Kahn and based on the script by Willard Huyck & Gloria Katz that was based on the story by George Lucas and published May 1984 by Ballantine Books; Indiana Jones and the Last Crusade was novelized by Rob MacGregor based on the script by Jeffrey Boam that was based on a story by George Lucas and Menno Meyjes and published June 1989 by Ballantine Books. Nearly 20 years later Indiana Jones and the Kingdom of the Crystal Skull was novelized by James Rollins based on the script by David Koepp based on the story by George Lucas and Jeff Nathanson and published May 2008 by Ballantine Books. In addition, in 2008 to accompany the release of Kingdom of the Crystal Skull, Scholastic Books published juvenile novelizations of the four scripts written, successively in the order above, by Ryder Windham, Suzanne Weyn, Ryder Windham, and James Luceno. All these books have been reprinted, with Raiders of the Lost Ark being retitled Indiana Jones and the Raiders of the Lost Ark. While these are the principal titles and authors, there are numerous other volumes derived from the four film properties. Original novels From February 1991 through February 1999, 12 original Indiana Jones-themed adult novels were licensed by Lucasfilm, Ltd. and written by three genre authors of the period. Ten years afterward, a 13th original novel was added, also written by a popular genre author. The first 12 were published by Bantam Books; the last by Ballantine Books in 2009. (See Indiana Jones (franchise) for broad descriptions of these original adult novels.) The novels are: Rob MacGregor (author) Indiana Jones and the Peril at Delphi, February 1991. Indiana Jones and the Dance of the Giants, June 1991.
Indiana Jones and the Seven Veils, December 1991. Indiana Jones and the Genesis Deluge, February 1992. Indiana Jones and the Unicorn's Legacy, September 1992. Indiana Jones and the Interior World, December 1992. Martin Caidin (author) Indiana Jones and the Sky Pirates, December 1993. Indiana Jones and the White Witch, April 1994. Max McCoy (author) Indiana Jones and the Philosopher's Stone, May 1995. Indiana Jones and the Dinosaur Eggs, March 1996. Indiana Jones and the Hollow Earth, March 1997. Indiana Jones and the Secret of the Sphinx, February 1999. Steve Perry (author) Indiana Jones and the Army of the Dead, September 2009. Video games The character has appeared in several officially licensed games, beginning with adaptations of Raiders of the Lost Ark, Indiana Jones and the Temple of Doom, two adaptations of Indiana Jones and the Last Crusade (one with purely action mechanics, one with an adventure- and puzzle-based structure) and Indiana Jones' Greatest Adventures, which included the storylines from all three of the original films. Following this, the games branched off into original storylines with Indiana Jones in the Lost Kingdom, Indiana Jones and the Fate of Atlantis, Indiana Jones and the Infernal Machine, Indiana Jones and the Emperor's Tomb and Indiana Jones and the Staff of Kings. Emperor's Tomb sets up Jones' companion Wu Han and the search for Nurhaci's ashes seen at the beginning of Temple of Doom. The first two games were developed by Hal Barwood and starred Doug Lee as the voice of Indiana Jones; Emperor's Tomb had David Esch fill the role and Staff of Kings starred John Armstrong. Indiana Jones and the Infernal Machine was the first Indy-based game presented in three dimensions, as opposed to the 8-bit, side-scrolling games that preceded it. There is also a small game from LucasArts, Indiana Jones and His Desktop Adventures.
A video game was made for young Indy called Young Indiana Jones and the Instruments of Chaos, as well as a video game version of The Young Indiana Jones Chronicles. Two Lego Indiana Jones games have also been released. Lego Indiana Jones: The Original Adventures was released in 2008 and follows the plots of the first three films. It was followed by Lego Indiana Jones 2: The Adventure Continues in late 2009. The sequel includes an abbreviated reprise of the first three films, but focuses on the plot of Indiana Jones and the Kingdom of the Crystal Skull. Before receiving his own Lego games, however, he appeared as a secret playable character in Lego Star Wars: The Complete Saga; watching the trailer for Lego Indiana Jones: The Original Adventures at the cinema in the hub world unlocks him once the trailer finishes. He also makes a brief appearance in a minigame in Lego Star Wars III: The Clone Wars during the level "Castle Hostage". Social gaming company Zynga introduced Indiana Jones to their "Adventure World" game in late 2011. Character description and formation "Indiana" Jones' full name is Dr. Henry Walton Jones, Jr., and his nickname is often shortened to "Indy". In his role as a college professor of archaeology, Jones is scholarly and learned in a tweed suit, lecturing on ancient civilizations. At the opportunity to recover important artifacts, Dr. Jones transforms into "Indiana," a "non-superhero superhero" image he has concocted for himself. Producer Frank Marshall said, "Indy [is] a fallible character. He makes mistakes and gets hurt. ... That's the other thing people like: He's a real character, not a character with superpowers." Spielberg said there "was the willingness to allow our leading man to get hurt and to express his pain and to get his mad out and to take pratfalls and sometimes be the butt of his own jokes.
I mean, Indiana Jones is not a perfect hero, and his imperfections, I think, make the audience feel that, with a little more exercise and a little more courage, they could be just like him." According to Spielberg biographer Douglas Brode, Indiana created his heroic figure so as to escape the dullness of teaching at a school. Both of Indiana's personas reject one another in philosophy, creating a duality. Harrison Ford said the fun of playing the character was that Indiana is both a romantic and a cynic, while scholars have analyzed Indiana as having traits of a lone wolf; a man on a quest; a noble treasure hunter; a hardboiled detective; a human superhero; and an American patriot. Like many characters in his films, Jones has some autobiographical elements of Spielberg. Indiana lacks a proper father figure because of his strained relationship with his father, Henry Jones, Sr. His own contained anger is misdirected towards Professor Abner Ravenwood, his mentor at the University of Chicago, leading to a strained relationship with Marion Ravenwood. The teenage Indiana bases his own look on a figure from the prologue of Indiana Jones and the Last Crusade, after being given his fedora hat. Marcus Brody acts as Indiana's positive role model at the college. Indiana's own insecurities are made worse by the absence of his mother. In Indiana Jones and the Temple of Doom, he becomes a father figure to Short Round in order to survive, and he is rescued from Kali's evil by Short Round's dedication. In Raiders of the Lost Ark, he is wise enough to close his eyes in the presence of God in the Ark of the Covenant. By contrast, his rival Rene Belloq is killed for having the audacity to try to communicate directly with God. Indiana's intentions are revealed as prosocial, as he believes artifacts "belong in a museum."
In the film's climax, Indiana undergoes "literal" tests of faith to retrieve the Grail and save his father's life. He also remembers Jesus as a historical figure—a humble carpenter—rather than an exalted figure when he recognizes the simple nature and tarnished appearance of the real Grail amongst a large assortment of much more ornately decorated ones. Henry Senior rescues his son from falling to his death when reaching for the fallen Grail, telling him to "let it go," overcoming his mercenary nature. The Young Indiana Jones Chronicles explains how Indiana becomes solitary and less idealistic following his service in World War I. In Indiana Jones and the Kingdom of the Crystal Skull, Jones is older and wiser, whereas his sidekicks Mutt and Mac are youthfully arrogant, and greedy, respectively. Origins and inspirations Indiana Jones is modeled after the strong-jawed heroes of the matinée serials and pulp magazines that George Lucas and Steven Spielberg enjoyed in their childhoods (such as the Republic Pictures serials, and the Doc Savage series). Sir H. Rider Haggard's safari guide/big game hunter Allan Quatermain of King Solomon's Mines is a notable template for Jones. The two friends first discussed the project in Hawaii around the time of the release of the first Star Wars film. Spielberg told Lucas how he wanted his next project to be something fun, like a James Bond film (this would later be referenced when they cast Sean Connery as Henry Jones Sr.). According to sources, Lucas responded to the effect that he had something "even better", or that he'd "got that beat." One of the possible bases for Indiana Jones is Professor Challenger, created by Sir Arthur Conan Doyle in 1912 for his novel, The Lost World. Challenger was based on Doyle's physiology professor, William Rutherford, an adventuring academic, albeit a zoologist/anthropologist. Another important influence on the development of the character Indiana Jones is the Disney character Scrooge McDuck. 
Carl Barks created Scrooge in 1947 as a one-off relation for Donald Duck in the latter's self-titled comic book. Barks realized that the character had more potential, so a separate Uncle Scrooge comic book series full of exciting and strange adventures in the company of his duck nephews was developed. This Uncle Scrooge comic series strongly influenced George Lucas. This appreciation of Scrooge as an adventurer
This film also reveals that Jones was recruited by the Office of Strategic Services during World War II, attaining the rank of colonel in the United States Army, and implies very strongly that in 1947 he was forced to investigate the Roswell UFO incident, and the investigation saw that he was involved in affairs related to Hangar 51. He is tasked with conducting covert operations with MI6 agent George Michale against the Soviet Union. In March 2016, Disney announced a fifth Indiana Jones film to be in active development, with Ford and Spielberg set to return to the franchise. Initially set for release on July 10, 2020, the film's release date was pushed back to July 9, 2021, due to production issues, then further pushed back to July 29, 2022, due to a reshuffle in Disney's release schedule as due to the COVID-19 pandemic. In December 2020 Disney announced that James Mangold would be directing the film and that this would be final time Harrison Ford would appear in the franchise. Attractions Indiana Jones is featured at several Walt Disney theme park attractions. The Indiana Jones Adventure attractions at Disneyland and Tokyo DisneySea ("Temple of the Forbidden Eye" and "Temple of the Crystal Skull," respectively) place Indy at the forefront of two similar archaeological discoveries. These two temples each contain a wrathful deity who threatens the guests who ride through in World War II troop transports. The attractions, some of the most expensive of their kind at the time, opened in 1995 and 2001, respectively, with sole design credit attributed to Walt Disney Imagineering. Ford was approached to reprise his role as Indiana Jones, but ultimately negotiations to secure Ford's participation broke down in December 1994, for definitively unknown reasons. Instead, Dave Temple provided the voice of Jones. Ford's physical likeness, however, has nonetheless been used in subsequent audio-animatronic figures for the attractions. 
Disneyland Paris also features an Indiana Jones-titled ride where people speed off through ancient ruins in a runaway mine wagon similar to that found in Indiana Jones and the Temple of Doom. Indiana Jones and the Temple of Peril is a looping roller coaster engineered by Intamin, designed by Walt Disney Imagineering, and opened in 1993. The Indiana Jones Epic Stunt Spectacular! is a live show that has been presented in the Disney's Hollywood Studios theme park of the Walt Disney World Resort with few changes since the park's 1989 opening, as Disney-MGM Studios. The 25-minute show presents various stunts framed in the context of a feature film production, and recruits members of the audience to participate in the show. Stunt artists in the show re-create and ultimately reveal some of the secrets of the stunts of the Raiders of the Lost Ark films, including the well-known "running-from-the-boulder" scene. Stunt performer Anislav Varbanov was fatally injured in August 2009, while rehearsing the popular show. Also formerly at Disney's Hollywood Studios, an audio-animatronic Indiana Jones appeared in another attraction; during The Great Movie Ride's Raiders of the Lost Ark segment. Literature Graphic novels Indy also appears in the 2004 Dark Horse Comics story Into the Great Unknown, collected in Star Wars Tales Volume 5. In this non-canon story bringing together two of Harrison Ford's best-known roles, Indy and Short Round discover a crash-landed Millennium Falcon in the Pacific Northwest, along with Han Solo's skeleton and the realization that a rumored nearby Sasquatch is in fact Chewbacca. Indy also appears in a series of Marvel Comics. Movie tie-in novelizations The four Indiana Jones film scripts were novelized and published in the time-frame of the films' initial releases. 
Raiders of the Lost Ark was novelized by Campbell Black based on the script by Lawrence Kasdan that was based on the story by George Lucas and Philip Kaufman and published in April 1981 by Ballantine Books; Indiana Jones and the Temple of Doom was novelized by James Kahn and based on the script by Willard Huyck & Gloria Katz that was based on the story by George Lucas and published May 1984 by Ballantine Books; Indiana Jones and the Last Crusade was novelized by Rob MacGregor based on the script by Jeffrey Boam that was based on a story by George Lucas and Menno Meyjes and published June 1989 by Ballantine Books. Nearly 20 years later Indiana Jones and the Kingdom of the Crystal Skull was novelized by James Rollins based on the script by David Koepp based on the story by George Lucas and Jeff Nathanson and published May 2008 by Ballantine Books. In addition, in 2008 to accompany the release of Kingdom of Skulls, Scholastic Books published juvenile novelizations of the four scripts written, successively in the order above, by Ryder Windham, Suzanne Weyn, Ryder Windham, and James Luceno. All these books have been reprinted, with Raiders of the Lost Ark being retitled Indiana Jones and the Raiders of the Lost Ark. While these are the principal titles and authors, there are numerous other volumes derived from the four film properties. Original novels From February 1991 through February 1999, 12 original Indiana Jones-themed adult novels were licensed by Lucasfilm, Ltd. and written by three genre authors of the period. Ten years afterward, a 13th original novel was added, also written by a popular genre author. The first 12 were published by Bantam Books; the last by Ballantine Books in 2009. (See Indiana Jones (franchise) for broad descriptions of these original adult novels.) The novels are: Rob MacGregor (author) Indiana Jones and the Peril at Delphi, February 1991. Indiana Jones and the Dance of the Giants, June 1991. 
Indiana Jones and the Seven Veils, December 1991. Indiana Jones and the Genesis Deluge, February 1992. Indiana Jones and the Unicorn's Legacy, September 1992. Indiana Jones and the Interior World, December 1992. Martin Caidin (author) Indiana Jones and the Sky Pirates, December 1993. Indiana Jones and the White Witch, April 1994. Max McCoy (author) Indiana Jones and the Philosopher's Stone, May 1995. Indiana Jones and the Dinosaur Eggs, March 1996. Indiana Jones and the Hollow Earth, March 1997. Indiana Jones and the Secret of the Sphinx, February 1999. Steve Perry (author) Indiana Jones and the Army of the Dead, September 2009. Video games The character has appeared in several officially licensed games, beginning with adaptations of Raiders of the Lost Ark, Indiana Jones and the Temple of Doom, two adaptations of Indiana Jones and the Last Crusade (one with purely action mechanics, one with an adventure- and puzzle-based structure) and Indiana Jones' Greatest Adventures, which included the storylines from all three of the original films. Following this, the games branched off into original storylines with Indiana Jones in the Lost Kingdom, Indiana Jones and the Fate of Atlantis, Indiana Jones and the Infernal Machine, Indiana Jones and the Emperor's Tomb and Indiana Jones and the Staff of Kings. Emperor's Tomb sets up Jones' companion Wu Han and the search for Nurhaci's ashes seen at the beginning of Temple of Doom. The first two games were developed by Hal Barwood and starred Doug Lee as the voice of Indiana Jones; Emperor's Tomb had David Esch fill the role and Staff of Kings starred John Armstrong. Indiana Jones and the Infernal Machine was the first Indy-based game presented in three dimensions, as opposed to 8-bit graphics and side-scrolling games before. There is also a small game from Lucas Arts Indiana Jones and His Desktop Adventures. 
A video game was made for young Indy called Young Indiana Jones and the Instruments of Chaos, as well as a video game version of The Young Indiana Jones Chronicles. Two Lego Indiana Jones games have also been released. Lego Indiana Jones: The Original Adventures was released in 2008 and follows the plots of the first three films. It was followed by Lego Indiana Jones 2: The Adventure Continues in late 2009. The sequel includes an abbreviated reprise of the first three films, but focuses on the plot of Indiana Jones and the Kingdom of the Crystal Skull. However, before he got his own Lego games, he appeared as a secret character in Lego Star Wars: The Complete Saga as a playable character. If you go to the cinema in the hub world and watch the trailer for Lego Indiana Jones: The Original Adventures, you can unlock him as a playable character after the trailer is finished. He also makes a brief appearance in a minigame in Lego Star Wars III: The Clone Wars during the level “Castle Hostage”. Social gaming company Zynga introduced Indiana Jones to their "Adventure World" game in late 2011. Character description and formation "Indiana" Jones' full name is Dr. Henry Walton Jones, Jr., and his nickname is often shortened to "Indy". In his role as a college professor of archaeology, Jones is scholarly and learned in a tweed suit, lecturing on ancient civilizations. At the opportunity to recover important artifacts, Dr. Jones transforms into "Indiana," a "non-superhero superhero" image he has concocted for himself. Producer Frank Marshall said, "Indy [is] a fallible character. He makes mistakes and gets hurt. ... That's the other thing people like: He's a real character, not a character with superpowers." Spielberg said there "was the willingness to allow our leading man to get hurt and to express his pain and to get his mad out and to take pratfalls and sometimes be the butt of his own jokes. 
I mean, Indiana Jones is not a perfect hero, and his imperfections, I think, make the audience feel that, with a little more exercise and a little more courage, they could be just like him." According to Spielberg biographer Douglas Brode, Indiana created his heroic figure so as to escape the dullness of teaching at a school. Both of Indiana's personas reject one another in philosophy, creating a duality. Harrison Ford said the fun of playing the character was that Indiana is both a romantic and a cynic, while scholars have analyzed Indiana as having traits of a lone wolf; a man on a quest; a noble treasure hunter; a hardboiled detective; a human superhero; and an American patriot. Like many characters in his films, Jones has some autobiographical elements of Spielberg. Indiana lacks a proper father figure because of his strained relationship with his father, Henry Jones, Sr. His own contained anger is misdirected towards Professor Abner Ravenwood, his mentor at the University of Chicago, leading to a strained relationship with Marion Ravenwood. The teenage Indiana bases his own look on a figure from the prologue of Indiana Jones and the Last Crusade, after being given his hat. Marcus Brody acts as Indiana's positive role model at the college. Indiana's own insecurities are made worse by the absence of his mother. In Indiana Jones and the Temple of Doom, he becomes the father figure to Short Round, to survive; he is rescued from Kali's evil by Short Round's dedication. In Raiders of the Lost Ark, he is wise enough to close his eyes in the presence of God in the Ark of the Covenant. By contrast, his rival Rene Belloq is killed for having the audacity to try to communicate directly with God. In the prologue of Indiana Jones and the Last Crusade, Jones is seen as a teenager, establishing his look when given a fedora hat. Indiana's intentions are revealed as prosocial, as he believes artifacts "belong in a museum." 
In the film's climax, Indiana undergoes "literal" tests of faith to retrieve the Grail and save his father's life. He also remembers Jesus as a historical figure—a humble carpenter—rather than an exalted figure when he recognizes the simple nature and tarnished appearance of the real Grail amongst a large assortment of much more ornately decorated ones. Henry Senior rescues his son from falling to his death when reaching for the fallen Grail, telling him to "let it go," overcoming his mercenary nature. The Young Indiana Jones Chronicles explains how Indiana becomes solitary and less idealistic following his service in World War I. In Indiana Jones and the Kingdom of the Crystal Skull, Jones is older and wiser, whereas his sidekicks Mutt and Mac are youthfully arrogant and greedy, respectively. Origins and inspirations Indiana Jones is modeled after the strong-jawed heroes of the matinée serials and pulp magazines that George Lucas and Steven Spielberg enjoyed in their childhoods (such as the Republic Pictures serials, and the Doc Savage series). Sir H. Rider Haggard's safari guide/big game hunter Allan Quatermain of King Solomon's Mines is
terms, simplest form or reduced fraction) is a fraction in which the numerator and denominator are integers that have no other common divisors than 1 (and −1, when negative numbers are considered). In other words, a fraction a/b is irreducible if and only if a and b are coprime, that is, if a and b have a greatest common divisor of 1. In higher mathematics, "irreducible fraction" may also refer to rational fractions such that the numerator and the denominator are coprime polynomials. Every positive rational number can be represented as an irreducible fraction in exactly one way. An equivalent definition is sometimes useful: if a and b are integers, then the fraction a/b is irreducible if and only if there is no other equal fraction c/d such that |c| < |a| or |d| < |b|, where |a| means the absolute value of a. (Two fractions a/b and c/d are equal or equivalent if and only if ad = bc.) For example, 1/4, 5/6, and −101/100 are all irreducible fractions. On the other hand, 2/4 is reducible since it is equal in value to 1/2, and the numerator of 1/2 is less than the numerator of 2/4. A fraction that is reducible can be reduced by dividing both the numerator and denominator by a common factor. It can be fully reduced to lowest terms if both are divided by their greatest common divisor. In order to find the greatest common divisor, the Euclidean algorithm
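The reduction procedure described above can be sketched in a few lines of Python. This is an illustrative sketch: the function name reduce_fraction is our own, and the standard-library math.gcd supplies the greatest common divisor (computed via the Euclidean algorithm).

```python
from math import gcd

def reduce_fraction(numerator: int, denominator: int) -> tuple[int, int]:
    """Reduce a fraction to lowest terms by dividing out the greatest
    common divisor of numerator and denominator."""
    if denominator == 0:
        raise ZeroDivisionError("denominator must be nonzero")
    g = gcd(numerator, denominator)
    # Normalize the sign so the denominator is always positive.
    if denominator < 0:
        g = -g
    return numerator // g, denominator // g

print(reduce_fraction(120, 90))  # (4, 3)
print(reduce_fraction(2, 4))     # (1, 2)
```

Dividing by the greatest common divisor, rather than repeatedly dividing by smaller common factors, reaches lowest terms in a single step.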
associative algebras consisting of coquaternions and 2 × 2 real matrices are isomorphic as rings. Yet they appear in different contexts for application (plane mapping and kinematics) so the isomorphism is insufficient to merge the concepts. In homotopy theory, the fundamental group of a space X at a point p, though technically denoted π1(X, p) to emphasize the dependence on the base point, is often written lazily as simply π1(X) if X is path connected. The reason for this is that the existence of a path between two points allows one to identify loops at one with loops at the other; however, unless π1(X) is abelian this isomorphism
coordinates can be 0 or 1, and the y coordinates can be 0, 1, or 2, where addition in the x-coordinate is modulo 2 and addition in the y-coordinate is modulo 3. These structures are isomorphic under addition, under the following scheme: (0,0) → 0, (1,1) → 1, (0,2) → 2, (1,0) → 3, (0,1) → 4, (1,2) → 5, or in general (a,b) → (3a + 4b) mod 6. For example, (1,1) + (1,0) = (0,1), which translates in the other system as 1 + 3 = 4. Even though these two groups "look" different in that the sets contain different elements, they are indeed isomorphic: their structures are exactly the same. More generally, the direct product of two cyclic groups Z_m and Z_n is isomorphic to Z_mn if and only if m and n are coprime, per the Chinese remainder theorem. Relation-preserving isomorphism If one object consists of a set X with a binary relation R and the other object consists of a set Y with a binary relation S, then an isomorphism from X to Y is a bijective function f : X → Y such that f(u) S f(v) if and only if u R v. S is reflexive, irreflexive, symmetric, antisymmetric, asymmetric, transitive, total, trichotomous, a partial order, total order, well-order, strict weak order, total preorder (weak order), an equivalence relation, or a relation with any other special properties, if and only if R is. For example, if R is an ordering ≤ and S an ordering ⊑, then an isomorphism from X to Y is a bijective function f : X → Y such that f(u) ⊑ f(v) if and only if u ≤ v. Such an isomorphism is called an order isomorphism or (less commonly) an isotone isomorphism. If X = Y, then this is a relation-preserving automorphism. Applications In algebra, isomorphisms are defined for all algebraic structures. Some are more specifically studied; for example: Linear isomorphisms between vector spaces; they are specified by invertible matrices. Group isomorphisms between groups; the classification of isomorphism classes of finite groups is an open problem. Ring isomorphism between rings. Field isomorphisms are the same as ring isomorphism between fields; their study, and more specifically the study of field automorphisms, is an important part of Galois theory.
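The group isomorphism in the example above can be checked exhaustively, since both groups are finite. The mapping phi below, (a, b) ↦ (3a + 4b) mod 6, is a standard explicit scheme for this pair of groups; the name phi and the checking code are our own illustration.

```python
from itertools import product

def phi(a: int, b: int) -> int:
    # Candidate isomorphism Z_2 x Z_3 -> Z_6: (a, b) |-> (3a + 4b) mod 6.
    return (3 * a + 4 * b) % 6

pairs = list(product(range(2), range(3)))

# Bijective: the six pairs map onto the six residues 0..5 exactly once.
assert sorted(phi(a, b) for a, b in pairs) == list(range(6))

# Homomorphism: phi of a component-wise sum (mod 2, mod 3) equals the
# sum of the images mod 6.
for (a1, b1), (a2, b2) in product(pairs, pairs):
    assert phi((a1 + a2) % 2, (b1 + b2) % 3) == (phi(a1, b1) + phi(a2, b2)) % 6

print("Z_2 x Z_3 is isomorphic to Z_6 under phi")
```

Because both properties hold, phi is a bijective homomorphism, which is exactly what a group isomorphism is.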
Just as the automorphisms of an algebraic structure form a group, the isomorphisms between two algebras sharing a common structure form a heap. Letting a particular isomorphism identify the two structures turns this heap into a group. In mathematical analysis, the Laplace transform is an isomorphism mapping hard differential equations into easier algebraic equations. In graph theory, an isomorphism between two graphs G and H is a bijective map f from the vertices of G to the vertices of H that preserves the "edge structure" in the sense that there is an edge from vertex u to vertex v in G if and only if there is an edge from f(u) to f(v) in H. See graph isomorphism. In mathematical analysis, an isomorphism between two Hilbert spaces is a bijection preserving addition, scalar multiplication, and inner product. In early theories of logical atomism, the formal relationship between facts and true propositions was theorized by Bertrand Russell and Ludwig Wittgenstein to be isomorphic. An example of this line of thinking can be found in Russell's Introduction to Mathematical Philosophy. In cybernetics, the good regulator or Conant–Ashby theorem is stated: "Every good regulator of a system must be a model of that system". Whether regulated or self-regulating, an isomorphism is required between the regulator and processing parts of the system. Category theoretic view In category theory, given a category C, an isomorphism is a morphism f : a → b that has an inverse morphism g : b → a, that is, fg = 1_b and gf = 1_a. For example, a bijective linear map is an isomorphism between vector spaces, and a bijective continuous function whose inverse is also continuous is an isomorphism between topological spaces, called a homeomorphism. Two categories C and D are isomorphic if there exist functors F : C → D and G : D → C which are mutually inverse to each other, that is, FG = 1_D (the identity functor on D) and GF = 1_C (the identity functor on C). Isomorphism vs.
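The edge-preservation condition for graph isomorphism can be tested directly on small graphs. The helper below is our own illustrative sketch (not a library routine): it checks that a proposed vertex bijection maps edges to edges and non-edges to non-edges.

```python
def is_graph_isomorphism(edges_g, edges_h, mapping) -> bool:
    """Check that `mapping` (a dict on G's vertices) preserves edge
    structure: {u, v} is an edge of G iff {mapping[u], mapping[v]} is
    an edge of H. Brute force, for small undirected graphs."""
    g = {frozenset(e) for e in edges_g}
    h = {frozenset(e) for e in edges_h}
    verts = list(mapping)
    return all(
        (frozenset((u, v)) in g) == (frozenset((mapping[u], mapping[v])) in h)
        for u in verts for v in verts if u != v
    )

# A 4-cycle on {0,1,2,3} and the same cycle relabeled on {'a','b','c','d'}.
G = [(0, 1), (1, 2), (2, 3), (3, 0)]
H = [("a", "c"), ("c", "b"), ("b", "d"), ("d", "a")]
print(is_graph_isomorphism(G, H, {0: "a", 1: "c", 2: "b", 3: "d"}))  # True
```

Note that this only verifies a given bijection; deciding whether *any* such bijection exists is the graph isomorphism problem, which is much harder in general.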
bijective morphism In a concrete category (roughly, a category whose objects are sets (perhaps with extra structure) and whose morphisms are structure-preserving functions), such as the category of topological spaces or categories of algebraic objects (like the category of groups, the category of rings, and the category of modules), an isomorphism must be

for solutions of a universal property), or if the isomorphism is much more natural (in some sense) than other isomorphisms. For example, for every prime number p, all fields with p elements are canonically isomorphic, with a unique isomorphism. The isomorphism theorems provide canonical isomorphisms that are not unique. The term is mainly used for algebraic structures. In this case, mappings are called homomorphisms, and a homomorphism is an isomorphism if and only if it is bijective. In various areas of mathematics, isomorphisms have received specialized names, depending on the type of structure under consideration. For example: An isometry is an isomorphism of metric spaces. A homeomorphism is an isomorphism of topological spaces. A diffeomorphism is an isomorphism of spaces equipped with a differential structure, typically differentiable manifolds. A permutation is an automorphism of a set. In geometry, isomorphisms and automorphisms are often called transformations, for example rigid transformations, affine transformations, projective transformations. Category theory, which can be viewed as a formalization of the concept of mapping between structures, provides a language that may be used to unify the approach to these different aspects of the basic idea. Examples Logarithm and exponential Let R+ be the multiplicative group of positive real numbers, and let R be the additive group of real numbers. The logarithm function log : R+ → R satisfies log(xy) = log x + log y for all x, y in R+, so it is a group homomorphism. The exponential function exp : R → R+ satisfies exp(x + y) = (exp x)(exp y) for all x, y in R, so it too is a homomorphism. The identities log(exp x) = x and exp(log y) = y show that log and exp are inverses of each other.
Since log is a homomorphism that has an inverse that is also a homomorphism, log is an isomorphism of groups. The function log is an isomorphism which translates multiplication of positive real numbers into addition of real numbers. This facility makes it possible to multiply real numbers using a ruler and a table of logarithms, or using a slide rule with a logarithmic scale. Integers modulo 6 Consider the group (Z_6, +), the integers from 0 to 5 with addition modulo 6, and the group (Z_2 × Z_3, +) of ordered pairs where the x coordinates can be 0 or 1 and the y coordinates can be 0, 1, or 2, with addition in the x-coordinate modulo 2 and addition in the y-coordinate modulo 3; these structures are isomorphic under addition.
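The log/exp isomorphism can be demonstrated numerically, within floating-point tolerance, using Python's standard math module:

```python
import math

# log turns multiplication of positive reals into addition of reals;
# exp is the inverse homomorphism. This is the principle behind
# slide rules and tables of logarithms.
x, y = 12.5, 3.2
assert math.isclose(math.log(x * y), math.log(x) + math.log(y))
assert math.isclose(math.exp(math.log(x)), x)

# "Multiply" by adding lengths on a logarithmic scale:
product_via_logs = math.exp(math.log(x) + math.log(y))
print(product_via_logs)
```

The printed value agrees with x * y = 40 up to floating-point rounding, illustrating that the two group operations correspond exactly under the map.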
International Court of Justice (ICJ), the Secretariat (UNSA), the Trusteeship Council (UNTC) and the Economic and Social Council (ECOSOC). Other international bodies include multinational companies (MNCs) like Shell, and regional and continental bodies/blocs like the European Union (EU), the African Union (AU), and the East African Community (EAC), among others. Expansion and growth Held and McGrew counted thousands of IGOs worldwide in 2002, and this number continues to rise. This may be attributed to globalization, which increases and encourages co-operation among and within states and which has also provided easier means for IGO growth as a result of increased international relations. This is seen economically, politically, militarily, as well as on the domestic level. Economically, IGOs gain material and non-material resources for economic prosperity. IGOs also provide more political stability within the state and among differing states. Military alliances are also formed by establishing common standards in order to ensure security of the members to ward off outside threats. Lastly, the formation has encouraged autocratic states to develop into democracies in order to form an effective internal government. According to a different estimate, the number of IGOs in the world increased from fewer than 100 in 1949 to about 350 in 2000. Participation and involvement There are several different reasons a state may choose membership in an intergovernmental organization. But there are also reasons membership may be rejected. Reasons for participation: Economic rewards: In the case of the North American Free Trade Agreement (NAFTA), membership in the free trade agreement benefits the parties' economies. For example, Mexican companies are given better access to U.S. markets due to their membership.
Political influence: Smaller countries, such as Portugal and Belgium, which do not carry much political clout on the international stage, are given a substantial increase in influence through membership in IGOs such as the European Union. For countries with more influence, such as France and Germany, IGOs are also beneficial as the nation increases its influence in the smaller countries' internal affairs and expands other nations' dependence on itself, so as to preserve allegiance. Security: Membership in an IGO such as NATO gives security benefits to member countries. This provides an arena where political differences can be resolved. Democracy: It has been noted that member countries experience a greater degree of democracy and that those democracies survive longer. Reasons for rejecting membership: Loss of sovereignty: Membership often comes with a loss of state sovereignty, as treaties are signed that require co-operation on the part of all member states. Insufficient benefits: Often membership does not bring about substantial enough benefits to warrant membership in the organization. Privileges and immunities Intergovernmental organizations are provided with privileges and immunities that are intended to ensure their independent and effective functioning. They are specified in the treaties that give rise to the organization (such as the Convention on the Privileges and Immunities of the United Nations and the Agreement on the Privileges and Immunities of the International Criminal Court), which are normally supplemented by further multinational agreements and national regulations (for example the International Organizations Immunities

with an international legal personality. Intergovernmental organizations are an important aspect of public international law. Intergovernmental organizations in a legal sense should be distinguished from simple groupings or coalitions of states, such as the G7 or the Quartet.
Such groups or associations have not been founded by a constituent document and exist only as task groups. Intergovernmental organizations must also be distinguished from treaties. Many treaties (such as the North American Free Trade Agreement, or the General Agreement on Tariffs and Trade before the establishment of the World Trade Organization) do not establish an organization and instead rely purely on the parties for their administration, becoming legally recognized as an ad hoc commission. Other treaties have established an administrative apparatus which was not deemed to have been granted international legal personality. The broader concept wherein relations among three or more states are organized according to certain principles they hold in common is multilateralism. Types and purpose Intergovernmental organizations differ in function, membership, and membership criteria. They have various goals and scopes, often outlined in the treaty or charter. Some IGOs developed to fulfill a need for a neutral forum for debate or negotiation to resolve disputes. Others developed to carry out mutual interests with unified aims: to preserve peace through conflict resolution and better international relations, to promote international cooperation on matters such as environmental protection, to promote human rights, to promote social development (education, health care), to render humanitarian aid, and to promote economic development. Some are more general in scope (the United Nations) while others may have subject-specific missions (such as INTERPOL or the International Telecommunication Union and other standards organizations). Common types include: Worldwide or global organizations — generally open to nations worldwide as long as certain criteria are met: This category includes the United Nations (UN) and its specialized agencies, the World Health Organization, the International Telecommunication Union (ITU), the World Bank, and the International Monetary Fund (IMF).
It also includes globally operating intergovernmental organizations that are not an agency of the UN, including for example the Hague Conference on Private International Law, a globally operating intergovernmental organization based in The Hague that pursues the progressive unification of private international law, and the CGIAR (formerly the Consultative Group for International Agricultural Research), a global partnership that unites intergovernmental organizations engaged in research for a food-secure future. The International Human Rights Commission (IHRC) works to strengthen and support all nations' capacity to engage in sustainable development through educational access, relief programs, and ecological and bioethical reflection and action, while taking into consideration the traditional, social, and cultural values of each nation. Cultural, linguistic, ethnic, religious, or historical organizations — open to members based on some cultural, linguistic, ethnic, religious, or historical link: Examples include the Commonwealth of Nations, Arab League, Organisation internationale de la Francophonie, Community of Portuguese Language Countries, Turkic Council, International Organization of Turkic Culture, Organisation of Islamic Cooperation, and Commonwealth of Independent States (CIS). Economic organizations — based on macro-economic policy goals: Some are dedicated to free trade and reduction of trade barriers, e.g. the World Trade Organization and International Monetary Fund. Others are focused on international development. International cartels, such as OPEC, also exist. The Organisation for Economic Co-operation and Development (OECD) was founded as an economic-policy-focused organization. An example of a recently formed economic IGO is the Bank of the South. Educational organizations — centered on tertiary-level study.
EUCLID University was chartered as a university and umbrella organization dedicated to sustainable development in signatory countries; United Nations University researches pressing global problems that are the concern of the United Nations, its Peoples and Member States. Health and Population Organizations — based on |
The mission of the Secretariat is to provide high-quality and efficient services to the membership of the Union. It is tasked with the administrative and budgetary planning of the Union, as well as with monitoring compliance with ITU regulations, and it publishes the results of the work of the ITU. Secretary-General The Secretariat is headed by a Secretary-General who is responsible for the overall management of the Union and acts as its legal representative. The Secretary-General is elected by the Plenipotentiary Conference for four-year terms. On 23 October 2014, Houlin Zhao was elected as the 19th Secretary-General of the ITU at the Plenipotentiary Conference in Busan. His four-year mandate started on 1 January 2015, and he was formally inaugurated on 15 January 2015. He was re-elected on 1 November 2018 during the 2018 Plenipotentiary Conference in Dubai. Directors and Secretaries-General of ITU Membership Member states Membership of ITU is open to all member states of the United Nations. There are currently 193 member states of the ITU, including all UN member states except the Republic of Palau. The most recent member state to join the ITU is South Sudan, which became a member on 14 July 2011. Palestine was admitted as a United Nations General Assembly observer in 2010. Pursuant to UN General Assembly Resolution 2758 (XXVI) of 25 October 1971—which recognized the People's Republic of China (PRC) as "the only legitimate representative of China to the United Nations"—on 16 June 1972 the ITU Council adopted Resolution No. 693, which "decided to restore all its rights to the People's Republic of China in ITU and recognize the representatives of its Government as the only representatives of China to the ITU". Taiwan (claimed by China) received a country code and is listed as "Taiwan, China."
Sector members In addition to the 193 Member States, the ITU includes close to 900 "sector members"—private organizations like carriers, equipment manufacturers, media companies, funding bodies, research and development organizations, and international and regional telecommunication organizations. While non-voting, these members may still play a role in shaping the decisions of the Union. The sector members are divided as follows: 533 Sector Members, 207 Associates, and 158 from Academia. Administrative regions The ITU is divided into five administrative regions, designed to streamline administration of the organization. They are also used to ensure equitable distribution on the council, with seats being apportioned among the regions. They are as follows: Region A – The Americas (35 Member States) Region B – Western Europe (33 Member States) Region C – Eastern Europe and Northern Asia (21 Member States) Region D – Africa (54 Member States) Region E – Asia and Australasia (50 Member States) Regional offices The ITU operates six regional offices, as well as seven area offices. These offices help maintain direct contact with national authorities, regional telecommunication organizations and other stakeholders.
They are as follows: Regional Office for Africa, headquartered in Addis Ababa, Ethiopia Area Offices in Dakar, Senegal; Harare, Zimbabwe and Yaoundé, Cameroon Regional Office for the Americas, headquartered in Brasília, Brazil Area Offices in Bridgetown, Barbados; Santiago, Chile and Tegucigalpa, Honduras Regional Office for Arab States, headquartered in Cairo, Egypt Regional Office for Asia and the Pacific, headquartered in Bangkok, Thailand Area Office in Jakarta, Indonesia Regional Office for the Commonwealth of Independent States, headquartered in Moscow, Russia Regional Office for Europe, headquartered in Geneva, Switzerland Other regional organizations connected to ITU are: Asia-Pacific Telecommunity (APT) Arab Spectrum Management Group (ASMG) African Telecommunications Union (ATU) Caribbean Telecommunications Union (CTU) European Conference of Postal and Telecommunications Administrations (CEPT) Inter-American Telecommunication Commission (CITEL) Regional Commonwealth in the Field of Communications (RCC, representing former Soviet republics) World Summit on the Information Society The World Summit on the Information Society (WSIS) was convened by the ITU along with UNESCO, UNCTAD, and UNDP, with the aim of bridging the digital divide. It was held in the form of two conferences in 2003 and 2005, in Geneva and Tunis, respectively. World Conference on International Telecommunications 2012 In December 2012, the ITU facilitated the World Conference on International Telecommunications 2012 (WCIT-12) in Dubai. WCIT-12 was a treaty-level conference to address the International Telecommunications Regulations, the international rules for telecommunications, including international tariffs. The previous conference to update the Regulations (ITRs) was held in Melbourne in 1988.
ITU called for a public consultation on a draft document ahead of the conference. It is claimed the proposal would allow government restriction or blocking of information disseminated via the Internet and create a global regime of monitoring Internet communications, including the demand that those who send and receive information identify themselves. It would also allow governments to shut down the Internet, if it is believed that it

that the Telegraph Convention of 1875 and the Radiotelegraph Convention of 1927 were to be combined into a single convention, the International Telecommunication Convention, embracing the three fields of telegraphy, telephony and radio. On 15 November 1947, an agreement between ITU and the newly created United Nations recognized the ITU as the specialized agency for global telecommunications. This agreement entered into force on 1 January 1949, officially making the ITU an organ of the United Nations. ITU Sectors The ITU comprises three Sectors, each managing a different aspect of the matters handled by the Union, as well as ITU Telecom. The sectors were created during the restructuring of ITU at its 1992 Plenipotentiary Conference. Radio communication (ITU-R) Established in 1927 as the International Radio Consultative Committee or CCIR (from its French name), this Sector manages the international radio-frequency spectrum and satellite orbit resources. In 1992, the CCIR became the ITU-R. Standardization (ITU-T) Standardization was the original purpose of ITU since its inception. Established in 1956 as the International Telephone and Telegraph Consultative Committee or CCITT (from its French name), this Sector standardizes global telecommunications (except for radio). In 1993, the CCITT became the ITU-T. The Standardization work is undertaken by Study Groups, such as Study Group 13 on Networks and Study Group 16 on Multimedia. The parent body of the Study Groups is the quadrennial World Telecommunication Standardization Assembly.
New work areas can be developed in Focus Groups, such as the Focus Group on Machine Learning for 5G and the ITU-WHO Focus Group on Artificial Intelligence for Health. Development (ITU-D) Established in 1992, this Sector helps spread equitable, sustainable and affordable access to information and communication technologies (ICT). It also provides the Secretariat for the Broadband Commission for Sustainable Development. ITU Telecom ITU Telecom organizes major events for the world's ICT community. A permanent General Secretariat, headed by the Secretary-General, manages the day-to-day work of the Union and its sectors. Legal framework The basic texts of the ITU are adopted by the ITU Plenipotentiary Conference. The founding document of the ITU was the 1865 International Telegraph Convention, which has since been replaced several times (though the text is generally the same) and is now entitled the "Constitution and Convention of the International Telecommunication Union". In addition to the Constitution and Convention, the consolidated basic texts include the Optional Protocol on the settlement of disputes, the Decisions, Resolutions, Reports and Recommendations in force, as well as the General Rules of Conferences, Assemblies and Meetings of the Union. Governance Plenipotentiary Conference The Plenipotentiary Conference is the supreme organ of the ITU. It is composed of all 193 ITU members and meets every four years. The Conference determines the policies, direction and activities of the Union, and elects the members of other ITU organs. Council While the Plenipotentiary Conference is the Union's main decision-making body, the ITU Council acts as the Union's governing body in the interval between Plenipotentiary Conferences. It meets every year. It is composed of 48 members and works to ensure the smooth operation of the Union, as well as to consider broad telecommunication policy issues.
Email clients using IMAP generally leave messages on the server until the user explicitly deletes them. This and other characteristics of IMAP operation allow multiple clients to manage the same mailbox. Most email clients support IMAP in addition to Post Office Protocol (POP) to retrieve messages. IMAP offers access to the mail storage. Clients may store local copies of the messages, but these are considered to be a temporary cache. History IMAP was designed by Mark Crispin in 1986 as a remote access mailbox protocol, in contrast to the widely used POP, a protocol for simply retrieving the contents of a mailbox. It went through a number of iterations before the current VERSION 4rev1 (IMAP4), as detailed below: Original IMAP The original Interim Mail Access Protocol was implemented as a Xerox Lisp Machine client and a TOPS-20 server. No copies of the original interim protocol specification or its software exist. Although some of its commands and responses were similar to IMAP2, the interim protocol lacked command/response tagging and thus its syntax was incompatible with all other versions of IMAP. IMAP2 The interim protocol was quickly replaced by the Interactive Mail Access Protocol (IMAP2), defined in RFC 1064 (in 1988) and later updated by RFC 1176 (in 1990). IMAP2 introduced the command/response tagging and was the first publicly distributed version. IMAP3 IMAP3 is an extremely rare variant of IMAP. It was published as RFC 1203 in 1991. It was written specifically as a counter proposal to RFC 1176, which itself proposed modifications to IMAP2. IMAP3 was never accepted by the marketplace. The IESG reclassified RFC 1203 "Interactive Mail Access Protocol - Version 3" as a Historic protocol in 1993. The IMAP Working Group used RFC 1176 (IMAP2) rather than RFC 1203 (IMAP3) as its starting point.
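The command/response tagging that IMAP2 introduced, and that every later version keeps, is easy to see in a transcript. A minimal sketch in Python (the session lines, tag format, and credentials below are illustrative, not taken from any particular server):

```python
# IMAP clients prefix every command with a unique tag; the server's
# completion line echoes that tag, while untagged data lines start with "*".
def parse_imap_line(line):
    """Split a protocol line into (tag, rest)."""
    tag, _, rest = line.partition(" ")
    return tag, rest

session = [
    "a001 LOGIN alice secret",     # client command, tag a001
    "* 18 EXISTS",                 # untagged server data about the mailbox
    "a001 OK LOGIN completed",     # tagged completion matching a001
]

# The client matches "a001 OK ..." to its own command by the shared tag,
# and treats "*" lines as mailbox data that may arrive at any time.
tags = [parse_imap_line(line)[0] for line in session]
```

This tag matching is what lets responses arrive interleaved with unsolicited data, which the original interim protocol could not handle.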
IMAP2bis With the advent of MIME, IMAP2 was extended to support MIME body structures and add mailbox management functionality (create, delete, rename, message upload) that was absent from IMAP2. This experimental revision was called IMAP2bis; its specification was never published in non-draft form. An Internet Draft of IMAP2bis was published by the IETF IMAP Working Group in October 1993. This draft was based upon the following earlier specifications: the unpublished IMAP2bis.TXT document, RFC 1176, and RFC 1064 (IMAP2). The IMAP2bis.TXT draft documented the state of extensions to IMAP2 as of December 1992. Early versions of Pine were widely distributed with IMAP2bis support (Pine 4.00 and later supports IMAP4rev1). IMAP4 An IMAP Working Group formed in the IETF in the early 1990s took over responsibility for the IMAP2bis design. The IMAP WG decided to rename IMAP2bis to IMAP4 to avoid confusion. Advantages over POP Connected and disconnected modes When using POP, clients typically connect to the e-mail server briefly, only as long as it takes to download new messages. When using IMAP4, clients often stay connected as long as the user interface is active and download message content on demand. For users with many or large messages, this IMAP4 usage pattern can result in faster response times. Multiple simultaneous clients The POP protocol requires the currently connected client to be the only client connected to the mailbox. In contrast, the IMAP protocol specifically allows simultaneous access by multiple clients and provides mechanisms for clients to detect changes made to the mailbox by other, concurrently connected, clients. See for example RFC 3501, section 5.2, which specifically cites "simultaneous access to the same mailbox by multiple agents" as an example.
Access to MIME message parts and partial fetch Usually all Internet e-mail is transmitted in MIME format, allowing messages to have a tree structure where the leaf nodes are any of a variety of single part content types and the non-leaf nodes are any of a variety of multipart types. The IMAP4 protocol allows clients to retrieve any of the individual MIME parts separately and also to retrieve portions of either individual parts or the entire message. These mechanisms allow clients to retrieve the text portion of a message without retrieving attached files. Email protocols The Internet Message Access Protocol is an application layer Internet protocol that allows an e-mail client to access email on a remote mail server. The current version is defined by RFC 3501. An IMAP server typically listens on well-known port 143, while IMAP over SSL/TLS (IMAPS) uses 993. Incoming email messages are sent to an email server that stores messages in the recipient's email box. The user retrieves the messages with an email client that uses one of a number of email retrieval protocols. While some clients and servers preferentially use vendor-specific, proprietary protocols, almost all support POP and IMAP for retrieving email – allowing free choice between many e-mail clients such as Pegasus Mail or Mozilla Thunderbird to access these servers, and allowing the clients to be used with other servers.
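The MIME tree structure described above can be explored offline with Python's standard email package. This sketch builds a two-leaf multipart message and then selects only the text leaf, mimicking what an IMAP4 client achieves when it fetches a single body part instead of the whole message (the subject and attachment contents are made-up sample data):

```python
from email.message import EmailMessage

# Build a multipart/mixed message: the root is a non-leaf MIME node,
# and the body text and the attachment are its two leaf parts.
msg = EmailMessage()
msg["Subject"] = "report"
msg.set_content("Quarterly numbers attached.")
msg.add_attachment(b"\x00" * 1024, maintype="application",
                   subtype="octet-stream", filename="data.bin")

# Walk the tree and keep only the leaf nodes, then read just the text
# part, leaving the attachment bytes untouched.
leaves = [part for part in msg.walk() if not part.is_multipart()]
text = leaves[0].get_content()
```

Over the wire, an IMAP4 client would do the equivalent by fetching the BODYSTRUCTURE first and then requesting only the part it wants.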
It is also necessary to note that one is able to convert measurements made in one coordinate system to another. For example, suppose that your watch is running five minutes fast compared to the local standard time. If you know that this is the case, when somebody asks you what time it is, you are able to deduct five minutes from the time displayed on your watch in order to obtain the correct time. The measurements that an observer makes about a system depend therefore on the observer's frame of reference (you might say that the bus arrived at 5 past three, when in fact it arrived at three). Additional example For a simple example involving only the orientation of two observers, consider two people standing, facing each other on either side of a north-south street. See Figure 2. A car drives past them heading south. For the person facing east, the car was moving towards the right. However, for the person facing west, the car was moving toward the left. This discrepancy is because the two people used two different frames of reference from which to investigate this system. For a more complex example involving observers in relative motion, consider Alfred, who is standing on the side of a road watching a car drive past him from left to right. In his frame of reference, Alfred defines the spot where he is standing as the origin, the road as the x-axis and the direction in front of him as the positive y-axis. To him, the car moves along the x-axis with some velocity v in the positive x-direction. Alfred's frame of reference is considered an inertial frame of reference because he is not accelerating (ignoring effects such as Earth's rotation and gravity). Now consider Betsy, the person driving the car.
Betsy, in choosing her frame of reference, defines her location as the origin, the direction to her right as the positive x-axis, and the direction in front of her as the positive y-axis. In this frame of reference, it is Betsy who is stationary and the world around her that is moving – for instance, as she drives past Alfred, she observes him moving with velocity v in the negative y-direction. If she is driving north, then north is the positive y-direction; if she turns east, east becomes the positive y-direction. Finally, as an example of non-inertial observers, assume Candace is accelerating her car. As she passes by him, Alfred measures her acceleration and finds it to be a in the negative x-direction. Assuming Candace's acceleration is constant, what acceleration does Betsy measure? If Betsy's velocity is constant, she is in an inertial frame of reference, and she will find the acceleration to be the same as Alfred in her frame of reference, a in the negative y-direction. However, if she is accelerating at rate A in the negative y-direction (in other words, slowing down), she will find Candace's acceleration to be a′ = a − A in the negative y-direction—a smaller value than Alfred has measured. Similarly, if she is accelerating at rate A in the positive y-direction (speeding up), she will observe Candace's acceleration as a′ = a + A in the negative y-direction—a larger value than Alfred's measurement. Frames of reference are especially important in special relativity, because when a frame of reference is moving at some significant fraction of the speed of light, then the flow of time in that frame does not necessarily apply in another frame. The speed of light is considered to be the only true constant between moving frames of reference. Remarks It is important to note some assumptions made above about the various inertial frames of reference. Newton, for instance, employed universal time, as explained by the following example. Suppose that you own two clocks, which both tick at exactly the same rate.
You synchronize them so that they both display exactly the same time. The two clocks are now separated and one clock is on a fast moving train, traveling at constant velocity towards the other. According to Newton, these two clocks will still tick at the same rate and will both show the same time. Newton says that the rate of time as measured in one frame of reference should be the same as the rate of time in another. That is, there exists a "universal" time and all other times in all other frames of reference will run at the same rate as this universal time irrespective of their position and velocity. This concept of time and simultaneity was later generalized by Einstein in his special theory of relativity (1905) where he developed transformations between inertial frames of reference based upon the universal nature of physical laws and their economy of expression (Lorentz transformations). The definition of inertial reference frame can also be extended beyond three-dimensional Euclidean space. Newton assumed a Euclidean space, but general relativity uses a more general geometry. As an example of why this is important, consider the geometry of an ellipsoid. In this geometry, a "free" particle is defined as one at rest or traveling at constant speed on a geodesic path. Two free particles may begin at the same point on the surface, traveling with the same constant speed in different directions. After a length of time, the two particles collide at the opposite side of the ellipsoid. Both "free" particles traveled with a constant speed, satisfying the definition that no forces were acting. No acceleration occurred and so Newton's first law held true. This means that the particles were in inertial frames of reference. Since no forces were acting, it was the geometry of the situation which caused the two particles to meet each other again. In a similar way, it is now common to describe that we exist in a four-dimensional geometry known as spacetime.
In this picture, the curvature of this 4D space is responsible for the way in which two bodies with mass are drawn together even if no forces are acting. This curvature of spacetime replaces the force known as gravity in Newtonian mechanics and special relativity. Non-inertial frames Here the relation between inertial and non-inertial observational frames of reference is considered. The basic difference between these frames is the need in non-inertial frames for fictitious forces, as described below. An accelerated frame of reference is often delineated as being the "primed" frame, and all variables that are dependent on that frame are notated with primes, e.g. x′, y′, a′. The vector from the origin of an inertial reference frame to the origin of an accelerated reference frame is commonly notated as R. Given a point of interest that exists in both frames, the vector from the inertial origin to the point is called r, and the vector from the accelerated origin to the point is called r′. From the geometry of the situation, we get r = R + r′. Taking the first and second derivatives of this with respect to time, we obtain v = V + v′ and a = A + a′, where V and A are the velocity and acceleration of the accelerated system with respect to the inertial system and v and a are the velocity and acceleration of the point of interest with respect to the inertial frame. These equations allow transformations between the two coordinate systems; for example, we can now write Newton's second law as F = ma = m(A + a′). When there is accelerated motion due to a force being exerted there is manifestation of inertia. If an electric car designed to recharge its battery system when decelerating is switched to braking, the batteries are recharged, illustrating the physical strength of the manifestation of inertia. However, the manifestation of inertia does not prevent acceleration (or deceleration), for the manifestation of inertia occurs in response to change in velocity due to a force.
Seen from the perspective of a rotating frame of reference, the manifestation of inertia appears to exert a force (either in the centrifugal direction, or in a direction orthogonal to an object's motion, the Coriolis effect). A common sort of accelerated reference frame is a frame that is both rotating and translating (an example is a frame of reference attached to a CD which is playing while the player is carried). This arrangement leads to the equation (see Fictitious force for a derivation): a = a′ + (dω/dt) × r′ + 2ω × v′ + ω × (ω × r′) + A, or, to solve for the acceleration in the accelerated frame, a′ = a − (dω/dt) × r′ − 2ω × v′ − ω × (ω × r′) − A. Multiplying through by the mass m gives F′ = F_physical + F′_Euler + F′_Coriolis + F′_centrifugal − mA, where F′_Euler = −m (dω/dt) × r′ (Euler force), F′_Coriolis = −2m ω × v′ (Coriolis force), and F′_centrifugal = −m ω × (ω × r′) (centrifugal force). Separating non-inertial from inertial reference frames Theory Inertial and non-inertial reference frames can be distinguished by the absence or presence of fictitious forces, as explained shortly. The presence of fictitious forces indicates the physical laws are not the simplest laws available so, in terms of the special principle of relativity, a frame where fictitious forces are present is not an inertial frame: Bodies in non-inertial reference frames are subject to so-called fictitious forces (pseudo-forces); that is, forces that result from the acceleration of the reference frame itself and not from any physical force acting on the body. Examples of fictitious forces are the centrifugal force and the Coriolis force in rotating reference frames. How then, are "fictitious" forces to be separated from "real" forces? It is hard to apply the Newtonian definition of an inertial frame without this separation. For example, consider a stationary object in an inertial frame. Being at rest, no net force is applied. But in a frame rotating about a fixed axis, the object appears to move in a circle, and is subject to centripetal force (which is made up of the Coriolis force and the centrifugal force). How can we decide that the rotating frame is a non-inertial frame?
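Before turning to that question, it helps to see the fictitious-force terms concretely. For a frame rotating at constant angular velocity the Euler term vanishes, and the Coriolis and centrifugal terms can be evaluated directly; a sketch with made-up numbers (mass, rotation rate, position, and velocity are illustrative assumptions):

```python
# Evaluate the Coriolis and centrifugal terms for a frame rotating at
# constant angular velocity w about the z-axis (so the Euler term is zero).
def cross(a, b):
    """3-D vector cross product."""
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def scale(k, a):
    return [k * x for x in a]

m = 1.0                   # mass (kg), assumed
w = [0.0, 0.0, 2.0]       # angular velocity of the frame (rad/s), assumed
r = [1.0, 0.0, 0.0]       # position in the rotating frame (m), assumed
v = [0.0, 1.0, 0.0]       # velocity in the rotating frame (m/s), assumed

f_coriolis = scale(-2.0 * m, cross(w, v))         # -2m w x v'
f_centrifugal = scale(-m, cross(w, cross(w, r)))  # -m w x (w x r')
# For these numbers both fictitious forces point radially outward along +x,
# and the centrifugal magnitude equals m * |w|^2 * |r| = 4 N.
```

Searching for a physical source of such forces, and finding none, is exactly the first approach to identifying a non-inertial frame described next.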
There are two approaches to this resolution: one approach is to look for the origin of the fictitious forces (the Coriolis force and the centrifugal force). We will find there are no sources for these forces, no associated force carriers, no originating bodies. A second approach is to look at a variety of frames of reference. For any inertial frame, the Coriolis force and the centrifugal force disappear, so application of the principle of special relativity would identify these frames where the forces disappear as sharing the same and the simplest physical laws, and hence rule that the rotating frame is not an inertial frame. Newton examined this problem himself using rotating spheres, as shown in Figure 2 and Figure 3. He pointed out that if the spheres are not rotating, the tension in the tying string is measured as zero in every frame of reference. If the spheres only appear to rotate (that is, we are watching stationary spheres from a rotating frame), the zero tension in the string is accounted for by observing that the centripetal force is supplied by the centrifugal and Coriolis forces in combination, so no tension is needed. If the spheres really are rotating, the tension observed is exactly the centripetal force required by the circular motion. Thus, measurement of the tension in the string identifies the inertial frame: it is the one where the tension in the string provides exactly the centripetal force demanded by the motion as it is observed in that frame, and not a different value. That is, the inertial frame is the one where the fictitious forces vanish. So much for fictitious forces due to rotation. However, for linear acceleration, Newton expressed the idea of undetectability of straight-line accelerations held in common: This principle generalizes the notion of an inertial frame. 
For example, an observer confined in a free-falling lift will assert that he himself is a valid inertial frame, even if he is accelerating under gravity, so long as he has no knowledge about anything outside the lift. So, strictly speaking, inertial frame is a relative concept. With this in mind, we can define inertial frames collectively as a set of frames which are stationary or moving at constant velocity with respect to each other, so that a single inertial frame is defined as an element of this set. For these ideas to apply, everything observed in the frame has to be subject to a base-line, common acceleration shared by the frame itself. That situation would apply, for example, to the elevator example, where all objects are subject to the same gravitational acceleration, and the elevator itself accelerates at the same rate. Applications Inertial navigation systems use a cluster of gyroscopes and accelerometers to determine accelerations relative to inertial space. After a gyroscope is spun up in a particular orientation in inertial space, the law of conservation of angular momentum requires that it retain that orientation as long as no external forces are applied to it. Three orthogonal gyroscopes establish an inertial reference frame, and the accelerometers measure acceleration relative to that frame. The accelerations, along with a clock, can then be used to calculate the change in position. Thus, inertial navigation is a form of dead reckoning that requires no external input, and therefore cannot be jammed by any external or internal signal source. A gyrocompass, employed for navigation of seagoing vessels, finds the geometric north. It does so, not by sensing the Earth's magnetic field, but by using inertial space as its reference. The outer casing of the gyrocompass device is held in such a way that it remains aligned with the local plumb line.
When the gyroscope wheel inside the gyrocompass device is spun up, the way the gyroscope wheel is suspended causes the gyroscope wheel to gradually align its spinning axis with the Earth's axis. Alignment with the Earth's axis is the only direction for which the gyroscope's spinning axis can remain stationary with respect to the Earth. Under the Galilean transformation, coordinates in one inertial frame are converted to those in another by simple addition or subtraction of coordinates: r′ = r − r0 − vt, t′ = t − t0, where r0 and t0 represent shifts in the origin of space and time, and v is the relative velocity of the two inertial reference frames. Under Galilean transformations, the time t2 − t1 between two events is the same for all reference frames and the distance between two simultaneous events (or, equivalently, the length of any object, |r2 − r1|) is also the same. Special relativity Einstein's theory of special relativity, like Newtonian mechanics, postulates the equivalence of all inertial reference frames. However, because special relativity postulates that the speed of light in free space is invariant, the transformation between inertial frames is the Lorentz transformation, not the Galilean transformation which is used in Newtonian mechanics. The invariance of the speed of light leads to counter-intuitive phenomena, such as time dilation and length contraction, and the relativity of simultaneity, which have been extensively verified experimentally. The Lorentz transformation reduces to the Galilean transformation as the speed of light approaches infinity or as the relative velocity between frames approaches zero. General relativity General relativity is based upon the principle of equivalence: This idea was introduced in Einstein's 1907 article "Principle of Relativity and Gravitation" and later developed in 1911. Support for this principle is found in the Eötvös experiment, which determines whether the ratio of inertial to gravitational mass is the same for all bodies, regardless of size or composition. To date no difference has been found to a few parts in 10¹¹.
For some discussion of the subtleties of the Eötvös experiment, such as the local mass distribution around the experimental site (including a quip about the mass of Eötvös himself), see Franklin. Einstein's general theory modifies the distinction between nominally "inertial" and "noninertial" effects by replacing special relativity's "flat" Minkowski Space with a metric that produces non-zero curvature. In general relativity, the principle of inertia is replaced with the principle of geodesic motion, whereby objects move in a way dictated by the curvature of spacetime. As a consequence of this curvature, it is not a given in general relativity that inertial objects moving at a particular rate with respect to each other will continue to do so. This phenomenon of geodesic deviation means that inertial frames of reference do not exist globally as they do in Newtonian mechanics and special relativity. However, the general theory reduces to the special theory over sufficiently small regions of spacetime, where curvature effects become less important and the earlier inertial frame arguments can come back into play. Consequently, modern special relativity is now sometimes described as only a "local theory". "Local" can encompass, for example, the entire Milky Way galaxy: The astronomer Karl Schwarzschild observed the motion of pairs of stars orbiting each other. He found that the two orbits of the stars of such a system lie in a plane, and the perihelion of the orbits of the two stars remains pointing in the same direction with respect to the solar system. Schwarzschild pointed out that that was invariably seen: the direction of the angular momentum of all observed double star systems remains fixed with respect to the direction of the angular momentum of the Solar System. These observations allowed him to conclude that inertial frames inside the galaxy do not rotate with respect to one another, and that the space of the Milky Way is approximately Galilean or Minkowskian. 
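The Galilean invariants noted earlier (time intervals between events, and the separation of simultaneous events) can be checked numerically. A one-dimensional sketch, with made-up event coordinates and frame parameters:

```python
# 1-D Galilean transformation between inertial frames:
#   x' = x - x0 - v*t,   t' = t - t0
def galilean(x, t, x0=0.0, t0=0.0, v=0.0):
    """Map event coordinates (x, t) in frame S to (x', t') in frame S'."""
    return x - x0 - v * t, t - t0

# Two simultaneous events 5 m apart in frame S, viewed from a frame S'
# moving at 3 m/s with shifted space and time origins (sample values):
xa, ta = galilean(0.0, 4.0, x0=1.0, t0=2.0, v=3.0)
xb, tb = galilean(5.0, 4.0, x0=1.0, t0=2.0, v=3.0)
# In S' the events are still simultaneous and still 5 m apart: the
# individual coordinates change, but the invariants do not.
```

Under a Lorentz transformation, by contrast, neither invariant would survive once v becomes a significant fraction of the speed of light.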
Examples Simple example Consider a situation common in everyday life. Two cars travel along a road, both moving at constant velocities. See Figure 1. At some particular moment, they are separated by 200 metres. The car in front is travelling at 22 metres per second and the car behind is travelling at 30 metres per second. If we want to find out how long it will take the second car to catch up with the first, there are three obvious "frames of reference" that we could choose. First, we could observe the two cars from the side of the road. We define our "frame of reference" S as follows. We stand on the side of the road and start a stop-clock at the exact moment that the second car passes us, which happens to be when they are a distance d = 200 m apart. Since neither of the cars is accelerating, we can determine their positions by the following formulas, where x1(t) is the position in meters of car one after time t in seconds and x2(t) is the position of car two after time t: x1(t) = 200 + 22t, x2(t) = 30t. Notice that these formulas predict at t = 0 s the first car is 200 m down the road and the second car is right beside us, as expected. We want to find the time at which x1 = x2. Therefore, we set x1(t) = x2(t) and solve for t, that is: 200 + 22t = 30t, so 8t = 200 and t = 25 s. Alternatively, we could choose a frame of reference S′ situated in the first car. In this case, the first car is stationary and the second car is approaching from behind at a speed of 30 − 22 = 8 m/s. In order to catch up to the first car, it will take a time of (200 m)/(8 m/s), that is, 25 seconds, as before. Note how much easier the problem becomes by choosing a suitable frame of reference. The third possible frame of reference would be attached to the second car. That example resembles the case just discussed, except the second car is stationary and the first car moves backward towards it at 8 m/s. It would have been possible to choose a rotating, accelerating frame of reference, moving in a complicated manner, but this would have served to complicate the problem unnecessarily.
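The arithmetic of the two-car example can be checked in a few lines, confirming that the roadside frame and the frame riding in the first car give the same answer:

```python
v1, v2, d = 22.0, 30.0, 200.0   # speeds (m/s) and initial gap (m), from the text

# Frame S (roadside): x1(t) = d + v1*t and x2(t) = v2*t coincide when
# d + v1*t = v2*t, i.e. t = d / (v2 - v1).
t_roadside = d / (v2 - v1)

# Frame S' (riding in the first car): car one is at rest and car two
# closes the 200 m gap at the relative speed v2 - v1 = 8 m/s.
t_car_frame = d / (v2 - v1)
```

Both frames yield 25 seconds, as the text computes; the choice of frame changes the bookkeeping, not the physics.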
side, whereas Group cards feature a puppet on a string in a red color scheme. Each Illuminati card represents a different Illuminated organization at the center of each player's Power Structure. They have Power, a Special Goal, and an appropriate Special Ability. Their power flows outwards into the Groups they control via Control Arrows. Plot cards provide the bulk of the game's narrative structure, allowing players to go beyond - or even break - the rules of the game as described in the World Domination Handbook. Plot cards are identified by their overall blue color scheme (border, and/or title color). Included among the general Plots are several special types, including Assassinations and Disasters (for delivering insults to the various Personalities and Places in play), GOAL (special goals that can lead to surprise victories), and New World Order cards (a set of conditions that affect all players, typically overridden when replacement New World Order cards are brought into play). Group cards represent the power elite in charge of the named organization. There are two main types of Group: Organizations and Resources. Organizations are identified by their overall red color scheme (border and/or title). There are three main types of Organization: regular Organizations, People, and Places. They all feature Power, Resistance, Special Abilities, Alignments, Attributes, and Control Arrows (an inward arrow, and 0-3 outward arrows). Just like their Illuminati masters, Organizations can launch and defend against a variety of attacks. Provided that the attacking Organization has a free, outward-pointing Control Arrow, players can increase the size of their Power Structure via successful Attacks to Control, a mathematically determined method employed whenever a player wants to capture an Organization from their own hand, or from a rival player's Power Structure. 
Unless the attack is Privileged (only the target and attacker can be involved), all players can aid or undermine the attack. Attacks to Destroy follow a similar game mechanic, but result in the Organization's removal from the Power Structure, after which it is immediately discarded. The outcome of all Attacks is determined by a dice roll. Other ways to introduce Organizations to the Power Structure involve Plots, spending Action Tokens to bring Groups into play, or using free moves, each at appropriate times during the play cycle. Resources represent the custodians of a variety of objects, ranging from gadgets to artefacts (such as The Shroud of Turin, Flying Saucers, and ELIZA). They are identified by their overall purple color scheme (border and/or title). Resources are introduced into play by spending Action Tokens, or by using free moves during appropriate moments in the play cycle. They go alongside the Power Structure of the player's Illuminati, and bestow a useful Special Ability or similar.

Reception

In the February 1995 edition of Shadis (Issue #17.5), Matthew Lee and Jim Pinto liked the durable cards, printed on thicker card stock than the original game (although the cards were easier to crease during shuffling). They also liked that one starter pack was enough for two players to get started, and that the rulebook was very detailed. However, they didn't like that cards were swapped between players (unusual for a CCG), which meant that the players had to figure out whose cards were whose at the end of the game. They also found
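The article says only that Attacks to Control are "mathematically determined" and resolved by a dice roll, without giving the formula. Purely as a hypothetical sketch of how such a resolution step could look (the function, the threshold rule "2d6 roll ≤ Power − Resistance + aid", and all numbers below are my own illustration, not the published game's rules):

```python
import random

def attack_to_control(power: int, resistance: int, aid: int = 0, roll=None) -> bool:
    """Hypothetical attack resolution: roll two six-sided dice and
    succeed when the roll is no greater than
    (attacker Power - target Resistance + aid from other players)."""
    if roll is None:  # roll 2d6 unless a fixed roll is supplied for testing
        roll = random.randint(1, 6) + random.randint(1, 6)
    return roll <= power - resistance + aid

# A Power-7 attacker against a Resistance-3 target, with +2 of aid:
print(attack_to_control(7, 3, aid=2, roll=6))  # True: 6 <= 7 - 3 + 2
```

Passing `roll` explicitly makes the outcome deterministic, which is convenient for checking edge cases of the threshold.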
and strategic management, styles of ownership and control
Regional integration, in which states cooperate through regional institutions and rules
Integration clause, a declaration that a contract is the final and complete understanding of the parties
A step in the process of money laundering
Integrated farming, a farm management system

Engineering

Data integration
Digital integration
Enterprise integration
Integrated architecture, in an Enterprise architecture framework approach such as DoDAF
Integrated circuit, an electronic circuit whose components are manufactured in one flat piece of semiconductor material
Integrated design, an approach to design which brings together specialisms usually considered separately
Integrated product team, use of a team including multiple disciplines (e.g. customer, engineer, support, testing)
System integration, engineering practices for assembling large and complicated systems from units, particularly subsystems

Mathematics

Integration, the computation of an integral
Indefinite integration, the computation of antiderivatives
Numerical integration, computing an integral with a numerical method, usually with a computer
Integration by parts, a method for computing the integral of a product of functions
Integration by substitution, a method for computing integrals by using a change of variable
Symbolic integration, the computation, mostly on computers, of antiderivatives and definite integrals in terms of formulas
Integration, the computation of a solution of a differential equation or a system of differential equations:
Integrability conditions for differential systems
Integrable system
Order of integration, in statistics, a summary statistic for a time series

Sociology

Social integration, in social science, a movement of newcomers or marginalized minorities into the mainstream of a society
Racial integration, including desegregation and other changes in social opportunity and culture
Desegregation, ending a separation of races, particularly in the
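Of the mathematical senses listed above, numerical integration is the easiest to demonstrate concretely. Here is a minimal sketch of the composite trapezoidal rule; the function name and the choice of 1000 subintervals are my own for the example.

```python
def trapezoid(f, a: float, b: float, n: int = 1000) -> float:
    """Approximate the definite integral of f over [a, b] using the
    composite trapezoidal rule with n equal subintervals."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))       # endpoint terms carry weight 1/2
    for i in range(1, n):
        total += f(a + i * h)          # interior points carry weight 1
    return total * h

# Integral of x^2 from 0 to 1 is exactly 1/3; the rule gets close:
print(trapezoid(lambda x: x * x, 0.0, 1.0))  # approximately 0.33333
```

The error shrinks quadratically with the step size h, which is why even this simple rule is accurate here.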