Common Core State Standards for Mathematical Practice

Inside Mathematics illuminates the mathematical practice standards with video excerpts of mathematics lessons. Click the individual standards below to see instances of the practice standards in classroom lessons. Although the practices are presented here individually, it’s important to keep in mind that the practices can, and should, be evident together in a lesson. See the Mentors of Mathematical Practice for a holistic view of the practices together. See also our new resources incorporating social and emotional learning competencies in the mathematics classroom and how they support the mathematical practices.
We have a five-year-old daughter who was recently diagnosed with Type 1 diabetes. We have been told to give the snack approximately half an hour after the Actrapid [Regular]. We are wondering, when her blood glucose is too high (the other day her level was 23 mmol/L [414 mg/dl] at 3:00 pm), why she could not forego the snack, as she doesn't feel like eating when she is high, and then eat as usual at the next meal? This would allow time for her blood glucose to return to a more acceptable level.

This question was referred to several members of the Diabetes Team, who have each given an answer.

Answer from Dr. Lebinger: Although it makes sense to skip food when the blood sugar is high to try and bring it down faster, I find that it often makes it harder to figure out how to change the insulin to prevent the high blood sugars. As your child grows, she will need more insulin, and you will see high blood sugars when you need to change the dose. I find it helpful to see what the blood sugars are before and after the high blood sugar on the usual food and exercise schedule to help me better figure out how to adjust the insulin. If your child is sick or spilling ketones with a high blood sugar, you should contact her doctor, as she may need extra insulin right away and can't wait until the next day to adjust the dose.

Answer from Dr. Robertson: I'm not quite sure that I understand your question: why are you giving Actrapid [Regular] at 2:30 pm? The principle of giving food after insulin is that the food is to work with that dose of insulin rather than the insulin being given to bring down the previous blood sugar. Another way of putting this is that without further food the blood sugar would fall rapidly, making a hypo quite likely after the previous high. If you regularly find that your daughter has high blood sugars mid-afternoon, then you should discuss a change in her usual morning insulin dose with her team to try and prevent the high blood sugars at 2:30 pm.
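The two units quoted in the question (23 mmol/L and 414 mg/dl) are related by a fixed conversion factor: glucose has a molar mass of about 180 g/mol, so 1 mmol/L corresponds to roughly 18 mg/dl. A minimal sketch of the conversion (the function names are ours, for illustration only):

```python
# Blood glucose unit conversion: 1 mmol/L ~= 18 mg/dl for glucose
# (molar mass ~180 g/mol). Function names are illustrative.
MGDL_PER_MMOLL = 18.0

def mmoll_to_mgdl(mmol_l: float) -> float:
    """Convert a reading in mmol/L to mg/dl."""
    return mmol_l * MGDL_PER_MMOLL

def mgdl_to_mmoll(mg_dl: float) -> float:
    """Convert a reading in mg/dl to mmol/L."""
    return mg_dl / MGDL_PER_MMOLL

# The 3:00 pm reading from the question:
print(mmoll_to_mgdl(23.0))  # 414.0
```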
Original posting 13 Oct 96. Last Updated: Tuesday April 06, 2010. This Internet site provides information of a general nature and is designed for educational purposes only. If you have any concerns about your own health or the health of your child, you should always consult with a physician or other health care professional. This site is published by T-1 Today, Inc. (d/b/a Children with Diabetes), a 501c3 not-for-profit organization, which is responsible for its contents. Our mission is to provide education and support to families living with type 1 diabetes. © Children with Diabetes, Inc. 1995-2016.
Soweto Township Uprising 30th Anniversary

Posted on: June 16, 2006

The Soweto uprising that marked the beginning of the end for Apartheid in South Africa is remembered 30 years on. The anniversary of the day police opened fire on unarmed black students as they staged a protest march is now a public holiday. President Thabo Mbeki urged today's South African youth to confront the modern challenges of poverty, crime, and unemployment. The uprising remains one of the defining moments of South Africa's anti-apartheid struggle. It started as a revolt against plans by the minority white government to enforce teaching in Afrikaans, considered the language of the oppressor by the black population. Two dozen students died when police opened fire on the demonstrators. Up to 600 were killed by security forces in Soweto alone in the months of protests that followed. Now the manner of remembrance is at the centre of a new dispute. Some, including former president Nelson Mandela, say the sacrifices of Soweto should be honoured and commemorated. Others treat the national holiday as a time for celebrations and partying.

Provided by Reuters.
Atypical moles (atypical nevi), or dysplastic moles (dysplastic nevi), are caused by collections of the color-producing (pigment-producing) cells of the skin (melanocytes) in which the cells grow in an abnormal way. Atypical moles may occur as new lesions or as a change in an existing mole. Lesions may be single or multiple. In atypical-nevus syndrome, hundreds of atypical moles may be seen. People with atypical moles may be at increased risk for developing skin cancer (melanoma), with the risk increasing with the number of atypical moles present.

Who's At Risk
- Atypical moles may occur at any age and in all ethnic groups.
- Atypical moles frequently run in families.
- People with atypical moles may have a family history of melanoma.

Signs and Symptoms
- Atypical moles may appear anywhere on the skin. The lesions can vary in size and/or color.
- They can be larger than a pencil eraser (6 mm) and may have variations in color within the lesion ranging from pink to reddish-brown to dark brown.
- Atypical moles may be darker brown in the center or at the edges (periphery).
- People with atypical-nevus syndrome may have hundreds of moles of varying sizes and colors.

Self-Care Guidelines
- Protective measures, such as avoiding skin exposure to sunlight during peak sun hours (10 AM to 3 PM), wearing protective clothing, and applying high-SPF sunscreen, are essential for reducing exposure to harmful ultraviolet (UV) light.
- Monthly self-examination of the skin is helpful to detect new lesions or changes in existing lesions.
- Be sure your atypical moles are not signs of skin cancer (melanoma). Remember the ABCDEs of melanoma lesions:
  A - Asymmetry: One half of the lesion does not mirror the other half.
  B - Border: The borders are irregular or vague (indistinct).
  C - Color: More than one color may be noted within the mole.
  D - Diameter: Size greater than 6 mm (roughly the size of a pencil eraser) may be concerning.
  E - Evolving: Notable changes in the lesion over time are suspicious signs for skin cancer.

When to Seek Medical Care
- The occurrence of a new mole (pigmented nevus) in an adult is unusual; if a new pigmented lesion occurs, see your doctor for evaluation.
- People with multiple moles and unusual (atypical) moles should be examined by a dermatologist every 4–12 months depending on their past history and family history.
- It may be difficult to tell an atypical mole from a normal mole, so seek medical evaluation if you are unsure about the nature of a mole or if you note changes within a mole.
- Your doctor may recommend that you have a biopsy or surgical removal (excision) of unusual-appearing moles to find out whether or not you have atypical moles or melanoma.

Treatments Your Physician May Prescribe
- Biopsy or surgical removal (excision) may be done so the mole may be examined by a specialist (pathologist) to determine the actual diagnosis.
- As noted previously, people with multiple moles and atypical moles should be followed regularly by a dermatologist. Whole-body photography or photographs of individual moles may be helpful in following these people.

Reference: Bolognia, Jean L., ed. Dermatology, pp. 17, 1770. New York: Mosby, 2003.
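The ABCDE checklist is essentially a rule of thumb that tallies warning signs. A toy sketch of that tally (the data fields and inputs are our own illustration, not a clinical tool; only the 6 mm diameter rule comes from the text above — any concerning lesion needs a physician's evaluation):

```python
# Illustrative only: tally the ABCDE warning signs. Field names and the
# boolean inputs are hypothetical; this is NOT a diagnostic tool.
from dataclasses import dataclass

@dataclass
class LesionObservation:
    asymmetric: bool        # A: one half does not mirror the other
    irregular_border: bool  # B: borders irregular or indistinct
    multiple_colors: bool   # C: more than one color within the mole
    diameter_mm: float      # D: size in millimeters
    evolving: bool          # E: notable change over time

def abcde_flags(obs: LesionObservation) -> int:
    """Count how many ABCDE criteria a lesion meets (D uses the 6 mm rule)."""
    return sum([
        obs.asymmetric,
        obs.irregular_border,
        obs.multiple_colors,
        obs.diameter_mm > 6.0,
        obs.evolving,
    ])

mole = LesionObservation(True, False, True, 7.5, False)
print(abcde_flags(mole))  # 3
```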
Treatment for hypoplastic left heart syndrome requires either a three-step surgical procedure called staged palliation or a heart transplant. Staged palliation is considered one of the major achievements of congenital heart surgery in recent years. The survival rate for children at age 5 is about 70 percent, and most of these children have normal growth and development. This three-step procedure is designed to create normal blood flow in and out of the heart, allowing the body to receive the oxygenated blood it needs. The three steps consist of the following procedures:

Norwood Procedure
This procedure is performed shortly after birth. It converts the right ventricle into the main ventricle pumping blood to both the lungs and the body. The main pulmonary artery and the aorta are connected, and the main pulmonary artery is cut off from the two branching pulmonary arteries that direct blood to each side of the lungs. Instead, a connection called a shunt is placed between the pulmonary arteries and the aorta to supply blood to the lungs.

Bi-directional Glenn Operation
This operation usually is performed about six months after the Norwood to divert half of the blood to the lungs when circulation through the lungs no longer needs as much pressure from the ventricle. The shunt to the pulmonary arteries is disconnected and the right pulmonary artery is connected directly to the superior vena cava, the vein that brings deoxygenated blood from the upper part of the body to the heart. This sends half of the deoxygenated blood directly to the lungs without going through the ventricle.

Fontan Procedure
This is the third stage, usually performed about 18 to 36 months after the Glenn. It connects the inferior vena cava, the blood vessel that drains deoxygenated blood from the lower part of the body into the heart, to the pulmonary artery by creating a channel through or just outside the heart to direct blood to the pulmonary artery. At this stage, all deoxygenated blood flows passively through the lungs.
If you want to keep a modern society running cleanly and efficiently, you need large-scale sources of electricity. You can generate that electricity with one or more of only three fuels: coal, natural gas, and nuclear. Coal and natural gas are fossil fuels. To generate electricity with them you have to burn them. Burning fossil fuels creates carbon dioxide (CO2), the principal man-made greenhouse gas. Most people are not aware of the sheer amount of CO2 that electric power generation dumps into the air—10 billion tons each and every year. And that number is growing.

Not one single gram of those 10 billion tons of CO2 comes out of a nuclear plant. Almost all of those 10 billion tons come from coal and gas plants (some also comes out of oil-fired plants). What happens to that CO2? Well, it swirls around in the atmosphere, acting as a trap for heat energy that would otherwise radiate into outer space. It does this for literally hundreds of thousands—some say millions—of years. CO2 is an extremely tough and stable molecule. I know: one of my R&D projects aims to deprive CO2 of one of its oxygen atoms, in order to turn it into a reactant to make other products. Hiving an oxygen atom off CO2 requires a lot of energy and ingenuity. Left on its own in the earth’s atmosphere, without people like me devoting time and effort to split it, CO2 remains subject only to time. And time—hundreds of thousands of years of time—eventually does bring it back to earth, by entraining it in weathering rock and water.

Most of our planet’s surface is water, so most of that CO2 winds up in the oceans, making them more acidic. This has profound implications for the future of life on this planet. The entire ocean ecosystem, from phytoplankton to blue whales, is a giant carbon sink. Dr. Sylvia Earle, an oceanographer, points out in the video clip below that ocean acidification—the conversion of atmospheric CO2 into carbonic acid in water—is the result of the rate at which CO2 is becoming entrained in the oceans. Too much carbonic acid will kill ocean creatures, thereby depriving us of an enormous source of atmospheric oxygen.

Ocean acidification is recognized by many, including many of the leading environmental groups, as a major problem. The Natural Resources Defense Council (NRDC) warns against it, and recommends, not surprisingly, steep reductions in CO2 emissions. The NRDC obviously went to a lot of trouble producing the fancy videos on its site. It obviously feels ocean acidification is a serious problem. Presumably, the NRDC feels that the CO2 from power generation, which, to repeat, dumps 10 billion tons of CO2 into our air each and every year, should be one of the first places to start with the necessary carbon reductions.

So which of the three power generation fuels—coal, natural gas, and nuclear—does the NRDC recommend we take up in a big way? The answer may surprise, upset, and disappoint you. While waxing eloquent about the perils of ocean acidification, the NRDC, from the other corner of its mouth, proceeds, in effect, to advocate for coal and natural gas and against nuclear. With this kind of mealy-mouthed hypocrisy at the cutting edge of the self-styled “environmental movement,” is it any wonder that 10 billion tons of CO2 are getting dumped into our air and oceans every year? The “environmental movement” is the oceans’ and atmosphere’s worst enemy.
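To put the headline number in perspective, a quick back-of-the-envelope calculation (the 10-billion-ton annual figure is from the post; the per-second breakdown is our own arithmetic):

```python
# Scale check on the 10-billion-ton annual CO2 figure cited above.
ANNUAL_CO2_TONS = 10e9            # tons of CO2 from power generation per year
SECONDS_PER_YEAR = 365 * 24 * 3600

tons_per_second = ANNUAL_CO2_TONS / SECONDS_PER_YEAR
print(f"{tons_per_second:.0f} tons of CO2 per second")  # ~317
```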
This article in its original form was written by Bimal K. Bose (University of Tennessee, Knoxville) in 2014. It gives a brief historical review of the evolution of power electronics over the past 100-plus years, covering electrical machines, mercury-arc rectifiers, gas tube electronics, magnetic amplifiers (MAs), power semiconductor devices, converter circuits, and motor drives. Wherever possible it gives the name of the inventor and the year of invention for important technologies. It is important to note, however, that inventions are generally developed by a number of contributors working over a period of time. The history of power electronics is so vast that it is impossible to review it within a few pages. More information is available in the references.

Power electronics is a technology that deals with the conversion and control of electrical power with high-efficiency switching-mode electronic devices for a wide range of applications. These include dc and ac power supplies, electrochemical processes, heating and lighting control, electronic welding, power line volt–ampere reactive (VAR) and harmonic compensators, high-voltage dc (HVdc) systems, flexible ac transmission systems, photovoltaic and fuel cell power conversion, high-frequency (HF) heating, and motor drives. We can define the 21st century as the golden age of power electronics applications, now that the technology evolution stabilized in the latter part of the past century with major innovations. Power electronics is ushering in a new kind of industrial revolution because of its important role in energy conservation, renewable energy systems, bulk utility energy storage, and electric and hybrid vehicles, in addition to its traditional roles in industrial automation and high-efficiency energy systems. It has emerged as the high-tech frontier in power engineering.
From current trends, it is evident that power electronics will play a significant role in solving our climate change (or global warming) problems, which are so important. Power electronics has recently emerged as a complex and multidisciplinary technology after the last several decades of technology evolution, made possible by the relentless efforts of so many scientists and engineers in universities and industry. The technology embraces the areas of power semiconductor devices, converter circuits, electrical machines, drives, advanced control techniques, computer-aided design and simulation, digital signal processors (DSPs), and field-programmable gate arrays (FPGAs), as well as artificial intelligence (AI) techniques.

The history of power electronics goes back more than 100 years. It began at the dawn of the 20th century with the invention of the mercury-arc rectifier by the American inventor Peter Cooper Hewitt, beginning what is called the “classical era” of power electronics. However, even before the classical era started, many power conversion and control functions were possible using rotating electrical machines, which have a longer history. The advent of electrical machines in the 19th century and the commercial availability of electrical power around the same time began the so-called electrical revolution, which followed the industrial revolution of the 18th century. The commercial wound-rotor induction motor (WRIM) was invented by Nikola Tesla in 1888 using the rotating magnetic field with polyphase stator winding that was invented by Italian scientist Galileo Ferraris in 1885. The cage-type induction motor (IM) was invented by German engineer Mikhail Dolivo-Dobrovolsky in 1889. The history of dc and synchronous machines is older. Although Michael Faraday introduced the dc disk generator (1831), a dc motor was patented by the American inventor Thomas Davenport (1837) and was commercially used from 1892.
Polyphase alternators were commercially available around 1891. The concept of a switched reluctance machine (SRM) was known in Europe in the early 1830s, but as the machine requires electronic commutation, the idea did not go far until the advent of self-commutated devices in the 1980s. The duality of the motoring and generating functions of a machine was well known after its invention. Commercial dc and ac power generation and distribution were promoted after the invention of these machines. For example, dc distribution was set up in New York City in 1882, mainly for street car dc motor drives and the incandescent carbon filament lamps (1879) developed by Thomas Edison. However, ac transmission at a higher voltage and over a longer distance was promoted by Nikola Tesla and was first erected between Buffalo and New York by Westinghouse Electric Corporation (1886). Those were the exciting days in the history of the electrical revolution.

Although rotating machines could be used for power conversion in the pre-power electronics era (the late 19th century), they were heavy and noisy, and their efficiency was poor. A dc generator coupled to a synchronous motor (SM) or an IM could convert ac to dc power, where the dc voltage could be varied by controlling the generator field current. Similarly, a dc motor could be coupled to an alternator to convert dc to ac power, where the output frequency and voltage could be varied by motor speed variation with field current and by alternator dc excitation, respectively. The ac-ac power conversion at a constant frequency and variable voltage was possible by coupling an alternator with an IM or an SM, where the alternator dc excitation was varied. Generating the variable-frequency supply required for ac motor speed control was not easy in the early days. How could you control the speed of the dc and ac motors that were so important for the processing industries?
Controlling the speed of a dc motor was somewhat straightforward and was done by varying the supply voltage and motor field current. However, ac motors were generally used for constant-speed applications. The historic Ward Leonard method of dc motor speed control was introduced in 1891 for industrial applications. In this scheme, the variable dc voltage for the motor was generated by an induction motor-dc generator set by controlling the generator field current. In the constant-torque region, the dc voltage was controlled at a constant motor field current, whereas the motor field current was weakened at higher speed in the constant-power region. Four-quadrant speed control was easily possible by reversing the dc supply voltage and motor field current.

Speed control of the ac motor was much more difficult without the help of power electronics. For a wound-rotor IM (WRIM), the rotor winding terminals could be brought out by the slip rings and brushes, and an external rheostat could control the speed, although efficiency was very poor in such a scheme. Changing the number of stator poles is a simple principle for ac motor speed control, but the complexity and the discrete steps of speed control did not favor this scheme. German inventors introduced two methods of WRIM speed control with slip energy recovery by the cascaded connection of machines, which are known as the Kramer drive (1906) and the Scherbius drive (1907). In the former method, the slip energy (at slip frequency) drives a rotary converter that converts ac to dc and drives a dc motor mounted on the WRIM shaft. The feedback of the slip energy on the drive shaft improves the system efficiency. In the Scherbius drive, the slip energy drives an ac commutator motor, and an alternator coupled to its shaft recovers the slip energy and feeds it back to the supply mains. Both systems were very expensive. Both the Kramer and Scherbius drives are extensively used today, but the auxiliary machines are replaced by power electronics.
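The slip energy that the Kramer and Scherbius drives recover is easy to quantify: slip s = (n_s - n)/n_s, and the rotor (slip) power is s times the air-gap power. A small sketch with illustrative numbers (the 50-Hz, 4-pole, 100-kW figures are our assumptions, not from the text):

```python
# Induction-motor slip and slip power, the quantity the Kramer and
# Scherbius drives recover. All numeric values below are illustrative.
def synchronous_speed_rpm(freq_hz: float, poles: int) -> float:
    """Synchronous speed n_s = 120 * f / p (rpm)."""
    return 120.0 * freq_hz / poles

def slip(n_sync_rpm: float, n_rotor_rpm: float) -> float:
    """Per-unit slip s = (n_s - n) / n_s."""
    return (n_sync_rpm - n_rotor_rpm) / n_sync_rpm

ns = synchronous_speed_rpm(50.0, 4)   # 1500 rpm on a 50-Hz supply
s = slip(ns, 1350.0)                  # rotor at 1350 rpm -> 10% slip
airgap_power_kw = 100.0               # assumed air-gap power
slip_power_kw = s * airgap_power_kw   # P_slip = s * P_airgap
print(round(s, 3), round(slip_power_kw, 3))
```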
For completeness, the Schrage motor drive (1914) invented in Germany, which replaces all the auxiliary machines at the cost of complexity of motor construction, should be mentioned. It is basically an inside-out WRIM with an auxiliary rotor winding with commutators and brushes that inject voltage on the secondary stator winding to control the motor speed.

Power Electronics in the Classical Era: Mercury-Arc Rectifiers

The history of power electronics began with the invention of the glass-bulb pool-cathode mercury-arc rectifier by the American inventor Peter Cooper Hewitt in 1902. While experimenting with the mercury vapor lamp, which he patented in 1901, he found that current flows in one direction only, from anode to cathode, thus giving rectifying action. Multi-anode tubes with a single pool cathode could be built to provide single- and multiphase, half-wave, diode rectifier operation with the appropriate connection of transformers on the ac side. A limited amount of dc voltage control was possible by tap-changing transformers. The rectifiers found immediate applications in battery charging and electrochemical processes such as Al reduction, electroplating, and chemical gas production. The first dc distribution line (1905) with the mercury-arc rectifiers was constructed in Schenectady, New York, and used for lighting incandescent lamps. Hewitt later replaced the glass bulbs with steel tanks (1909) for higher power and improved reliability with water cooling, which further promoted rectifier applications. The introduction of grid control by Irving Langmuir (1914) in mercury-arc rectifiers ushered in a new era that further boosted their applications. The rectifier circuit could also be operated as a line-commutated inverter by retarding the firing angle. Most phase-controlled thyristor converter circuits used today were born in this classical era of power electronics evolution.
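The phase-control and line-commutated-inverter behavior described above follows from the average-output relation of a fully controlled converter, V_dc = V_do * cos(alpha): retarding the firing angle alpha past 90 degrees makes the average voltage negative, i.e. the converter feeds power back into the ac line. A sketch (the 500-V figure is an assumed illustrative value):

```python
# Average dc output of a phase-controlled (fully controlled) converter:
# V_dc = V_do * cos(alpha). Alpha beyond 90 degrees gives a negative
# average voltage -- line-commutated inverter operation.
import math

def vdc_average(v_do: float, alpha_deg: float) -> float:
    """v_do is the uncontrolled (alpha = 0) average output voltage."""
    return v_do * math.cos(math.radians(alpha_deg))

v_do = 500.0  # assumed uncontrolled output, volts
print(round(vdc_average(v_do, 0), 1))    # full output at alpha = 0
print(round(vdc_average(v_do, 60), 1))   # reduced dc output
print(round(vdc_average(v_do, 120), 1))  # negative: inversion region
```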
In 1930 the New York City subway installed a 3,000-kW grid-controlled rectifier for traction dc motor drives. In 1931, German railways introduced mercury-arc cycloconverters (CCVs) that converted three-phase 50 Hz to single-phase 16 2/3 Hz for universal motor traction drives. Joseph Slepian of Westinghouse invented the ignitron tube in 1933. It is a single-anode, pool-cathode, metal-case gas tube, where an igniter with phase control initiates the conduction. The ignitron tube could be designed to handle high power at high voltage. The single-anode structure of the ignitron tube permitted inverse-parallel operation for ac voltage control, for applications such as welding and heating control, as well as the bridge converter configurations popular in railway and steel mill dc drives and in SM speed control, which used dc-link load-commutated inverters (LCIs) from the late 1930s. Ignitron converters were also used in HVdc transmission systems in the 1950s until high-power thyristor converters replaced them in the 1970s. The first HVdc transmission system was installed in Gotland, Sweden, in 1954. The diode bridge converter configurations (known as Graetz circuits) were invented much earlier (1897) by the German physicist Leo Graetz using electrolytic rectifiers.

Power Electronics in the Classical Era: Hot-Cathode Gas Tube Rectifiers

The thyratron, or hot-cathode glass-bulb gas tube rectifier, was invented by GE (1926) for low-to-medium power applications. Functionally, it is similar to a grid-controlled mercury-arc tube. Instead of a pool cathode, the thyratron tube used a dry cathode with thermionic emission heated by a filament, similar to the vacuum triode, which was widely used in those days. The tube was filled with mercury vapor; the ionization of this vapor decreased the anode-to-cathode conduction drop (for higher efficiency), which was lower than that of the mercury-arc tube. The grid bias with phase-shift-controlled conduction is similar to that of the pool-cathode tube.
The modern thyristor or silicon-controlled rectifier (SCR), which is functionally similar, derives its name from the thyratron. The diode version of the thyratron was known as the phanotron. One interesting application of the phanotron was in the Kramer drive, where a phanotron bridge replaced the rotary converter (1938) for slip power rectification. Thyratrons were popular for commercial dc motor drives where the power requirement was low. Ernst F. W. Alexanderson, the famous engineer at GE Corporate Research and Development (GE-CRD) in Schenectady, installed a thyratron CCV drive in 1934 for a wound-field SM (WFSM) drive (400 hp) for speed control of induced-draft fans in the Logan power station. This was the first variable-frequency ac installation in history.

Power Electronics in the Classical Era: Magnetic Amplifiers

Functionally, a magnetic amplifier (MA) is similar to a mercury-arc or thyratron rectifier. It uses a high-permeability saturable-reactor magnetic core with materials such as Permalloy, Supermalloy, Deltamax, and Supermendur. A control winding with dc current resets the core flux, whereas the power winding sets the core flux to saturate at a “firing angle” and apply power to the load. The phase-controlled ac power could be converted to variable dc with the help of a diode rectifier. In the early days, MAs used copper oxide (1930) and selenium rectifiers (1940) until germanium and silicon rectifiers became available in the 1950s. Copper oxide and selenium rectifiers were bulky and had high leakage current. The traditional MAs used series or parallel circuit configurations. The advantages of MAs are their ruggedness and reliability, but the disadvantages are their large size and weight. Germany was the leader in MA technology and applied it extensively in military technologies during World War II, such as in naval ship gun control and V-2 rocket control. Alexanderson was, however, the pioneer in MA applications.
He applied MAs to radio-frequency telephony (1912), where he designed an HF alternator and used MAs to modulate the power for radio telephony. In 1916, he designed a 70-kW HF alternator (up to 100 kHz) at GE-CRD to establish a radio link with Europe. Even today, MAs are used to control the lights of the GE logo on top of Building 37 in Schenectady, where Alexanderson used to work. The MA dc motor drives were competitors of the thyratron dc drives and were popular for use in adverse environments. Robert Ramey invented the fast half-cycle-response MA in 1951, which found extensive applications particularly in low-power dc motor speed control, servo amplifiers, logic and timer circuits, oscillators (such as the Royer oscillator), and telemetry encoding circuits. Copper oxide and selenium applications for signal processing proved extremely important when modern semiconductor-based control electronics was in its infancy.

Power Electronics in the Modern Era: Power Semiconductor Devices

The modern solid-state electronics revolution began with the invention of transistors in 1948 by Bardeen, Brattain, and Shockley of Bell Laboratories. While Bardeen and Brattain invented the point-contact transistor, Shockley invented the junction transistor. Although solid-state electronics originally started with Ge, it gradually shifted to Si as its base material. The modern solid-state power electronics revolution (often called the second electronics revolution) started with the invention of the p-n-p-n Si transistor in 1956 by Moll, Tanenbaum, Goldey, and Holonyak at Bell Laboratories, and GE introduced the thyristor (or SCR) to the commercial market in 1958. Thyristors reigned supreme for two decades (1960–1980) and remain popular even at present for high-power LCI drive applications. The word thyristor comes from the word "thyratron" because of the analogy of operation. Power diodes, both germanium and silicon, became available in the mid-1950s.
Starting originally with the phase-controlled thyristor, other power devices gradually emerged. The integrated antiparallel thyristor (TRIAC) was invented by GE in 1958 for ac power control. The gate turn-off thyristor (GTO) was also invented by GE in 1958, but it was in the 1980s that several Japanese companies introduced high-power GTOs. Bipolar junction transistors (BJTs) and field-effect transistors were known from the beginning of the solid-state era, but power MOSFETs and bipolar power transistors (BPTs) appeared on the market in the late 1970s. Currently, both GTOs and BPTs are obsolete devices, but power MOSFETs have become universally popular for low-voltage HF applications. The invention of the insulated-gate bipolar transistor (IGBT or IGT) in 1983 by GE-CRD and its commercial introduction in 1985 were significant milestones in the history of power semiconductors. Jayant Baliga was the inventor of the IGBT. However, it initially had a thyristor-like latching problem and, therefore, was defined as an insulated-gate rectifier. Akio Nakagawa solved this latching problem (1984), and this helped the commercialization of the IGBT. Today, the IGBT is the most important device for medium-to-high-power applications. Several other devices, including the static induction transistor, the static induction thyristor, the MOS-controlled thyristor (MCT), the injection-enhanced gate transistor, and the MOS turn-off thyristor, were developed in the laboratory in the 1970s and 1980s but ultimately never saw the light of day. For MCT development in particular, the U.S. government spent a fortune, but it ultimately went to waste. The high-power integrated gate-commutated thyristor (IGCT) was introduced by ABB in 1997. Currently, it is a competitor to the high-power IGBT, but it is gradually losing the race.
Although silicon has been the basic raw material for current power devices, large-bandgap materials, such as SiC, GaN, and ultimately diamond (in synthetic thin-film form), are showing great promise. SiC devices, such as the Schottky barrier diode (1200 V/50 A), the power MOSFET (1200-V/100-A half-bridge module), and the JBS diode (600 V/20 A), are already on the market, and the p-i-n diode (10 kV) and IGBT (15 kV) will be introduced in the future. Many challenges remain in large-bandgap power device research. Fortunately, microelectronics technology advanced in parallel with the power semiconductor evolution, and the corresponding material processing and fabrication techniques, packaging, device characterization, modeling, and simulation techniques contributed to the successful evolution of so many advanced power devices, their higher voltage and current ratings, and the improvement of their performance characteristics. Gradually, microelectronics-based devices, such as microcomputers/DSPs and application-specific integrated circuit (ASIC)/FPGA chips, became the backbone of power electronics control.

Power Converters

Most of the thyristor phase-controlled line- and load-commutated converters commonly used today were introduced in the era of classical power electronics. The disadvantages of line-side phase control are a lagging displacement power factor (DPF) and lower-order line harmonics. The IEEE regulated harmonics with Standard IEEE-519 (1981), whereas Europe adopted the IEC-61000 standard, introduced in the 1990s. Current-fed dc-link converters became very popular for multi-MW WFSM drives from the 1980s. The initial motor start-up method (building sufficient CEMF for load commutation) by dc-link current interruption was proposed by Rolf Müller et al. of Papst-Motoren (1979) and is popular even today. For a lagging-DPF load (such as an IM), the inverter required forced commutation.
The auto-sequential current inverter (ASCI) using forced commutation was proposed by Kenneth Phillips of Louis Allis Co. in 1971. This topology became obsolete with the advent of modern self-commutated devices. The thyristor phase-controlled CCVs (with line commutation) were very popular from 1960 until 1995, when multilevel converters made them obsolete. Traditional CCVs used the blocking method, but Toshiba introduced the circulating-current method in the 1980s to control the line DPF. The dual converter for four-quadrant dc motor drives had been popular long before that. The advent of thyristors initiated the evolution of the dc-link voltage-fed class of thyristor inverters for general industrial applications, particularly IM drives. The voltage-fed converter topology is the most popular today and will possibly become universal in the future. A diode rectifier (Graetz bridge) usually supplied the dc link, and a force-commutated thyristor bridge inverter was the usual configuration. The era of thyristor forced-commutation techniques started in the 1960s, and William McMurray of GE-CRD was the pioneer in this area. He invented techniques known as the McMurray inverter (1961), the McMurray-Bedford inverter (1961), ac switched commutation (1980), and so on, which remain among the most outstanding contributions in the history of power electronics. Self-commutated devices, such as power MOSFETs, BPTs, GTOs, IGBTs, and IGCTs, began appearing in the 1980s and replaced the majority of thyristor inverters. The voltage-fed inverters (VFIs) originally introduced with square-wave (or six-stepped) output had a rich harmonic content. Therefore, the pulse width modulation (PWM) technique was used to control the harmonics as well as the output voltage. Fred Turnbull of GE-CRD invented selected harmonic elimination PWM in 1963, which was later generalized by H. S.
Patel and Richard Hoft of GE-CRD (1973) and optimized by Giovanni Indri and Giuseppe Buja of the University of Padua (1977). However, the sinusoidal PWM technique, invented by Arnold Schonung and Herbert Stemmler of Brown Boveri (1964), found the most widespread application. Since motor drives mostly require current control, Allen Plunkett of GE-CRD developed hysteresis-band (HB) sinusoidal current control in 1979. This was improved to the adaptive HB method by Bimal Bose (1989) to reduce the harmonic content. The space vector PWM (SVM) technique for an isolated-neutral load, based on the space vector theory of machines, was invented by Gerhard Pfaff, Alois Weschta, and Albert Wick in 1982. The SVM, although complex, is now widely used. The front-end diode rectifier was gradually replaced by the PWM rectifier (the same topology as the inverter), which allowed four-quadrant drive capability and sinusoidal line current at any desired DPF. High-power GTO converters could be operated in multistepped mode because of their low switching frequency. The PWM rectifier operating modes also allowed the introduction of the static VAR compensator. Current-fed self-commutated GTO converters for high-power applications, which required a capacitor bank on the ac side, were introduced in the 1980s. The performance of this type of dc-link dual PWM converter system is similar to that of the voltage-fed converter system. A class of ac-ac converters, called matrix converters or direct PWM frequency converters, was introduced by Marco G. B. Venturini (they are often called Venturini converters) in 1980 using inverse-parallel ac switches. My invention, the inverse-series transistor ac switch (1973), is now universally used in matrix converters. This converter topology has received a lot of attention in the literature, but so far there have been very few industrial applications.
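The carrier-comparison principle behind sinusoidal PWM can be sketched in a few lines of code: the upper switch of an inverter leg conducts whenever a sinusoidal reference exceeds a triangular carrier, so the local average of the leg output tracks the sine. This is an illustrative sketch only; the function name and all numeric values are assumptions, not taken from any of the works cited in this article.

```python
import math

def sinusoidal_pwm(m, f_ref, f_carrier, t):
    """Return the switch state (1 = upper device on) of one inverter leg.

    m         -- modulation index, amplitude of the sine reference (0..1)
    f_ref     -- reference (fundamental output) frequency in Hz
    f_carrier -- triangular carrier frequency in Hz
    t         -- time in seconds
    """
    reference = m * math.sin(2 * math.pi * f_ref * t)
    # Triangular carrier sweeping between -1 and +1 once per carrier period.
    phase = (t * f_carrier) % 1.0
    carrier = 4.0 * phase - 1.0 if phase < 0.5 else 3.0 - 4.0 * phase
    return 1 if reference >= carrier else 0

# Averaging the switch state over one carrier period recovers the reference:
# near the positive peak of a 0.8 reference, the duty cycle is close to 0.9.
duty = sum(sinusoidal_pwm(0.8, 50.0, 2000.0, 0.005 + k * 1e-6)
           for k in range(500)) / 500.0
```

The filtered (averaged) leg output is therefore nearly sinusoidal, which is why this carrier-based scheme displaced the square-wave inverter output described above.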
Soft-switched dc-ac power conversion for ac motor drives was proposed by Deepakraj Divan of the University of Wisconsin (1985) and subsequent researchers, but it hardly saw any industrial application. However, soft-switched HF-link power conversion has been popular in low-power dc-dc converters since the early 1980s. For high-voltage, high-power voltage-fed converter applications, Akira Nabae et al. at the Nagaoka University of Technology invented the neutral-point clamped (NPC) multilevel converter in 1980; it found widespread application in the 1990s and recently ousted the traditional thyristor CCVs. Gradually, the number of converter levels increased, and other types, such as the cascaded H-bridge or half-bridge and flying-capacitor types, were introduced. Currently, the NPC topology is the one most commonly used.

Motor Drives

The area of motor drives is intimately related to power electronics, and it followed the evolution of devices and converters along with PWM, computer simulation, and DSP techniques. The WRIM slip-power control and load-commutated WFSM drives, introduced early in the classical era, have been discussed previously. Historically, however, ac machines were popular in constant-speed applications. During the thyristor age, from the 1960s through the 1980s, variable-speed ac drive technology advanced at a rapid rate. Early in the thyristor age, variable-voltage constant-frequency IM drives were introduced using three-phase antiparallel-thyristor voltage controllers, and Derek Paice of Westinghouse (1964) was the pioneer in this area. The so-called Nola speed controller proposed by NASA in the late 1970s is essentially the same type of drive. However, it has the disadvantages of loss of torque at low voltage, poor efficiency, and line and load harmonics. The solid-state IM starter often uses this technique.
The introduction of the McMurray and McMurray-Bedford inverters using thyristors essentially started the revolution in variable-frequency motor drives. With a variable-frequency, variable-voltage, sinusoidal power supply from a dc-link voltage-source PWM inverter, rated machine torque was always available and the machine had no harmonic problems. The dc-link voltage could be generated from the line with either a diode or a PWM rectifier. This simple open-loop volts/hertz control technique became extremely popular and is commonly used today. To prevent the speed and flux drift of open-loop volts/hertz control and to improve stability, closed-loop speed control with slip and flux regulation was used in the 1970s and early 1980s. Current-fed thyristor and GTO converters for IM drives were promoted during the same period. The advent of modern self-commutated devices considerably improved the performance of VFI drives. The introduction of vector (or field-oriented) control brought a renaissance in the history of high-performance ac drives. Karl Hasse at the Technical University of Darmstadt (1969) introduced indirect vector control, whereas direct vector control was introduced by Felix Blaschke of Siemens (1972). Vector control and estimation depend on the synchronous reference frame (de-qe) and stationary reference frame (ds-qs) dynamic models of the machine. The de-qe model was originally introduced by Park (1929) for synchronous machines and was later extended to the IM by Gabriel Kron of GE-CRD, whereas the ds-qs model of the IM was introduced by H. C. Stanley (1938). Because of its control complexity, vector control has been applied in industry since the 1980s using microcomputer/DSP control. After Intel introduced the microprocessor in 1971, the technology advanced dramatically, notably with the introduction of the TMS320 DSP family by Texas Instruments in the 1980s.
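As a rough illustration of the open-loop volts/hertz law mentioned above, the stator voltage command is scaled almost linearly with the frequency command to hold the stator flux approximately constant, with a small low-frequency boost to offset the stator-resistance drop. The ratings and boost value below are hypothetical defaults, not figures from the article.

```python
def volts_hertz_command(f_cmd, v_rated=400.0, f_rated=50.0, v_boost=10.0):
    """Return the stator voltage command (V) for a frequency command (Hz).

    Keeps V/f roughly constant below base speed; clamps the voltage at its
    rated value above base speed (the field-weakening region). All numeric
    defaults here are illustrative assumptions.
    """
    if f_cmd >= f_rated:
        return v_rated  # above base speed: constant voltage, weakening flux
    # Linear V/f ramp plus a small boost to cover the IR drop at low speed.
    return v_boost + (v_rated - v_boost) * (f_cmd / f_rated)
```

In a practical drive this voltage command would feed the PWM modulator; the closed-loop variants with slip and flux regulation described above add feedback around the same basic law.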
Today, powerful ASICs/FPGAs along with DSPs are almost universal in the control of power electronics systems. In 1985, Isao Takahashi of the Nagaoka University of Technology invented an advanced scalar control technique called direct torque control (or direct torque and flux control), which comes close to vector control in performance. Gradually, other advanced control techniques, such as model-referencing adaptive control, sensorless vector control, and model predictive control, emerged. Currently, AI techniques, particularly fuzzy logic and artificial neural networks, are advancing the frontier of power electronics. Most control techniques developed for IM drives were also applicable to SM drives. However, interest in switched reluctance motor (SRM) drives has been fading after the surge of literature during the 1980s and 1990s.

References

- C. C. Herskind and M. M. Morack, Eds., A History of Mercury-Arc Rectifiers in North America (Piscataway, NJ: IEEE Press, 1987).
- E. L. Owen, "Power Electronics and Rotating Machines—Past, Present and Future," in Proc. Power Electronics Specialists Conf., June 1984, pp. 3–11.
- T. J. Wilson, "The Evolution of Power Electronics," in Proc. Int. Symp. Industrial Electronics, Xian, China, May 1992, vol. 1, pp. 1–9.
- B. K. Bose, "Power Electronics—An Emerging Technology," IEEE Trans. Industrial Electronics 36, no. 3, pp. 403–412, Aug. 1989.
- B. K. Bose, "The Past, Present and Future of Power Electronics," IEEE Industrial Electronics Magazine 3, no. 2, pp. 7–14, 2009.
- B. K. Bose, "Power Electronics and Motor Drives—Recent Progress and Perspective," IEEE Trans. Industrial Electronics 56, no. 2, pp. 581–588, Feb. 2009.
- W. McMurray, "Power Electronics in the 1990s," in Proc. IEEE Industrial Electronics Society Conf., 1990, pp. 839–843.
- B. K. Bose, Ed., Adjustable Speed AC Drive Systems (New York, NY: IEEE Press, 1981).
- B. K. Bose, Modern Power Electronics and AC Drives (Upper Saddle River, NJ: Prentice-Hall, 2001).
- B. K. Bose, Power Electronics and Motor Drives—Advances and Trends (Burlington, MA: Academic Press, 2006).
- B. K. Bose, "Power Electronics—A Technology Review," Proc. IEEE 80, no. 8, pp. 1301–1334, Aug. 1992.
- B. K. Bose, "Power Electronics and Motion Control Technology: Status and Recent Trends," IEEE Trans. Industry Applications 29, Oct. 1993.
- B. K. Bose, "Global Energy Scenario and Impact of Power Electronics in 21st Century," IEEE Trans. Industrial Electronics 60, no. 7, pp. 2638–2651, July 2013.
- B. D. Bedford and R. G. Hoft, Principles of Inverter Circuits (New York: Wiley, 1964).
- T. M. Jahns and E. L. Owen, "AC Adjustable-Speed Drives at the Millennium: How Did We Get Here?" IEEE Trans. Power Electronics 16, no. 1, pp. 17–25, Jan. 2001.
- E. L. Owen, M. M. Morack, and C. C. Herskind, "AC Adjustable Speed Drives with Electronic Power Converters—The Early Days," IEEE Trans. Industry Applications 20, no. 2, pp. 298–308, Mar./Apr. 1984.
- A. O. Staub and E. L. Owen, "Solid-State Motor Controllers," IEEE Trans. Industry Applications 22, no. 6, pp. 1113–1120, Nov./Dec. 1986.
- J. D. van Wyk, "Power Electronic Converters for Motion Control," Proc. IEEE 82, no. 8, pp. 1164–1193, Aug. 1994.
- B. K. Bose, "Eleven Years in Corporate Environment—My Experience," IEEE IE Society Newsletter, pp. 6–8, June 2006.

Acknowledgments

The author would like to thank Thomas Lipo of the University of Wisconsin, Madison; Giuseppe Buja of the University of Padova, Italy; and Barry Brusso of S&C Electric Company, United States, for their help in writing this article.

Further Reading

- Galileo Ferraris biography
- Mikhail Dolivo-Dobrovolsky biography
- Highlights of Ward Leonard's Seventy Five Years of Progress
- Peter Cooper Hewitt biography
- Standard IEEE-519
- In Memoriam Richard Hoft
- Akira Nabae, 1996 William E. Newell Power Electronics Award Recipient
The Great Society was a set of liberal domestic programs proposed or enacted in the United States on the initiative of President Lyndon B. Johnson (1963–1969). The two main goals of the Great Society social reforms were the elimination of poverty and racial injustice. Civil rights laws were passed that permanently ended segregation and the denial of the right to vote. Advocates for the poor came to power and rejected notions of the morally worthy poor in favor of a direct attack on structural barriers such as lack of education, inadequate facilities, and above all racism. Major new spending programs addressing education, medical care, urban problems, and transportation were launched during this period. The Great Society in scope and sweep resembled the New Deal domestic agenda of Franklin D. Roosevelt, but differed sharply in the types of programs. Some Great Society proposals were stalled initiatives from John F. Kennedy's New Frontier. Johnson's success depended on his own remarkable skills at persuasion, coupled with the Democratic landslide of 1964 that brought in many new liberals. Johnson used secret teams of experts to craft his programs and failed to listen to the voters or build grass-roots networks of supporters. When conservative times came, the programs closed or languished unless they had powerful constituencies, as did civil rights and Medicare. Anti-war Democrats complained that spending on the Vietnam War choked off the Great Society. Richard Nixon continued many of the spending programs. While Ronald Reagan reduced funding or ended some of them, many of these programs, including Medicare, Medicaid, and federal education funding, continue to the present.
Economics and social conditions

Unlike the New Deal, which was a response to a severe economic crisis, the Great Society emerged in a period of prosperity. Nevertheless, grave social crises confronted the nation. Racial segregation persisted throughout the South. The Civil Rights Movement was gathering momentum, and in 1964 urban riots began within black neighborhoods in New York City and Los Angeles; by 1968 hundreds of cities had major riots that caused a severe political backlash. Foreign affairs were generally quiet except for the Vietnam War, which grew from limited involvement in 1963 to a large-scale military operation in 1968 that overshadowed the Great Society.

Ann Arbor Speech

Johnson presented his goals for the Great Society in a speech at the University of Michigan on May 22, 1964. Speechwriter Richard N. Goodwin coined the phrase "the Great Society," and Johnson had used the expression from time to time before the Michigan speech, but he had not emphasized it until that occasion. In this address, which preceded the election-year party conventions, Johnson described his plans to solve pressing problems: "We are going to assemble the best thought and broadest knowledge from all over the world to find these answers. I intend to establish working groups to prepare a series of conferences and meetings—on the cities, on natural beauty, on the quality of education, and on other emerging challenges. From these studies, we will begin to set our course toward the Great Society."

The 1965 legislative program and presidential task forces

President Kennedy had set up task forces composed of scholars and experts to craft New Frontier legislation.
A similar reliance on experts appealed to Johnson, in part because the task forces would work in secret, outside the existing activist networks, and directly for the White House staff. His staff created 14 separate task forces under presidential assistants Bill Moyers and Richard N. Goodwin. The average task force had nine members and generally comprised governmental experts and academics. Thirteen focused on finding innovative solutions to the issues of agriculture, anti-recession policy, civil rights, education, efficiency and economy, health, income-maintenance policy, intergovernmental fiscal cooperation, natural resources, pollution of the environment, preservation of natural beauty, transportation, and urban problems. Their reports went to Moyers, who circulated them to the relevant agencies and set up review panels. Special attention was paid to persuading Congress to pass the legislation, but Johnson basically circumvented Congress, the media, and the interest groups. Johnson reviewed all the proposals with Moyers and Budget Director Kermit Gordon in late 1964. Many specific proposals were included in brief form in Johnson's State of the Union address of January 1965. The task-force approach, combined with Johnson's electoral victory in 1964 and his talents in obtaining congressional approval, was widely credited with the success of the 1965 legislative agenda. Critics would later cite the task forces as a factor in a perceived elitist approach to Great Society programs. Also, because many of the initiatives did not originate from outside lobbying, some programs had no political constituencies to support their continued funding, which made them easy to cancel or cut back later.

The 1964 election and the Eighty-ninth Congress

With the exception of the Civil Rights Act of 1964, the Great Society agenda was not a widely discussed issue during the 1964 presidential election campaign.
Johnson won the election with 61% of the vote, the largest share since the popular vote first became widespread in 1824, and carried all but six states. Democrats gained enough seats to control more than two-thirds of each chamber of the Eighty-ninth Congress, with a 68–32 margin in the Senate and a 295–140 margin in the House of Representatives. The political realignment allowed House leaders to alter the rules that had let conservative Southern Democrats kill New Frontier and civil rights legislation in committee, which aided efforts to pass Great Society legislation. In 1965, the first session of the Eighty-ninth Congress created the core of the Great Society. The Johnson administration submitted eighty-seven bills to Congress, and Johnson signed eighty-four, or 96%, arguably the most successful legislative agenda in American history. The centerpiece of the Great Society was the use of public support for the Civil Rights Movement to build a new bipartisan coalition that passed three civil rights acts. The Civil Rights Act of 1964 forbade job discrimination and immediately ended segregation across the country; the South complied, with little resistance. Unexpectedly, gender was added, against the opposition of liberals who said the time for women had not come. The Voting Rights Act of 1965 assured minority registration and voting rights. It suspended the use of literacy and other voter-qualification tests that had sometimes served to keep African-Americans off voting lists and provided for federal court lawsuits to stop discriminatory poll taxes. It also reinforced the Civil Rights Act of 1964 by authorizing the appointment of federal voting examiners in areas that did not meet voter-participation requirements. The Civil Rights Act of 1968 banned housing discrimination and extended constitutional protections to Native Americans on reservations, but had little impact.
War on Poverty

The most ambitious and controversial part of the Great Society was its initiative to end poverty. The Kennedy administration had been contemplating a federal effort against poverty. Johnson, who as a teacher had observed extreme poverty among Mexican-Americans in Texas, launched an "unconditional war on poverty" in the first months of his presidency with the goal of eliminating hunger and deprivation from American life. The centerpiece of the War on Poverty was the Economic Opportunity Act of 1964, which created an Office of Economic Opportunity (OEO) to oversee a variety of community-based antipoverty programs. The OEO reflected a fragile consensus among policymakers that the best way to deal with poverty was not simply to raise the incomes of the poor but to help them better themselves through education, job training, and community development. Central to its mission was the idea of "community action," the participation of the poor themselves in framing and administering the programs designed to help them. The War on Poverty began with a $1 billion appropriation in 1964 and spent another $2 billion over the following two years.
It spawned dozens of programs, among them the Job Corps, whose purpose was to help disadvantaged youths develop marketable skills; the Neighborhood Youth Corps, the first summer-jobs program, established to give poor urban youths work experience and to encourage them to stay in school; Volunteers in Service to America (VISTA), a domestic version of the Peace Corps, which placed concerned citizens with community-based agencies to work toward empowerment of the poor; the Model Cities Program for urban redevelopment; Upward Bound, which assisted poor high school students entering college; legal services for the poor; the Food Stamps program; the Community Action Program, which initiated local Community Action Agencies charged with helping the poor become self-sufficient; and Project Head Start, which offered preschool education for poor children.

The most important educational component of the Great Society was the Elementary and Secondary Education Act of 1965, designed by Commissioner of Education Francis Keppel. It ended a long-standing political taboo by providing significant federal aid to public education, initially allotting more than $1 billion to help schools with a high concentration of low-income children purchase materials and start special-education programs. The Act established Head Start, which had originally been started by the Office of Economic Opportunity as an eight-week summer program, as a permanent program. The Higher Education Act of 1965 increased federal money given to universities, created scholarships and low-interest loans for students, and established a National Teachers Corps to provide teachers to poverty-stricken areas of the United States. It began a transition from federally funded institutional assistance to individual student aid. The Bilingual Education Act of 1968 offered federal aid to local school districts to help them address the needs of children with limited English-speaking ability; it expired in 2002.
The Social Security Act of 1965 authorized Medicare and provided federal funding for many of the medical costs of older Americans. The legislation overcame the bitter resistance, particularly from the American Medical Association, to the idea of publicly funded health care or "socialized medicine" by making its benefits available to everyone over sixty-five, regardless of need, and by linking payments to the existing private insurance system.

Medicare and Medicaid

Medicare provided hospital and doctor care for all people retired and on Social Security. In 1966, welfare recipients of all ages received medical care through the Medicaid program, created on July 30, 1965, through Title XIX of the Social Security Act. Each state administers its own Medicaid program, while the federal Centers for Medicare and Medicaid Services (CMS) monitors the state-run programs and establishes requirements for service delivery, quality, funding, and eligibility standards.

Arts and cultural institutions

National endowments for arts and humanities

In September 1965, Johnson signed the National Foundation on the Arts and Humanities Act into law, creating the National Endowment for the Arts and the National Endowment for the Humanities as separate, independent agencies. Lobbying for federally funded arts and humanities support began during the Kennedy administration. In 1964, the National Commission on the Humanities suggested that the emphasis placed on science endangered the study of the humanities from elementary schools through postgraduate programs. To correct the balance, it recommended "the establishment by the President and the Congress of the United States of a National Humanities Foundation." Johnson supported the idea. In March 1965, the White House proposed the establishment of a National Foundation on the Arts and Humanities and requested $20 million in start-up funds.
The commission's report had generated other proposals, but the White House's approach eclipsed them. The administration's plan, which called for the creation of two separate agencies, each advised by a governing body, was the version approved by Congress. Richard Nixon later dramatically expanded funding for the NEH and NEA.

After the First National Conference on Long-Range Financing of Educational Television Stations in December 1964 called for a study of the role of noncommercial educational television in society, the Carnegie Corporation agreed to finance the work of a 15-member national commission. Its landmark report, Public Television: A Program for Action, published on January 26, 1967, popularized the phrase "public television" and assisted the legislative campaign for federal aid. The Public Broadcasting Act of 1967, enacted less than 10 months later, chartered the Corporation for Public Broadcasting (CPB) as a private, non-profit corporation. The law initiated federal aid through the CPB for the operation, as opposed to the funding of capital facilities, of public broadcasting. The CPB initially collaborated with the pre-existing National Educational Television system, but in 1969 decided to start the Public Broadcasting Service (PBS). A public radio study commissioned by the CPB and the Ford Foundation and conducted from 1968 to 1969 led to the establishment of National Public Radio, a public radio system under the terms of the amended Public Broadcasting Act.

Two long-planned national cultural and arts facilities received federal funding through Great Society legislation that allowed for their completion. A National Cultural Center, suggested during the Franklin Roosevelt administration and created by a bipartisan law signed by Dwight Eisenhower, was transformed into the John F. Kennedy Center for the Performing Arts, a living memorial to the assassinated president.
Fundraising for the original cultural center had been poor prior to the legislation creating the Kennedy Center, which passed two months after the president's death and provided $23 million for construction. The Kennedy Center opened in 1971. In the late 1930s, the United States Congress mandated a Smithsonian Institution art museum for the National Mall, and a design by Eliel Saarinen was unveiled in 1939, but plans were shelved during World War II. A 1966 act of Congress established the Hirshhorn Museum and Sculpture Garden as part of the Smithsonian Institution with a focus on modern art, in contrast to the existing National Art Gallery. The museum was primarily federally funded, although New York financier Joseph Hirshhorn later contributed $1 million toward building construction, which began in 1969. The Hirshhorn opened in 1974.

Congress set up the Department of Transportation in October 1966; it began operations in spring 1967. The Urban Mass Transportation Act of 1964 provided $375 million for large-scale urban public or private rail projects in the form of matching funds to cities and states and created the Urban Mass Transit Administration (now the Federal Transit Administration). The National Traffic and Motor Vehicle Safety Act of 1966 and the Highway Safety Act of 1966 were enacted, largely as a result of Ralph Nader's book Unsafe at Any Speed.

In 1964, Johnson named Assistant Secretary of Labor Esther Peterson to be the first presidential assistant for consumer affairs. Consumer-protection legislation followed:

- The Cigarette Labeling Act of 1965 required packages to carry warning labels.
- The Motor Vehicle Safety Act of 1966 set standards through the creation of the National Highway Traffic Safety Administration.
- The Fair Packaging and Labeling Act required products to identify the manufacturer and address and to clearly mark quantity and servings; it also permitted HEW and the FTC to establish and define voluntary standard sizes. The original bill would have mandated uniform standards of size and weight for comparison shopping, but the final law only outlawed exaggerated size claims.
- The Child Safety Act of 1966 prohibited any chemical so dangerous that no warning can make it safe.
- The Flammable Fabrics Act of 1967 set standards for children's sleepwear, but not baby blankets.
- The Wholesome Meat Act of 1967 required inspection of meat, which had to meet federal standards.
- The Truth-in-Lending Act of 1968 required lenders and credit providers to disclose the full cost of finance charges, in both dollars and annual percentage rates, on installment loans and sales; it did not stop spiraling consumer debt.
- The Wholesome Poultry Products Act of 1968 required inspection of poultry, which had to meet federal standards.
- The Land Sales Disclosure Act of 1968 provided safeguards against fraudulent practices in the sale of land.
- The Radiation Safety Act of 1968 provided standards and recalls for defective electronic products.

Joseph A. Califano, Jr. has suggested that the Great Society's main contribution to the environment was an extension of protections beyond those aimed at the conservation of untouched resources. Discussing his administration's environmental policies, Lyndon Johnson said that "[t]he air we breathe, our water, our soil and wildlife, are being blighted by poisons and chemicals which are the by-products of technology and industry. The society that receives the rewards of technology must, as a cooperating whole, take responsibility for [their] control. To deal with these new problems will require a new conservation. We must not only protect the countryside and save it from destruction, we must restore what has been destroyed and salvage the beauty and charm of our cities. Our conservation must be not just the classic conservation of protection and development, but a creative conservation of restoration and innovation."
At the behest of Secretary of the Interior Stewart Udall, the Great Society included several new environmental laws to protect air and water. Environmental legislation enacted included:
- Clean Air, Water Quality and Clean Water Restoration Acts and Amendments,
- Wilderness Act of 1964,
- Endangered Species Preservation Act of 1966,
- National Trail System Act of 1968,
- Wild and Scenic Rivers Act of 1968,
- Land and Water Conservation Act of 1965,
- Solid Waste Disposal Act of 1965,
- Motor Vehicle Air Pollution Control Act of 1965,
- National Historic Preservation Act of 1966,
- Aircraft Noise Abatement Act of 1968, and
- National Environmental Policy Act of 1969.

New York City

The Great Society flourished in New York City, the rich and powerful center of American liberalism, then crashed. An important example came in medical care for the poorly served, which Johnson's advisers identified as a cause of poverty. Liberals within the City health department pushed through numerous large-scale, expensive reforms thanks to the progressive political and social climate that prevailed nationally. They added ambulatory care to their traditional preventive activities, made sure community members were involved in the planning and implementation of services, audited the quality of the care funded from public funds, and launched new efforts to solve health problems, such as lead poisoning, that were rooted in poverty. However, forces "below" and "above" constrained what they could accomplish. Newly empowered community activists were uncooperative and even hostile, while federal and state financing was unstable. The crisis of New York State's unforeseen and exorbitant Medicaid expenses doomed the funding for the City health department's efforts. State money was hurriedly moved to cover Medicaid shortfalls; only three neighborhood family-care centers were created, instead of the sixteen planned. Continued severe fiscal retrenchment would put a decisive end to reform.
The continued flight of the middle class to the suburbs during the early 1970s and the consequent erosion of the tax base plunged New York City into a financial downward spiral that culminated in insolvency in 1975. The city was taken over mid-year by a "municipal assistance corporation," an independent coalition of investors that kept the city solvent by assuming the most immediate of its massive debts. The staff of the Department of Health was cut by one-fourth. In a series of triage decisions, department services were categorized as "life saving" versus "life enhancing," with the latter subject to cuts. The experience showed that if Great Society liberalism could not make it in New York City, it could not make it anywhere, as state after state repeated similar patterns.

The legacies of the Great Society

Immediately on passage of the first Civil Rights Act in 1964, large-scale rioting began in black neighborhoods, escalating every summer during Johnson's years, until the "long hot summer" of massive violence against the shops, stores, and police of the inner city became almost routine. The black riots played a major role in destroying political support for Johnson's Great Society. Crime rates climbed sharply, making cities physically dangerous, and inner-city public schools and universities declined sharply. Many white ethnics of the old New Deal Coalition felt betrayed and moved toward the Republican Party. Labor unions reached their maximum strength and began a steady, sharp decline, weakening the Democratic Party in industrial states. At the national level, a backlash against the "big government" solutions of the Johnson administration had begun to set in immediately upon passage. Johnson had circumvented the step of building a grass-roots support network because he wanted only experts to design the Great Society, with minimal political compromises. Typical was the fate of the neighborhood health center program.
Funding for the centers remained flat during the Nixon and Ford administrations, in spite of escalating health-care costs; beginning in 1970, the program was gradually transferred from the OEO to the Department of Health, Education, and Welfare, where it stagnated. Not only was there no financial commitment, there was little political will to expand health and welfare services for the poor. On the other hand, Medicare was enormously popular because it had a very large, well-defined, active cadre of supporters in the population over age 65, and because it had its own tax base as part of Social Security. Medicare and Medicaid (the latter for poor people) drove up medical costs and squeezed the discretionary budget. Food stamps remained popular because they had support from both the inner cities and the farmers. Additional Great Society programs became impossible after the GOP gains in the 1966 elections, as voters were angered by soaring crime and violence. The Democratic coalition splintered four ways, with George Wallace leading the segregationists in the Deep South, Hubert Humphrey leading the old New Dealers, city bosses, and labor unions, and Eugene McCarthy and Robert Kennedy in bitter competition for the students and intellectuals who opposed the Vietnam War. There was no place for Johnson; his support collapsed after the Vietnam War seized national attention in early 1968. Johnson stunned the nation by withdrawing from the presidential race. The Democratic nomination was bestowed, amidst tear gas and violence in Chicago, on Humphrey, who lost in a three-way race to Nixon, as Wallace captured the Deep South. Many Great Society initiatives, especially civil rights laws and programs that benefited the middle class, continue to exist in some form. Some programs, like Medicare and Medicaid, have been criticized as inefficient and unwieldy, but enjoy wide support and have grown vastly since the 1960s.
Federal funding of public and higher education has expanded since the Great Society era and has maintained bipartisan support. Federal funding for cultural initiatives in the arts, humanities, and public broadcasting has repeatedly been targeted for elimination, but has survived.

The War on Poverty

Interpretations of the War on Poverty remain controversial. The Office of Economic Opportunity was dismantled by the Nixon and Ford administrations, largely by transferring poverty programs to other government departments. Funding for many of these programs was further cut in President Ronald Reagan's first budget in 1981. The gap between the expansive intentions of the War on Poverty and its relatively modest achievements fueled later conservative arguments that government is not an appropriate vehicle for solving social problems. The poverty programs were heavily criticized by conservatives, especially Charles Murray, who denounced them in his 1984 book Losing Ground as being ineffective and creating an underclass of lazy citizens. One of Johnson's aides, Joseph A. Califano, Jr., has countered that, "from 1963 when Lyndon Johnson took office until 1970 as the impact of his Great Society programs were felt, the portion of Americans living below the poverty line dropped from 22.2 percent to 12.6 percent, the most dramatic decline over such a brief period in this century." The poverty rate for blacks fell from 55 percent in 1960 to 27 percent in 1968. However, the poverty rate among black families had already fallen dramatically between 1940 and 1960 (87 percent to 47 percent), suggesting poverty rates would have continued falling without the War on Poverty. In spite of the important gains in civil rights dating from the Great Society and the continuing expansion of Medicare and Medicaid, today's memory of it is now generally negative, with liberals rarely mentioning Johnson and conservatives systematically attacking it. Stanley B.
Greenberg argues that the ideas of the Great Society "have fallen into disrepute. They have come to represent narrow and unconvincing ideas that cannot organize the 'facts' into a convincing story. Only when the historic rubble is cleared away and new models and ideas gain currency, will Democrats be able to take advantage of popular impulses that favor equity, populism and national effort." The conservative backlash of the 1970s and 1980s made the word "liberal" a term of reproach that liberals have largely abandoned in favor of "progressive." Johnson became an issue in the heated 2008 Democratic primary contest when Senator Hillary Clinton, trying to make a point about presidential leadership by undercutting Senator Barack Obama's constant references to Martin Luther King, the civil rights icon, said: "Dr King's dream began to be realized when President Lyndon Johnson passed the Civil Rights Act of 1964. It took a president to get it done." She came under immediate attack from Democrats, who generally revere King and ignore Johnson.

Bibliography

For a detailed guide, see Great Society/Bibliography.
- Andrew, John A. Lyndon Johnson and the Great Society (1998)
- Ginzberg, Eli, and Robert M. Solow, eds. The Great Society: Lessons for the Future (1974), with a chapter on each program, by experts
- Helsing, Jeffrey W. Johnson's War/Johnson's Great Society: The Guns and Butter Trap (2000)
- Kaplan, Marshall, and Peggy L. Cuciti, eds. The Great Society and Its Legacy: Twenty Years of U.S. Social Policy (1986)
- Milkis, Sidney M., and Jerome M. Mileur, eds. The Great Society and the High Tide of Liberalism (2005)
- Murray, Charles. Losing Ground: American Social Policy, 1950-1980 (1985), an influential attack from the right
- Woods, Randall. LBJ: Architect of American Ambition (2006).
A highly detailed scholarly biography.
- Zarefsky, David. President Johnson's War on Poverty: Rhetoric and History (1986)

Notes
- See "President Johnson's speech at the University of Michigan."
- Unger, Irwin. The Best of Intentions: The Triumphs and Failures of the Great Society under Kennedy, Johnson, and Nixon (Doubleday, 1996), p. 104.
- See NEH online.
- James Colgrove, "Reform and Its Discontents: Public Health in New York City During the Great Society," Journal of Policy History 19:1 (2007), 3-28.
- Jonathan Engel, Poor People's Medicine: Medicaid and American Charity Care since 1965 (2006).
- See Sowell column.
- Stanley B. Greenberg, "Liberalism: Beyond the Great Society and New Deal: Rediscovering People and Prosperity," in John F. Sears, ed., Franklin D. Roosevelt and the Future of Liberalism (1991), p. 116.
- Jerome M. Mileur, "The Great Society and the Demise of New Deal Liberalism," in Sidney M. Milkis and Jerome M. Mileur, eds., The Great Society and the High Tide of Liberalism (2005), pp. 411-55.
- Tim Reid, "Hillary Clinton gaffe over Martin Luther King may cost votes in South Carolina," The Times (London), Jan. 12, 2008.
Source: http://en.citizendium.org/wiki/Great_Society
In 1997, in The Confederate War (Harvard University Press), the historian Gary W. Gallagher argued that contemporary scholars erred in ascribing Confederate defeat to questions of race, class, and gender, and especially to discontent on the home front and ambivalence over slavery's morality. Gallagher, one of the nation's pre-eminent Civil War specialists, marveled not at Confederate weakness but rather at how the outmanned and outgunned Southerners sustained nationalism and popular will as long as they did. "The Confederate military," he concluded, "ultimately proved unable to win enough victories at crucial times to carry their nation to independence." Historians' propensity to emphasize internal Southern weakness, he maintained, resulted "from an understandable tendency to work backward from the war's outcome in search of explanations for Confederate failure." Just as Gallagher judged historians off track regarding Confederate defeat, he now considers them derailed on the question of what mattered most to victorious Northerners—the concept of the Union. Writing recently on a New York Times blog, Gallagher remarks that "as we approach the sesquicentennial of the Civil War, the meaning of Union to mid-19th-century Americans has been almost completely lost. Americans today find it hard to believe that anyone would risk life or fortune for something as abstract as Union. A war to end slavery seems more compelling, the sort of war envisioned in the film Glory." Mindful of the importance of African-Americans in the military, political, and social history of the Civil War, Gallagher nonetheless insists that "a concentration on emancipation and race sometimes suggests that Union victory had scant meaning apart from them." "Without an appreciation of why loyal citizens believed a Union that guaranteed democratic self-government was worth great sacrifice, no accurate understanding of the Civil War era is possible. 
A sesquicentennial that fails to make this clear will have failed in a fundamental way." Gallagher develops this argument in his bold, fast-paced, and provocative The Union War, a work that in its revisionist historiographical tone parallels his earlier book. Recently published by Harvard, The Union War offers a searing critique of what Gallagher terms anachronistic scholarship that privileges emancipation and the agency of African-Americans during the war over loyal citizens' commitment to the concept of a perpetual Union. Accusing historians of allowing "modern sensibilities" to skew their "view of how participants of a distant era understood the war," Gallagher finds, not surprisingly, that their scholarship exposes "the many ways in which wartime Northerners fell short of later standards of acceptable thought and behavior." "Can we criticize the North's Civil War generation for not envisioning what the nation has become?" he asks. Gallagher takes aim at numerous noted historians, including Orville Vernon Burton, Walter A. McDougall, and David Williams, for undervaluing the seriousness and importance of the concept of the Union to the wartime generation, for emphasizing antebellum America's class and racial shortcomings, for overstating Americans' self-interest, and for condemning America for inequality. According to Gallagher, such historians dismiss the notion of American exceptionalism that President Abraham Lincoln described in December 1862, referring to the Union, then mired in a bloody civil war, as "the last, best hope of earth." Recent scholars, Gallagher insists, also have erred seriously by distorting emancipation's place in the war's grand narrative. He recognizes that most loyal Union soldiers held slavery responsible for the war but points out that they generally considered emancipation "as a means to achieve and uphold Union." Lincoln's government pursued emancipation as a necessary tool of preserving the Union rather than as an end in itself. 
Gallagher considers the transformation of the war into what the historian Robert Eno Hunt terms "an emancipation event—an interpretation made necessary by the civil-rights generation" to be unfortunate, "part of a cultural imperative to frame a more expansive view of race in the United States around black people as agents of their own improvement." While illuminating many neglected aspects of the war, the emphasis on race has created its own distortion, The Union War argues. The author says historians since the 1970s have gone astray in transforming emancipation into "the second grand goal" of the conflict when, in fact, most citizen-soldiers never considered freeing the slaves a principal aim. The fact that the war ended slavery and set the nation on the road to the modern civil-rights movement has infused the Civil War story with the fallacious notion that emancipation was an inevitable result of Union victory, Gallagher maintains. Scholars, he writes, have also slighted the importance of the U.S. military's role in the war. The army, he insists, constituted "a national institution of unmatched reach and influence, an expression of a free society's reliance on citizen-soldiers, and the principal instrument wielded to salvage and safeguard the Founders' constitutional handiwork." Gallagher takes special umbrage at historians' treatment of the army's contribution to emancipation. The army in fact "determined how and where freedom arrived in the Confederacy"; by providing opportunities for slaves to escape, it shaped "the geography of emancipation." By the spring of 1865, the army had facilitated the escape of perhaps as many as one-seventh of the Confederacy's slaves. "First came Union armies," Gallagher insists, "followed by a swelling migration of enslaved people from farms and plantations to lines held by the invading Yankees." 
Without diminishing the contributions of African-Americans, Lincoln, and Congress in staging the drama of emancipation, Gallagher argues that "without the United States Army, none of the other actors could have succeeded on a broad scale. No matter how desperately slaves wanted to be free ... the chance for successful escape was negligible unless Union military forces had reached their area." Gallagher singles out three influential scholars—James M. McPherson, Ira Berlin, and Leslie S. Rowland—for influencing the "odd way" that recent historians have interpreted wartime emancipation. McPherson is the author of the Pulitzer Prize-winning Battle Cry of Freedom (1988); Berlin established the University of Maryland's Freedmen and Southern Society Project in 1976 and assembled a team of talented editors; Rowland now directs the project, which has published five important collections that document the emancipation process. They are among the scholars whom Gallagher identifies as celebrating the notion of "self-emancipation"—that the slaves themselves forced Lincoln and Congress to adopt emancipation. Others maintain that Lincoln should receive much credit for liberating the South's bondsmen and women. "Advocates of different positions regarding how to apportion credit for emancipation typically mention the importance of the Union army," Gallagher complains, "and then ignore it in the bulk of their analysis." He seeks to contextualize the story. Emancipation resulted from a series of policy changes forced upon Lincoln's government as the Confederates' insurrection dragged on, in what Gallagher terms an "emancipation-related contingency" that "provided a powerful impetus to emancipation and the ancillary enrollment of African-American soldiers." Emancipation and the recruitment of black troops were byproducts of stiffened Rebel resistance. 
As Lincoln had warned border-state representatives in July 1862, the longer the war persisted, the more slavery, their "peculiar institution," would "be extinguished by mere friction and abrasion—by the mere incidents of the war." Gallagher also faults historians, including such noteworthies as Edward J. Blum, Michael W. Fitzgerald, Eric Foner, and Kenneth M. Stampp, for interpreting Radical Reconstruction following the war as a "lost moment," an "unfinished revolution," a missed opportunity to attain true racial equality by delivering on the promises of the Reconstruction-era constitutional amendments. While cognizant of the veil of racism that shrouded the nation from the late 19th to well past the mid-20th century, Gallagher nevertheless emphasizes the significant racial progress Americans achieved between 1860 and 1880. "Far from a lost moment," he explains, "the era of Reconstruction represented a rather miraculous period that yielded essential improvements to the Constitution that would have been unthinkable except as an outgrowth of the war." The 13th, 14th, and 15th Amendments signified "unequivocal evidence of the transformative power of a massive military event." Gallagher correctly identifies a consensus in recent Civil War historiography that devotes considerable attention to the experiences of slaves, to black agency and self-emancipation, and to the recruitment and service of the United States Colored Troops. Before McPherson's groundbreaking scholarship on abolitionist and civil-rights activism during and after the Civil War, and the stunning documentary histories edited by Berlin, Rowland, and their colleagues, only a handful of scholars, generally black, integrated the actions of African-Americans into the grand Civil War narrative. W.E.B. Du Bois's magisterial but unfortunately undervalued Black Reconstruction (1935) represents this genre most powerfully. 
But have recent scholars exaggerated emancipation as both a component and byproduct of Confederate defeat at the expense of the war as the savior of the Union? Not to the extent that Gallagher suggests. Today's historians, like Lincoln 150 years ago, understand the vital nexus between Union and emancipation. Late in the war, for example, when Union victory remained in doubt, Lincoln rejected proposals for a negotiated peace that would have abandoned emancipation and the use of African-Americans as soldiers. He explained that his government fought "for the sole avowed object of preserving our Union; and it is not true that it has since been, or will be, prossecuted [sic] by this administration, for any other object." That said, Lincoln likened African-American soldiers, sailors, and laborers to the "physical force" of steam or horsepower. "Keep it and you can save the Union," the president remarked. "Throw it away and the Union goes with it." In 1985, in The Destruction of Slavery, Berlin and his editorial team summarized their "self-emancipation" thesis. "Over the course of the ... war," they explained, "the slaves' insistence that their own enslavement was the root of the conflict—and that a war for the Union must necessarily be a war for freedom—strengthened their friends and weakened their enemies." African-Americans' determination to be free convinced Northerners "to make property into persons, to make slaves into soldiers, and, in time, to make all black people into citizens." Over time, even white Southerners, too, "came to understand the link between national union and universal liberty." Nine years later, asking "Who Freed the Slaves?" McPherson answered by gently critiquing the self-emancipation argument. 
"By coming into Union lines, by withdrawing their labor from Confederate owners, by working for the Union army and fighting as soldiers in it," he acknowledged, "slaves did play an active part in achieving their own freedom and, for that matter, in preserving the Union." That said, McPherson argued, those who credited the slaves with freeing themselves risked undervaluing Lincoln's contribution (and presumably that of the Union army) to the emancipation experience. Anticipating Gallagher's argument, he pointed to the direct relationship between Union military success and emancipation. Instead of the slaves' freeing themselves, McPherson maintained, Union troops, or the presence of Union troops, had liberated them. "Freedom," then, "quite literally came from the barrel of a gun." And because the president, as commander in chief, had overseen an army of liberators, "Abraham Lincoln freed the slaves." In fact, McPherson drew on Berlin's own The Destruction of Slavery in pointing out the direct relationship between Union military success and emancipation. "Indeed," Berlin had explained, "any Union retreat could reverse the process of liberation and throw men and women who had tasted freedom back into bondage." As to Gallagher's other criticism, contemporary historians probably do understate the symbolic importance that Northerners accorded to keeping the Union of their fathers and grandfathers intact—the Revolution-era battlefields and patriot graves Lincoln mentioned in his First Inaugural Address. But it is less a question of ignoring the importance of the Union than assuming it. After all, McPherson based his invaluable studies of why Union and Confederate soldiers went to war on documents showing how both Northerners and Southerners saw themselves as custodians of the legacy of 1776. Scholars take for granted that for the North, the war began as a constitutional struggle to suppress rebellion and to retain the federal government's sovereignty. 
They acknowledge that liberty and Union mattered to loyal citizens, and that emancipation and the recruitment of African-American soldiers gradually became essential weapons in the Union army's arsenal. "In time," Ira Berlin observes in a chapter in the 1997 book Union & Emancipation (Kent State University Press), "it became evident to even the most obtuse Federal commanders that every slave who crossed into Union lines was a double gain: One subtracted from the Confederacy and one added to the Union." As Gallagher makes clear, Union troops often were the gatekeepers for escaped slaves and, despite their racial bias, most came to understand (if not fully to appreciate) emancipation as necessary to squashing disunion. Historians today, however, generally interweave questions of Union, the military, race, and "black agency" more successfully and with more nuance than Gallagher suggests. While often impatient with Americans in the era of Reconstruction—and prone to impose present-day notions of the meaning and power of race, class, and gender on them—historians also generally offer more-balanced analyses of that period than Gallagher observes. They understand the huge transformation, as well as the lost opportunities, that Reconstruction wrought for Northerners, Southerners, and Westerners, for whites, blacks, and Native Americans. Almost a half-century ago—in the midst of the civil-rights revolution—Kenneth Stampp acknowledged the "Radical idealism" that sparked the 14th and 15th Amendments. In his revisionist The Era of Reconstruction, 1865-1877 (1965), Stampp remarked, "That these amendments could not have been adopted under any other circumstances, or at any other time, before or since, may suggest the crucial importance of the Reconstruction era in American history. Indeed, without radical Reconstruction, it would be impossible to this day for the federal government to protect Negroes from legal and political discrimination." 
Gallagher's provocative book, part historiographical intervention and part interpretive history, underscores, as a reviewer of his earlier The Confederate War explained, the dangers that "knowledge of outcome impose" on the writing of history. Gallagher reminds us of the centrality and importance of the Union to the war that forever ended serious threats of secession and racial slavery. Contemporaries grasped the meaning of the Union: They generally referred to Northern forces as the "Union army" and spoke of "Union victory" over the Rebels. Today we know that African-Americans played no small part in bringing the Confederate war to an end and in defining the meaning and legacy of the Union war.
Source: http://chronicle.com/article/Civil-War-History-an/127926/
The Japanese capital has experienced its first winter without snow for 131 years, weather officials say. Will snow be a thing of the past for those living in Japan's capital? The Japan Meteorological Agency said it had recorded no snow in central Tokyo between December and the end of February, the official winter months. This is the first time no snow has fallen in winter since records began in 1876, the agency said. Officials said the winter had been unusually warm, but added that snow could still fall in the coming weeks. And they played down a direct link to global warming. "We believe El Nino can be one factor. Another theory is that the seasonal southward movement of cold air from the Arctic region was not sustained and weak," one official from the agency said. "It's a bit of stretch to link this directly to global warming. But the winter was very warm, for sure." The official, quoted by the AFP news agency, said cold air is expected to move into the Tokyo region in the middle of March. "We might see snow then. In the past, Tokyo has had snow as late as April 17," the official added. Tokyo is more likely to experience snowfall in early spring than in winter, according to the meteorological agency. Four years ago, a December snowfall was the first to be recorded in the Japanese capital for 15 years.
Source: http://news.bbc.co.uk/2/hi/asia-pacific/6407771.stm
|Name: _________________________||Period: ___________________| This test consists of 5 short answer questions, 10 short essay questions, and 1 (of 3) essay topics. Short Answer Questions 1. Who gives Langston a thousand-dollar grant? 2. Where does Noel Sullivan, an old friend of Hughes', put him up? 3. With what is Hughes' poetry often infused? 4. Why are Hughes and the crew sent to Odessa? 5. Does Langston read his poetry at this luncheon? Short Essay Questions 1. What happens to the movie for which Hughes had been asked to become a writer? 2. What does Langston do after his first year at Lincoln? 3. Describe Langston's poetry-reading tour. 4. What does Langston do when he returns to Harlem? 5. What does Langston do when he travels to New Orleans? 6. What does Langston do while in Carmel, California? 7. Who is Jesse B. Simple? 8. Why does Langston write the poem "Tomorrow's Seed"? 9. What is significant about "Don't You Want to Be Free?" 10. What does Langston do upon returning to the United States? Write an essay for ONE of the following topics: Essay Topic 1 One of this text's themes regards the division between black and white. Part 1) How is this theme revealed in Langston's family? Part 2) How does this theme surface throughout the book? Part 3) How is there irony in this theme when Langston travels to Africa? Essay Topic 2 Hughes is accepted into Columbia University. Part 1) Describe his experience at Columbia. Part 2) Why does he drop out? How does his experience at Columbia play a role in his decision to drop out? Part 3) Is Langston's college experience worthwhile? Why or why not? What purpose does this time at Columbia serve? Essay Topic 3 Langston goes on a poetry-reading tour. Part 1) Describe this tour. Why does he go on this tour? Part 2) How does he affect those to which he speaks? Why? Part 3) How is Hughes affected by the people to which he speaks and meets? Part 4) How does racism continue to play a role in Hughes' life during this time?
Source: http://www.bookrags.com/lessonplan/langston-hughes/test6.html
It’s Not the Size of the Salamander, It’s the Size of the Fight in the Salamander Source Newsroom: Allen Press Publishing Newswise — Don’t get between a salamander and her eggs. The concept usually applies to a mother bear and her cubs, but it rings true for this small amphibian as well—particularly as the eggs get closer to hatching. A study has found that female salamanders grow more aggressive in defending their nests as their eggs mature. Other factors, including the size of the mother, were insignificant. An article in the December 2010 issue of the journal Herpetologica describes the behavior of the eastern red-backed salamander, found in forests of North America. The females proved to be more vigorous about guarding clutches of eggs than territory or food. The study was conducted in a laboratory setting, using wild salamanders caught in New Hampshire. Female salamanders and their nests of eggs were “threatened” by the introduction of a nonbrooding female salamander for 15-minute intervals. While the first reaction of many mothers was to curl tightly around their eggs to protect them, substantially more aggressive behavior toward the intruder followed. In many instances, nudging, chasing, and snapping behaviors eventually gave way to repeatedly biting the intruder. The female red-backed salamander may reproduce only every two years because of the substantial energy required to produce and attend her clutch. As the eggs become more viable, the mother’s protection increases. The research found that mothers were more aggressive in defending their six-week-old eggs than their four-week-old eggs. At the same time, the number of eggs in the nest and the size of the mother did not appear to make a difference in her aggression. These salamanders have the ability to recognize the developmental stage of their eggs, or at least are able to determine the amount of time that has passed since they laid their eggs.
The older the brood, the more likely it is to survive to hatching, making it more important to the mother. Full text of the article “Factors Affecting Aggression During Nest Guarding in the Eastern Red-Backed Salamander (Plethodon Cinereus),” Herpetologica, Volume 66, Issue 4, December 2010 is available at http://www2.allenpress.com/pdf/herp-66-04-385-392.pdf Herpetologica, a quarterly journal of The Herpetologists' League, contains original research articles on the biology of amphibians and reptiles. The journal serves herpetologists, biologists, ecologists, conservationists, researchers, and others interested in furthering knowledge of the biology of amphibians and reptiles. To learn more about the society, please visit: http://www.herpetologistsleague.org.
<urn:uuid:31b89e0d-fdf2-4639-bbe5-40e3f80a43dd>
CC-MAIN-2016-26
http://www.newswise.com/articles/it-s-not-the-size-of-the-salamander-it-s-the-size-of-the-fight-in-the-salamander
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399385.17/warc/CC-MAIN-20160624154959-00047-ip-10-164-35-72.ec2.internal.warc.gz
en
0.95783
598
3.46875
3
- taste (v.) - c. 1300, "to touch, to handle," from Old French taster "to taste, sample by mouth; enjoy" (13c.), earlier "to feel, touch, pat, stroke" (12c., Modern French tâter), from Vulgar Latin *tastare, apparently an alteration (perhaps by influence of gustare) of taxtare, a frequentative form of Latin taxare "evaluate, handle" (see tax (v.)). Meaning "to take a little food or drink" is from c. 1300; that of "to perceive by sense of taste" is recorded from mid-14c. Of substances, "to have a certain taste or flavor," it is attested from 1550s (replaced native smack (v.3) in this sense). For another PIE root in this sense, see gusto. The Hindus recognized six principal varieties of taste with sixty-three possible mixtures ... the Greeks eight .... These included the four that are now regarded as fundamental, namely 'sweet,' 'bitter,' 'acid,' 'salt.' ... The others were 'pungent' (Gk. drimys, Skt. katuka-), 'astringent' (Gk. stryphnos, Skt. kasaya-), and, for the Greeks, 'rough, harsh' (austeros), 'oily, greasy' (liparos), with the occasional addition of 'winy' (oinodes). [Carl Darling Buck, "A Dictionary of Selected Synonyms in the Principal Indo-European Languages," 1949] Sense of "to know by experience" is from 1520s. Related: Tasted; tasting. Taste buds is from 1879; also taste goblets. - taste (n.) - early 14c., "act of tasting," from Old French tast "sense of touch" (Modern French tât), from taster (see taste (v.)). From late 14c. as "a small portion given;" also "faculty or sense by which the flavor of a thing is discerned;" also "savor, sapidity, flavor." Meaning "aesthetic judgment, faculty of discerning and appreciating what is excellent" is first attested 1670s (compare French goût, German geschmack, Russian vkus, etc.). Of all the five senses, 'taste' is the one most closely associated with fine discrimination, hence the familiar secondary uses of words for 'taste, good taste' with reference to aesthetic appreciation. 
[Buck] Taste is active, deciding, choosing, changing, arranging, etc.; sensibility is passive, the power to feel, susceptibility of impression, as from the beautiful. [Century Dictionary]
<urn:uuid:017de55b-277e-4d03-9be0-e4d362840d85>
CC-MAIN-2016-26
http://etymonline.com/index.php?term=taste&allowed_in_frame=0
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.2/warc/CC-MAIN-20160624154951-00201-ip-10-164-35-72.ec2.internal.warc.gz
en
0.913122
591
3.03125
3
Discussion and Essay Questions Available to teachers only as part of the Teaching the Right to Bear Arms Teacher Pass. Teaching the Right to Bear Arms Teacher Pass includes: Assignments & Activities Current Events & Pop Culture articles Discussion & Essay Questions Challenges & Opportunities Related Readings in Literature & History Sample of Discussion and Essay Questions Meaning of the Second Amendment United States v. Cruikshank According to this case, what was the purpose of the Second Amendment? What did it protect? What did it not protect? What power did the states hold under this interpretation of the Second Amendment? What responsibilities did the states hold? What is the doctrine of incorporation and how does it affect the Second Amendment? United States v. Miller Summarize the ruling in United States v. Miller. How did the Court define “the people”? How did this definition affect individual gun rights? How did the Court open the door to an individual right to own certain types of guns? More precisely, who served in the militia? Who exactly were (and were not) part of "the people"? What sorts of guns were covered under this interpretation of the Second Amendment? District of Columbia v. Heller How would you summarize this decision? In what ways did it deviate from previous decisions? What right did the Court say the Second Amendment guarantees? What are the limitations on that right? Why is it significant that this ruling applied to a Washington, D.C. law? What impact will this ruling have on state laws?
<urn:uuid:1f9c6841-4584-454a-b139-0ecfb18b4a7f>
CC-MAIN-2016-26
http://www.shmoop.com/right-to-bear-arms/discussion-essay-questions.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783394414.43/warc/CC-MAIN-20160624154954-00121-ip-10-164-35-72.ec2.internal.warc.gz
en
0.958443
394
3.25
3
I'm not sure whether these two statements are the same thing: "$f$ equals a continuous function a.e." and "$f$ is continuous a.e." The concept is quite abstract, so I want to find some counterexamples:
- a function $f$ and a continuous function $g$ such that $f = g$ a.e. and $f$ is not continuous a.e.;
- a function $f$ continuous a.e. such that there exists no continuous function $g$ with $f = g$ a.e.
In addition, I want to find a Riemann integrable function which has an uncountable set of discontinuities. Are there some nice examples to help understand these concepts?
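Standard textbook counterexamples settle all three requests. The sketch below is not drawn from the question itself; it collects well-known examples (the Dirichlet function, a step function, and the indicator of the Cantor set), so check the details against a real-analysis text:

```latex
% (1) f = g a.e. with g continuous, yet f is continuous nowhere:
%     the Dirichlet function equals 0 a.e. (since \mathbb{Q} is null).
f = \chi_{\mathbb{Q}}, \qquad g = 0, \qquad f = g \ \text{a.e.},
\quad \text{but } f \text{ is continuous at no point.}

% (2) f continuous a.e., yet f = g a.e. for no continuous g:
%     a step function is continuous except at 0; any g with g = f a.e.
%     equals 0 on a dense subset of (-\infty,0) and 1 on a dense subset
%     of (0,\infty), so continuity would force a jump at 0 -- impossible.
f = \chi_{[0,\infty)}

% (3) Riemann integrable with uncountably many discontinuities:
%     \chi_C (C the Cantor set) is discontinuous exactly on C, which is
%     uncountable but has Lebesgue measure zero, so Lebesgue's criterion
%     for Riemann integrability applies.
f = \chi_{C}
```

Taken together, (1) and (2) show the two notions differ in both directions.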
<urn:uuid:05c72782-dbf4-4daf-8c75-132d00191f1f>
CC-MAIN-2016-26
http://math.stackexchange.com/questions/156439/continuous-function-a-e/156443
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398075.47/warc/CC-MAIN-20160624154958-00017-ip-10-164-35-72.ec2.internal.warc.gz
en
0.885956
160
2.65625
3
What does Propane smell like? Propane smells like rotten eggs, a skunk’s spray, or a dead animal. Propane manufacturers add this smell to help alert customers to propane leaks. Some people may have difficulty smelling propane due to their age (older people may have a less sensitive sense of smell); a medical condition; or the effects of medication, alcohol, tobacco, or drugs. Consider purchasing a propane gas detector as an additional measure of security. Odor fade is an unintentional reduction in the concentration of the odor of propane, making it more difficult to smell. Although rare, this can be caused by the presence of air, water, or rust in the cylinder. New and reconditioned small cylinders that sit too long before being filled are prone to internal rust when moisture and air get inside.
<urn:uuid:aa75b0e7-b88c-450b-8a55-ded26e66fa43>
CC-MAIN-2016-26
http://www.diamondpropane.com/what-does-propane-smell-like
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397562.76/warc/CC-MAIN-20160624154957-00189-ip-10-164-35-72.ec2.internal.warc.gz
en
0.932244
171
2.609375
3
Eastern African marine pollution agreement
European action on air pollution
Cheetahs need genetic diversity
World water shortage
Polymers condition desert soils
New national parks in Uganda
Broadening the purpose of botanical gardens
An action plan and convention for the protection, management and development of the marine and coastal environment of the eastern African region were adopted at a conference of plenipotentiaries held at UNEP headquarters in Nairobi in June 1985. The conference also adopted a protocol concerning cooperation in combating marine pollution in cases of emergency and another concerning protected areas and wild fauna and flora. Three resolutions were adopted dealing with programme implementation and institutional and financial arrangements.
POLLUTION SPARES NOT EVEN THE OCEAN
East African countries sign agreement
The conference was convened by UNEP as part of its Regional Seas Programme. It was attended by delegates from France, Kenya, Madagascar, Mozambique, Seychelles, Somalia, the United Republic of Tanzania and the European Economic Community. A protocol on the reduction of sulphur emissions or their transboundary fluxes by at least 30 percent was adopted by the Executive Body for the Convention on Long-Range Transboundary Air Pollution in Helsinki on 9 July 1985. This reduction should be achieved by 1993 at the latest by 21 of the 30 Parties to the Convention. The 30 percent reduction applies to the 1980 level of emissions. The 21 countries are Austria, Belgium, Bulgaria, the Byelorussian SSR, Canada, Czechoslovakia, Denmark, the Federal Republic of Germany, Finland, France, the German Democratic Republic, Hungary, Italy, Liechtenstein, Luxembourg, the Netherlands, Norway, Sweden, the Ukrainian SSR and the USSR. The Finnish Government has decided to reduce emissions by 50 percent.
Consideration will further be given to similar measures to reduce emissions of nitrogen oxides, including transboundary fluxes of secondary by-products, between now and 1995. A set of three international programmes has been drawn up for research into the effects of the main atmospheric pollutants on human health and the environment. Research will concentrate essentially on the effects on materials (including those used in historical and cultural monuments), the evaluation and monitoring of the acidification of watercourses and lakes and the effects of air pollution on forests. The French Government, meanwhile, has introduced a system of "mutual air insurance". A charge will be levied in respect to sulphur oxide emissions by thermal installations of greater than 50 megawatts capacity. The revenues, estimated at FF 150 million a year, will be divided among industrialists investing in equipment to reduce pollution emissions. This scheme should make it possible to do better than the 1984 objective of reducing sulphur emissions in France by 50 percent between 1980 and 1990. In the Federal Republic of Germany, industry will have to reduce its pollution emissions by almost 40 percent and invest DM 10000 million to comply with the new regulations on air pollution, which took effect at the end of 1985. The new regulations apply to both new and existing installations. The latter will have to be modified within the next five years so that their pollution emission is no greater than that of the new installations.
When Marco Polo visited Kublai Khan at his summer residence in the Himalayas 700 years ago, he reported that the Mongol ruler kept 1000 cheetahs as hunting companions. The use of the cheetah, the fastest mammal in the world, to aid in royal hunts began with the Sumerians in 3000 BC and was continued by Egyptian pharaohs, French kings, Indian princes and Austrian emperors.
In later centuries scientists were puzzled by the fact that with all these thousands of royal pets taken from the wild on three continents, there was not one known instance of cheetahs successfully breeding in captivity; this did not come about until 1956. Preliminary results from a study begun by the National Cancer Institute five years ago of captive cheetahs in the United States and southern Africa indicated that cheetahs had trouble reproducing because their genes were not sufficiently diverse. Now, with the study complete, the researchers have concluded that if they do not find a range of diverse cheetah genes, the species could soon be vulnerable to extinction. TOO SPECIALIZED TO SURVIVE? genetics causing cheetah decline The cheetahs that once roamed North America, Asia and Europe are extinct. Cheetahs now exist in the wild only in southern and eastern Africa. Researchers are hoping that cheetahs in eastern Africa have a different genetic makeup. Since 1970, the researchers reported in the journal Science, 10 to 15 percent of cheetahs caught in the wild have been successfully bred. The mortality rate among offspring produced in captivity has been 29.1 percent. Estimates of the worldwide cheetah population range from 1500 to 20000.
According to water planners around the world, supplies will fall short in affluent countries as well as developing ones within the next 20 years. Such traditional ways of expanding available water supplies as building dams, reservoirs and canals can no longer provide satisfactory solutions in much of the world. SHIRE RIVER IN MALAWI new study advocates water conservation "Pervasive depletion and overuse of water supplies, the high capital cost of new large water projects, rising pumping costs and worsening ecological damage", the report said, "call for a shift in the way water is valued, used and managed." The study found that future needs for drinking, agriculture and industry would have to be met through more productive, efficient and innovative uses of existing supplies, rather than by expanding supplies through construction projects that would require large amounts of capital and damage the environment. "Only by managing water demand rather than ceaselessly striving to meet it is there hope for a truly sustainable water future," according to the study, entitled Conserving water - the untapped alternative. "Today's water institutions - the policies and laws, government agencies and planning and engineering practices that shape patterns of water use - are steeped in a supply-side management philosophy no longer appropriate to solving today's water problems." Perhaps the most important area for increasing the efficiency of water use, the report said, is in agriculture, where wasteful irrigation techniques are rapidly depleting resources in major crop areas of the United States, the USSR, China and other countries. The study contended that recycling industrial water and what it called modest efficiency standards for household taps and appliances could also produce huge savings. 
The New York Times Plastic soil conditioners, or polymers, work rather like sponges in that small granules of the material can absorb many times their own volume of water when mixed with soil. This moisture is then released slowly to adjacent plant roots. Researchers at the Institute for Terrestrial Ecology (ITE) in Khartoum have taken these polymers and used them in a series of trials aimed at improving the establishment of tree seedlings in the Sudan. The polymers, some of which can absorb up to 300 to 600 times their own volume of water, are mixed with the soil at very low concentrations. Among the trees with which the ITE is experimenting are Eucalyptus microtheca, Acacia senegal, Acacia seyal and Prosopis chilensis. Results so far have been successful and cost-effective. In one experiment, for example, intermittent rains at six-day intervals were simulated. This is the worst growing condition possible in that while one rain stimulates growth, the following rain arrives too late to keep it nourished. In the experiment 70 percent of the trees planted in polymer-treated soils survived while 100 percent of the untreated ones died. Belgium's Ghent University has developed a similar technology which has been tested since 1981 through a European Economic Community project involving the Egyptian Academy of Sciences. Successful experiments have been carried out in China, Malaysia and Indonesia. BBC Farming World and UNEP News The European Economic Community (EEC) has pledged to give Uganda US$2.2 million to help rehabilitate three of East Africa's most renowned game parks. This is one of the largest sums spent on wildlife in East Africa in recent years. The EEC grant will help restore the Queen Elizabeth, Kabalega and Kidepo national parks, once considered among the most beautiful and rich in wildlife in all of East Africa. 
The parks and their lodges were built in the late 1960s to attract tourism and to generate foreign exchange, but a decade later they had fallen into ruin. The project aims to strike a long-term balance between human needs and the protection of natural resources. It includes social actions in favour of the human population within the project areas. Direct assistance will be given to Uganda's Institute of Ecology to aid in this task. A four-day conference held in November 1985 in Las Palmas, Grand Canary, brought together 220 plant conservation experts from 42 countries to discuss turning botanical gardens into active centres of rare plant conservation. The International Union for the Conservation of Nature and Natural Resources (IUCN) had called the meeting to mobilize concern among the world's 1200 main botanical gardens and institutes for the future of the world's plant species, 25000 of which are imminently threatened with extinction. The World Wildlife Fund (WWF), which sponsored the meeting and is spending US$8 million on plant rescue projects, has run a major campaign since 1984 to alert the public to valuable but vanishing wild species such as the rosy periwinkle from Madagascar, used in the treatment of leukaemia. The principal force behind the meeting is the realization by IUCN and WWF, together with FAO and Unesco, that existing gardens and seed banks cannot by themselves guarantee the survival and certainly not the genetic variety of wild plant species, many of which are essential in revitalizing key crop strains against disease. Although some gardens are already active in conservation, the aim of the conference was to get botanists out to help conserve endangered plants in their native habitats. It is believed that more than 40 percent of the world's botanical gardens are at present working to improve their conservation efforts. 
The mobilization of botanical gardens could be a significant accomplishment in that the top ten botanical gardens in the world host 50 million visitors annually. Heading the list of concerns were the chief "centres of endemism" ("plant-homes" where species originated) around the world to which many species are confined. The conference adopted a "hit list" of 100 top-priority conservation areas which WWF and IUCN will ask governments and international agencies to protect as the world's key plant homelands. Credit for the photograph on page 43 of Unasylva 147 (Vol. 37. 1985/1) should have been given to ''Bennett" rather than "Benvett", and the caption should have read "Tropical forest dweller" rather than "Amazon forest dweller", since the person pictured is a Chocó Indian from Panama.
<urn:uuid:f0a9d539-8314-48d3-814b-fe048d2c1392>
CC-MAIN-2016-26
http://www.fao.org/docrep/r7750e/r7750e0a.htm
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.2/warc/CC-MAIN-20160624154951-00028-ip-10-164-35-72.ec2.internal.warc.gz
en
0.949693
2,371
3.328125
3
The actual value of a CSS property is the used value after all approximations have been applied. For example, a user agent may only be able to render borders with an integer pixel value and may be forced to approximate the computed width of the border.
Specification: CSS Level 2 (Revision 1), the definition of 'actual value'.
<urn:uuid:a9adadf8-938c-4a71-bea9-fcf2884b55ba>
CC-MAIN-2016-26
https://developer.mozilla.org/en-US/docs/Web/CSS/actual_value
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395621.98/warc/CC-MAIN-20160624154955-00001-ip-10-164-35-72.ec2.internal.warc.gz
en
0.672109
137
2.96875
3
Note: I use the term "reader response" rather than "peer review/peer evaluation" for at least two reasons: I want students to view themselves as writers, a perspective that implies a relationship with others as readers. Also, I use these cycles primarily as opportunities for writers to get response from colleagues about their work in progress, rather than to judge one another's final work (what the expression using "peer" suggests). In groups of three:
CYCLE 1 (30-45 minutes, 10-15 minutes per draft)
- Writers read aloud while others listen and follow along on draft copy.
- Writers take notes while others give oral feedback to each draft in answer to the following questions:
- In a sentence or two, how would you sum up the writer's main claim in this draft?
- After reading this draft, what are the main unanswered questions remaining for you as a reader? How would getting these questions answered help you?
- What part of this draft is clearest/most effective/interesting for you as a reader? Why?
- What part of this draft could be strengthened? What specific suggestions can you offer for revising it?
- Writer's Question (Only ONE question will be addressed; check the main one.): How might I strengthen my focus/thesis? What part(s) need more evidence? Why? How might I strengthen connections between ideas? How might I make my word choice more concise and precise? What main proofreading/format concerns do you see? How might I edit them?
CYCLE 2 (20-30 minutes)
Exchange drafts with a partner in your group; re-read with questions in mind and then write a detailed response memo giving full responses to each of the above four questions.
CYCLE 3 (5-10 minutes)
- Read reader's responses and then:
- Do a revising note: Based on information received from reader responses, how might you revise this draft? What parts would you keep? Change? Why?
- Keep responses and revising note for revising possibilities.
- Later, solicit more responses from more readers, including ones outside this class, such as in the Writing Center.
Note: These cycles can be done inside or outside of class time. If there is not enough time to do both oral response (cycle 1) and written response (cycle 2), choose one or the other. Students tend to prefer the oral route, but it helps to insist that writers take notes so they will have a written record of the responses to use in revising. ~ Carmen Werder
<urn:uuid:a5de9bdf-33fd-41d9-a3f1-2238a44b8714>
CC-MAIN-2016-26
http://www.library.wwu.edu/wis_reader_response_cycles
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783404826.94/warc/CC-MAIN-20160624155004-00024-ip-10-164-35-72.ec2.internal.warc.gz
en
0.949811
546
2.5625
3
In 2005, the European Parliament approved a written declaration on the durability of the European food aid programme for the most deprived persons. The declaration not only advocated a permanent food aid programme plus an annual budget, but also called for it to be expanded. To ensure distribution of balanced food rations, the European Parliament called for the programme to be opened up to new sectors such as pork, poultry and eggs. Mariann Fischer Boel, European Commissioner for Agriculture and Rural Development, was tasked with following up the declaration. The fact is that, three years later, there is still no sign of a basis for a new regulation, and only modest steps have been taken. As yet, it is totally unclear what budget funding is available. Food aid is very much an issue in the European Union, where 16% of the population lives below the poverty line. Can the Commission ensure a food aid programme at European level? In the process, will it enter into a dialogue with the European NGOs active in the field?
<urn:uuid:b88d805f-c8e9-49b7-af8c-e268174fc506>
CC-MAIN-2016-26
http://www.europarl.europa.eu/sides/getDoc.do?type=QT&reference=H-2008-0332&language=BG
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396027.60/warc/CC-MAIN-20160624154956-00022-ip-10-164-35-72.ec2.internal.warc.gz
en
0.952548
203
2.515625
3
Verse 27. "They came to Elim" - This was in the desert of Sin, and, according to Dr. Shaw, about two leagues from Tor, and thirty from Marah or Corondel. "Twelve wells of water" - One for each of the tribes of Israel, say the Targums of Jonathan and Jerusalem. "And threescore and ten palm trees" - One for each of the seventy elders.- Ibid. Dr. Shaw found nine of the twelve wells, the other three having been choked up with sand; and the seventy palm trees multiplied into more than 2000, the dates of which bring a considerable revenue to the Greek monks at Tor. See his account at the end of this book, and see also the map. Thus sufficient evidence of the authenticity of this part of the sacred history remains, after the lapse of more than 3000 years. IN the preceding notes the reader has been referred to Dr. Kennicott's translation and arrangement of the song of Moses. To this translation he prefixes the following observations:- "This triumphant ode was sung by Moses and the sons of Israel: and the women, headed by Miriam, answered the men by repeating the two first lines of the song, altering only the first word, which two lines were probably sung more than once as a chorus. "The conclusion of this ode seems very manifest; and yet, though the ancient Jews had sense enough to write this song differently from prose; and though their authority has prevailed even to this day in this and three other poems in the Old Testament, (Deut. xxxii.; Judg. v.; and 2 Sam. xxii.) still expressed by them as poetry; yet have these critics carried their ideas of the song here to the end of ver. 19. The reason why the same has been done by others probably is, they thought that the particle ki, 'for,' which begins ver. 19, necessarily connected it with the preceding poetry. But this difficulty is removed by translating ki as 'when,' especially if we take ver. 19-21 as being a prose explanation of the manner in which this song of triumph was performed.
For these three verses say that the men singers were answered in the chorus by Miriam and the women, accompanying their words with musical instruments. 'When the horse of Pharaoh had gone into the sea, and the Lord had brought the sea upon them; and Israel had passed, on dry land, in the midst of the sea; then Miriam took a timbrel, and all the women went out after her with timbrels and dances; and Miriam (with the women) answered them (lahem, the men, by way of chorus) in the words, O sing ye, &c.' That this chorus was sung more than ONCE is thus stated by Bishop Lowth: Maria, cum mulieribus, virorum choro IDENTIDEM succinebat. - Praelect. 19. "I shall now give what appears to me to be an exact translation of this whole song: - MOSES. Part I 1. I will sing to JEHOVAH, for he hath triumphed gloriously; The horse and his rider hath he thrown into the sea. 2. My strength and my song is JEHOVAH; And he is become to me for salvation: This is my God, and I will celebrate him; The God of my father, and I will exalt him. 3. Jehovah is mighty in battle! Jehovah is his name! (Perhaps a chorus sung by the men.) Chorus, by Miriam and the women. (Perhaps sung first in this place.) O sing ye to Jehovah, for he hath triumphed gloriously: The horse and his rider hath he thrown into the sea. MOSES. Part II 4. Pharaoh's chariots and his host hath he cast into the sea; And his chosen captains are drowned in the Red Sea. 5. The depths have covered them, they went down; (They sank) to the bottom as a stone. 6. Thy right hand, Jehovah, is become glorious in power; Thy right hand, Jehovah, dasheth in pieces the enemy. 7. And in the greatness of thine excellence thou overthrowest them that rise against thee. Thou sendest forth thy wrath, which consumeth them as stubble. 8. Even at the blast of thy displeasure the waters are gathered together; The floods stand upright as a heap, Congealed are the depths in the very heart of the sea. O sing ye to JEHOVAH, &c. Chorus by the women. MOSES.
Part III 9. The enemy said: 'I will pursue, I shall overtake; I shall divide the spoil, my soul shall be satiated with them; I will draw my sword, my hand shall destroy them.' 10. Thou didst blow with thy wind, the sea covered them; They sank as lead in the mighty waters. 11. Who is like thee among the gods, O JEHOVAH? Who is like thee, glorious in holiness! 12. Fearful in praises; performing wonders! Thou stretchest out thy right hand, the earth swalloweth them! 13. Thou in thy mercy leadest the people whom thou hast redeemed; Thou in thy strength guidest to the habitation of thy holiness! O sing ye to JEHOVAH, &c. Chorus by the women. MOSES. Part IV 14. The nations have heard, and are afraid; Sorrow hath seized the inhabitants of Palestine. 15. Already are the dukes of Edom in consternation, And the mighty men of Moab, trembling hath seized them; All the inhabitants of Canaan do faint. 16. Fear and dread shall fall upon them; Through the greatness of thine arm they shall be still as a stone. 17. Till thy people, JEHOVAH, pass over [Jordan]; Till the people pass over whom thou hast redeemed. 18. Thou shalt bring them and plant them in the mount of thine inheritance: The place for thy rest which thou, JEHOVAH, hast made; The sanctuary, JEHOVAH, which thy hands have established. Grand chorus by ALL. JEHOVAH FOR EVER AND EVER SHALL REIGN." 1. When poetry is consecrated to the service of God, and employed as above to commemorate his marvellous acts, it then becomes a very useful handmaid to piety, and God is honoured by his gifts. God inspired the song of Moses, and perhaps from this very circumstance it has passed for current among the most polished of the heathen nations, that a poet is a person Divinely inspired; and hence the epithet of προφήτης, prophet, and vates, of the same import, was given them among the Greeks and Romans. 2. The song of Moses is a proof of the miraculous passage of the Israelites through the Red Sea.
There has been no period since the Hebrew nation left Egypt in which this song was not found among them, as composed on that occasion, and to commemorate that event. It may be therefore considered as completely authentic as any living witness could be who had himself passed through the Red Sea, and whose life had been protracted through all the intervening ages to the present day. 3. We have already seen that it is a song of triumph for the deliverance of the people of God, and that it was intended to point out the final salvation and triumph of the whole Church of Christ; so that in the heaven of heavens the redeemed of the Lord, both among the Jews and the Gentiles, shall unite together to sing the song of Moses and the song of the Lamb. See Rev. xv. 2-4. Reader, implore the mercy of God to enable thee to make thy calling and election sure, that thou mayest bear thy part in this glorious and eternal triumph.
(Swans - August 9, 2010) Science is not a question of truth and falsehood, but is a collection of theories and experiments, operations that we, or our machines, do. Theories are recipes for experiments and predictions about their results. We add theories that provide experiments that come out as predicted to our storehouse of scientific theories. That, and that alone, is science. What are theories and experiments? Looked at from the outside an experiment often looks just like a measurement. The determination of a particular distance might be an experiment or just a simple measurement. For example, if we were measuring the diameter of a type of atom we would be doing an experiment. When we determine the size of a type of thing we can count on other tokens (examples) of that type being that size later or elsewhere. The result of these later measurements is predictable in advance. Theories, once established, make predictions about the outcome of experiments. These experiments then become routine operations. An experiment is either a measurement used to test a prediction or a measurement of a type of thing used to establish a standard for making a prediction. G in Newton's law of gravitation is a constant, that is, such a standard. The ability to make predictions is the purpose of the theory and our justification for accepting it. All the rest is giddy imaginings.

Predictable outcomes of operations allow technological advances. For all technological production is a string of repeatable operations whose outcome is known in advance. In the most common case in the "hard" sciences the theory will predict a number that a measurement of an operation will produce. For example, Newton's law of gravitation predicts how much time weights balanced at the ends of rods and in proximity will take to swing together. It is usually written like this:

F = Gm₁m₂/r²

Now when we look at this formula it is hard to see just how we get from it to the prediction.
We might define all the symbols, but instead let us try another way. The formula, when interpreted correctly, supplies us with a picture. Heavy balls accelerate towards each other. The acceleration of each ball is directly proportional to the size of the other ball. The balls' acceleration towards each other is proportional to the sum of their masses and inversely proportional to the square of their distance apart. Thus Galileo's famous experiment, which showed two different-sized cannon balls falling to earth from the Tower of Pisa at the same rate, is explained. For both accelerate towards the same other ball, the earth. There is also a second picture. If these masses are in motion to start out with, then depending on this motion they will move in orbit around each other or in a parabola, approaching and then shooting apart. We imagine certain conditions and, using the formula, predict, for example, a planet's or comet's location at a certain time. Given the proper interpretation, the formula, with the right values, can describe any elliptical orbit or parabolic trajectory. But in this case we calculate the values for the masses by use of the formula, and can make predictions only after this calculation.

We should note in passing that Galileo's experiment is not quite what the theory requires. If the cannon balls were dropped at different times they would fall, or appear to fall, at minutely different rates. For, according to the theory, the earth must also move. And its motion is different in the two cases, for the cannon balls are of different sizes and the earth's motion is proportional to the cannon balls' sizes. Newton's formula predicts that each mass will accelerate in proportion to the size of the other mass. We can't see the earth's motion because we are standing upon the earth, but its motion would shorten the time the heavier ball would appear to take to fall.
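The size of this effect is easy to estimate. In the two-body picture the dropped ball and the earth each accelerate toward the other, so their relative acceleration is G(M + m)/r² rather than simply GM/r². A rough numerical sketch (the masses and radius are illustrative round figures, not numbers from the essay):

```python
G = 6.674e-11       # gravitational constant, N·m²/kg²
M_EARTH = 5.972e24  # mass of the earth, kg
R_EARTH = 6.371e6   # radius of the earth, m

def apparent_fall_acceleration(m):
    """Relative acceleration of a dropped mass and the earth toward
    each other. Both bodies move, so the masses add."""
    return G * (M_EARTH + m) / R_EARTH**2

small = apparent_fall_acceleration(5.0)      # a cannon ball
huge = apparent_fall_acceleration(1.989e30)  # a ball the mass of the sun

# The cannon ball "falls" at the familiar ~9.8 m/s^2; a sun-sized ball
# would close the gap millions of times faster, because the earth
# rushes up to meet it.
```

With a 5 kg ball the correction is about one part in 10²⁴, which is why no drop-timing experiment from a tower could ever detect it; only the balanced-rod setup, which measures both motions, does the experiment the theory actually describes.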
The movement of the earth because of the cannon balls is, of course, minuscule, but this doesn't change that the setup of the experiment is wrong. Were one of the cannon balls a lot larger, say the size of the sun, it would fall to earth, or appear to because we are standing on the earth, a lot faster than a bowling ball. For then the earth's movement towards it would be, to say the least, palpable. To do the experiment correctly we must measure the movement of both the earth and the balls, both masses. That means we must somehow stand apart. Hence the elaborate setup with balls balanced at the ends of rods and swinging together. Clearly, just looking at the symbols in the formula, it is impossible to know how to do the experiment correctly. How would we know that Galileo's experiment is not quite right? Can the formula tell us? How? If we imagine the picture as a photograph it doesn't really help us. Somehow the balls pulling at each other must be in the picture. But how? Do we want to imagine rubber bands between them? This wouldn't work because the rubber bands would get slacker as the balls got closer together. No, the picture is not just a picture, but more like a mental rehearsal of the experiment. We recount, in imagination, what ought to happen, then use the formula to calculate what actually did. The formula, couched in mathematical symbols, is actually a description of one or more measurements and a calculation that predicts the value of a final measurement to be made at the end of the operation. So the measurements made at the beginning of the experiment allow us to predict the measurement made at the end. The picture is a description of the apparatus necessary to produce the final measurement. It is often said that Newton, through this formula, unified our understanding of the motion of the planets and the motion of falling bodies. It was this unification of what at the time seemed completely different phenomena that so impressed everybody. 
And it was Newton's unification that encouraged physicists later to search for physics' holy grail, the unified field theory, a single theory that explains everything. How did Newton do it? As we have seen, the formula considered as a picture depicts bodies that accelerate. Acceleration is described like this in my old physics book: often the velocity of a moving body changes either in magnitude [speed], in direction, or both as the motion proceeds. The body is then said to have an acceleration. So velocity has two components, speed and direction. If either or both changes there is an acceleration. Now when bodies fall their direction doesn't change but their speed does, and when planets move in a circular orbit their speed doesn't change but their direction does. So if acceleration is a change in speed or direction, both falling bodies and planets can be explained by the same formula. Thus a single formula unites two apparently disparate experiments because one of the essential quantities can be measured in either of two ways. [If a planet is in an elliptical orbit both its speed and direction change.] A unified field theory would also have to have quantities of a similar sort to accommodate a variety of clearly quite different operations. But, we should ask, does Newtonian mechanics explain these motions? No, Newtonian mechanics gives us, at best, an ability to make certain predictions. It offers no explanations of why anything happens. Science makes predictions and that is all. We might predict the speed of a falling body after, say, ten seconds. Or we might predict the position of a planet on the fourth of July 2010. We will then perform measurements, experiments, to see if we are correct. Why the body falls or why the planet is there is of no scientific concern. Experiments of the second kind, those that are used to establish quantities for predictions, do not result in flashy theories, and what they do show might not be called theories at all. 
For example, it is essential for engineers to know how strong various materials are. Experiments are used to establish the compression strengths of various kinds of cement, various kinds of steel, those of some highly artificial kinds of materials. These measurements, unlike, for example, measurements of the width of a particular hallway, are measurements of a type of thing. It is a measurement of an example of this or that type of cement. By establishing the strength of a type of cement we can predict just how strong a pillar or post made of that cement will be. With this knowledge we can make elaborate structures. The measurement of a type of thing looks no different from the measurement of a particular thing. What makes something a type of thing rather than simply a particular thing? Why, that we produced it (or if you want identified it) with a repeatable operation. Should we call this list of compression strengths a theory? Well, it is as you will. It does seem possible to determine strength (and there are various kinds) through what might be called "more theoretical" means. But it is not to my purpose to trace how that might be done here. My point is a simple one: scientific theory, for those who understand it, supplies a recipe for carrying out an experiment and predicts what the result of that experiment will be. Experiments of one kind test theories by producing results that either agree or disagree with theoretical predictions, while those of a second kind establish these predictions for a class of operations through the measurement of a token of a particular type. Real theories must lead to experiments that either agree or disagree with the theories' predictions. A theory must supply us, as Einstein put it, with an experiment that can "cook" the theory. Such an experiment tests the prediction. Prediction of the result of our own operations is the be all and end all of science. 
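The kind of prediction such a table of type-measurements licenses can be put in a few lines. The strengths below are invented placeholders, not real material data; the point is only the form of the inference from a measured type to an unmeasured token:

```python
# Illustrative compressive strengths in pascals (invented values,
# standing in for results of experiments on samples of each type).
CRUSH_STRENGTH_PA = {
    "cement A": 30e6,
    "cement B": 50e6,
}

def max_load_newtons(cement_type, cross_section_m2):
    """Predict the load a column of this cement type will bear,
    from a measurement made once on a sample of the type."""
    return CRUSH_STRENGTH_PA[cement_type] * cross_section_m2

load = max_load_newtons("cement B", 0.04)  # a 20 cm x 20 cm column
```

The measurement was made on one sample; the prediction applies to every column of the same type, which is what makes the table, call it a theory or not, scientifically useful.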
Taken together theories and experiments provide a catalog of operations we can reliably do, a collection of "steps" whose results we can reliably predict. We make our technological marvels by performing elaborate sequences of these steps, the results of which are predictable. Production lines are collections of machines that do these steps repeatedly. Theories that do not supply such experiments, and through them predictable steps, are not scientific theories. Given these introductory comments let us take a look at a number of scientific controversies. How about the idea of "intelligent design"? When an interviewer asked Stephen C. Meyer, a Cambridge educated leading proponent of intelligent design, "What would be your main argument for the evidence of intelligent design in the cell?" he said: The DNA molecule is literally encoding information into alphabetic or digital form. And that's a hugely significant discovery, because what we know from experience is that information always comes from an intelligence, whether we're talking about hieroglyphic inscription or a paragraph in a book or a headline in a newspaper. If we trace information back to its source, we always come to a mind, not a material process. So the discovery that DNA codes information in a digital form points decisively back to a prior intelligence. That's the main argument of the book. Now if intelligent design is to have the form of a scientific theory it must predict the result of an experiment. What is the experiment of which intelligent design will predict the result? I suppose we might imagine ourselves discovering the workshop of the designer, but how would we know that it was such a workshop? Would it differ from a human laboratory? Meyer's answer is in the form of a logical argument. It does not supply a "theory" of intelligent design that describes an experiment one might do. 
I for one cannot imagine any such experiment, but if someone comes up with one the nature of the experiment will reveal what the theory of intelligent design means. As it stands now the theory of intelligent design fails because it is simply not a theory that is in the proper form to be a scientific theory. Meyer's logical argument is no more convincing than something like this. "Everything that moves must have a mover, but there must be a first mover, so God is the unmoved mover." In any case such logical arguments do not make scientific theories. As an aside I note that Meyer argues that "information always comes from an intelligence." However, the same argument might be made that "information always comes from a human being." So Meyer's argument could equally support the claim that a human being is the intelligent designer of DNA. But again logical arguments do not make scientific theories unless they produce a prediction of an experimental result. What about the theory that contends with it, Darwin's theory of evolution? The intelligent designers object primarily to the idea that life evolved through a series of random accidental mutations. Darwin, or his present defenders, claim that evolution occurs when a random mutation proves beneficial and those individuals with it gain an advantage. Now this too describes no experiment. What experimental result does this theory predict? Suppose we allowed fruit flies to reproduce endlessly in some controlled environment and a mutation occurred that benefited those who had it so that the population gradually all had this mutation. Could we then say that this change was accidental? What would tell us that this was so? Perhaps many mutations would occur and the population would retain only a few, those that proved beneficial. Would this convince us that the changes were accidental? 
If it did convince us then, scientifically, it should convince us only of what the experiment shows, namely that a population retains beneficial mutations. If the fact that there are many mutations and only some retained convinces us that the process is "accidental," then this is the meaning of accidental in this context. No more and no less. But if someone denied that this meant that the process was accidental, how could we tell whether the change was purposive or accidental? Finally, this experiment tells us nothing about what happened in the past. Populations might retain accidental beneficial mutations and also produce intentional beneficial ones. Certainly scientists in the laboratory have intentionally produced mutations. In the interview with Dr. Meyer the interviewer quotes John Walton, a Biblical scholar. "Science is not capable of exploring a designer or his purposes. It could theoretically investigate design but has chosen not to by the parameters it has set for itself." Walton is quite right that science cannot examine the question of a designer because of the "parameters it has set for itself." Whether or not an experiment produces a result because of purpose or chance is not a scientific question. Science only asks what the result is. Mr. Walton is quite wrong to think that science could do otherwise. Explanations that do not lead to predictions are outside its scope. Of course, we can create populations of plants and animals in the laboratory through forcing mutations. Now some anomalies are passed on to offspring and some aren't. Can we identify which is which? Yes, to some extent. And in this way we can make predictions about experiments. People might say that this is how Darwin's theory is scientific, even though such experiments do not involve the past or "how evolution happened." The theory of evolution, the idea that species evolved from other species through this process of chance mutation "back then," remains without an experiment. 
For a scientific theory must say that the experiment will come out this way and not that way. Proponents of intelligent design do not object to evolution completely, but to evolution of all life forms from a single source through chance mutation. Meyer has this to say: I think small-scale microevolution is certainly a real process. I'm skeptical about the second meaning of evolution -- the idea of universal common descent, that all organisms share a common ancestry. I think the fossil record rather shows that the major groups of organisms originated separately from one another. But that's not what the theory of intelligent design (ID for short) is mainly challenging. I do not know what Meyer is pointing to in the fossil record. Presumably it is large structural differences or differences in DNA. Would such differences convince us that the two species had different origins? It might convince some and not others, but no predictive experiment emerges. So, whether it convinces us or not, it is not scientific. Meyer has a point here. Did humans evolve from "pre-humans" through chance mutation? What experiment does such a theory describe? I see none. Then does any predictive experiment emerge from Darwin's work? Well, yes. Through the use of radiocarbon dating and perhaps other means we can give a "date" to fossil remains. Through this method we have been able to date, roughly, the origin of humans and other species. We have created a whole timeline, a natural history, through our methods of dating fossil remains. From this we can make claims that are experimentally testable. Darwinians can predict that any discovered human fossil remains will not be twenty million years old. They might even be able to claim that they will be no more than 100,000 years old. Through various characteristics picked up through earlier experiments they can probably date remains quite accurately and predict, with reasonable accuracy, what the radiocarbon dating will be. 
The appearance and disappearance of various species within this constructed past allows us to make predictions about "dating" experiments. In this sense Darwin's theory is scientific. But we should realize that only in this sense is Darwin's theory scientific. There is no real way to say that one species evolved into another. Even if we found the so-called "missing link," some fossil that had some of the characteristics of one species and some of a later one, this would not prove that one evolved into the other. A shrug, as if to say, "how could it be otherwise?" is not a scientific experiment. Again, no experiment is indicated. The validity of radiocarbon dating rests upon a theory and an experiment. Simplified it goes something like this. Living beings take in carbon and so have the same amount of Carbon 14 as the atmosphere. When they die they stop breathing and the carbon 14 decays. So the amount of carbon 14 in a fossil indicates its age. By measuring the ratio of carbon 14 to carbon 12 we can tell, roughly, when in the past the organism died and stopped exchanging carbon with the atmosphere. Does radiocarbon dating really tell us when in the past something happened? All we can say is that it fits into the complex web of tests of the past that we have, and this complex web of tests is the scientific past. We picture the past as if it were a time and place "back then." It's hard to resist the idea that it still exists, "somewhere." Scientifically, the past, like everything else, is a complex of experiments that can only be done right now. For that is where we always are, right here, right now. In reality, our image of the past is a set of rules and a scientific theory, a recipe for a multitude of experiments that permit us to locate events within "past time." Prior documents can't refer to posterior ones. Certain inks only came into use at certain times, as did certain kinds of paper. Experiments determine whether these are in a given sample. 
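One strand of that web, the carbon-14 calculation sketched above, is simple enough to write out. A minimal sketch, assuming the conventional half-life of roughly 5,730 years (the simplified picture here, like the essay's, ignores the calibration corrections real laboratories apply):

```python
import math

HALF_LIFE_YEARS = 5730.0  # carbon-14, conventional value

def years_since_death(c14_remaining_fraction):
    """Infer elapsed time from the fraction of the original
    carbon-14 still present in a sample."""
    return HALF_LIFE_YEARS * math.log2(1.0 / c14_remaining_fraction)

# Half the carbon-14 left -> exactly one half-life has passed.
age = years_since_death(0.5)   # 5730.0 years
```

The measured C14/C12 ratio, compared with the atmospheric ratio, supplies the remaining fraction; a sample with a quarter of its carbon-14 left dates to about 11,460 years.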
Mothers and fathers must be older than their children. Documents tell us of birth dates. Things cannot be in two places at the same time. Reports tell us of the location of people and things. Cause and effect must move from earlier to later times. And much more. Radiocarbon dating must fit into the complex of experiments we can do to locate a person or event in the past. That it does so is our justification for incorporating it into this web of experiments. Suppose someone said (as people do) "God plopped down the whole past all at once." Well, fine, but scientifically irrelevant. It is nothing but a fantasy picture and affects none of the operations that allow us to locate things in the past. Darwin's theories allow us to predict, from knowledge of just what species left certain remains, the age of those remains as calculated through radiocarbon dating or any other method within the web of experiments that supply us with knowledge of the past. To this extent, because it allows us to make such predictions, Darwin's is a scientific theory. But we must note that the notion of "survival of the fittest," a notion that has so perniciously encouraged the belief in the goodness of the "free market system," is not scientific, for it allows no predictions. The problem here is simple -- to identify fitness one must already know those who survive. Nothing marks one individual as more fit than another except its survival. Chickens are more fit than extinct eagles. Their fitness consists in their being good to eat and easy to control, so humans protect and breed them, guaranteeing that they survive. DDT wiped out the eagle. But there is no predictive element, no trait identified ahead of time that will guarantee that a particular individual will survive as opposed to another. Big beaks in birds might make individuals more formidable in fights with rivals or simply add extra weight that hinders successful migration. "Fitness" can only be identified after the fact. 
An individual was fitter because it has survived, so no prediction is possible. "Fitness" and "survival" are interesting because they seem to be concepts that relate to each other as cause and effect whereas they are in fact synonyms. "Survival of the fittest" is thus a tautology, true in all worlds, and hence having no predictive power. Again, a theory is a scientific theory only if it allows us to predict the result of an experiment. Thus intelligent design fails and Darwin's theories succeed only partially in being scientific theories. Let's look at one more controversy. What about global warming? Is this a scientific theory? Yes, it is. The theory says global average temperatures are a function of CO2 concentrations in the atmosphere. The experiment is as follows: place thermometers all around the world. Record daily temperatures and take the average. Compare these temperatures year over year. Produce a regression graph of these temperatures year by year. Also, daily, record the CO2 level in the atmosphere, and make a regression graph of these measurements. The slope of the regression graph of average temperatures will be a linear function of the level of CO2 in the atmosphere provided all else remains constant. This last limitation is completely legitimate, and Newton's law of gravitation has the same limitation. "All else" includes levels of solar radiation, the presence of other greenhouse gases, arctic ice cover, and the level of particulate matter in the atmosphere. The problem? The experiment takes years to perform, and the regression graph, with only a few data points, does not convince everybody. In order to add more data points, scientists have tried to use proxies to measure temperatures in the past. Also, unlike with experiments associated with Newton's theory, we cannot keep all else constant, so variations in solar radiation, other greenhouse gases, and atmospheric particulate matter have affected the measurements. 
But, since we have theories and experiments that predict just how these phenomena affect global temperatures we can incorporate them in our calculations. The theory of global warming also requires yet another experiment. This part of the theory says that the atmospheric levels of CO2 are a function of human production and release of CO2. Again, all science can show is a correspondence between human emissions of CO2 and atmospheric concentrations of CO2. And also again, other factors will not remain constant, but their measurement and effect are possible. But the problem here is not merely in determining a scientific outcome. For here it is important that the result of an experiment change our way of life, and the connection between this result and this change is not scientific. Warming, in the sense described above, is scientifically demonstrable. That people ought to do something about it is not. Nothing, scientifically, requires human beings to people the earth. We might just as easily engage in an end-of-the-species party. Nothing scientific opposes it.
About the Author: Michael Doliner studied with Hannah Arendt at the University of Chicago (1964-1970) and has taught at Valparaiso University and Ithaca College. He lives with his family in Ithaca, N.Y.
A remembrance service was held in Stoke-on-Trent today for a Nazi wartime massacre in Czechoslovakia. Nearly 180 men were executed during the 1942 massacre in a village called Lidice near Prague, with many women and children taken to concentration camps and murdered. The people of Stoke-on-Trent raised the equivalent of £1 million to help re-build the village after it was destroyed on 10th June 1942. Stoke-on-Trent television personality Nick Hancock was at the service in Stoke Minster today, having made a documentary about the massacre. The attack on the village by the Nazis was in retaliation for the assassination of Reinhard Heydrich, the highest-ranking Nazi official in the Protectorate of Bohemia and Moravia. It was a local doctor called Sir Barnett Stross from Stoke-on-Trent who began a campaign entitled 'Lidice shall live' in the 1940s to raise money to help rebuild the village, enlisting the help of coal workers. Last month children from Bustehrad School in the village visited Stoke to learn more about Sir Barnett.
Personalized learning is coming to the Tomah School District. A $600,000, three-year grant is making this style of education possible. In tonight's Assignment: Education report, we'll take a look at what this means for students at Tomah Middle School.

Sound from translator/student: "I'm studying Mandarin Chinese." "I'm studying Russian." "I am studying Arabic."

All three world languages are now being offered at Tomah Middle School. "I'm taking a Rosetta Stone program." This computer software program specializes in teaching a foreign language. 8th grader Matthew Buss is one of six middle school students who are piloting this foreign language program. "I thought it would be something different to try, and I've always kind of wanted to learn Chinese." Matthew is learning Mandarin Chinese as part of the Tomah School District's personalized approach to exposing students to as many foreign languages as possible. "Our intent is to expose them to German and Spanish their 6th grade year. And then in their 7th grade year, they get to pick between three languages, whether it's Arabic, Chinese or Russian." The foreign language computer software is being paid for with grant money, and offers the students the opportunity to learn at their own pace. "It was a little hard to use at first, but then I got used to it after a while. So it's been kind of routine now." Next year, every 7th grader at Tomah Middle will get to choose between Mandarin Chinese, Russian and Arabic. And they will all be learning with a computer, and at their own pace. "The biggest thing we're hoping to learn is how we can go ahead and implement this effectively for next year, based off our six kids and their progress level." Matthew is doing well in his Mandarin Chinese class. He plans to continue studying the language next year when he enters high school, in hopes of becoming fluent. "If I travel to other countries, I won't need a translator. I can just speak to them in their native language."

"To have the ability to actually go ahead and speak that foreign language and be fluent makes you so much more marketable in that global economy." And this personalized computer software program is making it possible for Tomah middle school students to get a taste for as many different world languages as possible. This is the first year of the personalized foreign language program. The grant was given to the district by the Department. To be eligible, a certain percentage of the student population within the school district must have a military connection. The grant is also paying for personalized learning in math. The district is using a web-based software program to give every student in the middle school a personalized math program in addition to their regular math class. The program will help high achievers stretch their skills and also provide assistance to students who are struggling.
We have seen how English is part of a family that includes Dutch, Italian, and Portuguese but excludes Finnish. The first step in accounting for the history of the English language is to look at the family that English belongs to, the "Indo-European" family of languages. No records of a language corresponding to primitive Indo-European exist. What we know of it is reconstructed, through a process of inference, from records of the earliest versions of many languages across Europe and Western Asia. Comparisons of basic vocabulary in Latin, Greek, old Slavic languages, Old English and other early Germanic languages, old Persian languages and old Indian languages like Sanskrit suggest that all of these languages are ultimately dialects of a long-lost prehistoric language we call Proto-Indo-European (for want of any neater way to name it). Languages from Ireland to India are descendants of Indo-European, with notable exceptions like Basque, Magyar, Estonian, Finnish, Turkish, Arabic, Hebrew, Tamil, and others. Note one very important principle at work here: the fact that people speak languages from the same family does not mean that their biological families are closely related. They could be. All humans are ultimately related, just 120,000 years back or so. But people from widely divergent ethnic backgrounds frequently end up speaking the same language--a look at any modern American community shows that. Work by the population geneticist Luigi Luca Cavalli-Sforza has demonstrated that in some cases, genetic patterns in populations parallel linguistic history. Just as certainly, in many cases language does not parallel genetics. In any case there is no evidence that any group of humans has a different linguistic biology than any other group. I stress this point because it is easy to connect language to "race" in simplistic and even evil terms.
Across the range of the Indo-European languages, non-Indo-European speakers are usually minorities in modern nation-states, often persecuted because their language seems to tag them "racially" as outsiders. In Nazi Germany, the state identified racial traits with the concept of "Aryan," which in turn was identified with original speakers of Indo-European languages. The basic idea was that Germans were racially "original" and part of the pure founding population of the Indo-European world. Many Nazis saw the Anglo-Saxon (hence "Germanic") English as racial cousins. The Nazi idea is pure nonsense. In most of Europe the local populations speak their languages because of historical political accidents, not because of a racial connection between descent and language. In England, most people today speak English; a few centuries ago, they spoke languages from the very distant Celtic wing of the Indo-European family; a few thousand years ago, they spoke pre-Celtic, non-IE languages. Yet many English people can trace some of their genetic heritage back even further, to early human settlement of Western Europe tens of thousands of years ago. Their languages have shifted as they have been colonized by conquering populations many times during that span. The influx of new people has mixed the gene pool greatly, and the cultural speed with which languages spread and change has mixed the cultural heritage even more greatly. At some point, several thousand years ago, Indo-European was a single dialect with a fairly limited range, perhaps in the steppes of the Ukraine and what is now Russia north of the Caucasus (that's the best guess). At some unknown point, Indo-Europeans spread their language across a wider area; they, or people who had learned their language, spread westward into Europe, eastward into Asia. 
This migration provided one basic context for language change--as descendant groups of Indo-European speakers, and colonized people who learned Indo-European, lost contact with one another, their languages drifted into the main subfamilies that we observe today. The history of the drift is mostly lost to us; we see just the results. Some large features of the drift are observable in the features of the descendant languages. Many words in western Indo-European languages have /k/ sounds (or their derivatives) where Eastern languages have /s/ (or its derivatives). Avestan satem, for instance, corresponds to Latin centum (both mean "hundred"). Satem languages include Indian languages, Armenian, Slavic languages, and Lithuanian. Centum languages include Greek, Latin, Celtic languages, and Germanic languages. This was once seen as the great original dialect split between two major IE groups. It isn't anymore (see Lehmann, Historical Linguistics, Routledge 1992: 73), but it suggests that at some time early on, the eastern groups were in close enough contact that a change from an original /k/ sound to /s/ could spread across a wide area, but not to the west, where the sound developed independently. But the English word "hundred" has neither a /k/ nor an /s/. Since English is a Western language, we'd expect /k/. Why isn't it there? In 1822, Jakob Grimm (more famous as a collector of folktales) noticed that the contrast between Latin centum and English hundred has many corresponding examples. Latin cannabis is English hemp; Latin caput is English head. In fact, English and other "Germanic" languages tend to have an /h/ or /x/ sound where Latin and Greek have /k/. Grimm noted two other main contrasts--between Latin /p/ and Germanic /f/ (pisces / fish; pes / foot) and between Latin /t/ and Germanic /þ/ (tres / three; tonare / thunder). What does this mean? 
That the Germanic languages, in separating from other Western IE languages, underwent a systematic change in consonants that marks them off from all other Western branches (because Latin, Greek, and Celtic languages all contrast to Germanic in this way). And very importantly, it means that Germanic split off from Latin and Greek a very long time ago--several thousand years, at least. Remember that consonants change slowly. For a consonant change to become so widespread, systematic and marked as a dialectal difference, it must go very deep and be very old. One point I'll keep reinforcing, that's shown here, is that English does not come from Latin or Greek. Some important vocabulary in modern English is influenced by Latin and Greek, but those languages are not the parents of our own; they are very distant cousins.
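The regular correspondences Grimm noticed can be captured as a simple mapping. The sketch below is a purely illustrative aside, not part of the source: the dictionary, the helper names, and the crude rule that Latin "c" spells /k/ are my own simplifications. It checks the cognate pairs cited above against the three shifts described: /k/ -> /h/, /p/ -> /f/, /t/ -> /þ/ (spelled "th").

```python
# Illustrative sketch of Grimm's Law as a mapping of word-initial sounds.
# The correspondences and cognate pairs come from the text above; the
# spelling-based heuristics are simplifications for demonstration only.

GRIMM_SHIFT = {"k": "h", "p": "f", "t": "th"}

# Cognate pairs cited in the text: (Latin word, English word)
COGNATES = [
    ("centum", "hundred"),   # /k/ -> /h/
    ("caput",  "head"),      # /k/ -> /h/
    ("pisces", "fish"),      # /p/ -> /f/
    ("pes",    "foot"),      # /p/ -> /f/
    ("tres",   "three"),     # /t/ -> /th/
    ("tonare", "thunder"),   # /t/ -> /th/
]

def initial_sound(word: str) -> str:
    """Crude stand-in for phonology: in Latin, initial 'c' spells /k/."""
    first = word[0]
    return "k" if first == "c" else first

def follows_grimm(latin: str, english: str) -> bool:
    """True if the English onset matches the Grimm-shifted Latin onset."""
    shifted = GRIMM_SHIFT.get(initial_sound(latin))
    return shifted is not None and english.startswith(shifted)

if __name__ == "__main__":
    for latin, english in COGNATES:
        print(f"{latin:7s} -> {english:8s} grimm={follows_grimm(latin, english)}")
```

Each pair in the list satisfies the shift, which is exactly the point of the passage: the change is systematic across the vocabulary, not word-by-word coincidence.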
<urn:uuid:18d72ccc-5bfc-4409-a85e-59e8a5939598>
CC-MAIN-2016-26
http://www.uta.edu/english/tim/courses/4301w00/ie.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395620.56/warc/CC-MAIN-20160624154955-00085-ip-10-164-35-72.ec2.internal.warc.gz
en
0.955186
1,351
3.6875
4
The classic image of Galileo Galilei has the 16th-century Italian scientist dropping two balls of differing weights from the Leaning Tower of Pisa and observing them hitting the ground at the same time. Though that scenario was probably no more than one of Galileo's thought experiments—his known tests involved rolling balls down inclines—it does illustrate his towering reputation as a scientific revolutionary. Galileo helped pave the way for classical mechanics and made huge technological and observational leaps in astronomy. Most famously, he championed the Copernican model of the universe, which put the sun at its center and the earth in orbit. The Catholic Church banned Galileo's 1632 book Dialogue Concerning the Two Chief World Systems, forced Galileo to recant his heliocentric views and condemned him to house arrest. He died in his Florence home in 1642. Historians of science have long debated the exact nature of and motivations for Galileo's trial. War, politics and strange bedfellows obscure science's premier martyrdom story. Many of the documents historians use to try and untangle the mystery are mired in their own prejudices or were written long after the fact, or both. Now the very first written biography of Galileo has been rediscovered. It offers a rare glimpse into what people thought about the trial only 20 years after Galileo's death and even suggests a tantalizing new explanation for why he was put on trial in the first place. Following Galileo's death, his apprentice, Vincenzo Viviani, collected Galileo's books and correspondences and announced his intention to write the definitive history of Galileo. Due to Viviani's privileged position, most other would-be biographers deferred to him. But by the 1660s, Viviani still had not written his promised masterpiece. Enter Thomas Salusbury, an English historian who in 1664 published his Galilean oeuvre, Mathematical Collections and Translations. 
Composed of two volumes, the collection contained translations of Galileo's various discourses, letters, and the first book-length depiction of Galileo's life. Then in 1666, the Great Fire of London swept through the city. The book trade in particular was badly hit; many publishing houses became piles of ashes overnight. In the inferno, all but a single copy of Salusbury's biography were lost. Salusbury died at about the same time—possibly in the fire, or maybe from the plague. By late 1666, Mrs. Susan Salusbury was a known widow. But the book lived on. It passed through various hands before, in 1749, it wound up in the private library of George Parker, Second Earl of Macclesfield, a respected amateur astronomer. The 1830s marked the last time that the book was directly quoted. After that, the trail goes cold. Historians searched the Macclesfield library again and again, only to wind up empty-handed, and most were resigned to the fact that the book was lost. In 2003, Richard Parker, the Ninth Earl of Macclesfield, was evicted from the family castle following a bitter property dispute with the castle's management company, whose shareholders included his own relatives. The 30-year family feud precipitating the eviction was based on, as the presiding judge put it, simple "palpable dislike." Upon his ousting, the Earl auctioned off the contents of the castle's three libraries. Nick Wilding, an associate professor of history at Georgia State University, heard the libraries were up for auction and immediately called the Sotheby's representative in charge of the affair. Wilding asked him, doubtfully, if in the collection he had chanced across a particular title: Galilaeus Galilaeus His Life: In Five Books, by Thomas Salusbury. "To my surprise, he said, 'Why, yes, actually. I've got it right here,'" Wilding recalls. He hopped on the next plane to London. 
Perusing the tattered tome at Sotheby's auction house, Wilding became the first person to study Salusbury's mysterious biography of Galileo in almost 200 years. Inside the timeworn document itself, Wilding discovered clues that allowed him to piece together its elusive, seemingly cursed history. Wilding discovered that the manuscript itself solves one mystery: why did this copy survive the Great Fire when its siblings were burned? The book is incomplete. It is missing a chunk in the middle and ends abruptly, mid-sentence, in the middle of the final of five books. And tellingly, some of the pages are full of proofreader's marks. For Wilding, these clues point to one conclusion: The copy that exists today was an incomplete version taken home by a proofreader, away from the fire's epicenter, and spared from the brunt of the disaster. The text's curious state—unfinished and annotated—provided Wilding with insights into the overlapping worlds inhabited by Galileo, Salusbury and the publishing industry. Like many works of the time, it has its share of inconsistencies, partly because Galileo's apprentice Viviani controlled the firsthand evidence and Salusbury had to rely on secondary sources. "Quite a lot of it is wrong," Wilding says. "But that makes it even more interesting for historians because you have to explain the mistakes as well as the facts." For example, Salusbury parrots rumors of the time that Galileo was an illegitimate child, and that his wife tore up many of his scientific papers at the request of a nefarious priest. Modern scholars know both claims are false; in fact, Galileo never even married. But these inaccuracies point to the rampant anti-Catholic, misogynistic sentiments of many in the Italian scientific circle at the time, Wilding says. "For them, it was, 'Bad priest! Stupid women!'" But the most striking finding might not be an error at all. Salusbury presents a novel motivation for Galileo's infamous trial, Wilding says. 
If people know anything about Galileo's trial, it's usually that the church disapproved of his advocacy of the idea that the earth orbits the sun. In many people's minds, Galileo is a kind of martyr figure for science and a cautionary tale against allowing religious authority to trump scientific inquiry. "There's been a very long discussion about the trial—what happened, who won—and to some extent that's still going on today," Wilding says. "The usual interpretation is that this was the great rift between science and religion. You've got this arrogant scientist up against a dogmatic church, and in that head-ramming, the pope's going to win." Not that modern scholars give much credence to the traditional science-vs.-religion interpretation of the trial. Most Galilean researchers today agree that politics played a much bigger role than religious closed-mindedness, but there is spirited disagreement about the specifics. Some think the pope was angry at being parodied by Galileo's character Simplicius in Dialogue Concerning the Two Chief World Systems. Other scholars have suggested that church leaders felt Galileo had tricked them into granting him a license to write the book by not revealing its Copernican leanings. But "Salusbury's explanation is kind of refreshingly new," Wilding says. It goes like this: In the middle of the Thirty Years' War between the Holy Roman Empire and almost every major power in Europe, tensions were high between Tuscany and Rome. The Tuscan Duke of Medici had refused to aid Rome in its war efforts against France. Pope Urban VIII decided to punish the Duke by arresting the Duke's personal friend, Galileo. Whatever its motivation, the Roman court found Galileo guilty of heresy and placed him under house arrest. 
He spent the first five years of his sentence in a small house near Florence, where he continued to publish work on the science of motion, and the next—and last—four years of his life confined to another home in Florence closer to his doctors. "No other historian in the 350 years after the trial has ever proposed the theory" that the Pope persecuted Galileo to punish the Duke of Medici, Wilding says. Written only 20 years after Galileo's death, the newfound biography represents one of the earliest explanations for the trial ever recorded. "To me, it feels right," Wilding says. The idea "might provide some closure to a still-festering wound." But Wilding admits that Salusbury himself could be projecting his own interpretations on the event. That's the view Galilean historian Paula Findlen, at Stanford University, takes. To her, the accuracy of Salusbury's claims is less interesting than the fact that Salusbury is claiming them at all. "It's interesting to see how people at that time, from outside Italy, are starting to reconstruct Galileo's life," Findlen says. It shows that people immediately recognized the importance of Galileo, of his works and of his trial. And not only did they grasp the significance, they also suspected that politics was at the root of the trial, even then. "Even if you disagree with Salusbury's interpretation, it reinforces the idea that people knew there was something deeply political about the whole thing." Mario Biagioli, a Harvard historian of science, says that perhaps the most exciting thing about Wilding's findings is the indication of England's early interest in Galileo. Biagioli sees the instant fascination with Galileo as an early sign of progressive thinking within the scientific revolution. "In a sense, the myth of Galileo derives from his early works and biographies—they're part of his canonization," he says. 
At this time, England's fledgling Royal Society, a scientific organization that Salusbury tried in vain to join, was looking to establish its patron saints, Biagioli explains, and Galileo seemed to fit the bill. Salusbury's decision to write a biography of Galileo may reflect the desire to reach across borders and solidify science as a worldwide affair. But if there was so much interest in Galileo, why did the Salusbury biography ever disappear in the first place? Why didn't anybody make copies of the single remaining manuscript? Findlen suggests that, at some point, interest in Galileo waned. Maybe it was the canonization of English scientists such as Francis Bacon, or perhaps the availability of later Galilean biographies, but "you have to conclude that at some point, [the biography] became obscured." Then missing. Then lost. Then finally found again. But some scholars worry that the book may disappear again. In 2006, Sotheby's sold it for £150,000 to an anonymous private collector. In his last encounter with the biography, Wilding slipped a note inside the cover asking that its new owner contact him so that it might be studied further. Ultimately, he'd like to see it wind up in a museum. "It would be sad if things ended here, if it was lost again and kept in a private library for another 300 years," Wilding says. But he's hopeful that the more people talk about the biography, the more it comes up in public and scholarly discussions, the more likely it will be that the new owner will release the book to the public domain. "There does seem to be something of a curse on it," Wilding says. "I suppose I should start fearing fires and plagues at this point."
<urn:uuid:62a8464d-35eb-4511-9e08-3da8e66ba56b>
CC-MAIN-2016-26
http://www.smithsonianmag.com/science-nature/galileo-reconsidered-7931973/?page=2
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397213.30/warc/CC-MAIN-20160624154957-00150-ip-10-164-35-72.ec2.internal.warc.gz
en
0.976527
2,330
3.859375
4
Southern Appalachian Range States Despair Over the ‘Bug that Ate Christmas’ In West Virginia’s scenic Canaan Valley National Wildlife Refuge, with its gently sloping mountains and emerald acres of timber, Mike Powell relishes the perks of his job as a caretaker of the land: the sounds of a gurgling stream and the fresh pine scent of evergreens. But one sight deeply troubles him — the haggard look of the valley’s fabled Christmas trees. Some are bent like old men. The eye-popping green hue that makes people want to adorn them with ornaments has yellowed. A few are covered with hideous waxy balls, a telltale sign that they are under siege by the balsam woolly adelgid, a tiny insect with a notorious reputation among entomologists, who call it “the bug that ate Christmas.” Along the southern Appalachian range, they are eating two of the nation’s most popular wild Christmas trees — Canaan and Fraser firs — to death. People who buy Christmas trees at farms need not worry. Farmers who grow Christmas trees control the pest with a potent and costly insecticide, two-man crews spraying one to two acres a day. They work with agricultural extension agents to develop the most efficient pest management strategy because, said Rick Dungey of the National Christmas Tree Association, “it’s very expensive.” But there’s no stopping adelgids in the wild, where applying chemicals might take out far more organisms than the target.
<urn:uuid:b8c98f28-3c8d-418b-b864-d307c2d8a3c3>
CC-MAIN-2016-26
http://www.governing.com/news/headlines/southern-appalachian-range-states-despair-over-the-bug-that-ate-christmas.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398216.41/warc/CC-MAIN-20160624154958-00137-ip-10-164-35-72.ec2.internal.warc.gz
en
0.940961
353
2.640625
3
* Don't slack off! Take challenging courses and keep studying hard this semester, as these classes will better prepare you to succeed in college. * Keep track of important scholarship and financial aid opportunities and deadlines. * Begin working on the Free Application for Federal Student Aid (FAFSA). Pick up a copy at your guidance counselor's office or apply online in January at www.fafsa.ed.gov. * Begin scheduling visits to colleges you are considering attending. * Plan to take the ACT. * Begin investigating college scholarship and financial aid opportunities. Sophomores and Freshmen: * Begin gathering information about colleges that you might consider attending. * Meet with your high school counselor to discuss your career/future interests. * Plan to take the most challenging high school courses you can. Parents: * Emphasize to your child the importance of taking challenging courses in high school. * Talk to high school staff to make sure you understand college preparation needs. * Learn about financial aid and college funding. A good resource is http://
<urn:uuid:71ceda9d-96de-4f99-ae38-011f078e6b58>
CC-MAIN-2016-26
http://www.standard-democrat.com/story/1358062.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783392069.78/warc/CC-MAIN-20160624154952-00191-ip-10-164-35-72.ec2.internal.warc.gz
en
0.917432
214
2.515625
3
Blood and Body Fluid Precautions What are blood and body fluid precautions? Blood and body fluid precautions are recommendations designed to prevent the transmission of HIV, hepatitis B virus (HBV), hepatitis C virus (HCV), and other diseases while giving first aid or other health care that includes contact with body fluids or blood. These precautions treat all blood and body fluids as potentially infectious for diseases that are transmitted in the blood. The organisms spreading these diseases are called blood-borne pathogens. Blood and body fluid precautions apply to blood and other body fluids that contain visible traces of blood, semen, and vaginal fluids. They also apply to tissues and other body fluids, such as from around the brain or spinal cord (cerebrospinal fluid), around a joint space (synovial fluid), in the lungs (pleural fluid), in the lining of the belly and pelvis (peritoneal fluid), around the heart (pericardial fluid), and amniotic fluid that surrounds a fetus. Why are blood and body fluid precautions important? Although skin provides some protection from exposure to potentially infectious substances, it is strongly recommended that health professionals use blood and body fluid precautions for further protection when they are providing health care. These precautions also help protect you from exposure to a potential infection from your health professional in the unlikely event that you come in contact with the health professional's blood. The American Red Cross recommends that everyone use blood and body fluid precautions when giving first aid. Are blood and body fluid precautions always needed? The best practice is to always use blood and body fluid precautions, even when you can't see any blood and there's no chance that blood is present. But the precautions aren't absolutely needed if you don't see any blood when you come in contact with other body fluids, such as: - Breast milk. - Mucus from the nose or lungs. 
How can you reduce your risk of exposure to blood and body fluids? Blood and body fluid precautions involve the use of protective barriers such as gloves, gowns, masks, and eye protection. These reduce the risk of exposing the skin or mucous membranes to potentially infectious fluids. Health care workers should always use protective barriers to protect themselves from exposure to another person's blood or body fluids. Gloves protect you whenever you touch blood; body fluids; mucous membranes; or broken, burned, or scraped skin. The use of gloves also decreases the risk of disease transmission if you are pricked with a needle. - Always wear gloves for handling items or surfaces soiled with blood or body fluids. - Wear gloves if you have scraped, cut, or chapped skin on your hands. - Change your gloves after each use. - Wash your hands immediately after removing your gloves. - Wash your hands and other skin surfaces immediately after they come in contact with blood or body fluids. - Masks and protective eyewear, such as goggles or a face shield, help protect your eyes, mouth, and nose from droplets of blood and other body fluids. Always wear a mask and protective eyewear if you are doing a procedure that may expose you to splashes or sprays of blood or body fluids. - Gowns or aprons protect you from splashes of blood or body fluids. Always wear a gown or apron if you are doing a procedure that may expose you to splashes or sprays of blood or body fluids. How else can I reduce my risk? The American Red Cross recommends that everyone use blood and body fluid precautions while giving first aid. You may wish to have gloves available in your home, office, or vehicle if you think you may be required to help another person in an emergency. Other precautions can help you minimize your risk of exposure to contaminated blood and body fluids. 
- If you give injections to a family member or friend: - Use puncture-resistant containers to dispose of needles, scalpels, and other sharp instruments. - Do not recap needles. - Do not bend or handle used needles or disposable syringes. - Avoid touching objects that may be contaminated. What should I do if I am exposed? - Wash your hands immediately after any exposure to blood or body fluids, even if you wear gloves. - If you get splashed in the eyes, nose, or mouth, flush with water. - If you are pricked by a needle (needlestick), contact your doctor right away for further advice. Other Works Consulted - Centers for Disease Control and Prevention (2003). Exposure to blood: What healthcare personnel need to know. Available online: http://www.cdc.gov/ncidod/dhqp/pdf/bbp/Exp_to_Blood.pdf. - Centers for Disease Control and Prevention (2003). Guidelines for environmental infection control in health-care facilities: Recommendations of CDC and the Healthcare Infection Control Practices Advisory Committee (HICPAC). MMWR, 52(RR-10): 1–48. Available online: http://www.cdc.gov/mmwr/preview/mmwrhtml/rr5210a1.htm. [Errata in MMWR, 52(42): 1025–1026. Available online: http://www.cdc.gov/mmwr/preview/mmwrhtml/mm5242a9.htm.] - Centers for Disease Control and Prevention (2007). Guideline for isolation precautions: Preventing transmission of infectious agents in healthcare settings 2007. Available online: http://www.cdc.gov/hicpac/2007IP/2007isolationPrecautions.html. Primary Medical Reviewer William H. Blahd, Jr., MD, FACEP - Emergency Medicine Specialist Medical Reviewer H. Michael O'Connor, MD - Emergency Medicine Current as of: November 20, 2015 To learn more about Healthwise, visit Healthwise.org.
<urn:uuid:b43983eb-ac18-4f82-a08a-9a173abe35a2>
CC-MAIN-2016-26
http://www.uwhealth.org/health/topic/special/blood-and-body-fluid-precautions/tv7778spec.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397865.91/warc/CC-MAIN-20160624154957-00192-ip-10-164-35-72.ec2.internal.warc.gz
en
0.857099
1,250
4.03125
4
Oral History Resources According to the Oral History Association, "Oral history is a field of study and a method of gathering, preserving and interpreting the voices and memories of people, communities, and participants in past events." Oral historians collect testimony from living people and then analyze and verify their findings for placement within a historical context. A list of resources, prepared by Liesl Orenic, Director of the American Studies Program, Dominican University, is available on our website. Dominican Monastic Search Dominican Monastic Search is a review journal produced by the Conference of the Nuns of the Order of Preachers of the United States of America. The purpose of this publication is to explore Dominican theology and spirituality and to share insights gained from study and prayer. This resource was first published in 1980 and the articles are written by Dominican nuns. Several volumes have been digitized and are available online.
<urn:uuid:ff1b7e0b-dbdc-47bf-9228-4595cab73c5f>
CC-MAIN-2016-26
http://www.dom.edu/mcgreal/resources
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395160.19/warc/CC-MAIN-20160624154955-00106-ip-10-164-35-72.ec2.internal.warc.gz
en
0.939771
181
2.984375
3
There’s a certain irony surrounding ageing and diet that’s hard to ignore. As we get older we become less active, burning fewer calories and requiring less energy from food to fuel the body. However, old age is when we need vitamins and minerals from food more than ever to ward off disease, keep us healthy, and expedite wound healing, for example. So what do we do? Eat an amount we’re comfortable with and starve the body of essential vitamins, or eat until we’re bursting, putting our organs under the unnecessary strain of extra weight? It may seem like a lose-lose situation, but there’s a very obvious solution. By swapping out your regular meals for dishes filled with healthy, mineral-rich ingredients, you can find a good balance between fuelling your body, and satisfying your needs. Why Do We Need to Watch What We Eat? While there are many reasons why our nutritional needs change throughout different points in our lives, there are three primary reasons for us to start paying more attention to what we eat as we get older. Firstly, as we age, the body becomes less capable of absorbing nutrients from food. This means that we can be eating the same sort of healthy diet that we did when we were younger, and yet getting much less from it. Some health professionals have rallied for the introduction of recommended daily allowances, or RDAs, specifically for the elderly, as we already have RDAs for adults and children, but as yet this is something that hasn’t materialised in the UK. Until it does, it’s up to us to ensure we’re taking in slightly more than the published RDAs. Don’t worry about taking too much of the good stuff naturally from foods, but if you decide to use supplements, only take the recommended dosage – there can be some nasty side effects if you take too much! Secondly, there are some health conditions that are much more common amongst the elderly than in younger generations, such as osteoporosis (a weakening and degeneration of the bones), for example. 
Conditions such as this may not be curable through a change in diet, but they can certainly be managed through good nutrition. Calcium is something we all know is good for strong, healthy bones, but research shows that the over 65s are taking in between 480 and 600 mg per day, when they really should be taking in 800 mg at the very least. Of course, there’s still the issue that calcium and other vitamins and minerals aren’t absorbed into the body very easily, so what’s the solution? Meat! Well, protein, and lots of it. Protein has an anabolic effect on the body, which means it promotes the building of new tissue – it helps the body produce new cells, speeds up digestion, and essentially encourages the body to act in a way it hasn’t done for years! Increasing protein intake is an excellent way to help the body absorb a greater amount of dietary calcium. Finally, some common conditions amongst the elderly can affect lifestyle choices, which can in turn determine whether or not we should be aiming to consume higher quantities of vitamins and minerals. Immobility, for example, can reduce the amount of time a person spends outside of the home, severely limiting exposure to natural sunlight – a leading source of Vitamin D. Vitamin D should ideally come from two sources – food and sunlight – so it’s important to make both dietary and lifestyle changes if you’re running low, such as increasing the consumption of Vitamin D-rich foods, such as oily fish, and using mobility aids to get outdoors more frequently. Don’t Get Stuck in a Rut When making the change to a healthier diet, it can become very easy to get stuck in a rut, cooking up the same meat and two veg each and every night. It doesn’t take a genius to work out that eating well will become very boring, very fast. Instead, it’s best to keep it interesting. Perhaps try new health-boosting foods that you’ve never tasted before. 
Consider foods such as sardines, blueberries, and cinnamon – not your average ingredients, but very healthy, and mouthwateringly tasty.
<urn:uuid:5c269ef6-e0a1-401c-96ac-60f86dd482e8>
CC-MAIN-2016-26
http://www.bfeedme.com/changing-dietary-requirements-old-age-need-know/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399117.38/warc/CC-MAIN-20160624154959-00121-ip-10-164-35-72.ec2.internal.warc.gz
en
0.957052
884
2.609375
3
Vol. 32, No. 1, February 2000 Extreme Tree Rings in Spruce (Picea abies [L.] Karst.) and Fir (Abies alba Mill.) Stands in Relation to Climate, Site, and Space in the Southern French and Italian Alps (pp 1-13) The similarity over long distances of dendroecological pointer years (with extreme ring-widths) was studied at both regional and country scales, in order to investigate the geographical extension of climate influences on tree-rings. Two common species, Norway spruce (Picea abies Karst.) and white fir (Abies alba Mill.) were compared. The regional study was carried out on 33 populations located in four alpine valleys along a climatic gradient of summer aridity (Tarentaise, Maurienne, and Briançonnais, in France, and Susa valley in Italy). For most species and regions, several negative ring-width pointer years with abrupt growth reductions such as 1976, 1922, 1986, and 1944 were common (listed in order of decreasing importance). However, spruce growth was more reduced in 1948 than that of fir. At the country scale, some of the strongest positive (e.g., 1932, 1964, 1969) and negative (e.g., 1956, 1962, 1976, 1986) pointer years extended over the whole of France, whereas the geographic variability was explainable by the autoecology of species. At both studied scales, evident climatic interpretations such as severe winter frosts, unusual summer droughts, or excessively wet and cold springs can explain most of the negative pointer years. Conversely, most positive growth responses are caused by a local combination of favorable climatic factors rather than simple extreme events, and therefore are less efficient for wood dating. Tree Growth near Treeline: Abrupt or Gradual Reduction with Altitude? (pp 14-20) Natural climatic treelines are relatively discrete boundaries in the landscape established at a certain elevation within an otherwise continuous gradient of environmental change. 
By studying tree rings along elevational transects at and below the upper treeline in the European Alps, we (1) determine whether radial stem growth declines abruptly or gradually, and (2) test climatic influences on trees near treeline by investigating transects for climatically different historical periods. While tree height decreases gradually toward the treeline, there is no such general trend for radial tree growth. We found rather abrupt changes which imply threshold effects of temperature which moved upslope in a wave-like manner as temperatures increased over the past 150 yr. Currently radial tree growth at treeline in the Alps is the same magnitude as at several hundred meters below current treeline. Over short intervals, tree-ring width is more dependent on interannual climatic variability than on altitudinal distance to treeline. We conclude that (1) the elevational response of tree-rings includes a threshold component (a minimal seasonal temperature) and that (2) radial growth is more strongly correlated with year to year variation in climate than with treeline elevation as such. Our data indicate that the current treeline position reflects influences of past climates and not the current climate. Humus Forms in the Forest-Alpine Tundra Ecotone at Stillberg (Dischmatal/Switzerland): Spatial Heterogeneity and Classification (pp 21-29) In the forest-alpine ecotone at Stillberg (Dischmatal/Switzerland) the morphology of humus forms and the spatial variability of organic layer properties were investigated. At northeast-exposed gully sites mulls with high acidity in the A-horizon occur. They were classified after the Canadian classification of humus forms as Rhizomulls. Mors occur on ridges and on their east- and north-exposed aspects. They can be differentiated by the ratio between the thickness of the F-horizon and the combined thickness of the F- and H-horizon. 
The relative thickness of the F-horizon increases significantly in the order: east aspects < ridges < north aspects. The humus forms of the east aspects and the ridges were classified as Humimors and those of the north aspects as Hemimors. The Canadian classification was suitable to describe the properties of the horizons and to classify the humus forms. The results of a grid sampling at the study sites and the computation of nonergodic correlograms show that the spatial variability of organic-layer thickness, bulk density, and moisture is high (CV around 50%), with a pronounced small-scale heterogeneity (range usually below 2 m, and more than 50% of the variability occurs within 0.3 m). Only 33% of the variance of organic-layer thickness was explained by site and vegetation structure, but in spite of the low percentage both proved to be significant factors. In the forest-alpine tundra ecotone about 30 to 35 soil samples per site are needed for a reliable estimation of the mean of the organic-layer thickness. Nutrient and Thermal Regime, Microbial Biomass, and Vegetation of Antarctic Soils in the Windmill Islands Region of East Antarctica (Wilkes Land) (pp 30-39) In the antarctic summer of 1996, permafrost-affected cold soils close to the Australian Casey Station in the Windmill Islands region (Wilkes Land) were investigated to determine in what way the thermal and nutrient regimes in the antarctic soils are related to microbial biomass and vegetation patterns. The soils are characterized by a high content of coarse mineral particles and total organic carbon (TOC) and a low C/N ratio (mean 11). Despite the low pH values (mean 4.0) the soils are rich in nutrients due to an input from seabirds (existing or abandoned nesting sites) and an eolian distribution of fine-grained soil material in the landscape.
Vegetation influences TOC storage and the cation exchange capacity in the uppermost soil horizons, whereas total N and most nutrient levels are not affected by the vegetation, but by seabird droppings. The present nutrient level does not affect plant adaptation, because the K, Mg, and P contents are often extraordinarily high. This suggests that nutrient supply is not a limiting factor, whereas microclimate effects, such as moisture availability and ground-level wind speed, have a primary influence on plant growth. Soil-surface temperature measurements indicate a strong variability in microclimate due to small-scale variations in geomorphological surface features. Bacteria were found in all soil horizons, but not algae or yeasts. Soil microbial counts are weakly correlated to the C/N ratios and soil surface temperatures. High TOC and clay contents probably improve the soil water-holding capacity, and TOC contributes to the microbial food supply. The investigated microbial parameters are weakly correlated to the present vegetation carpet; the lowest counts are found in the soils with scattered or no vegetation cover. Constraints to Nitrogen Fixation by Cryptogamic Crusts in a Polar Desert Ecosystem, Devon Island, N.W.T., Canada (pp 40-45) Polar desert ecosystems, which dominate the landscape throughout much of the High Arctic, are environmentally stressed and limited in their development. Scattered intermittently over these landscapes are areas of cryptogamic crust development that are associated with increased vascular plant abundance. Since nutrient limitation, especially nitrogen, is significant in these ecosystems, I wished to examine the role of these cryptogamic crusts in the supply of fixed nitrogen and the constraints to that fixation. Nitrogen fixation rates (as measured by acetylene reduction) were highest in sites with a well-developed cryptogamic crust, lowest in sites with only bare mineral soil, and intermediate in sites with a partially developed crust.
Highest rates of acetylene reduction (i.e., nitrogen fixation) were seen within a few days of snowmelt (late June to early July) and declined as the season progressed, until near the end of the growing season (1-5 August) when rates were approximately 50% of early season rates. Late season precipitation events restored acetylene reduction rates to near original levels. In manipulative experiments, acetylene reduction rates dropped dramatically as crust moisture content declined, and rates increased as soil surface temperature increased to 24°C. A significant finding was that acetylene reduction at 3°C was 40% of that found at 12 to 13°C. Thus, there is a potential for nitrogen accumulation even during the colder periods of the growing season. As calculations show, the quantity of nitrogen fixed by these cryptogamic crusts was adequate to support the nitrogen needs of the mosses and vascular plants of these developing ecosystems. Pliocene Piedmont Glaciation in the Río Shehuen Valley, Southeast Patagonia, Argentina (pp 46-54) There are few sites in southern South America where late Tertiary glacial sediments have been radiometrically dated. Glaciations occurred near the Miocene-Pliocene transition, during the mid-Pliocene (3.5 Ma), and after 2.1 Ma. In the Río Shehuen valley at least four river terraces older than 2.25 Ma correlate with terminal moraines and outwash plains. The age of the youngest Pliocene advance is bracketed by two radiometric dates; glaciofluvial sediments lie on a 3.0 Ma old basalt lava and merge into a river terrace that is covered by 2.25 Ma old basalts. A remnant of a terminal moraine of the second oldest advance still exists. This glacier advance extended over 160 km from the Southern Patagonian Icefield. The oldest glaciation occurred well before 3.0 Ma. It may be possible to correlate these pre-Pleistocene glaciations to glacial deposits covering the Meseta Desocupada north of the glacial basin of Lago Viedma (250 m altitude).
There, at an elevation of 1500 m, previous research found a till lying between two basalt flows with an age of 3.5 Ma. Till, separated by soil formations, covers the plateau surface. Nineteenth- and Twentieth-Century Glacier Fluctuations and Climatic Implications in the Arco and Colonia Valleys, Hielo Patagónico Norte, Chile (pp 55-63) Dendrochronology, lichenometry, and analysis of aerial photographs taken in 1944, 1979, and 1983 were used to date the 19th- and 20th-century fluctuations of the Arco, Colonia, and Arenales glaciers on the eastern side of the Hielo Patagónico Norte in southern Chile. This work has demonstrated that the glaciers retreated from their Little Ice Age maximum positions between 1850 and 1880, with retreat rates increasing during the 1940s and with surface thinning of at least 30 m since 1980. Comparison with the fluctuation behavior of other outlet glaciers of the icefield suggests a degree of synchrony in the timing of their variations and therefore argues for a common climatic control for these movements. Variability in Winter Mass Balance of Northern Hemisphere Glaciers and Relations with Atmospheric Circulation (pp 64-72) An analysis of variability in the winter mass balance (WMB) of 22 glaciers in the Northern Hemisphere indicates two primary modes of variability that explain 46% of the variability among all glaciers. The first mode of variability characterizes WMB variability in Northern and Central Europe and the second mode primarily represents WMB variability in northwestern North America, but also is related to variability in WMB of one glacier in Europe and one in Central Asia. These two modes of WMB variability are explained by variations in mesoscale atmospheric circulation which are driving forces of variations in surface temperature and precipitation. The first mode is highly correlated with the Arctic Oscillation Index, whereas the second mode is highly correlated with the Southern Oscillation Index.
In addition, the second mode of WMB variability is highly correlated with variability in global winter temperatures. This result suggests some connection between global temperature trends and WMB for some glaciers. Holocene Treeline and Climate Change in the Subalpine Zone Near Stoyoma Mountain, Cascade Mountains, Southwestern British Columbia, Canada (pp 73-83) Multiproxy paleoecological investigation of a small lake in the high subalpine zone near Stoyoma Mountain, northern Cascade Mountains of British Columbia, reveals significant change in vegetation, limnic conditions, and inferred climate throughout the Holocene (last 10,000 radiocarbon years). Three zones of distinct pollen, plant macrofossil, and chironomid assemblages are apparent in the sediment core from 3M Pond (informal name). A dry, sparsely vegetated spruce parkland and a warm-adapted chironomid community existed in and around the study sites in the early Holocene (ca. 10,000 to 7000 14C yr BP). Between 7000 and 3500 14C yr BP, Engelmann spruce-subalpine fir forest conditions established and then declined around 3M Pond leading to modern subalpine parkland conditions from 3500 14C yr BP to present. Chironomid communities at 3M Pond between 7000 and 3500 14C yr BP are indicative of warmer waters than present, but show a transition to modern assemblages. Three climatic regimes are identified near Stoyoma Mountain: (1) the early Holocene xerothermic period (10,000 to 7000 14C yr BP), (2) a period of climatic transition in the mid-Holocene (7000 to 3500 14C yr BP), and (3) cool, modern neoglacial conditions (after 3500 14C yr BP). These findings confirm vegetation and inferred climate changes identified at Cabin Lake, British Columbia (a nearby lake in the subalpine forest). Changes in treeline position, plant communities, chironomid communities, and inferred climate are nearly synchronous and validate the multiproxy approach for paleoecological reconstruction.
Chironomid-based paleotemperature reconstructions confirm earlier evidence that the early Holocene was significantly warmer than present, with estimated summer water surface temperatures up to 4°C higher than today. Hydrology and Geochemistry of River-borne Material in a High Arctic Drainage System, Zackenberg, Northeast Greenland (pp 81-94) The roles of chemical and mechanical weathering in permafrost regions were assessed by measuring stream discharge and major, trace, and rare earth elements (REE) of suspended matter (SPM), river-bed sediments (RBS), and water in two lithologically different catchments in the High Arctic at Zackenberg, Northeast Greenland. The drainage basin contains sedimentary and crystalline rocks. In streams draining the sedimentary rock area, SPM and total dissolved solutes (TDS) are high, with maximum values of 2500 mg L-1 and 105 µS cm-1, respectively. Variation of both relates to changes in vegetation and morphology. Mineral fractionation during transport and soil-forming processes in the sedimentary portion of the study area lead to characteristic chemical profiles for the SPM and RBS. Streams draining the crystalline rock area have low SPM (18 mg L-1) and TDS (14 µS cm-1) as a result of poor soil development and a lack of vegetation. Mechanical denudation exceeds chemical denudation by an order of magnitude for the entire catchment. Because the REE distributions of the crystalline SPM differ from those of the sedimentary SPM, it is possible to quantify source rock contributions to the main outflow using a mixing calculation. A mass balance comparing the SPM in the main outflow with the tributaries, using the REEs as "fingerprints", indicates that about 90% of the sedimentary basin suspended matter is redeposited before reaching the outflow, at least over the period of observation.
Taking this redeposition into account, the rate of chemical denudation (100 kg km-2 d-1) exceeds mechanical denudation (70 kg km-2 d-1) in the sedimentary drainage basin. Seasonal Changes in Bed Elevation in a Step-Pool Channel, Rocky Mountains, Colorado, U.S.A. (pp 95-103) Scour and fill patterns at East St. Louis Creek, Colorado, were investigated via repeat, detailed surveys of the channel bed at 11 cross sections during the 1995 snowmelt season. Spatial variability was remarkably high, with significant differences in cross section scour and fill patterns over distances as short as 0.5 m. Most sites had small net changes in bed elevation, both daily and over the entire runoff season. The data and observations indicate the presence of small pulses of fine material that are temporarily deposited on top of the channel pavement in wider areas of the channel and near woody debris complexes. Scour and fill are primarily limited to the finer material of such pulses. ANOVA indicates that although discharge was important in predicting changes in bed elevation, the relationship between discharge and bed mobility is complicated by the effects of local channel morphology and a slight hysteresis. Regression analysis shows that variations in channel width determine where finer sediments are deposited, and therefore the locations of greater change in bed elevation. The proximity of morainal ridges and boulders to the channel edge locally influences the channel width and also the distribution of woody debris complexes. Results of this study suggest that the channel morphology and sediment transport along some reaches of small, high-gradient streams in watersheds with a glacial history may not respond as substantially to changes in discharge characteristics as do other types of alluvial channels.
A Note on Summer CO2 Flux, Soil Organic Matter, and Microbial Biomass from Different High Arctic Ecosystem Types in Northwestern Greenland (pp 104-106) We measured CO2 flux, soil organic matter, and soil microbial biomass carbon in six high arctic tundra communities near Thule, Greenland, in July 1997, including polar desert, polar semidesert, and polar oasis ecosystems. Three of the four polar desert sites were in a contiguous toposequence originating at the receding margin of the Greenland ice cap and extending away from the ice approximately 400 m. The other sites ranged from 3 to 12 km from the ice margin. We measured net ecosystem CO2 uptake in the polar desert ecosystem most distant from the ice sheet (1.2 g CO2 m-2 d-1) and in the polar semidesert ecosystem (0.3 g CO2 m-2 d-1), but net CO2 loss in the polar oasis site and the three polar desert sites in the toposequence. Ecosystem respiration tended to be greatest in the ecosystems that have apparently been ice-free the longest, with efflux rates up to 3.7 g CO2 m-2 d-1. In the toposequence, soil organic matter was greatest adjacent to the icecap (3.10%) and decreased to 0.93% in the polar desert site 400 m from the ice. The polar semidesert and polar oases sites had 2.67 and 3.83% soil organic matter, respectively. Soil microbial biomass carbon ranged from about 1 mg C g-1 soil in the polar oasis ecosystem to about 0.2 mg C g-1 soil in one of the polar desert ecosystems but did not follow the patterns we found for soil organic matter. Our findings substantiate other recent studies showing significant CO2 flux between high arctic ecosystems and the atmosphere, and suggest that carbon exchange in these systems merits consideration in circumarctic estimates of carbon flux.
<urn:uuid:ab354f84-c5f9-4696-9714-494aa3018162>
CC-MAIN-2016-26
http://www.colorado.edu/INSTAAR/arcticalpine/volume32/32-1abs.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396945.81/warc/CC-MAIN-20160624154956-00131-ip-10-164-35-72.ec2.internal.warc.gz
en
0.928059
4,078
2.90625
3
Polluted Penguin Poop 30 July 2007 Penguin guano in the Antarctic is adding to organic pollutant problems there, say Belgian scientists. Adrian Covaci at the University of Antwerp, Belgium, and colleagues found unexpectedly high levels of organic pollutants in the soil around a colony of non-migratory Adelie penguins in the Antarctic. Concerns about organic pollutant levels in the Antarctic have led to intensive studies into how they reach this remote region, said Covaci. The pollutants originate from man-made sources such as organochlorine pesticides and brominated flame retardants, he explained. The routes through which they normally travel are air and ocean currents. Recent studies have shown that migrating birds can also transport organic pollutants to the Antarctic in their body tissues, added Covaci. Covaci's study shows that non-migratory penguins are also redistributing organic contaminants on a local scale, resulting in levels 10 to 100 fold higher than expected in the soil around their colonies. Covaci suggests that penguins are initially exposed to the contaminants by eating polluted fish, which have been contaminated through the food chain. Bioaccumulation means that the penguins have high levels of contaminants in their bodies. The soil around the colony is then contaminated by penguin guano and carcasses. Kevin Jones, an expert in organic pollutant transport at the University of Lancaster, UK, said the work is important. It highlights an unusual mechanism that is moving chemicals around our planet, said Jones.
<urn:uuid:f325ea03-995c-4827-8567-145a57ccc90f>
CC-MAIN-2016-26
http://www.enn.com/top_stories/article/21400
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783392527.68/warc/CC-MAIN-20160624154952-00145-ip-10-164-35-72.ec2.internal.warc.gz
en
0.947514
316
3.21875
3
The government and Tokyo Electric Power Co. said Monday they may be able to start removing the melted fuel inside the crippled nuclear reactors at the Fukushima No. 1 complex around 18 months earlier than initially planned, although this action would still be years away. The process would reportedly begin with the removal of fuel assemblies from the outside-reactor spent-fuel pools of units 1 to 4, the latter of which was widely reported to have been exposed to the atmosphere when the building housing the reactor was blown up in a hydrogen explosion involving an adjacent reactor that suffered a meltdown. According to a revised plan to decommission four reactors at the six-reactor complex, the extremely challenging task of removing melted fuel from reactors 1, 2, and 3, which suffered the core meltdowns, and from the exposed spent-fuel pool of unit 4 could start within the first half of fiscal 2020 if the efficiency of preparation work is improved. But the government and Tepco still believe it will take 30 to 40 years to complete the decommissioning process from the December 2011 point at which the plant was declared to have achieved a stable state of cold shutdown. The revised plan is expected to be compiled later this month after the opinions of local governments and experts are solicited, a government official said. Tepco continues to struggle to contain the catastrophe at the Fukushima plant more than two years after it was crippled by the March 11, 2011, Great East Japan Earthquake and monster tsunami. When the disaster struck, knocking out power, Tepco was unable to cool reactors 1 to 3 before they experienced core meltdowns or keep the empty reactor 4’s external spent-fuel pool cooled. Subsequent hydrogen explosions severely damaged the buildings housing reactors 1, 3 and 4. 
Reactors 1, 2 and 3 are currently being kept cool through continued water injections — a process that is creating a huge accumulation of radioactive water that Tepco is hard-pressed to deal with. Tepco plans to start taking out fuel assemblies from reactor 4’s spent-fuel pool later this year and move on to the removal of fuel from the spent-fuel pools of units 1 to 3, before trying to extract the melted fuel in their cores, which at their present state of danger are inaccessible. The removal of the melted fuel is expected to take place in the final stage of the decommissioning process.
<urn:uuid:907d8eaf-6878-49ad-bb5b-00df648bb314>
CC-MAIN-2016-26
http://www.japantimes.co.jp/news/2013/06/10/national/removal-of-melted-fuel-from-stricken-fukushima-reactors-may-be-advanced-a-bit/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397744.64/warc/CC-MAIN-20160624154957-00082-ip-10-164-35-72.ec2.internal.warc.gz
en
0.965666
486
2.78125
3
RNA editing in the mammalian brain In recent years, evidence for the eukaryotic messenger RNA as a target for gene regulation and sequence alteration has vastly expanded. New technologies such as high-throughput sequencing of entire transcriptomes have revealed that alternative RNA processing events are used to increase the diversity of the proteome to a much larger extent than previously anticipated. By alternative splicing, alternative polyadenylation and RNA editing, the possibilities to diversify and regulate the RNA population are almost infinite. In addition, an increasing number of non-coding RNAs, such as microRNAs, have been discovered to contribute to tissue-specific and developmentally regulated transcriptome variation. Our research is focused on the functional consequences of adenosine-to-inosine (A-to-I) RNA editing, the most common type of editing in mammals. This RNA processing event is catalyzed by the ADAR enzymes, converting A-to-I within double-stranded or highly structured RNA. Since inosine is recognized as guanosine by the cellular machineries, A-to-I editing has the power to recode mRNAs and also change the sequence of non-coding RNAs such as microRNAs. Most recoding sites of editing have been found in transcripts important for neurotransmission, and a number of brain-specific microRNAs are also edited. We are therefore particularly interested in the function of RNA editing in neurons and in understanding how editing contributes to the variety of the transcriptome during brain development and neuronal stimulation. RNA editing/RNA processing/ADAR/microRNA/neuron. Daniel, C., Silberberg, G., Behm, M., & Öhman, M. (2014). Alu elements shape the primate transcriptome by cis-regulation of RNA editing. Genome Biol. 15(2):R28. Daniel, C., Venö, M., Ekdahl, Y., Kjems, J., & Öhman, M. (2012). A distant cis acting intronic element induces site-selective RNA editing. Nucleic Acids Res. 40(19):9876-86 Ekdahl, Y., Farahani, H.S., Behm, M., Lagergren, J., & Öhman, M.
(2012). A-to-I editing of microRNAs in the mammalian brain increases during development. Genome Res. 22(8):1477-87 Silberberg, G., Lundin, D., Navon, R., & Öhman, M. (2012). Deregulation of the A-to-I RNA editing mechanism in psychiatric disorders. Human Molecular Genetics. 21(2):311-321 Daniel, C., Wahlstedt, H., Ohlson, J., Björk, P., & Öhman, M. (2011). Adenosine-to-inosine RNA editing affects trafficking of the gamma-aminobutyric acid type A (GABA(A)) receptor. J Biol Chem. 286(3):2031-40 January 26, 2016 Page editor: Elin Eriksson Source: Department of Molecular Biosciences, The Wenner-Gren Institute
<urn:uuid:8817c689-75cc-47d0-ae9a-ff937834a0d2>
CC-MAIN-2016-26
http://www.su.se/mbw/research/research-groups/molecular-cell-biology/group-%C3%B6hman
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395679.18/warc/CC-MAIN-20160624154955-00101-ip-10-164-35-72.ec2.internal.warc.gz
en
0.867154
675
2.515625
3
Time in the United States This page features general background information about time in the United States. Useful Information about Time in the United States Below is a list of articles that provide basic background information about the United States’ time: - Time Zones in North America. - Background on the United States’ Time Zone. - The Energy Policy Act of 2005. - Indiana’s Time Zones and Daylight Saving Time. - Why Arizona Does Not Observe Daylight Saving Time. timeanddate.com will expand this section as more information becomes available.
<urn:uuid:34a64384-5f03-4754-a1a9-a5ed7ebf5a30>
CC-MAIN-2016-26
http://www.timeanddate.com/time/us/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395160.19/warc/CC-MAIN-20160624154955-00003-ip-10-164-35-72.ec2.internal.warc.gz
en
0.76763
121
3
3
Brinicles, first captured forming on film by the BBC in 2011, are hollow tubes of ice that descend from Antarctic sea ice. They look a lot like icicles, but aren’t. As sea water freezes into ice, it excludes salt and other ions, which get trapped in brine-rich compartments in sea ice. Brine has a lower freezing temperature than water, so if the sea ice cracks, the liquid is released and immediately freezes any seawater that it comes in contact with, creating a hollow tube of ice descending into the water. Julyan Cartwright and Bruno Escribano, authors of a recent study about brinicles, qualify the formations as “chemical gardens,” plant-like tubular structures that form when metal salt crystals are immersed in certain solutions. Chemical gardens are a common introductory chemistry experiment, and are ubiquitous in children’s chemistry kits as the “grow your own crystals” exercise. They are frequently seen around hydrothermal vents, growing upward. By their very nature, brinicles (and chemical gardens) possess three key ingredients necessary for life: they create chemical gradients; they have the kind of electric potentials that may help jump-start life; and the brine-rich compartments they originate in have primitive membranes that contain fats, lipids, and chemical compounds. These are the same conditions established by chemical gardens at hydrothermal vents, which are presently one of the top candidates for where life originated on Earth. Thus one of the more important implications of the current study is that it proposes that brinicles could have been the site of a cold emergence of life, just as hydrothermal vents could be originators in the hot origins theories. Cold origins have been hypothesized by some scientists because some complex organic molecules form more readily in cold temperatures than in warm ones, and the surface of ice is a good substrate for these reactions.
Even if brinicles aren’t the spot where Earth’s life began, they may help in our ongoing search for habitable zones beyond our planet. Structures similar to brinicles could exist elsewhere in the solar system, for instance on Jupiter’s moons Ganymede or Callisto, where they could harbor life even in freezing conditions.
<urn:uuid:4a16ca20-5795-42ee-a0bb-f58f79f9ad8d>
CC-MAIN-2016-26
http://blogs.discovermagazine.com/visualscience/2013/05/06/life-could-have-evolved-in-frigid-underwater-ice-gardens/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396949.33/warc/CC-MAIN-20160624154956-00089-ip-10-164-35-72.ec2.internal.warc.gz
en
0.937234
472
3.6875
4
The cooling of eastern Pacific Ocean waters has been counteracting the warming effect of greenhouse gases. Our research, released today in Nature, shows this natural variability in ocean cycles is responsible for the “hiatus” in global warming over the last ten years or so. The study considers the tropical Pacific Decadal Oscillation, a climate cycle that plays out over the course of several decades. Within this large pattern fall El Niño and La Niña, well-known faster cycles that cause shifts in the distribution of warm water in the equatorial Pacific Ocean. While El Niño and La Niña last only a few years, the Pacific Decadal Oscillation lasts several decades. The Oscillation has been in a cooling phase since 1998. When the climate cycle that governs that ocean cooling reverses and begins warming again, the planet-wide march toward higher temperatures will resume with vigor. The study does not consider when the reversal might happen, but it brings scientists closer to understanding how to look for signs of it. If researchers can estimate that climate cycle, they could also better estimate the end date of regional trends that are linked to ocean cooling, such as the drought in the southern United States that has caused tens of billions of dollars in estimated agricultural damage since 2000. As carbon dioxide levels in Earth’s atmosphere increase, global temperatures have been rising. Before 2000, global temperatures had risen at a rate of 0.13°C per decade since 1950. The hiatus in warming has happened while levels of carbon dioxide, the main greenhouse gas produced by human activities, continue rising steadily. In May 2013, carbon dioxide reached 400 parts per million in the atmosphere for the first time in human history.
The disconnect led some climate watchers to speculate that increases in the concentration of carbon dioxide are not as strongly coupled to global warming as thought, even though the heat-trapping properties of carbon dioxide have been identified for more than a century. We concluded, however, that natural variability in the form of eastern Pacific Ocean cooling is behind the hiatus. We used computer modeling to simulate regional patterns of climate anomalies. This enabled us to see global warming in greater spatial detail, revealing where it has been most intense and where there has been no warming or even cooling. Climate models consider anthropogenic forcings such as greenhouse gases and tiny atmospheric particles known as aerosols, but they cannot reproduce a specific climate event like the current hiatus. We devised a new method for climate models to take equatorial Pacific ocean temperatures as an additional input. Then, amazingly, our model can simulate the hiatus well. Specifically the model reproduced the seasonal variation of the hiatus, including a slight cooling trend in global temperature during northern winter season. In summer, the equatorial Pacific’s grip on the northern hemisphere loosens, and the increased greenhouse gases continue to warm temperatures, causing record heat waves and unprecedented Arctic sea ice retreat. The last cooling phase in the tropical Pacific Decadal Oscillation – cooling waters in the eastern equatorial Pacific Ocean – lasted from roughly 1940 to the early 1970s. During that cool phase, warmer, drier weather dominated in the midwestern United States. The current cooling phase began just after a strong El Niño year in 1998. The Texas AgriLife Extension Service at Texas A&M University estimates that the drought it traces to that same year caused nearly US$21 billion in agricultural losses through 2011. In 2011, Texas set a single-year record with losses of US$7.62 billion. 
We do not know if the current cooling phase will last as long as the last one. Predicting equatorial Pacific conditions more than a year in advance is beyond the reach of current science. But we know that over the timescale of several decades, climate will continue to warm as we pump more greenhouse gases into the atmosphere.
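The warming-rate figure quoted above (0.13°C per decade before 2000) is the kind of number that comes from fitting a linear trend to a temperature series. As a hedged illustration of that arithmetic (using a synthetic series with a built-in trend, not the actual observational record), here is a least-squares trend fit in plain Python:

```python
# Illustrative only: a synthetic temperature series with a known trend,
# not the real observational record.
def trend_per_decade(years, temps):
    """Least-squares linear trend of temps vs. years, in degrees C per decade."""
    n = len(years)
    mean_y = sum(years) / n
    mean_t = sum(temps) / n
    cov = sum((y - mean_y) * (t - mean_t) for y, t in zip(years, temps))
    var = sum((y - mean_y) ** 2 for y in years)
    slope_per_year = cov / var
    return slope_per_year * 10.0  # convert per-year slope to per-decade

# Synthetic series rising 0.013 C/year (i.e., 0.13 C/decade) over 1950-1999.
years = list(range(1950, 2000))
temps = [0.013 * (y - 1950) for y in years]
print(round(trend_per_decade(years, temps), 2))  # -> 0.13
```

With noisy real-world data the same fit applies; only the recovered slope carries uncertainty.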
<urn:uuid:f9104949-ca1b-457b-b265-cc45e3caf74a>
CC-MAIN-2016-26
https://theconversation.com/warming-slowed-by-cooling-pacific-ocean-17534
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783403825.35/warc/CC-MAIN-20160624155003-00026-ip-10-164-35-72.ec2.internal.warc.gz
en
0.930948
779
3.984375
4
Windows 8 – Administrative Tools – Component Services If you are a normal user, then let me stop you right now. This topic will be little more than scribbles of black ink to you. Microsoft introduced Component Object Model (COM) technology in the early 1990s. The goal of COM was to enable developers to produce reusable binary code in a language-neutral and cross-platform way. COM, by its design, requires implementers to provide a well-defined interface to their objects. Thus, a properly developed COM application can be reused across several languages and platforms. Indeed, even today, COM objects are reused by newer technologies like .NET, regardless of which .NET language is being used. You may find COM frightening. However, COM applications are not uncommon. For example, Microsoft Office Excel is created by grouping several COM objects to work together. COM+ is an extension to COM. It supersedes COM in a way. COM+ builds on COM, and provides additional services like distributed transactions, queuing, role-based security, etc., thus providing more automation, security and flexibility. Component Services, an MMC snap-in, is used to administer COM+ applications on your system. COM+ is used in distributed applications. Such applications can be managed through Component Services. Naturally, this admin tool is meant only for very advanced users, mostly administrators. Using this tool, an admin can deploy COM+ applications, and configure their permissions and other settings. Also, one can create an empty COM+ application using the wizard provided by the tool, which can be fleshed out later by writing code for it. How to Launch It Control Panel Way - Open the “Administrative Tools” applet from the traditional Control Panel. - Launch “Component Services” from the list of administrative tools. - Invoke the Run window or the Search charm. Select the Settings tab in the case of the Search charm. - Type in the command “comexp.msc”, and hit Enter. 
Component Services is visually divided into three panes. - The leftmost tree pane hosts a hierarchy of COM+ applications and objects. - The middle pane describes the selected item in the tree pane. - The rightmost Actions pane lists commands related to the selected item in the tree pane, as well as any selected item in the description pane. The node of your computer in the tree pane lists four main folders. COM+ Applications All the COM+ apps installed on your computer are listed here hierarchically. This is the node where you can add new or remove installed COM objects. Also, you can modify the properties of COM+ objects here. DCOM Config Here you can configure security permissions and authentication levels for individual COM applications. There is no hierarchy here; all the applications are located directly under the folder, instead. Running Processes The COM+ applications that are currently running get a place under this node. Distributed Transaction Coordinator This node features transactions that happen between COM applications on different network computers. It branches into two nodes – Transaction List lists the current transactions, while Transaction Statistics, as implied by the name, displays statistical data like speed, response time, etc., for the transactions.
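Two ideas underpin the language neutrality described above: every COM object exposes well-defined interfaces that clients query for, and every object manages its own lifetime through reference counting. The following toy Python sketch mimics those two mechanisms conceptually; it is not real COM code, and every class and method name here is invented for illustration:

```python
class IUnknownLike:
    """Toy stand-in for COM's IUnknown idea: lifetime + interface discovery."""
    def __init__(self):
        self._refs = 1  # a freshly created object starts with one reference

    def add_ref(self):
        self._refs += 1
        return self._refs

    def release(self):
        self._refs -= 1
        if self._refs == 0:
            self.on_destroy()  # analogous to the object freeing itself
        return self._refs

    def on_destroy(self):
        pass

    def query_interface(self, iface):
        """Return self (with an extra reference) if `iface` is implemented."""
        if isinstance(self, iface):
            self.add_ref()
            return self
        return None


class ISpeaker(IUnknownLike):
    def speak(self):
        raise NotImplementedError


class Greeter(ISpeaker):
    def speak(self):
        return "hello from a COM-style object"


obj = Greeter()
speaker = obj.query_interface(ISpeaker)  # interface found: refcount now 2
print(speaker.speak())
speaker.release()                        # drop the interface pointer
print(obj.release())                     # -> 0, object "destroyed"
```

Real COM does this through binary vtable layouts and HRESULTs rather than Python inheritance, but the contract (query, use, release) is the same.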
<urn:uuid:f2efe2b3-ddf1-4a1d-8259-38c1b67db316>
CC-MAIN-2016-26
http://www.eyeonwindows.com/2012/09/07/windows-8-administrative-tools-component-services/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783402699.36/warc/CC-MAIN-20160624155002-00118-ip-10-164-35-72.ec2.internal.warc.gz
en
0.905068
655
2.59375
3
The state superintendent and the president of a bipartisan education group did their best Tuesday to persuade lawmakers that national standards would be best for Michigan. "The Common Core state standards are the best foundation possible for ensuring career- and college-ready Michigan graduates," said State Superintendent Michael Flanagan. Common Core is a set of math and language arts educational standards created by a group of the country's governors and state superintendents. The purpose is to increase consistency between the school systems in different states, thereby making students more prepared for their futures regardless of where they may be. Education experts say it's not a prescribed curriculum, but rather a yardstick by which to measure educational achievement. "They are not a set of instructional methods," said Michael Cohen, president of Washington D.C.-based Achieve Inc. "They are not a set of instructional materials. Depending on the state those are either determined individually by the state or left to the local school boards or teachers to decide on their own." States are free to approve the standards individually and so far 45 have in full. Minnesota has chosen to adopt only the language arts portion. Alaska, Texas, Nebraska and Virginia have abstained to date. The Michigan Department of Education adopted the program in 2010 and began taking steps to implement it, but the legislature has since blocked its funding. The hearings will ultimately lead to a vote on whether Common Core is best for the state. Cohen said adopting a set of common goals can raise the country's level of proficiency, as scores on various state tests rarely equate. "Proficient in Michigan does not mean proficient in Minnesota or proficient in Massachusetts," he said Tuesday. "The scores that earn a proficient determination on the fourth-grade reading test [in Michigan] would be considered below basic on the fourth-grade National Assessment of Educational Progress. 
Clearly in many states students are told that they're proficient when they're not really prepared for what comes next." Nationalizing standards would also put the US on pace with many other countries, Cohen said. The standards would come with a new set of tests that would replace the Michigan Educational Assessment Program. "These tests are used to hold schools accountable," Cohen said. "Tests are the tools that you use and students use and the parents use to know if the schools in the state are making progress and whether the tax dollars you spend are actually producing the results that you need." Opponents say they fear adopting Common Core will result in a nationalized curriculum, taking the power out of the hands of local school districts. "They say that we'll have a seat at the table," said Rep. Tom McMillin (R-Rochester Hills). "We own the table right now. Before Common Core we were deciding what standards were taught in our schools. With Common Core we don't. "I really wish we'd get away from this idea that they're only standards. They will decide curriculum. You test standards with assessments and you teach to the test." Other representatives from both sides of the aisle questioned student privacy when it comes to collecting data and where some of the money would come from to support some of the program's technology goals. Tuesday was the first of four hearings on Common Core. House Speaker Jase Bolger (R-Marshall) has said he wants legislators to take a vote before the fiscal year begins and the budget takes effect October 1. Legislators said the two-and-a-half-hour hearing was a good start. "I'm encouraged by what I heard today," said Rep. Andy Schor (D-Lansing). "There were a lot of tough questions. The Dept. 
of Education clearly are the experts and they have done the research and they have decided this is the standard to move forward with and I think that we need to make sure that we as legislators agree and that it's done the right way and I think they're doing a good job so far."
<urn:uuid:d8539c86-b6bb-4988-afff-73cbed115be1>
CC-MAIN-2016-26
http://www.wilx.com/news/headlines/Education-Experts-Push-Common-Core-215723051.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398628.62/warc/CC-MAIN-20160624154958-00165-ip-10-164-35-72.ec2.internal.warc.gz
en
0.975678
807
3.15625
3
Located in Normandy, France, Mont Saint-Michel (also known as Le Mont-Saint-Michel) is a rocky island that is famous for being the home of the medieval Benedictine Abbey and church. It was used primarily during the 6th and the 7th century as a stronghold of Romano-British culture. One legend about Mont Saint-Michel is that the archangel Michael once came to St. Aubert, the bishop of Avranches, and asked him to build a church on this rocky island. However, Aubert kept ignoring him until Michael burned a hole in his skull by using only his finger. In October 709, the oratory, or place of worship, was finally constructed and dedicated to St. Michel. It is believed that St. Aubert is buried there. In the 11th century, an Italian architect designed the Romanesque church building of the Benedictine Abbey that was constructed on the mount. There are numerous underground crypts and chapels which help balance the weight of the building and keep the structure supported on the ground. In 2006, France announced that it would begin building a hydraulic dam that would remove silt surrounding Mont Saint-Michel to make it into an island once again. The project began in 2009 and was completed by 2013. Tours of Mont Saint-Michel are available and they typically average between 35 and 45 minutes in length.
<urn:uuid:4b08ce0f-6604-4967-a028-0d1db6123bf4>
CC-MAIN-2016-26
http://famouswonders.com/mont-saint-michel-in-normandy/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397562.76/warc/CC-MAIN-20160624154957-00129-ip-10-164-35-72.ec2.internal.warc.gz
en
0.983826
290
3.109375
3
New Mexico Census Transcriptions by Lydia Uribe© Submitted November 1, 2010© Arizona became the 48th State February 14, 1912 New Mexico became the 47th State Jan. 6, 1912 Prior to 1866 that area was called New Mexico Territory. For the purpose of this exhibit, settlements were arranged below alphabetically, not chronologically. *Approximate count may not consider blank lines or unoccupied dwellings. In 1850, the counties covered a much greater area than in 1914. See 1855 map and 1914 map. |Official 1850 US Census NM tally published in 1853| |1850||Total Whites||Free Colored||Tally| Lydia Uribe has been doing research for several years. She has worked long hours transcribing early censuses in New Mexico. Below are some of her census transcriptions. VRE = Value of Real Estate Owned |Santa Ana County, New Mexico 1850 Census| |Index, all||Pages||J.M. Giddings||4,533 (actual)| |Algodones, Town||Page||J.M. Giddings||490| |Santa Ana, Town||Page||J.M. Giddings||179| |Santa Ana County||Page 1||J.M. Giddings||500| |Santa Ana County||Page 2||J.M. Giddings||500| |Santa Ana County||Page 3||J.M. Giddings||500| |Santa Ana County||Page 4||J.M. Giddings||500| |Santa Ana County||Page 5||J.M. Giddings||500| |Santa Ana County||Page 6||J.M. Giddings||500| |Santa Ana County||Page 7||J.M. Giddings||500| |Santa Ana County||Page 8||J.M. Giddings||427| 1850 Taos County Census The 1850 Territory of New Mexico census for Taos County was begun in July of 1850. The census had but one designation, called the Northern Division. About 9,500 people appeared on the 1850 Taos County Census. In 1850, Taos County covered a much greater area than in 1914. The 1850 census included name; age; sex; color; birthplace; if a person attended school, or was married within the year; if the person could read or write; if the person were a deaf-mute, blind, insane, or idiotic; real estate value and occupation. This exhibit was slightly modified to fit our compact table format. 
The census is divided into the web pages below, which have 600 names per page. First use the Name Index to locate the person, then go to the specified page below. |1850 Taos County Census||Name Index||In Alphabetical Order| |Abbert, Francisco Meliton to Cruz, Maria Estefana||Cruz, Maria Eusebia to Garcia, Maria Josefa||Gurciaga, Juan Francisco to Maldonado Juan Luis| |Maldonado, Margarita to Montoya, Felipe Antonio||Montoya, Felipe Antonio to Romero, Victoria||Romero, Ygnacia to Vernal, Juan| |Vernal, Juan Antonio to Xaramillo, Miguel Antonio| |Page 1||Page 2||Page 3||Page 4||Page 5||Page 6||Page 7||Page 8||Page 9| |Page 10||Page 11||Page 12||Page 13||Page 14||Page 15||Page 16||Page 17||Page 18|
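The lookup procedure above (find the person's position in the alphabetical Name Index, then open the corresponding transcription page) reduces to simple arithmetic once you know each page holds 600 names. A small hypothetical helper, not part of the original site, that maps a 1-based index position to a page number:

```python
def census_page(index_position, names_per_page=600):
    """Map a 1-based position in the name index to a transcription page number."""
    if index_position < 1:
        raise ValueError("index positions are 1-based")
    return (index_position - 1) // names_per_page + 1

print(census_page(1))     # -> 1   (first name on page 1)
print(census_page(600))   # -> 1   (last name on page 1)
print(census_page(601))   # -> 2   (first name on page 2)
print(census_page(7001))  # -> 12
```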
<urn:uuid:13372308-1394-49f2-ac4b-a7a213c58490>
CC-MAIN-2016-26
http://www.rootsweb.ancestry.com/~nma/Grace%20Censuses/census-lydiauribe.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783400031.51/warc/CC-MAIN-20160624155000-00143-ip-10-164-35-72.ec2.internal.warc.gz
en
0.751003
768
2.8125
3
Energy Film uses Nanoparticle Technology to provide a low-cost solution for improving window performance. The U.S. Department of Energy has stated that on average, 71% of solar heat gain in the summer and 49% of heat loss in the winter occurs at the window, creating a significant strain on the structure's heating and cooling systems. Energy Film is a new technology made with nanoparticles that can be easily applied to windows. It is a thin layer of film that effectively fights solar heat gain in the summer and retains heat in the winter. There are two Energy Film purchase options: Energy Film 75 (transparent) and Energy Film 40 (tinted). Energy Film 75 blocks 70% of heat-producing infrared radiation and blocks 97% of harmful UV radiation. Energy Film 40 blocks 81% of heat-producing infrared radiation, and blocks 98% of harmful UV radiation, which causes fading and deterioration of flooring, fabric, wall-hangings, and furniture.
<urn:uuid:c57a41ad-d27c-4129-8963-0d225c402154>
CC-MAIN-2016-26
http://www.igreenbuild.com/_coreModules/eShopping/productDetail.aspx?productMasterID=316
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395160.19/warc/CC-MAIN-20160624154955-00120-ip-10-164-35-72.ec2.internal.warc.gz
en
0.917063
198
2.96875
3
One-time University of Tartu student Tõnu Karma reached the Livonian coast for the first time in 1948. Now, 50 years later, Karma lives in Riga, the city where most remaining Livonians reside today. "The Livonians are to blame for the fact that I became a Latvian citizen", says 74-year-old Tõnu Karma on the subject of Livonians, giving some insight into his own life as well. It is difficult to speak about Livonians without mentioning Tõnu Karma. It is even harder to speak of Karma and not bring up the topic of Livonians. In 1948 a group of students from the University of Tartu visited the coastal villages of Livonia, led by the legendary grand old man of language Paul Ariste. This was the first such expedition following World War II. In the period immediately after the war approximately 800 Livonians lived in 12 villages dotted along the coast of the Gulf of Riga; now only about fifty Livonians remain. On the eve of World War II, with the occupation of the Baltic States, the Livonian coast became the western border of the Soviet Union. The beach was closed, barbed wire blocked access to the sea and the beach's sand was harrowed. The Livonians, who had earned a living fishing, were now forced to search for work elsewhere. And so it is that the majority of Livonians now live in Ventspils and Riga. The students participating in the research expedition went from village to village and recorded the language. "After a few weeks you got a grasp of the language and could start struggling to communicate in Livonian," recalls Karma. "Learning the language was made easier by the fact that the Livonians had close ties with the Island of Saaremaa and therefore a large number of them spoke Estonian." According to Tõnu Karma, the influence of Latvian on the Livonian language caused it to seem foreign. In order to better understand Livonian, he decided to learn Latvian. He thereby began studies in Latvian at the University of Tartu, which lasted two years. 
Expeditions continued to be held each summer and Karma participated in the following two research trips as well. In continuing the description of his life, Karma is forced to make a slight deviation into Stalinist language theory. He speaks of how the Soviet regime eliminated the educated who were disloyal to those in power, labelling them as bourgeois nationalists. Those who were given this title of "enemy of Communism" had to repent. Those who expressed regret were spared a trip to Siberia, yet lost their jobs. This is what happened to the most renowned specialist and researcher of Baltic studies of all time, Janis Endzelins, who worked at the University of Riga. Endzelins refused to accept the theory coined by Soviet linguist Nikolai Marr, according to which numerous different languages fused together as time passed, thereby decreasing the total number of languages in existence. This type of theory suited the ideology of the time to a tee. Every instructor of philology was forced to lecture on Marr's theories. Endzelins was the only one against this and announced that he accepted neither Marr nor his ideas. Endzelins was removed from the university, since he had become a bourgeois nationalist. Estonian Paul Ariste was called upon to continue Endzelins' uncompleted work. "Then a young Latvian student came to Ariste for a consultation," explains Tõnu Karma. "By this time I had studied Latvian for a few years and Ariste told me to take the girl out on the town until he freed up; this is how we got acquainted. We wrote to each other for five years and it got to the stage that one of us had to leave our homeland. We decided that it would be easier for me to find work in Latvia than for her to do the same in Estonia." 
When courses in Livonian language and cultural history were introduced at the University of Latvia in 1995, Tõnu Karma took up a post as instructor. This Estonian residing in Riga is now an honorary member of the Latvian Academy of Sciences. Estonian president Lennart Meri has presented Karma with the Order of the White Cross and Latvian president Guntis Ulmanis has granted him the Three Star Medal. "I'm not sure what Order I'm supposed to wear them in", ponders Karma. Karma is one of 1,300 Estonians with Latvian citizenship. When the Baltic States regained their independence, people had to decide whether to apply for Estonian or Latvian citizenship. Latvia offered its politically repressed citizens certain benefits and this is what swayed Karma's decision. "I have experienced prison camp life in Russia; one and a half years in seven different spots. The first was truly a death camp. In August of 1944 I was recruited into the German army, into a reserve regiment of the border guard consisting of Estonians. We didn't get a chance to fire a single shot when the Russians advanced. Who wanted to, stayed here; who didn't, left for the West. I thought to myself, 'I haven't managed to do anyone any harm..., I was even wearing civilian clothes,'" recalls Karma. 
The central site of celebrations is the village of Mazirbe (Ire in Livonian), where there is a cultural center, school and church, whose construction was supported by kindred Hungarians, Finns and Estonians. There is also a fishing boat cemetery in Mazirbe - fishing boats sawed in half lie rotting on the forest floor. Karma reveals the traditions: "Boats were sawed in half before as well. The old ones were used to hold fishing gear and smoke fish. Boats that were no longer seaworthy were used for Midsummer bonfires. That boat cemetery in the Mazirbe forest - before the occupation, boats were never left to rot like that." Livonians make up a small group: they have one cultural center, five vocal ensembles and their own representative in the Latvian parliament. The first book to be written in Livonian was published in 1863 and from then on approximately fifty Livonian language books have gone to print. In 1935 the first Livonian language textbook was published. Throughout history approximately 30 people are known to have written poetry in Livonian, among them one Estonian, one Latvian and one Finn. "Never has there been a Livonian language school or church", says Tõnu Karma, revealing the fragility of Livonian culture. However, church services have been held in Livonian, presided over by the young Finnish pastor E. K. Erviö, who held a degree in theology from the University of Helsinki. "He travelled to the Livonian coast a few times a month. Initially he went from village to village christening children, then he learned Livonian and held sermons in the region's native tongue. This was interrupted in 1937", explains Karma. The last time the Lord's Prayer resounded in Mazirbe church was a few summers ago when a service was led by Edgar Valgama, a Livonian living in Finland. Even though Livonian villages lie along a 60 km strip of coastline, the language has three dialects: eastern, central and western. Books have been written in all three dialects. 
Livonians living in the south-west corner of Estonia never got as far as the printing of their own book. The first Livonian book (the Gospel of St. Matthew) was not published by church clergymen, but rather on the initiative of those studying the language. An attempt was made to preserve Livonian language and culture through the written word. "Petõr Damberg spoke of how they got out of the path of the advancing World War I by crossing the sea to Saaremaa on foot and that they had that book with them. All of their other books were in different languages but one book was in Livonian," is how Karma describes the experiences of the author of Livonians' first reader. Written material in one's native tongue holds immense value for members of a small national group. Today only eight people speak Livonian as their mother tongue. The youngest among them was born in 1926. In response to the question of whether children could still be born into families with Livonian as their first language, Karma's explanation takes us to Finland. He speaks of a minister living in western Finland in whose home Livonian is spoken. Tõnu Karma: "He has bibles in various languages as well as the New Testament in Livonian. He acquired the language based on that book, without ever having had any direct contact with Livonians. It so happened that one night the phone rang. I can't remember whether he asked in Estonian or Finnish, if we could chat a bit in Livonian. I said why not. We spoke for about half an hour." Juha-Lassi Tast has a large family. He makes a point of speaking to his sons only in Livonian. Tõnu Karma feels that the survival of Livonians lies in their own hands. Valt Ernstreit has become a pillar of Livonian cultural life. The student of Balto-Finnic languages at the University of Tartu was behind the publication of an anthology of Livonian poetry, among other projects. A vocal ensemble has now been formed whose members are all young Livonians. 
Tõnu Karma believes: "Our own literature and songs - this also helps keep the Livonian language alive. 'Blow, Wind' is undoubtedly the most popular Latvian folksong. Now its origin is thought to be Livonian. One day when Livonian is no longer spoken, at least the song will endure."
<urn:uuid:7bb99366-8725-4e7a-a408-f912a29ab75b>
CC-MAIN-2016-26
http://www.suri.ee/r/liivi/air.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393332.57/warc/CC-MAIN-20160624154953-00114-ip-10-164-35-72.ec2.internal.warc.gz
en
0.982256
2,213
2.75
3
Backpack-wearing colugos show ideas about gliding are for the birds: A number of mammalian species, like flying squirrels, have evolved the ability to glide for long distances. Some researchers have suggested this is a way to save energy, since a nice long glide takes a lot less power than climbing down from one location and climbing back up to another. But it's an idea that has been difficult to test until now, when miniaturized electronics have allowed researchers to fit colugos with motion-sensitive backpacks. The tracking system shows that, to glide to a new location in the canopy, colugos have to expend a fair bit of energy to climb upwards before launching into the air—based on the biomechanics of it all, they could typically do better by moving horizontally through the canopy to their final location. And, whichever way they go about things, neither of those movements ends up adding a whole lot to the animal's energy budget. So, the authors suggest that gliding's just a matter of getting someplace quickly, rather than a way to conserve energy. Mixing GPS tracking and vomit times: "Seed dispersal is critical to understanding forest dynamics," this paper starts, reasonably enough. But, before long, the authors are feeding seeds to captive toucans and measuring how long it takes before they're vomited back up (an average of just over 25 minutes, in case you were curious). Then, just as with colugos, miniature backpacks appear, strapped on to wild animals—in this case, a GPS system on a toucan. Given the distance travelled and the amount of time involved, the authors can estimate just how far a toucan is likely to take a seed before puking it back up. Nearly half the seeds get over 100m from their site of origin. Rinse and spit to go full term: Sometimes, the basic premise of a study seems almost as weird as the results. First, the results: using antimicrobial mouthwash may help avoid some pre-term births. 
Superficially, that's not an obvious experiment, but it turns out there's a good reason to have tried it: we've known for a while that periodontal infections are associated with pre-term births. So, the authors did a double-blind study where some pregnant women with periodontal disease were given an antimicrobial mouthwash. The numbers are pretty small, but the results look promising, as those who used the mouthwash had a reduced rate of pre-term births compared to the controls. Cloudy, with a chance of rain from geysers on an orbiting body: Saturn's moon Enceladus is a fairly small body, but it seems to have an outsized effect on that planetary system, thanks to a reservoir of water that it sprays out through geysers. Its output is steady enough to have created a ring that shares the moon's orbit. Now, researchers have also found that water forms a giant torus that's 50,000km high and pumps water into Saturn's upper atmosphere. The origin of that water had been an open question, previously, as models had suggested that any water on the planet should be limited to the deeper layers. Obvious result of the week—kids eat more vegetables if you hide them in their food: The authors of this study managed to slip some vegetables into children's food by mixing puréed vegetables into their regular intake, a manipulation that required the study be registered as a clinical trial. The net result, however, was that the kids kept eating the same volume of vegetable side dishes, but upped their total intake of veggies. The total amount of energy ingested by the kids also went down, which should help with obesity.
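The toucan estimate above combines two measurements: how fast the bird moves (from the GPS backpack) and how long a seed stays down (roughly 25 minutes before regurgitation). As a back-of-the-envelope sketch with made-up movement figures (the study's actual dispersal model is more sophisticated):

```python
def dispersal_distance_m(speed_m_per_min, retention_min):
    """Crude estimate: how far a seed travels before being regurgitated."""
    return speed_m_per_min * retention_min

# Hypothetical figure: a toucan averaging 6 m/min of net displacement over a
# 25-minute gut-retention window would carry a seed roughly 150 m.
print(dispersal_distance_m(6, 25))  # -> 150
```

The real analysis works with distributions of both quantities rather than single averages, which is how "nearly half the seeds get over 100m" falls out.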
<urn:uuid:f989f90d-b52a-499c-bfbf-10586a11063d>
CC-MAIN-2016-26
http://arstechnica.com/science/2011/07/weird-science-puts-electronics-in-the-backpacks-of-wild-animals/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783403825.35/warc/CC-MAIN-20160624155003-00083-ip-10-164-35-72.ec2.internal.warc.gz
en
0.970193
776
3.34375
3
The nonlinear equations for a loudspeaker are extremely difficult to solve exactly, as the solutions involve Volterra series expansions and higher-order system functions and impulse responses [A. R. Kaiser et al., J. Audio Eng. Soc. 35, 421-433]. Indeed, these complex methods are required for exact system simulations; however, they can be greatly simplified with a reasonable assumption about the system. By assuming that the nonlinearity is not so large that it causes an excessive amount of distortion, say more than about 20% THD, a vastly simplified approach can be developed. This near-linear assumption cannot always be made, but it should clearly be accurate for higher-performance units. This presentation will show how to use a computer algebra system (Maple) to develop these simplified predictions given the nonlinearities of the parameters. These equations will then be used to show the effect that various loudspeaker enclosure designs have on the distortion of a given (typical) loudspeaker, using a commercially available software package written by the author.
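The near-linear assumption can be made concrete with a small numerical example: when a memoryless nonlinearity is weak, each polynomial term contributes low-order harmonics whose amplitudes follow directly from trigonometric identities, and THD can be read off from a Fourier analysis of one period. The sketch below illustrates that idea in Python; it is not the author's Maple derivation, and a real driver also has memory (frequency-dependent) effects the memoryless model ignores:

```python
import math

def harmonic_amplitude(signal, k, n):
    """Amplitude of the k-th harmonic of one period sampled at n points."""
    re = sum(signal[i] * math.cos(2 * math.pi * k * i / n) for i in range(n))
    im = sum(signal[i] * math.sin(2 * math.pi * k * i / n) for i in range(n))
    return 2.0 * math.hypot(re, im) / n

def thd(nonlinearity, n=2048, harmonics=5):
    """THD of a unit-amplitude sine pushed through a memoryless nonlinearity."""
    x = [math.sin(2 * math.pi * i / n) for i in range(n)]
    y = [nonlinearity(v) for v in x]
    fund = harmonic_amplitude(y, 1, n)
    dist = math.sqrt(sum(harmonic_amplitude(y, k, n) ** 2
                         for k in range(2, 2 + harmonics)))
    return dist / fund

# Weak quadratic nonlinearity y = x + 0.1*x**2: since sin^2 = (1 - cos 2wt)/2,
# the x**2 term folds into a DC offset plus a 2nd harmonic of amplitude 0.05,
# so THD is about 5% while the fundamental stays essentially unchanged.
print(round(thd(lambda v: v + 0.1 * v * v), 3))  # -> 0.05
```

This is exactly the regime where a perturbation treatment works: each weak polynomial term maps to a predictable, small set of harmonics, instead of the full Volterra machinery.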
<urn:uuid:6523e487-3e10-4caf-9bae-9ed1547abbd9>
CC-MAIN-2016-26
http://www.auditory.org/asamtgs/asa97pen/4aEA/4aEA6.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398209.20/warc/CC-MAIN-20160624154958-00116-ip-10-164-35-72.ec2.internal.warc.gz
en
0.92332
218
2.671875
3
Pronunciation: (kûr'dl), —v.t., v.i., -dled, -dling. 1. to change into curd; coagulate; congeal. 2. to spoil; turn sour. 3. to go wrong; turn bad or fail: Their friendship began to curdle as soon as they became business rivals. 4. curdle the (or one's) blood, to fill a person with horror or fear; terrify: a scream that curdled the blood. Random House Unabridged Dictionary, Copyright © 1997, by Random House, Inc., on Infoplease.
<urn:uuid:c72eb372-58ac-4817-a336-85f87c7ce9b8>
CC-MAIN-2016-26
http://dictionary.infoplease.com/curdle
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395621.98/warc/CC-MAIN-20160624154955-00090-ip-10-164-35-72.ec2.internal.warc.gz
en
0.898138
142
2.6875
3
Questions relating to the use and application of the Open Game License (OGL) The Open Game License (OGL) allows for derivative works of game material to be produced, while allowing the originator to protect their Intellectual Property. It is usually (although not exclusively) used to open up game mechanics and associated terms for reuse in supporting products by third party publishers while retaining exclusive ownership of setting-specific information. Technically, it can be used to make as much or as little of a game (mechanics, setting, or almost anything else) open for reuse as the licensor wishes. It was originated by Wizards of the Coast for the 3rd Edition of Dungeons and Dragons, in order to allow for third party support of that game. While the necessity of such a license can be questioned (game rules are not subject to U.S. copyright laws), it provided a safe harbor for third party publishers wishing to make material compatible with or based upon that game system. Material specified for reuse under the terms of the OGL is known as Open Game Content, or OGC. Alongside the OGL, Wizards of the Coast released the d20 System Reference Document, or SRD, a sanitized version of the 3rd Edition Dungeons and Dragons ruleset composed entirely of Open Game Content. This was later updated to the 3.5 version of that ruleset. The OGL has since been included in a number of other games - most notably those derived from the SRD, but other systems have also been opened through the OGL. The OGL should not be confused with the d20 System License, which allows for additional use of terms and logos trademarked by Wizards of the Coast while adding further restrictions on the use of OGL material from the SRD.
<urn:uuid:80932c6e-c626-4f41-9e73-fb987fdca1d5>
CC-MAIN-2016-26
http://rpg.stackexchange.com/tags/ogl/info
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396224.52/warc/CC-MAIN-20160624154956-00092-ip-10-164-35-72.ec2.internal.warc.gz
en
0.959974
366
2.59375
3
This book refutes the common, misconceived notion that Islam is a violent and intolerant religion, which calls upon its followers to wage unceasing war, called jihad, against all non-Muslims. It is alleged that Islam prohibits all freedom of religion, propagates its message by force, and coerces Muslims by threat of the death penalty to remain within its fold. Another charge is that Islam does not tolerate any criticism of its teachings and urges the faithful to kill anyone who speaks against it. What does the Holy Quran actually teach on these questions, and what kind of example did the Holy Prophet Muhammad set in these matters? To find out, please read the book Islam, Peace and Tolerance by clicking on its cover below. (The book is in PDF format.)
<urn:uuid:48fcd9ce-6ab8-41df-877e-db9c5127bc80>
CC-MAIN-2016-26
http://ahmadiyya.org/islam/islam-pt.htm
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396106.25/warc/CC-MAIN-20160624154956-00030-ip-10-164-35-72.ec2.internal.warc.gz
en
0.946663
172
2.609375
3
Aging & Health A to Z Basic Facts & Information Nursing homes have changed dramatically over the past several decades. These changes have been driven by government regulations and consumer pressures. Today's nursing homes are highly regulated, high-quality institutions for the care and treatment of older adults who have severe physical health and/or mental disabilities. Is a Nursing Home Right for You? Almost half of all people who live in nursing homes are 85 years or older. Relatively few residents are younger than 65 years. Most are women (72%), many of whom are without a spouse (almost 70% are widowed, divorced, or never married) and have only a small group of family members and friends for support. The Most Common Reason for Living in a Nursing Home Some type of disability affecting activities of daily living (ADLs) is the most common reason that older people live in nursing homes. Not surprisingly, people living in nursing homes generally have more disability than people living at home. About 25% of nursing-home residents need help with one or two activities of daily living (for example, walking and bathing), and 75% need help with three or more. More than half of residents have incontinence (either bowel or bladder), and more than a third have difficulty with hearing or seeing. In addition to physical problems, mental conditions are common in nursing home residents. In fact, dementia remains the most common problem and affects an estimated 50-70% of residents. More than three-fourths of nursing-home residents have problems making daily decisions, and two-thirds have problems with memory or knowing where they are from time to time. At least one-third of nursing home residents with dementia also have problematic behaviors. These behaviors may include verbal and physical abuse, acting inappropriately in public, resisting necessary care, and wandering.
Communication problems are also common: almost half of nursing home residents have difficulty both being understood and understanding others. Depression is another condition that affects nursing home residents. Research has shown it may occur more often in nursing home residents than in individuals living in the community. Length of Stay Although disability is common among nursing home residents, the length of stay varies greatly. Twenty-five percent of people admitted stay only a short time (3 months or less). Many of the people who stay for a short time are admitted for rehabilitation or for terminal (i.e., end-of-life) care. About half of residents spend at least 1 year in the nursing home, and 21% live there for almost 5 years. Interestingly, function often improves in many of the residents who stay for a longer time. Risk Factors for Admission There are several risk factors for admission to a nursing home, including the following:
- Age. The chance of being admitted to a nursing home goes up rapidly with age. For example, about 20% of people 85 years and older live in nursing homes, compared with just 1.1% of people 65-74 years of age.
- Low income.
- Poor family support, especially in cases where the older adult lacks a spouse or children.
- Low social activity.
- Functional or mental difficulties.
Characteristics of Nursing Homes Nursing homes increasingly offer medical services similar to those offered in hospitals after surgery, illness, or other sudden medical problems. Older adults need a higher level of care, and hospital stays are shorter than they used to be. Medical services vary a lot among nursing homes, but include:
- kidney dialysis
- orthopedic care (care for muscle, joint, and bone problems)
- breathing treatments
- support after surgery
- intravenous therapy and antibiotics
- wound care
Traditionally, these services have been available only in hospitals and rehabilitation centers.
Choosing a Nursing Home Your family doctor or other healthcare professional (such as home health nurses and social workers) can provide recommendations for nursing homes. Older adults and/or family members should try to visit as many places as possible to get a sense of what each place is like, including the overall feeling and quality of care. Using a checklist can help you evaluate quality, the range of services, convenience, and costs. Your visit may last an hour or two so that you can meet and talk with the admissions officer, nursing home administrator, head nurse, and social worker. Remember that no nursing home is perfect, and all will likely be very different from the current living situation. Nursing Home Checklist
- Is the nursing home clean? Are there any unpleasant smells?
- Is it well maintained?
- Do the residents look well cared for?
- Are the rooms adequate?
- What recreational and private space is available?
- Are there safety features, such as railings and grab bars?
- Is the home licensed by the state and certified by Medicaid?
- How many nurses and nursing assistants are there compared with how many residents?
- Do the administrators and medical professionals have special training in geriatrics or long-term care?
- Are key professionals full-time or part-time?
- How long have the managers and medical professionals been with the nursing home?
- What type of medical coverage is provided?
- How close is the nursing home to family members? How close is it to the nearest hospital?
- What is the food like?
- How much do basic services cost? What services are covered?
- What additional services are available? How much do they cost?
- What happens if a person runs out of money and needs medical aid?
Nursing homes may often seem scary and depressing, and moving into one can fill people with a sense of betrayal and failure. Family involvement is important in helping the older person make the transition to a new living arrangement.
Contrary to the stereotype, families do not abandon their loved ones by placing them in a nursing home. In fact, only a few nursing home residents are truly without any family. Family members are encouraged to visit residents regularly and to be involved in the total care of their older relative. Family members can offer company and help with the basic activities of daily living, and they may be better able to communicate the needs of the resident. Updated: March 2012 Posted: March 2012
<urn:uuid:dd0632e8-c25e-4fa4-82f1-36593b6736d6>
CC-MAIN-2016-26
http://www.healthinaging.org/aging-and-health-a-to-z/topic:nursing-homes/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396224.52/warc/CC-MAIN-20160624154956-00070-ip-10-164-35-72.ec2.internal.warc.gz
en
0.965804
1,281
2.8125
3
Grand Jury Advice and Guides
Grand Jury - American Bar Association — Answers to frequently asked questions provide an introduction to, and overview of, the grand jury system.
Grand Jury - Equal Protection and Due Process — Article by Richard Alexander and Sheldon Portman of the Alexander Law Firm discusses the history of, and challenges to, the grand jury system.
Grand Jury - Fully Informed Grand Jurors Alliance — Independent organization seeks to enhance the effectiveness of grand juries through public awareness and education about jury functions.
Grand Jury - How Does It Work? — Law Street article presents a concise overview of the grand jury process. Includes discussion of the Whitewater and runaway grand juries.
Grand Jury - Indictment by Grand Jury — Attorney William F. Nimmo offers this concise introductory profile of the grand jury process.
Grand Jury - Its History, Its Secrecy, and Its Process — Mark Kadish's essay discusses the evolution of the grand jury in light of adherence to due-process rights. From the Florida State Law Review.
Grand Jury - Legal Definition of Presentment — Lectric Law Library examines two forms of grand jury findings, a presentment and an inquisition, based upon the scope of an investigation.
Grand Jury - What It Is, What It Does — Article by Andrea F. McKenna profiles the duties of state grand juries. Presented by the Pennsylvania Attorney General's Office.
Grand Jury - Witness Checklist — Law firm of Finer & Pugsley offers a manual for preparing and defending witnesses involved in grand-jury testimony.
<urn:uuid:704bb4a3-deaf-4610-88ed-46f451f11a61>
CC-MAIN-2016-26
http://www.constitution.org/jury/advice_and_guides.htm
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783392069.78/warc/CC-MAIN-20160624154952-00032-ip-10-164-35-72.ec2.internal.warc.gz
en
0.706919
567
2.8125
3
Eleven leopards identified on proposed road route in Iran Bafq Protected Area under threat from new road Eight adult leopards have been identified in Bafq. (©ICS/YazdDoE/CACP/Panthera) January 2013. During a one-year monitoring program in Bafq Protected Area in central Iran, eleven Persian leopards were identified, including four males and four females; two of the females are accompanied by cubs, one with a single cub and one with twins. Moreover, one of the single females was filmed accompanied by an adult male, which may indicate that a third female is breeding in the population. Launched in January 2012, a one-year camera-trapping program was implemented by the Iranian Cheetah Society (ICS) and the Yazd Department of Environment, in partnership with the Conservation of Asiatic Cheetah Project (CACP) and Panthera, to understand the population make-up of the Asiatic cheetah and the Persian leopard across multiple reserves in central Iran, including Bafq. It is unusual to record two different leopard families in a single area in west Asia, and this suggests the high potential of Bafq to re-colonize surrounding habitats, if they are properly protected. According to recent information, the female with two cubs has successfully raised them; they have now left her and become independent. Her last image shows that she is now solitary, probably looking for a mate for the next year. Moreover, both of her independent offspring have been confirmed to be female, making six female leopards in a single reserve, assuming that all four of the other females are still alive. Construction of a road through Bafq is a major threat to the area's cheetahs and leopards. Photo courtesy of ICS Recently one of the Bafq Governor's Office authorities declared that the area does not merit protection, stating "We believe that with no more than two leopards and 6 cheetahs, Bafq Protected Area does not have high environmental importance to continue its protection as a reserve".
However, these investigations have revealed that the largest single population of the endangered Persian leopard in central Iran occurs in Bafq, and it is unusual to find six females in a similar-sized area elsewhere in west Asia. Bafq Protected Area - Under threat Established in 1996, the 850 km² Bafq Protected Area is one of the main habitats for various cats in Iran, but it is under severe threat from plans to construct a road through the area. The Iranian Cheetah Society, Yazd DoE, and the Conservation of Asiatic Cheetah Project (CACP) are negotiating with communities and authorities over this potential threat, and the issue has received huge media coverage aimed at stopping the road. The Bafq road is now undoubtedly the biggest concern among Iranian environmentalists for the survival of the Asiatic cheetahs and Persian leopards. One of the adult females with a yearling cub in summer 2012 (©ICS/YazdDoE/CACP/Panthera)
<urn:uuid:81433676-0010-4ea6-9d03-02682d187afb>
CC-MAIN-2016-26
http://www.wildlifeextra.com/go/news/bafq-leopards.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396029.85/warc/CC-MAIN-20160624154956-00183-ip-10-164-35-72.ec2.internal.warc.gz
en
0.963244
644
2.609375
3
Cause Phytophthora infestans, a fungus-like microorganism that also attacks potato (Irish, not sweet). It usually does not survive in soil or dead plant debris. For an epidemic to begin in any one area, the microorganism must overwinter in potato tubers (culls, volunteers) or be reintroduced on seed potatoes or tomato transplants, or live spores must blow in with rain. Besides potato and tomato, P. infestans can infect only a few other, closely related plants. Occasionally peppers and eggplants are mildly infected, as are a few related weeds such as hairy nightshade (Solanum sarrachoides) and bittersweet nightshade (Solanum ducamara). It is a common problem west of the Cascade Range. Since 1990, there have been severe outbreaks of late blight in commercial and home garden plantings of potato and tomato in both the United States and Canada. Many have been associated with new strains of the late-blight microbe. Late blight of tomato is a disease that progresses very rapidly. Cool, wet weather favors disease development; hot, dry weather checks it. Infected stems can harbor the pathogen in dry weather, enabling sporulation and late-blight spread when wetter conditions return. Rainy weather, fog, and dew are favorable for late blight. No commercial cultivars have sufficient resistance to P. infestans to slow disease development, and once disease begins, cultural controls may fail to slow its development. Chemical control, initiated before disease begins, is the only method that can prevent complete crop destruction. Symptoms Lesions start as irregular, greenish, water-soaked spots on leaves, petioles, and/or stems. Under cool, moist conditions, spots rapidly enlarge to form purplish black lesions. Lesions can girdle affected stems, killing foliage farther out. During periods of high humidity and leaf wetness, a cottony white mold usually is visible on lower leaf surfaces at the edges of lesions. 
In dry weather, infected leaf tissues quickly dry up, and the white mold disappears. On green fruit, gray-green water-soaked spots form, enlarge, coalesce, and darken, resulting in large, firm, brown, leathery-appearing lesions. If conditions remain moist, abundant white mold will develop on the lesions, and secondary soft-rot bacteria may follow, resulting in a slimy wet rot of the entire fruit. On ripe fruit, lesions have cream-colored concentric zones, which eventually coalesce and affect the entire fruit. Cultural control Cultural controls alone won't prevent disease during seasons with wet, cool weather. However, the following measures will improve your chances of raising a successful crop. - Plant only healthy-appearing tomato transplants. Check to make sure plants are free of dark lesions on leaves or stems. If starting transplants from seed, air-dry freshly harvested seed at least 3 days. - Destroy volunteer tomatoes and potatoes routinely by cultivation or herbicides. Do not let volunteers grow, even on compost piles. Infected tomato refuse should be buried or bagged and put in the trash. - Avoid wetting foliage when irrigating, especially in late afternoon and evening. - Space, stake, and prune tomato plants to provide good air circulation. - Tomato cultivars Mountain Magic, Wapsipinicon Peach, Matt's Wild Cherry, and Pruden's Purple had high levels of resistance to multiple races of the late blight pathogen in trials conducted by University of Wisconsin-Madison while the cultivar Legend, developed by OSU, was resistant to one race of the three pathogen races tested. 'Plum Regal', 'Defiant', and 'Iron Lady' also appear to have resistance to late blight. Some cherry tomato cultivars (Red Cherry and Sweetie) were more tolerant to late blight in WSU trials. Management in Organic Production Control of tomato late blight is dependent upon the combination of cultural management practices and limited fungicide use. 
It is imperative to use pathogen-free transplants. Choose, if possible, tomato varieties with resistance to multiple races of late blight; several heirloom varieties are available. Growing tomatoes under plastic or in glasshouses will reduce the environmental conditions required for infection, namely moisture on leaves. Keeping greenhouse temperatures warm and providing good air circulation will make the environment less conducive to late blight infections. Consider row orientation in larger plantings in the field. Select fields where winds enhance drying of the plants or have shorter dew periods. Apply irrigation water to the soil, not the foliage. Because late blight can overwinter on volunteer potato or tomato as well as weedy solanaceous plants, it is imperative that volunteers and nightshades be controlled in organic production. Discovery of late blight nearby or within a planting would call for protective spray measures as well as removal and destruction of infected plants or plant parts. Application of fixed copper with an acceptable labeling for organic use can offer protection against the oomycete that causes late blight; however, copper accumulates in soil. (See "Materials Allowed for Organic Disease Management" in Section 5.) The biocontrol material, Serenade, suppresses late blight and in rotation with copper has resulted in marketable fruit even in the presence of active late blight pressure. Rotating out of solanaceous hosts for three years will help to avoid copper build-up as well as populations of other pathogens that affect tomato. Interplanting tomatoes with other crops may aid in decreasing spread once an epidemic commences. Chemical control Spray at regular intervals. Begin chemical control programs before symptoms appear. - Bordeaux 8-8-100 offers limited control. H - CAA-fungicide (Carboxylic Acid Amides) formulations (Group 40) as a tank-mix with a fungicide with a different mode of action. 
Do not make more than two (2) sequential applications before alternating to a different mode of action. - Forum at 6 fl oz/A (non-staked plants) on a 5- to 10-day schedule. Do not apply within 4 days of harvest. 12-hr reentry. - Revus at 5.5 to 8 fl oz/A or Revus Top at 5.5 to 7 fl oz/A on 7- to 10-day intervals. Do not use Revus Top on varieties in which mature tomatoes will be less than 2 inches in diameter. Preharvest interval is 1 day for Revus Top; 0 day for Revus. 12-hr reentry. - Chlorothalonil formulations. - Ariston at 1.9 to 3 pints/A on 7-day intervals. Preharvest interval is 3 days. 12-hr reentry. - Bonide Fung-onil is available for home gardens. H - Bravo Ultrex at 1.3 to 1.8 lb/A on 7- to 10-day intervals. May be applied the day of harvest. 12-hr reentry. - Echo 720 at 1.38 to 3 pints/A on 7- to 14-day intervals. Preharvest interval is 0 days. 12-hr reentry. - Copper products are not recommended as stand-alone materials. - Champ WG at 1.06 lb/A on 3- to 10-day intervals. 48-hr reentry. H O - Cueva at 0.5 to 2 gal/100 gal water on 7- to 10-day intervals. May be applied on the day of harvest. 4-hr reentry. O - Cuprofix Ultra 40D at 1.25 to 3 lb/A on 5- to 10-day intervals. 48-hr reentry. - Kocide 2000 at 1.5 to 3 lb/A or Kocide 3000 at 0.75 to 1.75 lb/A on 5- to 10-day intervals. 48-hr reentry. - Liqui-Cop at 3 to 5 teaspoons/gal water. H - Nu Cop 50 WP at 2 to 3 lb/A on 7- to 10-day intervals. Do not apply within 1 day of harvest. 48-hr reentry. O - Curzate 60DF (Group 27) at 3.2 to 5 oz/A on 5- to 7-day intervals only in combination with another protective fungicide. Do not apply within 3 days of harvest. 12-hr reentry. - Dithane DF Rainshield at 1.5 to 2 lb/A or Dithane F-45 Rainshield at 1.2 to 1.6 quarts/A. Do not apply within 5 days of harvest. 24-hr reentry. - ManKocide at 2.5 to 5 lb/A on 7- to 10- day intervals. Under moderate to severe disease pressure use the higher rate on 3- to 7-day intervals. Do not apply within 5 days of harvest. 
48-hr reentry. - OSO 5% SC (Group 19) at 3.75 to 13 fl oz/A on 7- to 14-day intervals will suppress disease. Can be applied the day of harvest. 4-hr reentry. - Phosphorous acid-based products (Group 33) are labeled for use and are very effective; some are available for home garden use. - Agri-Fos at 1.25 to 2.5 quarts/A (2 teaspoons to 2 fl oz per gal of water). First application at transplant or when direct seeded crops are at 2 to 4 true leaf stage, then at 1- or 2-week intervals as required to control disease. 4-hr reentry. H - Alude at 1.25 to 2 quart/A on 7- to 14-day intervals. First application at transplant or when direct seeded crops are at 2 to 4 true leaf stage, then at 1- or 2-week intervals as required to control disease. 4-hr reentry. - Presidio (Group 43) at 3 to 4 fl oz/A on 10-day intervals in combination with another fungicide that has a different mode of action. Preharvest interval is 2 days. 12-hr reentry. - Ranman (Group 21) at 2.1 to 2.75 fl oz/A on 7- to 10-day intervals. Do not make more than three (3) applications of Ranman before alternating for three intervals to a labeled fungicide with a different mode of action. Preharvest interval is 0 days. 12-hr reentry. - Ridomil Gold Bravo SC at 2.5 pints/A. Do not apply within 5 days of harvest. 48-hr reentry. - Strobilurin fungicides (Group 11) are labeled for use. Do not make more than one (1) application of a Group 11 fungicide before alternating to a labeled fungicide with a different mode of action. - Aftershock at 5.7 fl oz/A is labeled for late blight suppression. Do not apply within 3 days of harvest. 12-hr reentry. - Cabrio EG at 8 to 16 oz/A or at 8 to 16 oz/100 gal spray volume (dilute). Preharvest interval is 0 days. 12-hr reentry. - Evito 480 SC at 5.7 fl oz/A. Preharvest interval is 3 days. 12-hr reentry. - Flint at 4 oz/A. Do not apply within 3 days of harvest. 12-hr reentry. - Quadris Flowable at 6.2 fl oz/A. Do not apply with an adjuvant. May be applied the day of harvest. 
4-hr reentry. - Quadris Opti at 1.6 pints/A. Do not apply until 21 days after transplanting or 35 days after seeding. Adjuvants should not be used. Preharvest interval is 0 days. 12-hr reentry. - Reason 500 SC at 5.5 to 8.2 fl oz/A. Do not apply within 14 days of harvest. 12-hr reentry. - Premixes of Group 11 + 27 fungicides are available for use. - Tanos at 6 to 8 oz/A on 5- to 7-day intervals. Must be tank-mixed with an appropriate fungicide with a different mode of action (non-Group 11 or 27). Do not apply within 3 days of harvest. 12-hr reentry. - Premixes of Group 40 + 45 fungicides are available for use. - Zampro at 14 fl oz/A for no more than three (3) applications per season. Do not apply more than two (2) applications before alternating to a fungicide with a different mode of action. Preharvest interval is 4 days. 12-hr reentry. - Double Nickel LC at 0.5 to 6 quarts/A on 3- to 10-day intervals. Can be applied the day of harvest. 4-hr reentry. O - Regalia (Group P5) at 1 to 4 quarts/A plus another fungicide on 7- to 10-day intervals. Preharvest interval is 0 days. 4-hr reentry. O - Serenade MAX at 1 to 3 lb/A on 5- to 7-day intervals. Applications can be made up to harvest. 4-hr reentry. O - Sonata at 2 to 4 quarts/A on 7- to 14-day intervals. Can be applied up to and on the day of harvest. 4-hr reentry. O References Seidl Johnson, A.C., Jordan, S.A., and Gevens, A.J. 2014. Novel resistance in heirloom tomatoes and effectiveness of resistance in hybrids to Phytophthora infestans US-22, US-23, and US-24 clonal lineages. Plant Dis. 98:761-765. Shoemaker, P.B., Milks, D.C., and Lynch, N.P. 2003. Fungicides and combinations for tomato late blight. F&N Tests 58:V101
<urn:uuid:baf2c85d-678a-48c8-9fff-0e3d885cf06b>
CC-MAIN-2016-26
http://pnwhandbooks.org/plantdisease/tomato-lycopersicon-esculentum-late-blight
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396949.33/warc/CC-MAIN-20160624154956-00085-ip-10-164-35-72.ec2.internal.warc.gz
en
0.887389
2,969
4.03125
4
Located at Kelker and Siegfried Streets, Steelton, PA [40.23720, -76.82720]. Founded in 1795, Midland Cemetery is the final resting place for slaves, former slaves, men of the United States Colored Troops, Buffalo Soldiers and numerous leaders of the area's African American community. Notable burials from Pennsylvania's Grand Review 100 Voices include: Lemuel Butler, Co. H, 55th Massachusetts; Charles Henderson, Co. K, 127th USCT; Andrew Hill, Co. B, 6th USCT. Other USCT burials include: Samuel E. Coles, Co. G, 127th USCT; Thomas Dorsey, Co. B, 41st USCT; William Jackson, Co. H, 45th USCT; Charles James, Co. C, 127th USCT; Richard Johnson, Co. G, 127th USCT; Israel Palmer, Co. C, 29th USCT; Charles Preston, Co. G, 127th USCT; John Richardson, Co. D, 10th USCT; Nicholas Webster, Co. H, 127th USCT; William Woodburn, Co. D, 32nd USCT. (By Rebecca Solnit)
<urn:uuid:333cadf5-4a5e-4b96-8bbf-9d278ca10f7f>
CC-MAIN-2016-26
http://hd.housedivided.dickinson.edu/node/32845
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783402516.86/warc/CC-MAIN-20160624155002-00170-ip-10-164-35-72.ec2.internal.warc.gz
en
0.892955
274
2.5625
3
Neil DeGrasse Tyson Eugene Mirman - StarTalk - Infrasound by SciBinge Listening to this excerpt of StarTalk, Neil DeGrasse Tyson points out a fascinating study. Vic Tandy of the School of International Studies and Law at Coventry University put together a paper back in 1998 called The Ghost in the Machine. In it, he describes the science behind a phenomenon concerning the human eye, which may be the root cause of a portion of "ghost sightings." Apparently 18-19 Hz sound waves can cause a resonant vibration in the eye, inducing visual artifacts that are misinterpreted as supernatural. These frequencies are referred to as infrasound because they are below the hearing range of humans. 20 Hz is classically the lowest frequency a human can detect via hearing (vibration through touch is a separate issue), and the overall range of hearing narrows with age. 20 Hz-20,000 Hz is classically known to be the range of hearing in a baby, with pristine, undamaged ears. Everyone's eyeballs are different, so the exact frequency must vary. Tandy states on the resonant frequency: "Eyeballs (1-100Hz mostly above 8 Hz and strongly 20-70Hz effect difficulty in seeing)" As a side note, infrasound can be heard by elephants, which leads me to conclude that there must be significantly fewer ghost-elephant sightings amongst their communities. Here's the abstract of Ghost in the Machine: "In this paper we outline an as yet undocumented natural cause for some cases of ostensible haunting. Using the first author's own experience as an example, we show how a 19hz standing air wave may under certain conditions create sensory phenomena suggestive of a ghost. The mechanics and physiology of this 'ghost in the machine' effect is outlined. Spontaneous case researchers are encouraged to rule out this potential natural explanation for paranormal experience in future cases of the haunting or poltergeistic type."
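To get a feel for what a 19 Hz signal is, here's a small illustrative sketch (my own, not from the StarTalk episode or Tandy's paper; the sample rate and phase offset are arbitrary choices) that synthesizes a 19 Hz tone digitally and recovers its frequency by counting zero crossings:

```python
import math

# Illustrative sketch: synthesize a 19 Hz tone (infrasound, just below the
# ~20 Hz lower limit of human hearing) and estimate its frequency from
# zero crossings. Sample rate and phase offset are arbitrary assumptions.
SAMPLE_RATE = 1000   # samples per second
DURATION = 2.0       # seconds
FREQ = 19.0          # Hz, the frequency Tandy implicated

samples = [
    math.sin(2 * math.pi * FREQ * n / SAMPLE_RATE + 0.5)  # 0.5 rad phase
    for n in range(int(SAMPLE_RATE * DURATION))
]

# A sine wave crosses zero twice per cycle, so:
# frequency = crossings / 2 / duration
crossings = sum(1 for a, b in zip(samples, samples[1:]) if a * b < 0)
estimated_hz = crossings / 2 / DURATION
```

With these settings the zero-crossing estimate lands within a fraction of a hertz of 19 Hz; fed to a large speaker instead of a frequency estimator, the same sample stream would be felt rather than heard.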
Download the paper here: Ghost in the Machine

Reference: Vic Tandy and Tony R. Lawrence (1998). "The Ghost in the Machine." Journal of the Society for Psychical Research, 62.
<urn:uuid:f08e107e-4520-4cf2-93c9-b22c2536fee1>
CC-MAIN-2016-26
http://astronasty.blogspot.com/2011_10_01_archive.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396222.11/warc/CC-MAIN-20160624154956-00085-ip-10-164-35-72.ec2.internal.warc.gz
en
0.897157
489
2.703125
3
Foodborne illness in the U.S. carries an annual price tag of up to $77.7 billion, according to research published in January in the Journal of Food Protection. The paper, "Economic Burden from Health Losses Due to Foodborne Illness in the United States," is by Robert Scharff, an assistant professor in the department of consumer sciences at Ohio State University. The study updates the economic effects of foodborne illness to reflect a reduced estimate of the number of cases issued by the federal Centers for Disease Control and Prevention in 2010. Such illnesses sicken 48 million and kill 3,000 every year, according to the CDC. That was a big drop from 1999, when the agency reported 76 million sickened and 5,000 killed annually. The CDC attributes the difference to improvements in its data and methods. Scharff offers two measures for cost of illness. One represents medical costs, productivity losses and mortality. An enhanced model replaces productivity loss with pain, suffering and functional disability measures intended to be more inclusive. The nationwide total of $77.7 billion is Scharff's enhanced estimate. For the basic model, it's $51 billion. That puts the average cost per case of foodborne illness at $1,626 or $1,068, respectively. The paper looks at costs related to 31 pathogens. Salmonella accounts for about 28% of foodborne-related deaths and 35% of hospitalizations, making it the leading cause of both, according to the CDC. In Scharff's analysis it costs $4.4 billion under the basic model and balloons to $11.4 billion in the expanded model. "The difference here is largely due to the fact that salmonella has a decent chance of leading to reactive arthritis," Scharff told The Packer. "This condition does not have a big effect on productivity, but it does result in a high degree of pain and suffering." In 2010, the U.S. Department of Agriculture pegged the cost of salmonella at $2.7 billion. Listeria monocytogenes totals close to $2 billion under both Scharff models. E. coli O157:H7, by comparison, costs $607 million (basic) to $635 million (expanded). The USDA's 2010 number on E. coli O157 was $488.7 million. The study reflects all foodborne illness. According to the Watsonville, Calif.-based Alliance for Food and Farming, 2.2% of such illnesses can be traced to a farm where fresh produce is grown.
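The per-case averages follow directly from dividing each national total by the CDC case count. A quick back-of-envelope check, using the rounded figures quoted in the article ($77.7 billion, $51 billion, 48 million cases) rather than Scharff's exact inputs, comes out within about $10 of the published $1,626 and $1,068, with the gap attributable to rounding:

```python
# Back-of-envelope check of the average cost per foodborne-illness case,
# using the rounded figures quoted in the article. Small mismatches with
# the published $1,626 / $1,068 come from rounding of the inputs.
CASES_PER_YEAR = 48_000_000   # CDC's 2010 estimate of annual cases

enhanced_total = 77.7e9       # enhanced model: adds pain/suffering measures
basic_total = 51.0e9          # basic model: medical costs, productivity, mortality

enhanced_per_case = enhanced_total / CASES_PER_YEAR
basic_per_case = basic_total / CASES_PER_YEAR

print(f"enhanced: ${enhanced_per_case:,.2f} per case")
print(f"basic:    ${basic_per_case:,.2f} per case")
```

The same division applied to salmonella alone would need the pathogen-specific case count, which the article does not give, so the check is limited to the national totals.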
<urn:uuid:d0ac2ad3-9e6d-44e7-9837-f450d98ddd60>
CC-MAIN-2016-26
http://www.thepacker.com/fruit-vegetable-news/Study-Foodborne-illness-costs-up-to-777-billion-136744023.html?source=related
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391634.7/warc/CC-MAIN-20160624154951-00194-ip-10-164-35-72.ec2.internal.warc.gz
en
0.938219
546
2.546875
3
Extremely hot or cold periods are a serious problem for road surfaces: many cracks, potholes and car ruts are the consequence. Repairs are expensive, and the resulting construction sites cause thousands of traffic jams every year, leading to unnecessary extra car emissions.

The new pavement was developed by Prof Pilakoutas within the European research project ECOLANES. It is based on roller-compacted concrete (RCC), a mixture which uses hardly any water. The moment it is compacted, the concrete is firm and immediately ready for light traffic. This shortens the duration of road construction considerably. The new concrete is reinforced with steel fibres, which makes the material tougher and extends the lifespan of the pavement.

In order to make this new concrete an ecological product, the researchers have selected steel fibres that are leftovers from the recycling of waste tyres. Every year more than 3.2 million tons of waste tyres are recycled in the European Union, but so far there has been little use for the metal wires that form part of the tyres' reinforcement. For the new concrete the recycled steel fibres are ideal: they are at least 50 per cent cheaper than steel fibres from a metal-processing plant, and no raw material needs to be mined, which saves additional energy and resources.

The energy balance of the new material has been a focus for the researchers, and one aim was to make the concrete itself recyclable. If the pavement gets damaged, it can be removed, crushed and reused. This further reduces the use of raw materials and transportation costs. To make sure that the new material withstands both the forces of nature and intensive traffic, the concrete has been tested in many ways. For example, in the lab the concrete was exposed to extreme temperatures in a climate chamber.
The mix has also been laid at test sites in England, Cyprus, Romania and Turkey to examine the material under different climatic conditions and traffic situations. The results have been promising, and the scientists are now looking for an industry partner to market, mass-produce and deploy the new eco-concrete on a larger scale. youris.com provides its content to all media free of charge. We would appreciate it if you could acknowledge youris.com as the source of the content.
<urn:uuid:5ee5f891-d2bb-4fb6-a00f-108affdd1936>
CC-MAIN-2016-26
http://www.youris.com/Mobility/Road_Materials/Recycled_Streets.kl
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395992.75/warc/CC-MAIN-20160624154955-00118-ip-10-164-35-72.ec2.internal.warc.gz
en
0.950424
472
3.109375
3
Unlike most herons, the Cattle Egret inhabits dry grassland. White overall with a yellow bill, the Cattle Egret gets its name from the fact that it can generally be found around cattle, feeding on the insects and small creatures that the cattle disturb.

The Emu is a prehistoric bird that originated about 80 million years ago in Australia. Emus are closely related to the ostrich, rhea, cassowary and kiwi. These are flightless birds (they have very short wings and very weak wing muscles), but they can run very fast.

The name Flamingo derives from the Latin for "flame." There are six species of flamingo, two of which are exhibited here at Flamingo Land. Among those not exhibited are the Andean Flamingo, James' Flamingo and Lesser Flamingo. Some species can be found in huge flocks of up to 1 million birds!

Like ostriches and emus, the rhea is flightless and uses its long, powerful legs to escape from predators. Living in flocks of 30 or more, rheas roam the vast pampas grasslands in search of grass seed, roots and fruits. However, they are also known to boost their diets with protein-rich meals such as fledgling birds, insects and small reptiles.

Humboldt Penguins live in small colonies along the Pacific coastline of Chile and Peru. Like all penguins, Humboldt's are flightless marine birds that are superbly adapted for life in the sea: they have flipper-like wings and webbed feet which enable them to "fly" gracefully through the water at speeds of up to 15mph. Penguins feed on small fish such as sardines, mullet and anchovies.
<urn:uuid:ed726eed-2674-4803-b4fa-64737bf029d9>
CC-MAIN-2016-26
http://www.flamingoland.co.uk/zoo-and-conservation/zoo/birds.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397797.77/warc/CC-MAIN-20160624154957-00136-ip-10-164-35-72.ec2.internal.warc.gz
en
0.957185
365
3.390625
3
What does it look, feel, taste, or smell like?

Copper is a reddish-orange metal, one of only two metals (the other being gold) which has a color other than silver or gray.

How was it discovered?

As early as 10,000 years ago, people found small deposits of native (pure) copper metal in the ground. This copper was then hammered and used to make weapons, tools and decorations. In Northern Iraq, a copper pendant was found that can be dated back to about 8700 B.C.

Where did its name come from?

Copper gets its name from the Latin word cuprum, meaning "from the island of Cyprus." In the Ancient Roman world (whose common language was Latin), most copper was mined in Cyprus.

Where is it found?

Copper can be found underground in the form of copper ore. The main copper-ore-producing countries are Chile, the United States, Indonesia, Australia, Peru, Russia, Canada, China, Poland, Kazakhstan, Zambia, Zaire, and Mexico. Copper is usually obtained from the ores cuprite (Cu2O), tenorite (CuO), malachite (CuCO3·Cu(OH)2), chalcocite (Cu2S), covellite (CuS), and bornite (Cu5FeS4).

What are its uses?

As copper is a great conductor of electricity, it is used to make wires that carry electricity into homes, schools and businesses. In addition, copper is used to make locks, pipes, doorknobs, pots, bronze and jewelry. Most coins also contain copper, not just pennies (in fact, pennies are now mostly zinc due to rising copper prices, but other coins still contain substantial copper).

Is it dangerous?

A little copper is necessary for many living things. However, copper at higher levels can be toxic.
<urn:uuid:84b9531a-bed1-4840-bbe0-1611f4d54090>
CC-MAIN-2016-26
https://en.wikibooks.org/wiki/Wikijunior:The_Elements/Copper
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783408828.55/warc/CC-MAIN-20160624155008-00093-ip-10-164-35-72.ec2.internal.warc.gz
en
0.950716
418
3.34375
3
As the world's most famous play, Hamlet draws upon an almost shameless quantity of popular themes. Most of these, moreover, are sensational and sufficient to compel the groundlings to stand throughout Shakespeare's longest play. But the revenge tradition that underlies it, and that gives it gripping excitement, would have struck contemporary audiences as profoundly different from such bloody tragedies as they were used to. What set it apart was a hero who, because of his sensitive, moral nature, suffers keenly from his task. His is, as both his loved Ophelia and his friend Horatio say, a noble mind; and all evidence points to his reluctance to be cruel in order to be kind. The play for succeeding audiences has consequently become more than a simple revenge play: it has become archetypal as the ordeal of taking repulsive but occasionally passionate action. "It is we," wrote William Hazlitt, "who are Hamlet." And Coleridge acknowledged, "I have a smack of Hamlet, if I may say so." Few of us cannot identify with the hero, and many are the warm discussions about what is his "mystery" (3.2.352). Not only students, lay people, and troubled souls have argued about the melancholy Dane; psychoanalysts have also generously donated their services to unravel probably the most complex character in literature. But we must not underestimate, however crude it may be, the underlying revenge tradition. It gives to the play not only plot but also what we have called the tragedy of passion. Indeed, Hamlet's words to his only friend, Horatio—"Give me that man / That is not passion's slave" (3.2.68–69)—express one of the main struggles that Hamlet himself must undergo. For this tradition, Shakespeare draws mainly upon Seneca, partly upon The Spanish Tragedy, and also upon a cruder anonymous version of the play, now known as the Ur-Hamlet, no longer extant. Moreover, Shakespeare draws upon almost all of the horrendous elements of the tradition.
Hamlet is summoned by the ghost of his father to avenge his death at the hands of his brother Claudius. He sinks into a deep sadness, close at times to madness, in his mission. His already sick mind is sullied by sex—notably the incest of his mother, who has married Claudius within a month or two of the funeral. Not especially Senecan are the episodes involving Polonius's family, notably Ophelia and her tragic love for Hamlet, though her madness and probable suicide are partly in the tradition. More conventional are Hamlet's delay (though not its psychological causes) and his cunning concern to make the revenge as appropriate and condign as possible. The play within the play, which Hamlet devises to "catch the conscience of the king" (2.2.590–91), is surely an exploitation of the popular episode in The Spanish Tragedy. Many of the subsequent violent elements—the murder of Polonius, the leaping into Ophelia's grave, the fatal duel with Laertes, the accidental poisoning of Gertrude, and the ultimate, condign slaying of Claudius—are variations upon the revenge tradition. But those elements that would have most pleased and been recognized by the audience are the burden of revenge, the ghost, madness, incest, delay, and appropriateness in technique of revenge. What the audience would have witnessed with wonder are the philosophical extensions of cruel finesse and passion. These extensions, attributable largely to the noble and brooding mind of the revenger, are well expressed by him as "thoughts beyond the reaches of our souls" (1.4.56). In them recrimination for delay takes the form of self-analysis and of anguished reflection upon the state of man that have scarcely been equaled.

As in Julius Caesar, the play that probably preceded it by only a year, the protagonist is of a noble, philosophical mind. Shakespeare found compellingly interesting during these years—and probably never again—a protagonist who is not primarily of heroic stature.
(Bradley, intent upon making all four of the major heroes awesomely large, had to attribute to Hamlet "genius," and Bradley could not have done even that for Brutus.) These two are men of conscience and thought who have placed upon them an uncongenial burden, made even more intolerable by the crude environment that produces it. In placing Hamlet in the revenge tradition, we must seek to correct the common stereotype that critics who depend upon this tradition make of Hamlet's revenge. Hamlet's task is not so simple as killing the king. His, rather, is the most profound kind of revenge (if one can justly call it that) imposed upon any hero. His task is to set the times right, to purge the court of Elsinore. This duty, then, is in yet another sense much more profound than that of revenge tragedy. The play concerns the purging, partly by revenge, of a corrupt society. Hamlet must make of man more than a beast. And in doing so, he must constantly struggle not to be a beast himself, not to let his noble mind be overthrown, not to lose his "capability and godlike reason" (4.4.38), not to let his heart lose its nature.

Court of Elsinore

Most of the action in Hamlet takes place in the Court of Elsinore, which appears first in the second scene. Superficially, especially after the bleak, heartsick fear of the opening scene, set at midnight on the battlements and terrorizing not only the sentries but also the skeptical Horatio with two appearances of the Ghost, it seems to be a warm, bright, and civilized setting. After the midnight out-of-doors darkness—a darkness emphasized by Barnardo's opening and unanswered question to the seemingly void universe as well as to Francisco, "Who's there?" (1.1.1)—it is an indoor scene full of color and fine dress. Claudius, from the throne, reassuringly, brilliantly brings the newly formed state together. He logically explains the hasty marriage and the "mirth in funeral" (1.2.12).
He warmly deals with his supportive counselor Polonius, and genially gives his counselor's son, Laertes, permission to go to Paris. The threatened invasion of Fortinbras is expertly dealt with. Only Hamlet, a man on whom rests what G. Wilson Knight calls "the embassy of death," remains darkly alone, unresponsive to warm, reasonable consolation and a proffered stature as a son. Hamlet, who will prove to be the most difficult stepson in literature, answers only his mother's plea to stay in Denmark, and even she does not escape his scathing wit. On the whole, however, it seems to be a comfortable court. And scene 3 stresses this impression by bringing together in close intimacy Polonius and his family. Laertes gives words of worldly, experienced caution to protect his sister's virtue, but affection is shown even in her bantering reply to Paris-bound Laertes. Polonius then arrives and gives, in a celebrated fatherhood speech, counsel on a prudent but gentlemanly life. The most important function of the scene is the restraint placed upon Ophelia in not seeing Hamlet. He will but trifle with her, or "wreck" (2.1.113) her. Hamlet, of a noble nature free from all contriving, is later severely shaken by the narrow vision of the restraint and the close-heartedness that it represents. It is, all in all, a scene and a family not untypical of the court as, in more insidious and corrupt forms, we shall generally see it. It is narrow, politic, suspicious—a prison that does not have, like Hamlet, "a heart unfortified" (1.2.96). Yet, even without the Italianate villainy of Claudius, it is a court that will somehow merit a scourging of a terrible kind. Typical again are the character and fate of Polonius's family, which to a person will be wiped out. To grasp the true nature of Elsinore, and the purgation that it will receive, we must not begin with Claudius or Polonius or the premature settling of a disturbed state. We must not begin with a sophisticated indoor scene.
These scenes are often, as As You Like It illustrates, less close to reality than the scenes set in the forest or on the heath. We must, in short, begin the play as Shakespeare does, at midnight on the battlements, with characters confronting without pretense or control the raw evil, the rottenness of the state of Denmark.

Bodes Some Strange Eruption to the State

Many productions of the play omit, with serious consequences, the entire first scene. Their reasoning may be practical, for drastic cuts are necessary in Shakespeare's longest play. But a fundamental misunderstanding of the play is also likely. It is a scene that, as Horatio explains, "is prologue to the omen coming on," sent by "heaven and earth together" (1.1.123–24). Horatio likens it to the prodigious events preceding the death of Julius Caesar. A state is in jeopardy, and to the Elizabethans that threat of war meant that a sin-sick land is to be scourged. This first scene describes at length the preparations against an invasion by Fortinbras, who is also omitted from many productions, even though he will appear prominently at the end of the play. True, the Ghost will appear with his "dread command" (3.4.109) in the fifth scene, but he is needed at the start by his position to dominate the state's peril and to give, like Fortinbras, a military beginning as well as a military ending to the play. He terrifies not just because he is a ghost but also because he comes in the "warlike form / In which the majesty of buried Denmark / Did sometimes march" (1.1.47–49). He is the only ghost in extant Elizabethan drama to appear in armor. He deserves the first scene—even without Hamlet—to sound the note of the dominant theme of doom. Long and soft peace was not an auspicious condition in Elizabethan thought. Military theorists and theologians warned repeatedly that its symptoms were those of a sinful and sick land, ripe for sacking. There is an excessive softness in Claudius's kingdom, a peace-bred decadence.
The new king differs markedly from his martial brother. All the parasites of peace here have proliferated: courtiers—sinister and suave like Claudius, politic like Laertes, false like Rosencrantz and Guildenstern, or effete like Osric; corrupt lawyers and impeded justice; artful and affected language; in fact all the decadent types and qualities mentioned in Hamlet's most famous soliloquy. More serious still are the moral corruptions of a peacetime state threatened by corrective war: sexual aberrations and license (extending to Laertes and to the recurrent image of the harlot); social disease imaged by "impostume" (4.4.27) and "canker" (5.2.69); the "oppressor's wrong" (3.1.71); "rank" (1.2.136; many times emphasized, often by Hamlet, and connoting sexual stench); and gross debauchery in such forms as heavy drinking, usually and ominously conjoined with the sound of cannon. Imagery, as we have noticed, goes deeper than "seems" (1.2.75–76), the picture of Elsinore given in the second scene. Without dwelling upon the well-known disease images catalogued by Caroline Spurgeon, we readily recall such dominant expressions of physical deterioration as "the fatness of these pursy times" (3.4.154) and "the drossy age" (5.2.181). Especially basic to the play is a hidden kind of disease, sometimes discovered too late. This kind of image is unmistakably related to peace-bred corruption in one of the most important and overlooked passages in the play (it is overlooked in productions because the scene in which it appears is usually cut). Hamlet comments upon the appearance of Fortinbras's army as follows: "This is th'impostume of much wealth and peace, / That inward breaks, and shows no cause without / Why the man dies" (4.4.27–29). Barnabe Riche (an author whose Farewell to Militarie Profession Shakespeare read) indicates the specific kinds of inward rottenness concealed in peacetime: deceit, fraud, flattery, incontinence, inordinate lust, and "to be short . . .
al manner of filthinesse." Riche, moreover, got his diagnosis from a respected authority: St. Augustine in The City of God. In fact, most alarms to England had theological origins, based upon biblical analogues and hence most terrifying to Elizabethans. Babylon, Sodom, and Gomorrah were cities especially subject to visitation of armed portents; but the sinful city that compellingly caught the horrified attention of England was Jerusalem before its destruction by Titus. Was there no way in which military devastation could be avoided? In a sermon called Gods Mercies and Jerusalems Miseries, Lancelot Dawes expounds the text from Jeremiah 5:1. The text is to search in the city for a man "that executeth Judgment and seeketh the truth and I will spare it." Only one man, it is emphasized, need be found. Such a minister of judgment must be able to give drastic physic to the moral disease of the city, for "from the sole of her foot to the crown of her head, there be nothing found in her but wounds, and swelling, and sores full of corruption." Such a man is not to be found in Jerusalem. Nineveh, however, was redeemed, and its redemption was found in many a sermon. But its success on the stage is more indicative of popular appeal and helps clarify the meaning of Hamlet to its audience. In A Looking-Glass for London and England, Thomas Lodge and Robert Greene dramatized the frightening sins of a city under a sensual monarch, the appearance of an angel who brings in Jonas and Oseas as prophets to scourge the court repeatedly with moral warnings, and finally the internal purgation of the city within the appointed forty days. If we consider Hamlet to be, like Jonas and Oseas, a wildly speaking voice of judgment and correction, we may be struck by other parallels between the two plays. Rasni, the king, "loves chambering and wantonesse," indulges in carousing, and rules a kingdom of "filthinesses and sinne." He is threatened: "The foe shall pierce the gates with iron rampes."
The most arresting specific parallel is that Rasni falls sensually in love with, and marries, his own sister. Hamlet is too complex a play, and Hamlet too various a character, to fit comfortably into any tradition. One must, however, attempt to account for as many of its images as possible, especially if these give the play and its hero a significance greater than killing a king, or suffering from delay, or meaningless abuse of others, or near madness.

Lose Not Thy Nature

Hamlet, as a corrective surrogate form of war in Denmark, wages a still more crucial war as an instrument of destiny. He is a human being, one who must fight within himself a war between ruthlessness (a terrible passion) and humane feelings. The Ghost, in his story to his son, tells him not to pity him but to take stern action. The early Hamlet, though sickeningly bitter at his mother's perfidy and the "bloat" (3.4.183) king's lust, is mostly a noble mind, one not, despite Ophelia's words, yet overthrown. Near the end of the play, when he has killed Polonius, he can be heartless—"Thou wretched, rash, intruding fool, farewell! / I took thee for thy better" (3.4.32–33); this is the only elegy he can pronounce over the dead father of his once beloved—and there is bestiality in his "I'll lug the guts into the neighbor room" (3.4.213). Perhaps, however, his most insightful view of the murder is a resignedly philosophical one: "For this same lord, / I do repent; but heaven hath pleased it so, / To punish me with this, and this with me, / That I must be their scourge and minister." The two key words are scourge and minister. The latter is an untainted minister of God. In Richard III the virtuous Richmond on the eve of battle prays to God, "Make us thy ministers of chastisement" (R3 5.3.314). A scourge, on the other hand, has taken on himself revenge, like Tamburlaine, and is ultimately doomed.
Such, at any rate, is the view of Fredson Bowers. But the two words are often used interchangeably in the religious literature of the day, and Hamlet must, though he does not at first kill, behave with the cruelty of a scourge in setting the time right. He is not, even from the beginning, temperamentally suited for a dispassionate enlightening of the moral sense of his mother, Ophelia, Polonius, or other tainted attendants at Elsinore. Once, doubtless, he had been. But when we first see him he is morbidly disillusioned with life and man ("man delights not me," 2.2.305) and woman. All is rank. Exacerbating his world view is the dread command of the Ghost. This command, with its clinical account of his sexual mother, renders him incapable of a reasoned correction of others. The Ghost's command that usurps all else is "Let not the royal bed of Denmark be / A couch for luxury and damned incest" (1.5.82–83). This order makes for the savage attempt to mortify and chasten even so virtuous a girl as Ophelia. More important, it makes him partly blind to the purging that his victims are undergoing of their own nature. Polonius, on his own, knows, as he places the book of devotion in Ophelia's hands: "We are oft to blame in this— / 'Tis too much proved—that with devotion's visage / And pious action we do sugar o'er / The devil himself." And even Claudius himself has his conscience wrung by this observation, for in an aside he virtually cries out: "O, 'tis true. / How smart a lash that speech doth give my conscience! / The harlot's cheek, beautied with plast'ring art, / Is not more ugly to the thing that helps it / Than is my deed to my most painted word. / O heavy burthen!" Claudius is, however, more caught in conscience by Hamlet's play-within-the-play. His great soliloquy makes him more than a one-dimensional villain. He prays for the most-needed virtue in the play (perhaps in Shakespeare)—an open heart: Help, angels! Make assay.
Bow, stubborn knees, and heart with strings of steel, / Be soft as sinews of the newborn babe. / All may be well. Indeed a major aspect of Hamlet's excoriating mission is that even while it threatens to narrow his own heart and humanity (witness his callousness toward the death of Rosencrantz and Guildenstern), it awakens feelings of guilt in his victims. Gertrude, morally obtuse, is his major obstacle in enlightenment, even as she is (though not in Freudian interpretation) the powerful threat to his role as minister rather than scourge. At once one of the most important and most enigmatic passages in the play is the Ghost's command concerning Gertrude: "But howsomever thou pursuest this act, / Taint not thy mind, nor let thy soul contrive / Against thy mother aught. Leave her to heaven / And to those thorns that in her bosom lodge / To prick and sting her." Perhaps "Taint not thy mind" applies to the entire revenge mission, and in following that injunction Hamlet is reasonably successful. But the sexual nausea with which he views and treats his mother makes him almost hysterically and carnally passionate. When he is going to his mother's chambers at her request for the "closet" scene, he must try to fortify his heart: "Soft, now to my mother. / O heart, lose not thy nature; let not ever / The soul of Nero enter this firm bosom" (3.2.377–79). So distraught is he, yet so anxious to carry out the Ghost's commands and his own deep feelings for Gertrude, that the scene is one of the most powerfully poetic in the play, despite its painfully sexual nature. It is also a crucial scene in that it carries out, in the largest sense, the ultimatum of the Ghost's charge: "Let not the royal bed of Denmark be / A couch for luxury and damned incest" (1.5.82–83). Luxury, it will be remembered, kept its Latin and romance meaning of licentiousness, of rank abundance, and of sumptuous pleasure, suitable to a kingdom of decadent peace.
Largely upon this scene, therefore, and not upon the killing of Claudius, depends the cleansing of what is rotten in the state of Denmark. And Hamlet succeeds through his brutal yet ardently moving rhetoric. He cries to Gertrude: "Leave wringing of your hands. Peace, sit you down / And let me wring your heart; for so I shall, / If it be made of penetrable stuff, / If damned custom have not brazed it so / That it is proof and bulwark against sense" (3.4.35–39). So broad-reaching, he cries, is her deed, that Heaven's face does glow, "And this solidity and compound mass, / With heated visage, as against the doom, / Is thought-sick at the act." In effect, Hamlet correctly sees the earth as sick against the coming of the "doom." He is carrying out the fullest meaning of the Ghost's command, a meaning in which Gertrude's vileness and subsequent recognition are central. With a persistent battle between passionate morality and morbid sexual revulsion in his soul, he pictures for her the stench and sweat of her sexual life: "Nay, but to live / In the rank sweat of an enseamed bed, / Stewed in corruption, honeying and making love / Over the nasty sty—" She pleads with him to stop: "O Hamlet, thou hast cleft my heart in twain" (3.4.157). In so confessing, she becomes (if we except Laertes) the last and certainly most important sinner whose heart Hamlet has opened. The cruelty and even filth of his tactics make it sometimes questionable whether he fulfills his mission untainted. His earlier cruel wit may be written off as "antic disposition" (1.5.172), as may his "wild and whirling words" (1.5.133) used to his old friends. He is probably right, in so intolerable a corrective role, to see himself as both scourge and minister. But, as we must more deeply recognize, Hamlet is our hero because, although forced into cruelty and even sadism, he is one of the most beautiful in soul of any man Shakespeare created. We remember mainly his heart-rending soliloquies and his suffering.
None but he could speak words like "To die, to sleep— / No more—and by a sleep to say we end / The heartache, and the thousand natural shocks / That flesh is heir to." He may say that the deaths of Rosencrantz and Guildenstern "are not near my conscience; their defeat / Does by their own insinuation grow" (5.2.58–59). But, again, he can apologize humbly to the murderous Laertes, and he can go beyond his own plight when he states that "by the image of my cause I see / The portraiture of his." Still more in his favor is the concern for all human agony in his soliloquies; and still more, the religious commitment that comes to him after the hectic fever of his scourging. He learns: "There's a divinity that shapes our ends, / Rough-hew them how we will" (5.2.10–11). As his doom draws near, we see more of his own and not the age's suffering: "But thou wouldst not think how ill all's here about my heart" (5.2.201). Perhaps his first unselfish recognition is expressed in the biblical parable: "There is a special providence in the fall of a sparrow." With consummate artistry, therefore, Shakespeare is able to make the final scene of his most spiritually endowed hero twofold. Hamlet has earned, first, the beautiful tribute of Horatio, a man not given to unrealistic statements: "Now cracks a noble heart. Good night, sweet prince, / And flights of angels sing thee to thy rest" (5.2.348–49). And second, but not usually shown, is the conclusion expressed by Fortinbras, a conclusion representing his highest tribute. He had come to claim his "rights of memory in the kingdom" (5.2.378), though really to carry out a scourge that he himself does not know the basis for: "Let four captains / Bear Hamlet like a soldier to the stage, / For he was likely, had he been put on, / To have proved most royal; and for his passage / The soldiers' music and the rites of war / Speak loudly for him." The last sounds are of cannon, not for Claudius, but for Hamlet and regenerate Denmark.
<urn:uuid:844f6991-939e-4a4f-8276-7b2849f1f015>
CC-MAIN-2016-26
http://www.hamletguide.com/essays/jorgensen.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397562.76/warc/CC-MAIN-20160624154957-00094-ip-10-164-35-72.ec2.internal.warc.gz
en
0.936602
6,106
3.03125
3
France suffered twice from war in the 20th century at the hands of its neighbor Germany. The 21st century, however, looks much brighter than the preceding one. France is part of a European coalition which may one day create a United States of Europe. Not only is France one of the leading players in this effort, it also has strong relations with Germany, once its invader, which brings it security. By joining the common currency efforts in Europe, France has secured an influential role for itself in European affairs, both economically and politically. It has already started to reap the benefits of one of the largest markets in the world by receiving a large volume of foreign direct investment and exporting its agricultural and other products within Europe in a tariff-free environment. Recent liberalization and privatization efforts, which reversed the government's earlier course of the 1980s, have both brought confidence to markets and made the government itself more efficient and much smaller than before.

Its main problems lie in its relatively high unemployment rate and low population growth. The EU may provide a solution to the unemployment problem, but with a more integrated Europe comes the possibility that France may lose many of its more talented workers if they leave for better jobs. Coupled with rigid immigration policies and xenophobia, this could leave France with a shortage of technically capable individuals. Germany, which has many of the same racial difficulties, recently initiated a program similar to the U.S. green card which may encourage technologically savvy people to come to Germany. France may soon have to confront the same problem. Decreasing population growth may seem to be an answer to unemployment in the short run, but the burden of supporting the retired will have to be shared by fewer working people as France's demographics change.
This situation may create a need for higher spending on health care and related services, which will drain government aid funds. Perhaps this problem is the biggest challenge France has to deal with in the 21st century.
<urn:uuid:a06b0bbb-6af8-484f-8d49-99cb827bc1ad>
CC-MAIN-2016-26
http://www.nationsencyclopedia.com/economies/Europe/France-FUTURE-TRENDS.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397864.87/warc/CC-MAIN-20160624154957-00063-ip-10-164-35-72.ec2.internal.warc.gz
en
0.978782
396
2.59375
3
CHICAGO — On a June day last year, 3-year-old Jolan Jackson was sitting at the dining room table in his booster chair waiting for his meal. Having just boiled a cup of water in the microwave, Jolan’s teenage sister poured the hot liquid into a Cup Noodle foam container and placed it on the table, according to a complaint filed in March against the maker of the soup in Los Angeles County Superior Court. In the next few minutes, the soup cup somehow tipped over, spilling its hot contents onto Jolan’s lap and leaving him with second and third-degree burns that required 15 days in the hospital and months of follow-up surgeries, the complaint said. “He was in tremendous pain,” said Jolan’s mother, Latisha Beam. “By the time I got to the hospital he was on a morphine drip and the skin had peeled and rolled off so that it was just pink and raw.” Scaldings are the most common burns suffered by children admitted to hospitals and burn units across the nation, according to national burn statistics. And although some children are scalded by boiling pots on the stove or overly hot tap water, physicians report that many others are injured by soup, including the kind made in disposable foam cups. Roughly 30 to 40 percent of the kids treated for burns at Stroger Hospital “are there for hot soup scaldings,” said hospital spokesman Marisa Kollias. Another analysis of burns at Shriners Hospitals for Children Northern California found that soup burns caused about 8 percent of all burn admissions. Either estimate suggests that soup scalds thousands of children nationwide each year, with an estimated 30,000 people younger than 20 treated for scalds of all types each year, according to a 2010 epidemiological review published in the Pediatric Annals. Some scientists, doctors, victims, and advocates say this kind of injury would occur less often if the makers of instant soups used a safer cup. 
Certain containers are more likely to tip over than others, according to a 2006 analysis in the Journal of Burn Care & Research by Dr. David Greenhalgh, professor and chief of burn surgery at the University of California at Davis School of Medicine. Of the 11 instant soup cups tested by Dr. Greenhalgh and his colleagues, Cup Noodles tied for the second most tip-prone design because of its height and narrow base. Researchers measured its tipping point at about 21 degrees; the least tip-prone container tipped at about 64 degrees. “The most significant contributor to the ease of tipping over was height,” the paper said. “Simple redesigning of instant soup packaging with a wider base and shorter height, along with the requirement for warnings about the risks of burns, would reduce the frequency of soup burns.” Nissin Foods, the maker of Cup Noodles (formerly called Cup O’ Noodles), said its products carry prominent labels warning consumers to handle the hot soup with care, especially when serving it to children. “Our hearts go out to children and families who have suffered burns of any type,” said the statement from Nissin, whose American headquarters is just outside Los Angeles in Rosemead, Calif. “We urge parents to never leave hot products of any kind in reach of their young children.” The Jackson family’s complaint against the company acknowledges that the package included warnings about putting the cup in the microwave, the potentially high heat, and the need to handle with care. But it also alleges that the design of the cup is “unsafe for its intended use because of its dimensions, including an overly narrow base.” The soup’s label, it said, did not sufficiently warn “of its unstable design that significantly increased the risk that the Cup Noodles would tip over and/or spill with the potential for causing severe burns.” “It never entered my mind that noodle soup could do this degree of damage to a child,” said Ms. 
Beam, who works full time and attends nursing school at night. “But when I found out how many other children had been burned like this, I decided that I had to do something to get them to change their design. Because no child, and especially no 3-year-old, should have to suffer this degree of pain over a cup of noodles.” Burn doctors say hot liquid burns are especially dangerous for the very young. “Children have thinner skin and greater susceptibility to burn injury,” said Loyola University Medical Center burn specialist Dr. Richard Gamelli. “Also, if they are wearing footy pajamas or fleecy things or diapers, they can end up with significant burns because they hold the heat and fluid.” Children can be more severely burned by noodle soups than other types because they retain more heat and the noodles can stick to the skin, doctors say. University of Chicago researchers in 2011 examined an additional contributing factor: small children’s access to microwave ovens, where handling cups of sloshy soup can easily lead to an injury. Marla Robinson, assistant director of inpatient therapy services at University of Chicago Medicine, and her colleagues found that all the 4-year-olds who participated in the study could reach, operate, and remove food from a microwave. “Children as young as 17 months old can turn on a microwave, open the door, and remove items, putting them at significant risk for scald injuries,” said the University of Chicago report, published in the Journal of Trauma. Ms. Robinson found that the hospital handled 24 cases of children burned by microwaved soup last year, up from nine in 2003. Arguing that it’s too easy for small children to access hot food from a microwave oven, the report recommended that microwaves be redesigned to be more child-resistant. For now, Ms. Beam said, “these noodles are totally banned from my house.”
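The tip-angle figures from the Greenhalgh study follow from simple statics: a flat-bottomed container tips once the vertical line through its center of mass passes the edge of its base. A rough sketch of that calculation (the cup dimensions below are hypothetical, chosen only to reproduce the reported trend; the study's actual measurements aren't given in this article):

```python
import math

def tip_angle_deg(base_width_cm: float, com_height_cm: float) -> float:
    """Tilt angle at which a flat-bottomed container tips over.

    The container tips once tan(theta) exceeds (base_width / 2) divided
    by the height of its center of mass above the table.
    """
    return math.degrees(math.atan((base_width_cm / 2) / com_height_cm))

# Hypothetical dimensions for illustration only -- not the study's data.
tall_narrow = tip_angle_deg(base_width_cm=6.5, com_height_cm=8.0)
short_wide = tip_angle_deg(base_width_cm=12.0, com_height_cm=3.0)
print(f"tall, narrow cup tips at ~{tall_narrow:.0f} degrees")   # ~22
print(f"short, wide bowl tips at ~{short_wide:.0f} degrees")    # ~63
```

This is why the paper singles out height and base width: a shorter cup with a wider base can lean much further before its contents spill.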
<urn:uuid:e46e4905-ec6a-44c8-a5ba-6cabee096e99>
CC-MAIN-2016-26
http://www.toledoblade.com/Food/2013/05/19/Kids-burned-by-instant-soup-cups.print
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397797.77/warc/CC-MAIN-20160624154957-00177-ip-10-164-35-72.ec2.internal.warc.gz
en
0.964319
1,264
2.5625
3
A cure would stop MS in its tracks, prevent new symptoms, stop - not just slow - progression, and hopefully even reverse damage. There are four basic types of MS with all kinds of variations, and a real MS cure would address all of them. That is an ambitious goal. So are we close to a cure? Dr. Jean Martin Charcot identified and defined multiple sclerosis as a disease in 1868. The first effective pharmaceutical treatment was not available until 1992, 124 years later. Now, just 20 years after the first drug, we have options including oral applications as well as many clinical trials developing even more. All of these drugs are intended to reduce the number of attacks, often up to 50%, and to slow progression, but there is not yet a valid measurement for progression. This sounds like a great treatment, but not quite a cure. More research is underway. Based on tests using mice, the answer to a cure may well be found in genetics. Just a few months ago, I explored the question: Is MS Hereditary? The answer seems to be yes, and some specific relevant genes and genomes have already been identified. This information leads to exciting research projects based on the idea that one key to a cure is indeed genetics. Then there are other exciting possibilities: CCSVI, stem cells, and even repairing damaged myelin. Is there a possibility for a cure here? Dr. Paolo Zamboni has developed an amazing treatment that has been touted as a possible cure. MSers flocked to treatment centers to take advantage of this “cure.” His wife Elena developed MS in 1995, inspiring Zamboni, a vascular surgeon, to research the disease. He found a vascular condition he thinks may be one cause of MS. It is my understanding that although MS is not a vascular disease, this treatment does make some people feel much better, and for some it actually eliminates new symptoms for years. But it is not a cure. Until more trials are completed, perhaps we should have cautious optimism for Dr. Zamboni’s treatment.
He joins the National MS Society in encouraging MSers to continue their current treatment until CCSVI has been proven beyond doubt. Remember how MSers anticipated stem cell therapy? A small trial in Chicago tested 23 men and women with early RRMS who did not respond after six months of using interferon beta. Tests took stem cells from the patient’s own fat tissue or bone marrow. Three years after transplantation, 17 had improved by at least one point on the disability scale while none had deteriorated. That sounds like promising results to me, even for such a small trial. It won’t be a true cure until it also addresses MS types beyond early RRMS. A larger trial for this one-time transplantation of stem cells is underway. The idea here is not only to stop it from happening again, but also to reverse damage already done. The Mayo Clinic is collaborating with international groups to identify and promote natural repair. There are two laboratories, one working with animal models and the other studying human pathologies, developing therapies to restore myelin. Because it is natural, there are almost no side effects. That sounds good, but no cure here either. There are many claims for MS cures. Be wary of these false claims. When you hear something, talk with your doctor or check with an organization like the National MS Society or maybe the MS International Federation to see if it is valid. One day soon, it really will be. It sounds as if a cure is just around the corner. Remember, there is always hope. Once you choose hope, anything's possible. ~ Christopher Reeve Published On: June 29, 2011
<urn:uuid:4c982241-0e87-4ce1-bcd3-2898aee2b1d1>
CC-MAIN-2016-26
http://www.healthcentral.com/multiple-sclerosis/c/32873/140793/cured/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783394937.4/warc/CC-MAIN-20160624154954-00035-ip-10-164-35-72.ec2.internal.warc.gz
en
0.959027
760
2.9375
3
World’s highest-res display matches human eye’s acuity July 14, 2001 | Source: KurzweilAI A 20-million-pixel screen with the visual acuity of the human eye at 10 feet has been developed at Sandia National Laboratories. It is also the fastest in the world in rendering complex scientific data sets, says program leader Philip Heermann. The Sandia images are created through massively parallel imaging, using outputs of 64 computers and splitting data into 16 screens arranged as a 4 by 4 set. By January 2002, Heermann expects the Sandia team to reach the project’s second-phase goal of 64 million pixels, making its resolution higher than the eye’s. The images are expected to give scientists a better view of complicated systems. Sandia’s immediate needs are to improve understanding of complex situations like crashes and fires, but the facility is also valuable for microsystems, nanotechnology and biological explorations.
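The per-tile arithmetic implied by the article can be checked directly (the panel resolutions in the comments are assumptions for illustration; the piece gives only the totals and the 4-by-4 layout):

```python
tiles = 4 * 4  # 16 screens in a 4-by-4 grid

phase1_total = 20_000_000   # pixels in the current system
phase2_total = 64_000_000   # pixels in the January 2002 goal

print(phase1_total // tiles)  # 1,250,000 pixels per tile
print(phase2_total // tiles)  # 4,000,000 pixels per tile

# For scale: 1.25 Mpixel is close to a 1280x1024 panel (1,310,720
# pixels), and 4 Mpixel is close to a 2048x2048 tile (4,194,304
# pixels) -- plausible per-screen resolutions, though the article
# doesn't specify them.
print(1280 * 1024, 2048 * 2048)
```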
<urn:uuid:d57245c2-10ad-4e82-9601-c15bd3f1cef1>
CC-MAIN-2016-26
http://www.kurzweilai.net/world-s-highest-res-display-matches-human-eye-s-acuity
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397562.76/warc/CC-MAIN-20160624154957-00180-ip-10-164-35-72.ec2.internal.warc.gz
en
0.927599
204
2.96875
3
Teaching Children About Money is a Web site dedicated to helping parents teach kids money to prepare them for any recession or downturn in the economy. The current recession has taken most parents by surprise and what concerns them is that they do not know how to deal with it. Through this course they will not only be helping their children, but they will learn tips and tricks for dealing with the current economic problems. When parents teach their children early on about money, they will already know how to handle a recession should one hit in their generation. The Web site, http://www.teachingchildrenaboutmoney.com/, offers resources to help parents teach kids money using easy-to-follow, step-by-step instructions. Parents will also find themselves learning more about how to deal with the present recession while teaching these important rules to their children. Many parents all over the world have been hit by this recession and are struggling financially. As a result, there is simply no better time for them to teach their kids money, as they will see firsthand what difficult economic times are like. When parents have taught their children how to deal with a recession before it strikes, they don’t have to worry about how they will deal with the next recession to occur in their lifetimes. The Web site offers many internationally recognized ebooks which include "The Insider's Secrets to Raising a Future Millionaire", "50 Money Making Ideas for Kids", and "How Can You Raise a Kid Entrepreneur Without Giving Your Kid an Allowance." The company has grown a lot over the year it has been in business due to the personal attention they pay customers as well as the hands-on customer service. To learn more, visit http://www.teachingchildrenaboutmoney.com/
4/27/09 | Posted by Brian Scott at 5:15 AM The National Science Teachers Association (NSTA), the largest professional organization in the world promoting excellence and innovation in science teaching and learning, today announced a call for entries for the 2009-2010 NSTA New Science Teacher Academy. The NSTA New Science Teacher Academy, co-founded by the Amgen Foundation, is a year-long professional development program established to help reduce the high attrition rate among science teachers new to the teaching profession. Intended for science educators entering their second or third year of teaching, the Academy is designed to help promote quality science teaching, enhance teacher confidence and classroom excellence and improve teacher content knowledge. Since its launch in 2007, the Academy has received more than 1,500 applications nationwide. NSTA Fellows selected for the program receive a comprehensive membership package, online mentoring with trained mentors who teach in the same discipline, and the opportunity to participate in a variety of web-based professional development activities, including web seminars. In addition, each NSTA Fellow receives financial support to attend and participate in NSTA's 2010 National Conference on Science Education in Philadelphia. Since its inception, the Academy has provided high-quality professional development to more than 350 science teachers nationwide, impacting approximately 15,000 students. Science teachers located throughout the country who will be entering their second or third year of teaching and whose schedule is a minimum of 51 percent middle or high school science can apply to become an NSTA Fellow.
For more information about the NSTA New Science Teacher Academy or to learn how to apply to become a fellow, please visit www.nsta.org/academy. Applications must be submitted no later than June 1, 2009 to be considered. 4/26/09 | Posted by Brian Scott at 5:13 AM Women tend to choose non-math-intensive fields for their careers -- not because they lack mathematical ability, but because they want flexibility to raise children or prefer less math-intensive fields of science, reports a new Cornell study. "A major reason explaining why women are underrepresented not only in math-intensive fields but also in senior leadership positions in most fields is that many women choose to have children, and the timing of child rearing coincides with the most demanding periods of their career, such as trying to get tenure or working exorbitant hours to get promoted," said lead author Stephen J. Ceci, professor of human development at Cornell. Women with advanced math abilities choose non-math fields more often than men with similar abilities, he added. Women also tend to drop out of scientific fields -- especially math and physical sciences -- at higher rates than do men, particularly as they advance, because of their need for greater flexibility and the demands of parenting and caregiving, said co-author Wendy M. Williams, Cornell professor of human development. "These are choices that all women, but almost no men, are forced to make," she said. The study, published in the March issue of the American Psychological Association's Psychological Bulletin (135:2), is an integrative analysis of 35 years of research on sex differences in math. Ceci and his Cornell co-authors reviewed more than 400 articles and book chapters to better understand why women are underrepresented in such math-intensive science careers as computer science, physics, technology, engineering, chemistry and higher mathematics. 
Women today comprise about 50 percent of medical school classes; yet women who enter academic medicine are less likely than men to be promoted or serve in leadership posts, the authors report. As of 2005, only 15 percent of full professors and 11 percent of department chairs were women. Non-math fields are also affected: For example, only 19 percent of the tenure-track faculty members in the top 20 philosophy departments are women. The authors concluded that hormonal, brain and other biological sex differences were not primary factors in explaining why women were underrepresented in science careers, and that studies on social and cultural effects were inconsistent and inconclusive. They also reported that although institutional barriers and discrimination exist, "these influences still cannot explain why women are not entering or staying in STEM careers," said Ceci. "The evidence did not show that removal of these barriers would equalize the sexes in these fields, especially given that women's career preferences and lifestyle choices tilt them toward other careers such as medicine and biology over mathematics, computer science, physics and engineering." The analysis, conducted with Susan Barnett, Ph.D. '04, a visiting scholar at Cornell, also found that "Women would comprise 33 percent of the professorships in math-intensive fields if it was based solely on being in the top 1 percent of math ability, but they currently comprise less than 10 percent," Ceci said. Science, technology, engineering and math are not the only professions affected by women's career choices, said the authors. Women are still underrepresented in the top positions of such fields as medicine, law, biology, psychology, dentistry and veterinary science. The authors recommended that universities and companies create options for women with math talents who want to pursue math-intensive careers.
These could include deferred start-up of tenure-track positions and part-time work that segues to full-time tenure-track work for women who are raising children, and courtesy appointments for women unable to work full time but who would benefit from use of university resources (e-mail, library resources, grant support) to continue their research from home. 4/24/09 | Posted by Brian Scott at 5:29 AM Scholastic Media announced today, in celebration of National Volunteer Week, the "Be Big In Your Community Contest" as part of its ongoing Clifford The Big Red Dog(R) BE BIG!(TM) campaign (www.scholastic.com/cliffordbebig) to support civic engagement. The national contest invites kids of all ages, teachers, parents and community leaders to submit a BIG idea that demonstrates Clifford's Big Ideas (Share, Help Others, Be Kind, Be Responsible, Play Fair, Be a Good Friend, Believe in Yourself, Have Respect, Work Together and Be Truthful) to enter for a chance to win a community grant to be used towards implementing the winning proposals. One (1) grand prize entry will be honored with a $25,000 community grant and ten (10) runner up entries will each be honored with a $2,500 community grant (via HandsOn Network affiliate organizations or designees) from the Be Big Fund. (www.handsonnetwork.org) The mission of the Be Big Fund is to recognize and reward others for their BE BIG actions, catalyze change in local communities and provide resources to share BIG ideas. "Be Big In Your Community Contest" submissions will be accepted on the Clifford BE BIG website (www.scholastic.com/cliffordbebig) or via standard mail today through June 26, 2009 and is open to all legal residents of the U.S. Submissions will be evaluated by a panel of judges from Scholastic Inc. and HandsOn Network based on the following criteria: feasibility, creativity, sustainability and impact. Official Rules are available at www.scholastic.com/cliffordbebig/contestrules and at http://www.handsonnetwork.org/. 
NO PURCHASE NECESSARY. Void where prohibited or restricted by law. 4/23/09 | Posted by Brian Scott at 9:27 AM Eighty-two percent of parents with children 8 years old and younger say they read a book out loud to them daily, according to a study commissioned by Hooked on Phonics(R). "The research shows that parents understand that their involvement is critical to establishing a love of reading in children early in life so they're ready and willing to learn," said Judy L. Harris, CEO of Smarterville, the company that owns, creates, manufactures and distributes Hooked on Phonics(R). The telephone survey of 694 parents nationwide was conducted to coincide with the National Education Association's annual Read Across America day. Now in its twelfth year, the program focuses on motivating children to read, in addition to helping them master basic skills. The nationwide reading program is held on or near March 2, the birthday of Dr. Seuss. The study also found that parents with children 8 years old and younger read more than eight books per week to their children. Fifty-five percent of those respondents said the mother is the primary reader and 24 percent said both parents are the primary readers. "This is indisputable evidence that parents are the most important and influential people in a child's life, and they are in the best possible position to help children learn to read and love it," Harris said. Among the parents who have children at least 5 years old, 66 percent say their child knew how to read when she or he started kindergarten; 75 percent of these parents say they or their spouse were the primary influence in helping their child learn to read. Among all parents, 69 percent rate their level of pride when their oldest child learned to read at 8 or higher on a scale of 1 to 10 (where 1 means no feelings of pride and 10 means the proudest moment of their life).
"The ability to read well is the foundation for learning and for succeeding later on, whether in the workplace, in the home and in life," said Harris. "We are delighted in the tremendous difference these parents are making in their children's reading and their education." 4/19/09 | Posted by Brian Scott at 7:13 AM Can organizing your home help your child to become a better reader? Maybe. A new study on the effects of the home environment on early reading growth has found evidence of a link between the reading abilities of 5- and 6-year-old children, and the orderliness of their homes. Researchers at Teachers College, Columbia University and the Ohio State University found that household order, characterized by routines and cleanliness, was positively associated with a range of early reading abilities in a sample of 455 kindergarten and first-grade twins. However, this association only held for mothers whose own reading abilities were above the national average. When the sample was split by mother's reading level, household order explained reading growth among children of mothers with above average reading skills, while the child's interest in and enjoyment of reading explained reading growth among children of mothers with average reading ability. Dr. Anne Martin, one of the researchers, noted that perhaps the same mothers who are above average readers are also those who are more likely to keep a tidy home and to implement daily household routines. So, mothers looking to enhance their child's early reading skills should be encouraged to grab their organizers and even their brooms, as keeping an orderly home may have an even greater impact on our children than we previously thought. Experts have long advised parents that the best way to encourage children to read is to read to them. 
But, Martin says, "Encouraging child-directed activities such as making books available in the home and allowing children to amuse themselves with books may be equally important and effective approaches to improving early reading." "Furthermore," Martin adds, "for mothers who are above-average readers but may not have the time or inclination to read aloud, there may be a new strategy that has been overlooked until now: keeping an orderly home." For information about the study and the National Center for Children and Families, please go to: http://www.policyforchildren.org/orderinthehouse.html. Posted by Brian Scott at 7:11 AM In its fifth year, The George Washington University's Prime Movers Media Program pairs veteran and retired journalists from leading news media companies with students in elective media classes at Washington, D.C., high schools to help them create or strengthen student-run media. High school journalism and English classes are often a student's first glimpse at what a career in writing, broadcast, or the media entails. The experiences they have in and out of the classroom can have a profound effect on their future careers. Students and GW Prime Movers Media Program interns meet during school hours over the course of the academic year – several weeks of which are complemented with the expertise of professional journalists. "The Prime Movers program gave me hands-on experience of what it is like to work as a broadcast journalist," said Chiron Hunt, a 2007 graduate of Ballou High School in Washington, D.C., and a three-year student participant in the program.
"The professional mentors who came into my classroom brought real life experience that you can't get in a normal class." Sam Ford, general assignment reporter with WJLA-TV (ABC-7) in Washington, D.C., and two-time professional mentor with the program, said, "The Prime Movers Program is not only good for journalists because it gets you back into the schools but helps you to get in touch with the lives of these students who live in Washington, D.C. The rewarding part is to see the switch turn on when these students put together their stories and really see how to do it." Hunt added, "At first, I just took the course as an elective. After a while, I got a feel for what I was doing and started to feel comfortable on screen and was enjoying it. Now, I'm majoring in broadcast journalism at the University of Nebraska and hope to someday work for ESPN as a sports broadcaster." "Prime Movers Media is opening an ever-widening pathway for diverse high school students to work in the expanding ‘information highway' and creating a pipeline for ensuring racial diversity in the new media era," said Dorothy Gilliam, founder and director of GW's Prime Movers Media Program. "In addition to preparing the best and brightest for media careers, participating high school students also benefit from this program through enhancing their reading comprehension, graphic design skills, critical thinking, problem solving, leadership abilities, teamwork, and writing and oral communications. These skills will contribute to their development as media savvy news consumers and will better prepare them for competition in the global marketplace." Gilliam is a prize-winning journalist who retired from The Washington Post after 33 years to start the program at GW in 2003. Undergraduates at The George Washington University also complete internships and work in the local high schools and, in turn, learn from the professional mentors and the students. 
Marie Zisa, a GW sophomore majoring in political communication and former two-time intern with the program, said, "The first semester I interned with Prime Movers, I was helping an advanced class, and I had had no prior camera experience. At times, the students were the ones teaching me, and we were able to work together and find a solution to the problem. What I most learned from Prime Movers is how rewarding stepping out of your comfort zone can be." Former professional mentors with GW's Prime Movers Media Program include Bruce Horowitz, USA Today; Felix Contreras, National Public Radio; Seth Stern, Congressional Quarterly; and Pat Wingert, Newsweek. Current professional mentors include Don Hecker, The New York Times, and Tamara Jones, Yvonne Lamb and Sylvia Moreno, The Washington Post (retired). For more information about The George Washington University's Prime Movers Media Program, visit http://www.gwu.edu/~primemovers. 4/17/09 | Posted by Brian Scott at 11:21 AM New research from Vanderbilt University's Peabody College offers guidance for teachers to help them improve writing instruction in the primary grades and develop stronger student writers. The two new studies by Steve Graham, professor and Curry Ingram Chair in Special Education, were recently published in the Journal of Educational Psychology. "The primary purpose of both articles is to inform teachers about writing practices that work with a wide variety of students," Graham said. "We're hoping to help give teachers the opportunity to creatively incorporate effective writing strategies in the classroom to improve the writing of their students." The National Commission on Writing has stated that writing should be placed at the center of the school agenda.
In "A Meta-Analysis of Single Subject Design Writing Intervention Research," Graham and Leslie Rogers, a current Vanderbilt University doctoral student in special education, identified effective writing practices for all students including students who struggle within the classroom. This research focuses on the current writing practices in grades 1 through 12, including some suggestions for improvement. "Among the more important findings is the need for students to be taught how to plan, revise and set clear and specific goals for their writing," Graham said. "Students also need to be taught the skills to write clear and effective paragraphs." Graham's other paper, "Primary Grade Writing Instruction: A National Survey," co-authored with Laura Cutler, a graduate student in Special Education at the University of Maryland when the research was conducted and currently a teacher in Florida, provides more direct recommendations to improve classroom writing practices. "Primary grade teachers need to focus on increasing the time spent writing, balancing the time spent writing with the time spent learning how to write, boosting their students' motivation for writing, making computers a more integral part of their writing curriculum, and improving their own preparation for teaching writing," Graham said. "These recommendations offer educators the opportunity to focus on their weakest areas to improve instruction and the quality of student writers produced in our classrooms." Source: Vanderbilt's Peabody College of education and human development 4/16/09 | Posted by Brian Scott at 10:05 AM How big is a serving of spaghetti or a cup of cranberry juice? Correctly estimating the size of a food serving is important for maintaining a healthy weight, but a new study suggests people with lower literacy levels might have a more difficult time sizing up the foods they eat. 
People with high literacy levels are twice as likely as those with low literacy test scores to dole out a single-sized portion of pasta, pineapple, ground beef and other common foods, according to the study in the April issue of the American Journal of Preventive Medicine. Yet, people with higher literacy levels have trouble estimating portion sizes, too, said Johns Hopkins researcher Mary Margaret Huizinga, M.D., who led the study while at Vanderbilt University. When combining serving data for all the foods in the study, only 62 percent of study participants could serve a specific amount of food accurately when asked, she and colleagues discovered. For individual foods, "accuracy ranged from 30 percent for beef to 53 percent for juice," Huizinga said. "The current super-sizing of many foods may lead Americans to overestimate what a normal portion should be," she said, "and the overestimation of portion size may lead to overeating and contribute to obesity." In their study of 164 patients at a primary care clinic, the researchers tested the participants' verbal and written literacy as well as their understanding of numerical information. They then measured how well the patients were able to estimate a single serving or a specified amount of a variety of foods, using guidelines from the U.S. Food and Drug Administration and the U.S. Department of Agriculture as a standard. The participants' food preferences, or even how often they ate a particular food, did not seem to affect how well they estimated serving sizes, Huizinga and colleagues noted. Ballooning portion sizes in restaurants is one factor that prompts people to see large portions as normal, but the same kind of "portion distortion" can happen at home, said Jennifer Fisher, Ph.D., an associate professor of public health at Temple University.
In her studies of how much children eat when faced with normal or super-sized entrees, Fisher found that a family's social and cultural perceptions of "how much is enough" also influenced the portions dished out to children. "Seeing a large amount of food in front of you can lead you to believe that someone decided this portion was the right amount to eat," she said. Huizinga MM, et al. Literacy, numeracy, and portion-size estimation skills. Am J Prev Med 36(4), 2009. 4/13/09 | Posted by Brian Scott at 5:06 AM Super Stars Literacy, Inc., the East Bay non-profit that builds literacy skills for primary grade children in communities with limited resources, has earned recognition for its work from two leading community foundations. The San Francisco Foundation (SFF) awarded a $20,000 grant for the 2008-09 year "to support [Super Stars Literacy's] comprehensive academic and social development daily after-school program to improve the academic and social development of elementary school children with challenging life circumstances." Also, following on the heels of its recent grant of $15,000, the East Bay Community Foundation (EBCF) has now profiled Super Stars Literacy in EBCF's 2008 annual report as an organization well-positioned to advance EBCF's new strategic focus on preparing young people at both preschool and school-age levels to succeed. Super Stars Literacy is one of only two nonprofits highlighted in this report. According to Mike Mowery, Super Stars Literacy's Executive Director, "This back-to-back recognition by the two preeminent community foundations on each side of the Bay further validates our success in building critically-needed early literacy and social development skills for our at-risk students. We greatly appreciate the ongoing support of both the East Bay Community Foundation and the San Francisco Foundation as we seek to improve opportunities for academic success for more and more Bay Area students." 
Super Stars Literacy's comprehensive five-day-a-week after-school program is a direct response to studies suggesting children, especially those from low socio-economic backgrounds who are not reading at grade level by the end of the third grade, are at serious risk of never developing strong academic skills or graduating high school. Since its founding in 2002, the program has achieved outstanding results in meeting, and often exceeding, its goal of having 80 percent of its students read at grade-level as they enter the third grade. About The East Bay Community Foundation The East Bay Community Foundation (http://www.eastbaycf.org/) connects community needs in Alameda and Contra Costa counties with individuals, families and organizations interested in charitable giving - and acts as a catalyst for change through leading initiatives, through advocacy, and through partnerships with business, government, and private foundations. About The San Francisco Foundation As one of the nation's largest community foundations with more than $1 billion in assets, The San Francisco Foundation (http://www.sff.org/) addresses community needs by supporting innovative ideas and strengthening existing nonprofit organizations that lack sufficient resources or infrastructure. They focus on the areas of arts and culture, community development, community health, education, the environment, and social justice. About Super Stars Literacy, Inc. Dedicated to building early literacy skills in primary grade children in communities with limited resources, Super Stars Literacy (http://www.superstarsliteracy.org/) currently serves 270 students at six Oakland, Calif., elementary schools: Hoover Elementary, Parker Elementary, Think College Now, International Community School, EnCompass Academy and East Oakland Pride. The program was originally founded in 2002 as a program of the Junior League of Oakland-East Bay, Inc., and earned independent nonprofit 501(c)(3) status in 2008. 
Posted by Brian Scott at 5:03 AM A nationwide study of first grade classrooms finds that while many teachers create positive social environments in the classroom, most provide inadequate instructional support. The report is published in the March issue of The Elementary School Journal. Authors Megan Stuhlman and Robert Pianta (University of Virginia) used direct observations to assess the social and instructional quality of interactions between teachers and students in 820 first grade classrooms. Previous studies have indicated that the quality of such interactions can have a significant impact on student learning, especially in early grades. The researchers found 23 percent of classrooms to be of "high overall quality," with teachers getting high marks for creating a positive social climate in the classroom and for providing strong instructional support to students. Twenty-eight percent of classrooms were deemed "mediocre," with teachers scoring just below the sample mean on all study measures. Seventeen percent were "low overall quality." A fourth category of classrooms characterized by "positive emotional climate, low academic demand" accounted for 31 percent of classrooms—the largest category in the sample. In these classrooms, Stuhlman explains, teachers are warm and do not discipline using threats, but they tend not to give constructive feedback that helps students understand concepts. "We found that quality, particularly instructional features of teacher behavior, was rather low across the sample," Pianta says. "In other studies we have demonstrated the connection between these observed teacher-child interactions and student learning gains. So what we are seeing here may influence the extent to which children can perform at standards consistent with accountability frameworks such as No Child Left Behind." The study also casts doubt on traditional assumptions about the factors that influence educational quality. 
Class size and teacher credentials, for example, had little impact on quality. And in a finding that may come as a surprise to advocates of private school vouchers, public school classrooms actually fared a bit better than private school classes. "[M]ore public schools were categorized as high overall quality than would be expected by chance," the authors write. "Moreover, equal proportions of public and private schools were in the lowest rated classroom type." The results suggest that educational opportunity will not be improved simply by shipping students to private schools, Pianta says. "Instead, strong, instructionally-focused, and effective professional development for a large number of teachers is perhaps the most important next step." Source: University of Chicago Press Journals 4/10/09 | Posted by Brian Scott at 11:41 AM US Airways (LCC) has joined with Reading Is Fundamental (RIF) for a second year to celebrate the wonder of reading through the "Fly with US. Read with Kids.(R)" campaign. It features the online Read with Kids Challenge and supports RIF programs serving children across the nation. This year, the challenge is climbing to new heights with a goal of collectively logging 5 million minutes spent reading with children from April 1-June 30. Participants can register and log their minutes online at www.RIF.org/readwithkids. Registrants can join individually or, new for this year, create a team of three or more adults. All participants will be entered to win a grand prize drawing of a Walt Disney World(R) Resort vacation package from US Airways Vacations, US Airways gift cards, and other great prizes. The team determined by random drawing will win the opportunity to select a featured RIF program, as well as a school in their community, to receive a special children's book collection.
US Airways -- the official airline of RIF -- is also encouraging customers, employees, and readers nationwide to support children's literacy by making a donation to RIF. Donors can receive a special edition of Off You Go, Maisy! -- a children's book by best-selling author Lucy Cousins -- and be eligible to receive up to 5,000 US Airways Dividend Miles. US Airways' campaign with RIF, the nation's oldest and largest children and families' literacy nonprofit organization, also provides books and literacy services to children served by RIF programs throughout the country. US Airways' employee volunteer corps, the Do Crew, will participate in RIF book distributions and reading rallies in communities where the airline has a large concentration of employees and passengers: Boston; Charlotte, N.C.; New York City; Philadelphia; Phoenix; Pittsburgh; and Washington, D.C. For more information, and to access reading resources, visit RIF's website at http://www.rif.org/. 4/9/09 | Posted by Brian Scott at 8:18 AM Barnes & Noble, Inc. will celebrate national "Turnoff Week," April 20th through April 26th, with activities offering people alternatives to television, the Internet, electronic games and other screen-related activities so they can spend real time with family and friends. Turnoff Week is a primary program of the Center for SCREEN-TIME Awareness, an international nonprofit organization, providing tools for people to live healthier lives in functional families and vibrant communities by taking control of the electronic media in their lives and not allowing it to control them. Turnoff Week is supported by national organizations including the American Medical Association, American Academy of Pediatrics, National Education Association, and President's Council on Physical Fitness and Sports. Since 1994, more than 50 million people have participated in Turnoff Week.
"As local community centers, Barnes & Noble stores are centered on literacy and togetherness," said Sarah DiFrancesco, director of community relations for Barnes & Noble, Inc. "We believe Turnoff Week is an important way to highlight storytelling, reading and family - the cornerstones of our business." Barnes & Noble is joining the Center for SCREEN-TIME Awareness, the promoters of Turnoff Week, and ensuring that everyone has a place to go and something to do that is just right for the entire family. Among the events being offered at many Barnes & Noble stores across the country are family Storytimes, Family Fun nights, game nights, book clubs, bookfair fundraisers, crafts, scavenger hunts, and poetry readings. It is expected that 20 million people across the USA will turnoff the recreational use of screens for one week. They will read, play games, spend time with family and friends, venture outdoors and spend time with real people in real time. "Barnes & Noble is the new village green" said Robert Kesten, executive director of Center for SCREEN-TIME Awareness. "These stores offer people a public living room, where books, newspapers, music and families come together, what could be better than that?" To find a Turnoff Week event at a store near you please visit the Barnes & Noble store locator at http://store-locator.barnesandnoble.com/. For more information on Turnoff Week, visit http://www.screentime.org/. Posted by Brian Scott at 8:13 AM When children start pre-school, they bring art work home nearly every day and we proudly plaster it across the refrigerator. But once they start elementary school, the flow of masterpieces slows to a trickle. Thanks to shrinking budgets, many school systems have drastically reduced art instruction. So, if your child isn't taking art in school, how can you be sure their inner artist doesn't waste away? 
According to Americans for the Arts, students who participate in three hours of arts, three days a week for at least one year are four times more likely to be recognized for academic achievement. Jason Dobkin and Erika Gragg, co-creators of the new children's book "Snobbles the Great: A Snooze Patch Story" (Grabkin Creatives, LLC, http://www.snobbles.com/), credit their love of art as the inspiration behind the book. Snobbles is a fruit-eating snake who is ridiculed by the other snakes in the Snooze Patch where they all live. "I was making little clay animals and Erika would place them in plants or other settings and photograph them," says Dobkin. "That's how Snobbles came to life. We wanted to create a fantastical new world for kids so we combined the normal aspects of children's books with painting, sculpture, photography, stage design, lighting, and cinematography to make a hyper-real experience." Dobkin and Gragg hope parents will find ways to incorporate creativity into daily activities. "If kids don't have the opportunity to be creative when they're young, it's not going to dawn on them to start thinking in new ways when they're older," says Gragg. "Put children who don't do well academically in a dance class or give them a paint brush and they connect with it. Suddenly, everything clicks. They start understanding math or English better because their brain interprets those subjects in new ways." Better grades, problem-solving skills and confidence are strong incentives to make sure you encourage your child's inner artist to come out and play often. 4/5/09 | Posted by Brian Scott at 5:34 AM The Illinois Arts Council seeks qualified applicants for the 2009-2011 Artstour Program Artists Roster and the 2009-2011 Arts-In-Education Program Artists Roster. Artists rosters are updated every two years and are available on the agency website.
Inclusion in these rosters provides artists with increased visibility, job opportunity, and networking support. Artists may apply for inclusion in both or either of the rosters. Artstour Program Artists Roster: Artstour is the Illinois Arts Council's fee support grants program linking arts presenters with Illinois' wealth of touring artists, companies, and ensembles. The Artstour Program is designed to provide a variety of high-quality touring performances and exhibitions in various price ranges to all Illinois communities throughout the year. In order to be considered for the Artstour Program, artists/companies and ensembles must apply for inclusion in the Artstour Program Artists Roster. Arts-In-Education (AIE) Program Artists Roster: The Illinois Arts Council AIE Residency Program provides support to primary and secondary educational institutions, community colleges, and not-for-profit local arts and community organizations to work with an individual artist from one month to six months. Residencies involving performing arts companies range from two weeks to six months. In order to be considered for the AIE Residency Program artists must apply for inclusion in the AIE Residency Program Artists Roster. Guidelines, application materials, and a full IAC Staff list are now available on the Illinois Arts Council website: (www.state.il.us/agency/iac). Posted by Brian Scott at 5:30 AM With families always on the go go go, time for sharing stories with your children is going, going, almost gone! But the experience doesn't have to be lost, thanks to the PlainTales series, which offers classic stories on CDs that can be played on any player, any time (goodbye long, boring trips this spring break!). The series is now expanding with the release of new collections, PlainTales First Tales and PlainTales Explorers. 
PlainTales, created by writer, entrepreneur and father Brian Keairns, was started with the mission of providing great audio stories for children and their parents, believing that hearing the world's best stories read aloud is a great way for youngsters to develop language and thinking skills, all while being thoroughly entertained. Fables, fairy tales and original stories inspire, educate and encourage creative thinking, and the ones featured in the PlainTales series are chosen for their rich language, cultural significance and pure enjoyment, allowing families to share something special, together. The PlainTales First Tales collection includes: -- "The Gingerbread Boy and Other First Tales" -- Perfect for the littlest listener. R/T: 48 minutes. -- "Paul Bunyan and Other American Tall Tales" -- Three rib-tickling stories. R/T: 60 minutes. The PlainTales Explorers collection includes: -- "Animal Tales: Raccoon, Bear and Coyote" -- These stories about fictional friends are rich with detail about real animal behaviors. R/T: 48 minutes. -- "Johnny Appleseed and Other American Legends" -- Fascinating stories of national treasures. R/T: 67 minutes. Previous releases in the PlainTales collection of celebrated storytelling CDs include PlainTales Classics, Fantasy Classics, Literary Classics and Adventure Classics. PlainTales storytelling CDs are recommended for ages 4-10 and are available for $12.95 each. Website: www.plaintales.com 4/3/09 | Posted by Brian Scott at 8:01 AM A new University of California, Berkeley, Web site called "Understanding Science" (http://undsci.berkeley.edu/) paints an entirely new picture of what science is and how science is done, showing it to be a dynamic and creative process rather than the linear – and frequently boring – process depicted in most textbooks. Funded by the National Science Foundation as a resource for teachers and the public, the material was vetted by historians and philosophers of science as well as by K-12 teachers and scientists.
"Through this collaborative project, we hope to overturn the paradigm of how science is presented in our classrooms," said Roy Caldwell, a UC Berkeley professor of integrative biology who led the project along with colleague David Lindberg. "The Web site presents, not the rigid scientific method, but how science really works, including its creative and often unpredictable nature, which is more engaging to students and far less intimidating to those teachers who are less secure in their science." "Part of the fun of science is lost when you present it as a linear thing," said Natalie Kuldell, an instructor in biological engineering at the Massachusetts Institute of Technology (MIT) and one of 18 scientific advisors for the project. While the five-step process described in textbooks – ask a question, form an hypothesis, conduct an experiment, collect data and draw a conclusion – isn't wrong, "it is an oversimplification," she said. The core idea, said Judy Scotchmoor, assistant director of the UC Museum of Paleontology at UC Berkeley and coordinator of Understanding Science, is that science is about exploring, asking questions and testing ideas. The site provides a Science Checklist that can be used to determine just how "scientific" particular activities are. Scotchmoor will discuss the Understanding Science approach at a Friday, Feb. 13, session celebrating the Year of Science 2009. The session is from 1:30 to 4:30 p.m. in the Columbus EF room of the Hyatt Regency Chicago. "The goal was to present (the concept) that testable ideas are right at the center of science, and if you don't generate testable ideas, then you are really not doing science," Kuldell said. Testing, however, is intertwined with exploration and discovery -- the "cowboy" aspect of science, in the words of one project advisor -- review of hypotheses and theories by skeptical peers, and actual application of the science to real world problems. 
Within the Web site, personal stories contributed by top scientists around the country illustrate the interplay of exploration, peer review and outcomes, and demonstrate the different pathways to discovery taken in different fields of science, from biology to cosmology. Scotchmoor hopes that the site will show students and the public that "science really is an adventure. There are certain rules that you need to follow, but really you can't predict where questions will take you." The Web site premiered on Jan. 5 during the launch of Year of Science 2009, and received rave reviews from New York Times science writer Carl Zimmer, who referred to it in his blog as "a guided tour through the basic questions of what science is and how it works." He particularly praised the Process of Science flowchart illustrating how science works. A set of four interlocking circles represent the interplay between hypothesis testing and the ways scientists generate these hypotheses, while multiple arrows connect the circles to illustrate the roundabout way scientists make their discoveries. "At best, I think, stories about science can only be snapshots of small patches of science's cycles within cycles," Zimmer wrote of the flowchart. "It (story telling) uses the one-dimensional medium of language to gesture towards science's mind-boggling multidimensionality. This picture from Understanding Science will help me remember to make that gesture, long after the Year of Science is over." Four years ago, Scotchmoor, Caldwell and Lindberg created a Web site called Understanding Evolution that now provides a much-needed resource for teachers and the public. "We discovered, however, that there was a lot of confusion about what science is and isn't," Scotchmoor said. "Teachers had misconceptions, such as what a theory is or whether creationism is science," Caldwell said. 
"Many even thought science wasn't creative, in part because of cookbook labs, in part because of the emphasis on testing factual knowledge, not process." With advice and input from historians, philosophers, teachers and scientists, Scotchmoor, Caldwell and Lindberg constructed the Web site from scratch, modeling it after Understanding Evolution. Understanding Science has been endorsed by the California Science Teacher's Association and the American Institute of Biological Sciences, and will be part of the next edition of a popular high school biology textbook, "Biology" (Prentice Hall), by Ken Miller and Joe Levine. Kuldell uses it in her second- and third-year college lab courses to "set the expectations of my students, (to show them) that science is iterative and messy and doesn't always make a clean story – and that that should be expected. You work and then you rework, you get feedback, you rethink your ideas, and then retest. Science isn't quite as neat as people wish it were and think it should be." The Web site will continue to grow, with personal profiles of scientists and their research, each accompanied by a flow chart showing how they proceeded from ideas to discovery. "We hope these cool stories will draw people in," Scotchmoor said. 4/1/09 | Posted by Brian Scott at 11:32 AM
The preferred option was now an "all-up" approach in which the crew and all the cargo were dispatched on one vehicle. The 1989 Mars expedition case study ground rules included: a single expendable vehicle would be launched to low Earth orbit; a zero-gravity vehicle would be used; three crew members would descend to the surface for 20 days; and aerobraking would be used at Mars. In the course of the expedition, the environment, geological features, and material of Mars would be studied to advance knowledge of the origin of the solar system and to survey the potential for using Martian resources. Two different trajectory options were initially considered for this case study. The first of these options was the "split/sprint" approach, carried over from the FY 1988 studies, in which a cargo vehicle delivered a portion of the cargo to Mars orbit, followed by a second vehicle carrying the crew and the remainder of the hardware. The second option was an "all-up" approach in which the crew and all the cargo were dispatched on one vehicle. Both options had similar characteristics regarding trajectory type and mission duration. The mission description was based on the (new) second option. Upon completion of the orbital mating operations between the vehicle and the propulsion stages, the three-member crew would depart for Mars in a zero-gravity transfer vehicle on a 500-day roundtrip trajectory with a free flyby/return capability in case of mission abort. Depending upon the particular launch year, a Venus gravity-assist swingby might be utilized to reduce the trajectory energy requirements. Aerobraking, a technique that used the planetary atmosphere to slow the vehicle, would be employed upon arrival to capture the spacecraft into a 500-kilometer circular orbit. Five days would be allowed for lander preparation, after which the excursion vehicle would separate from the cruise module, carrying all three crew members to the Martian surface. 
During their 20-day stay on Mars, the crew would conduct geologic and geophysical observations near the landing site on foot and by rover, collecting rock, sediment, and exobiology samples. Instrumentation would be deployed for the crew to conduct short-duration experiments, such as seismic tests, atmospheric balloons and rockets, and microbe/bacterial/plant organics exposure tests. In addition, a geophysical/atmospheric science station would be left on Mars to measure properties and processes that could be monitored from Earth on a long-term basis. Because their stay on Mars was short, the crew would reside in the in-transit habitation module in the Mars excursion vehicle. The crew would depart with selected surface samples to rendezvous with the interplanetary transport parked in orbit. Five days would be allowed for departure preparations, for a total stay time of 30 days at Mars. The return trajectory was either direct or by way of Venus, again depending on the launch year. As the interplanetary vehicle approached Earth, the crew, with samples, would transfer to an Earth crew return vehicle and separate from the larger cruise module. Return to Earth's surface would be via direct atmospheric entry and aeromaneuvering to landing. The total length of the mission would be 16 to 17 months. In the time frame considered for this case study, there were four all-up mission opportunities to Mars. All outbound trajectories had a free-abort capability in the event of a major propulsion system failure detected en route to Mars as well as a powered abort capability in the event of some other system failure. Free aborts required longer-than-nominal trip durations (20 to 24 months); powered aborts could return the crew faster, but not in less than 13 months. The major trade-off result showed that the all-up mission option was preferable to the split/sprint option on the basis of total mass required in low Earth orbit and the number of Earth-to-orbit launches.
The split/sprint concept was modified from 1988's configuration by removing the trans-Earth injection stage from the cargo flight and placing it on the piloted flight to assure crew safety. This strategy then negated the mass advantage of a separate cargo flight. Limited but important science could be accomplished on the Martian surface during the 20-day stay. Significant science investigations, referred to as "cruise science," could also be conducted on the long trips to and from Mars. In particular, studies of human responses to radiation and zero gravity would be conducted. Particles and fields experiments and astronomical observations would also be possible. A critical requirement for this case study was a heavy lift launch vehicle. Technology requirements were a highly reliable environmental control and life support system, propellant tanks with low inert mass and boil-off rates of cryogenic propellants, propellant transfer capabilities, remote rendezvous and docking in Mars orbit, high-energy aerobrake for Mars capture, hazard avoidance systems for a safe Mars landing, and short-range forecasting technology for solar flares. This case study was a continuation of the FY 1988 preliminary examination of Mars expeditionary requirements, with more in-depth analyses and trade studies of mission options and techniques. A Mars Expedition would offer the national prestige of mounting a mission to land humans on another planet early in the next century. However, significant challenges pervaded this approach. The expeditionary pathway emphasized a major, highly visible effort without the burden and overhead associated with constructing permanent structures and facilities on Mars. Implementation of this approach relied on current or near-term technology and expendable vehicle systems. 
Although similar to an Apollo-type mission, the Mars Expedition would, nonetheless, stimulate the development of technological and operational capabilities to test avenues of future expansion of exploration opportunities. In addition to its primary objective, the Mars Expedition would investigate the environment, geology, and materials of Mars that were relevant to the advancement of scientific knowledge as well as the potential for longer-term habitation and resource utilization. This mission could serve as a precursor to future Mars outpost development, and it could help identify prime sites for an outpost as well as return samples of the Martian atmosphere and soil, which could be analyzed for potentially useful resources and the presence of hazardous materials. The case study methodology used during fiscal years 1988 and 1989 provided a systematic mechanism for a closer examination of the many pertinent parameters of human exploration of the Moon and Mars. These case studies built directly on earlier analyses conducted in support of the National Commission on Space and the Ride Task Force. Mars Expedition 1989 Mission Summary:
Guest post from Rhonda Stone By email, Rhonda Stone has sent in the following piece relating to earlier discussions here. Comments are welcome, but remember that Rhonda is a guest, and please be particularly sure to stick to civilised discussion. Comments that abuse the poster or other commenters will be deleted. If the brain reads sentences through a process of decoding or otherwise identifying individual words, how is it possible to read this: B4UASsM2MCH ABT RDNG, cnsdr tht th BRNISWNDRFLY KreaTV& efcnt. (copyright, 2005, Dee Tadlock, Ph.D., Read Right Systems, Inc.) I would love to know if it has occurred to many of your readers that neither the phonics and decoding view of reading and development nor whole language philosophy accurately reflect what it is that the brain does when it reads sentences? What if individual word identification and sentence reading are completely separate cognitive acts? What would that mean to our understanding of what must be done to prevent and correct reading problems? 1) The whole issue of phonics vs. whole language may have been nudged dangerously off track in the 1990s with neuroimaging that was used as "strong evidence" that we read through the sounds of speech. This neuroimaging used individual word identification to document that speech and language centers of the brain are involved in "reading." Well, of course they are involved!! Reading involves language!! If we would stop to think about it, it would be readily apparent that the underlying assumption is FLAWED. The assumption: naming words on word lists is the same cognitive act as reading sentences. See the problem? The scientists did not first seek to determine whether or not their assumption was correct before they paraded out their conclusions about the role of phonics as a scientific certainty.
2) Provocative neuroimaging research has now been completed in England that provides evidence that naming words on word lists and sentence reading are significantly different acts performed by the brain, requiring very different patterns of neural activation. (Vandenberghe et al.; Price et al. — see Price: “The Myth of the Visual Word Form Area.”) This research is not being discussed in the U.S. Is it being discussed throughout the U.K.? It ought to be. It is hugely significant to the acceptance of phonics and decoding as an appropriate initial reading strategy. If word naming and sentence reading are significantly different cognitive acts, it means that we may be teaching children the wrong things with early reading instruction and, as a result, CAUSING significant reading problems.

3) Until the reading field develops universal agreement about what the brain does when it reads sentences in languages that use alphabetic systems, reading problems are bound to exist in languages dependent upon an alphabet. There are actually THREE possibilities for what the brain does: The first we know well — using the alphabet to identify individual words through a process of sequential left-to-right decoding, resulting in what might appear as a viable strategy for word identification. The second is also well known and related — using the alphabet and an initial strategy of decoding to create supposed “word forms” in a proposed “word form area of the brain” (see Shaywitz’s book Overcoming Dyslexia). This is the view that Price and her team’s neuroimaging research recently found to be flawed.
The third is not well known or understood — selecting and integrating alphabetic information STRATEGICALLY (not sequentially for decoding or for the purpose of creating supposed “word forms”), in addition to integrating the strategic alphabetic information with other knowledge stored in the brain — our accumulated knowledge of the structure of language (whether it be English, Spanish, German, or whatever), our knowledge of how the world works, etc. — in a process of anticipating and constructing the meaning as we go. This is a very different view of how the brain may use the alphabet — and it also explains how it is possible to read the scrambled sentence provided. If THIS is what the brain does when it reads sentences, then teaching children to read through a process of decoding or otherwise identifying individual words sets the stage for a reading problem in those kids who do not experiment with strategic alphabetic sampling and integration of that information with other brain systems. Neuroimaging by Vandenberghe et al., Keller et al., and others has already documented that sentence reading does, indeed, involve broader brain activation than that yielded by naming words from word lists. I think these possibilities are worth considering. As a parent of children who used to have reading problems and now one of the world’s most devoted advocates for new solutions to children’s reading problems, I think they are worthy of discussion.

Parent advocate, children’s reading issues
Author, The Light Barrier (St. Martin’s Press, 2002)
Co-Author, Read Right! Coaching Your Child to Excellence in Reading (McGraw-Hill, 2005)
Literature/Research Assistant to Dee Tadlock, Ph.D.
As the mighty invasion fleet sailed through the mine-infested English Channel on the eve of Armageddon in the west, other Allied forces were sweeping through Rome; and the Red Army was massed along the eastern front ready to launch its shattering summer offensive. The tides of war were running strongly against the Germans as the full fury of the Western Allies was at last unleashed in an overwhelming triphibious demonstration of technical, logistical, and organizational superiority.

The Normandy Campaign (6 June-24 July 1944)

In a single day, 6 June 1944, the combined power of the Allied air forces, navies, and armies struck with terrific impact against a fifty-mile sector of the French coast (Map 3). Beginning their work shortly after midnight, 1,136 heavy bombers of the R.A.F. Bomber Command unloaded 5,853 tons of bombs by dawn on selected coastal batteries lining the Bay of the Seine between Cherbourg and Le Havre. Next, the airborne troops began the largest airborne operation conducted up to this time. About 0130 hours, 6,600 men of the 101st Airborne Division began dropping behind UTAH Beach, and an hour later, the three parachute regiments of the 82d Airborne Division began their descent to the west of the 101st. At 0230 hours, two brigades of the British 6th Airborne Division were dropped east of the Orne River, between Caen and the sea. At dawn, the Eighth Air Force took up the air attacks; and in the half hour before the touchdown of the assault forces (from 0600 to 0630 hours) 1,365 American heavy bombers, although hampered by the weather, dropped 2,746 tons of high explosives on the shore defenses. Then the medium and light bombers and the fighter-bombers of the Allied Expeditionary Air Force swarmed in to attack individual targets among the defenses.
During the remainder of the day, the strategic air forces concentrated their attacks upon the key communication centers behind the enemy's lines; and the tactical air forces roamed over the entire battle area, attacking German defensive positions, shooting up buildings known to house headquarters, strafing troop concentrations, and destroying transport. During the twenty-four hours of 6 June, Allied aircraft flew 13,000 sorties, and during the first eight hours alone, dropped 10,000 tons of bombs. In contrast to this mighty Allied air assault, such reconnaissance and defense patrols as were flown by the Germans were mainly over the Pas de Calais area, while over the assault beaches and their approaches, only some fifty half-hearted sorties were attempted.* Meanwhile, the Allied sea armada drew in toward the coast of France, preceded by its flotillas of mine sweepers. The bad weather and high seas having driven the enemy surface patrol craft into their harbors, the 100-mile movement across the Channel was uneventful. By 0300 hours, the ships were anchoring in the transport areas some thirteen miles off their assigned beaches, where a stiff wind and waves up to six feet high were making more difficult the complicated task of loading the troops into their landing craft (from the transports) and forming up the assault waves for the dash to the beaches. At 0550, the heavy naval fire-support squadrons began a forty-minute bombardment of the major coast-defense batteries, which were quickly silenced. As H-hour** drew near, the troops approached their beaches under the "comforting thunder" of fire support from destroyers that were closing in to bombard at close range the enemy pillboxes and strong points that commanded the beaches. In addition, each assault force had its own fire support in the form of rocket batteries, tanks, and self-propelled artillery mounted in landing craft. As the assault teams neared the beaches, all this tremendous weight of fire support from the sea and the air reached a climax.

* Total German air sorties for D-day were about 500.
** The Allied assault forces used four different H-hours to meet the differing conditions of tide and bottom on the main assault beaches. The hour was 0630 at UTAH and OMAHA, while the British landings came between 0700 and 0800.

The terrain behind the beaches generally favors defensive tactics and the whole is unsuitable for mobile warfare. In the VII Corps zone, the smooth and shallow beaches in the vicinity of Varreville are backed by sand dunes that extend inland 150 to 1,000 yards. Behind the sand dunes, the low ground had been inundated to a width of one to two miles, restricting travel from the beaches to four easily defended causeways. Farther inland, the Merderet River, running parallel to the coast, and the Douve River, from which the ground rises northward to the hills around Cherbourg, restrict traffic to the established roads. Ste. Mère Eglise, St. Sauveur, and Barneville are key points on the road nets leading to Cherbourg. Southeast of UTAH Beach the Douve and Vire Rivers flow into the shallow, muddy Carentan Estuary, which marked the boundary between the VII and V Corps. From Grandcamp, cliffs extend eastward to Arromanches with only two breaks, one in the Vierville-Colleville area (V Corps zone), where there is a beach five miles long, and one at Port en Bessin. The Aure River behind OMAHA Beach is a serious obstacle for a distance of ten miles from its mouth, near Isigny. From Arromanches to the Orne River (in the British zone), the beach is backed by sand dunes, low cliffs, or gently rising ground. Between the Orne and the Dives is a wide, marshy valley.
Southeast of Caen there is an open, rolling plain that extends to Falaise; but between the Vire and Orne Rivers the area is covered to a depth of forty miles inland by bocage.* In this area, observation is limited, and vehicle movement is restricted to the roads. The highlands that extend across the invasion front, with a depth up to twenty-five miles, are broken with steep hills and narrow valleys. The dominant terrain is some eighteen miles southwest of Caen. Although narrow, the roads in the entire area are generally metaled and good. Key centers in the road net, and therefore vital initial terrain objectives, are the towns of Carentan, St. Lô, Bayeux, and Caen. The area in which the landings were made was initially held by the LXXXIV Corps of the German Seventh Army, with one panzer and five infantry-type divisions deployed between Cherbourg and Caen.* A panzer brigade, an infantry brigade, and a parachute regiment held the coast of the Gulf of St. Malo from Barneville to Avranches. On D-day, Field Marshal Rommel was absent from the front, being at his home near Ulm celebrating his wife's birthday.

* As stated previously, land divided into small fields by hedges, banks, and sunken roads.

The eight Allied regimental-brigade assault teams landed on the beaches generally as planned and joined up with most of the paratroopers. By the end of D-day, the four corps had established beachheads as shown by the blue phase line on the map. On all beaches except OMAHA, the opposition had been lighter than expected; and, although all the D-day objectives had not been secured, a foothold had been gained in western Europe. VII Corps Landings.--The mission of the VII Corps was expressed in its field order, which read: "VII Corps assaults UTAH Beach on D-day at H-hour and captures Cherbourg with minimum delay."
The 101st Airborne Division was to clear the way for the seaborne assault by seizing the western exits of the four causeways across the inundated area and organizing the southern flank of the corps beachhead for defense and further exploitation. The 82d Airborne Division was to secure the western edge of the beachhead by capturing Ste. Mère Eglise, and it was to establish deep bridgeheads over the Merderet River to facilitate a later attack to the west to seal off the peninsula.** The 4th Division was to establish the UTAH beachhead and then advance on Cherbourg. Appendix 8a shows the airborne and seaborne landings and the major operations of the three VII Corps divisions on D-day. The airborne divisions were preceded by twenty pathfinder crews that experienced some difficulty in marking the six drop zones. The 101st approached France in a tight formation; but from the coast to the Merderet, cloud banks loosened the formation, and east of the Merderet flak scattered the transport planes still further. In general, the division did not have a good drop. About 1,500 men were either killed or captured, and approximately 60 per cent of the equipment was lost when the bundles fell into swamps or into fields covered by enemy fire.

* The LXXXIV Corps was commanded by Lieutenant General Marcks, whose headquarters was in St. Lô. The divisions, from west to east, were 243d, 91st, 709th, 352d, 716th, and 21st Panzer. The Seventh Army headquarters was in Le Mans.
** This represented a last-minute change in the corps plan, which had originally prescribed a landing by the 82d Airborne Division west of St. Sauveur to block the movement of enemy reinforcements into the western half of the Cotentin Peninsula. The identification of the German 91st Division in the area at the end of May necessitated employing the 82d farther east to insure the success of the beach landings.
Only a fraction of the division's organized strength could initially be employed on the planned missions, and many of the missions carried out were undertaken by mixed groups formed from the scattered paratroopers. Fifty-one gliders came in about dawn, and thirty-two more arrived at 2100 hours. In spite of the scattered landings, the sudden appearance of the Americans created such confusion among the Germans that it tended to offset the disorganization of the invaders; and by dint of considerable improvisation, the 101st was able to accomplish most of its initial missions. A group of about seventy-five men from the northern regiment made for one of the division's main objectives, an enemy 150-mm coastal battery at Varreville, and found it deserted. They then pushed on and secured the two northern exits from the beach while other troops established defensive positions to the northwest. A force of about eighty men from the center regiment attacked the Germans holding the southern causeway, forcing them to surrender by noon. About 1230 hours, the first contact was established with the 4th Division when one of its infantry battalions advanced across the southern causeway. Other troops of the center regiment captured a German 105-mm battery. In the south, the bridges over the Douve north of Carentan were seized by fifty men about 0500 hours, and 150 others secured the lock in the river northwest of Carentan. Heavy fighting developed east of the Ste. Mère Eglise-Carentan road as the paratroopers tried to move to the southwest to destroy other bridges on the Douve. The Germans were bringing intense artillery, mortar, and small-arms fire on the Americans in this area, but a naval shore fire-control officer was able to contact the cruiser Quincy, whose 8-inch salvos neutralized the enemy fire. By evening, 2,500 of the original 6,600 men of the 101st were working together.
The division had accomplished all of its initial missions except destroying the Douve bridges west of Carentan, and was assembling in the southern part of its zone to resume the attack to the southwest. Of the 82d Airborne Division's three regiments, the two that were to land west of the Merderet had the worst drop pattern of all the airborne units. In contrast, the regiment that landed east of the Merderet had a very successful drop; and almost half of its 1,000 men were able to assemble rapidly. One battalion immediately started for Ste. Mère Eglise. The men were ordered to go directly into town without searching buildings; and they were told to use only knives, bayonets, and grenades while it was dark so that enemy small-arms fire could be spotted by sight and sound. By 0430, the battalion had occupied the town and had hoisted the same American flag that it had raised over Naples upon its entry into that city. Enemy counterattacks had to be beaten off during the day, but by nightfall the situation was well in hand. The chief concern, however, of the 82d Airborne Division during D-day was around two Merderet River bridges west of Ste. Mère Eglise, where the bulk of the assembled forces were committed and where the enemy put up his strongest resistance.* A miscellaneous group of about 400 men from all regiments of the 82d launched an attack about noon and seized one bridge, but they were unable to consolidate a position on the west bank. About this time, the Germans launched a counterattack that recaptured the bridge and isolated the Americans west of the river. The enemy continued to attack across the bridge during the afternoon, but the 82d held the east bank. An attempt by the paratroopers to seize another bridge farther south was unsuccessful. Thus we see that at the end of D-day, the situation in the 82d Division area was not good.
The failure to hold the Merderet bridge created a tactical problem that was to engage the major forces of the entire division for the next three days. Probably the weakest feature of the situation of the airborne divisions at the close of D-day was the lack of communication between their own units and lack of information on the progress of the seaborne landings.

* This resistance was partially due to the fact that the 82d Airborne Division's regiments west of the Merderet had landed practically on top of the headquarters of the German 91st Division.

In the meantime, the 4th Division was making its assault on UTAH Beach with relative ease, much to the surprise of everyone. During the naval pre-invasion bombardment, 276 medium bombers of the Ninth Air Force dropped over four thousand 250-pound bombs on beach objectives in the assault area. While the first waves of the assault troops were 700 yards off shore, seventeen fire-support craft discharged their rockets. The first wave consisted of twenty LCVP's, each carrying a thirty-man infantry assault team, and eight LCT's, each carrying four amphibious (DD) tanks. Almost exactly at H-hour, the assault craft lowered their ramps, and 600 men walked into waist-deep water to wade the last 100 yards to the beach. Enemy artillery fired a few shells, but otherwise there was no opposition. The tanks were launched from their LCT's about 3,000 yards from the beach and landed about fifteen minutes after the first assault wave.* The only major divergence from the plan was that the entire force was landed 2,000 yards south of the planned landing area. This error may have been caused by lack of naval control vessels,** by the strong tidal current, or because the shore had been obscured by the smoke and dust of the air and naval bombardment. The error proved fortunate, however, since the defenses in the south were found to be weaker than those where the landing was supposed to take place.
Army and Navy demolition teams following the assault infantry found the beach less thickly obstructed than anticipated, and the entire area was cleared in an hour. The work of demolishing the sea wall behind the beach and clearing paths through the sand dunes progressed rapidly. The infantry found enemy troops occupying field fortifications; but, apparently demoralized by the preparatory bombardment, they showed little fight. Beach opposition was soon cleaned up, and the assault troops reorganized for the advance inland. By 0800 hours, four battalions of infantry had crossed the flooded area on the three southern causeways and then advanced to the west to gain contact with the airborne troops. By evening, two battalions were on the Carentan road south of Ste. Mère Eglise, while the third battalion was compressing the large enemy pocket that separated the 82d Airborne Division at Ste. Mère Eglise from the rest of the corps. The other infantry battalions of the 4th Division came ashore about noon and began moving out to the northwest to enlarge the beachhead. Since all the causeways were already congested or under enemy fire, these troops had to wade through the waist-deep water in the inundated areas, somewhat delaying their advance. One of the most critical problems during the morning had been the vehicle congestion created on the beach because of the limited number of exits. The landing at UTAH Beach met less opposition than any of the others, and the 4th Division's losses were astonishingly low. Its total D-day casualties were 197 men, including sixty lost at sea. No less noteworthy was the speed of the landings. The entire division (except one artillery battalion) landed during the first fifteen hours, and by the end of the day over 20,000 troops and 1,700 vehicles had reached UTAH.*

* The thirty-two DD tanks played little part in the assault. One LCT struck a mine and sank; so only twenty-eight of the tanks reached the beach.
** Only one of the original four control craft was available to guide the assault wave because two had been sunk and one was bringing in the tanks.
* The troops included, in addition to the 4th Division, one battalion of the 90th Division, an armored field artillery battalion, a tank destroyer battalion, a chemical mortar battalion, two tank battalions, and the 1st Engineer Special Brigade.

D-Day on OMAHA Beach.--The mission of the V Corps was to secure a beachhead in the area between the Vire River and Port en Bessin, from which its troops would push southward toward Caumont and St. Lô, conforming to the advance of the British Second Army. The D-day mission of the 116th Regimental Combat Team, on the right, included the capture of Vierville and St. Laurent, an advance to the west to occupy the area between the flooded Aure River and the sea, and preparations to seize Isigny and make contact with the VII Corps. The 16th Regimental Combat Team, on the left, would seize Colleville, cross the Isigny-Bayeux road, and take up defensive positions covering the southeastern section of the beachhead, from Trévières to Port en Bessin.** Follow-up regiments of the 1st Division would pass through the 16th Regimental Combat Team and establish contact with the British. The operations on OMAHA Beach will be discussed somewhat in detail because it was there that the Allies met their most serious initial resistance. Appendix 8b shows the terrain at OMAHA Beach, the planned and actual landings of the first assault wave, some of the enemy strong points, and major V Corps operations during D-day.

** The 116th and 115th Regimental Combat Teams of the 29th Division were attached to the 1st Division for the landing. Later in the day, the 29th Division was to assume control of operations on the right and the 1st Division on the left. This organization was designed to fit an operation that would develop from an assault by one reinforced division into an attack by two divisions abreast.

In the OMAHA sector, the part of the beach regarded as suitable for landing operations is about 7,000 yards long, on a shore that curves landward in a very slight crescent and is backed with bluffs that merge into the cliffs at either end of the sector. The diagrammatic cross-section in the appendix shows the principal features of the beach. The tidal flat of firm sand is exposed at low tide, but at the high-water mark it terminated in a bank of shingle (very heavy gravel that was removed after the landing) that sloped up rather steeply to a height of some eight feet. In places, the shingle embankment was as much as fifteen yards wide, the stones averaging three inches in diameter. On the eastern two-thirds of the beach, the shingle lay against a low sand embankment or line of dunes that formed a barrier impassable for vehicles. On the western part of the beach, the shingle was piled against a sea wall. Between the dune line or sea wall and bluffs lies the beach flat or shelf. Very narrow at either end of the main landing zone, this level shelf of sand widens to more than 200 yards near the center of the stretch. The flat was marked by large patches of marsh and high grass, a road parallel to the beach, and some summer villas. Bluffs 100 to 170 feet in height rise sharply from the flat and dominate the whole beach area. Their slopes are generally steep, but in varying degree. Four valleys provide exits from the beach flat and were, inevitably, key areas both in the plan of attack and in the arrangement of the defense. Realizing that OMAHA Beach was a suitable landing area, the enemy had prepared formidable defenses.
In the tidal flat were three bands of heavily mined underwater obstacles consisting of element C's, heavy logs, and steel hedgehogs. On the shelf behind the shingle, liberal use was made of barbed wire and mines. Firing positions were laid out to cover the tidal flat and beach with direct fire, both plunging and grazing, from all types of weapons. Observation on the whole OMAHA area and flanking fire from cliff positions at either end were aided by the crescent-shaped curve of the shore line. Each strong point was a complex system of pillboxes, gun casemates, open positions for light guns, and fire trenches surrounded by mine fields and wire. These were connected with each other and with underground quarters and magazines by deep trenches or tunnels. Most of the strong points were situated near the entrances to the valleys, which were further protected by antitank ditches and road blocks. While machine guns were the basic weapons in all emplacements, there were over sixty light artillery pieces of various types. The heavier guns were sited to give lateral fire along the beach, with traverse limited by thick concrete wingwalls that concealed the flashes and made the guns hard to spot or destroy from the sea. All main enemy defenses were on the beach or just behind it, defenses beyond the beach depending largely on the use of local reserves in counterattacks. The V Corps plans had been worked out in great detail; the assault landing teams had been built up to include every type of specialized technique and weapon needed to fight at the beach; and every unit, down to the smallest, had been trained to carry out a particular task in a definite area. Six companies of tanks (including ninety-six tanks and sixteen tank dozers), eight companies of infantry (1,450 men), and twenty-four special Army and Navy engineer demolition teams would come in with the first wave.
Thirty minutes later, after the demolition teams had cleared gaps through the obstacles, the remainder of the two leading assault infantry regiments and two battalions of Rangers would begin to arrive. By H plus 90 minutes, the artillery and engineer special brigades would start landing. The naval bombardment included 600 rounds of 12- and 14-inch shells from battleships and about 3,000 rounds of 4- to 6-inch shells from cruisers and destroyers, all directed against the beach strong points. The fire-support craft drenched the beach defenses with about 9,000 rounds of light artillery fire from H minus 30 to H minus 5 minutes; and when the assault waves were 300 yards from the beach, 9,000 rockets were fired. As the landing craft neared the beach* at 0630 hours, there was every reason to hope that the enemy shore defenses might have been neutralized by the bombardment. But almost at once, many of the landing craft began to come under fire from automatic weapons and artillery that increased in volume as they approached the touch-down points. It was evident that the enemy fortifications had not been knocked out. The situation rapidly became worse when the assault troops hit the beach, which they found unscarred by the heavy air bombardment that had just been completed.** Only sixteen of the forty-eight tanks in the 16th Regimental Combat Team sector survived the landing, and only three of the tank dozers arrived in working condition. The special demolition teams suffered 41 per cent casualties during the day--most of them in the first half hour.*** Most of the landing craft grounded fifty to a hundred yards out, sometimes in water neck deep. In crossing the 200 yards of open sand to the cover of the shingle and sea wall, the infantry suffered their heaviest casualties of the day from mortar, artillery, and converging machine-gun fire. Only one of the eight infantry companies in the first wave was ready to operate as a unit after crossing the lower beach.
The second assault wave began touching down at 0700 hours; but since no advance had been made beyond the shingle, and since neither the tanks nor the scattered groups of infantry already ashore were able to give much covering fire, these later troops experienced the same difficulties as the first wave. Mislandings continued to hinder reorganization, and at 0830 the landing of vehicles was suspended. The following account indicates how critical the situation had become: As headquarters groups arrived from 0730 on, they found much the same picture at whatever sector they landed.

* As shown by Appendix 8b, a majority of the landing craft came in east of their appointed beach sectors, some being as much as 1,000 yards out of position. This made reorganization of units most difficult and caused much confusion.
** The heavy overcast had forced the use of pathfinder instruments by the Eighth Air Force. With this technique and its greater range of possible error, it was necessary to push the center of the drop pattern farther inland to insure the safety of the assault craft. [The HyperWar editor finds it curious that none of the official histories of any of the services addresses the poor performance of the strategic air forces on D-day in the context of the over-reaching claims put forward by their proponents during the preceding 25 years ("winning wars by strategic bombing alone", accuracy sufficient to "drop a bomb in a pickle barrel", etc.). This volume, itself, often refers to the Eighth Air Force's "precision bombing" attacks -- "precision" meaning that half the bombs fell within 1-5 miles of the target! At a minimum, one has to ask how the choice of dates for the assault would have been affected if the value of the strategic bombers' contribution had been realistically assessed. -- HyperWar]
*** However, they did succeed in blowing six gaps through the bands of obstacles, although only one of them could be marked because of loss of equipment.
Along 6,000 yards of beach, behind sea wall or shingle embankment, elements of the assault force were immobilized in what might well appear to be hopeless confusion. As a result of mislandings, many companies were so scattered that they could not be organized as tactical units. At some places, notably in front of the German strongpoints guarding draws, losses in officers and noncommissioned officers were so high that remnants of units were practically leaderless. Bunching of landings had intermingled sections of several companies on crowded sectors like Dog White, Easy Green, and Fox Green. Engineers, navy personnel from wrecked craft, naval shore fire control parties, and elements of other support units were mixed in with the infantry. In some areas, later arrivals found it impossible to find room behind the shingle and had to lie on the open sands behind. Disorganization was inevitable, and dealing with it was rendered difficult by the lack of communications and the mislanding of command groups. However, even landing at the best point, a command party could only influence a narrow sector of beach. It was a situation which put it up to small units, sometimes only a remnant of single boat sections, to solve their own problems of organization and morale. There was, definitely, a problem of morale. The survivors of the beach crossing, many of whom were experiencing their first enemy fire, had seen heavy losses among their comrades or in neighboring units. No action could be fought in circumstances more calculated to heighten the moral effects of such losses. Behind them, the tide was drowning wounded men who had been cut down on the sands and was carrying bodies ashore just below the shingle. Disasters to the later landing waves were still occurring, to remind of the potency of enemy fire. Stunned and shaken by what they had experienced, men could easily find the sea wall and shingle bank all too welcome a cover. 
It was not much protection from artillery or mortar shells, but it did give defilade from sniper and machine-gun fire. Ahead of them, with wire and minefields to get through, was the beach flat, fully exposed to enemy fire; beyond that the bare and steep bluffs, with enemy strongpoints still in action. That the enemy fire was probably weakening and in many sectors was light would be hard for the troops behind the shingle to appreciate. What they could see was what they had suffered already and what they had to cross to get at the German emplacements. Except for supporting fire of tanks on some sectors, they could count on little but their own weapons. Naval gunfire had practically ceased when the infantry reached the beach; the ships were under orders not to fire, unless exceptionally definite targets offered, until liaison was established with fire control parties. Lacking this liaison, the destroyers did not dare bring fire on the strongpoints through which infantry might be advancing on the smoke-obscured bluffs. At 0800, German observers on the bluff sizing up the grim picture below them might well have felt that the invasion was stopped at the edge of the water. Actually, at three or four places on the four-mile beachfront, U. S. troops were already breaking through the shallow crust of enemy defenses.* The outstanding fact about these first two hours of action is that despite heavy casualties, loss of equipment, disorganization, and all the other discouraging features of the landings, the assault troops did not stay pinned down behind the sea wall and embankment. At half a dozen or more points, they found the necessary drive to leave their cover and move out over the open beach flat toward the bluffs. In nearly every case where an advance was attempted, it carried through the enemy beach defenses. 
Some penetrations were made by units of company strength; some were made by intermingled sections of different companies; some were accomplished by groups of twenty or thirty men, unaware that any other assaults were under way. Various factors, some of them difficult to evaluate, played a part in the success of these advances. Chance was certainly one; destroyers' guns and tanks were called on for support and rendered good service; combat engineers blew gaps through enemy wire and cleared paths through minefields. But the decisive factor was leadership. Wherever an advance was made, it depended on the presence of some few individuals, officers and noncommissioned officers, who inspired, encouraged, or bullied their men forward, often by making the first forward moves. A characteristic of these early penetrations that influenced the rest of the action on D-day was that they were not made up the draws, as planned, but up the bluffs, as shown on the chart. Conditions on the beach improved later in the morning. Fire from the main enemy strong points was gradually reduced as one gun emplacement after another was knocked out, often by tanks. Support from naval units, necessarily limited during the first landings, began to count heavily later on and became a major factor as communications improved between shore and ships. The first decisive improvement along the beach came at the draw northeast of St. Laurent. About 1130, the last enemy defenses in front of it were reduced, and within half an hour engineers were clearing mines in the draw and working dozers on the western slope to rush through an exit road. This road became the main funnel for movement off the beach, although traffic soon became jammed on the plateau at the head of the draw, since the ground was not cleared farther inland. * Historical Division, OMAHA Beachhead. No attempt will be made to describe the fighting inland during D-day. In general, the action centered around the towns of Vierville, St.
Laurent, and Colleville. In all three areas, assault units, usually in less than battalion strength, fought more or less uncoordinated and separate actions. For example, in the St. Laurent area, elements of five battalions spent the afternoon and evening fighting through an area of about a square mile that contained only scattered pockets of enemy resistance. The effectiveness of the attacking forces had been reduced by a number of factors, including lack of communications, difficulties of control, and the absence of artillery and armored support. Yet in spite of continued enemy fire on the beach and the over-all confusion, progress was steady as the day wore on. The two support infantry regiments arrived about noon, and General Huebner and the command group of the 1st Division landed at 1900. By evening, thirteen gaps in the beach obstacles had been made, and work was progressing on additional beach exit roads. One account summarizes the situation at the end of D-day as follows: The assault on Omaha Beach had succeeded, but the going had been harder than expected. Penetrations made in the morning by relatively weak assault groups had lacked the force to carry far inland. Delay in reducing the strongpoints at the draws had slowed landings of reinforcements, artillery, and supplies. Stubborn enemy resistance, both at strongpoints and inland, had held the advance to a strip of ground hardly more than a mile-and-a-half deep in the Colleville area, and considerably less than that west of St-Laurent. Barely large enough to be called a foothold, this strip was well inside the planned beachhead maintenance area. Behind U. S. forward positions, cut-off enemy groups were still resisting. The whole landing area continued under enemy artillery fire from inland. Infantry assault troops had been landed, despite all difficulties, on the scale intended; most of the elements of five regiments were ashore by dark.
With respect to artillery, vehicles, and supplies of all sorts, schedules were far behind. Little more than 100 tons had been got ashore instead of the 2,400 tons planned for D Day. The ammunition supply situation was critical and would have been even worse except for the fact that 90 of the 110 pre-loaded dukws in Force "O" had made the shore successfully. Only the first steps had been taken to organize the beach for handling the expected volume of traffic, and it was obvious that further delay in unloadings would be inevitable.* Casualties for the V Corps were in the neighborhood of 3,000 killed, wounded, and missing. The two assaulting regimental combat teams lost about 1,000 men each. The highest proportionate losses were taken by units that landed in the first few hours, including * Historical Division, OMAHA Beachhead. engineers, tank troops, and artillery. Whether by swamping at sea or by action at the beach, matériel losses were considerable, including twenty-six artillery pieces and over fifty tanks. About fifty landing craft and ten larger vessels were lost, with a much larger number of all types damaged. The principal cause of the difficulties of the day was the unexpected strength of the enemy at the beaches. The German 352d Division had just moved into the area to reinforce the coastal troops. As a result, all strong points were completely manned, reserve teams were available for some of the weapons positions, and there were units close behind the beach in support of the main defenses. A most surprising feature of the day's action was the enemy's failure to stage any effective counterattacks. A determined counterblow of even battalion strength might have pushed the battle back to the beach; but, instead, the enemy's power had been frittered away in stubborn defensive action by small groups. A subsidiary V Corps operation was an attack by Rangers on an enemy coastal battery of six 155-mm. howitzers at Pointe du Hoe, three miles west of OMAHA Beach. 
Three Ranger companies made a frontal assault on the position and, with the help of ropes,* managed to scale the cliffs. The enemy guns had been removed from their casemated positions; but by 0900, the Rangers had discovered them some distance away, where they had been cleverly camouflaged and sited to fire on either UTAH or OMAHA Beach. The guns were destroyed; and the Rangers beat off counterattacks for the rest of the day while waiting for other Rangers to join them overland from Vierville, where they had landed with the rest of the V Corps assault troops. Other D-Day Operations.--While General Bradley's troops were meeting with varying fortunes on UTAH and OMAHA Beaches, the British Second Army was achieving good results in its zone. General Dempsey's D-day objective was to secure a beachhead extending generally from Port en Bessin through Bayeux and Caen and along the Orne to the sea. The airborne landings began at 0200 when six gliders silently landed "like thieves in the night" to seize the Orne bridges north of Caen. Half an hour later, two brigades of the 6th Airborne Division dropped east of the Orne. The Orne bridgehead was reinforced, and bridges over the Dives were destroyed. By 0500 hours enemy counterattacks began to develop; * These ropes were anchored by grapnels that had been attached to rockets and fired from the landing craft. but the position was stubbornly defended, and in the afternoon contact was established with seaborne forces. The organization of the British assault teams, and their naval and air support, was similar to that of the American teams. (Appendix 8c includes a schematic diagram of one of these teams.) They also experienced unfavorable weather and seas, but the German beach defense was less spirited than at OMAHA.
Infantry of the two brigade assault teams of the British 50th Division, followed by DD tanks, landed east of Arromanches about 0725; the Canadian 3d Division's two brigades touched down astride Courseulles about 0800; and the British 3d Division landed just east of Lion sur Mer at 0725 hours. Once clear of the beaches, steady progress was made, although some enemy strong points were by-passed. Probably the most severe opposition occurred in the afternoon north of Caen, where an enemy counterattack by infantry and some twenty tanks of the 21st Panzer Division was stopped by the British 3d Division and some armor. By the end of D-day, the Allies had breached the Atlantic Wall all along the invasion coast, and all assaulting divisions were ashore. Apart from the factor of tactical surprise, the comparatively light casualties that were sustained on all the beaches except OMAHA were in large measure a result of the splendid equipment (amphibious tanks, modern types of landing craft, rocket boats, etc.) that we employed. The greatest and longest step toward the destruction of the German armies in the west had been taken. Initially dazed and confused by the pre-invasion air and naval bombardment and the air landings, and with his communications disrupted throughout the invasion area, the enemy was unable to diagnose the extent of the invasion or to react quickly with effective countermeasures. The German reaction is well described in the following account that has been pieced together from captured documents and prisoner-of-war interviews: At 0130 on 6 June the German Seventh Army received word from LXXXIV Corps that landings from the air were under way from Caen to the northern Cotentin. By 0230 Army felt able to designate the focal areas as the Orne River mouth and the Ste. Mère Eglise sector.
In contrast to Seventh Army's views that the Allies were attacking to cut off the Cotentin Peninsula, Army Group [B] and Western Command [OB West] were of the opinion that a major enemy action was not in progress. Despite further reports of parachute landings at inland points all through western Normandy, at 0400 General Marcks (LXXXIV Corps) confirmed the first impression that the focal points were the Caen sector and around Ste. Mère Eglise. He reported that the 915th Infantry [a regiment of the 352d Division], corps reserve, had been ordered to occupy the Carentan area with the mission of maintaining communications through that point. Army Group alerted the 21st Panzer Division, attached it to Seventh Army, and ordered it to attack in the Caen area with main effort east of the Orne. At 0600 Corps reported heavy naval gunfire from Grandcamp to the Orne; at 0645 Army told Army Group that the Allied intentions were still not clear and expressed an opinion that the naval gunfire might be part of a diversionary attack, to be followed by the main effort in some other area. Not until 0900 did Army hear from LXXXIV Corps that heavy landings from the sea had taken place from 0715 on; the sectors reported were from the Orne to northeast of Bayeux and at Grandcamp. At 0925, Corps reported the situation as very threatening north of Caen, with Allied armor reaching artillery positions, and asked for a mobile reserve to be constituted at once west of Caen. Penetrations in the forward positions of the 352d Infantry Division were reported at this time but were not regarded as dangerous. Corps reported at 1145 an Allied bridgehead 16 miles wide and over 3 miles deep north and northwest of Caen; no information was on hand from the 352d Division, and communications were out with the eastern Cotentin area. Both Army and Corps were convinced that the Caen landings presented the main threat; the 21st Panzer Division was headed for the beachhead both east and west of the Orne.
At noon Corps stated that attempted sea landings from the Vire to the coast northeast of Bayeux had been completely smashed and the only critical area was that near Caen. The 352d Division advised Army at 1335 that the Allied assault had been hurled back into the sea; only at Colleville was fighting still under way, with the Germans counterattacking. At 1500 Army Group decided to put I SS Panzer Corps in charge of the Caen area. The 12th SS Panzer would move at once from the Alençon area toward Caen; Panzer Lehr was to come behind it. The 21st Panzer Division had elements north of Caen by 1600 and was expected to enter the battle at any moment. At 1620 Army gave Army Group a general estimate of the situation. The situation in the Cotentin was noted as reassuring, and German forces on hand there were regarded as adequate. Army expressed its surprise that no landings by sea had supported the airborne troops, and hazarded the view that the Allied operation in this sector was diversionary. Twenty minutes later this conclusion was upset by word from Corps that sea landings had taken place just north of the Vire mouth. At 1800 more bad news came from the 352d Division: Allied forces had infiltrated through the strong points, and advance elements with armor had reached Colleville. As for the evening attack of the 21st Panzer Division, that unit had at first made progress and nearly reached the coast [near Douvres]; it then met heavy resistance and was forced to yield ground. By midnight Seventh Army and Army Group had made plans for a heavy panzer counterattack on 7 June against the British landing area by I SS Panzer Corps, with the 716th Division attached. The 21st Panzer Division would attack east of Caen; 12th SS Panzer and Panzer Lehr west of Caen. Steps had been taken during the day for setting in motion other units to reinforce the battle area.
Battle groups from the 266th and 77th Divisions were put in a state of readiness, and those from the 265th Division were started by rail transport as reinforcement for LXXXIV Corps. All these units were in Brittany, and some hesitation was felt by Army Group in taking too much strength from that area before Allied intentions were fully clarified. At the end of D-day the German Seventh Army had decided that the landings near the Orne constituted the main threat and had taken steps to commit its strongest and most readily available reserves in that sector. The situation in the Cotentin was not causing particular worry. Information as to the OMAHA Beach sector had been scanty throughout the day, and both Corps and Army tended to pay little attention to developments there. When Hitler, on 6 June, received word of the invasion he announced, "It's begun at last." He was confident that all measures were being taken to meet the crisis and that by 13 June counterattacks would wipe out any beachheads.* Operations, 7-12 June.--During the next five days, the Allies concentrated their efforts toward joining up the beachheads into one uninterrupted lodgement area and bringing in the supplies of men and materials necessary to consolidate and expand their foothold. General Bradley's first concern was to strengthen his tenuous hold at OMAHA Beach, launch attacks to secure Isigny and Carentan--to join the V and VII Corps beachheads--and establish firm contact with the airborne divisions so that the VII Corps could begin its drive for Cherbourg. The gap between the British and Americans also had to be closed, and General Dempsey planned to develop his operations with all possible speed to capture Caen. On 8 June, slow progress was made on all fronts.
Enemy artillery fire continued to harass beach operations, and scattered enemy groups still held positions within the perimeter of the beachheads.** The Allied forces had not yet recovered from their D-day disorganization, and in many cases, units were not only understrength but lacked even the infantry heavy weapons that were necessary for any effective attack. However, since the Allied build-up continued at a faster tempo than the enemy could shift reserves to the threatened positions on the beachhead perimeter, the situation gradually improved along the entire line. * Prepared by Historical Division, Department of the Army. ** The largest of the strong points was at Douvres in the British zone, where the Germans held out until 17 June in installations 300 feet underground. By 9 June, the VII Corps had secured most of its D-day objectives. The 4th Division had advanced about three miles toward Cherbourg, but was meeting stiff resistance from German coastal fortifications northwest of Varreville. In the center, the 82d Airborne Division had been stalemated at the Merderet bridges on the 7th, but by evening of the 9th, had fought its way westward to establish a bridgehead that included the elements of the division that had been isolated since D-day. But it was on the left that General Collins' chief concern lay. Here, similar to the early situation at Salerno, an eight-mile gap remained between his left flank and the right flank of the V Corps. General Bradley designated as first-priority missions the capture of Carentan by the VII Corps and Isigny by the V Corps. General Collins directed the 101st Airborne Division to capture Carentan, and the division launched a coordinated attack on that objective early on 8 June. By evening, the 101st had advanced to the Douve River; but further progress was difficult because any attack from the northwest was canalized along the Ste. Mère Eglise-Carentan causeway, the only approach to Carentan over the inundated countryside.
A new attack was launched during the night of 9-10 June, with one force directed to cross the Douve on the causeway, by-pass Carentan, and seize the high ground southwest of the city. A left-flank force of the 101st was to cross the Douve on the bridges that had been held since the 6th, establish contact with the V Corps between the Vire and Douve, and press in on Carentan from the east. A bitter two-day battle developed on the causeway, but by evening of the 11th, the paratroopers had gained control of that important stretch of road. At the same time, the left-flank force made more rapid progress to the east; one company established contact with the 29th Division on the morning of the 10th, and the rest of the force pressed on against Carentan. During the night of 11-12 June the town was set ablaze by artillery and naval gunfire; and early the next morning, the 101st closed the pincers on Carentan, which was seized by 0730 hours. The paratroopers immediately organized defensive positions southeast of the town to meet impending counterattacks from the German 17th SS Panzer Grenadier Division, which was just arriving in this critical area.* In the center of the VII Corps zone, General Collins decided to commit the fresh 90th Division through the 82d Airborne's Merderet * The position of the 101st Airborne Division at Carentan remained precarious until 13 June, when it was reinforced by one combat command of the 2d Armored Division (V Corps). bridgehead in an attempt to break through to key terrain in the west. The 90th launched its attack early on the 10th; but enemy artillery fire on the Merderet bridges, strong defensive positions on high ground to the west, and persistent counterattacks held the new division to gains measured in hundreds of yards. By the 12th, the 90th was still making only slow progress in its attempt to cut the peninsula.
The 4th Division had captured the enemy forts barring its advance, and by the 12th, was approaching its objective, a ridge running through Montebourg to Quineville; but enemy resistance increased as the German 77th Division was committed near Montebourg, and it threatened to block this drive on Cherbourg. In the meantime, in the V Corps zone isolated enemy strong points were cleared on the 7th, and all exit roads from the beach were opened by noon. The 29th Division became operational and took up the attack to the west but made little progress the first day. On the 8th, however, this division scored remarkable gains as one regiment relieved the besieged Rangers on Pointe du Hoe and moved on to seize Grandcamp and enemy artillery batteries that had been firing on the beaches. The center regiment advanced twelve miles, capturing Isigny and a bridge across the Aure River by 0800 hours on the 9th. The left-flank regiment closed up along the Aure. The 1st Division's attack carried across the Isigny-Bayeux highway and struck an enemy pocket south of Port en Bessin. During the night of 8-9 June, the Germans evacuated this pocket with considerable losses, and early the next day, the 1st Division established contact with the British along the Bayeux highway. So by early morning of the 9th, the V Corps had also reached most of its D-day objectives, and artillery fire on the beaches had been practically eliminated. At noon on the 9th, General Gerow launched a new attack with three divisions abreast, the 2d Division having taken over a 5,000-yard zone in the center. On the 9th, the greatest gains were scored on the right by the 29th Division, which crossed the Aure and cleared out the area east of the Vire. As we have already seen, contact was made with the 101st Airborne Division on the 10th; and on the 11th, the V Corps' right flank was further strengthened by the arrival of elements of the 2d Armored Division.
In the center, the 2d Division initially met strong resistance around Trévières; but on the 10th, strengthened by the arrival of much of its missing equipment, it moved rapidly ahead to clear Trévières and most of the Forêt de Cerisy as the remnants of the German 352d Division collapsed. On the left, the 1st Division gained momentum to the south as the corps' attack was renewed on the 12th to gain the high ground near St. Lô and Caumont. The 1st Division reached the edge of Caumont by evening, and the 2d Division secured the high ground south of the Forêt de Cerisy; but as the 29th Division approached the hills north of St. Lô, it met resistance that indicated the importance the Germans attached to that key road center. In its first week of fighting, the V Corps suffered 5,846 casualties, of which 1,225 were killed; it captured about 2,500 prisoners and practically destroyed the German 352d Division. The British Second Army also scored good gains on its right, entering Bayeux on the 7th and Port en Bessin on the 8th. The British 50th Division then pressed on to the south abreast of the American 1st Division. The British 7th Armored Division came into action near Tilly on the 10th, but increasing enemy resistance and heavy counterattacks by elements of the Panzer Lehr and 12th SS Panzer Divisions halted progress in the British XXX Corps zone. In the Caen area, the Canadian 3d Division advanced across the Caen-Bayeux road but then met heavy counterattacks by the 12th SS and 21st Panzer Divisions that stopped further progress. The British 3d Division made little headway against the strong German defensive positions covering Caen. In the Orne bridgehead, the 6th Airborne Division withstood persistent counterattacks while the British 51st Division was arriving to launch an attack toward the eastern outskirts of Caen. By 12 June, the entire Allied beachhead was continuous and securely held. The Allies had landed in France and had staked out their claims.
It was now evident that they were there to stay. With sixteen divisions already in the beachhead, supported by a steadily growing amount of nondivisional artillery and armor, the danger of a decisive enemy counterattack was fading from reality.* Three airstrips had already been built in the American zone, and others were under construction. It is almost impossible to comprehend the stupendous logistical difficulties that were overcome by the Allies during the days following the landing. The details of this effort, far beyond the scope of this account, will make an epic story, for on the outcome of the battle of supply depended the success of the invasion of western Europe. With the elimination of enemy artillery fire from the beaches, the supply situation improved rapidly despite continuing unfavorable weather conditions. By the 12th, the beaches were cleared of debris and working smoothly, construction of the artificial * On 12 June, the Germans had only thirteen understrength divisions in Normandy. harbors had begun, and the beach maintenance areas were being established. Although behind planned discharge schedules during the first six days of the operation, 326,547 men were landed, and 54,186 vehicles and 104,428 tons of stores were brought over the beaches. Even though hampered by poor flying weather during the entire week, all records were broken by the Allied air forces. The Eighth Air Force and the R.A.F. began to work on the Loire River bridges while the Ninth Air Force went after the railroad bridges between the Seine and the Loire. Marshalling yards and other traffic centers received due attention. From the 6th through the 11th, over 37,000 tons of bombs were dropped, and 55,000 sorties were flown. Within an arc extending from the Pas de Calais through Paris to the Brittany Peninsula, 16,000 tons of bombs were dropped on coastal batteries, 4,000 tons on airfields, and 8,500 tons on railway targets.
Complete Allied air supremacy was maintained over the beachhead, with only a slight increase in enemy air activity (mostly nighttime mine-laying operations in the Channel) being noted. The map indicates the time of arrival of enemy divisions in Normandy, but it cannot depict the delays encountered en route. As the Allied air attacks on bridges, roads, railroads, and moving columns began to take effect, the German divisions negotiated the distance to the battle area with more and more difficulty. Consequently, units arrived piecemeal, lacking essential weapons and vehicles and short of fuel and ammunition. The increasing Allied pressure on the ground forced the enemy to commit these divisions as they arrived. He was kept so busy plugging the gaps that his much-discussed major counterattack had to be postponed repeatedly. By 12 June, his golden opportunity to crush the invasion had passed, and the Allies definitely held the initiative. During this first week, the Germans lost some 150 tanks and 10,000 men as prisoners. As yet, no divisions had arrived from the Fifteenth Army area in the Pas de Calais, a very significant fact. Rundstedt's entire strategic reserve* had been committed, and three** of the seven divisions in the Brittany Peninsula and the II Parachute Corps had already been shifted to the north. By committing his strongest reserves at Caen, Carentan, and Valognes, Rommel clearly indicated where he most feared Allied advances; and the concentration of panzer forces around Caen demonstrated his particular concern over that important area, whose loss would effectively break the connection with the Fifteenth Army in the Pas de Calais. * 12th SS Panzer, Panzer Lehr, and 17th SS Panzer Grenadier Divisions. ** 77th, 3d Parachute, and 265th Divisions. The Capture of Cherbourg After the establishment of the beachhead, the next major mission of the 21st Army Group was to drive forward before the enemy could recover his breath. 
Toward the end of July 1918, Marshal Foch wrote: "Material forces are veering in our favor. Moral ascendancy we have always had. The moment has now come for us to pass to the offensive." A similar moment had arrived for the Allies in June 1944. They needed elbowroom for the ever increasing flow of reinforcements and supplies and maneuver room for large-scale offensive operations. The most important initial step in the accomplishment of this purpose was the capture of Cherbourg, in order to win essential port facilities for the vast build-up and supply of our forces. Other objectives would be Caen and St. Lô, both key road centers of great importance to either side. But before all these objectives could be secured, six weeks of grueling hedgerow fighting was to take place. After the capture of Carentan, the Germans launched panzer counterattacks against that important link in the Allied beachhead, but the American paratroopers (reinforced with armor) held firm and even expanded their foothold southwest of the Douve. At the same time, three enemy divisions were ordered to hold the Montebourg-Quineville Ridge at all costs to stop the 4th Division's drive on Cherbourg. With the bulk of the enemy's available forces on the Cotentin Peninsula committed in these two areas, General Bradley resolved to exploit the weak center and to cut off the peninsula preparatory to an all-out attack on Cherbourg. Accordingly, General Collins launched a new attack through the Merderet bridgehead on the morning of 14 June. The reorganized 82d Airborne Division attacked along the Douve toward St. Sauveur; and the veteran 9th Division, which had just landed, was directed on Ste. Colombe. In spite of the difficult terrain, the appearance of a new enemy division in front, and persistent counterattacks against the 9th Division's right flank, the attack progressed well. By the evening of 16 June, the 82d Airborne Division securely held St. Sauveur, west of the Douve.
About the same time, leading elements of the 9th Division also established a bridgehead across the Douve, at Ste. Colombe. These gains broke the main enemy resistance; and while the 82d pivoted to the south to protect the corps' left flank, the 9th continued its attack to the west, debouching through both the Douve bridgeheads. Early on 18 June, the 9th Division occupied Barneville, and by evening, the VII Corps had driven a corridor five miles wide across the peninsula. The enemy north of the corridor counterattacked in a vain effort to reestablish contact with the Germans to the south and then fell back in some disorder toward Cherbourg. Protection of the south flank gradually fell to Major General Troy H. Middleton's VIII Corps, which became operational on 15 June. On that day, it took over the 101st, and on the 19th, the 82d Airborne Division. Later, the 90th Division also came under its control. With these adjustments, the VII Corps was free to concentrate on its drive on Cherbourg. On the 18th, Generals Bradley and Collins decided to use three divisions for the attack to the north. The 4th Division launched a surprise night attack near Montebourg, and the 79th and 9th Divisions began their northward advances early the next morning. That evening, as the 4th and 79th closed in on Valognes, the Germans decided to withdraw to the strong defensive perimeter they had established in the hills around Cherbourg. By evening of the 20th, the VII Corps reached this position, which consisted of well-prepared field fortifications reinforced by permanent structures of concrete. An ultimatum for the Germans to surrender having expired, General Collins launched a coordinated attack on the afternoon of the 22d, which was supported by 1,000 aircraft of the tactical air forces and heavy artillery fire. However, no real breakthrough was achieved; and the VII Corps was forced to resort to a methodical reduction of strong points. Not until 24 June were the main defenses cracked.
The next day, all three divisions, supported from the sea by a heavy naval bombardment force,* reached the outskirts of the city; and the 79th Division captured Fort du Roule, the formidable bastion whose 280-mm. guns dominated the entire harbor area and sea approaches. On the 26th, the German Army and Navy commanders surrendered, having previously exacted no-surrender pledges from their men; and the next day, all organized resistance within the city ended. However, the excellent port installations had been so thoroughly demolished and the harbor, which could provide anchorages for over a hundred ships, had been so heavily mined and blocked by sunken ships, that it was 19 July before unloading could begin, and late August before large ships could be brought alongside the docks. By 1 July, the 9th Division had cleaned out the last German resistance in the northwestern tip of the peninsula. * Including three battleships, four cruisers, and eleven destroyers. The capture of Cherbourg marked the attainment of the first major Allied strategic objective in western Europe. The VII Corps had suffered over 22,000 casualties, including 2,800 killed; but the enemy had lost 39,000 captured in addition to an undetermined number of killed and wounded. Even Rommel had to admit that, with Cherbourg in our hands, elimination of the beachhead was no longer possible. His plan of frustrating the invasion at the beaches had failed, and the next few weeks were to see the enemy making a frantic but unavailing effort to create a mobile striking force under Rundstedt's control. While the VII Corps was capturing Cherbourg, the rest of the First Army regrouped to the south and, with the British Second Army, initiated a series of limited-objective attacks designed to gain additional maneuver room and the key terrain features considered essential for a line of departure for a general offensive. The XIX Corps, Major General Charles H.
Corlett commanding, became operational on 14 June and immediately launched an attack between Carentan and St. Lô. Some progress was made toward St. Lô, but on the right, the attack was halted along the canal that connects the Taute and Vire Rivers. On 21 June, an active defense was assumed on the fronts of both the XIX and V Corps, and there were no further appreciable changes in the front lines during the rest of the month. This lack of action was partially dictated by the critical logistical situation that had developed on 19 June, which had forced ammunition expenditures to be cut to one-third of a unit of fire per day. In the British Second Army zone, the XXX Corps succeeded in entering Villers Bocage on the 13th with an armored division, but the sudden arrival of strong panzer forces caused the British to give up the town the next day. Farther east, bitter fighting developed around Tilly, which was captured on the 19th. The British I Corps' attacks on Caen from the north and from the Orne bridgehead made little progress against tightening German resistance. General Montgomery then planned an all-out British offensive to capture Caen. Originally scheduled for 18 June, it was postponed by bad weather until the 25th, when the British XXX Corps began attacks that were designed to envelop the city from the southwest. Fierce resistance held the XXX Corps to small gains, but the British VIII Corps* launched the main effort the next day with an armored and two infantry divisions. On the 27th, the armored division secured a bridgehead over the Odon River that was enlarged during * The British VIII Corps, Lieutenant General R.N. O'Connor commanding, had arrived in France on 15 June. the next two days. However, by the 29th it had become apparent that the enemy had concentrated most of his available strength in this area; so Montgomery decided to hold the ground won and regroup for a renewed thrust farther to the east.
In contrast to the infantry-artillery fighting around Cherbourg, the battles in the British zone were mostly armored engagements. By the end of the month, the Germans had committed practically all of their strongest reserves, their panzer divisions, in this area. Moreover, the constant pressure around Caen had prevented the withdrawal of any of their panzer units for a counterattack that might have seriously threatened the Allied beachhead. Whereas the Allies had landed twenty-five divisions (including five armored) by the end of June, the Germans had been able to concentrate only twenty divisions (including nine panzer-type divisions) in Normandy; and the bulk of four of these had been cut off and captured by the VII Corps. Rommel's reinforcements included two of the three panzer divisions from the Fifteenth Army sector, but only one infantry division from north of the Seine. Elements of most of the divisions in Brittany had been shifted north, and two SS panzer divisions, the 9th and 10th, had been recalled from the eastern front. The fact that these two divisions had taken as long to move from eastern France to Normandy as from Poland to the French frontier was indicative of the effectiveness of the Allied air operations. The destruction of the Seine bridges below Paris and the principal crossings of the Loire had virtually isolated Normandy except for the routes leading through the Paris-Orléans gap, and there the congested roads were offering rich targets for bombing and sabotage. On 17 June, Hitler suddenly appeared in northern France (his first and only visit to the west after 1940). By that time, Rommel and Rundstedt were in agreement that their only hope was to withdraw from Caen to a strong defensive position that could be held by infantry while the panzer divisions were refitted for a powerful counteroffensive against the Americans in the Cotentin Peninsula.
Characteristically, Hitler refused to consider such a proposal and insisted that the line be held at all costs. The German Seventh Army commander, General Dollmann, died on 28 June from a heart attack and was replaced by Lieutenant General Hausser (the first SS officer to command a field army).* About 3 July, von Rundstedt * Hausser had boasted that he reveled in the assignment of impossible tasks. General Montgomery was reported as countering with the remark that such was an essential qualification for any general who took command of the German Seventh Army at this time. was relieved by Field Marshal Gunther von Kluge as Commander in Chief West. From then on, Hitler's personal interference in the strategy and tactics of the campaign was unchecked. The greatest detriment to the Allied build-up had not been the enemy, but the weather. It will be recalled that two of the great Allied "secret weapons" planned for the invasion were the artificial harbors that were to be installed at St. Laurent and Arromanches to provide the logistical support for the ground forces until large French ports could be captured. The placing of these installations began on D plus 1; and by 19 June, the American harbor, which was designed to provide moorings for seven liberty ships and twelve coasters, was about 90 per cent complete. The map inset shows the plan for this harbor, which was called MULBERRY A. But from 19 to 22 June, one of the worst summer gales in Channel history hit the Bay of the Seine. Unloading operations were virtually stopped, the floating steel caissons broke adrift and sank, the concrete caissons shifted, ferry craft broke loose and smashed into the piers, and the beach was strewn with some 800 stranded and damaged craft.
Although the line of sunken ships held together fairly well, the American MULBERRY as a whole was irreparably damaged.* Fortunately, the DUKW's were still available; and by their efficient use, plus emergency measures such as "drying out" the landing ships and coasters so they could be unloaded directly onto the beaches, operations were soon in full swing again. General Eisenhower comments as follows on his visit after the storm: There was no sight in the war that so impressed me with the industrial might of America as the wreckage on the landing beaches. To any other nation the disaster would have been almost decisive; but so great was America's productive capacity that the great storm occasioned little more than a ripple in the development of our build-up.** In spite of the appalling damage wrought by the storm, by 26 June, OMAHA Beach was discharging 122 per cent of its planned cargo capacity; and thereafter, operations over the beaches continued to surpass our best expectations as supplies and personnel moved inland at an ever-increasing rate. By the 26th, 268,718 men, 40,191 vehicles, and 125,812 tons of cargo had been discharged over OMAHA Beach alone. * The British MULBERRY was less seriously damaged, and by using salvaged material from St. Laurent it was soon repaired. It operated effectively during the summer and autumn. ** Eisenhower, Crusade in Europe. Expanding the Beachhead By 1 July, the Allied commanders were not worried as much about a German counterattack that would threaten their beachhead as they were over the possibility that the enemy might bring in sufficient reserves to create a stalemate in Normandy. We still needed more room to maneuver; and we needed more time to build up reserves of men, tanks, and supplies to support a sustained offensive toward the Seine.
The general strategy for the breakout (which will be discussed in more detail in the next chapter) called for the main effort to be launched by the Americans on their right with the Allied line pivoted around the British at Caen, who would attempt to hold the bulk of the enemy strength in that area. Consequently, the period 1-24 July involved a struggle for limited objectives in which the First Army fought its way out of the restricted terrain at the base of the Cotentin Peninsula and into more open country west of St. Lô, and the British Second Army established strong forces southeast of Caen. After the capture of Cherbourg, the VII Corps moved to the south and took over the portion of the First Army front around Carentan. Regrouping was completed by 3 July; and on that date the VIII Corps, on the right flank, began Bradley's offensive by launching an attack toward La Haye du Puits with three divisions.* In the fighting that ensued, slow progress was made through the marshlands and bocage country, where every field was a fortress and every hedgerow a German strong point. The VIII Corps converged on La Haye du Puits, capturing the town on the 7th; and by the 14th, Middleton had reached his initial objectives, around Lessay, where he was ordered to halt. On the 4th, two divisions of the VII Corps began an attack that met increasing resistance as it progressed along the Carentan-Periers road. On the 12th, this drive was halted, since it had gained the high ground southwest of Carentan; but General Collins continued to attack with his left, which was reinforced by two more divisions. By the 18th, the VII Corps had reached the Periers-St. Lô road and held the key terrain for the breakout to the south.** The XIX Corps joined the First Army offensive on the 7th, when it struck across the Vire River and to the southwest. It then turned * VIII Corps: 79th, 82d Airborne, 90th Divisions.
The 82d Airborne Division was released to army reserve on 8 July and was replaced by the 8th Division. ** VII Corps: 4th, 83d, 9th, 30th Divisions. The 83d Division had relieved the 101st Airborne Division, which was placed in army reserve at Cherbourg. The 3d Armored Division was in corps reserve. over the right of its zone to the VII Corps and on the 13th drove toward St. Lô with its two remaining divisions.* In spite of a strong counterattack, the XIX Corps closed in on St. Lô from the north and east and completed the occupation of the key road center by evening of the 18th. Since General Bradley's plan called for the greatest advance to be made on his right, the V Corps** remained relatively inactive. During the period 1-18 July, the First Army had gained its jump-off positions and had regrouped its divisions for the breakout that was scheduled to begin on 19 July. However, bad weather, which had limited air operations during the entire period, forced a postponement of the big attack until the 25th. During the difficult fighting in the marshlands and hedgerows, the enemy had put up a stubborn defense and had even shifted some of his strength from in front of the British, particularly the 2d SS Panzer Division, to stop the Americans. In his book, General Eisenhower summarizes the significance and character of these operations as follows: The battle of the beachhead was a period of incessant and heavy fighting and one which, except for the capture of Cherbourg, showed few geographical gains. Yet it was during this period that the stage was set for the later, spectacular liberation of France and Belgium. The struggle in the beachhead was responsible for many developments, both material and doctrinal, that stood us in good stead throughout the remainder of the war. . . .
* * * * * * * * * * Although the nature of the terrain and enemy resistance combined with weather to delay the final all-out attack until July 25, the interim was used in battling for position and in building up necessary reserves. . . . The artillery, except for long-range harassing fire, was of little usefulness [because of limited observation]. It was dogged "doughboy" fighting at its worst. Every division that participated in it came out of that action hardened, battle-wise, and self-confident.*** In the meantime, fierce armored battles had continued in the Caen area. If the American breakout was to succeed, it was essential that Dempsey contain most of the enemy's panzer forces on the Allied left; so Montgomery had directed the British Second Army to continue its offensive tactics. On 1 July, the Germans made their last and strongest effort against the British positions when elements of five SS Panzer divisions launched repeated, though not simultaneous, * XIX Corps: 35th, 29th Divisions. ** V Corps: 2d, 5th Divisions. The 2d Armored and 1st Divisions were withdrawn into assembly areas in preparation for the next operation. *** Eisenhower, Crusade in Europe. attacks. Fortunately, the attackers were engaged by British massed artillery with telling effect, and were dispersed. On the 8th, the British I Corps launched an attack on Caen from the north, employing three infantry divisions supported by armor. To assist in this attack, the strategic air forces were called upon to blast an area on the northern outskirts of Caen, as shown on the map,* and heavy naval and artillery preparations were fired. By evening, the British I Corps had closed in on the outskirts, and by the end of the next day, had occupied that part of the devastated city west of the Orne. From 10 to 18 July, Dempsey continued to attack, maintaining pressure on as broad a front as possible. Considerable regrouping was also undertaken as the British XII Corps (Lieutenant General G.G.
Simonds commanding) took over the Caen sector from the British I Corps. Attacks by the British XXX and XII Corps failed to gain much ground, but they succeeded in forcing the enemy to keep his panzer units in the line. On the 18th, the British VIII Corps launched a powerful attack to the south, using three armored divisions. This attack, assisted by another heavy air preparation** and naval bombardment, and supported on the right by the Canadian II Corps and on the left by the British I Corps, was pressed forward until the 21st, when rain turned the battlefield into a sea of mud. This last attack had been intended as a diversion to the main (American) breakout offensive, which, we have seen, was scheduled for the 19th but had to be postponed because of the weather. Although the British VIII Corps attack had gained only four miles--east and south of Caen--it had succeeded in drawing additional German forces to the east, destroying many enemy tanks, and placing the British in position to threaten seriously the strategic flank of the German forces in Normandy. During the period between the capture of Cherbourg and the beginning of the breakout offensive, the bad weather continued to * Six hours before the ground attack was launched, 460 aircraft of the R.A.F. Bomber Command dropped about 2,300 tons of 500- and 1,000-pound bombs on an area approximately 4,000 yards wide and 1,500 yards deep in the first "carpet-bombing" operation of the campaign. ** Some 1,700 heavy bombers of the Bomber Command and Eighth Air Force and 400 medium bombers of the Ninth Air Force took part in this preparation. The medium bombers were assigned a target area directly in front of the British VIII Corps, while the heavies were given targets a little farther away. handicap air operations to some extent; but, as already indicated, the Allied air forces were able to offer strong support to the ground forces.
A Ninth Air Force report describes the operations of the tactical air forces during this period: The chief contribution of fighter-bombers was the almost total restriction of enemy movement and reinforcement during flyable daylight hours to a depth of approximately thirty kilometers behind the lines. Von Rundstedt reported to Berlin that "whenever assembly areas are detected, an attack by fighter-bombers is launched without delay." The planes were successful in attacks on strong points, troop formations, self-propelled guns, tanks, armored vehicles, and field fortifications at the fighting front and drastically reduced the volume of enemy artillery fire by their mere threatening presence over the battle area. The "Jabos," as the German troops called fighter-bombers, were indeed the Allies' "most terrible weapon." During the first three weeks of July, the enemy, still fearful of a landing in the Pas de Calais, continued his half-hearted attempts to reinforce the battered units in Normandy by calling up divisions from southern France and Holland. Alarmed by the fall of Cherbourg, Hitler ordered that every coastal fortress be reinforced and prepared for a long siege; and likewise, he ordered that every sizable town (such as St. Lô and Caen) be defended to the last. The explanation that OB West later gave for this hold-at-all-cost policy of Hitler's was that, if everything else failed, he hoped to destroy enough Allied forces so that finally only the famous "one German battalion" would be left.* During this period, several changes were made in the German High Command in the west.
A new army, the Fifth Panzer, was organized in July with Lieutenant General Eberbach as commander.** On 17 July, Allied fighter-bombers attacked a car in which Rommel was visiting the front, and the German commander suffered a bad skull fracture.*** Field Marshal Kluge was directed to take over the post of commander of Army Group B in addition to his duties as Commander in Chief West, and he immediately established his personal headquarters with the army group staff. * Hitler had once said that it did not matter so much how the war was fought or who won the battles if at the end the one battalion left was German. ** This headquarters had formerly functioned as Panzer Group West controlling the panzer divisions in Normandy. *** Rommel recovered at his home in Ulm but, being implicated in the plot against Hitler, was given the choice of going to Berlin for interrogation or taking poison. He chose the latter and died on 14 October. The invasion of western Europe constituted the long-awaited "second front" and proved that the Allies could seek the enemy out, engage him in combat, and defeat him on his own ground. It showed that we could do against the Continent what the Germans had not been able to do against England, that is, execute an amphibious assault on a well-defended hostile shore and secure a lodgement. In this most complex of military operations, requiring the closest coordination of air, naval, and ground forces, painstaking and untiring efforts in the planning had borne fruit in successful operations. Undoubtedly, some details of planning and execution could have been improved; but, gauged by final results, OVERLORD was a success in every respect. The Normandy campaign officially ended on 24 July. That date marked the successful conclusion of the greatest amphibious operation--an achievement that will undoubtedly have a profound influence on the art of war. It showed that beach maintenance can be relied upon under almost all conditions.
By this means, there is conferred on the amphibious assault complete liberty regarding the point of attack, provided suitable beaches are available; and thus increased opportunities for surprise are obtained. Prior to 6 June 1944, the Allies had already demonstrated the practicability and effectiveness of amphibious operations in the Mediterranean and the Pacific, but it was in Normandy that we conclusively proved that large military forces could be landed and supported on a strongly defended hostile shore. In Normandy, the Allies landed and supplied thirty-four divisions across the beaches* during the first seven weeks of the campaign. The millionth man stepped ashore in France on D plus 28, and by D plus 38, a million tons of supplies and 300,000 vehicles had been landed. The greatest advantage (aside from the top priority on most Allied resources) that OVERLORD enjoyed over most other amphibious operations of the war was the proximity of a great base--the United Kingdom. This factor enabled land-based aircraft to maintain complete air superiority over the invasion area and at the same time conduct a powerful air offensive against the enemy. For example, during good weather over 1,000 United States heavy bombers by day** and 1,000 British heavy bombers by night could be * In Italy, the Fifth Army had supported about eight divisions over the beaches at Salerno, and later in the war, the Sixth Army supported about eleven divisions over the beaches of Luzon. ** On 12 June, the largest force of heavy bombers hitherto airborne on a single mission, 1,448 B-17's and B-24's, launched a mass attack on French air fields. dispatched against strategic targets, while the Allied Expeditionary Air Force flew as many as 4,000 tactical sorties a day. In contrast, the Luftwaffe's activities were normally limited to defensive patrolling behind German lines, with an average of 300 to 350 sorties per day.
Our naval victory in the Battle of the Atlantic had been so complete and our sea forces based in the British Isles were so strong that the biggest problem facing the Allied navies was the mines that the enemy managed to lay off the transport anchorages.* The logistical accomplishments would have been impossible, or would have required a fantastic number of ships, if the supplies had not been so readily available. On the other hand, the invasion was no "push over." In order to use the close base, the Allies were forced to assault a strongly defended coast. The efficient Germans had had four years in which to build their much-propagandized Atlantic Wall; the geography of northwestern France favored the rapid transfer of troops to the threatened area; the enemy had available in France one of the finest communication systems in the world, with its resultant advantage of interior lines; and strong armored units were available to counter any attempted invasion. But overwhelming air superiority overcame all these disadvantages. Another handicap to the Allies was the atrocious weather that persisted until August in plaguing the operations of the navies, the air forces, and the troops on the ground. A special problem created by the very immensity of the project was the planning, organizing, and coordinating of the resources of two Allied powers. Perhaps the successful solution to this problem was the outstanding factor in the Allies' success; we know that dissension in its High Command contributed mightily to the collapse of Germany. Unity of command had already been established as the best means of achieving coordination among the different services of one nation, but never before had such close cooperation between Allies been achieved.
Montgomery comments as follows in his book on the operations in western Europe: It should be remembered that this highly complex Anglo-American organization set up for launching OVERLORD had little more than five months for the completion of its task, from the time the higher command was finally settled. Events have amply shown that a splendid spirit of cooperation was established between the * During the three months following D-day, the number of mines swept off the invasion ports totalled one-tenth of those swept in all theaters combined from the beginning of the war to 6 June. British and American services and that under General Eisenhower a strong, loyal team was quickly brought into being, while the various components of the great invasion force were welded into a fine fighting machine.* The lessons the Allies had learned in the Mediterranean were indeed well applied. Certain features of the landing are of special interest: - The airborne troops were employed on a hitherto unmatched scale. In spite of the difficulties they encountered, they demonstrated that, under conditions that are favorable for the use of parachute and glider units, the beach assault troops do not have to meet fresh enemy local reserves after exhausting themselves in overcoming the beach defenses, as was the case in the April 1915 landings on the Gallipoli Peninsula. Now, as happened in Normandy, the beach defenses and local reserves can be engaged simultaneously, the latter by airborne troops. - Naval gunfire, although failing to knock out the beach strong points prior to the assault, provided effective support for the ground forces. Fire from destroyers that came in close to shore was particularly effective in attacking enemy beach strong points, and the larger ships provided heavy artillery support ** for the advance inland until heavy field artillery could be landed.
- Although they suffered heavy casualties, the early landing of tanks gave much needed strength to the assaulting infantry. But the most important reason for the success of the landing must not be pushed into the background. We must not lose sight of the fact that, in spite of the intense pre-invasion air and sea bombardment, for the first few critical hours the issue rested squarely on the shoulders of those "few brave men"*** selected to set foot first on the soil of France. Overwhelming material superiority, the most detailed plans, and the most meticulous preparations could not insure the success of the operation without the bravery and determination of the officers and men who first advanced across the fire-swept beaches. * The Viscount Montgomery of Alamein, Normandy to the Baltic (Boston: Houghton Mifflin Co., 1948). ** The two divisions that landed on OMAHA Beach on D-day were each supported by a naval fire-support group of a battleship, one or two cruisers, and four destroyers; and each infantry battalion was accompanied by a naval shore fire-control party. *** Mr. Churchill once spoke of "the countless hours of work, the enormous amount of time and effort that must be expended by thousands of people, in order that a few brave men can rush on to the beaches of France and plunge their bayonets into the bowels of the enemy." Analysis of the lessons to be learned from the Normandy campaign brings out nothing more startling than the realization that the broad principles of troop leading and staff functioning as developed in the past are sound. The measure of success continues to be the ability of commanders to apply these principles. Unity of command, an aggressive spirit, coordination in amphibious and airborne operations, air-ground cooperation, infantry-tank coordination, and supply discipline are basic requirements for success in any military operation. OVERLORD confirmed all of these old lessons and brought out some new ones.
Meticulously coordinated planning and frequent full-scale rehearsals are essential and eradicate many of the "bugs", making an operation as nearly sure-fire as possible; but in the actual assault, nothing will take the place of aggressive, intelligent leadership by all commanders* to overcome the inevitable difficulties, unforeseen and unpreventable, that arise in every tactical and logistical situation. In his report to the Combined Chiefs of Staff,** General Eisenhower discussed the reasons for Germany's failure to stop the invasion: Lack of infantry was the most important cause of the enemy's defeat in Normandy, and his failure to remedy this weakness was due primarily to the success of the Allied threats leveled against the Pas de Calais. . . . The German Fifteenth Army, which, if committed to battle in June or July, might possibly have defeated us by sheer weight of numbers, remained inoperative throughout the critical period of the campaign; and only when the breakthrough had been achieved were its infantry divisions brought west across the Seine--too late to have any effect upon the course of victory. A certain amount of reinforcement of the Normandy front from other parts of France and from elsewhere in Europe did take place, but it was fatally slow. The rate of the enemy's build-up in the battle area during the first six weeks of the campaign averaged only about half a division per day. . . . This process of reinforcement was rendered hazardous and slow by the combined efforts of the Allied air forces and the French patriots. . . . The consequence of these attacks upon enemy communications was that the Germans were compelled to detrain their reinforcement troops in eastern France, after a circuitous approach over a disorganized railway system, and then to move them up to the front by road. Road movement, however, was difficult by reason of the critical oil shortage, apart from the exposure of the columns to Allied bombing and strafing.
Whole divisions were moved on seized bicycles, with much of their impedimenta on horse transport, while the heavy equipment had to follow as best it could by rail, usually arriving some time after the men. . . . Traveling under such conditions, the reinforcements arrived in Normandy in a [* "Commanders" suggests, at a minimum, battalion-level officers and above. I believe a close look at the operations would show that lieutenants and sergeants were the ones who supplied the aggressive and intelligent leadership that cracked the German defenses and made the landings a success. -- HyperWar] ** Report by The Supreme Commander To the Combined Chiefs of Staff On the Operations in Europe of the Allied Expeditionary Force, 6 June 1944 to 8 May 1945. piecemeal fashion, and were promptly thrown into battle while still exhausted and unorganized. By mid-July, units had been milked from Brittany, the southwest and west of France, Holland, Poland, and Norway; only the Fifteenth Army in the Pas de Calais, waiting for a new invasion which never came, was still untouched. . . . [The] continuing failure by the enemy to form an armored reserve constitutes the outstanding feature of the campaign during June and July; to it we owed the successful establishment of our lodgement area, safe from the threat of counterattacks which might have driven us back into the sea. Every time an attempt was made to replace armor in the line with a newly arrived infantry division, a fresh attack necessitated its hasty recommittal. These continual Allied jabs compelled the enemy to maintain his expensive policy of frantically "plugging the holes" to avert a breakthrough. So long as the pressure continued, and so long as the threat to the Pas de Calais proved effective in preventing the move of infantry reinforcements from there across the Seine, the enemy had no alternative but to stand on the defensive and see the Seventh Army and Panzer Group West slowly bleed to death. 
All that he could do was play for time, denying us ground by fighting hard for every defensive position. In defending against an invasion from the sea, there are certain sound principles of land warfare that may be applied. The most desirable procedure, of course, is to repel the invaders on the beaches; but it is seldom that a long coast line can be so strongly defended. If too many troops are committed to the defense of the shore line, the result will be a cordon defense, with insufficient reserves held out for a counteroffensive. So the best that can usually be done is to make the landings costly and contain the enemy within a small area while reserves are being assembled to "drive him into the sea." Properly timed, and with sufficient reserves available, the counteroffensive should be successful, for the invader is at a serious disadvantage during that early stage. Prior to the capture of a port, his supply problem is difficult; he can usually bring in only a limited amount of heavy material; and under ordinary conditions the follow-up divisions cannot come in as fast as the defender can assemble his reserves. So if the counteroffensive is made while the invader has one foot in the water and one on shore, it should have a good chance of success. There are, however, two important points that apply to the counteroffensive. The first is that since the reserves must not be committed against a subsidiary or secondary landing, the commander must be able to recognize the main landing. If he attacks at the wrong place, it will probably be too late to rectify the mistake. The second point is that the reserves must be so located, and the road and rail net such, that they can be assembled quickly at the proper place. These principles were, of course, well known to the German High Command. Why then did it fail?
The answer may be found in German documents that describe the situation on D-day as understood by OKW: The picture of the situation corresponded entirely to the [operations staff's] expectations with regard to the first phase of a large-scale invasion. It left the question open, in the [operations staff] as well as in OKL and OKM, as to whether this was a tactical diversion, a strategic landing with a limited objective, or the prelude to the decisive main effort. . . The picture of the situation at the time did not justify the opinion that the enemy's main landing operation had already begun. The choice of location, extent evident up to that moment, and prevailing weather conditions all indicated with much greater likelihood that this was a diversion or a holding attack, without any major strategic objective. . . Hitler, therefore, was still convinced on the afternoon of 6 June that the main landing was yet to come (either on the Channel coast or--in close cooperation with the landing between the Orne and the Vire--on the west coast of Normandy, or in Brittany). The impression had not changed by 9 June, as we see from the following: Evaluation of the information about the enemy and reports in the possession of OKW showed on 9 June that the enemy had not as yet committed in Normandy even 20 per cent of the combat units which, according to evidence on hand, he was presumed to have in England. Even if it were assumed that Montgomery's group of armies was more or less tied down for the continuation and reinforcement of the battle in Normandy, the enemy still had at his disposal for further landings from England the whole group of armies under Patton, and for further landings from North Africa another group of some 15-20 combined-arms units. There was no reliable evidence as to the planned commitment of Patton's group of armies or of the North African group.
At that time the [operations staff] believed it fairly certain that the enemy would land his African group on the southern coast of France, since an attack against the deep flank of OB Southwest in Italy would mean excluding these Allied forces from the decisive battle in France, for the sake of a secondary operation. The question of where Patton's group of armies would attack still remained open.

It is apparent that German intelligence was not what it should have been and that the disruption of signal communications and the general confusion in Normandy gave the High Command a distorted picture of the landings already made. By 13 June, a rift in the German High Command was beginning to develop, as shown by the following:

The [operations staff] submitted a brief estimate to Hitler, to the effect that one of the chief reasons for the unsatisfactory combat situation in Normandy was the conduct of operations in the west up until then, particularly the failure to stick to the principle established and approved before the invasion: "Once the enemy has landed, concentrate all forces against that one spot--regardless of risk--and destroy him there." The [operations staff] suggested that, regardless of the obscurity surrounding the intentions of Patton's group of armies and of the North African group, the risks involved on other coasts be accepted, the combat front in Normandy be reinforced by all forces available in the west, and, in addition, forces be transferred to France from other theaters on as large a scale as possible, the combat missions of these other theaters being altered accordingly. This would have meant a definite shifting of the main weight of our over-all effort to France. The [operations staff] made this suggestion with the conviction that if the invasion in France could be wiped out in its present (first) phase, then time and forces would be available at a later date to make good the disadvantages and reverses now accepted on other fronts.
Hitler only partially approved this view. In particular, he could not bring himself--probably chiefly for political and economic reasons--to agree to a decisive weakening of other theaters of operations. . . . Hitler, who in those days again inclined to the opinion that Patton's group of armies would make the second landing in the Fifteenth Army sector after all, ordered that the enemy's intention of preventing our commitment of strong forces in Normandy, by transmitting false radio announcements, was to be frustrated by concentrating our forces as much as possible and destroying the enemy beachhead little by little--the weakening of other fronts in the west must be accepted. But at the same time Hitler exempted the one really strong army in the west, Fifteenth Army, from giving up any appreciable number of troops.

Field Marshal Rundstedt reported as follows on a conference with Hitler that he attended on 30 June:

After Rommel and I had given an exhaustive exposition of the complete untenability of the situation, no clear decision was reached. Always: Hold! Hold! New weapons are coming, new fighter planes, more troops--and the same old talk. Here again we said that now something political must happen. Icy silence. I left the conference without any hope, arrived in St. Germain [Paris] after an eighteen hours' journey by automobile, and found the situation there had become still more acute. The next day I was dismissed.

Finally, an OB West report includes a realistic account of the German situation in July:

The fighting raged on without pause, forcing Army Group B to constantly expend forces at the front, so that there could never be any real formation of a large reserve, let alone any planned relief and rehabilitation of units behind the front or any extensive construction of positions for sealing off Normandy. The field forces suffered incredibly under the massed air attacks, which we were powerless to engage in the air.
Supplies were stalled, delivery of fuel had become particularly difficult, and, in the last analysis, all tactical measures of the panzer units were dictated by the amount of fuel available.

These were the factors at the end of July 1944 which were to determine the outcome of the Normandy battle for the Western Allies.

In summary, some of the factors that contributed to the failure of the Wehrmacht to repel the invasion were as follows:
- Rigid adherence to a preconceived idea of the High Command (that the main landings would occur in the Pas de Calais).
- Lack of effective combat intelligence, which enabled the Allies to gain tactical surprise.
- Complete combat inferiority, particularly in the air.
- Lack of a strong, mobile strategic reserve.
- Dissension in the High Command, which resulted in a lack of a positive defensive strategy or authority to initiate prompt and effective countermeasures.
Background

The fate of Mary, Queen of Scots, is what finally brought war upon the two nations. Queen Mary was Catholic and had promised Philip the throne of England if she became queen. Philip II took it upon himself to avenge Mary and the oppressed Catholics in England, overthrow Queen Elizabeth I, and bring papal justice.

The Invasion

The invasion was no surprise to the English. It took Spain years to accumulate the forces and ships needed, and they were keen to let England know why they were gathering so many naval resources, as a show of intimidation. The armada that set sail from Spain on July 19th, 1588 consisted of 130 ships, of which 22 were fighting galleons. The fleet included many converted merchant ships as well as smaller zabras and pataches used to supply the galleons. The fleet sailed in a very defensive crescent-shaped formation, with the slower galleons well protected in the middle and the armed fighting ships at the tips of the crescent.

Battle of Gravelines

The Battle of Gravelines is the beginning of the long demise of the Spanish Armada. The Duke of Medina Sidonia anchored his fleet at the harbor at Gravelines to pick up an additional 30,000 soldiers. Luckily for the Spanish, lookouts spotted the fireships with time to escape, but in doing so many ships were forced to cut their anchor cables. This proved to be significant later. From all the minor skirmishes in the Channel, the English now knew the Spanish naval strategy. Unlike the English, Spain used its overcomplicated and slow cannons in a supporting role: the aim was to pull alongside an enemy ship and board it. This strategy might have been successful in the past, but it was not against the English. At Gravelines, a Spanish ship would expose its broadside while trying to board an English ship and would be quickly sunk by the English's superior and rapid use of cannons. The English also used dreadnoughts as their primary warship, and these were much faster than galleons, adding even more of a kink to Spain's naval tactics.
The English fleet was able to do damage to the armada, but with such limited ammunition on both sides, the Spanish were able to fight off the English and escape. However, the English now blocked the armada from using the Channel to return to Spain. The Duke of Medina Sidonia instead was forced to plot a very long route around Scotland and Ireland.

The Armada's Demise

The armada was already short on supplies, but the shortage became so bad in some cases that sailors were forced to eat rope in order to survive. When the fleet was above Scotland it was hit by a horrific storm. Many ships were sunk, and the fact that so many ships had no anchors now made their situation even worse. The surviving ships sailed for Ireland for refuge. They thought the Catholic Irish would help them. They were wrong. When the Irish saw the battered fleet arriving on their shores, they viewed the sailors as invaders and treated them with hostility, killing many. Ultimately, a mere 67 ships made it back to Spain, while the English lost no ships and only 100 men in battle. However, disease took the lives of thousands on each side.

Reasons the Spanish were Defeated

The conditions definitely gave England a clear advantage. Here is why:

Technology

The Spanish relied on the strength and firepower of galleons, which were very large and slow. Their cannons were very effective at close range. The English used dreadnoughts as their main warship. These ships were very maneuverable and used very effective cannons. These cannons were not as powerful as the Spanish ones but had a much greater range and were easier to fire quickly. They proved to be very useful at Gravelines.

The English were able to restock supplies easily and did not have to sail far to engage the Spanish. Also, their knowledge of currents, terrain, and weather helped press their advantage over the Spanish.

Admiral Santa Cruz was supposed to lead Philip II's great armada, but he died prior to departure and the Duke of Medina Sidonia was named his replacement.
This was a poor choice, for though the Duke was an excellent general, he had never been to sea before and did not know naval warfare.

The English were able to resupply at ports, while the Spanish had to take everything they needed for the whole journey. All the delays due to weather, and the fiasco of picking up the additional 30,000 soldiers in the Spanish Netherlands without a port, left the Spanish with few supplies when they were forced to sail the long way around the British Isles. To make things worse, in preparing the armada many barrels were made from fresh wood for the ships. The fresh, moist wood rotted the food and soured the water.

Spain's naval strategy involved boarding and capturing ships. Cannons were used in a supporting role, to allow a ship to get close enough for the troops to board and fight. The armada sailed in a highly defensible crescent shape that put the valuable galleons in the middle to be protected. This strategy was highly effective against the English, but when the fleet dispersed to anchor at Gravelines, the English exploited its vulnerability. England, on the other hand, was trying to sink every Spanish ship it could. The English relied heavily on cannon fire and could do damage from a distance, or let the Spanish come alongside in an attempt to board and then sink them with their superior cannons and rapid rate of fire.

England's naval battle strategy was vindicated by its victory over Spain. This led to a revolution in naval warfare. Gunnery came to be relied upon over the traditional ramming and boarding, and shipbuilding changed with the success of the new dreadnought design. Many historians believe naval dominance shifted with the defeat of the Spanish Armada. England was riding the wave of a new age of naval warfare and slowly became the world's dominant naval power. In the near future it would be England trying to invade Spain.
Scientists at Massachusetts General Hospital (MGH) say a simple adjustment to the CRISPR system may improve its specificity. They describe in an advance online article in Nature Biotechnology how adjusting the length of the guide RNA (gRNA) component of CRISPR-Cas RNA-guided nucleases (RGNs) can substantially reduce the occurrence of DNA mutations at sites other than the intended target.

“Simply by shortening the length of the gRNA targeting region, we saw reductions in the frequencies of unwanted mutations at all of the previously known off-target sites we examined,” notes J. Keith Joung, M.D., Ph.D., associate chief for research in the department of pathology and senior author of the report (“Improving CRISPR-Cas nuclease specificity using truncated guide RNAs”). “Some sites showed decreases in mutation frequency of 5,000-fold or more, compared with full length gRNAs, and importantly these truncated gRNAs, which we call tru-gRNAs, are just as efficient as full-length gRNAs at reaching their intended target DNA segments.”

CRISPR-Cas RGNs combine a gene-cutting enzyme called Cas9 with a short RNA segment and are used to induce breaks in a complementary DNA segment in order to introduce genetic changes. Last year Dr. Joung’s team reported finding that, in human cells, CRISPR-Cas RGNs could also cause mutations in DNA sequences with differences of up to five nucleotides from the target, which could seriously limit the proteins' clinical usefulness. The team followed up those findings by investigating a hypothesis that could seem counterintuitive: that shortening the gRNA segment might reduce off-target mutations.

“[We] report that truncated gRNAs, with shorter regions of target complementarity <20 nucleotides in length, can decrease undesired mutagenesis at some off-target sites by 5,000-fold or more without sacrificing on-target genome editing efficiencies,” the team wrote.
“In addition, use of truncated gRNAs can further reduce off-target effects induced by pairs of Cas9 variants that nick DNA (paired nickases).” “While we don’t fully understand the mechanism by which tru-gRNAs reduce off-target effects, our hypothesis is that the original system might have more energy than it needs, enabling it to cleave even imperfectly matched sites,” explains Dr. Joung, who is an associate professor of pathology at Harvard Medical School and co-founder of Editas Medicine, a genome editing company. Jennifer A. Doudna, Ph.D., from the University of California, Berkeley, and George Church, Ph.D., from Harvard Medical School, are also co-founders of the firm.
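The five-nucleotide difference mentioned earlier is easiest to picture as a Hamming distance between the gRNA spacer and a candidate genomic site. The short Python sketch below is purely illustrative: the sequences are invented, the comparison ignores the PAM and says nothing about Cas9 binding energetics, and it is not code from the study. It only shows the bookkeeping — count the differing bases, then note how dropping three PAM-distal bases (a hypothetical 17-nt "tru-gRNA") changes the count:

```python
# Toy sketch with invented sequences (not data from the study):
# counting the nucleotide differences that define a potential off-target site.

def mismatches(grna_spacer: str, dna_site: str) -> int:
    """Hamming distance between a gRNA spacer and an equal-length DNA site."""
    if len(grna_spacer) != len(dna_site):
        raise ValueError("spacer and site must be the same length")
    return sum(a != b for a, b in zip(grna_spacer, dna_site))

spacer = "GACGTACCGGTTAGCATGCA"   # full-length 20-nt spacer (invented)
site   = "GTCGTACCGGATTGCATGCA"   # candidate genomic site (invented)

print(mismatches(spacer, site))          # 3 differences over 20 nt
print(mismatches(spacer, site) <= 5)     # within the 5-mismatch window: True

# A tru-gRNA keeps only the PAM-proximal bases; here we drop 3 distal ones.
tru = spacer[3:]
print(mismatches(tru, site[3:]))         # 2 differences over 17 nt
```

The point of the exercise is only that a shorter spacer has fewer positions at which mismatches can be tolerated; the actual specificity gain is attributed by the authors to reduced excess binding energy, which a distance count cannot capture.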
Our growing research laboratory and makerspace is full of equipment for projects and prototyping, from designing intelligent robots to making chainmail to building the next Facebook or Google. Think of the space as a combination of great stuff from mechanics, tailors, roboticists, designers, and artists.

Browse around Thingiverse, download and try to model something with Blender or SketchUp, or use our NextEngine 3D Scanner to come up with something to print on our MakerBot Replicator. Talk to someone who can use our MakerBot, get trained on the printer, and you can show others how to use it! We have lots of plastic in a rainbow of colors. In addition, we have Arduinos, ShiftBrite LEDs, tons of single-color LEDs, Craftsman hand tools, 1/4 mile of electric fence wire, and 4 XBox Kinects. For the 2012-2013 academic year, we are planning to add a sewing machine, serger, programmable embroidery machine, and conductive thread. We may acquire a spherical etching machine; we may acquire a letterpress printing press; we may acquire a laser cutter.

Come and say hello to our robot. See what it can do, how to control or program it, and learn a little about Bob Balaban along the way. Think about taking Computer Science 155: Intelligent Systems (it is a QA course) to learn more about robotics and create a project with iRobot Create robots — like making them square-dance — or propose something else like building an artificial intelligence to play a Euro-style board game.
With a $1,000 grant from the DeKalb Education Foundation, Littlejohn Elementary School in DeKalb held a Medieval Times Fine Arts Week during the week of April 7. The week began with a mini-medieval fair, in which several area artists demonstrated medieval art forms for the Littlejohn students. Jennifer Wegmann-Gabb demonstrated how to create illuminated manuscripts using calligraphy, gold-leafing, and ink production. Tonya Hardy showed students how to create goblets and gargoyles out of clay. And Allison Johnson demonstrated lampwork, the art of layering molten glass onto a steel rod to create decorative beads. Northern Illinois University professor Valerie Garver shared information about medieval textiles and clothing, and NIU professor Janet Hatheway shared some musical instruments that were used during the Middle Ages. Each grade level learned some general history of the Middle Ages and studied a different medieval art form. Pre-kindergarteners created mosaic art, kindergarten classes created castles, first graders created weavings after learning about tapestries and storytelling, second graders made stained-glass windows, third graders made clay gargoyles, fourth graders created frescoes, and fifth graders made their own coats of arms. All of the projects were on display during parent/teacher conference week.
Did you know that in western countries around 14 to 29% of people are likely to suffer from an anxiety problem in their lifetime? If anxiety is something that affects you personally, or someone you know, you probably understand the harm and misery it can cause. Fortunately, meditation gives us ways to approach anxiety that can have a real impact on how it affects us. The steps below can all be tackled for the greatest results, but even taking just the first can have a real impact on how we feel, lowering our anxiety levels to help us live calmer, happier lives.

Step one: The rational approach

To start, we need to look at anxiety logically to see why and how it affects us. Think of the last time you felt anxious. Like most of us, you probably tried to fight the feeling, resisting it with an emotion like frustration, sadness, or ironically, more anxiety. But this response conditions us to think it's bad to feel anxious. Then, we make things even worse by noticing the physical sensations that follow; the tight chest, the tense body - and our already anxious mind thinks, 'Oh I'm feeling this, I must be really anxious!' And so the cycle perpetuates itself.

We need to step out of this loop. But not by trying to stop anxiety - instead by changing the way we relate to it. Like any other emotion, anxiety isn't good or bad, it's just a passing thought or sensation. Learning mindfulness through meditation means that we can choose how we handle the feeling; the importance we give it and how long it stays with us. We learn to simply notice the sensations that anxiety brings us and how to be present with them, rather than getting caught up in what they represent. In effect, we're taking a step back, and this alone can interrupt the cycle.

Step two: The investigative approach

Next, we need to observe our anxiety by asking:
1. What is it?
2. Where does it come from?
3. Where do I feel it?
4. What does it feel like?

But to examine anxiety, first we need to welcome it.
Meditation enables us to do this, and by doing so change the emotion from something we resist to something we embrace. Rushing to find the answers will cause more thinking, which may well bring more anxiety. So we must take our time, being curious, open and honest, as this will create a true and long-lasting shift in our perspective. Because in this step, the process is more important than the result - in many ways, it is the result.

Step three: The vulnerable approach

And now to the most rewarding part of the journey. It requires a more formal meditation technique where we drop our guard and let anything and everything arise in the mind. Sounds frightening? It can be, but it's also exciting and incredibly liberating. Meditation teaches us to witness the mind with its thoughts and feelings from a place that's neutral and objective. For anxiety, it allows the mind to rest in the present moment, no longer overwhelmed by anxious thoughts and feelings. And when one does come into the mind, instead of trying to block it, we allow it to arise, embrace it, and then, by bringing our attention back to the present moment - let it go.

This works because it shows us that things are always changing. Sometimes anxiety is with us, sometimes it isn't. It also shows us that all minds behave similarly. For some, anxiety is replaced with sadness, anger or loneliness, but by recognising the pattern in our own mind, we see how similar patterns affect other people too. And with this understanding, we no longer feel isolated or alone, but very normal - with nothing to fear.

And finally, this approach softens the mind a little. We see that thoughts are just thoughts; a feeling is just a feeling - nothing more, nothing less. It allows the mind to be free, open and ready to experience life exactly as it is - just without the background hum of anxiety.

This is a blog series produced in partnership with Headspace, a project designed to demystify meditation.
With scientifically proven techniques that are easy-to-learn and fun-to-do, Headspace can be used every day to experience a healthier and happier mind. You can try it on for size with the free Take10 program by visiting headspace.com
War in the South

For nearly four years the power of the British had been thrown against the great states of the North. They had destroyed much property and taken many lives; they had overrun vast tracts. But the game had been a losing one; a fine army had been sacrificed in the Hudson Valley, and now at the end of the four years the British commander had not possession of a single foot of territory except Manhattan Island and Newport. He therefore determined, while still holding New York as his base, to send his legions to the weaker communities of the South, to conquer Georgia, then the Carolinas, and perchance the Old Dominion, and to hold these until terms could be made with their powerful neighbors to the North. The plan is supposed to have originated in the brain of Lord George Germain.

In December, 1778, a force of thirty-five hundred British regulars under Colonel Campbell landed near Savannah, Georgia. The American force there, commanded by General Robert Howe, was less than twelve hundred in number. The two forces met in battle; the Americans were routed, losing five hundred in prisoners, and the city of Savannah surrendered with its guns and stores. General Prevost soon arrived with British reinforcements from Florida, and he and Campbell pressed their advantage with vigor; they captured Augusta and other points, and within ten days proclaimed their conquest of the state of Georgia.

General Benjamin Lincoln was now made commander in the South, instead of Howe. General Moultrie had just won a signal victory in defending Port Royal, but the advantage was soon lost, for fifteen hundred men under General Ashe, who were sent by Lincoln against Augusta, suffered a crushing defeat at Briar Creek at the hands of the English.
Prevost then crossed the Savannah River and began a march toward Charleston, spreading devastation in his trail; but his course was checked in a skirmish with Lincoln, and he turned back. The summer of 1779 passed, and the British as yet had no foothold north of Georgia.

Early in September D'Estaing arrived at the mouth of the Savannah from the West Indies with a powerful French fleet, and American hopes in the South rose with a bound. The first thought was to recapture Savannah, and the siege was begun on September 23. For three weeks, day and night, Lincoln's artillery from the shore joined with that of the French commander from the harbor. But Prevost gave no sign of surrendering the city, and D'Estaing proposed a combined assault. This was made with desperate valor on October 9, but it failed. The French and Americans lost heavily, and, saddest of all, the brave Pulaski was numbered with the slain. D'Estaing, fearing the October gales, sailed away, and the coast was clear for two months, when another fleet hove into view.

This fleet was not that of a friend; it bore Sir Henry Clinton from New York and Earl Charles Cornwallis with eight thousand soldiers for the subjugation of the South. Clinton landed at Savannah, but his aim was to capture Charleston, the chief seaport of the South. Adding the force of Prevost to his own, he began the march overland to Charleston, which was now occupied by Lincoln with 7000 men. Clinton began engirdling the city about the 1st of April, 1780, and a week later the British fleet ran by Fort Moultrie and entered the harbor. Soon after this Lord Rawdon arrived from New York with three thousand more troops, and the doom of the southern metropolis was sealed. Lincoln should have fled and saved his army, but he lacked the sagacity of a Washington or a Greene; he prepared for defense, while day by day the coil of the anaconda tightened about the doomed city.
Lincoln surrendered, and Charleston, with its stores, its advantages, and the army that defended it, fell into the hands of the British commander. The fall of Charleston was a sad blow to the patriot cause -- the most disastrous event of the war, except the fall of Fort Washington on the Hudson four years before. It gave Clinton control of South Carolina as well as of Georgia, and that officer now sailed away for New York, leaving Cornwallis in command with five thousand men.

During the following months the scene in the Carolinas and Georgia was one of wild disorder and anarchy. A large portion of the people were loyalists, and scarcely a day passed without hand-to-hand encounters, bloodshed, and murder. The patriots were without an army, but bands of roving volunteers annoyed the British incessantly. The most daring and successful leader of these bands was Francis Marion, the "Swamp Fox." With a handful of followers he would creep like a tiger from the coverts of the woods or the fastnesses of the mountains, strike a deadly blow, and disappear again like a shadow. Scarcely inferior to Marion was Thomas Sumter, the "South Carolina Gamecock," who was to outlive all his fellow-officers of the Revolution, and to leave his name upon that famous fort which was destined to be the scene of the opening of that greater war, to be fought by a later generation of Americans. After the war Sumter became a statesman, sat in the United States Senate, was minister to Brazil, and died in 1832 at the great age of ninety-eight years. Next to Sumter must be ranked Andrew Pickens, who also lived many years under the Constitution, and served his state in Congress. These and a few other kindred spirits kept alive the patriot cause in the South after the fall of Charleston, until a new army could be organized.
Washington had sent De Kalb, who was hastening southward with over fifteen hundred veterans; the call for militia from Maryland, Virginia, and North Carolina met with a considerable response; and a commander to succeed Lincoln was to be sent from the North. Washington preferred Greene for this responsible duty, but the people called for Gates, "the hero of Saratoga," whom public opinion still clothed with the glamour of a great genius. Gates arrived upon the scene late in July, and again the hopes of the lovers of liberty rose -- to be ruthlessly dashed to the ground once more -- only once more.

This final disaster was to occur at Camden, South Carolina, whither Gates hastened by forced marches. Reaching a point near the town, he found Lord Rawdon blocking his way with a force smaller than his own. Gates should have struck an immediate blow, but he hesitated for two days, and by that time Cornwallis with the main army had joined Rawdon. Now occurred an unusual coincidence. On the night of the 15th of August, Gates decided to march through a wood for ten miles and surprise the enemy at daybreak. It happened that Cornwallis, on the same night and at the same hour, began a march over the same route for the purpose of surprising Gates. The two armies met midway and both were equally surprised. They waited till daylight, and then came the battle of Camden.

The American force was largely composed of raw militia, who broke and fled at the first fire, throwing their loaded muskets to the ground. The regulars fought with great bravery, but the odds were against them, and the American army was totally routed. The noble De Kalb, bleeding from eleven wounds, fell into the enemy's hands and died soon afterward. Gates was borne from the field in the mad retreat, and he kept on galloping, and by night he had covered sixty miles. But he did not stop here; three days later he was at Hillsborough, North Carolina, nearly two hundred miles from the scene of the battle.
His "northern laurels were changed to southern willows," as the cynical Charles Lee put it. Gates made an effort to recruit an army, but with little success. He saw that his career was over, and he made a piteous appeal to the commander in chief. Washington wrote him a consoling letter, expressing confidence, and even suggesting that he might be able to place Gates in command of one wing of the Continental army. The broken old general cherished this letter to the end of his days. The writing of this by Washington, in the face of the memory of the Conway Cabal, displayed a magnanimity with which few of the human race are gifted.

A few days after the crushing defeat of the Americans at Camden, another disaster, but of minor importance, was added to it. Sumter, with four hundred men, had captured a British baggage train, but Tarleton overtook him, recaptured the baggage, and made prisoners of three hundred of his men. These were the darkest hours of the Revolution, save only the few weeks preceding the battle of Trenton. But soon the light began to dawn; and never again, from that hour until now, has it been so nearly obscured as in the dark days that followed the battle of Camden.

Scarcely had Tarleton won his victory, when Colonel Williams defeated five hundred British and Tories with great slaughter; and a few days later, on the banks of the Santee, Marion, with a handful of men, dashed upon a portion of the British army, captured twenty-six, set one hundred and fifty prisoners free, and darted into the forest without losing a man. This was a beginning; King's Mountain was soon to follow.

Cornwallis sent Major Ferguson, one of his best officers, with twelve hundred men, five sixths of whom were loyalists, to scour the back country, gather recruits, and strike terror into the hearts of the patriots.
The news of his raid spread beyond the mountains, and the frontier settlements were soon roused to fury; and, like the farmers at Lexington and Bennington, these hardy backwoodsmen seized their muskets, and hastened to meet the foe. Without orders, without hope of reward, these men, led by such heroes as John Sevier and Isaac Shelby, William Campbell and James Williams, poured like a torrent from the slopes and glens of the mountains, more than a thousand strong. A motley crowd they were, Indian fighters and hunters, farmers and mountain rangers, dressed in their hunting shirts, with sprigs of hemlock in their hats, fearless and patriotic, and every man a dead shot with the rifle. So eager were they for the fray that the few hundred that were needed to guard the settlements had to be drafted for the purpose.3 Ferguson heard of the coming of the "dirty mongrels," as he called them, and he planted his army on a spur of King's Mountain near the boundary between the Carolinas. The mountaineers, now numbering over thirteen hundred, came upon Ferguson on the afternoon of October 7, hungry and worn with an all-night march. They chose Campbell as their leader, but in truth the battle, like that at Lexington, was fought without a leader. Ferguson had chosen a strong position, but the pioneers were used to mountain climbing. They chose the only plan that could have succeeded: they surrounded the hill and, pressing up the slopes, attacked the British from every side. The latter fought with a courage worthy of a better cause. They fired volley after volley, they rushed upon the foe with the bayonet and pressed them down the hillside. But the Americans instantly re-formed and renewed the attack. At one moment the false cry ran along the American line that Tarleton was in the rear, and about to attack them. 
It created a panic and several hundred started to run, when John Sevier, whose "eyes were flames of fire, and his words electric bolts," rode among the fleeing men, and, with the magnetic power of a Sheridan, turned them back to duty and to victory. Three times the assaulting columns surged up the hill only to be driven back at the point of the bayonet. But they always came again, and at length the British were exhausted; they huddled together on the hill, their ranks melting before the sharpshooters' bullets like snow beneath a summer's sun. Ferguson was a man of desperate valor. He refused to surrender. A white flag, raised by one of his men, he struck down with his sword. Then with foolhardy daring he made a dash through the encircling columns for liberty. Five sharpshooters leveled their pieces, and the British officer fell with five mortal wounds in his body. The remnant of the force surrendered; 4564 of their number lay dead upon the field, to say nothing of the wounded, while but 28 of the Americans were slain. The battle over, the men who had won it, taking their prisoners with them, hied away again to their crude civilization beyond the Alleghanies, disappearing as suddenly and noiselessly as they came. This was their only service in the war, but it was a noble service. At King's Mountain they turned the tide of the war, and insured the ultimate independence of America. During the following months Marion and Sumter were extremely energetic in their peculiar mode of warfare, and the latter gained a victory over Tarleton. But this was not all; Daniel Morgan came down from the North, -- Morgan, whose romantic career we have noticed, -- and at his hands the scourge of the South, Tarleton, was to suffer the most crushing defeat of his life. General Nathanael Greene was appointed to succeed Gates at the South.
He arrived in December, 1780, and with the aid of Thomas Jefferson, Governor of Virginia, raised some two thousand men from that state, and these, with fifteen hundred whom Gates had collected after Camden, gave him a respectable army. Greene's first important move was to send the free lance, Daniel Morgan, to raid the back country. Morgan, with nine hundred men, was soon confronted by eleven hundred under Tarleton. The two met at the Cowpens, not far from King's Mountain. Morgan's tactics were perfect; the battle was furious, and Tarleton's army was almost annihilated, he and a few followers alone escaping through the swamps on horseback. Greene had the services of some of the best men of the Continental army -- Steuben, whom he left in Virginia to watch the traitor Arnold, Kosciusko, and the brilliant cavalry leaders, Henry Lee and William Washington, the latter a distant relative of the commander in chief. Cornwallis was greatly weakened by the defeat at the Cowpens, and he determined to strike Greene as soon as possible and revive the waning spirits of the regulars and loyalists. Perceiving this, Greene decided to lure the British general as far as possible from his base of supplies, and then to give him battle. He began an apparent retreat northward. Cornwallis fell into the trap, destroyed his heavy baggage, and followed. The chase continued for two hundred miles. At Guilford Courthouse, but thirty miles from the Virginia border, Greene, having joined Morgan's forces with his own, wheeled about, and, after some days of sparring for position, offered battle.5 Greene placed his raw militia in front with orders to fire two or three volleys before giving way, after which the brunt of the battle was to be borne by the regulars. This plan had been adopted by Morgan at the Cowpens with great success, and Greene found it highly useful. 
At one time during the battle the Americans were on the point of being routed when they were saved by a cavalry charge of Colonel Washington. After the battle had continued for some hours the British planted their columns on a hill, from which they fought with great valor and could not be dislodged, and at nightfall they were left in possession of the field. From this cause the battle of Guilford has been considered a British victory. But the real victory lay with Greene. He had lured his enemy far from his base of supplies, and had destroyed one fourth of his army, six hundred men, himself losing but four hundred. Cornwallis saw that he was entrapped, refused Greene's challenge for a second battle, and marched in all haste to the seacoast, leaving his wounded behind. By the flight of Cornwallis North Carolina was left in the hands of the Americans, and South Carolina was soon to share the same good fortune; for Greene, instead of pursuing the enemy toward Wilmington, turned to the latter state, and in three months he and his subordinates had driven the enemy from every stronghold -- Camden, Augusta, Fort Motte, Orangeburg, Ninety-six -- all except Charleston;6 and all the energy that the British had expended in two and a half years to possess those states came to naught. 5Greene's flight was prompted also by the fact that he did not feel able, without reinforcements, to fight Cornwallis. He offered battle only after making a detour into Virginia and gathering several hundred recruits. [return] 6Colonel Stewart, however, who succeeded Lord Rawdon, remained in South Carolina till September 8, when occurred the battle of Eutaw Springs. This has been pronounced a British victory; but, strange to say, the victors fled and were pursued for thirty miles by the vanquished. Source: "History of the United States of America," by Henry William Elson, The MacMillan Company, New York, 1904. Chapter XIV p. 301-308. Transcribed by Kathy Leigh.
<urn:uuid:5bac790d-94c3-41f2-9d97-ce3fdb2ff58d>
CC-MAIN-2016-26
http://www.usahistory.info/south/war.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783404382.73/warc/CC-MAIN-20160624155004-00085-ip-10-164-35-72.ec2.internal.warc.gz
en
0.983412
3,635
3.78125
4
Homocystinuria is an inherited disorder that keeps the body from processing the essential amino acid methionine. Amino acids are the building blocks of protein. Methionine occurs naturally in various proteins. Infants need it for growth, and adults need it to maintain their body’s
The illness usually affects infants during the first few years of life, and the more rare forms of the disorder can lead to children being underweight. If it’s left untreated, homocystinuria can have serious and sometimes
What Are the Signs and Symptoms of Homocystinuria?
The symptoms will depend on the type of homocystinuria. Symptoms generally develop during the first years of life. However, some people experience symptoms during adulthood. Symptoms are often vague and difficult to detect. The most common forms of this disorder may involve the following:
- dislocation of the lenses in the eyes
- abnormal blood clots
- osteoporosis, or weakening of the bones
- learning disabilities
- developmental problems
- chest deformities, such as a protrusion or a caved-in appearance of the breastbone
- long, spindly arms and legs
Less common variations involve these additional signs and symptoms:
- megaloblastic anemia, an anemia involving larger-than-normal red blood cells
- failure to thrive
- intellectual disabilities
- movement and gait abnormalities
What Are the Types of Homocystinuria?
Numerous variations of homocystinuria exist. They range from common to rare. No specific names exist for these variations. Instead, they’re distinguished by their symptoms. Infants who are affected by the common form of this disorder generally experience mild symptoms. In fact, they may not even have symptoms that require treatment until they’re older. However, the less common forms of this disorder have been known to cause more serious developmental problems, including impaired intellectual capability. Certain genetic mutations present at birth cause this disease.
More than 150 mutations that cause homocystinuria have been found in the gene cystathionine beta-synthase, which is also known as the CBS gene. The CBS gene holds instructions for making an enzyme that uses vitamin B-6 to metabolize the amino acids homocysteine and serine. The mutations prevent the normal functioning of the CBS gene. This results in a buildup of homocysteine and other toxins that damage the nervous system, which includes the brain, and the
In rare cases, mutations in other genes like MTHFR, MTR, or MTRR cause the disorder. Homocystinuria is an autosomal recessive trait. This means that for a child to have the signs or symptoms of this condition, they would need to inherit the mutated gene from both parents.
Who Is at Risk for Homocystinuria?
Since homocystinuria is passed from parents to children, a family history of the disorder places children at an increased risk of developing this condition. The disorder is more common in:
Your child’s doctor may look for certain signs to determine whether you or your child has this condition. An extremely thin or tall child is more likely to have the condition. Additionally, their doctor may search for signs such as chest deformities, a curvature of the spine, and dislocated lenses in the eyes. An eye examination can reveal a dislocated lens if your child experiences double or significantly impaired vision. Your doctor may also order a series of tests to determine if your child is affected. These tests may include:
- testing to look for one of the genes involved in the disorder
- amino acid screen of the blood and urine to check for excess homocysteine
- test to determine the body’s response to consuming methionine
- liver biopsy and enzyme assay to check enzymatic activity
Other tests that may be done to determine the impact of the disease include X-rays to look for signs of osteoporosis, a skin biopsy, and a fibroblast culture. There’s no cure for homocystinuria.
High doses of vitamin B-6 are a successful treatment for about half of the people with this disorder. If you respond well to this supplementation, it’s likely that you’ll have to use daily vitamin B-6 supplements for the rest of your life.
Alternatively, if you don’t experience positive results from this therapy, your doctor may recommend eating a diet low in foods containing the amino acid methionine. People diagnosed at an early stage have had positive responses after switching to this diet. Your doctor may also recommend taking betaine (Cystadane). Betaine is a nutrient that works to remove homocysteine from the blood. Taking a folic acid supplement and adding the amino acid cysteine to the diet are
What Can I Expect If My Child Has Homocystinuria?
There’s currently no cure available for this disorder. However, around half of people who are diagnosed with this condition show improvement from vitamin B-6 supplements. Infants or children who are diagnosed at an early age may experience positive results from a low-methionine diet. It’s believed that this diet helps to prevent some types of mental disabilities and
Even with treatment, you may experience serious complications. You may be at risk for blood clots if you have high homocysteine levels consistently. Make sure you follow your treatment plan and schedule regular checkups with your doctor. This will help your doctor monitor your treatment and help to prevent complications.
How Can I Keep My Child from Getting Homocystinuria?
This disorder occurs as a result of genetic mutations. This can make it challenging to prevent. You should consider going to a genetic counselor if a history of homocystinuria runs in your family. Prospective parents can use this method to analyze the risks of having a child who inherits
Requesting an intrauterine diagnosis of the disorder is another possible way to prevent this condition. This requires testing a culture of amniotic cells or villi for the presence of the genetic mutation.
<urn:uuid:07078f1d-52f8-4fd4-b973-42dc58ab93be>
CC-MAIN-2016-26
https://www.aarpmedicareplans.com/health/homocystinuria?hlpage=health_center&loc=basic_info_tab
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783404826.94/warc/CC-MAIN-20160624155004-00101-ip-10-164-35-72.ec2.internal.warc.gz
en
0.913649
1,337
3.5625
4
One of the difficult aspects of the study of electronic music is the accurate description of the sounds used. With traditional music, there is a general understanding of what the instruments sound like, so a simple notation of 'violin', or 'steel guitar' will convey enough of an aural image for study or performance. In electronic music, the sounds are usually unfamiliar, and a composition may involve some very delicate variations in those sounds. In order to discuss and study such sounds with the required accuracy, we must use the tools of mathematics. There will be no proofs or rigorous developments, but many concepts will be illustrated with graphs and a few simple functions. Here is a review of the concepts you will encounter: In dealing with sound, we are constantly concerned with frequency, the number of times some event occurs within a second. In old literature, you will find this parameter measured in c.p.s., standing for cycles per second. In modern usage, the unit of frequency is the Hertz, (abbr. hz) which is officially defined as the reciprocal of one second. This makes sense if you remember that the period of a cyclical process, which is a time measured in seconds, is equal to one over the frequency. (P=1/f) Since we often discuss frequencies in the thousands of Hertz, the unit kiloHertz (1000hz=1khz) is very useful. Many concepts in electronic music involve logarithmic or exponential relationships. A relationship between two parameters is linear if a constant ratio exists between the two, in other words, if one is increased, the other is increased a proportional amount, or in math expression: Y = kX, where k is a number that does not change (a constant). A relationship between two parameters is exponential if the expression has this form: Y = k^X. In this situation, a small change in X will cause a small change in Y, but a moderate change in X will cause a large change in Y.
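These two kinds of growth are easy to check numerically. Here is a minimal sketch (the value k = 2 and the function names are illustrative assumptions, not from the text):

```python
def linear(x, k=2):
    # Linear: Y = k * X -- equal steps in X give equal steps in Y.
    return k * x

def exponential(x, k=2):
    # Exponential: Y = k ** X -- each unit step in X multiplies Y by k.
    return k ** x

for x in range(6):
    print(x, linear(x), exponential(x))
# x:           0  1  2  3  4   5
# linear:      0  2  4  6  8  10
# exponential: 1  2  4  8  16  32
```

Note how the exponential column overtakes the linear one after only a few steps, which is exactly the "moderate change in X, large change in Y" behavior described.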
The two kinds of relationship can be shown graphically like this: One fact to keep in mind whenever you are confronted with exponential functions: X^0=1 no matter what X is. A logarithm is a method of representing large numbers originally developed for use with mechanical calculators. It is the inverse of an exponential relationship. If Y=10^X, X is the logarithm (base 10) of Y. This system has several advantages; it keeps numbers compact (the log of 1,000,000 is 6), and there are a variety of mathematical tricks that can be performed with logarithms. For instance, the sum of the logarithms of two numbers is the logarithm of the product of the two numbers-if you know your logs (or have a list of them handy), you can multiply large numbers with a mechanical adder. (This is what a slide rule does.) Two times the logarithm of a number is the log of the square of that number, and so forth. We find logarithmic and exponential relationships many places in music. For instance the octave relationship may be expressed as Freq = F*2^n where F is the frequency of the original pitch and n is the number of octaves you want to raise the pitch. We discuss the logarithmic nature of loudness at length in Hearing and the Ear and Decibels. The strength of sounds, and related electronic measurements, are often expressed in decibels (abbr. dB). The dB is not an absolute measurement; it is based upon the relative strengths of two sounds. Furthermore, it is a logarithmic concept, so that very large ratios can be expressed with small numbers. The formula for computing the decibel relationship between two sounds of powers A and B is 10 log(A/B). Please see Decibels for more complete information. A spectral plot is a map of the energy of a sound. It shows the frequency and strength of each component. Each component of a complex sound is represented by a bar on the graph.
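The octave formula Freq = F*2^n and the decibel formula 10 log(A/B) given above translate directly into code. A sketch (the function names are mine, not from the text):

```python
import math

def octave_shift(freq, n):
    # Freq = F * 2**n: n octaves up (a negative n transposes down).
    return freq * 2 ** n

def decibels(power_a, power_b):
    # Decibel relationship between two powers A and B: 10 * log10(A / B).
    return 10 * math.log10(power_a / power_b)

print(octave_shift(440, 2))   # 1760: two octaves above 440 Hz
print(decibels(1000, 10))     # 20.0: a hundredfold power ratio is 20 dB
```

Because the dB is a ratio, `decibels(2, 1)` gives about 3 dB for a doubling of power, which is why "+3 dB" is shorthand for "twice the power."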
The frequency of a component is indicated by its position to the right or left, and its amplitude is represented by the height of the bar. The frequencies are marked out in a manner that gives equal space to each octave of the audible spectrum. The amplitude scale is not usually marked, since we are usually only concerned with the relative strengths of each component. It is important to realize that whenever a spectral plot is presented, we are talking about the contents of sound. In the example, the sound has four noticeable components, at 500 hz, 1000, just below 2000 hz, and just above 2000 hz. See also Sound Spectra. Envelopes are a very familiar type of graph, showing how some parameter changes with time. This example shows how a sound starts from nothing, builds quickly to a peak, falls to an intermediate value and stays near that value a while, then falls back to zero. When we use these graphs, we are usually more concerned with the rate of the changes that take place than with any actual values. A variation of this type of graph has the origin in the middle: Even when the numbers are left off, we understand that values above the line are positive and values below the line are negative. The origin does not represent 'zero frequency', it represents no change from the expected frequency. The most complex graph you will see combines spectral plots and envelopes in a sort of three dimensional display: This graph shows how the amplitudes of all of the components of a sound change with time. The 'F' stands for frequency, which is displayed in this instance with the lower frequency components in the back. That perspective was chosen because the lowest partials of this sound have relatively high amplitudes. A different sound may be best displayed with the low components in front. When we are discussing the effects of various devices on sounds, we often are concerned with the way such effects vary with frequency.
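The envelope described above (a quick rise to a peak, a fall to an intermediate value, a hold, then a fall back to zero) is often stored as a list of breakpoints. A sketch, with illustrative times and levels of my own choosing:

```python
# (time in seconds, level) breakpoints; levels between points are found
# by linear interpolation. The shape matches the example in the text:
# fast attack, decay to an intermediate value, hold, release to zero.
ENVELOPE = [(0.0, 0.0), (0.05, 1.0), (0.2, 0.6), (0.8, 0.6), (1.0, 0.0)]

def level_at(t, points=ENVELOPE):
    # Interpolated envelope level at time t.
    if t <= points[0][0]:
        return points[0][1]
    for (t0, v0), (t1, v1) in zip(points, points[1:]):
        if t0 <= t <= t1:
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
    return points[-1][1]  # past the last breakpoint

print(level_at(0.05))  # 1.0 -- the peak
print(level_at(0.5))   # 0.6 -- the sustained intermediate value
```

The rate of change the text emphasizes is simply the slope between adjacent breakpoints.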
The most common frequency dependent effect is a simple change of amplitude; in fact all electronic devices show some variation of output level with frequency. We call this overall change frequency response, and usually show it on a simple graph: The dotted line represents 0 dB, which is defined as the 'flat' output, which would occur if the device responded the same way to all frequencies of input. This is not a spectral plot; rather, it shows how the spectrum of a sound would be changed by the device. In the example, if a sound with components of 1 kHz, 3kHz, and 8kHz were applied, at the device output the 1kHz partial would be reduced by 2dB, the 8kHz partial would be increased by 3dB, and the 3kHz partial would be unaffected. There would be nothing happening at 200Hz since there was no such component in the input signal. When we analyze frequency response curves, we will often be interested in the rate of change, or slope of the curve. This is expressed in number of dB change per octave. In the example, the output above 16kHz seems to be dropping at about 6 dB/oct. Once in a while, we will look at the details of the change in pressure (or the electrical equivalent, voltage) over a single cycle of the sound. A graph of the changing voltage is the waveform, as: Time is along the horizontal axis, but we usually do not indicate any units, as the waveform of a sound is more or less independent of its frequency. The graph is always one complete period. The dotted line is the average value of the signal. This value may be zero volts, or it may not. The amplitude of the signal is the maximum departure from this average. The most common waveform we will see is the sine wave, a graph of the function v = A sin T. Understanding of some of the applications of sine functions in electronic music may come more easily if we review how sine values are derived. You can mechanically construct sine values by moving a point around a circle as illustrated.
Start at the left side of the circle and draw a radius. Move the point up the circle some distance, and draw another radius. The height of the point above the original radius is the sine of the angle formed by both radii. The sine is expressed as a fraction of the radius, and so must fall between 1 and -1. Imagine that the circle is spinning at a constant rate. A graph of the height of the point vs. time would be a sine wave. Now imagine that there is a new circle drawn about the point that is also spinning. A point on this new circle would describe a very complex path, which would have an equally complex graph. It is this notion of circles upon circles upon circles which is the basis for the concept of breaking waveforms into collections of sine waves. (See the essay Sound Spectra for more information.) This fanciful machine shows how complex curves are made up of simple ones. A mathematical series is a list of numbers in which each new member is derived by performing some computation with previous members of the list. A famous one is the Fibonacci series, where each new number is the sum of the two previous numbers (1,1,2,3,5,8 etc.) In music, we often encounter the harmonic series, constructed by multiplying a base number by each integer in turn. The harmonic series built on 5 would be 5,10,15,20,25,30 and so forth. The number used as the base is called the fundamental, and is the first number in the series. Other members are named after their order in the series, so you would say that 15 is the third harmonic of 5. The series was called harmonic because early mathematicians considered it the foundation of musical harmony. (They were right, but it is only part of the story.) One of the aspects of music that is based on tradition is which frequencies of sound may be used for 'correct' notes. 
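The harmonic series construction just described reduces to multiplying a fundamental by each integer in turn, and the spinning-point picture of the sine is just as short. A sketch (the function names are mine):

```python
import math

def harmonic_series(fundamental, count):
    # Members 1..count of the harmonic series built on the fundamental.
    return [fundamental * k for k in range(1, count + 1)]

def point_height(angle_degrees):
    # Height of a point moved around a unit circle: the sine of the
    # angle swept out, expressed as a fraction of the radius.
    return math.sin(math.radians(angle_degrees))

print(harmonic_series(5, 6))  # [5, 10, 15, 20, 25, 30], as in the text
print(point_height(90))       # 1.0 -- the top of the circle
```

If the circle spins at a constant rate, sampling `point_height` at equal time steps traces out one period of a sine wave.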
The concept of the octave, where one note is twice the frequency of another is almost universal, but the number of other notes that may be found between is highly variable from one culture to another, as is the tuning of those notes. In the western European tradition, there are twelve scale degrees, which are generally used in one or two assortments of seven. For the past hundred and fifty years or so, the tunings of these notes have been standardized as dividing the octave into twelve equal steps. The western equal tempered scale can then be defined as a series built by multiplying the last member by the twelfth root of two (1.05946). The distance between two notes is known by the musical term interval. (Frequency specifications are not very useful when we are talking about notes.) The smallest interval is the half step, which can be further broken down into one hundred units called cents. Equal temperament has a variety of advantages over the alternatives, the most notable one being the ability of simple keyboard instruments to play in any key. The major disadvantage of the system is that none of the intervals beside the octave is in tune. To justify that last statement we have to define "in tune". When two musicians who have control of their instruments attempt to play the same pitch, they will adjust their pitch so the resulting sound is beat free. (Beating occurs when two tones of almost the same frequency are combined. The beat rate is the difference between the frequencies.) If the two attempt to play an interval expected to be consonant, they will also try for a beat free effect. This will occur when the frequencies of the notes fall at some simple whole number ratio, such as 3:2 or 5:4. If the instruments are restricted to equal tempered steps, that 5:4 ratio is unobtainable. The actual interval (supposed to be a third) is almost an eighth of a step too large. It is possible to build scales in which all common intervals are simple ratios of frequency. 
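The mistuning described above is easy to quantify in cents. A sketch using the standard convention that an interval of ratio r spans 1200 * log2(r) cents (that convention is assumed here, not stated in the text):

```python
import math

SEMITONE = 2 ** (1 / 12)   # the twelfth root of two, about 1.05946

def cents(ratio):
    # Interval size in cents: 100 cents per equal-tempered half step.
    return 1200 * math.log2(ratio)

equal_third = SEMITONE ** 4   # a major third: four equal half steps
just_third = 5 / 4            # the beat-free 5:4 ratio

print(round(cents(equal_third), 1))   # 400.0
print(round(cents(just_third), 1))    # 386.3
# The equal-tempered third is about 14 cents wider than the pure 5:4
# interval -- the discrepancy the text describes.
```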
It was such scales that were replaced by equal temperament. We say scales-plural, because a different scale is required for each key; if you build a pure scale on C and one on D, you find that some notes which are supposed to occur in both scales come out with different frequencies. String instruments, and to some extent winds can deal with this, but keyboard instruments cannot. If you combine a musical style that requires modulation from key to key with the popularity keyboards have had for the last two centuries you have a situation where equal temperament is going to be the rule. I wouldn't even bring this topic up if it weren't for two factors. One is that the different temperaments have a strong effect on the timbres achieved when harmony is part of a composition. The other is that the techniques of electronic music offer the best of both systems. It is possible to have the nice intonation of pure scales and the flexibility for modulation offered by equal temperament. Composers are starting to explore the possibilities, and some commercial instrument makers are including multi-temperament capability on their products, so the near future may hold some interesting developments in the area. Peter Elsea 1996
<urn:uuid:d726bc0a-e9be-4bfd-84d0-c75f86650581>
CC-MAIN-2016-26
http://artsites.ucsc.edu/ems/music/tech_background/TE-11/teces_11.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783404405.88/warc/CC-MAIN-20160624155004-00096-ip-10-164-35-72.ec2.internal.warc.gz
en
0.952914
2,724
3.6875
4
Registered: December 2004 No. 433 — 38.2 x 54.0 cm. Watercolour and Ink. What the text says: Meiji 32 (1899); polished rice cost 10-sen (for 1.4 kg) “Sakiyama” (hewer/husband) goes into the pit first and starts mining coal. “Atoyama” (pusher/wife) is late to finish the house chore; a baby son (less than 10 years old) is carried on her back, while she carries lunch boxes and coal plates (tallies). The team is issued with a “Chagame” (canteen or flask), “Sumifuda” (coal tallies to attach to skips), and “Karui” (towing rope to draw the coal basket or box), and they enter into the deep carefully. When the adult is carrying a baby on the back, it is usually safe, but when the drive is narrow and the ceiling low the baby can hit its head. The cost of a nursery above-ground is 10-sen with an additional cost of 3 to 4-sen. It is costly, and children often tend to leave school temporarily or for a longer term and work underground. At the end of Meiji (1912), there were some nursery schools at the middle-sized pits. Text at top left: From the age of 7 and 8 years old, kids work at the pit. Descending the pit with lantern. Nobody could blame the miners for taking their children to the workplace when they could not afford childcare.
<urn:uuid:56e7268e-5361-4aa5-b6e1-ea660444b1b8>
CC-MAIN-2016-26
http://www.unesco-ci.org/photos/showphoto.php/photo/6201/title/painting-by-sakubei-yamamoto/cat/1031
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397636.15/warc/CC-MAIN-20160624154957-00063-ip-10-164-35-72.ec2.internal.warc.gz
en
0.958021
341
3.234375
3
Material handling occurs in one way or another in every department of every business on every working day – it is not surprising that accidents and injuries resulting from incorrect manual handling techniques comprise the largest group of occupational hazards that result in lost time. This program covers the following:
1. Anatomy and physiology of the neck and back
2. Types of injuries – muscle-ligament and disc
3. Steps to safe lifting
4. Team lifting
5. Physical characteristics of loads
6. Working conditions
7. Personal limitations of personnel involved in manual handling tasks
It is important to understand that of all the manual handling activities that put people at risk, lifting and carrying of objects accounts for 75% of all manual handling accidents and injuries. The principles of correct lifting and carrying must therefore be an important part of any manual handling training program. This program has been produced with the general workforce in mind, however, because the principles remain unchanged, regardless of the location, it is a program suitable for a wide audience.
<urn:uuid:417a983e-808e-4dfb-9dc0-dc7c61554927>
CC-MAIN-2016-26
http://www.personalinjuryclaimsolicitor.org/lifting-and-carrying-workplace-safety-training-video-2010-manual-handling-safetycare.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395346.6/warc/CC-MAIN-20160624154955-00010-ip-10-164-35-72.ec2.internal.warc.gz
en
0.893859
283
2.828125
3
Definitions for multiple (ˈmʌl tə pəl)
This page provides all possible meanings and translations of the word multiple.
the product of a quantity by an integer "36 is a multiple of 9"
having or involving or consisting of more than one part or entity or individual "multiple birth"; "multiple ownership"; "made multiple copies of the speech"; "his multiple achievements in public life"; "her multiple personalities"; "a pineapple is a multiple fruit"
A number that may be divided by another number with no remainder.
One of a set of the same thing; a duplicate.
Having more than one element, part, component, or function.
My Swiss Army knife has multiple blades.
Origin: From multiple.
containing more than once, or more than one; consisting of more than one; manifold; repeated many times; having several, or many, parts
a quantity containing another quantity a number of times without a remainder
Origin: [Cf. F. multiple, and E. quadruple, and multiply.]
Chambers 20th Century Dictionary
mul′ti-pl, adj. having many folds or parts: repeated many times.—n. a number or quantity which contains another an exact number of times.—n. Mul′tiplepoinding (Scots law), a process by which a person who has funds claimed by more than one, in order not to have to pay more than once, brings them all into court that one of them may establish his right.—Common multiple, a number or quantity that can be divided by each of several others without a remainder; Least common multiple, the smallest number that forms a common multiple. [L. multiplex—multus, many, plicāre, to fold.]
The Standard Electrical Dictionary
A term expressing connection of electric apparatus such as battery couples, or lamps in parallel with each other. In the ordinary incandescent lamp circuits the lamps are connected in multiple. Synonym--Multiple Arc.
British National Corpus Spoken Corpus Frequency
Rank popularity for the word 'multiple' in Spoken Corpus Frequency: #4111
Rank popularity for the word 'multiple' in Adjectives Frequency: #552
The numerical value of multiple in Chaldean Numerology is: 7
The numerical value of multiple in Pythagorean Numerology is: 9
Translations for multiple from our Multilingual Translation Dictionary:
- múltiple (Catalan, Valencian)
- mehrere, Vielfache (German)
- multiple, múltiplo (Spanish)
- چندگانه, چندتایی, مضرب (Persian)
- moninkertainen, moni, usea, monikerta (Finnish)
- többszörös, több (Hungarian)
- 倍数, 多重 (Japanese)
- meerdere, veelvoud (Dutch)
- złożony, wielokrotny, wieloraki, wielokrotność (Polish)
- múltiplo, múltiplos (Portuguese)
- много, многократный, множественный, кратное, множество, несколько, кратный, многочисленный, кратное число (Russian)
- flera stycken, flera, multipel (Swedish)
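The arithmetic sense above ("36 is a multiple of 9") and Chambers' "least common multiple" are easy to check mechanically. A small Python sketch, not part of the dictionary entry itself:

```python
from math import lcm  # lcm is available in Python 3.9+

def is_multiple(n, d):
    """n is a multiple of d when dividing n by d leaves no remainder."""
    return n % d == 0

print(is_multiple(36, 9))  # True: 36 = 4 * 9
print(lcm(4, 6))           # 12, the least common multiple of 4 and 6
```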
<urn:uuid:3306fad9-cf80-4f6b-a538-525257f93f61>
CC-MAIN-2016-26
http://www.definitions.net/definition/multiple
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397797.77/warc/CC-MAIN-20160624154957-00130-ip-10-164-35-72.ec2.internal.warc.gz
en
0.68292
843
3.15625
3
You choose both the aperture and the shutter speed. Manual mode even offers a shutter speed of "bulb" for long exposures. Because you control both aperture and shutter speed, manual mode offers great scope for expression. But choose the wrong combination and your photo will be too bright or too dark, or in other words over- or under-exposed. Keep your eye on the exposure indicator when choosing aperture and shutter speed.
01. Press the mode dial lock release and rotate the mode dial to M.
1. The mode dial lock release
02. While the exposure meters are on, rotate the main command dial to choose a shutter speed. Shutter speed can be set to "x200" or to values between 30 s and 1/4,000 s, or the shutter can be held open indefinitely for a long time-exposure (Bulb).
03. Rotate the sub-command dial to set aperture. Aperture can be set to values between the minimum and maximum values for the lens.
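Whether a chosen aperture/shutter pair gives the same exposure as another can be reasoned about with the standard exposure-value relation, EV = log2(N²/t), for f-number N and shutter time t in seconds. This sketch is illustrative only and is not from the Nikon guide; in practice the camera's exposure indicator performs this comparison for you:

```python
import math

def exposure_value(f_number, shutter_s):
    """APEX exposure value: EV = log2(N^2 / t)."""
    return math.log2(f_number ** 2 / shutter_s)

# Equivalent exposures: opening up two stops (f/8 -> f/4)
# while quartering the shutter time (1/125 s -> 1/500 s)
print(round(exposure_value(8, 1 / 125), 2))  # f/8 at 1/125 s
print(round(exposure_value(4, 1 / 500), 2))  # same EV at f/4, 1/500 s
```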
<urn:uuid:cfabf513-ce44-492b-bc7c-cf111d2694d7>
CC-MAIN-2016-26
http://imaging.nikon.com/support/digitutor/d610/functions/shootingmodes_m.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396106.71/warc/CC-MAIN-20160624154956-00026-ip-10-164-35-72.ec2.internal.warc.gz
en
0.861427
223
2.578125
3
The Black Hills gradually rise off the South Dakota prairie, creating an island of mountains on the western edge of the state and spilling over into the eastern edge of Wyoming. This “island” of mountains is part of the 1.2-million acre Black Hills National Forest and covers an area roughly 125 miles from north to south and 65 miles east to west. Elevations range from 3,200 feet to 7,000. That helps make the Black Hills the most significant mountain range between the Mississippi River and the West. The Black Hills feature 350 miles—40 of which are in Wyoming—of groomed trails and unlimited off-trail opportunities. The snowmobiling season for the Black Hills is Dec. 15 through March 31. The extensive trail system can be accessed from numerous parking areas spread out over South Dakota and Wyoming. Trails stretch from Lead, Deadwood and just south of Spearfish in the north to near the Crazy Horse Memorial and Custer State Park in the south and from near Galena in the east to close to Buckhorn in eastern Wyoming.
<urn:uuid:9f5c3150-1799-4472-8867-3d4bbfb126bd>
CC-MAIN-2016-26
http://www.snowest.com/2013/02/south-dakota
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395346.6/warc/CC-MAIN-20160624154955-00049-ip-10-164-35-72.ec2.internal.warc.gz
en
0.939356
219
2.546875
3
The seat of the Indian Parliament
The Parliament of India, housed in the Sansad Bhavan in New Delhi, is bicameral: it is made up of two houses, the Lok Sabha and the Rajya Sabha. The Parliament House (Sansad Bhavan) is a circular building designed by the British architects Sir Edwin Lutyens and Sir Herbert Baker in 1912–1913. Construction began in 1921, and in 1927 the building was opened as the home of the Council of State, the Central Legislative Assembly and the Chamber of Princes. The Parliament House of India now houses the supreme law-making body in India. It is the center of power, where politicians decide the fate of Indian democracy. Visitors are not normally allowed inside, but when the house is in session they may seek permission to watch the proceedings. The Parliament consists of three halls: the Lok Sabha (House of the People), the Rajya Sabha (Council of States) and the Central Hall, which is used for joint sittings of the two houses and for the President's addresses. Not far from the Rashtrapati Bhavan (the President's residence), the circular structure is 171 meters in diameter. The beautiful, structurally accomplished Parliament House is not just one massive structure; it is also the building that shelters the most massive democracy in the world.
<urn:uuid:131e9297-9234-45b5-bfbf-d5e6cceb3113>
CC-MAIN-2016-26
http://www.windhorsetours.com/sights/sights_view.php?country=india&placeid=234
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395560.69/warc/CC-MAIN-20160624154955-00134-ip-10-164-35-72.ec2.internal.warc.gz
en
0.963576
286
3.53125
4
Smoky Mountain Graben, Utah January 09, 2007 The graben shown above was photographed in south central Utah this past October. A graben is a sunken block of Earth’s crust bounded by parallel faults. Large-scale grabens include Death Valley in California and much of the Great Rift Valley in East Africa. The Basin and Range region of Arizona and Nevada is characterized by parallel grabens and horsts (up-faulted blocks). Most of them trend northwest-southeast. Seen from high above, the region looks like an army of caterpillars heading from Mexico toward Oregon. This landscape was created by extensional forces resulting from relative plate motions along the San Andreas Fault, which runs the length of California. Small-scale grabens are not as common. Smoky Mountain was named for smoke rising from natural, lightning-caused fires within coal seams underground. These coal layers also lubricate horizontal motion of the overlying bedrock. Burning and compression of the underlying coal beds has also allowed this 20-meter wide section of Straight Cliffs Formation to drop about 2-3 meters, creating a classic mini-graben structure. Note the car "parked" near the far side of the graben. This exciting but remote road is not recommended for passenger cars or wet weather travel.
<urn:uuid:283bc863-549c-42bf-9c88-62d5d25b540d>
CC-MAIN-2016-26
http://epod.usra.edu/blog/2007/01/smoky-mountain-graben-utah.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395346.72/warc/CC-MAIN-20160624154955-00077-ip-10-164-35-72.ec2.internal.warc.gz
en
0.921728
268
3.4375
3
Provided by: manpages-dev_3.35-0.1ubuntu1_all

NAME
sched_setaffinity, sched_getaffinity - set and get a process's CPU affinity mask

SYNOPSIS
#define _GNU_SOURCE             /* See feature_test_macros(7) */
#include <sched.h>

int sched_setaffinity(pid_t pid, size_t cpusetsize,
                      const cpu_set_t *mask);

int sched_getaffinity(pid_t pid, size_t cpusetsize,
                      cpu_set_t *mask);

DESCRIPTION
A process's CPU affinity mask determines the set of CPUs on which it is eligible to run. On a multiprocessor system, setting the CPU affinity mask can be used to obtain performance benefits. For example, by dedicating one CPU to a particular process (i.e., setting the affinity mask of that process to specify a single CPU, and setting the affinity mask of all other processes to exclude that CPU), it is possible to ensure maximum execution speed for that process. Restricting a process to run on a single CPU also avoids the performance cost caused by the cache invalidation that occurs when a process ceases to execute on one CPU and then recommences execution on a different CPU.

A CPU affinity mask is represented by the cpu_set_t structure, a "CPU set", pointed to by mask. A set of macros for manipulating CPU sets is described in CPU_SET(3).

sched_setaffinity() sets the CPU affinity mask of the process whose ID is pid to the value specified by mask. If pid is zero, then the calling process is used. The argument cpusetsize is the length (in bytes) of the data pointed to by mask. Normally this argument would be specified as sizeof(cpu_set_t). If the process specified by pid is not currently running on one of the CPUs specified in mask, then that process is migrated to one of the CPUs specified in mask.

sched_getaffinity() writes the affinity mask of the process whose ID is pid into the cpu_set_t structure pointed to by mask. The cpusetsize argument specifies the size (in bytes) of mask. If pid is zero, then the mask of the calling process is returned.

RETURN VALUE
On success, sched_setaffinity() and sched_getaffinity() return 0. On error, -1 is returned, and errno is set appropriately.
ERRORS
EFAULT A supplied memory address was invalid.

EINVAL The affinity bit mask mask contains no processors that are currently physically on the system and permitted to the process according to any restrictions that may be imposed by the "cpuset" mechanism described in cpuset(7).

EINVAL (sched_getaffinity() and, in kernels before 2.6.9, sched_setaffinity()) cpusetsize is smaller than the size of the affinity mask used by the kernel.

EPERM (sched_setaffinity()) The calling process does not have appropriate privileges. The caller needs an effective user ID equal to the real user ID or effective user ID of the process identified by pid, or it must possess the CAP_SYS_NICE capability.

ESRCH The process whose ID is pid could not be found.

VERSIONS
The CPU affinity system calls were introduced in Linux kernel 2.5.8. The system call wrappers were introduced in glibc 2.3. Initially, the glibc interfaces included a cpusetsize argument, typed as unsigned int. In glibc 2.3.3, the cpusetsize argument was removed, but was then restored in glibc 2.3.4, with type size_t.

CONFORMING TO
These system calls are Linux-specific.

NOTES
After a call to sched_setaffinity(), the set of CPUs on which the process will actually run is the intersection of the set specified in the mask argument and the set of CPUs actually present on the system. The system may further restrict the set of CPUs on which the process runs if the "cpuset" mechanism described in cpuset(7) is being used. These restrictions on the actual set of CPUs on which the process will run are silently imposed by the kernel. sched_setscheduler(2) has a description of the Linux scheduling scheme.

The affinity mask is actually a per-thread attribute that can be adjusted independently for each of the threads in a thread group. The value returned from a call to gettid(2) can be passed in the argument pid.
Specifying pid as 0 will set the attribute for the calling thread, and passing the value returned from a call to getpid(2) will set the attribute for the main thread of the thread group. (If you are using the POSIX threads API, then use pthread_setaffinity_np(3) instead.)

A child created via fork(2) inherits its parent's CPU affinity mask. The affinity mask is preserved across an execve(2).

This manual page describes the glibc interface for the CPU affinity calls. The actual system call interface is slightly different, with the mask being typed as unsigned long *, reflecting the fact that the underlying implementation of CPU sets is a simple bit mask. On success, the raw sched_getaffinity() system call returns the size (in bytes) of the cpumask_t data type that is used internally by the kernel to represent the CPU set bit mask.

SEE ALSO
clone(2), getcpu(2), getpriority(2), gettid(2), nice(2), sched_getscheduler(2), sched_setscheduler(2), setpriority(2), CPU_SET(3), pthread_setaffinity_np(3), sched_getcpu(3),

COLOPHON
This page is part of release 3.35 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at http://man7.org/linux/man-pages/.
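On Linux, the same pair of calls is exposed in Python as os.sched_setaffinity() and os.sched_getaffinity(), which makes a quick round-trip check easy. This sketch is not part of the man page; it pins the calling process (pid 0, as in the C API) to CPU 0 and then restores the inherited mask:

```python
import os  # os.sched_*affinity are Linux-only wrappers for these syscalls

PID = 0  # 0 means the calling process/thread, as in the C interface

original = os.sched_getaffinity(PID)   # e.g. {0, 1, 2, 3}
os.sched_setaffinity(PID, {0})         # now eligible to run only on CPU 0
print(os.sched_getaffinity(PID))       # {0}
os.sched_setaffinity(PID, original)    # restore the inherited mask
```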
<urn:uuid:ad9afcb7-1c6d-48e2-a4fc-873089056526>
CC-MAIN-2016-26
http://manpages.ubuntu.com/manpages/precise/en/man2/sched_getaffinity.2.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399117.38/warc/CC-MAIN-20160624154959-00069-ip-10-164-35-72.ec2.internal.warc.gz
en
0.842306
1,307
2.71875
3
Holocaust Education & Archive Research Team
[The Occupied Nations]
The Destruction of the Norwegian Jews

Between the end of the 13th century and 1814, Norway was ruled by Denmark. In 1814 the European great powers decided that Norway should enter a personal union with Sweden under the Swedish King, thereby delaying its independence until 1905. But in 1814 a wave of patriotism swept the country, and Norway acquired its first constitution. This document was relatively liberal, but it stated that the official state religion was Lutheran Protestantism and that Jews and Jesuits were forbidden from entering the kingdom. Of the numerous constitutional drafts drawn up before the constituent assembly, only a couple prohibited Jews from entering the country; however, it was the version put forth by the cleric Nicolai Wergeland that was the most virulently anti-Semitic. In his draft he wrote the following clause: "No person of the Jewish creed may enter Norway, far less settle down there". The debate on the so-called "Jewish clause" was long and heated; however, the ban on Jews entering Norway was passed and was not to be lifted until 1851, after which time the Jewish population grew slowly until the early 20th century, when pogroms in Russia and the Baltic states increased the number of immigrants. A further increase in Jewish immigration came in the 1930s, as Jews fled Nazi persecution in Germany and areas under German control. By 1941-1942 the Jewish population of Norway numbered roughly 1,000 households and approximately 2,200 individuals. The Jewish minority was primarily involved in the business sector. Norwegian Jews owned about 400 enterprises. About 40 were professionals, the remainder craftsmen and artists. Few Jews were employed in the public sector or as farmers or fishermen. There were two main communities, in Oslo and Trondheim.
In both cities the Jewish population enjoyed a lively cultural life, and the Jewish communities operated numerous religious institutions and cultural organizations that ran various educational and welfare programs. Though the Jewish minority was small and widely dispersed, several anti-Semitic stereotypes took hold in popular literature in the early 20th century. In books by the widely read authors Rudolf Muus and Øvre Richter Frich, Jews are described as obsessed with money and sadistic. In 1920, The Protocols of the Elders of Zion was published in Norway under the title "The New World Emperor". The country's immigration policy shifted following World War I to a far more restrictive line, and Jews were particularly singled out. The ministries of justice and foreign affairs were often at odds on the issue of Jewish immigration, but in practice the policy made it difficult for Jews to immigrate or settle in Norway. Restrictions were justified on an economic basis (Jews would either create destructive competition for Norwegian merchants and tradesmen, or freeload on public assistance), on purely political concerns (Jews as communists and other subversive elements would create political instability), or on general xenophobia against "foreign" groups. Whether the immigration policy was driven by the characterizations above, or vice versa, is not clear. Matters came to a head when the Germans invaded Norway and Denmark on 9 April 1940 in a combined attack; despite the gallant efforts of the Norwegian, British, Polish and French forces, the Germans proved too strong. Norwegian armed resistance began with the first great act of sabotage (though it lacked any military significance): the bombing of the Lysaker Bridge linking Oslo to its airport in Fornebu. On 1 February 1942, Vidkun Quisling took power in Norway as Minister President, and set about encouraging Nazi values and promoting the German cause in Norway.
The German authorities, under the leadership of Reichskommissar Josef Terboven, put the Norwegian civilian authorities under their control. This included various branches of the Norwegian police, including the district sheriffs, criminal police, and order police. The Jewish community of Norway was hit hard by the policies of the Nazi authorities, and the first anti-Jewish measure was introduced just a month after the beginning of the occupation, in May 1940, when the radios of Jews were confiscated. In October 1941 the registration of Jewish property started; a number of Jewish-owned firms and businesses were confiscated. The programme of anti-Jewish measures continued with the stamping of Jewish identity cards, which began in January 1942; Jews were to have a red "J" stamped in their identification papers. During this period there were some arrests of Jewish men, who were sent to prisons and labor camps inside Norway, but this did not yet lead to the mass arrest of the 1,700 Jews, the majority of them refugees from the Reich, concentrated in Oslo. Quisling appointed a 'Liquidation Board of Confiscated Jewish Assets.' Jewish households and businesses were treated as bankrupt, thus enabling their assets to be sold. The Jewish estates were liquidated, but continued to exist as legal entities, thus permitting expenses to be levied against them. This practice remained in effect even after the war, when a democratic government was established again in Norway. "The belongings of the estates were distributed according to the interests of the Quisling regime. All gold and silver objects and wristwatches were given to the German security police. The assets of Jews originally from Germany, Austria, and Czechoslovakia were given to the German authorities. By the end of the war, the 'Liquidation Board' had used approximately 30 percent of the value of the Jewish properties for its own administration."
A reception camp for Jews was soon established at Berg, near Tonsberg, and during June 1942 a general registration of Jews took place; the confiscation of all Jewish property was completed by October 1942. On the 2 September 1942 the Chief Rabbi of Norway, Julius Samuel, was ordered to report to the Gestapo. His wife, Henriette Samuel, urged him to go into hiding, or to flee, but he told her: "As Rabbi, I cannot abandon my community in this perilous hour." He was then arrested, together with 208 Norwegian men, and they were sent to an internment camp at Berg, south of Oslo. Then on the 25 October the police, assisted by the Hirden, the National Socialist militia founded by Vidkun Quisling, seized some 209 Jewish men and boys over sixteen years of age. They were sent by sea from Norway to Stettin on the Nord-Deutscher Lloyd steamer Donau and then continued their journey by rail to the death camp at Auschwitz-Birkenau in Poland. Then on the 26 November the Norwegian police, again assisted by the Hirden, carried out another round-up of Jews in Oslo. At 04.30 hours, one hundred taxis and 300 men of the Hirden and the Norwegian police, divided into approximately fifty groups, were charged with collecting ten Jews each. A Norwegian police officer called Knut Roed planned the action. The voyage of the steamer Donau began in Oslo from the so-called American Quay at 2 pm in the afternoon with 532 Jews on board. The steamer docked in Stettin on the 30 November 1942, and this transport arrived in Auschwitz-Birkenau on the 1 December 1942. Among the Jews deported in this transport was Professor Dr. Bertold Epstein, professor of paediatrics at the University of Prague, who had emigrated to Norway after the Germans occupied Prague. He received the camp registration number 79104 and became a prisoner physician in the men's camp at Birkenau, in the Buna auxiliary camp and in the Gypsy camp; his wife died in the gas chambers.
This transport consisted of 532 Jewish men, women and children, of whom 186 men were admitted into the camp; the remaining 346 people were killed in the gas chambers at Birkenau. Some of the Jews rounded up in November 1942 were not included in the above transport to Poland but were imprisoned at Bredveit prison in Oslo to await deportation. On the 24 February 1943, the Bredveit prisoners, along with twenty-five people from the concentration camp at Grini (built in 1939 as a prison for women, but turned into a concentration camp by the Germans), boarded the Gotenland steamer in Oslo. The ship departed the following day carrying 158 deportees, landing at Stettin on the 27 February 1943. The deportees then travelled to Berlin, where they stayed overnight at the Levetzowstrasse Synagogue. They then travelled by train and arrived in Auschwitz-Birkenau on the 3 March 1943. The majority of these deportees were murdered in the gas chambers; it is thought that only thirteen from these transports survived the war, some of whom were employed at the Monowitz sub-camp. The remaining 930 or so Jews succeeded in escaping over the border into neutral Sweden with the help of the Norwegian people, who risked their own lives to bring them to safety. Among those saved was Henriette Samuel, the wife of the already deported Chief Rabbi. Not only was she saved but also her children: a twenty-five-year-old Norwegian girl, Inge Sletten, a member of the Norwegian resistance, not only warned her of the impending deportations, but took her and the children to the home of a Christian friend, and a week later arranged for them to be smuggled across the border into neutral Sweden, together with forty other Jews. Others remained in hiding, or were exempt from deportation through marriage to Aryans and interned in camps. The behaviour of the Swedish Government deserves special recognition.
After the first sailing of the Donau, Dr Richert, the Swedish Minister in Berlin, proposed that his country should receive all the remaining Jews in Norway. True to his usual stance, Weizsäcker, the Chief State Secretary of the German Foreign Office, refused even to discuss the matter, and informed Ribbentrop that he had told Dr Richert that "the project would not stand a chance." There was, nevertheless, a liberal dispensation of naturalisation papers by the Swedish Consulate, of which Terboven, the Reichskommissar, complained in March 1943. Altogether approximately 767 Jewish men, women and children were deported to Poland, mostly to Auschwitz, and 26 survived the war. The former house of the Norwegian collaborator and dictator Vidkun Quisling has become the Norwegian Centre for Holocaust and Genocide Studies, opened on August 23, 2006 as a research center. It is located on the Oslo Fjord, with views of the harbor where Norwegian Jews were shipped to Stettin and Auschwitz.

The Final Solution by G. Reitlinger, Vallentine Mitchell & Co Ltd, 1953.
Atlas of the Holocaust by Martin Gilbert, published by Michael Joseph Ltd, 1982.
Encyclopedia of the Holocaust, Israel Gutman, Macmillan Publishing Company, New York, 1990.
Auschwitz Chronicle by Danuta Czech, published by Henry Holt, New York, 1990.
Bjorn Westlie, Coming to Terms with the Past: The Process of Restitution of Jewish Property in Norway, Institute of the World Jewish Congress.
Holocaust Historical Society.
* Special thanks to Olaf Nielsen
Copyright Victor Smart H.E.A.R.T 2008
<urn:uuid:f5a23bf5-778c-4c7d-bc7f-1291a9e0b8d9>
CC-MAIN-2016-26
http://www.holocaustresearchproject.org/nazioccupation/norwayjews.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783404405.88/warc/CC-MAIN-20160624155004-00117-ip-10-164-35-72.ec2.internal.warc.gz
en
0.974215
2,303
4.09375
4
Each subject should sign an informed consent document before data collection begins.

Basic Elements of Informed Consent (from 45 CFR 46.116)
In seeking informed consent, the following information shall be provided to each subject:
- A statement that the study involves research, an explanation of the purposes of the research and the expected duration of the subject's participation, a description of the procedures to be followed, and identification of any procedures which are experimental.
- A description of any reasonably foreseeable risks or discomforts to the subject. (NOTE: This includes any information about procedures that might make a subject hesitant to participate.)
- A description of any benefits to the subject or to others which may reasonably be expected from the research.
- A disclosure of appropriate alternative procedures or courses of treatment, if any, that might be advantageous to the subject.
- A statement describing the extent, if any, to which confidentiality of records identifying the subject will be maintained.
- For research involving more than minimal risk, an explanation as to whether any compensation and an explanation as to whether any medical treatments are available if injury occurs and, if so, what they consist of, or where further information may be obtained.
- An explanation of whom to contact for answers to pertinent questions about the research and research subjects' rights, and whom to contact in the event of a research-related injury to the subject.
- A statement that participation is voluntary, that refusal to participate will involve no penalty or loss of benefits to which the subject is otherwise entitled, and that the subject may discontinue participation at any time without penalty or loss of benefits to which the subject is entitled.

The informed consent document should be submitted as an attachment with the application by the principal investigator.
<urn:uuid:b1fa6ab8-b61d-4543-91fd-ffaf1d57f6b7>
CC-MAIN-2016-26
http://adrian.edu/academics/institutional-review-board/informed-consent/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399117.38/warc/CC-MAIN-20160624154959-00072-ip-10-164-35-72.ec2.internal.warc.gz
en
0.910757
357
2.828125
3
UNEP backs action on e-waste in East Africa

Nairobi, 7 September 2010 - Kenya is set to become the first East African nation to develop regulations on the management of electronic waste (e-waste), following a national conference held at the United Nations Environment Programme (UNEP) in Nairobi. The aim is to minimize the impacts of the unsafe disposal of electronic products on public health and the environment.

Delegates from Kenya's Environment Ministry, the country's National Environment Management Authority (NEMA), software giant Microsoft, UNEP and the telecommunications industry came together on Tuesday to chart a common way forward in dealing with e-waste management in line with the Basel Convention and other international frameworks. The need to identify and map the environmental impact of e-waste on Kenya was identified as a national priority. Experts also discussed the capacity constraints hindering the disposal of e-waste, as well as the collection system and recycling infrastructure.

E-waste consists of old electronic items such as computers, printers, mobile phones, refrigerators and televisions. Increasing demand for electronic goods in Kenya and in the developing world means that levels of e-waste are growing fast. As a result, the hazardous substances such as heavy metals contained in most of these discarded products are posing a serious risk to the environment and to human health. But e-waste also presents an economic opportunity through the recycling and refurbishing of discarded electronic goods and the harvesting of the precious metals they contain.

A recent baseline study conducted by the Kenyan Information Communications and Technology Network showed that Kenya generates 3,000 tons of electronic waste per year. The study predicts that the quantity is expected to rise as demand for electronic goods increases. Internationally, China, India and Pakistan receive much of the world's e-waste.
Worldwide, e-waste generation is growing by about 40 million tons a year.
<urn:uuid:85157c5a-e95a-4174-9178-88dd5ccd55b4>
CC-MAIN-2016-26
http://www.unep.org/Documents.Multilingual/Default.asp?DocumentID=647&ArticleID=6744&l=en
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396027.60/warc/CC-MAIN-20160624154956-00142-ip-10-164-35-72.ec2.internal.warc.gz
en
0.930121
391
3.140625
3
Normally, clouds rise in the sky. Water vapor in the air condenses to form droplets of water or ice. Usually the condensation is around microscopic dust particles. Cloud movement may be helped by warm air moving up from the ground. But not all clouds move up.

News You Can Use
- A mammatus cloud is an unusual formation usually seen after a severe thunderstorm. Mammatus clouds do not indicate a coming tornado, although many people believe they do. They represent a rare example of a sinking cloud.
- Sometimes the cloud has a high concentration of droplets and ice particles, making it heavier than the air around it. This higher density causes the cloud to sink. As the cloud sinks, it will warm and the particles in it will start to evaporate. If more energy is needed for evaporation than is produced by the drop in altitude, the cloud will continue to sink.
- Watch the video below of a mammatus cloud shot in Michigan: http://www.latimes.com/news/science/sciencenow/la-sci-sn-orange-bubble-clouds-video-20130726,0,4837916.story

With the links below, learn more about clouds. Then answer the following questions.
- How are clouds classified?
- What problems arise when satellites are used to detect clouds?
- How do mountains influence cloud formation?
- What causes cloud iridescence?
<urn:uuid:40766fc3-a411-4722-8a75-667e1e9b39cf>
CC-MAIN-2016-26
http://www.ck12.org/chemistry/Physical-Change/rwa/Sinking-Clouds/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783403826.29/warc/CC-MAIN-20160624155003-00058-ip-10-164-35-72.ec2.internal.warc.gz
en
0.830118
297
3.953125
4
Explanatory Notes on the Whole Bible, by John Wesley, [1754-65], at sacred-texts.com

3 Kings (1 Kings) 12:1
Were come - Rehoboam did not call them thither, but went thither, because the Israelites prevented him, and had pitched upon that place, rather than upon Jerusalem, because it was most convenient for all, being in the center of the kingdom; and because that being in the potent tribe of Ephraim, they supposed there they might use that freedom of speech, which they resolved to use, to get their grievances redressed. So out of a thousand wives and concubines, he had but one son to bear his name, and he a fool! Is not sin an ill way of building up a family?

3 Kings (1 Kings) 12:3
They sent - When the people sent him word of Solomon's death, they also sent a summons for him to come to Shechem. That the presence and countenance of a man of so great interest and reputation, might lay the greater obligation upon Rehoboam to grant them ease and relief.

3 Kings (1 Kings) 12:4
Grievous - By heavy taxes and impositions, not only for the temple and his magnificent buildings, but for the expenses of his numerous court, and of so many wives and concubines. And Solomon having so grossly forsaken God, it is no wonder if he oppressed the people.

3 Kings (1 Kings) 12:7
This day - By complying with their desires, and condescending to them for a season, till thou art better established in thy throne. They use this expression, foreseeing that some would dissuade him from this course, as below the majesty of a prince. And answer - Thy service is not hard, it is only a few good words, which it is as easy to give as bad ones.

3 Kings (1 Kings) 12:8
Young men - So called, comparatively to the old men: otherwise they were near forty years old.

3 Kings (1 Kings) 12:10
Shall be thicker - Or rather, is thicker, and therefore stronger, and more able to crush you, if you proceed in these mutinous demands, than his loins, in which is the principal seat of strength.

3 Kings (1 Kings) 12:15
From the Lord - Who gave up Rehoboam to so foolish and fatal a mistake, and alienated the people's affections from him; and ordered all circumstances by his wise providence to that end.

3 Kings (1 Kings) 12:16
In David - In David's family and son; we can expect no benefit or relief from him, and therefore we renounce all commerce with him, and subjection to him. They named David, rather than Rehoboam; to signify, that they renounced not Rehoboam only, but all David's family. Son of Jesse - So they call David in contempt; as if they had said, Rehoboam hath no reason to carry himself with such pride and contempt toward his people; for if we trace his original, it was as mean and obscure as any of ours. To your tents - Let us forsake him, and go to our own homes, there to consider, how to provide for ourselves.

3 Kings (1 Kings) 12:17
Judah - The tribe of Judah; with those parts of the tribes of Levi, and Simeon, and Benjamin, whose dwellings were within the confines of Judah.

3 Kings (1 Kings) 12:18
Sent Adoram - Probably to pursue the counsel which he had resolved upon, to execute his office, and exact their tribute with rigour and violence, if need were.

3 Kings (1 Kings) 12:19
Rebelled - Their revolt was sinful, as they did not this in compliance with God's counsel, but to gratify their own passions.

3 Kings (1 Kings) 12:20
Was come - From Egypt; which was known to them before, who met at Shechem, and now by all the people. Was none - That is, no entire tribe.

3 Kings (1 Kings) 12:24
From me - This event is from my counsel and providence, to punish Solomon's apostasy.

3 Kings (1 Kings) 12:25
Shechem - He repaired, and enlarged, and fortified it; for it had been ruined long since, Jdg 9:45. He might chuse it as a place both auspicious, because here the foundation of his monarchy was laid; and commodious, as being near the frontiers of his kingdom. Penuel - A place beyond Jordan; to secure that part of his dominions.

3 Kings (1 Kings) 12:26
Said, &c. - Reasoned within himself. The phrase discovers the fountain of his error, that he did not consult with God, who had given him the kingdom; as in all reason, and justice, and gratitude he should have done: nor believed God's promise, Kg1 11:38, but his own carnal policy.

3 Kings (1 Kings) 12:27
Will turn - Which in itself might seem a prudent conjecture; for this would give Rehoboam, and the priests, and Levites, the sure and faithful friends of David's house, many opportunities of alienating their minds from him, and reducing them to their former allegiance. But considering God's providence, by which the hearts of all men, and the affairs of all kingdoms are governed, and of which he had lately seen so eminent an instance; it was a foolish, as well as wicked course.

3 Kings (1 Kings) 12:28
Calves - In imitation of Aaron's golden calf, and of the Egyptians, from whom he was lately come. And this he the rather presumed to do, because he knew the people of Israel were generally prone to idolatry: and that Solomon's example had exceedingly strengthened those inclinations; and therefore they were prepared for such an attempt; especially, when his proposition tended to their own ease, and safety, and profit, which he knew was much dearer to them, as well as to himself, than their religion. Too much - Too great a trouble and charge, and neither necessary, nor safe for them, as things now stood.
Behold thy gods - Not as if he thought to persuade the people, that these calves were that very God of Israel, who brought them out of Egypt: which was so monstrously absurd and ridiculous, that no Israelite in his right wits could believe it, and had been so far from satisfying his people, that this would have made him both hateful, and contemptible to them; but his meaning was, that these images were visible representations, by which he designed to worship the true God of Israel, as appears, partly from that parallel place, Exo 32:4, partly, because the priests and worshippers of the calves, are said to worship Jehovah; and upon that account, are distinguished from those belonging to Baal, Kg1 18:21, Kg1 22:6-7, and partly, from Jeroboam's design in this work, which was to quiet the people's minds, and remove their scruples about going to Jerusalem to worship their God in that place, as they were commanded: which he doth, by signifying to them, that he did not intend any alteration in the substance of their religion; nor to draw them from the worship of the true God, to the worship of any of those Baals, which were set up by Solomon; but to worship that self-same God whom they worshipped in Jerusalem, even the true God, who brought them out of Egypt; only to vary a circumstance: and that as they worshipped God at Jerusalem, before one visible sign, even the ark, and the sacred cherubim there; so his subjects should worship God by another visible sign, even that of the calves, in other places; and as for the change of the place, he might suggest to them, that God was present in all places, where men with honest minds called upon him; that before the temple was built, the best of kings, and prophets, and people, did pray, and sacrifice to God in divers high places, without any scruple. And that God would dispense with them also in that matter; because going to Jerusalem was dangerous to them at this time; and God would have mercy, rather than sacrifice.
3 Kings (1 Kings) 12:29 kg1 12:29Beth - el, &c. - Which two places he chose for his peoples conveniency; Beth - el being in the southern, and Dan in the northern parts of his kingdom. 3 Kings (1 Kings) 12:30 kg1 12:30A sin - That is, an occasion of great wickedness, not only of idolatry, which is called sin by way of eminency; nor only of the worship of the calves, wherein they pretended to worship the true God; but also of the worship of Baal, and of the utter desertion of the true God; and of all sorts of impiety. To Dan - Which is not here mentioned exclusively, for they went also to Beth - el, Kg1 12:32-33, but for other reasons, either because that of Dan was first made, the people in those parts having been long leavened with idolatry, Jdg 18:30, or to shew the peoples readiness and zeal for idols; that those who lived in, or near Beth - el, had not patience to stay 'till that calf was finished, but all of them were forward to go as far as Dan, which was in the utmost borders of the land, to worship an idol there; when it was thought too much for them to go to Jerusalem to worship God. 3 Kings (1 Kings) 12:31 kg1 12:31An house - Houses, or chapels, besides the temples, which are built at Dan and Beth - el; he built also for his peoples better accommodation, lesser temples upon divers high places. Of the lowest - Which he might do, either, because the better sort refused it, or, because such would be satisfied with mean allowances; and so he could put into his own purse a great part of the revenues of the Levites, which doubtless he seized upon when they forsook him, and went to Jerusalem, Ch2 11:13-14, or, because mean persons would depend upon his favour, and therefore be pliable to his humour, and firm to his interest, but the words in the Hebrew properly signify, from the ends of the people; which may be translated thus, out of all the people; promiscuously out of every tribe. 
Which exposition seems to be confirmed by the following words, added to explain these, which were not of the sons of Levi; though they were not of the tribe of Levi. And that indeed was Jeroboam's sin; not that he chose mean persons, for some of the Levites were such; and his sin had not been less, if he had chosen the noblest and greatest persons; as we see in the example of Uzziah. But that he chose men of other tribes, contrary to God's appointment, which restrained that office to that tribe. Levi - To whom that office was confined by God's express command. 3 Kings (1 Kings) 12:32 kg1 12:32A feast - The feast of tabernacles. So he would keep God's feast, not in God's time, which was the fifteenth day of the seventh month, and so onward, Lev 23:34, but on the fifteenth day of the eighth month. And this alteration he made, either, to keep up the difference between his subjects, and those of Judah as by the differing manners, so by the distinct times of their worship. Or, lest he should seem directly to oppose the God of Israel, (who had in a special manner obliged all the people to go up to Jerusalem at that time,) by requiring their attendance to celebrate the feast elsewhere, at the same time. Or, to engage as many persons as possibly he could, to come to his feast; which they would more willingly do when the feast at Jerusalem was past and all the fruits of the earth were perfectly gathered in. Fifteenth day - And so onward till the seven days ended. Like that in Judah - He took his pattern thence, to shew, that he worshipped the same God, and professed the same religion for substance, which they did: howsoever he differed in circumstances. He offered - Either, by his priests. Or, rather, by his own hands; as appears from Kg1 13:1, Kg1 13:4, which he did, to give the more countenance to his new - devised solemnity. 
Nor is this strange; for he might plausibly think, that he who by his own authority had made others priests might much more exercise a part of that office; at least, upon an extraordinary occasion; in which case, he knew David himself had done some things, which otherwise he might not do. So he did - He himself did offer there in like manner, as he now had done at Dan. 3 Kings (1 Kings) 12:33 kg1 12:33Devised - Which he appointed without any warrant from God.
Nuclear History: Nuclear Arms and Politics in the Missile Age, 1955-1968 Berlin Crisis: 1958-1962 NEW DOCUMENTS FROM THE KENNEDY ADMINISTRATION National Security Archive Electronic Briefing Book No. 56 William Burr, Editor September 25, 2001 In stark contrast to the close U.S.-Russian relationship of today, forty years ago serious tensions over Berlin and Germany and the danger of world war clouded Moscow-Washington relations. Fred Kaplan's article in the October 2001 issue of The Atlantic Monthly, "JFK's First Strike Plan," shows that key White House officials and the President himself briefly considered proposals for a limited nuclear first strike against Soviet military targets in the event that the Berlin crisis turned violent. Kaplan's essay is partly based on archival materials that the National Security Archive obtained through declassification requests. Exploring John F. Kennedy's approach to the Berlin crisis, one of the most serious Cold War crises, Kaplan presents the grim situation that unfolded after Kennedy met with Chairman Nikita Khrushchev in Vienna in early June 1961. Worried about the future of their East German ally, the Soviets presented the West with an ultimatum: a peace treaty with East and West Germany (including "free city" or neutral status for West Berlin) must be negotiated by December. That deadline and then the building of the wall dividing Berlin in August 1961 raised East-West tensions, and U.S. policymakers and their NATO allies wondered when the next shoe would drop. While Kennedy saw West Berlin's security as a top priority, he also wanted to avoid war. If it broke out, however, he understood that his military alternatives were grim. Some on the White House staff saw the Pentagon's war plans as catastrophic and proposed instead a limited surprise attack on the Soviet Union if military confrontation over Berlin unfolded.
As Kaplan shows, Kennedy was aware of the first-strike plan but was even more interested in finding ways to prevent the use of nuclear weapons. The following are some of the documents that are central to Kaplan's presentation. Note: The following documents are in PDF format. |Carl Kaysen to General Maxwell Taylor, Military Representative to the President, "Strategic Air Planning and Berlin," 5 September 1961, Top Secret, excised copy, with cover memoranda to Joint Chiefs of Staff Chairman Lyman Lemnitzer, released to National Security Archive (appeal pending at Department of Defense). |Source: National Archives, Record Group 218, Records of the Joint Chiefs of Staff (hereinafter RG 218), Records of Maxwell Taylor (Document still under appeal at Department of Defense; appeal for withheld Department of Energy information already rejected by DoE) Fred Kaplan first explored White House nuclear policy and the Berlin crisis in his path-breaking book on U.S. nuclear planning, The Wizards of Armageddon (New York, Simon and Schuster, 1983). In the course of his research he interviewed a number of Kennedy administration officials, including NSC staffer Carl Kaysen. Kaysen told him that, during the summer of 1961, when East-West tensions over Berlin threatened to turn dangerous, he had prepared a study on the possibility of a limited first strike against the Soviet Union. Nearly twenty years later, that study was declassified (with excisions that are under appeal at the Defense Department). What motivated Kaysen was his concern that the U.S. nuclear war plan--the Single Integrated Operational Plan (SIOP)--involved an unimaginably catastrophic attack involving thousands of nuclear weapons.(1) The SIOP included an option for preemption, a first strike in the event that Washington had strategic warning of an imminent Soviet attack, but that was not what Kaysen had in mind.
In keeping with then-current interest in controlled nuclear response and presidential options, he wanted the president to have military alternatives that involved less loss of life in the Soviet Union and less danger to U.S. territory. Therefore, he proposed contingency planning for a limited nuclear first strike on the handful of Soviet ICBMs. Kaysen recognized that there were risks and uncertainties in such a plan, but he nevertheless believed that a limited approach would encourage the Soviets to avoid attacks on U.S. urban-industrial targets as well as "minimiz[e] the force of the irrational urge for revenge." |Major William Y. Smith to General Maxwell Taylor, "Strategic Air Planning and Berlin," 7 September 1961, Top Secret, excised copy, released with more information after appeal by National Security Archive |Source: John F. Kennedy Library, National Security File, box 82, Germany, Berlin, General, 9/7/61-9/8/61 (first published in National Security Archive, U.S. Nuclear History: Arms and Politics in the Missile Age (Washington, D.C./Alexandria Va, National Security Archive/Chadwyck Healey, 1998) William Y. Smith, an Air Force officer on General Taylor's staff, prepared a summary of Kaysen's report so that the busy general could get the gist of it without laboring over the detail.(2) Despite its summary nature, some of the information contained in Smith's report goes beyond the excised text of Kaysen's study as released by the Defense Department. For example, in annex A, page 2 of the Kaysen report, assumption 1 is withheld in its entirety. Page 2 of Smith's summary discloses that assumption: that 26 of the "essential targets" are the "staging bases that do not need to be hit in the first wave." Moreover, on page 2 of annex B of Kaysen's report, excisions in the third full paragraph concern tactics for overcoming enemy defenses.
This material was already declassified in the Smith summary, e.g., the discussion of the use of "low level attacks," mass attacks, and opening corridors. |Memorandum from General Maxwell Taylor to General Lemnitzer, 19 September 1961, enclosing memorandum on "Strategic Air Planning," Top Secret, released in full on appeal by National Security Archive. |Source: RG 218, Records of Maxwell Taylor, Box 34, Memorandums for the President, 1961 On 19 September, a few weeks after Smith prepared the summary of Kaysen's report, Taylor presented Kennedy with the same text.(3) Apparently Taylor discussed the summary of Kaysen's paper with the president because on the same day he presented JCS Chairman Lemnitzer with a series of questions that must have arisen in the course of discussion. However the questions were prepared, they clearly reflected the concerns of President Kennedy and Secretary of Defense McNamara, among others, about SIOP-62 (for FY 1962). For example, Kennedy wondered if it would be possible to fashion attacks that excluded urban areas or "governmental controls", China, or the East European satellite states. SIOP-62 entailed a massive attack on targets in Eastern Europe, the Soviet Union, China, and North Korea. Thus, if China was not in the war, it would nevertheless be subject to attack. Undoubtedly influenced by Kaysen's report, Kennedy also asked questions about the feasibility of a limited first strike, the prospect of redundant destruction (overkill), and the danger of a false alarm, among others. |Memorandum of Conference with President Kennedy, 20 September 1961 |Source: Foreign Relations of the United States, 1961-63 (Washington, D.C., Government Printing Office, 1998?), 130-131 Taylor transmitted Kennedy's questions for consideration by Commander-in-Chief Strategic Air Command (CINCSAC) Thomas Power, who was to meet with Kennedy, Lemnitzer, Taylor, and military aide General Chester Clifton.
Only this brief record of the discussion is available and little of it directly bears on the president's questions. If there was any discussion of an "alternative first strike plan," it was not recorded. Part of the discussion centered on General Power's doubts about the latest intelligence estimates of Soviet strategic missile forces--that Moscow only had about 20 ICBM pads. Lemnitzer and Taylor disagreed with Power, who had the audacity to recommend the resumption of U-2 flights over the Soviet Union. In addition, Power believed that there was a risk of a Soviet surprise attack; if "general atomic war was inevitable" he recommended striking first once key Soviet nuclear targets were located. Kennedy did not comment on Power's advice but he was concerned enough to ask his military advisers to find out how long it took the Soviets to launch their missiles.(4) |Memorandum for General Taylor from General Lemnitzer, "Strategic Air Planning and Berlin," CM-386-61, 11 October 1961, Top Secret. |Source: RG 218, Records of Lyman Lemnitzer, box 1 Lemnitzer had little use for criticisms of SIOP-62, which he thought was "far better than anything previously in existence." Although Lemnitzer conceded the need for "flexibility of execution and controlled response", he believed that the lack of survivable forces inhibited those goals (that is, if nuclear forces could not survive an attack they had to be used quickly). He suggested that SIOP-63, which was under development, would have important elements of flexibility. Lemnitzer made no complaint about Kaysen's first-strike proposal because preemptive options remained part of U.S. planning for any major conflict with the Soviet Union. Lemnitzer's tacit rejection of Kaysen's proposal for a limited first strike option meant that Kennedy had no military alternative to the massive attack option posited by SIOP-62. Belying the flexibility that Lemnitzer promised, the attack options of SIOP-63 would involve massive use of nuclear weapons.
It remains to be seen whether Taylor informed the President about the JCS's reply or discussed its implications. In any event, in an effort to avoid confrontation and find the basis for an "accommodation," Kennedy and Khrushchev were already carrying on private "pen pal" correspondence.(5) |Department of Defense News Release, Address by Roswell L. Gilpatric, Deputy Secretary of Defense Before the Business Council, Saturday, October 21, 1961. Realizing that the west would not comply with his deadline and no doubt uneasy about the danger of conflict over Berlin, Khrushchev waived the deadline on Berlin in a speech at the Communist party congress on 17 October 1961. To impress further on Khrushchev the importance of negotiations, Kennedy approved a speech on U.S. military power that Deputy Secretary of Defense Gilpatric delivered before the Business Council, a top-level corporate advisory body. Gilpatric did not disclose how much the U.S. knew about Moscow's tiny ICBM force but he nevertheless implied that U.S. intelligence had a good understanding of the limits of Soviet missile strength: "their Iron Curtain is not so impenetrable as to force us to accept at face value the Kremlin's boasts" about Soviet ICBMs. Describing U.S. strategic nuclear forces, Gilpatric confidently argued that the U.S.'s retaliatory capacity was so enormous that an "enemy move which brought it into play would be an act of self-destruction on his part." Confronted with the U.S.'s second-strike capability, he concluded that the Soviets "will not provoke a major nuclear conflict." 1. For the history of SIOP-62 and background on U.S. nuclear planning during this period, see David A. Rosenberg, "The Origins of Overkill: Nuclear Weapons and American Strategy, 1945-60," International Security 7 (Spring 1983): 3-71. 2. Smith became a lieutenant general, U.S. Air Force, and before retiring in 1991 served as Chief of Staff, Supreme Headquarters Allied Powers Europe (SHAPE) and Deputy Commander, U.S. 
European Command. During the 1990s he was president of the Institute for Defense Analyses. General Smith is a member of the National Security Archive's board of directors. 3. An excised version is published in FRUS, 1961-63, 4. According to a report that Lemnitzer presented to Kennedy a week later, Soviet missiles could be launched in 5 to 10 minutes if missile crews were on alert, electrical equipment was warmed up, and missiles were fueled and topped. If missiles were unfueled, launching would take between 15 minutes and a half hour. If crews were on routine standby, equipment was cold, and missiles were unfueled, launching would take up to three hours. See Memorandum from Lemnitzer to Kennedy, "Reaction Time Required by the Soviets...," 27 September 1961, FRUS 1961-63, vol. 5. For the correspondence on Berlin, during the early fall of 1961, see FRUS 1961-63, Vol. XIV, 444-455, 502-508.
GENESEE COUNTY, Michigan — Nearly 40 years ago, psychologist Philip Zimbardo turned the basement of a Stanford University building into a mock prison and watched it bring out the worst in people. But now, the professor famous for his 1971 “Stanford Prison Experiment” is studying how to bring out people’s best — through a unique program being used in Genesee County schools. The famed author of “The Lucifer Effect” is best known for his controversial prison simulation where students playing “guard” roles became sadistic while student “prisoners” became so emotionally traumatized that Zimbardo ended the project early after six days. He’s spent years researching “evil” behavior but now Zimbardo is studying the opposite topic: What drives people to do good? And the goal of his “Heroes Imagination Project” that is using Flint-area schools as a pilot is seemingly idealistic: To turn regular people into heroes. “We know much about what makes ordinary people turn ‘evil’ ... but we know little about the opposite side of the coin, what makes ordinary people take heroic action,” Zimbardo wrote in an e-mail to The Journal. “Our mission is to seed the earth with everyday heroes. We want to encourage youth, first in Flint, Michigan and then globally, to make a public commitment on line and in classes, to be a hero in waiting.” Self-described “social entrepreneur” Matt Langdon is one of Zimbardo’s foot soldiers fighting the mission — visiting local schools in Grand Blanc, Mt. Morris, Fenton, Lapeer and others — to spread heroism. Langdon’s t-shirt brandishing his business’s name says it all: The Hero Construction Company. His 45-minute sessions teach students how simple tasks such as standing up for people being bullied and being nice can be heroic. His presentations, which include follow up visits and materials for teachers to use in classes, prompted sixth graders at Grand Blanc Middle West to launch a Haiti fundraiser and create a program to reduce student referrals. 
Zimbardo hopes Langdon's program will serve as a hero-making model. "Matt Langdon's classes celebrate what is best in human nature: the heroism of being one's best self in service to humanity," Zimbardo said. "His work is the first curriculum I know of that systematically explores the nature of heroism of everyday, ordinary people ... to making heroism more central in the thinking of students." He's hoping to find the opposite results from what he saw in 1971 — that choosing to do something good over doing nothing is contagious. Langdon, a former residential director at Camp Copneconic in Fenton Township, started presenting his hero program in local schools three years ago and recently shared his ideas with Zimbardo who wanted to study the results. "Do you know any heroes?" Langdon recently asked a group of seventh graders at Grand Blanc Middle School West during one of his sessions. Hands shot up with different answers: Superman. Soldiers. The president. But Langdon pitched them another idea — that they were all on a hero's journey themselves. It's OK to start small. Opening the door for someone. Speaking up. Just being someone's friend. "I saw how easily kids could be influenced by one strong clear message," said Langdon, an Australian native who lives in Brighton and regularly updates Zimbardo on his program. "Everyone can be heroes in their own stories. They don't have to get stuck in a repeating story of what everyone around them is going through or what their parents went through. "You can say it's idealistic but all I'm asking them to do is do something good for someone else. I'm just encouraging them to take action rather than being a bystander." Much of Langdon's program is rooted in Joseph Campbell's book "The Hero's Journey," which studies heroes through time and the five steps each one makes. Langdon leads a discussion about what a hero is.
It is the psychology in play that led Northwest passenger Jasper Schuringa to help prevent an alleged bomber from setting off an explosive device on Christmas day. Or why construction worker Wesley Autrey famously leapt onto the train tracks in New York in 2007 to rescue a man who had fallen in after a seizure. The train was going too fast and Autrey covered himself over the man in between the two rails as five cars rolled over them just inches from his head — earning him his nickname "Subway Superman." Langdon asks why the father made that split-second decision while others stood by — because the man's mindset was one of a "hero in waiting." He even discusses Harry Potter's "hero journey." "I used to think a hero was only someone who had done something really phenomenal like Martin Luther King and Abraham Lincoln stopping slavery," said Grand Blanc seventh grader Matthew McHenry, 12, after one of Langdon's sessions. "But I learned how something so little like opening a door for someone could make a big difference." McHenry joined other classmates in standing up in front of the group and making specific pledges to be a "hero in waiting." Some promised to be nicer to their parents. To listen more. To be better friends. Others vowed to stand up for someone being picked on or to be more tolerant. At the end, they all get "hero" buttons to wear to remind them of their vows and what Langdon compares to putting on their superhero capes. After one session, Grand Blanc sixth graders Andrew Robson and Tyler Butterfield were inspired to launch a program at their school to reduce discipline numbers. The 12-year-olds started a contest that will award students without any referrals or suspensions a pizza party at the end of the year. "We're hoping it will help the teachers, students and school," Butterfield said. The Hero Construction Project has reached more than 5,000 students and charges $250 per class.
Many sessions are paid for by Parent Teacher Organizations and "character building" grants. Groups such as Bucket Fillers Inc. and others have also offered character building programs in Michigan schools. "They don't use any tricks," Grand Blanc Middle West Principal Jeffrey Neall said of the hero program. "It gives students the tools to reach within themselves and be their own hero."
The early detection and aggressive management of sepsis is vital in reducing morbidity and mortality, and the gold standard in detecting bacteraemia in our patients is the blood culture. Contamination of blood culture specimens or poor technique may lead to delay in optimum clinical decisions and management with inappropriate or unnecessary antibiotics. Not to mention wasted expenses. Blood culture bottles contain a soup of nutrients that feed a wide range of bacteria/fungi. Some bottles (including the BD BACTEC Plus media) also contain a resin to neutralise any antibiotics present in the patient's blood in order to promote organism growth. When taking blood cultures, aseptic non-touch technique should be followed. Emphasis should be placed on following your hospital blood culture collection policy without taking shortcuts. The most common cause of false positive results is contamination from the patient's own skin at the collection site. Solutions that can be used for site decontamination include:
- greater than 0.5% alcohol chlorhexidine (drying time 60 seconds)
- 70% isopropyl alcohol (drying time 0 seconds)
- povidone iodine (drying time 2 minutes)
Always allow enough time for antiseptic solution to dry before taking cultures. It is also important to thoroughly clean the tops and necks of culture bottles prior to collection. There are also commercially available one-step applicators containing combinations such as chlorhexidine gluconate and isopropyl alcohol. Studies have found alcohol based products show statistically significant improvement in reducing false positives from skin contamination (Dawson 2013). One randomized study involving 64 interns in ICU/medical wards found that the routine use of sterile gloves resulted in lower contamination rates. Sterile or not, it is important to resist the urge to re-palpate the vein after cleaning the site as this increases contamination risk.
Blood specimens obtained after an antibiotic has been administered may contain enough antibiotic to kill any bacteria collected in the bottle (Halm 2011). Therefore specimens should be collected prior to antibiotics…. with the important caveat that blood collection must not significantly delay time to antibiotic administration. If antibiotics have been administered the cultures should be taken just prior to the next dose for this same reason (Dawson 2013). It is very important to obtain the correct volume of blood. The preferred volume for each blood culture bottle is 10mls (however, you should refer to your individual manufacturer's recommendation). So that means a 20mls collection from a single site divided into each bottle. Underfilling may result in an insufficient 'yield' of microorganisms. Overfilling may result in false positive results. Each blood culture collection should comprise a paired set, each set taken from a different location. In patients with limited peripheral access both sets can be taken from the same site. However the second specimen should be obtained as if from a separate site with new equipment and re-cleaning of the area etc. If an infected central line is suspected (eg cellulitis or discharge from the insertion site or extended use of the line), the second set of cultures may be taken from this site. Blood should be drawn from the distal lumen after decontamination as above. Order of draw: Which bottle should you fill first? Actually it depends on the technique used. The idea is to prevent air being introduced into the ANAEROBIC bottle and altering its environment.
- If a butterfly needle and needle-safety connector device is used, the AEROBIC bottle should be filled first as there will likely be air in the tubing.
- If a needle and syringe is used, the ANAEROBIC bottle should be filled first as any air is likely to be at the top of the syringe and thus introduced into the second bottle.
- If blood is being collected for other tests at the same time, the culture bottles should be filled first to prevent cross contamination from other blood tubes.
Collection of separate samples can be done "back to back". The common practice of separating collection samples by 15 to 30 minutes does not enhance the yield of bacteria and may increase the time to antibiotic administration (Halm 2011). The labelling of the specimen bottles is important. As well as patient details, information should be included describing:
- Source of sample (eg central line, anatomical location)
- Time sample was obtained
Dawson S. Blood cultures. British Journal Of Hospital Medicine (17508460) April 2012;73(4):C53–5. Accessed March 18, 2013.
Denno J, Gannon M. Practical Steps to Lower Blood Culture Contamination Rates in the Emergency Department. Journal of Emergency Nursing. 10.1016/j.jen.2012.03.006.
Flayhart D. Blood cultures and detection of sepsis… …Tips from the clinical experts. MLO: Medical Laboratory Observer. March 2012;44(3):34. Accessed March 18, 2013.
Halm M, Hickson T, Stein D, Tanner M, VandeGraaf S. Blood cultures and central catheters: is the "easiest way" best practice?. American Journal Of Critical Care. July 2011;20(4):335–338. Accessed March 18, 2013.
Kim N, Kim M, Oh M, et al. Effect of routine sterile gloving on contamination rates in blood culture: a cluster randomized trial. Annals Of Internal Medicine February 2011;154(3):145–151. Accessed March 18, 2013.
Original author: Dennis Crunkilton

Shift registers, like counters, are a form of sequential logic. Sequential logic, unlike combinational logic, is not only affected by the present inputs, but also by the prior history. In other words, sequential logic remembers past events.

Shift registers produce a discrete delay of a digital signal or waveform. A waveform synchronized to a clock, a repeating square wave, is delayed by "n" discrete clock times, where "n" is the number of shift register stages. Thus, a four stage shift register delays "data in" by four clocks to "data out". The stages in a shift register are delay stages, typically type "D" Flip-Flops or type "JK" Flip-Flops.

Formerly, very long (several hundred stages) shift registers served as digital memory. This obsolete application is reminiscent of the acoustic mercury delay lines used as early computer memory. Serial data transmission, over a distance of meters to kilometers, uses shift registers to convert parallel data to serial form. Serial data communications replaces many slow parallel data wires with a single serial high speed circuit. Serial data over shorter distances of tens of centimeters uses shift registers to get data into and out of microprocessors. Numerous peripherals, including analog to digital converters, digital to analog converters, display drivers, and memory, use shift registers to reduce the amount of wiring in circuit boards. Some specialized counter circuits actually use shift registers to generate repeating waveforms. Longer shift registers, with the help of feedback, generate patterns so long that they look like random noise, pseudo-noise.

Basic shift registers are classified by structure according to the following types:

- Serial-in/serial-out
- Parallel-in/serial-out
- Serial-in/parallel-out
- Universal parallel-in/parallel-out
- Ring counter

Above we show a block diagram of a serial-in/serial-out shift register, which is 4-stages long. Data at the input will be delayed by four clock periods from the input to the output of the shift register.
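The four-clock delay can be illustrated with a short simulation. This is our own Python sketch, not part of the original text; it models each stage as one position in a FIFO and samples the last-stage output just before each clock edge:

```python
from collections import deque

def siso_shift_register(stages, data_in_stream):
    """Model a serial-in/serial-out shift register as a FIFO of `stages` bits.

    Each clock pulse shifts one bit in from the input stream and one bit
    toward "data out", so the output is the input delayed by `stages` clocks.
    """
    register = deque([0] * stages, maxlen=stages)  # all stages cleared
    data_out = []
    for bit in data_in_stream:
        data_out.append(register[-1])  # last-stage output before the edge
        register.appendleft(bit)       # clock edge: shift one stage to the right
    return data_out

# A 1 presented at "data in" appears at "data out" four clocks later:
print(siso_shift_register(4, [1, 0, 0, 0, 0, 0]))  # [0, 0, 0, 0, 1, 0]
```

The deque's `maxlen` discards the bit falling off the last stage, just as a real register with nothing connected to "data out" would lose it.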
Data at "data in", above, will be present at the Stage A output after the first clock pulse. After the second pulse, stage A data is transferred to the stage B output, and "data in" is transferred to the stage A output. After the third clock, stage C is replaced by stage B; stage B is replaced by stage A; and stage A is replaced by "data in". After the fourth clock, the data originally present at "data in" is at stage D, "output". The "first in" data is "first out" as it is shifted from "data in" to "data out".

Data is loaded into all stages of a parallel-in/serial-out shift register at once. The data is then shifted out via "data out" by clock pulses. Since a 4-stage shift register is shown above, four clock pulses are required to shift out all of the data. In the diagram above, stage D data will be present at the "data out" up until the first clock pulse; stage C data will be present at "data out" between the first clock and the second clock pulse; stage B data will be present between the second clock and the third clock; and stage A data will be present between the third and the fourth clock. After the fourth clock pulse and thereafter, successive bits of "data in" should appear at "data out" of the shift register after a delay of four clock pulses.

If four switches were connected to DA through DD, the status could be read into a microprocessor using only one data pin and a clock pin. Since adding more switches would require no additional pins, this approach looks attractive for many inputs.

Above, four data bits will be shifted in from "data in" by four clock pulses and be available at QA through QD for driving external circuitry such as LEDs, lamps, relay drivers, and horns. After the first clock, the data at "data in" appears at QA. After the second clock, the old QA data appears at QB; QA receives next data from "data in". After the third clock, QB data is at QC. After the fourth clock, QC data is at QD. This stage contains the data first present at "data in".
The shift register should now contain four data bits.

A parallel-in/parallel-out shift register combines the function of the parallel-in, serial-out shift register with the function of the serial-in, parallel-out shift register to yield the universal shift register. The "do anything" shifter comes at a price: the increased number of I/O (Input/Output) pins may reduce the number of stages which can be packaged. Data presented at DA through DD is parallel loaded into the registers. This data at QA through QD may be shifted by the number of pulses presented at the clock input. The shifted data is available at QA through QD. The "mode" input, which may be more than one input, controls parallel loading of data from DA through DD, shifting of data, and the direction of shifting. There are shift registers which will shift data either left or right.

If the serial output of a shift register is connected to the serial input, data can be perpetually shifted around the ring as long as clock pulses are present. If the output is inverted before being fed back as shown above, we do not have to worry about loading the initial data into the "ring counter".

Serial-in, serial-out shift registers delay data by one clock time for each stage. They will store a bit of data for each register. A serial-in, serial-out shift register may be one to 64 bits in length, longer if registers or packages are cascaded.

Below is a single stage shift register receiving data which is not synchronized to the register clock. The "data in" at the D pin of the type D FF (Flip-Flop) does not change levels when the clock changes from low to high. We may want to synchronize the data to a system wide clock in a circuit board to improve the reliability of a digital logic circuit. The obvious point (as compared to the figure below) illustrated above is that whatever "data in" is present at the D pin of a type D FF is transferred from D to output Q at clock time.
Since our example shift register uses positive edge sensitive storage elements, the output Q follows the D input when the clock transitions from low to high as shown by the up arrows on the diagram above. There is no doubt what logic level is present at clock time because the data is stable well before and after the clock edge. This is seldom the case in multi-stage shift registers. But, this was an easy example to start with. We are only concerned with the positive, low to high, clock edge. The falling edge can be ignored. It is very easy to see Q follow D at clock time above. Compare this to the diagram below where the "data in" appears to change with the positive clock edge.

Since "data in" appears to change at clock time t1 above, what does the type D FF see at clock time? The short, oversimplified answer is that it sees the data that was present at D prior to the clock. That is what is transferred to Q at clock time t1. The correct waveform is QC. At t1 Q goes to a zero if it is not already zero. The D register does not see a one until time t2, at which time Q goes high.

Since data present at D is clocked to Q at clock time, and Q cannot change until the next clock time, the D FF delays data by one clock period, provided that the data is already synchronized to the clock. The QA waveform is the same as "data in" with a one clock period delay.

A more detailed look at what the input of the type D Flip-Flop sees at clock time follows. Refer to the figure below. Since "data in" appears to change at clock time (above), we need further information to determine what the D FF sees. If the "data in" is from another shift register stage, another same type D FF, we can draw some conclusions based on data sheet information. Manufacturers of digital logic make available information about their parts in data sheets, formerly only available in a collection called a data book. Data books are still available; though, the manufacturer's web site is the modern source.
The following data was extracted from the CD4006b data sheet for operation at 5VDC, which serves as an example to illustrate timing. tS is the setup time, the time data must be present before clock time. In this case data must be present at D 100ns prior to the clock. Furthermore, the data must be held for hold time tH=60ns after clock time. These two conditions must be met to reliably clock data from D to Q of the Flip-Flop.

There is no problem meeting the setup time of 100ns as the data at D has been there for the whole previous clock period if it comes from another shift register stage. For example, at a clock frequency of 1 MHz, the clock period is 1000 ns, plenty of time. Data will actually be present for 1000 ns prior to the clock, which is much greater than the minimum required tS of 100 ns. The hold time tH=60ns is met because D connected to Q of another stage cannot change any faster than the propagation delay of the previous stage tP=200ns. Hold time is met as long as the propagation delay of the previous D FF is greater than the hold time. Data at D driven by another stage Q will not change any faster than 200ns for the CD4006b. To summarize, output Q follows input D at nearly clock time if Flip-Flops are cascaded into a multi-stage shift register.

Three type D Flip-Flops are cascaded Q to D and the clocks paralleled to form a three stage shift register above. Type JK FFs cascaded Q to J, Q' to K with clocks in parallel yield an alternate form of the shift register above.

A serial-in/serial-out shift register has a clock input, a data input, and a data output from the last stage. In general, the other stage outputs are not available. Otherwise, it would be a serial-in, parallel-out shift register. The waveforms below are applicable to either one of the preceding two versions of the serial-in, serial-out shift register.
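As an aside, the timing-margin arithmetic just described can be restated numerically. This is our own sketch; the variable names are ours, and the nanosecond figures are the CD4006b values quoted above:

```python
# Timing figures quoted above for the CD4006b at 5 VDC (see the data sheet):
t_setup_ns = 100    # data must be stable this long before the clock edge
t_hold_ns = 60      # ...and held this long after the edge
t_prop_ns = 200     # propagation delay of the driving (previous) stage

clock_freq_hz = 1_000_000
clock_period_ns = 1e9 / clock_freq_hz  # 1000 ns at 1 MHz

# In a cascaded register, D has been stable for the whole previous clock period:
setup_margin_ns = clock_period_ns - t_setup_ns
# ...and D cannot change faster than the previous stage's propagation delay:
hold_margin_ns = t_prop_ns - t_hold_ns

print(setup_margin_ns, hold_margin_ns)  # 900.0 140
```

Both margins are comfortably positive, which is the numeric form of the claim that cascaded stages meet setup and hold automatically at this clock rate.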
The three pairs of arrows show that a three stage shift register temporarily stores 3-bits of data and delays it by three clock periods from input to output. At clock time t1 a "data in" of 0 is clocked from D to Q of all three stages. In particular, D of stage A sees a logic 0, which is clocked to QA where it remains until time t2. At clock time t2 a "data in" of 1 is clocked from D to QA. At stages B and C, a 0, fed from the preceding stages, is clocked to QB and QC. At clock time t3 a "data in" of 0 is clocked from D to QA. QA goes low and stays low for the remaining clocks due to "data in" being 0. QB goes high at t3 due to a 1 from the previous stage. QC is still low after t3 due to a low from the previous stage. QC finally goes high at clock t4 due to the high fed to D from the previous stage QB. All earlier stages have 0s shifted into them. And, after the next clock pulse at t5, all logic 1s will have been shifted out, replaced by 0s.

We will take a closer look at the following parts available as integrated circuits, courtesy of Texas Instruments. For complete device data sheets follow the links. The following serial-in/ serial-out shift registers are 4000 series CMOS (Complementary Metal Oxide Semiconductor) family parts. As such, they will accept a VDD, positive power supply, of 3 Volts to 15 Volts. The VSS pin is grounded. The maximum frequency of the shift clock, which varies with VDD, is a few megahertz. See the full data sheet for details.

The 18-bit CD4006b consists of two stages of 4-bits and two more stages of 5-bits with an output tap at 4-bits. Thus, the 5-bit stages could be used as 4-bit shift registers. To get a full 18-bit shift register the output of one shift register must be cascaded to the input of another and so on until all stages create a single shift register as shown below.

A CD4031 64-bit serial-in/ serial-out shift register is shown below. A number of pins are not connected (nc).
Both Q and Q' are available from the 64th stage, actually Q64 and Q'64. There is also a Q64 "delayed" from a half stage which is delayed by half a clock cycle. A major feature is a data selector which is at the data input to the shift register. The "mode control" selects between two inputs: data 1 and data 2. If "mode control" is high, data will be selected from "data 2" for input to the shift register. In the case of "mode control" being logic low, the "data 1" is selected. Examples of this are shown in the two figures below.

The "data 2" above is wired to the Q64 output of the shift register. With "mode control" high, the Q64 output is routed back to the shifter data input D. Data will recirculate from output to input. The data will repeat every 64 clock pulses as shown above. The question that arises is how did this data pattern get into the shift register in the first place?

With "mode control" low, the CD4031 "data 1" is selected for input to the shifter. The output, Q64, is not recirculated because the lower data selector gate is disabled. By disabled we mean that the logic low "mode select", inverted twice to a low at the lower NAND gate, prevents it from passing any signal on the lower pin (data 2) to the gate output. Thus, it is disabled.

A CD4517b dual 64-bit shift register is shown above. Note the taps at the 16th, 32nd, and 48th stages. That means that shift registers of those lengths can be configured from one of the 64-bit shifters. Of course, the 64-bit shifters may be cascaded to yield an 80-bit, 96-bit, 112-bit, or 128-bit shift register. The clocks CLA and CLB need to be paralleled when cascading the two shifters. WEA and WEB are grounded for normal shifting operations. The data inputs to the shift registers A and B are DA and DB respectively.

Suppose that we require a 16-bit shift register. Can this be configured with the CD4517b? How about a 64-bit shift register from the same part? Above we show a CD4517b wired as a 16-bit shift register for section B.
The clock for section B is CLB. The data is clocked in at DB. And the data delayed by 16-clocks is picked off of Q16B. WEB, the write enable for section B, is grounded. Above we also show the same CD4517b wired as a 64-bit shift register for the independent section A. The clock for section A is CLA. The data enters at DA. The data delayed by 64-clock pulses is picked up from Q64A. WEA, the write enable for section A, is grounded.

Parallel-in/ serial-out shift registers do everything that the previous serial-in/ serial-out shift registers do, plus input data to all stages simultaneously. The parallel-in/ serial-out shift register stores data, shifts it on a clock by clock basis, and delays it by the number of stages times the clock period. In addition, parallel-in/ serial-out really means that we can load data in parallel into all stages before any shifting ever begins. This is a way to convert data from a parallel format to a serial format. By parallel format we mean that the data bits are present simultaneously on individual wires, one for each data bit as shown below. By serial format we mean that the data bits are presented sequentially in time on a single wire or circuit as in the case of the "data out" on the block diagram below.

Below we take a close look at the internal details of a 3-stage parallel-in/ serial-out shift register. A stage consists of a type D Flip-Flop for storage, and an AND-OR selector to determine whether data will load in parallel, or shift stored data to the right. In general, these elements will be replicated for the number of stages required. We show three stages due to space limitations. Four, eight or sixteen bits is normal for real parts.

Above we show the parallel load path when SHIFT/LD' is logic low. The upper NAND gates serving DA DB DC are enabled, passing data to the D inputs of type D Flip-Flops QA QB QC respectively. At the next positive going clock edge, the data will be clocked from D to Q of the three FFs.
Three bits of data will load into QA QB QC at the same time. The type of parallel load just described, where the data loads on a clock pulse, is known as synchronous load because the loading of data is synchronized to the clock. This needs to be differentiated from asynchronous load, where loading is controlled by the preset and clear pins of the Flip-Flops, which does not require the clock. Only one of these load methods is used within an individual device, the synchronous load being more common in newer devices.

The shift path is shown above when SHIFT/LD' is logic high. The lower AND gates of the pairs feeding the OR gate are enabled, giving us a shift register connection of SI to DA, QA to DB, QB to DC, QC to SO. Clock pulses will cause data to be right shifted out to SO on successive pulses.

The waveforms below show both parallel loading of three bits of data and serial shifting of this data. Parallel data at DA DB DC is converted to serial data at SO.

What we previously described with words for parallel loading and shifting is now set down as waveforms above. As an example we present 101 to the parallel inputs DA DB DC. Next, the SHIFT/LD' goes low, enabling loading of data as opposed to shifting of data. It needs to be low a short time before and after the clock pulse due to setup and hold requirements. It is considerably wider than it has to be. Though, with synchronous logic it is convenient to make it wide. We could have made the active low SHIFT/LD' almost two clocks wide, low almost a clock before t1 and back high just before t3. The important factor is that it needs to be low around clock time t1 to enable parallel loading of the data by the clock.

Note that at t1 the data 101 at DA DB DC is clocked from D to Q of the Flip-Flops as shown at QA QB QC at time t1. This is the parallel loading of the data synchronous with the clock. Now that the data is loaded, we may shift it provided that SHIFT/LD' is high to enable shifting, which it is prior to t2.
At t2 the data 0 at QC is shifted out of SO, which is the same as the QC waveform. It is either shifted into another integrated circuit, or lost if there is nothing connected to SO. The data at QB, a 0, is shifted to QC. The 1 at QA is shifted into QB. With "data in" a 0, QA becomes 0. After t2, QA QB QC = 010. After t3, QA QB QC = 001. This 1, which was originally present at QA after t1, is now present at SO and QC. The last data bit is shifted out to an external integrated circuit if it exists. After t4 all data from the parallel load is gone. At clock t5 we show the shifting in of a data 1 present on the SI, serial input.

Why provide SI and SO pins on a shift register? These connections allow us to cascade shift register stages to provide larger shifters than are available in a single IC (Integrated Circuit) package. They also allow serial connections to and from other ICs like microprocessors.

Let's take a closer look at parallel-in/ serial-out shift registers available as integrated circuits, courtesy of Texas Instruments. For complete device data sheets follow the links.

The SN74ALS166 shown above is the closest match of an actual part to the previous parallel-in/ serial-out shifter figures. Let us note the minor changes to our figure above. First of all, there are 8-stages. We only show three. All 8-stages are shown on the data sheet available at the link above. The manufacturer labels the data inputs A, B, C, and so on to H. The SHIFT/LOAD control is called SH/LD'. It is abbreviated from our previous terminology, but works the same: parallel load if low, shift if high. The shift input (serial data in) is SER on the ALS166 instead of SI. The clock CLK is controlled by an inhibit signal, CLKINH. If CLKINH is high, the clock is inhibited, or disabled. Otherwise, this "real part" is the same as what we have looked at in detail.

Above is the ANSI (American National Standards Institute) symbol for the SN74ALS166 as provided on the data sheet.
Once we know how the part operates, it is convenient to hide the details within a symbol. There are many general forms of symbols. The advantage of the ANSI symbol is that the labels provide hints about how the part operates.

The large notched block at the top of the '74ALS166 is the control section of the ANSI symbol. There is a reset indicated by R. There are three control signals: M1 (Shift), M2 (Load), and C3/1 (arrow) (inhibited clock). The clock has two functions. First, C3 for shifting parallel data wherever a prefix of 3 appears. Second, whenever M1 is asserted, as indicated by the 1 of C3/1 (arrow), the data is shifted as indicated by the right pointing arrow. The slash (/) is a separator between these two functions.

The 8-shift stages, as indicated by title SRG8, are identified by the external inputs A, B, C, to H. The internal 2, 3D indicates that data, D, is controlled by M2 [Load] and C3 clock. In this case, we can conclude that the parallel data is loaded synchronously with the clock C3. The upper stage at A is a wider block than the others to accommodate the input SER. The legend 1, 3D implies that SER is controlled by M1 [Shift] and C3 clock. Thus, we expect to clock in data at SER when shifting as opposed to parallel loading.

The ANSI/IEEE basic gate rectangular symbols are provided above for comparison to the more familiar shape symbols so that we may decipher the meaning of the symbology associated with the CLKINH and CLK pins on the previous ANSI SN74ALS166 symbol. The CLK and CLKINH feed an OR gate on the SN74ALS166 ANSI symbol. OR is indicated by => on the rectangular inset symbol. The long triangle at the output indicates a clock. If there were a bubble with the arrow, this would have indicated shift on the negative clock edge (high to low). Since there is no bubble with the clock arrow, the register shifts on the positive (low to high transition) clock edge.
The long arrow after the legend C3/1, pointing right, indicates shift right, which is down the symbol.

Part of the internal logic of the SN74ALS165 parallel-in/ serial-out, asynchronous load shift register is reproduced from the data sheet above. See the link at the beginning of this section for the full diagram. We have not looked at asynchronous loading of data up to this point. First of all, the loading is accomplished by application of appropriate signals to the Set (preset) and Reset (clear) inputs of the Flip-Flops. The upper NAND gates feed the Set pins of the FFs and also cascade into the lower NAND gates feeding the Reset pins of the FFs. The lower NAND gate inverts the signal in going from the Set pin to the Reset pin.

First, SH/LD' must be pulled low to enable the upper and lower NAND gates. If SH/LD' were at a logic high instead, the inverter feeding a logic low to all NAND gates would force a high out, releasing the "active low" Set and Reset pins of all FFs. There would be no possibility of loading the FFs. With SH/LD' held low, we can feed, for example, a data 1 to parallel input A, which inverts to a zero at the upper NAND gate output, setting FF QA to a 1. The 0 at the Set pin is fed to the lower NAND gate where it is inverted to a 1, releasing the Reset pin of QA. Thus, a data A=1 sets QA=1. Since none of this required the clock, the loading is asynchronous with respect to the clock. We use an asynchronous loading shift register if we cannot wait for a clock to parallel load data, or if it is inconvenient to generate a single clock pulse.

The only difference in feeding a data 0 to parallel input A is that it inverts to a 1 out of the upper gate, releasing Set. This 1 at Set is inverted to a 0 at the lower gate, pulling Reset to a low, which resets QA=0.

The ANSI symbol for the SN74ALS165 above has two internal controls: C1 [LOAD] and C2 clock from the OR function of (CLKINH, CLK). SRG8 says 8-stage shifter.
The arrow after C2 indicates shifting right or down. SER input is a function of the clock as indicated by internal label 2D. The parallel data inputs A, B, C to H are a function of C1 [LOAD], indicated by internal label 1D. C1 is asserted when SH/LD'=0 due to the half-arrow inverter at the input. Compare this to the control of the parallel data inputs by the clock of the previous synchronous ANSI SN74ALS166. Note the differences in the ANSI data labels.

On the CD4014B above, M1 is asserted when LD/SH'=0. M2 is asserted when LD/SH'=1. Clock C3/1 is used for parallel loading data at 2, 3D when M2 is active, as indicated by the 2,3 prefix labels. Pins P3 to P7 are understood to have the same internal 2,3 prefix labels as P2 and P8. At SER, the 1,3D prefix implies that M1 and clock C3 are necessary to input serial data. Right shifting takes place when M1 is active, as indicated by the 1 in the C3/1 arrow.

The CD4021B is a similar part except for asynchronous parallel loading of data, as implied by the lack of any 2 prefix in the data label 1D for pins P1, P2, to P8. Of course, prefix 2 in label 2D at input SER says that data is clocked into this pin. The OR gate inset shows that the clock is controlled by LD/SH'.

The above SN74LS674 internal label SRG 16 indicates a 16-bit shift register. The MODE input to the control section at the top of the symbol is labeled 1,2 M3. Internal M3 is a function of input MODE and G1 and G2 as indicated by the 1,2 preceding M3. The base label G indicates an AND function of any such G inputs. Input R/W' is internally labeled G1/2 EN. This is an enable EN (controlled by G1 AND G2) for tristate devices used elsewhere in the symbol. We note that CS' (pin 1) is internal G2. Chip select CS' also is ANDed with the input CLK to give internal clock C4. The bubble within the clock arrow indicates that activity is on the negative (high to low transition) clock edge. The slash (/) is a separator implying two functions for the clock.
Before the slash, C4 indicates control of anything with a prefix of 4. After the slash, the 3' (arrow) indicates shifting. The 3' of C4/3' implies shifting when M3 is de-asserted (MODE=0). The long arrow indicates shift right (down).

Moving down below the control section to the data section, we have external inputs P0-P15, pins (7-11, 13-23). The prefix 3,4 of internal label 3,4D indicates that M3 and the clock C4 control loading of parallel data. The D stands for Data. This label is assumed to apply to all the parallel inputs, though not explicitly written out. Locate the label 3',4D on the right of the P0 (pin 7) stage. The complemented-3 indicates that M3=MODE=0 inputs (shifts) SER/Q15 (pin 5) at clock time, (4 of 3',4D) corresponding to clock C4. In other words, with MODE=0, we shift data into Q0 from the serial input (pin 6). All other stages shift right (down) at clock time.

Moving to the bottom of the symbol, the triangle pointing right indicates a buffer between Q and the output pin. The triangle pointing down indicates a tri-state device. We previously stated that the tri-state is controlled by enable EN, which is actually G1 AND G2 from the control section. If R/W'=0, the tri-state is disabled, and we can shift data into Q0 via SER (pin 6), a detail we omitted above. We actually need MODE=0, R/W'=0, CS'=0. The internal logic of the SN74LS674 and a table summarizing the operation of the control signals is available in the link in the bullet list, top of section.

If R/W'=1, the tri-state is enabled, Q15 shifts out SER/Q15 (pin 6) and recirculates to the Q0 stage via the right hand wire to 3',4D. We have assumed that CS' was low, giving us clock C4/3' and G2 to ENable the tri-state.

An application of a parallel-in/ serial-out shift register is to read data into a microprocessor. The alarm above is controlled by a remote keypad. The alarm box supplies +5V and ground to the remote keypad to power it.
The alarm reads the remote keypad every few tens of milliseconds by sending shift clocks to the keypad, which returns serial data showing the status of the keys via a parallel-in/ serial-out shift register. Thus, we read nine key switches with four wires. How many wires would be required if we had to run a circuit for each of the nine keys?

A practical application of a parallel-in/ serial-out shift register is to read many switch closures into a microprocessor on just a few pins. Some low end microprocessors only have 6 I/O (Input/Output) pins available on an 8-pin package. Or, we may have used most of the pins on an 84-pin package. We may want to reduce the number of wires running around a circuit board, machine, vehicle, or building. This will increase the reliability of our system. It has been reported that manufacturers who have reduced the number of wires in an automobile produce a more reliable product. In any event, only three microprocessor pins are required to read in 8-bits of data from the switches in the figure above.

We have chosen an asynchronous loading device, the CD4021B, because it is easier to control the loading of data without having to generate a single parallel load clock. The parallel data inputs of the shift register are pulled up to +5V with a resistor on each input. If all switches are open, all 1s will be loaded into the shift register when the microprocessor moves the LD/SH' line from low to high, then back low in anticipation of shifting. Any switch closures will apply logic 0s to the corresponding parallel inputs.

The data pattern at P1-P8 will be parallel loaded by the LD/SH'=1 generated by the microprocessor software. The microprocessor generates shift pulses and reads a data bit for each of the 8-bits. This process may be performed totally with software, or larger microprocessors may have one or more serial interfaces to do the task more quickly with hardware.
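In software-only form, the read sequence looks roughly like the sketch below. This is our own illustration: it simulates the shift register in Python rather than toggling real GPIO pins, and it assumes the last parallel stage is shifted out first toward the serial output, as in the load-then-shift behavior described above:

```python
def read_switches(switch_closed):
    """Sketch of reading 8 pulled-up switch inputs through a
    parallel-in/serial-out register, simulated rather than on hardware.

    `switch_closed` lists switches 1..8; a closed switch grounds its
    pulled-up input, so it loads as 0, while open switches load as 1.
    """
    # LD/SH' pulsed: asynchronous parallel load of all eight stages
    register = [0 if closed else 1 for closed in switch_closed]
    bits = []
    for _ in range(8):
        bits.append(register[-1])        # read a bit on the serial-data line...
        register = [0] + register[:-1]   # ...then pulse the shift clock
    return bits  # last stage first, first stage last

# Only switch 3 closed: every bit reads 1 except that position (6th out):
print(read_switches([False, False, True] + [False] * 5))
# [1, 1, 1, 1, 1, 0, 1, 1]
```

On a real microcontroller the list operations would be replaced by writes to the LD/SH' and clock pins and reads of the serial-data pin, but the loop structure is the same.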
With LD/SH'=0, the microprocessor generates a 0 to 1 transition on the shift clock line, then reads a data bit on the serial data in line. This is repeated for all 8-bits. The SER line of the shift register may be driven by another identical CD4021B circuit if more switch contacts need to be read, in which case the microprocessor generates 16 shift pulses. More likely, it will be driven by something else compatible with this serial data format, for example, an analog to digital converter, a temperature sensor, a keyboard scanner, or a serial read-only memory. As for the switch closures, they may be limit switches on the carriage of a machine, an over-temperature sensor, a magnetic reed switch, a door or window switch, an air or water pressure switch, or a solid state optical interrupter.

A serial-in/parallel-out shift register is similar to the serial-in/ serial-out shift register in that it shifts data into internal storage elements and shifts data out at the serial-out, data-out, pin. It is different in that it makes all the internal stages available as outputs. Therefore, a serial-in/parallel-out shift register converts data from serial format to parallel format. If four data bits are shifted in by four clock pulses via a single wire at data-in, below, the data becomes available simultaneously on the four outputs QA to QD after the fourth clock pulse.

The practical application of the serial-in/parallel-out shift register is to convert data from serial format on a single wire to parallel format on multiple wires. Perhaps, we will illuminate four LEDs (Light Emitting Diodes) with the four outputs (QA QB QC QD).

The above details of the serial-in/parallel-out shift register are fairly simple. It looks like a serial-in/ serial-out shift register with taps added to each stage output. Serial data shifts in at SI (Serial Input). After a number of clocks equal to the number of stages, the first data bit in appears at SO (QD) in the above figure.
In general, there is no SO pin. The last stage (QD above) serves as SO and is cascaded to the next package if it exists. If a serial-in/parallel-out shift register is so similar to a serial-in/ serial-out shift register, why do manufacturers bother to offer both types? Why not just offer the serial-in/parallel-out shift register? They actually only offer the serial-in/parallel-out shift register, as long as it has no more than 8-bits. Note that serial-in/ serial-out shift registers come in bigger than 8-bit lengths of 18 to 64-bits. It is not practical to offer a 64-bit serial-in/parallel-out shift register requiring that many output pins. See the waveforms below for the above shift register.

The shift register has been cleared prior to any data by CLR', an active low signal, which clears all type D Flip-Flops within the shift register. Note the serial data 1011 pattern presented at the SI input. This data is synchronized with the clock CLK. This would be the case if it is being shifted in from something like another shift register, for example, a parallel-in/ serial-out shift register (not shown here). On the first clock at t1, the data 1 at SI is shifted from D to Q of the first shift register stage. After t2 this first data bit is at QB. After t3 it is at QC. After t4 it is at QD. Four clock pulses have shifted the first data bit all the way to the last stage QD. The second data bit, a 0, is at QC after the 4th clock. The third data bit, a 1, is at QB. The fourth data bit, another 1, is at QA. Thus, the serial data input pattern 1011 is contained in (QD QC QB QA). It is now available on the four outputs.

It will be available on the four outputs from just after clock t4 to just before t5. This parallel data must be used or stored between these two times, or it will be lost due to shifting out the QD stage on following clocks t5 to t8 as shown above.
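The 1011 example above can be checked with a quick model (our own sketch, sampling the stage outputs after the fourth clock):

```python
def sipo(serial_bits):
    """Serial-in/parallel-out: after N clocks, all N serial bits are
    available at once on the stage outputs; QD holds the first bit in."""
    q = [0, 0, 0, 0]           # QA QB QC QD, cleared by CLR'
    for bit in serial_bits:    # one positive clock edge per bit
        q = [bit] + q[:-1]     # SI -> QA, QA -> QB, QB -> QC, QC -> QD
    return q

qa, qb, qc, qd = sipo([1, 0, 1, 1])   # serial pattern at SI: 1, 0, 1, 1
print(qd, qc, qb, qa)                 # 1 0 1 1 -- pattern held in (QD QC QB QA)
```

As in the waveforms, the parallel result is only valid at this instant; a fifth clock would shift the first bit out of QD.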
Let's take a closer look at Serial-in/ parallel-out shift registers available as integrated circuits, courtesy of Texas Instruments. For complete device data sheets, follow the links. The 74ALS164A is almost identical to our prior diagram with the exception of the two serial inputs A and B. The unused input should be pulled high to enable the other input. We do not show all the stages above. However, all the outputs are shown on the ANSI symbol below, along with the pin numbers. The CLK input to the control section of the above ANSI symbol has two internal functions: C1, control of anything with a prefix of 1. This would be clocking in of data at 1D. The second function, the arrow after the slash (/), is right (down) shifting of data within the shift register. The eight outputs are available to the right of the eight registers below the control section. The first stage is wider than the others to accommodate the A&B input. The above internal logic diagram is adapted from the TI (Texas Instruments) data sheet for the 74AHC594. The type "D" FFs in the top row comprise a serial-in/ parallel-out shift register. This section works like the previously described devices. The outputs (QA' QB' to QH' ) of the shift register half of the device feed the type "D" FFs in the lower half in parallel. QH' (pin 9) is shifted out to any optional cascaded device package. A single positive clock edge at RCLK will transfer the data from D to Q of the lower FFs. All 8-bits transfer in parallel to the output register (a collection of storage elements). The purpose of the output register is to maintain a constant data output while new data is being shifted into the upper shift register section. This is necessary if the outputs drive relays, valves, motors, solenoids, horns, or buzzers. This feature may not be necessary when driving LEDs as long as flicker during shifting is not a problem.
Note that the 74AHC594 has separate clocks for the shift register (SRCLK) and the output register (RCLK). Also, the shifter may be cleared by SRCLR', and the output register by RCLR'. It is desirable to put the outputs in a known state at power-on, in particular, if driving relays, motors, etc. The waveforms below illustrate shifting and latching of data. The above waveforms show shifting of 4-bits of data into the first four stages of 74AHC594, then the parallel transfer to the output register. In actual fact, the 74AHC594 is an 8-bit shift register, and it would take 8-clocks to shift in 8-bits of data, which would be the normal mode of operation. However, the 4-bits we show saves space and adequately illustrates the operation. We clear the shift register half a clock prior to t0 with SRCLR'=0. SRCLR' must be released back high prior to shifting. Just prior to t0 the output register is cleared by RCLR'=0. It, too, is released (RCLR'=1). Serial data 1011 is presented at the SI pin between clocks t0 and t4. It is shifted in by clocks t1 t2 t3 t4 appearing at internal shift stages QA' QB' QC' QD'. This data is present at these stages between t4 and t5. After t5 the desired data (1011) will be unavailable on these internal shifter stages. Between t4 and t5 we apply a positive going RCLK transferring data 1011 to register outputs QA QB QC QD. This data will be frozen here as more data (0s) shifts in during the succeeding SRCLKs (t5 to t8). There will not be a change in data here until another RCLK is applied. The 74AHC595 is identical to the '594 except that the RCLR' is replaced by an OE' enabling a tri-state buffer at the output of each of the eight output register bits. Though the output register cannot be cleared, the outputs may be disconnected by OE'=1. This would allow external pull-up or pull-down resistors to force any relay, solenoid, or valve drivers to a known state during a system power-up.
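The interplay of the shift register and the output register can be sketched as follows. The class and method names below are our own, and the model is purely behavioral; the point is that SRCLK activity alone never disturbs the output pins.

```python
# Toy model of a '594-style shifter with a separate latched output
# register.  Clears, tri-stating, and timing are not modeled.

class HC594:
    def __init__(self, stages=8):
        self.shift = [0] * stages   # internal stages QA'..QH'
        self.out = [0] * stages     # latched pins QA..QH

    def srclk(self, ser):           # rising edge on SRCLK shifts SER in
        self.shift = [ser] + self.shift[:-1]

    def rclk(self):                 # rising edge on RCLK latches outputs
        self.out = list(self.shift)

dev = HC594(4)
for b in [1, 0, 1, 1]:
    dev.srclk(b)
dev.rclk()
latched = list(dev.out)             # (QA..QD) = (1,1,0,1), i.e. 1011 read QD..QA
for _ in range(4):                  # more shifting (0s) ...
    dev.srclk(0)
assert dev.out == latched           # ...does not disturb the latched outputs
```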
Once the system is powered-up and, say, a microprocessor has shifted and latched data into the '595, the output enable could be asserted (OE'=0) to drive the relays, solenoids, and valves with valid data, but, not before that time. Above are the proposed ANSI symbols for these devices. C3 clocks data into the serial input (external SER) as indicated by the 3 prefix of 2,3D. The arrow after C3/ indicates shifting right (down) of the shift register, the 8-stages to the left of the '595 symbol below the control section. The 2 prefix of 2,3D and 2D indicates that these stages can be reset by R2 (external SRCLR'). The 1 prefix of 1,4D on the '594 indicates that R1 (external RCLR') may reset the output register, which is to the right of the shift register section. The '595, which has an EN at external OE', cannot reset the output register. But, the EN enables tristate (inverted triangle) output buffers. The right pointing triangle of both the '594 and '595 indicates internal buffering. Both the '594 and '595 output registers are clocked by C4 as indicated by 4 of 1,4D and 4D respectively. The CD4094B is a 3 to 15VDC capable latching shift register alternative to the previous 74AHC594 devices. CLOCK, C1, shifts data in at SERIAL IN as implied by the 1 prefix of 1D. It is also the clock of the right shifting shift register (left half of the symbol body) as indicated by the /(right-arrow) of C1/(arrow) at the CLOCK input. STROBE, C2 is the clock for the 8-bit output register to the right of the symbol body. The 2 of 2D indicates that C2 is the clock for the output register. The inverted triangle in the output latch indicates that the output is tristated, being enabled by EN3. The 3 preceding the inverted triangle and the 3 of EN3 are often omitted, as any enable (EN) is understood to control the tristate outputs. QS and QS' are non-latched outputs of the shift register stage. QS could be cascaded to SERIAL IN of a succeeding device.
A real-world application of the serial-in/ parallel-out shift register is to output data from a microprocessor to a remote panel indicator. Or, another remote output device which accepts serial format data. The figure "Alarm with remote key pad" is repeated here from the parallel-in/ serial-out section with the addition of the remote display. Thus, we can display, for example, the status of the alarm loops connected to the main alarm box. If the Alarm detects an open window, it can send serial data to the remote display to let us know. Both the keypad and the display would likely be contained within the same remote enclosure, separate from the main alarm box. However, we will only look at the display panel in this section. If the display were on the same board as the Alarm, we could just run eight wires to the eight LEDs along with two wires for power and ground. These eight wires are much less desirable on a long run to a remote panel. Using shift registers, we only need to run five wires: clock, serial data, a strobe, power, and ground. If the panel were just a few inches away from the main board, it might still be desirable to cut down on the number of wires in a connecting cable to improve reliability. Also, we sometimes use up most of the available pins on a microprocessor and need to use serial techniques to expand the number of outputs. Some integrated circuit output devices, such as Digital to Analog converters, contain serial-in/ parallel-out shift registers to receive data from microprocessors. The techniques illustrated here are applicable to those parts. We have chosen the 74AHC594 serial-in/ parallel-out shift register with output register; though, it requires an extra pin, RCLK, to parallel load the shifted-in data to the output pins. This extra pin prevents the outputs from changing while data is shifting in. This is not much of a problem for LEDs. But, it would be a problem if driving relays, valves, motors, etc.
Code executed within the microprocessor would start with 8-bits of data to be output. One bit would be output on the "Serial data out" pin, driving SER of the remote 74AHC594. Next, the microprocessor generates a low to high transition on "Shift clock", driving SRCLK of the '594 shift register. This positive clock shifts the data bit at SER from "D" to "Q" of the first shift register stage. This has no effect on the QA LED at this time because of the internal 8-bit output register between the shift register and the output pins (QA to QH). Finally, "Shift clock" is pulled back low by the microprocessor. This completes the shifting of one bit into the '594. The above procedure is repeated seven more times to complete the shifting of 8-bits of data from the microprocessor into the 74AHC594 serial-in/ parallel-out shift register. To transfer the 8-bits of data within the internal '594 shift register to the output requires that the microprocessor generate a low to high transition on RCLK, the output register clock. This applies new data to the LEDs. The RCLK needs to be pulled back low in anticipation of the next 8-bit transfer of data. The data present at the output of the '594 will remain until the process in the above two paragraphs is repeated for a new 8-bits of data. In particular, new data can be shifted into the '594 internal shift register without affecting the LEDs. The LEDs will only be updated with new data with the application of the RCLK rising edge. What if we need to drive more than eight LEDs? Simply cascade another 74AHC594 SER pin to the QH' of the existing shifter. Parallel the SRCLK and RCLK pins. The microprocessor would need to transfer 16-bits of data with 16-clocks before generating an RCLK feeding both devices. The discrete LED indicators, which we show, could be 7-segment LEDs. Though, there are LSI (Large Scale Integration) devices capable of driving several 7-segment digits.
This device accepts data from a microprocessor in a serial format, driving more LED segments than it has pins by multiplexing the LEDs. For example, see the link below for the MAX6955. The purpose of the parallel-in/ parallel-out shift register is to take in parallel data, shift it, then output it as shown below. A universal shift register is a do-everything device in addition to the parallel-in/ parallel-out function. Above we apply four bits of data to a parallel-in/ parallel-out shift register at DA DB DC DD. The mode control, which may be multiple inputs, controls parallel loading vs shifting. The mode control may also control the direction of shifting in some real devices. The data will be shifted one bit position for each clock pulse. The shifted data is available at the outputs QA QB QC QD. The "data in" and "data out" are provided for cascading of multiple stages. Though, above, we can only cascade data for right shifting. We could accommodate cascading of left-shift data by adding a pair of left pointing signals, "data in" and "data out", above. The internal details of a right shifting parallel-in/ parallel-out shift register are shown below. The tri-state buffers are not strictly necessary to the parallel-in/ parallel-out shift register, but are part of the real-world device shown below. The 74LS395 so closely matches our concept of a hypothetical right shifting parallel-in/ parallel-out shift register that we use an overly simplified version of the data sheet details above. See the link to the full data sheet for more details, later in this chapter. LD/SH' controls the AND-OR multiplexer at the data input to the FF's. If LD/SH'=1, the upper four AND gates are enabled allowing application of parallel inputs DA DB DC DD to the four FF data inputs. Note the inverter bubble at the clock input of the four FFs. This indicates that the 74LS395 clocks data on the negative going clock, which is the high to low transition.
The four bits of data will be clocked in parallel from DA DB DC DD to QA QB QC QD at the next negative going clock. In this "real part", OC' must be low if the data needs to be available at the actual output pins as opposed to only on the internal FFs. The previously loaded data may be shifted right by one bit position if LD/SH'=0 for the succeeding negative going clock edges. Four clocks would shift the data entirely out of our 4-bit shift register. The data would be lost unless our device was cascaded from QD' to SER of another device. Above, a data pattern is presented to inputs DA DB DC DD. The pattern is loaded to QA QB QC QD. Then it is shifted one bit to the right. The incoming data is indicated by X, meaning that we do not know what it is. If the input (SER) were grounded, for example, we would know what data (0) was shifted in. Also shown is right shifting by two positions, requiring two clocks. The above figure serves as a reference for the hardware involved in right shifting of data. It is too simple to even bother with this figure, except for comparison to more complex figures to follow. Right shifting of data is provided above for reference to the previous right shifter. If we need to shift left, the FFs need to be rewired. Compare to the previous right shifter. Also, SI and SO have been reversed. SI shifts to QC. QC shifts to QB. QB shifts to QA. QA leaves on the SO connection, where it could cascade to another shifter SI. This left shift sequence is backwards from the right shift sequence. Above we shift the same data pattern left by one bit. There is one problem with the "shift left" figure above. There is no market for it. Nobody manufactures a shift-left part. A "real device" which shifts one direction can be wired externally to shift the other direction. Or, should we say there is no left or right in the context of a device which shifts in only one direction.
However, there is a market for a device which will shift left or right on command by a control line. Of course, left and right are valid in that context. What we have above is a hypothetical shift register capable of shifting either direction under the control of L'/R. It is set up with L'/R=1 to shift the normal direction, right. L'/R=1 enables the multiplexer AND gates labeled R. This allows data to follow the path illustrated by the arrows, when a clock is applied. The connection path is the same as the "too simple" "shift right" figure above. Data shifts in at SR, to QA, to QB, to QC, where it leaves at SR cascade. This pin could drive SR of another device to the right. What if we change L'/R to L'/R=0? With L'/R=0, the multiplexer AND gates labeled L are enabled, yielding a path, shown by the arrows, the same as the above "shift left" figure. Data shifts in at SL, to QC, to QB, to QA, where it leaves at SL cascade. This pin could drive SL of another device to the left. The prime virtue of the above two figures illustrating the "shift left/ right register" is simplicity. The operation of the left right control L'/R=0 is easy to follow. A commercial part needs the parallel data loading implied by the section title. This appears in the figure below. Now that we can shift both left and right via L'/R, let us add SH/LD', shift/ load, and the AND gates labeled "load" to provide for parallel loading of data from inputs DA DB DC. When SH/LD'=0, AND gates R and L are disabled, AND gates "load" are enabled to pass data DA DB DC to the FF data inputs. The next clock CLK will clock the data to QA QB QC. As long as the same data is present it will be re-loaded on succeeding clocks. However, data present for only one clock will be lost from the outputs when it is no longer present on the data inputs. One solution is to load the data on one clock, then proceed to shift on the next four clocks.
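The multiplexer behavior above can be summarized in a few lines. This is a sketch of the hypothetical 3-stage register in the figures, not a real part; 'load' stands for SH/LD'=0, 'right' for L'/R=1, and 'left' for L'/R=0.

```python
# One clock of the hypothetical 3-stage shift-left/right register.

def step(q, mode, sr=0, sl=0, d=(0, 0, 0)):
    qa, qb, qc = q
    if mode == 'load':               # DA DB DC -> QA QB QC
        return tuple(d)
    if mode == 'right':              # SR -> QA -> QB -> QC (-> SR cascade)
        return (sr, qa, qb)
    return (qb, qc, sl)              # left: (SL cascade <-) QA <- QB <- QC <- SL

q = step((0, 0, 0), 'load', d=(1, 0, 1))   # parallel load
q = step(q, 'right', sr=0)                 # one right shift
assert q == (0, 1, 0)
q = step(q, 'left', sl=1)                  # one left shift restores the pattern
assert q == (1, 0, 1)
```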
This problem is remedied in the 74ALS299 by the addition of another AND gate to the multiplexer. If SH/LD' is changed to SH/LD'=1, the AND gates labeled "load" are disabled, allowing the left/ right control L'/R to set the direction of shift on the L or R AND gates. Shifting is as in the previous figures. The only thing needed to produce a viable integrated device is to add the fourth AND gate to the multiplexer as alluded to for the 74ALS299. This is shown in the next section for that part. Let's take a closer look at parallel-in/ parallel-out shift registers available as integrated circuits, courtesy of Texas Instruments. For complete device data sheets, follow the links. We have already looked at the internal details of the SN74LS395A, see above previous figure, 74LS395 parallel-in/ parallel-out shift register with tri-state output. Directly above is the ANSI symbol for the 74LS395. Why only 4-bits, as indicated by SRG4 above? Having both parallel inputs, and parallel outputs, in addition to control and power pins, does not allow for any more I/O (Input/Output) bits in a 16-pin DIP (Dual Inline Package). R indicates that the shift register stages are reset by input CLR' (active low- inverting half arrow at input) of the control section at the top of the symbol. OC', when low, (invert arrow again) will enable (EN4) the four tristate output buffers (QA QB QC QD) in the data section. Load/shift' (LD/SH') at pin (7) corresponds to internals M1 (load) and M2 (shift). Look for prefixes of 1 and 2 in the rest of the symbol to ascertain what is controlled by these. The negative edge sensitive clock (indicated by the invert arrow at pin-10) C3/2 has two functions. First, the 3 of C3/2 affects any input having a prefix of 3, say 2,3D or 1,3D in the data section. This would be parallel load at A, B, C, D attributed to M1 and C3 for 1,3D. Second, 2 of C3/2-right-arrow indicates data clocking wherever 2 appears in a prefix (2,3D at pin-2).
Thus we have clocking of data at SER into QA with mode 2. The right arrow after C3/2 accounts for shifting at internal shift register stages QA QB QC QD. The right pointing triangles indicate buffering; the inverted triangle indicates tri-state, controlled by the EN4. Note, all the 4s in the symbol associated with the EN are frequently omitted. Stages QB QC are understood to have the same attributes as QD. QD' cascades to the next package's SER to the right. The table above, condensed from the '299 data sheet, summarizes the operation of the 74ALS299 universal shift/ storage register. Follow the '299 link above for full details. The Multiplexer gates R, L, load operate as in the previous "shift left/ right register" figures. The difference is that the mode inputs S1 and S0 select shift left, shift right, and load with mode set to S1 S0 = 01, 10, and 11 respectively, as shown in the table, enabling multiplexer gates L, R, and load respectively. See table. A minor difference is the parallel load path from the tri-state outputs. Actually the tri-state buffers are (must be) disabled by S1 S0 = 11 to float the I/O bus for use as inputs. A bus is a collection of similar signals. The inputs are applied to A, B through H (same pins as QA, QB, through QH) and routed to the load gate in the multiplexers, and on to the D inputs of the FFs. Data is parallel loaded on a clock pulse. The one new multiplexer gate is the AND gate labeled hold, enabled by S1 S0 = 00. The hold gate enables a path from the Q output of the FF back to the hold gate, to the D input of the same FF. The result is that with mode S1 S0 = 00, the output is continuously re-loaded with each new clock pulse. Thus, data is held. This is summarized in the table. To read data from outputs QA, QB, through QH, the tri-state buffers must be enabled by OE2' OE1' = 00 and mode S1 S0 = 00, 01, or 10. That is, mode is anything except load. See second table.
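The '299 mode table can be captured functionally as below. This is a behavioral sketch only: it ignores the tri-state buffers and clock details, and the function name and argument order are our own.

```python
# One clock of a toy '299 model: S1 S0 selects hold, shift left,
# shift right, or load, per the mode table above.

def step299(q, s1, s0, sr=0, sl=0, data=None):
    if (s1, s0) == (0, 0):           # hold: each Q fed back to its own D
        return list(q)
    if (s1, s0) == (1, 0):           # shift right: SR -> QA -> ... -> QH
        return [sr] + q[:-1]
    if (s1, s0) == (0, 1):           # shift left: SL -> QH -> ... -> QA
        return q[1:] + [sl]
    return list(data)                # load (outputs tri-stated as inputs)

q = step299([0] * 8, 1, 1, data=[1, 0, 1, 1, 0, 0, 1, 0])   # load
q = step299(q, 0, 0)                                        # hold
assert q == [1, 0, 1, 1, 0, 0, 1, 0]
q = step299(q, 1, 0, sr=0)                                  # shift right one place
assert q == [0, 1, 0, 1, 1, 0, 0, 1]
```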
Right shift data from a package to the left, shifts in on the SR input. Any data shifted out to the right from stage QH cascades to the right via QH'. This output is unaffected by the tri-state buffers. The shift right sequence for S1 S0 = 10 is: SR > QA > QB > QC > QD > QE > QF > QG > QH (QH') Left shift data from a package to the right shifts in on the SL input. Any data shifted out to the left from stage QA cascades to the left via QA', also unaffected by the tri-state buffers. The shift left sequence for S1 S0 = 01 is: (QA') QA < QB < QC < QD < QE < QF < QG < QH (SL) Shifting may take place with the tri-state buffers disabled by one of OE2' or OE1' = 1. Though, the register contents will not be accessible at the outputs. See table. The "clean" ANSI symbol for the SN74ALS299 parallel-in/ parallel-out 8-bit universal shift register with tri-state output is shown for reference above. The annotated version of the ANSI symbol is shown to clarify the terminology contained therein. Note that the ANSI mode (S0 S1) is reversed from the order (S1 S0) used in the previous table. That reverses the decimal mode numbers (1 & 2). In any event, we are in complete agreement with the official data sheet, copying this inconsistency. The Alarm with remote keypad block diagram is repeated below. Previously, we built the keypad reader and the remote display as separate units. Now we will combine both the keypad and display into a single unit using a universal shift register. Though separate in the diagram, the Keypad and Display are both contained within the same remote enclosure. We will parallel load the keyboard data into the shift register on a single clock pulse, then shift it out to the main alarm box. At the same time, we will shift LED data from the main alarm to the remote shift register to illuminate the LEDs. We will be simultaneously shifting keyboard data out and LED data into the shift register.
Eight LEDs and current limiting resistors are connected to the eight I/O pins of the 74ALS299 universal shift register. The LEDs can only be driven during Mode 3 with S1=0 S0=0. The OE1' and OE2' tristate enables are grounded to permanently enable the tristate outputs during modes 0, 1, 2. That will cause the LEDs to light (flicker) during shifting. If this were a problem, the OE1' and OE2' could be ungrounded and paralleled with S1 and S0 respectively to only enable the tristate buffers and light the LEDs during hold, mode 3. Let's keep it simple for this example. During parallel loading, S0=1, inverted to a 0, enables the octal tristate buffers to ground the switch wipers. The upper, open, switch contacts are pulled up to logic high by the resistor-LED combination at the eight inputs. Any switch closure will short the input low. We parallel load the switch data into the '299 at clock t0 when both S0 and S1 are high. See waveforms below. Once S0 goes low, eight clocks (t0 to t8) shift switch closure data out of the '299 via the QH' pin. At the same time, new LED data is shifted in at SR of the '299 by the same eight clocks. The LED data replaces the switch closure data as shifting proceeds. After the 8th shift clock, t8, S1 goes low to yield hold mode (S1 S0 = 00). The data in the shift register remains the same even if there are more clocks, for example, t9, t10, etc. Where do the waveforms come from? They could be generated by a microprocessor if the clock rate were not over 100 kHz, in which case, it would be inconvenient to generate any clocks after t8. If the clock was in the megahertz range, the clock would run continuously. The clock, S1 and S0 would be generated by digital logic, not shown here. If the output of a shift register is fed back to the input, a ring counter results. The data pattern contained within the shift register will recirculate as long as clock pulses are applied.
For example, the data pattern will repeat every four clock pulses in the figure below. However, we must load a data pattern. All 0's or all 1's doesn't count. Is a continuous logic level from such a condition useful? We make provisions for loading data into the parallel-in/ serial-out shift register configured as a ring counter below. Any random pattern may be loaded. The most generally useful pattern is a single 1. Loading binary 1000 into the ring counter, above, prior to shifting yields a viewable pattern. The data pattern for a single stage repeats every four clock pulses in our 4-stage example. The waveforms for all four stages look the same, except for the one clock time delay from one stage to the next. See figure below. The circuit above is a divide by 4 counter. Comparing the clock input to any one of the outputs shows a frequency ratio of 4:1. How many stages would we need for a divide by 10 ring counter? Ten stages would recirculate the 1 every 10 clock pulses. An alternate method of initializing the ring counter to 1000 is shown above. The shift waveforms are identical to those above, repeating every fourth clock pulse. The requirement for initialization is a disadvantage of the ring counter over a conventional counter. At a minimum, it must be initialized at power-up since there is no way to predict what state flip-flops will power up in. In theory, initialization should never be required again. In actual practice, the flip-flops could eventually be corrupted by noise, destroying the data pattern. A "self correcting" counter, like a conventional synchronous binary counter, would be more reliable. The above binary synchronous counter needs only two stages, but requires decoder gates. The ring counter had more stages, but was self decoding, saving the decode gates above. Another disadvantage of the ring counter is that it is not "self starting".
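A ring counter is only a few lines to model. The sketch below (function name is our own) recirculates a single 1 and confirms that the pattern repeats every four clocks, i.e. divide-by-4 behavior.

```python
# A 4-stage ring counter: the last stage output feeds the first stage
# input, recirculating a single 1.

def ring_counter(stages, clocks):
    q = [1] + [0] * (stages - 1)     # initialized to 1000... before use
    history = [list(q)]
    for _ in range(clocks):
        q = [q[-1]] + q[:-1]         # feedback: last stage -> first stage
        history.append(list(q))
    return history

h = ring_counter(4, 4)
assert h[4] == h[0]                  # the pattern repeats every 4 clocks
assert all(sum(state) == 1 for state in h)   # one stage high at a time
```

Calling `ring_counter(10, 10)` likewise answers the divide-by-10 question above: ten stages recirculate the 1 every ten clocks.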
If we need the decoded outputs, the ring counter looks attractive, in particular, if most of the logic is in a single shift register package. If not, the conventional binary counter is less complex without the decoder. The waveforms decoded from the synchronous binary counter are identical to the previous ring counter waveforms. The counter sequence is (QA QB) = (00 01 10 11). The switch-tail ring counter, also known as the Johnson counter, overcomes some of the limitations of the ring counter. Like a ring counter, a Johnson counter is a shift register fed back on itself. It requires half the stages of a comparable ring counter for a given division ratio. If the complement output of a ring counter is fed back to the input instead of the true output, a Johnson counter results. The difference between a ring counter and a Johnson counter is which output of the last stage is fed back (Q or Q'). Carefully compare the feedback connection below to the previous ring counter. This "reversed" feedback connection has a profound effect upon the behavior of the otherwise similar circuits. Recirculating a single 1 around a ring counter divides the input clock by a factor equal to the number of stages. Whereas, a Johnson counter divides by a factor equal to twice the number of stages. For example, a 4-stage ring counter divides by 4. A 4-stage Johnson counter divides by 8. Start a Johnson counter by clearing all stages to 0s before the first clock. This is often done at power-up time. Referring to the figure below, the first clock shifts three 0s from (QA QB QC) to the right into (QB QC QD). The 1 at QD' (the complement of Q) is shifted back into QA. Thus, we start shifting 1s to the right, replacing the 0s. Where a ring counter recirculated a single 1, the 4-stage Johnson counter recirculates four 0s then four 1s for an 8-bit pattern, then repeats. The above waveforms illustrate that multi-phase square waves are generated by a Johnson counter.
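The complement feedback is the only change from the ring counter model. The sketch below (function name ours) shows a 4-stage Johnson counter recirculating four 0s then four 1s, dividing by 8.

```python
# Same as a ring counter except the complement of the last stage is
# fed back, i.e. a Johnson (switch-tail) counter.

def johnson(stages, clocks):
    q = [0] * stages                 # cleared before the first clock
    seen = [tuple(q)]
    for _ in range(clocks):
        q = [1 - q[-1]] + q[:-1]     # QD' (complement) back into QA
        seen.append(tuple(q))
    return seen

s = johnson(4, 8)
assert s[8] == s[0]                  # repeats every 2 x 4 = 8 clocks
assert s[4] == (1, 1, 1, 1)          # four clocks fill with 1s, then 0s return
```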
The 4-stage unit above generates four overlapping phases of 50% duty cycle. How many stages would be required to generate a set of three phase waveforms? For example, a three stage Johnson counter, driven by a 360 Hertz clock, would generate three 120° phased square waves at 60 Hertz. The outputs of the flip-flops in a Johnson counter are easy to decode to a single state. Below, for example, the eight states of a 4-stage Johnson counter are decoded by no more than a two input gate for each of the states. In our example, eight of the two input gates decode the states for our example Johnson counter. No matter how long the Johnson counter, only 2-input decoder gates are needed. Note, we could have used uninverted inputs to the AND gates by changing the gate inputs from true to inverted at the FFs, Q to Q', (and vice versa). However, we are trying to make the diagram above match the data sheet for the CD4022B, as closely as practical. Above, our four phased square waves QA to QD are decoded to eight signals (G0 to G7) active during one clock period out of a complete 8-clock cycle. For example, G0 is active high when both QA and QD are low. Thus, pairs of the various register outputs define each of the eight states of our Johnson counter example. Above is the more complete internal diagram of the CD4022B Johnson counter. See the manufacturers' data sheet for minor details omitted. The major new addition to the diagram as compared to previous figures is the disallowed state detector composed of the two NOR gates. Take a look at the inset state table. There are 8-permissible states as listed in the table. Since our shifter has four flip-flops, there are a total of 16-states, of which there are 8-disallowed states. That would be the ones not listed in the table. In theory, we will not get into any of the disallowed states as long as the shift register is RESET before first use.
However, in the "real world" after many days of continuous operation due to unforeseen noise, power line disturbances, near lightning strikes, etc., the Johnson counter could get into one of the disallowed states. For high reliability applications, we need to plan for this slim possibility. More serious is the case where the circuit is not cleared at power-up. In this case there is no way to know which of the 16-states the circuit will power up in. Once in a disallowed state, the Johnson counter will not return to any of the permissible states without intervention. That is the purpose of the NOR gates. Examine the table for the sequence (QA QB QC) = (010). Nowhere does this sequence appear in the table of allowed states. Therefore (010) is disallowed. It should never occur. If it does, the Johnson counter is in a disallowed state, which it needs to exit to any allowed state. Suppose that (QA QB QC) = (010). The second NOR gate will replace QB = 1 with a 0 at the D input to FF QC. In other words, the offending 010 is replaced by 000. And 000, which does appear in the table, will be shifted right. There are many triple-0 sequences in the table. This is how the NOR gates get the Johnson counter out of a disallowed state to an allowed state. Not all disallowed states contain a 010 sequence. However, after a few clocks, this sequence will appear so that any disallowed states will eventually be escaped. If the circuit is powered-up without a RESET, the outputs will be unpredictable for a few clocks until an allowed state is reached. If this is a problem for a particular application, be sure to RESET on power-up.
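The self-correction described above can be checked exhaustively in a few lines. The model below encodes the correction rule as we read it from the text (replace QB with a 0 at the D input of QC when the 010 pattern appears), not the exact CD4022B gate netlist, and verifies that every one of the 16 possible states reaches the allowed sequence within a few clocks.

```python
# A sketch of the disallowed-state correction in a 4-stage Johnson
# counter; the correction rule is our functional reading of the text.

def step4022(q):
    """One clock: complement feedback, plus the NOR-gate correction that
    feeds a 0 into QC instead of QB when (QA,QB,QC) == (0,1,0)."""
    qa, qb, qc, qd = q
    d_qc = 0 if (qa, qb, qc) == (0, 1, 0) else qb
    return (1 - qd, qa, d_qc, qc)

# The eight allowed states, starting from the cleared register:
allowed = set()
s = (0, 0, 0, 0)
for _ in range(8):
    allowed.add(s)
    s = step4022(s)

# A state the normal sequence never visits escapes within a few clocks:
q = (0, 1, 0, 1)
for _ in range(8):
    q = step4022(q)
assert q in allowed
```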
The 74HC' part, designed for TTL compatibility, can operate from a 2V to 6V supply, count faster, and has greater output drive capability. For complete device data sheets, follow the links. CD4017 Johnson counter with 10 decoded outputs CD4022 Johnson counter with 8 decoded outputs The ANSI symbols for the modulo-10 (divide by 10) and modulo-8 Johnson counters are shown above. The symbol takes on the characteristics of a counter rather than a shift register derivative, which it is. Waveforms for the CD4022 modulo-8 and operation were shown previously. The CD4017B/ 74HC4017 decade counter is a 5-stage Johnson counter with ten decoded outputs. The operation and waveforms are similar to the CD4022. In fact, the CD4017 and CD4022 are both detailed on the same data sheet. See above links. The 74HC4017 is a more modern version of the decade counter. These devices are used where decoded outputs are needed instead of the binary or BCD (Binary Coded Decimal) outputs found on normal counters. By decoded, we mean one line out of the ten lines is active at a time for the '4017 in place of the four bit BCD code out of conventional counters. See previous waveforms for 1-of-8 decoding for the '4022 Octal Johnson counter. The above Johnson counter shifts a lighted LED each fifth of a second around the ring of ten. Note that the 74HC4017 is used instead of the CD4017 because the former part has more current drive capability. From the data sheet, (at the link above) operating at VCC= 5V, the VOH= 4.6V at 4ma. In other words, the outputs can supply 4 ma at 4.6 V to drive the LEDs. Keep in mind that LEDs are normally driven with 10 to 20 ma of current. Though, they are visible down to 1 ma. This simple circuit illustrates an application of the 'HC4017. Need a bright display for an exhibit? Then, use inverting buffers to drive the cathodes of the LEDs pulled up to the power supply by lower value anode resistors.
The 555 timer, serving as an astable multivibrator, generates a clock frequency determined by R1 R2 C1. This drives the 74HC4017 a step per clock, as indicated by a single LED illuminated on the ring. Note: if the 555 does not reliably drive the clock pin of the '4017, run it through a single buffer stage between the 555 and the '4017. A variable R2 could change the step rate. The value of decoupling capacitor C2 is not critical. A similar capacitor should be applied across the power and ground pins of the '4017.

The Johnson counter above generates 3-phase square waves, phased 60° apart with respect to (QA QB QC). However, we need 120° phased waveforms for power applications (see Volume II, AC). Choosing P1=QA P2=QC P3=QB' yields the 120° phasing desired. See figure below. If these (P1 P2 P3) are low-pass filtered to sine waves and amplified, this could be the beginnings of a 3-phase power supply. For example, do you need to drive a small 3-phase 400 Hz aircraft motor? Then feed 6x 400Hz to the above circuit CLOCK. Note that all these waveforms are 50% duty cycle.

The circuit below produces 3-phase nonoverlapping, less than 50% duty cycle, waveforms for driving 3-phase stepper motors. Above we decode the overlapping outputs QA QB QC to non-overlapping outputs P0 P1 P2 as shown below. These waveforms drive a 3-phase stepper motor after suitable amplification from the milliamp level to the fractional amp level using the ULN2003 drivers shown above, or the discrete component Darlington pair driver shown in the circuit which follows. Not counting the motor driver, this circuit requires three IC (Integrated Circuit) packages: two dual type "D" FF packages and a quad NAND gate.

A single CD4017, above, generates the required 3-phase stepper waveforms in the circuit above by clearing the Johnson counter at count 3. Count 3 persists for less than a microsecond before it clears itself. The other counts (Q0=G0 Q1=G1 Q2=G2) remain for a full clock period each.
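The 120° phasing claim can be verified numerically. A 3-stage Johnson counter has a 6-state ring, so a delay of 2 clocks is 2/6 of a cycle, i.e. 120°. The sketch below steps the counter through one full cycle and checks that P2=QC and P3=QB' are each a 2-clock-delayed copy of the previous phase, and that all three are 50% duty cycle.

```python
# Verify the 120-degree phasing of P1=QA, P2=QC, P3=QB' from a 3-stage
# (modulo-6) Johnson counter. One full cycle is 6 clocks, so a 2-clock
# delay corresponds to 120 degrees.

def johnson3_cycle():
    """One full cycle of (QA, QB, QC) states: 000 100 110 111 011 001."""
    q = [0, 0, 0]
    states = []
    for _ in range(6):
        states.append(tuple(q))
        q = [1 - q[2]] + q[:2]      # shift right, feed back inverted QC
    return states

states = johnson3_cycle()
P1 = [qa for qa, qb, qc in states]      # reference phase, 0 degrees
P2 = [qc for qa, qb, qc in states]      # should lag P1 by 2 clocks
P3 = [1 - qb for qa, qb, qc in states]  # should lag P2 by 2 clocks

assert sum(P1) == sum(P2) == sum(P3) == 3   # all 50% duty cycle
for i in range(6):
    assert P2[i] == P1[(i - 2) % 6]         # 120 degrees behind P1
    assert P3[i] == P2[(i - 2) % 6]         # 240 degrees behind P1
```

The same table shows why the raw outputs are only 60° apart: QB lags QA by one clock (60°), which is why the QB' trick is needed to get the third phase into proper 3-phase order.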
The Darlington bipolar transistor drivers shown above are a substitute for the internal circuitry of the ULN2003. The design of drivers is beyond the scope of this digital electronics chapter. Either driver may be used with either waveform generator circuit.

The above waveforms make the most sense in the context of the internal logic of the CD4017 shown earlier in this section, though the AND gating equations for the internal decoder are shown. The signals QA QB QC are Johnson counter direct shift register outputs not available on pin-outs. The QD waveform shows resetting of the '4017 every three clocks. Q0 Q1 Q2, etc. are decoded outputs which actually are available at output pins.

Above we generate waveforms for driving a unipolar stepper motor, which only requires one polarity of driving signal. That is, we do not have to reverse the polarity of the drive to the windings. This simplifies the power driver between the '4017 and the motor. Darlington pairs from a prior diagram may be substituted for the ULN2003.

Once again, the CD4017B generates the required waveforms with a reset after the terminal count. The decoded outputs Q0 Q1 Q2 Q3 successively drive the stepper motor windings, with Q4 resetting the counter at the end of each group of four pulses.

http://www.st.com/stonline/psearch/index.htm (select standard logic)
http://www.st.com/stonline/books/pdf/docs/2069.pdf
http://www.ti.com/ (Products, Logic, Product Tree)

Lessons In Electric Circuits copyright (C) 2000-2015 Tony R. Kuphaldt, under the terms and conditions of the Design Science License.
<urn:uuid:27f5b84e-93e9-48f4-b688-be7f15c830ec>
CC-MAIN-2016-26
http://www.ibiblio.org/kuphaldt/electricCircuits/Digital/DIGI_12.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396100.16/warc/CC-MAIN-20160624154956-00126-ip-10-164-35-72.ec2.internal.warc.gz
en
0.905132
16,583
4.46875
4
Beavers great for dragonflies and damselflies!
News / 27th October 2012

The effect of the Eurasian Beaver on Dragonflies and Damselflies (Odonata) by Sara Schloemer, Lutz Dalbeck and Andreé Hamm. Institute of Crop Science and Resource Conservation (INRES). ***Download the scientific poster from the download page***

As a result of a reintroduction project in 1981 the Eurasian Beaver returned to the Hürtgenwald, a large woodland area in the Eifel mountain range in the extreme west of Germany. The study looked at the effects of the large-scale changes to dragonfly and damselfly communities in the narrow, originally wooded, mountain valleys of the northern Eifel area. In order to compare beaver ponds with woodland streams representative of large areas north of the Alps, but not yet influenced by the beaver, we studied the following habitat types:

1. Natural springs (definite woodland springs)
2. Streams (natural – semi-natural in woodland, not influenced by the beaver)
3. Beaver ponds (some 10 to 15 years old, up to 2000 m², sunny to half-shaded)
4. Beaver ponds abandoned for 1 to 3 years.

All areas were searched for dragonflies and damselflies, their larvae and exuviae throughout the season in 2011 and 2012. In addition, chemical and physical parameters such as pH value, temperature, and water speed were measured at all sample sites, and macrozoobenthos collected, in order to gather information on the water quality in both the presence and absence of the beaver.

With a total of 29 species, the number of species in beaver ponds is markedly higher than in ponds without beavers (4 species). Even in abandoned beaver ponds the number of species is higher than in the streams (7 species). If species typical of the streams are considered, these also profited from the influence of the beaver.
This is due on the one hand to the dams, which are clearly very suitable habitat, and on the other to the increased exposure to sunlight, even on stretches of running water, caused by the beaver’s activities. Despite the relatively short period of time since the return of the beaver, and the rather small number of beaver ponds, the ponds already make a remarkable contribution to the conservation and spread of rare dragonfly and damselfly species. Beavers contribute markedly to nature and species conservation in the densely settled countryside of Central Europe. The species should therefore be more greatly integrated into plans to implement conservation measures and renaturisation of water bodies than it has been to date.

Particularly notable are:
– The extraordinary combinations of species (boreal alongside sub-Mediterranean species)
– The extremely different habitat requirements of the species
– The increase in typical stream dragonflies and damselflies in spite of damming by the beaver
– The increase in part of highly endangered species

List of Dragonflies and Damselflies associated with beaver-modified habitat (the usual habitat these species are found within is shown underneath):
– Still waters, small, sunny and bare
– Still waters with well developed vegetation
– Standing waters, well-vegetated
– Standing and slow-flowing waters
– Preferring small and shaded ponds
– Acidic heathy lakes and bogs
– Still and slow-flowing waters, mainly runnels in boggy areas
– Small streams, preferring scantily vegetated sites
– Streams, in forests, open moors and heaths
– All kinds of slow-flowing and standing waters
– Pioneer of newly created ponds
– Mostly acidic waters, bogs, moorland and heathy lakes
– Acidic, oligotrophic lakes, tarns and bogs, also richly vegetated habitats
– Less acidic, mesotrophic bogs, forest lakes, marshy ditches and oxbows
– Reedy canals, marshes, oxbows
– Running waters, avoiding shade
– Running water, classic habitat forest streams
– Running and standing water, favours the presence of aquatic vegetation
– Running and especially standing waters
– Small or temporary ponds
– Running and still water
– Any standing water, numerous at recent shallow or acidic sites
– Heath and bog lakes with peatmoss
– Standing or slow-flowing water with bordering trees and bushes
– Small streams, bogs and heathy lakes with peatmoss
– Well-vegetated standing and running waters
<urn:uuid:a859ffa3-633c-464e-a84a-c1af4658f122>
CC-MAIN-2016-26
http://www.welshbeaverproject.org/beavers-are-great-for-dragonflies/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396887.54/warc/CC-MAIN-20160624154956-00201-ip-10-164-35-72.ec2.internal.warc.gz
en
0.864444
931
3.5625
4
Ancient Mammoths: Russian Scientists Examine Why They Died Russian scientists have now discovered after examining the bones from “mammoth cemeteries” that these terrestrial mammals (among the largest on earth) died as a result of environmental factors; namely, geological and ecological changes that precipitated the depletion of necessary nutrients and minerals in soil and food, which in turn caused severe bone disease. The mammoth bones that were examined carried traces of osteoporosis, osteomalacia (bone softening) and osteochondrosis. The mammoth was certainly hunted for its meat, as were bison and other hoofed animals. Spear tips embedded in between mammoth ribs found in two fossils in the United States prove the point beyond all reasonable doubt. As herbivorous animals, mammoths needed large amounts of minerals to survive and they compensated for the lack of such by eating certain kinds of clay known as alkali soils, at “animal pastures.” The need to eat these clays was particularly prevalent during mating season and pregnancy. (Talk about cravings!) Due to tectonic forces, alkali soils were transformed into soils of an acid nature, which were lacking in nutrients. Grass which was a mammoth staple, lacked necessary minerals and this progressive decrease in proper nutrition caused various pathological processes in bones, some so painful that the poor animals couldn’t even move, much less forage for food (and more likely become someone else’s food). Although the exact truth may never be known, Russian scientists have at least clarified much of the mystery surrounding the disappearance of mammoths. What could be next? I hear saber-tooth tigers may well be on the horizon.
<urn:uuid:2032b186-36f4-46da-9d0e-6e7ff3fedf69>
CC-MAIN-2016-26
http://inventorspot.com/node/20005
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397695.90/warc/CC-MAIN-20160624154957-00180-ip-10-164-35-72.ec2.internal.warc.gz
en
0.97483
349
3.40625
3
Workplace dress codes touch on a variety of issues, including, freedom of speech, personal hygiene, customer relations, religious freedom, the minimum wage and racial and gender stereotypes. Employers have a number of legitimate reasons for imposing a dress code, but court rulings have limited their options. In Price Waterhouse v. Hopkins, 490 U.S. 228 (1989), the U.S. Supreme Court ruled that employment decisions, including workplace policies, may not target one gender. Generally, courts have interpreted that to mean employers may create dress codes that are different for each sex as long as they don’t unduly burden one gender more than the other. Note: Twenty states and the District of Columbia have adopted laws prohibiting discrimination based on gender identity, while others have adopted similar protections. Some gender identity laws specifically ban discrimination based on gender identity “expre...
<urn:uuid:c6beb393-b0e8-489f-a10f-76adb0e53b4a>
CC-MAIN-2016-26
http://www.businessmanagementdaily.com/2101/dress-codes
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393518.22/warc/CC-MAIN-20160624154953-00027-ip-10-164-35-72.ec2.internal.warc.gz
en
0.91134
258
2.765625
3
- Browse Cases - Search Cases - Browse Methods East Timorese activists campaign for independence from Indonesia, 1987-2002 East Timor, a portion of the Indonesian archipelago, was colonized by Portugal in the 16th century. It was not until 1975 that Portugal decolonized the area, at which point East Timor declared independence. Shortly after this, however, the Indonesian army, under the orders of Indonesian President Suharto, invaded and annexed East Timor. 60,000 East Timorese were killed or died of starvation during the invasion. East Timorese resisted the occupation from the beginning of the invasion, largely in the form of guerrilla organizations engaging in armed conflict with the Indonesian military. The main organization was the Revolutionary Front for an Independent East Timor (FRETILIN). One of Suharto’s justifications for annexing East Timor was based on a claim that FRETILIN was a communist threat to both East Timor and to Indonesia. Suharto installed a puppet government, instituted a heavy military presence, and allowed 100,000 Muslim Indonesian settlers to move into East Timor, the population of which was mostly Catholic. The Indonesian government also controlled all of the information and people that went in and out of the territory. From 1975 to 1978, FRETILIN fought against Indonesian troops using guerilla tactics, but in 1980 the military massacred about 200,000 East Timorese (nearly 1/3 of the territory’s population), which effectively shut down most FRETILIN guerrilla activity. In 1987, Xanana Gusmao, one of the FRETILIN commanders, stepped down and created the National Council of Maubere Resistance (CNRM). CNRM had three pillars: an Armed Front, a Diplomatic Front, and a Clandestine Front. The Clandestine Front, largely made up of students, organized nonviolent resistance. Much of the youth was involved in relaying messages, body counts, eyewitness testimonies, and other valuable information to international human rights organizations. 
This wing of the organization was very dispersed and decentralized. The students relied heavily on educational campaigns and nonviolent protests to raise awareness about human rights abuses in East Timor. The Clandestine Front was also the link between the FRETILIN guerillas in the mountains, headed by Gusmao, and the diplomatic faction led by Jose Ramos-Horta, the CNRM foreign minister. In 1989, a group of students who had received scholarships to study in Indonesia formed the National Resistance of East Timorese Students (Renetil). They had three main strategies: maintaining distance from Indonesian influences, revealing the brutality of the Suharto regime and Indonesian occupation to the outside world, and preparing East Timorese professionals to be able to help build an independent East Timor. From the start, the organization was concerned with preparing for any potential chaos or power vacuums. One of the main goals of these organizations was to gain attention on the international stage, over both human rights abuses and issues of self-determination. However, East Timor was not prominent within international political dialogue. An opportunity to gain exposure came, however, in 1989. In November of 1988, in order to counter accusations that Indonesia’s presence in East Timor was harmful and unjust, Suharto had declared East Timor “open territory,” and in 1989, invited Pope John Paul II to Dili, the capital. East Timorese activists used this opportunity to launch their first public protest. The pope visited Dili in October of 1989. During the mass, a group of youths ran to the altar and shouted, “Long live the Pope,” and, “Long live East Timor.” They then unfurled banners saying, “Free East Timor,” and, “Indonesia, get out.” For the first time, the independence movement gained significant mass media coverage around the world, thoroughly embarrassing the Indonesian government.
The action also helped to galvanize the East Timor population behind the independence movement. Activists (mostly students) staged a series of more protests coinciding with visits from foreign delegations. One such protest was when US Ambassador John Jonjo visited in January of 1991. The activists also managed to get an Australian journalist to publish interviews with Gusmao and the FALINTIL. November 1991 was a turning point for the campaign. On November 12, 1991, East Timorese youths transformed a funeral for a fellow activist in Dili into a large pro-independence rally. Attendees were unarmed, yet when they reached the Santa Cruz cemetery, Indonesian troops opened fire, killing over 250 people. Two American journalists were there, along with a British cameraman who caught the incident on film. The story circulated throughout the world, inciting international outrage and an international solidarity movement (for more information on the international solidarity movement, see the case “U.S. Activists Pressure the U.S. Government to Withdraw Support from Indonesia During the East Timorese Independence Movement, 1991-1999”). On November 19, eighty East Timorese and Indonesian students marched down the main street in Jakarta from the UN offices in the city. While police were detaining a portion of the students, one student read a statement refuting the police claim that only nineteen people were killed in the Santa Cruz massacre. That same day, twenty people also gathered at the Santa Cruz cemetery where the shooting took place to hold a mass for those who had died. The massacre also convinced some that the East Timorese and the Indonesian people had the same enemy: the Suharto regime. Renetil started trying to bring Indonesians, in addition to the larger international community, into the cause. They reached out to intellectuals, political opposition leaders, and human rights activists.
Students started studying Bahasa Indonesian (the official language of Indonesia), went to school there, started becoming part of Indonesian life, and then protested in the streets in Indonesia itself. They also worked closely with Indonesian human rights groups such as the Indonesian Legal Aid Society, Solidamor, and the Institute for Human Rights Study and Advocacy. In their campaign to gain the support of foreign institutions and governments, the Diplomatic Front of CNRM, namely Jose Ramos-Horta, traveled to meet foreign diplomats and government officials, as well as UN officials. Additionally, East Timorese activists began to travel around the world to participate in conferences and solidarity meetings, urging people to pressure their governments to push Indonesia to allow East Timorese self-determination. This growing international solidarity helped to encourage the continuing grassroots movement within East Timor. In 1994, the CNRM proposed a Three Phase Peace Plan to the UN. Phase one called for UN-supervised talks between Indonesia and Portugal with an aim toward ending armed conflict in East Timor, and to release political prisoners. Phase two would be a transition stage of autonomy in which the East Timorese would govern themselves through their own institutions. Phase three would be a referendum on self-determination. However, the Indonesian government ignored this plan. On November 12, 1994, during an Asia-Pacific Economic Cooperation summit in Jakarta, twenty-nine Indonesian and East Timorese demonstrators climbed the wall of the U.S. embassy where the summit was being held and stayed for 12 days. This further attracted international media attention. Student activity in East Timor continued, resulting in arrests, torture, and assassinations of hundreds of student activists. A turning point in Western governments’ policies toward East Timor occurred in 1996. 
That December, the leader of the Catholic Church in East Timor, Bishop Carlos Ximenes Belo, and Jose Ramos-Horta were awarded the Nobel Peace Prize. When they each accepted the prize, they called on the international community to support a referendum on East Timor’s right to self-determination. In 1998, East Timorese pro-independence factions joined under an organization called the National Council of Timorese Resistance (CNRT). The CNRT and students helped lead a mobilization of East Timorese, including business elites and members of the Indonesian security forces, to call for Suharto’s resignation. Shortly thereafter, in May of 1998, Suharto resigned from his position as president. President B.J. Habibie was appointed as his successor. After pressure from the international community, President Habibie offered East Timor special autonomy in June of 1998 in exchange for recognition of Indonesia’s sovereignty. In response, on June 15, 15,000 students demonstrated in Dili and demanded a referendum on independence, as well as the release of Gusmao from house arrest. Due to both internal and external pressure, Habibie offered independence as an option in January 1999, and on May 5, 1999, an agreement was signed between Indonesia, Portugal, and the UN calling for a UN-supervised referendum on the status of East Timor. Close to 80% of East Timorese voted for independence. However, the day after the vote, Indonesia-backed militias invaded East Timor and instituted a scorched-earth campaign that led to mass displacement. Gusmao called on FALINTIL guerillas to not fight back, saying later, “we did not want to be drawn into their game and their orchestration of violence in a civil war…We never expected such a dimension in the rampage that followed.” Finally, in September 1999, the UN Security Council authorized an Australian-led international force to East Timor, and one month later, the UN Transitional Administration in East Timor was established.
After a two-year transition period, East Timor became an independent state in May of 2002. Gusmao became the first president of the Democratic Republic of East Timor.
<urn:uuid:ec006069-c9b1-4f24-8251-90e62f9928f7>
CC-MAIN-2016-26
http://nvdatabase.swarthmore.edu/content/east-timorese-activists-campaign-independence-indonesia-1987-2002
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397111.67/warc/CC-MAIN-20160624154957-00148-ip-10-164-35-72.ec2.internal.warc.gz
en
0.961557
2,029
3.625
4
Meeting the world's growing demand for energy, minimizing related impacts on the environment and reducing the potential geopolitical tensions associated with increased competition for energy supplies represent some of the greatest technical and policy challenges of the next several decades. Research to develop alternative energy sources to displace conventional fossil fuels, including relevant economic management, social science and policy dimensions, is critical to meeting these objectives. Fossil fuels supply more than 80 percent of the world's primary energy but they are finite resources and major contributors to global warming. The ways and means for their ultimate replacement with clean, affordable and sustainable energy sources at the scale required to power the world are not yet fully obvious, readily available or, in many instances, technically feasible. Alternative sources are not all benign and their impacts on the environment, particularly when deployed at scale, are not fully understood. Also, renewable and efficiency technologies rely on some essential critical materials, which may be scarce or unavailable in quantities necessary to meet future demand. Further, existing energy infrastructures in the United States and around the world are complex and very large, represent enormous capital investment and have operational life spans of 50 years or more. Wholesale or even piecemeal replacement of these infrastructures will be costly, will take time and will be frequently resisted by entrenched interests. Finally, meeting dramatic increases in energy demand, particularly in the developing world, will compound these problems at the same time that it enables opportunities for enhanced national stability, economic development and improved quality of life.
The coordination of policy, regulation and technology development is critical to the widespread market diffusion of transformational energy technologies. The development of appropriate, technically informed and economically sound incentives and investments to promote development and deployment of these technologies must seek to: - Avoid the stranding of existing assets. - Optimize the investment of scarce research dollars. - Recycle, replace and reduce use of critical materials. - Minimize potential economic dislocation during the transition to a sustainable energy future. - Preserve the fundamental and desirable drivers and aspects of free markets by internalizing policy and regulatory costs to the maximum extent possible. - Maximize the opportunities for successful transformation of global energy systems.
<urn:uuid:67a12028-89ae-4c10-bf3a-c71c6437163c>
CC-MAIN-2016-26
http://mitei.mit.edu/research/transformations
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395548.53/warc/CC-MAIN-20160624154955-00068-ip-10-164-35-72.ec2.internal.warc.gz
en
0.908863
473
3.09375
3
The United Nations Children's Fund warns hundreds of millions of children who live in cities and towns are excluded from vital services, including health, education, clean water and sanitation. In this year’s State of the World's Children report, UNICEF describes the grim reality of children growing up in poverty in city slums, which offer few of the benefits available to children of a wealthier class. Cities are great places for people who can afford to go to the doctor, get an education and take advantage of the many recreational activities available. But, cities are not such great places for poor children forced to live in slums and shantytowns. The U.N. Children’s Fund says these children are among the most disadvantaged and vulnerable in the world. It says they live amid violence and exploitation. They are deprived of the most basic services and denied a chance to thrive. UNICEF’s "State of the World’s Children Report" finds overcrowding and unsanitary conditions in these slums rapidly spread diseases such as pneumonia and diarrhea, two of the biggest killers of children under five in the world. The report notes one urban resident in three lives in a slum. This rises to six in 10 in Africa. It says many children in slums live close to vital services, such as schools and clinics. The problem, it says, is they are excluded from these services because of poverty and discrimination. UNICEF spokeswoman, Marixie Mercado, says these problems begin at birth. “One-third of children in urban areas are not registered at birth," said Mercado. "That proportion rises almost half in Africa, makes them much more vulnerable to exploitation throughout their lives. Slum dwellers, for example, living without secure land tenure always live with the permanent risk of being evicted. 
This gives them absolutely no incentive to improve their households or their communities and it contributes to a huge sense of insecurity for children and their families.” And, UNICEF warns the situation is likely to get worse unless governments put children at the heart of urban planning and improve services for all. Today, more than half of all people, including more than one billion children, live in urban settings. By 2050, the United Nations predicts, 70 percent of the world population will live in cities. Mercado says the increase in the number of slum-dwellers will only add to the deplorable conditions under which so many children are forced to live. “In Kenya, for example, a study in 2009 showed that stunting rates among children in the slums of Nairobi were almost three times higher than in urban areas in general," she said. "In Bangladesh, mortality rates among children in slums was higher than both the rural and the urban rates, generally.” But, the report cites other studies where cities have implemented programs, which are benefiting the poor. Mexico has an initiative which provides cash to the poorest families to send their children to school and pay for health care. This operates in both rural and urban areas and is now being followed by other countries. UNICEF says steps such as the abolition of medical and school fees will improve children’s health and allow many more children to get an education. The agency says vital services often are available to the poor, but they are unaware of this. It says it is important that slum-dwellers be informed of their rights.
<urn:uuid:9e4875a8-9f79-45e8-84dc-70b751b2232c>
CC-MAIN-2016-26
http://www.voanews.com/content/unicef-hundreds-of-millions-of-city-children-lack-vital-services-140654433/159669.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783403823.74/warc/CC-MAIN-20160624155003-00174-ip-10-164-35-72.ec2.internal.warc.gz
en
0.970047
711
3.109375
3
Very little is known about Johann Gutenberg, including his actual date of birth. Even the portrait seen here is based on guesses about his appearance. Most of what scholars know about Gutenberg comes from a handful of legal and financial documents. Johann Gensfleisch zum Gutenberg was born into an aristocratic family of skilled metal craftsmen. Knowledge of metals was useful to him as he developed his method of casting printing type. Before beginning his work on the Bible around 1450, he experimented with printing single sheets of paper and even small books. In 1455, Gutenberg was sued by his wealthy business partner Johann Fust for the return of large sums of money. In all probability, these funds were used in the development of printing and the production of Gutenberg's Bible. Gutenberg lost this suit and presumably had to turn over some of his printing equipment to Fust. Little is known about Gutenberg's later years, although he was given a pension by the Archbishop of Mainz and probably continued to print and develop new techniques until his death in 1468. Gutenberg's grave is now lost, but his legacy lives on. In 1997, Gutenberg's invention was chosen as the most important of the second millennium by Time-Life Magazine. Two years later, the A&E Network ranked Gutenberg the most influential person of the second millennium on their "Biographies of the Millennium" countdown.
<urn:uuid:78857a09-532d-4296-b683-b91ba7ad5c6d>
CC-MAIN-2016-26
http://www.hrc.utexas.edu/exhibitions/permanent/gutenbergbible/gutenberg/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398216.41/warc/CC-MAIN-20160624154958-00058-ip-10-164-35-72.ec2.internal.warc.gz
en
0.983534
279
3.703125
4