The beginning of the new millennium marked the 200th anniversary of the discovery
of two types of Jurassic dinosaur tracks in Massachusetts, in 1802. At that
time, dinosaurs were unknown; they would not be named until 1841. The three-toed
tracks, later named Grallator (meaning stilt-walking bird), were
dubbed the trail of "Noah's raven" and attributed to unknown
ancient birds. We now attribute them to theropods, the two-legged carnivorous
dinosaurs that paleontologists consider the ancestors of birds. The larger four-toed
tracks, later named Otozoum (meaning giant animal), were very enigmatic
at the time but are now attributed to prosauropods, early relatives of
the giant sauropods, more familiarly known as the long-necked, long-tailed brontosaurs.
This dinosaur print is one of thousands at the Twentymile Wash Dinosaur Tracksite, located in Grand Staircase-Escalante National Monument in Utah. Discovered in 1998, the site contains dinosaur footprints preserved in the upper part of the Entrada Formation. Image courtesy of Brent Breithaupt.
These early finds, like many footprint finds today, belonged to creatures previously unknown or were found before the discovery of fossilized skeletal remains of the trackmakers, representing but one unique contribution of dinosaur tracks to the dynamic study of the ancient world.
Paleoartist Greg Paul (see story, this issue) once suggested that dinosaur trackways are the nearest thing we have to dinosaur "movies," emphasizing the dynamic nature of tracks made by living animals, in contrast to the death and decay represented by fossilized skeletons. Although much neglected until a generation ago, the burgeoning field of dinosaur and vertebrate tracking, known as ichnology, has made rapid strides in the wake of the dinosaur renaissance that has been taking place over the past few decades. With more track discoveries has come a greater understanding of how dinosaurs moved and lived, and the discoveries have also helped capture the imagination of the public.
Forging a trail
The dinosaur renaissance of the 1960s and 1970s reflected a new ecological-environmental awareness that infused paleontology and sedimentary geology with a more dynamic and holistic mindset (see story, page 18). Almost overnight, dinosaurs were transformed from defunct, extinct failures into athletic superheroes that had suppressed mammalian evolution for much of the Mesozoic. Debate over the posture, gait and speeds of dinosaurs peppered the pages of Nature and other prestigious journals, and, despite a scarcity of known tracksites, footprints from a few important sites figured prominently as concrete evidence supporting these revolutionary new interpretations.
David Attenborough (left) and a BBC crew film at the 160-million-year-old megatracksite in the Entrada Formation near Moab, Utah, made by carnivorous dinosaurs. Image courtesy of Martin Lockley; taken in 1987.
By the mid-1970s, paleontologists began using trackways of long-striding bipedal dinosaurs to estimate the maximum speeds they attained. A 110-million-year-old theropod trackway from Texas provided an estimated speed of about 43 kilometers per hour (26 miles per hour), as fast as (or faster than) Olympic sprinters. Another theropod trackway, found in China in 2001, provides a close second, representing a silver-medal performance at an estimated 41 kilometers per hour. To date, almost all reported runners have been theropods, suggesting high activity levels among these carnivorous predators, at least for the short distances over which trackway segments have been recorded. Narrow-gauge brontosaur trackways, more like those of elephants than wide-straddling hippos, dispelled the archaic notion that the animals were cumbersome, primitive swamp-dwellers unable to support their weight on land.
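Trackway speed estimates of this kind are typically derived from R. McNeill Alexander's formula, which relates stride length and hip height to locomotion speed. The sketch below applies that formula with purely illustrative measurements; the stride and hip-height values are assumptions for demonstration, not the actual data from the Texas or China trackways:

```python
import math

def alexander_speed(stride_m, hip_height_m, g=9.81):
    """Estimate locomotion speed (m/s) from a fossil trackway using
    Alexander's formula: v = 0.25 * g**0.5 * stride**1.67 * h**-1.17.
    Stride length is measured between successive prints of the same foot;
    hip height is commonly approximated as about 4x the footprint length.
    """
    return 0.25 * math.sqrt(g) * stride_m**1.67 * hip_height_m**-1.17

# Illustrative values for a running theropod (assumed, not measured)
stride = 5.0      # meters between successive left-foot prints
hip_height = 1.0  # meters, roughly 4x a 25-cm footprint
v = alexander_speed(stride, hip_height)
print(f"{v:.1f} m/s = {v * 3.6:.0f} km/h")  # roughly 11.5 m/s, about 41 km/h
```

With these assumed inputs the formula yields a speed in the same range as the published trackway estimates, which is why short, long-strided trackway segments are read as evidence of running.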
Despite these discoveries, it was not until the 1980s that people reported many large, previously overlooked tracksites, and dinosaur ichnologists began publishing detailed maps of large sites with thousands of footprints comprising the trackways made by hundreds of different individuals representing a variety of identifiable groups. These important sites shifted the emphasis from individual behavior to ideas about social behavior, especially among gregarious sauropods and large ornithopods (duckbilled dinosaurs) that evidently sometimes traveled in large herds.
Such tracksites also proved useful for understanding the relationships between predators and prey, and for determining the preference that various dinosaurs and other vertebrate groups had for particular habitats. The rock type in which the tracks are found, or facies, helps reconstruct ancient environments and ecology.
Research in these and other new areas has gathered increasing momentum. Computer models, for example, now help us visualize the biomechanics of sauropods showing variations in posture, locomotion and weight distribution by modeling the anatomy of the track-making animals. The rapid pace of discovery has also established the basic morphology of tracks of other previously unknown dinosaur groups such as the horned dinosaurs (ceratopsians) and the two-toed sickle-clawed raptors (dromaeosaurids).
The dinosaur tracks renaissance has also accompanied revolutions in the study of pterosaur and bird tracks. Debate over the distinction between crocodile and pterosaur tracks, and whether the latter walked on two legs or four legs, raged in the mid-1990s, even making the pages of Time magazine. The verdict: Pterosaurs were quadrupedal and their tracks were common worldwide, in some cases forming extensive track assemblages, or ichnofacies.
Likewise, bird tracks were once thought rare in the Mesozoic Era, 240 million to 65 million years ago, with only three sites known prior to 1980. Now sites are widely known, especially in Asia, and prove an early origin for shorebird-like species dating back to the middle Mesozoic, 145 million years ago, if not earlier. Mammal-like reptile tracks and associated spider and scorpion trails are also abundant in the Permian through the middle of the Mesozoic. Such track distribution patterns suggest a distinctive desert dune-field paleoecology, dominated by small vertebrates including mouse- and squirrel-sized protomammals and pigeon- or crow-sized dinosaurs.
Such track diversity reminds us that many of these track-rich formations contain few if any fossils. Even in cases where an appreciable fossil record exists, it usually lacks most of the small vertebrates and invertebrates.
The track record thus fills major gaps in the fossil record in many regions. For example, in the Jurassic of the western United States, only the well-known Morrison Formation has a significant record of vertebrate body fossils, including many brontosaurs. Most other well-known formations, including the Wingate, Kayenta, Navajo, Entrada and Summerville, are almost completely devoid of vertebrate remains, except for a few sites and isolated finds, celebrated for their very rarity.
By contrast, each of these formations has yielded dozens, even hundreds, of tracksites that consistently produce characteristic and consistent patterns of ancient ecology in formations previously dismissed as barren. For this reason, the vertebrate fossil record of many classic national monuments, parks and recreation areas, from Arches and Monument Valley to Capitol Reef, Glen Canyon and Zion, is often based almost exclusively on the track record. Abundant tracks are recorded in many other outcrop-rich protected park areas elsewhere around the world, especially in semi-arid and mountainous or coastal sections, such as northeastern Spain, Portugal, northern China and South Korea.
Further down the trail
By studying the trackways not only for clues about how dinosaurs lived but also for clues about where they lived, ichnologists have vastly increased understanding of the ancient planet. Combining particular track types and facies, they have discovered what type of environments certain dinosaur species favored. For example, brontosaur tracks are typically found in limestone that represents tropical coastal plain systems, whereas the tracks of large ornithopod dinosaurs such as Iguanodon, and the shield- and spike-bearing ankylosaurs are much more typical of temperate, higher latitude, sand-, mud- and coal-dominated coastal systems.
The Jurassic Museum of Asturias in Spain is built in the shape of an ornithopod footprint and contains one of the largest collections of Jurassic dinosaur tracks in the world. The La Griega beach along the Coast of Asturias is in the background, and the rocks at the high-tide mark have Jurassic dinosaur footprints. Image courtesy of Jose Carlos Garcia Ramos.
One of the most striking conceptual shifts has surrounded the recognition of megatracksites, or "dinosaur freeways": regionally extensive track-bearing units, sometimes confined to a single surface. These freeways, ranging from less than 10,000 to more than 100,000 square kilometers, provide data on the type and distribution of thousands of trackmakers. They also provide insight into changing coastal dynamics and appear to be related to the buildup of sediment on coastal plains as sea level rose.
At least a dozen such megatracksites have been reported since the first were recorded in the late 1980s, and several have been placed in proper stratigraphic context. Trampling, or dinoturbation, has also been recognized as a widespread phenomenon that has a huge impact on the substrate. Some formations have hundreds of track-bearing layers indicating that reworking by trampling or plowing by vertebrate feet can affect much of the entire rock volume in some formations.
As interesting as such large-scale megatracksite phenomena are for geologists in general, perhaps the most revolutionary recent application of dinosaur tracks has been in the area of paleogeography, especially in the Mediterranean. The discovery of dinosaur tracksites in areas thought to be open marine basins or isolated Bahama-like platforms has prompted some authors to call for a major reevaluation or rewriting of the geodynamics of parts of the region during the Mesozoic, 240 million to 65 million years ago. Either dinosaurs somehow crossed deep seaways between widely separated landmasses, or our reconstructions of ancient Mediterranean geography are simply wrong.
Maintaining the trail
The most striking aspect of the dinosaur tracking revolution has been the discovery of vast numbers of sites that require in situ protection for scientific study and public education (see sidebar). Unlike bone sites that are frequently destroyed or buried as excavated skeletons are removed to museums, most dinosaur tracksites, some larger than football fields, have to be preserved in place. As already noted, these sites number in the hundreds in many nations, and add up to thousands on the continental scale. For example, about 250 recorded sites are in Colorado, and more than 100 are in the small province of La Rioja, Spain. Equally high densities are typical of many other areas, including Utah and the southern coastal region of Korea.
In some cases, large-scale mining operations have exposed large tracksites, often in rugged mountainous terrain in the high Andes, Rockies, Alps or Pyrenees. Such sites pose special logistical, conservation and political problems. First, they are dangerous places in which to work, requiring specialized skills and equipment (expert mountaineers and helicopters). Second, they are often in imminent danger of complete collapse or highly accelerated rates of erosion. Third, their long-term conservation may be prohibitively expensive and thus administratively and politically problematic.
Technology, however, can help address these challenges. A 3-D landscape imagery technique known as photogrammetry can be employed at all scales, from single footprints, to large sites of several acres and beyond, to the scale of megatracksites. While contour maps of individual tracks are visually pleasing and can even be used to print out 3-D replicas, photogrammetry may have much greater potential for accurate mapping of large sites, especially those where access is difficult and complex topography defies mapping by traditional compass, tape and grid methods.
The abundance of dinosaur tracksites has generated significant research collections, interpretive centers and museums. For example, the Dinosaur Tracks Museum at the University of Colorado in Denver, the St. George Dinosaur Discovery site in southwestern Utah and the Jurassic Museum of Asturias in Spain (built in the shape of a giant footprint), have all sprung up in the last few years. All contain thousands of footprint specimens and strong links to local interpretive trails and displays, such as Dinosaur Ridge in Colorado. Such museums and associated interpretive sites and trails each receive hundreds of thousands of visitors annually on the merits of tracks alone, often without the added attraction of large dinosaur skeletons and models.
End of the trail
The rise in the scientific study of dinosaur footprints epitomizes the dynamic field of paleontology, as it covers almost half of the entire track record of vertebrate life on land, creating an evolutionary path through time. From the oldest trails of invertebrate and vertebrate animals (respectively about 450 and 400 million years old), we are led through the middle era (Mesozoic) age of dinosaurs, pterosaurs and early birds, to the age of mammals (65 million years ago to the present) and our own hominid ancestors, the latter represented by the footprints of 3.5-million-year-old Tanzanian Australopithecus, now carefully buried for protection and study by future generations. These eternal trails in the sands of time are highly evocative of our ancestry and are among the spectacular legacies left to us by the evolving geological landscape.
Henry David Thoreau once stated: "If I were to make a study of the tracks of animals and represent them by plates, I should conclude with the tracks of man." This statement seems prescient in light of recent finds of purportedly 40,000-year-old hominid tracks from Mexico that suggest colonization of the Americas more than twice as early as previously thought (see story, this issue). Again, we see footprints opening a new field, hominid tracking, of special interest to our own origins and prehistoric wanderings. Such rewriting of the track record has progressively recast our view of the ancient landscape and physically integrated it into our present cultural landscape.
In December 1999, a high school student ran across some unusual footprints
and tracks embedded in the rocks at the Union Chapel Coal Mine in northwestern
Alabama. He told a teacher, who happened to be a member of the Alabama
Paleontological Society, about the find, setting in motion legal, legislative
and scientific battles. The conservation effort ended last year, and in
March, the Alabama Department of Conservation and Natural Resources dedicated
the new Steven C. Minkin Paleozoic Footprint site. The site is now open
for scientific research and fossil collecting. | <urn:uuid:3185f4a0-5c52-498c-b512-e1e11febeb2b> | CC-MAIN-2016-26 | http://www.geotimes.org/jan06/feature_dinotrails.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783404382.73/warc/CC-MAIN-20160624155004-00095-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.947612 | 3,044 | 3.921875 | 4 |
A short introduction to anarchist-communism.
Anarchist communism is a form of anarchism that advocates the abolition of the State and capitalism in favour of a horizontal network of voluntary associations through which everyone will be free to satisfy his or her needs.
Anarchist communism is also known as anarcho-communism, communist anarchism, or, sometimes, libertarian communism. However, while all anarchist communists are libertarian communists, some libertarian communists, such as council communists, are not anarchists. What distinguishes anarchist communism from other variants of libertarian communism is the former's opposition to all forms of political power, hierarchy and domination.
Anarchist communism stresses egalitarianism and the abolition of social hierarchy and class distinctions that arise from unequal wealth distribution, the abolition of capitalism and money, and the collective production and distribution of wealth by means of voluntary associations. In anarchist communism, the state and property no longer exist. Each individual and group is free to contribute to production and to satisfy their needs based on their own choice. Systems of production and distribution are managed by their participants.
The abolition of wage labour is central to anarchist communism. With distribution of wealth being based on self-determined needs, people will be free to engage in whatever activities they find most fulfilling and will no longer have to engage in work for which they have neither the temperament nor the aptitude. Anarchist communists argue that there is no valid way of measuring the value of any one person's economic contributions because all wealth is a collective product of current and preceding generations. Anarchist communists argue that any economic system based on wage labour and private property will require a coercive state apparatus to enforce property rights and to maintain the unequal economic relationships that will inevitably arise.
Well-known anarchist communists include Peter, or Piotr, Kropotkin (Russia), Errico Malatesta (Italy) and Nestor Makhno (Ukraine). Kropotkin is often seen as the most important theorist of anarchist communism, outlining his economic ideas in the books The Conquest of Bread and Fields, Factories and Workshops. Kropotkin felt co-operation to be more beneficial than competition, arguing in Mutual Aid: A Factor of Evolution that this was illustrated in nature. Anarchist communist ideas were very influential in the introduction of anarchism to Japan through the efforts of Kôtoku Shûsui in the early 1900s, who corresponded with Kropotkin and translated his works. Alexander Berkman and Emma Goldman (who were both deported from the USA in 1919) became important proponents of 'Communist anarchism' and became especially critical of Bolshevism after they discovered its devastating reality first-hand in Russia, and after the Red Army's crushing of the Kronstadt uprising. They in turn had been influenced by the German-born émigré to the USA, Johann Most, who had earlier helped bring anarchist communist thought to Britain through his contact with Frank Kitz in London around 1880 (see Anarchist Communism in Britain for a full historical account).
Many platformists refer to themselves as anarchist communists, although other anarchist communists are uncomfortable with some areas of the Organisational Platform document, such as the issue of 'collective responsibility' as supported by Makhno but opposed by Malatesta. While historically many anarchist communists have been active anarcho-syndicalists, many are critical towards those syndicalists who seek some form of self-managed wage system rather than its abolition, pointing out that any system which maintains economic relations based on reward of effort and exchange is not communist.
Modern day anarchist communists are represented in several organisations within the International of Anarchist Federations, including the Anarchist Federation (Britain). Platformist anarchist communists include the Workers Solidarity Movement (Ireland) and the North-Eastern Federation of Anarchist Communists (USA). Many nascent Eastern European, Russian and Caucasian anarchist groups identify with anarchist communism and there is a strong anarchist communist current amongst contemporary Latin American and Caribbean anarchist organisations.
- Anarchism - reading guide - Libcom.org's reading guide on anarchism, anarchist theorists and their development through history.
- What is anarchism? - Alexander Berkman - Easy to read introduction to 'Communist Anarchism'.
- The Conquest of Bread - Peter Kropotkin - Kropotkin's classic work on how an anarchist-communist society could function.
- Anarchy and "Scientific" Communism - Luigi Fabbri - a response to a Bolshevik mischaracterisation of anarchism, Fabbri argues for anarchism as anti-state communism.
- Anarchism and Organisation - Errico Malatesta - classic work on the need of anarchists to organise themselves in relation to the class struggle.
- My Disillusionment in Russia - Emma Goldman - famous book outlining her experiences of the degeneration of the 1917 Russian Revolution
Edited by libcom from an article by the Anarchist Federation.
Teens learn dangers of texting while driving
"It definitely taught me to be careful and not to text while driving because I'm going to kill somebody," Mayott said.
So far 25 states have banned texting while driving, but many are going a step further, sending kids through similar courses, so they can see the errors, accidents and fatalities they could cause. Officials hope the reality will alleviate the temptation to send an electronic message to a friend while behind the wheel.
"It's pretty eye opening for the kids," said David Teater, senior director of transportation initiatives for the National Safety Council in Itasca, Ill. "They're very unsuccessful at texting and navigating the cones."
The NSC estimates that 28 percent of crashes — or 1.6 million per year — are caused by cell phone use, either talking or texting.
Drivers who use cell phones are four times more likely to be in a crash, while drivers who text increase that risk to 8 to 23 times, the NSC said.
"People shouldn't be messing with cell phones when they're trying to drive," said Drew Bloom, captain of the Vermont Department of Motor Vehicle enforcement, who brought the obstacle course idea to Vermont after hearing about it in North Carolina. "We're finding a 400 percent average increase in driving errors. ... So when you have a 400 percent increase in amount of mistakes you're making and your reaction time slows dramatically, the proof is in data."
The teens drive through the course once, and then a second time while texting to a friend on the side lines. It gives them hands-on experience that authorities hope will sink in.
"If we can reach one teen out of five teens who won't text and drive then they could possibly save their life in the future," said Sgt. Jeff Gordon, public information office for the Highway Patrol in North Carolina, which has seen a rise in teen fatalities.
Motor vehicle departments and driver's education courses around the country hope to plant the no texting message early, while teens are just learning the rules of the road.
"This age group from 15 to 20 represents about 15 percent of licensed drivers in Vermont yet they're involved in almost 30 percent of the crashes. So they're prone to crashing anyway. If you add texting and electronic devices and those sorts of things then the probability goes up dramatically," said Skip Allen, executive director of the Youth Safety Council in Vermont, which passed a law this month banning texting while driving.
The Turn Off Texting campaign brought the golf cart event to five schools in Vermont this spring, and plans to get to three more before the end of the school year.
Many teen drivers, who must be accompanied by a parent in the car until they get their licenses, already are prohibited from texting.
"It's a big no, no," said Corissa Peterson, 16, of Hartford. "I put my phone on silent and put it in my step dad's pocket."
Trever Nadeau, 16, of Sharon, sometimes brings his cell phone with him but doesn't answer.
After running through the obstacle course, he said he won't text.
"I did horrible. I got like one mistake the first time, eleven the second," Nadeau said.
Hannah Chambers, 16, already knew something about the dangers of texting while driving. Her older cousin went off the road and hit a tree while texting.
But the 16-year-old was still surprised at how hard it was to text and navigate through the tight turns, and stop signs.
If she's in a car with a driver who texts, she tells them to pull over or hand the phone to her to send the message.
"The driver needs to pay attention to the road not their cell phone," she said.
Last updated: 1:50 pm Thursday, December 13, 2012 | <urn:uuid:b60d526a-a196-49d6-a95b-0c80b3b3d952> | CC-MAIN-2016-26 | http://www.gazettextra.com/news/2010/may/17/teens-learn-dangers-texting-while-driving/ | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397565.80/warc/CC-MAIN-20160624154957-00086-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.967906 | 787 | 2.734375 | 3 |
The 115/22-kV Jiangxai substation in the Lao People’s Democratic Republic was used for material storage during construction.
Lao People’s Democratic Republic (Lao PDR) is a scenic, mountainous and highly forested landlocked Southeast Asian country with a beautiful landscape. This developing country has a population of about 6 million, the majority of whom live in the countryside.
The Laos government considers the development of the country's vast hydropower resources an appropriate means of achieving sustainable economic growth and meeting domestic electrification needs. In addition, the country exports surplus electrical energy to its neighboring countries of Thailand, Vietnam, Cambodia and China. Lao PDR has signed power exchange agreements with Thailand and Vietnam to export 20,000 MW to Thailand and 2,000 MW to Vietnam before 2020.
Development of the country’s first hydropower plant began during 1970. The country now has a total installed capacity of 671 MW, of which 360 MW is currently being exported to Thailand and Vietnam. Lao PDR has a hydropower potential of 36,000 MW, of which around 18,000 MW is deemed to be technically exploitable.
Power Supply System in Laos
Electricite du Laos (EdL) is responsible for the management of the state-owned power system. Based on the variable geographical characteristics of the country, the existing transmission system supplying the whole country is operated as four distinct systems, namely the northern region, the central regions 1 and 2, and the southern region. Currently, Laos lacks a common interconnected transmission system, and the majority of villages are still without electrification.
The geographical conditions and availability of hydropower resources for the existing and planned hydropower plants (being installed to meet the rising domestic demand) are mainly located in the central 1 and southern regions. Therefore, the following power-supply strategy was developed:
- The central 1 region will supply the power requirements for the northern and central 2 regions.
- The central 2 region will supply the power requirements for the southern region.
To implement this strategy, the interconnecting transmission line transfer capacity between the regions required investments to fulfill the existing power-exchange agreements and to optimize the use of the system reserve margin. | <urn:uuid:3b4f47ce-80fa-4b3f-ac21-69bbe42a345a> | CC-MAIN-2016-26 | http://tdworld.com/overhead-transmission/laos-expands-grid | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783402479.21/warc/CC-MAIN-20160624155002-00071-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.929622 | 472 | 2.65625 | 3 |
WASHINGTON, Nov. 17, 2011 — The American Chemical Society (ACS), the world’s largest scientific society, today released a series of audio podcasts highlighting the science and cutting-edge technology behind solar power. The podcasts, available without charge at www.acs.org/chemmatters, tell the story of how scientists and students are making progress in harnessing the abundant energy of the sun.
Well-suited for classroom use, the first two episodes explain the chemistry behind solar power — a promising alternative to fossil fuels that could have a larger role in the years ahead as a sustainable energy source for the world. The third and fourth podcasts describe a competition supported by the U.S. Department of Energy called the Solar Decathlon, in which students compete to build the world’s best solar homes.
The podcasts are based on articles published in the latest issue of ChemMatters, ACS’ magazine for high school students. Published quarterly by the ACS Office of High School Chemistry, each issue contains articles about the chemistry of everyday life and is of interest to high school students and their teachers. To request a free copy of ChemMatters, go to http://fs7.formsite.com/ACSEducation/ChemMatters/index.html.
The American Chemical Society is a non-profit organization chartered by the U.S. Congress. With more than 163,000 members, ACS is the world’s largest scientific society and a global leader in providing access to chemistry-related research through its multiple databases, peer-reviewed journals and scientific conferences. Its main offices are in Washington, D.C., and Columbus, Ohio. | <urn:uuid:8364b42f-446d-4f02-962b-71a0563a9ac9> | CC-MAIN-2016-26 | https://www.acs.org/content/acs/en/pressroom/newsreleases/2011/november/four-new-american-chemical-society-podcasts-shine-a-light-on-solar-energy.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395548.53/warc/CC-MAIN-20160624154955-00123-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.916845 | 340 | 3.171875 | 3 |
Darfur: Beyond the Brink of Disaster
Though the crisis in Darfur is finally receiving significant international attention, both the conflict and the humanitarian catastrophe continue. What is behind UN and international reluctance to become more involved in the situation? And what are the key issues for international actors to keep in mind in constructing policies to quell the violence and reduce the civilian suffering?
A great deal of controversy surrounds the ethnicity and self-identification of the inhabitants of Darfur, the westernmost and largest province of Sudan. Inhabitants of Darfur can be superficially divided into two major groups: sedentary farmers (largely identified as African) and nomadic herders (largely identified as Arab). Such distinctions are further complicated by longstanding tribal, ethnic, and class divisions. Though the two groups have clashed frequently throughout Darfur's history, conflicts over resources and access to land were for the most part resolved through traditional tribal settlement mechanisms.
More recently, however, the conflict has become politicized and has taken on distinct ethnic and racial undertones, with Darfur serving as a base for anti-government forces. In February 2003, rebel groups attacked Sudanese troops, accusing the government of at best neglecting and at worst exploiting the region. With the outbreak of hostilities, the government was accused of arming local, predominately Arab militias called the Janjaweed to attack villages belonging primarily to the Fur, Zaghawa, and Massalit tribes. The government suspects these tribes of supporting the two major rebel groups in the region, the Sudanese Liberation Army (SLA) and the Justice and Equality Movement (JEM). Since April 2003, the Janjaweed have been systematically attacking, looting, and destroying villages. The Sudanese government has also initiated offensives, including aerial bombing, against populations in Darfur that it considers disloyal.
The violence and destruction have led to the deaths of untold numbers of civilians and caused massive displacement, including 200,000 refugees in Chad and an estimated 1.2 million internally displaced persons. The security and humanitarian situation is precarious for both groups. The UN High Commissioner for Refugees (UNHCR) has worked frantically to move refugee camps deeper into Chad as a result of ongoing cross-border raids and has begun limited operations in accessible areas of Darfur.
Despite the government of Sudan's claims to be allowing unimpeded humanitarian access, humanitarian and human rights agencies still complain of bureaucratic obstructionism and lack of access to major swathes of the region. The International Rescue Committee (IRC), one of the few NGOs operating in the region, estimates that only 30 percent of the displaced are currently being reached by any assistance. Representatives of various humanitarian agencies are very concerned about the extent of starvation and malnutrition, particularly in children, as well as continued violence, human rights violations, acute water shortages, and disease. The U.S. Agency for International Development (USAID) has estimated that without immediate, unobstructed, and massive humanitarian relief, more than 350,000 people could die by the end of 2004. In all, 2.2 million people are at risk.
On July 30, the United Nations Security Council issued its first resolution focused on the situation in Darfur. The fact that it took the Security Council over a year to issue a single resolution on Darfur is in itself difficult for many observers to accept. The fact that the resolution does not even mention the possibility of sanctions or other UN action should the government fail to rein in the Janjaweed is, as humanitarian commentator John Prendergast has noted, "stunning."
Lack of International Action
There are a variety of reasons behind the reluctance of the UN Security Council and many individual member states to act more forcefully toward the Sudanese government. Economic interests in Sudan's oil supplies play a role. Arguably, however, the two most pressing concerns are fears of disrupting the peace process between the government of Sudan and the Southern People's Liberation Movement/Army (SPLM/A) and of infringing on Sudan's sovereignty.
The southern peace process is aimed at ending more than 20 years of war between the government and the largest rebel group in the southern part of the country. The last protocols of the peace framework for the south were agreed upon by the parties in late May—a development that inspired hopes for a lasting end to the conflict. However, concern on the part of many international actors about somehow disrupting or stalling the peace process has led, in many cases, to a so-called "softly, softly" approach. Though not defined, that approach has resulted in a profound reluctance to forcefully condemn the policies of the Sudanese government. It has further resulted in the inability of the UN to take or even threaten concrete action to ensure Khartoum's compliance with international demands for a halt to the violence and unobstructed access to the region for humanitarian agencies, a significant presence of human rights monitors, and a robust African Union-led protection force.
Further contributing to the hesitancy to act forcefully in the face of what many observers—including the U.S. Congress—have called genocide are concerns over setting a precedent of infringing on state sovereignty. By most accounts, the safeguarding of Sudan's sovereignty is behind the refusal of several members of the UN Security Council, most notably China, Algeria, and Pakistan, to endorse any resolution even mentioning the word "sanctions." Therefore, although a resolution (number 1556) was finally agreed upon at the end of July, it gave no concrete indication of precisely what would happen should the government of Sudan not follow the resolution's recommendations. Beyond that, resolution 1556 was most notable for its sheer vagueness: expressing the UN's intention, for example, to "consider further actions…in the event of non-compliance." The government of Sudan has shown clearly in the past that while it does react to the threat of use of force by the international community, it rather easily ignores unspecific warnings. And in fact, that is precisely what it has done.
An Ongoing Humanitarian Crisis
Since resolution 1556 was agreed upon on July 30, UNHCR has reported an increase in the number of refugees fleeing to Chad, violence has continued nearly unabated, and nearly all refugees and internally displaced persons, still fearing for their lives, have refused to return to their villages. There are reports of forced return of IDPs to devastated villages, where they not only have no means of survival, but continue to be victimized and killed by the Janjaweed. Although the Sudanese government pledged several weeks ago to disarm and punish the Janjaweed, human rights organizations suggest that members of the militia are actually being recruited into the armed forces supposedly sent to protect the displaced.
Meanwhile, the humanitarian crisis has by all accounts become much worse. The onset of the rainy season has not only made it more difficult for relief convoys to access remote refugee and IDP camps, but also has increased the prevalence of disease throughout Darfur. Humanitarian appeals have so far only been funded at 50 percent, and organizations such as the UN's World Food Program are in dire need of more resources. USAID has upwardly revised its figures of the number of affected and at-risk civilians several times, most recently to 2.2 million people.
Key Issues Moving Forward
There are several key issues to keep in the forefront of policy discussions. First is Darfur's neighbor, Chad. Though to date Chad has been a welcoming host to the 200,000 refugees, it is itself a very resource-poor country, particularly in terms of water. An additional 200,000 people in an already strained environment cannot help but put pressure on local populations, and cross-border raids by the Janjaweed only exacerbate the problems. Humanitarian agencies have recognized the urgent need to ensure that all in need, whether refugees or not, are taken care of to the extent possible. Of course, this also necessitates fulfillment of expanding humanitarian appeals. The international community—and even more so the civilians of Darfur and Chad—can ill afford a full-blown regional security and humanitarian crisis.
The second key issue is the status of the Janjaweed. Nearly all major international observers, including the UN Commission on Human Rights, the U.S. government, and UN Secretary-General Kofi Annan, have found abundant proof that the Janjaweed was created, armed, and is supported by the government of Sudan. Khartoum denies this accusation, claiming it cannot control the Janjaweed. However, it has also said, particularly in light of resolution 1556, that it is in the process of arresting Janjaweed leaders and disarming combatants. As mentioned above, the government has reportedly recruited Janjaweed members into local police forces. This policy underscores the need for international—preferably African Union—human rights monitors and protection forces.
Third, many Sudan analysts have noted that the grievances expressed by the Darfur rebel forces—namely neglect, exclusion from power, and exploitation of local resources—are shared by other peripheral regions of Sudan, including the south, the Nuba mountains, and others. This being the case, it is difficult to see how the southern peace process can fully succeed with an ongoing war next door. It is therefore a risky endeavor to treat the two conflicts as entirely separate. Rather, the only sustainable peace process for the south, for Darfur, and for the rest of the fractious country will be one which addresses regional grievances and needs in a comprehensive manner.
Last of all is the question of whether or not what is occurring in Darfur is truly genocide. With regard to the crisis there, the most relevant section of the 1948 Convention on the Prevention and Punishment of the Crime of Genocide is Article 2. It defines genocide, in part, as "any of the following acts committed with intent to destroy, in whole or in part, a national, ethnical, racial or religious group, as such: (a) killing members of the group; (b) causing serious bodily or mental harm to members of the group; (c) deliberately inflicting on the group conditions of life calculated to bring about its physical destruction in whole or in part."
Following this definition as written, many observers have come to the conclusion that the conflict and man-made humanitarian catastrophe in Darfur do indeed constitute genocide. Others, however—most notably those states reluctant to impose sanctions on Sudan or to take other forceful actions—refuse to use the genocide label for fear of thereby necessitating international intervention.
There are two main facets to this issue: the first is that the oft-quoted "action requirement" of the Genocide Convention is in fact to prevent genocide before it occurs, not to wait until there is indisputable proof that it has already occurred. By that time, of course, it is far too late. Second, once again, is the issue of precedent. As discussed above, several states have argued that intervening in Darfur will set a dangerous precedent of infringing on state sovereignty. However, many analysts argue that not intervening in the face of genocide (or even of "merely" mass atrocities) sets a much more dangerous precedent—one of allowing the death of possibly more than a quarter million civilians. | <urn:uuid:26f3cf07-29d4-49cd-8aef-0c1dcf9f8de2> | CC-MAIN-2016-26 | http://www.migrationpolicy.org/article/darfur-beyond-brink-disaster | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783402479.21/warc/CC-MAIN-20160624155002-00043-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.95653 | 2,299 | 2.765625 | 3 |
Suicide prevention, awareness coalition to support
According to the Centers for Disease Control and Prevention, suicide is
the second leading cause of death among 25- to 34-year-olds, the third
leading cause of death among 15- to 24-year-olds and the 11th leading
cause of death for all Americans. Every 18 minutes a person dies by
suicide, which claims more than 33,000 lives each year.
The major causes of suicide are undiagnosed or untreated mental health
conditions, trauma, and drug and alcohol abuse and dependence. The risk
for teenagers is higher because they are prone to acting impulsively
under severe stress. Examples of stressors that often trigger teens are
relationship problems, difficulties in school and pressure from parents.
MUSC Level One Trauma Center is hosting a Walk to Prevent Suicide as a
part of Out of the Darkness Community Walks. The walks are 1- to 5-mile
walks taking place in more than 200 communities across the country this
fall, with the proceeds benefiting the American Foundation for Suicide Prevention.
Participating in this event will help raise money for research and
education programs to prevent suicide, increase national awareness
about depression and suicide, advocate for mental health issues and
assist survivors of suicide loss. The walk is being held at 2 p.m. Nov.
7 at Hampton Park. Registration is from 1 to 1:30 p.m. To
register or donate, visit http://www.outofthedarkness.org
and under “team” type MUSC Level One Trauma Center.
The committee supporting this walk is working to form a suicide
prevention and awareness coalition with a group of survivors of suicide
and a multi-disciplinary team from the MUSC community and outside
organizations. This group, headed by Martina Mueller, Ph.D., assistant
professor in the College of Nursing, meets at 5:15 p.m. every three to
four weeks on Tuesdays in Room 211 of the College of Nursing. For the
next meeting, contact Mueller at firstname.lastname@example.org or email@example.com.
General information on suicide prevention also is available at http://www.afsp.org or the National
Suicide Prevention Lifeline at 800-273-Talk (8255). Locally the 211
hotline is available 24 hours a day for supportive services:
800-922-2283 or for teens 747-teen.
Wednesday: MUSC Level One Trauma Center and Safe Kids Trident Area will
be available from 10 a.m. to 1 p.m., Oct. 13 in the university hospital
near Starbucks to discuss pedestrian safety.
- Introduction to
Pilates: A free Pilates class will be held from 12:15 to 12:45 p.m.,
Oct. 12 at the MUSC Wellness Center. Participants will also receive a
free one-day pass to the Wellness Center. E-mail
firstname.lastname@example.org or call 792-4471 to register.
Friday, Oct. 15, 2010 | <urn:uuid:5d3d9a1f-7231-40a3-9b8d-1a597560f1df> | CC-MAIN-2016-26 | https://depthtml.musc.edu/catalyst/2010/co10-8suicide.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783403826.29/warc/CC-MAIN-20160624155003-00156-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.900308 | 655 | 2.75 | 3 |
Canadian Genealogy Articles
Canadians united against American forces to defend their country during the War of 1812. Here's how to research your War of 1812 genealogy in Canada.
It's easy to discover your family's French-Canadian connections—just cycle through the resources and records in our guide.
For help tracing your roots in Canada, look to the Library and Archives of Canada.
Don't let dit names get your French-Canadian research off-track.
Find Canadian ancestors without delay—two computerized censuses can speed your search.
Where on the web to find your Canadian kin in censuses.
You've got questions about discovering, preserving and celebrating your family history; our experts have the answers.
Got ancestors for whom Canada is home and native land? Get started exploring your Canadian roots with this step-by-step guide. | <urn:uuid:204e3b0e-4521-49a5-a768-17924de054b9> | CC-MAIN-2016-26 | http://www.familytreemagazine.com/articlecategory/canadian/L0/HeadlineText/Descending | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783394937.4/warc/CC-MAIN-20160624154954-00158-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.898617 | 176 | 2.75 | 3 |
Come Visit the Only Pre-Revolutionary Court House in Maine. The Pownalborough Court House still stands on its original site on the banks of the Kennebec River in Dresden. Listed on the National Register of Historic Places, the Court House is a remarkable example of colonial New England architecture.
- Visit the museum and learn about its history and collections from our trained docents
- Picnic on the grounds, visit the period garden or walk the nature trail along the river
- Hike a trail in the forest across the road
- Visit the cemetery with the graves of Revolutionary War, War of 1812 and Civil War veterans
- Come to an encampment event and watch living historians reenact colonial activities
Designed by Boston architect Gershom Flagg and built in 1761 by the Kennebec Proprietors for the newly created Lincoln County, the Pownalborough Court House received such notable visitors as John Adams, Benedict Arnold, Robert Treat Paine, William Cushing, Reverend Jacob Bailey and two future Massachusetts governors: David Sewall and James Sullivan. Numerous trials were held here, including that of Judge North which was featured in the Pulitzer Prize winning book, The Midwife's Tale by Laurel Thatcher Ulrich, based on the diary of local resident Martha Ballard (1735-1812).
The Court House also served as a tavern, a place for church services, a dancing school, and as the Dresden Post Office from 1807-1855. In addition to its vital role in the legal history of Lincoln County and Maine, the Court House was a family home. From 1761 - when Captain Samuel Goodwin, an original Kennebec Proprietor and captain of the guard at Fort Shirley, moved his family from the guardhouse into the Court House - until 1954, his descendants used the building as a house.
The Court House is a beautiful site for a wedding or party.
This language arts lesson involves multicultural fairy tales
Grades: 5, 4, 3, 2
Multicultural Fairy Tales by D. Van Zee
Michigan State Standard: #5 All students will read and analyze a wide variety of classic and contemporary literature to seek information, ideas, enjoyment and understanding of their individuality, our common heritage and humanity, and the rich diversity in our society. (also Standards 9 & 10)
Standard: #2 Meaning and Communication
State Benchmarks: Compare and contrast story elements, key ideas, concepts and varied perspectives found in multiple texts from around the world.
Lansing objective: The learner will continue to compare and contrast story elements (character, setting, plot, theme, point of view, etc.) key ideas and concepts found in multiple texts from around the world.
With the whole group, share the original story of Cinderella by Charles Perrault. Discuss the elements that make it a fairy tale (happened long ago, there's an element of magic, it ends with the main characters living "happily ever after", etc.). Share that many cultures throughout the world have similar tales. On a wall-size chart, make a matrix with the following headings going horizontally across the paper:
TITLE, COUNTRY, SETTING, CHARACTERS, PROBLEM, SOLUTION, MAGIC, BEGINNING WORDS, ENDING WORDS
As a whole group, fill in the columns with information from Perrault’s Cinderella. Next ask children to volunteer to write the information on 4×6 cards. Each piece of information should be illustrated on the same cards. Attach each illustrated card in the appropriate column.
Assign groups of 4-5 children a version of the fairy tale theme from another culture to investigate. Give them 4×6 cards to write the information and illustrate the card. For instance, the title card would have a jacket cover illustration for the story and the title written on it. The characters card would have an illustration of the characters and their names written in sentence form. (And so on for each category.) After each group has had a chance to read, write and illustrate the information for their assigned version, they attach the cards in the appropriate places and report orally to the class on what they found. The whole class can then compare and contrast the different versions.
Revisit: Use multiple versions of other fairy tales such as The 3 Bears, The 3 Little Pigs or 3 Wishes stories.
Enrichment: Have children write their own version representative of their environment, or have them research to find other cultures' versions of the Cinderella tale.
Assessment: Completed matrix (group); completed inquiry (individual). I can name three ways the stories were similar. I can name three ways the fairy tales were different. I can list three characteristics of fairy tales.

Resources: The Rough-Face Girl, Rafe Martin; The Egyptian Cinderella, S. Climo; Yeh-Shen, retold by A. Louie; The Korean Cinderella, S. Climo; Cinderella, C. Perrault
Economist and Nobel Prize laureate James M. Buchanan remarked to the Wall Street Journal in 1996 that "just as no physicist would claim that 'water runs uphill,' no self-respecting economist would claim that increases in the minimum wage increase employment." Of course, this statement remains broadly true today, but the advent of better data, improved statistical techniques and the proliferation of country studies have made economists far more careful about pre-judging the impact of minimum wages on employment and wages. Indeed, in a now famous study of fast food restaurants in New Jersey and Pennsylvania, David Card and Alan Krueger showed how the imposition of a minimum wage had no significant disemployment effects, and in some cases increased employment, arising out of a large enough increase in demand for the firms' products.
The evidence for South Africa, some twenty years after the demise of apartheid, is equally compelling. In a two-part study, my co-authors and I find an intriguing set of contrasting economic outcomes, from the imposition of a series of sectoral minimum wage laws. In South Africa, the minimum wage setting body, known as the Employment Conditions Commission (ECC), advises the Minister of Labour on appropriate and feasible minimum wages for different sectors or sub-sectors in the economy. Currently, the economy has in place 11 such sectoral minimum wage laws in sectors ranging from Agriculture and Domestic Work, to Retail and Private Security.
Our first study examined the impact of the minimum wage in Agriculture on employment, hours of work, real wages and, finally, worker protection. The second paper estimated the minimum wage effects on the same variables, but this time for five other minimum wage affected sectors, namely Retail, Domestic Work, Forestry, Taxi Operators, and Private Security. Both studies utilized the same data source, namely the nationally representative Labour Force Surveys, and applied identical statistical techniques. The latter, incidentally, were those utilized by Card and Krueger in their seminal work.
So what were these results? Well, in the case of Agriculture, the minimum wage had a swift and large negative impact on employment. Employment fell in the year after the imposition of the minimum wage by close to 150,000 workers, representing a 17% decline. This was confirmed by the conditional probability of an individual in the sample working as a farmworker having fallen by around 10% in the period after the law. In turn, farm owners did not reduce hours worked in a bid to control higher wage costs, as hours worked after the minimum wage was introduced did not change significantly. Farmworkers gained, however, as our estimates show that farmworker wages in the post-law period increased on average by 17.6%. Finally, worker protection improved, as the share of workers with a written employment contract rose sharply from 34% in the pre-law period to reach 57% by the end of 2007.
In contrast, however, our results for South Africa's remaining five minimum wage schedules, for which we are able to measure impact, suggest a far more benign effect on employment. Econometric evidence for the Retail, Domestic Work and Private Security sectors actually suggests that the probability of getting employment in these sectors was significantly higher in the period after the introduction of minimum wages. Only the Taxi sector shows a statistically significant drop in the likelihood of employment. For the Forestry sector this probability did not change significantly. In turn, these five sectoral minimum wage laws were associated with positive increases in real hourly earnings afterwards in four of the five sectors, with Forestry being the outlier. It does appear that the minimum wage had an effect on the utilisation of workers in terms of the usual number of hours worked per week. Statistically significant declines in hours of work occurred in Retail (a 4.5% decline), Domestic Work (7.7%; 3.3 hours) and the Security sector (4.5%; 2.7 hours). These also are the sectors where the employment numbers continued to rise after minimum wage laws were enacted. This suggests that employers may have started to reduce the usual work hours of employees in order to afford higher hourly wages. However, when investigating the effect of the laws on real monthly income, the Retail, Domestic Work and Security sectors showed an increase in real hourly wages that was sufficient to outweigh any reductions in hours worked: workers ended up being better off in the aggregate. Workers in the Forestry and Taxi sectors appear to have been unaffected in real income terms. The link between new real hourly wages and amended hours of work following the advent of a minimum wage therefore is crucial.
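The interplay between a higher hourly wage and a cut in hours can be checked with simple arithmetic: monthly earnings move with the product of the two proportional changes. The sketch below illustrates this; the 20% wage increase is a hypothetical figure chosen for illustration, while the 7.7% hours decline is the one reported above for Domestic Work.

```python
def monthly_income_change(wage_change, hours_change):
    """Net proportional change in monthly earnings when hourly wages
    and usual hours both change. Arguments are proportional changes,
    e.g. 0.20 for a 20% rise, -0.077 for a 7.7% fall."""
    return (1 + wage_change) * (1 + hours_change) - 1

# Hypothetical 20% hourly wage rise combined with the 7.7% decline
# in usual hours reported for Domestic Work:
net = monthly_income_change(0.20, -0.077)
print(f"Net change in monthly earnings: {net:+.1%}")  # roughly +10.8%
```

Under these illustrative numbers the hourly gain dominates, which is the pattern the study reports for Retail, Domestic Work and Security.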
Ultimately though, these two studies illustrate the care that needs to be taken when attempting to pre-judge the potential economic consequences arising from the imposition of a minimum wage. Factors as wide-ranging as the level at which the minimum wage is set, the economic conditions prevalent within the sector, the wage elasticity and so on – all serve to influence the impact of an instrument which has become a key active labour market policy intervention in the developing world. | <urn:uuid:b92b2724-3975-4313-ade9-9ca9df615fff> | CC-MAIN-2016-26 | http://blogs.worldbank.org/futuredevelopment/print/tale-two-impacts-minimum-wage-outcomes-south-africa?page=1 | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397428.37/warc/CC-MAIN-20160624154957-00022-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.963026 | 1,008 | 2.671875 | 3 |
This text was copied from Wikipedia on 21 June 2016 at 3:22AM.
Sir Charles Dormer of Wing, 3rd Baronet, 2nd Earl of Carnarvon, 2nd Viscount Ascott, 3rd Baron Dormer of Winge (25 October 1632 – 29 November 1709) was an English peer.
Baptised in St Benet's in London, he was the son of Robert Dormer, 1st Earl of Carnarvon and Lady Anna Sophia Herbert, daughter of Philip Herbert, 4th Earl of Pembroke. Dormer was educated at the University of Cambridge, where he graduated Master of Arts in 1648. In 1643, on his father's death at the First Battle of Newbury he succeeded to his father's titles and became Headmaster of Immanuel College and Keeper of the King's Hawks.
Carnarvon was witty, hospitable and extravagant; Samuel Pepys records his saying that God provides timber so that men may pay their debts. He rarely spoke on public affairs, but his intervention in the House of Lords debate on the impeachment of the Earl of Danby in 1678 may have been crucial. In a speech of great wit and humour he drew examples going back over a century to show that managing the impeachment of another public figure was virtually a guarantee of being impeached oneself and cheerfully urged his fellow peers to "mark the man who first dares to run down Lord Danby and see what becomes of him". The Lords then voted not to commit Danby to prison until he had been heard in his own defence.
He was a friend of the future Queen Anne, and was one of the few who remained loyal to her after her violent quarrel with William and Mary in 1692 led to her banishment from Court. When Anne was reconciled with William after Mary's death in 1694, Carnarvon noted with cynicism the large crowds at her house, and said he hoped she would remember the time when none of them called on her.
He died in Ascott House and was buried in Wing in Buckinghamshire. With his death the earldom and the viscountcy became extinct, while the baronetcy and barony were inherited by Rowland Dormer, a grandson of the second son of the 1st Baron Dormer.
Dormer was married twice, firstly to Hon. Elizabeth Capel, daughter of the 1st Baron Capel, around 1653. They had 3 daughters and 1 son:
- Lady Anna Sophia Dormer
- Lady Elizabeth Dormer married Philip Stanhope, 2nd Earl of Chesterfield
- Charles Dormer, 3rd Viscount Ascott, born 25 June 1652, died before 1673. He matriculated at Christ Church, Oxford University, Oxford, Oxfordshire, England, on 22 April 1664. He graduated from Merton College, Oxford University, Oxford, Oxfordshire, England, on 8 September 1665 with a Master of Arts (M.A.).
- Lady Isabella Dormer, born 27 Aug 1663, married Charles Coote, 3rd Earl of Mountrath
After his death his estate passed to his daughters Elizabeth and Isabella.
Isaiah 53:1–12
May 4, 2006
By Cecil Taylor
Related Scripture: Isaiah 53:1–12
Explore the Bible
Dean, School of Christian Studies, University of Mobile
RECOGNIZE GOD’S WAYS
In the wisdom of this world, power and influence are everything. Such thinking, however, is contrary to the ways of God. He works through the weak and despised to accomplish His glorious purposes (1 Cor. 1:26–28). His ways are not our ways (Isa. 55:8).
Isaiah wrote a series of songs about a “Servant of the Lord” who suffers to provide redemption. This is one of those songs (52:13–53:12). Some claim the Servant is the nation of Israel, which suffers to bring the redeeming knowledge of the Lord and His law to the world; for Christians, the Servant is Jesus of Nazareth. It is not that God works through others redemptively in precisely the same way He worked for redemption through Jesus Christ but that in Jesus, God demonstrated the unexpected ways in which He works.
God May Use Unlikely People (1–3)
The Servant comes of lowly origin. He is like a useless “sucker” on a plant that threatens full harvest. He is a “root out of parched ground,” i.e. void of divine blessing as contrasted to a well-watered plant (Ps. 1:3), i.e. divinely blessed. Parched ground is a most unpromising habitat. He is also unattractive. He is no conquering hero, no overpowering warrior, no shrewd politician. He is an outcast from society, despised (the Hebrew word indicates someone worthless, insignificant or unworthy of attention), friendless, a man of “pains and sickness” (literally). “Not esteem” means “to estimate at nothing.”
Jesus was not a person of wealth or influence. Only a Palestinian peasant was He, a poor carpenter, but God did His work through Him. God often works through persons whom society rejects or thinks insignificant.
God May Use Unexpected Means (4–6)
The Servant suffers intensely. Words such as grief, sorrow, stricken, smitten, afflicted, wounded, crushed, chastening and scourging pile up to indicate terrible suffering. Like the friends of Job, observers think God is punishing him for his own sins but they are wrong. He is bearing the sins of others to reconcile them to God. That the Servant dies a substitutionary death is repeated 13 times in different ways in these verses.
Sin is universal (“all”); it separates (“gone astray”); it is self-will (“turned to his own way”); and its only cure is substitution (“the iniquity of us all [falls] on Him”). Those who mistreated Jesus meant it for evil, but God turned their actions to good. He worked even through the oppression of wicked people to achieve His purposes. It is hard to think God would use evil for His own ends but He does. Other unexpected means He uses as well.
God May Use Undeserved Suffering (7–9)
The Servant patiently endures unjust treatment without a word, without resentment or rebellion, voluntarily giving his life for the transgression of his people. There is here a statement regarding the Servant’s trial and death, mainly revolving around two strong phrases — “taken away” and “cut off” — and one about the Servant’s burial. Those responsible for his death assign him a grave with the “wicked,” but God intervenes and associates the Servant “with a rich man (one whom God had honored; Ps. 67:1–2) in his death.”
Jesus suffered patiently and without complaint for the sins of others. Often God works through undeserved suffering to accomplish His end and shape His people.
God Rewards the Faithful (10–12)
By suffering death, the Servant accomplishes several things: He will see his family (of faith); he will prolong days (perhaps implying resurrection); he will fully accomplish the purpose (“pleasure”) of God; by his work, many will “be accounted righteous” (the technical phrase for acquittal in a court of law); and, on account of his sacrifice, he will be counted by God among the greatest of earth. The figure in the first part of 53:12 is that of a general dividing the booty of battle to his strongest and best warriors.
It was for the “joy set before Him” that Jesus “endured the Cross, despising the shame” (Heb. 12:2). God appropriately rewards faithful servants through whom He works — if not in time, then in eternity. | <urn:uuid:868d8059-ac4c-4f69-a8bf-91e4b620d144> | CC-MAIN-2016-26 | http://www.thealabamabaptist.org/print-edition-article-detail.php?id_art=10106&pricat_art=5 | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783394414.43/warc/CC-MAIN-20160624154954-00149-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.963164 | 1,041 | 2.546875 | 3 |
This window enables you to review previously recorded panorama data sets. Like the `Panorama' window, it is only useful for visualising data that has been acquired all in the same plane. The spatial arrangement of such a data set can be seen in this example, illustrated using two different viewing angles in the `Outline' window.
This picture shows how all the images have been taken in approximately the same plane.
Here is the `Repanorama' window itself.
There is a slider to permit control of the distance over which gaps will be filled between the segments taken from each of the images. This can be set to 0, in which case it will be determined automatically as a function of the movement of the probe between each pair of slices.
There is also a `Save' button to enable image ppm files to be saved.
Note that if image registration has been applied, this window will show the registered data, not the original data. This can be used to remove the obvious probe pressure artfects in the example above, as illustrated here. | <urn:uuid:1fdea270-ba85-4670-9609-075187a69130> | CC-MAIN-2016-26 | http://mi.eng.cam.ac.uk/~rwp/stradx7.3/repanorama_window.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783408840.13/warc/CC-MAIN-20160624155008-00006-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.923699 | 217 | 2.515625 | 3 |
he son of Hugh Munro of Novar, Cromartyshire, Hector Munro enlisted
with Loudons' Highlanders in 1747, at the age of 21. He fought in the
Low Countries and Ireland before embarking for India with the 89th
Regiment of Foot in December 1760, arriving at Bombay in November 1761.
Three years later, at Buxar, Munro faced the forces of Mir Kasim and
Shah Alam II - some 50,000 men. Although hopelessly outnumbered, Munro
and 7,000 soldiers from the Bengal Presidency were victorious, thus
establishing British power in Bengal. Munro went on to take Pondicherry
from the French in 1778, and was knighted in 1779. However, his indecision
at Pollilur the following year precipitated
one of the greatest military disasters the British had known: the near
annihilation of Baillie's detachment and a grim sentence in Tipu's
prisons for any survivors. Munro fought under General Sir Eyre
Coote in 1781, when British confidence was restored with a significant
victory on 1st July at Porto Novo.
Munro returned to his native North Briton, where he was
well respected as a benificent and public-spirited country
gentleman. Much time was spent enlarging and improving his
estate, and a Plan of the Mains of Novar shows that the
field names still current in 1771 were changed in 1788 to
'Buxar Park';' Madras Park';'Mount Delly'; 'Calcutta Park.'
Scots back from India often felt themselves 'exiles' in
Scotland, so far away from the intense heat, vast landscapes,
numerous deities and languages, exotic courts and palaces
of India. Munro also received a cruel reminder of India's
dangers: his son was savaged by a tiger while picnicing
on Saugur Island, in December 1792, and died of his wounds.
The fatal catastrophe was reported in detail in The
Scots Magazine of July 1793, and as far away as Philadelphia
Portrait medallions such as this were intended to evoke the antique
cameos and gems of Classical Greece and Rome, and were produced in
a vitreous glass paste, in imitation of the antique. The formula for
this paste was invented by James Tassie, of Pollokshaws, near Glasgow,
working in Dublin with a physician and Professor of Physics, Henry
Quin. Tassie had already studied modelling at the Foulis Academy, Glasgow,
and from Quin he learned the art of manufacturing imitation gems and
cameos. Tassie moved to London in 1766, to develop this business, and
continued to supply Scottish clients and patrons, both with 'gems'
and with portrait cameos. From about 1769, he was also supplying casts
to the Wedgwood, and a complete collection of his pastes, gems and
cameos was dispatched (c.1783) to that voracious collector, Catherine
the Great of Russia. By 1791, the 'Catalogue of a general collection
of ancient and modern gems, cameos as well as intaglios, taken from
the most celebrated cabinets in Europe…by James Tassie, modeller…'
included descriptions of 15,800 items.
From about 1785, Tassie had employed Rudolph Eric Raspe
(1737-1794), to catalogue his extensive and expanding collection,
and Raspe published an initial Account of the collection
in 1786. A man of many talents, Raspe had also edited a
text of 'The Travels and Surprising Adventures of Baron
Munchausen,' (1785) in which the Baron juggles with canon
balls hurled at him by Tipu; unseats Tipu from his elephant;
fights Tipu hand-to-hand and finally kills Tipu with a single
blow of the sword. Among the illustrations 'from the Baron's
designs', three relate to these episodes, including 'The
Baron besieges Seringapatam.' Thus Raspe, the cataloguer,
Tassie, the modeller, and Munro, the sitter, not only share
professional associations, but also their associations with
Tipu, Seringapatam and the Mysore Wars. | <urn:uuid:4d851878-a8c6-4507-987e-90ec28011400> | CC-MAIN-2016-26 | http://www.tigerandthistle.net/scots431.htm | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396949.33/warc/CC-MAIN-20160624154956-00110-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.955013 | 941 | 2.75 | 3 |
Achumawi, Adai, Afro-Seminole Creole, Ahtna, Alabama, Aleut, Alutiiq, Arapaho,
Assiniboine, ... There are also many languages indigenous to North America or to
U.S. states or .... According to the A...
English is the language spoken by most people in the. United States. The official
language of many states is. English1 and it is the language used in nearly all ...
Nov 4, 2015 ... But try this sign on for size: "America—We Speak 350 Languages Here." ... and
many others, as well as 150 Native North American languages ...
Nov 3, 2015 ... Golden State Warriors coach Steve Kerr answers questions during a news ... the
wide-ranging language diversity of the United States,” said Erik Vickstrom, ...
Knowing the number of languages and how many speak these ... Languages
spoken at home are only part of the story for those policymakers.
Table 1: Indigenous Languages Spoken in the United States (by Language).
Reprinted by permission of the NCBE. How many indigenous American
Dec 30, 2015 ... The United States is a leader in international business and a ... is the most
commonly spoken language within the United States, and the ... Many Asians
have also moved into the US for other business purposes as well.
May 1, 2015 ... The most popular languages spoken in the US have been revealed, thanks to ...
given the ever-expanding diversity of language in the United States and the ...
Italian claims only 723,632 speakers – half as many as in 1980.
Information about the language situation in United States.
Aug 13, 2013 ... how many people have learned Spanish as a second language in Florida? .... As
far as I know, “Hispanic” is a legal term in the United States ...
Sep 5, 2013 ... I believe that Spanish will be the spoken language of the U.S. ... Although there
are many spanish speakers in the USA, it is still a language of ... | <urn:uuid:59caaa34-2df7-4b45-957a-4192aedf1210> | CC-MAIN-2016-26 | http://www.ask.com/web?q=How+Many+Languages+Are+Spoken+in+the+United+States%3F&oo=2603&o=0&l=dir&qsrc=3139&gc=1&qo=popularsearches | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396959.83/warc/CC-MAIN-20160624154956-00109-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.899549 | 434 | 3.328125 | 3 |
Cooperative conservation with farmers and ranchers
With land values and crop prices increasing, landowners are finding it more and more difficult to protect their land and pocketbooks. In 2009, DU worked with members of Congress on two bills—one in the House and one in the Senate—to continue tax incentives that encourage landowners to conserve their land. Representatives Mike Thompson (CA) and Eric Cantor (VA) and Senators Max Baucus (MT) and Charles Grassley (IA) sponsored the bill to extend these incentives. For the latest information on these incentives and what you can do to help encourage Congress to extend them, visit www.ducks.org/taxes.
Another way that DU is promoting private land conservation is by working with Congress to develop concrete policies for the ecosystem services that conservation provides. Adopting management practices that provide cleaner air and water and protect against soil erosion benefits everyone, but incentives for landowners to adopt these practices are limited. There are several bills that would create these incentives for biological offsets for things like greenhouse gases that DU staff are following. | <urn:uuid:d505b553-e4fa-4125-bfa5-db4026c9883c> | CC-MAIN-2016-26 | http://www.ducks.org/conservation/public-policy/working-for-waterfowl-in-washington/page3 | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783400031.51/warc/CC-MAIN-20160624155000-00045-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.939093 | 213 | 2.65625 | 3 |
the Army and the Asiatic Fleet with Japanese diplomatic ciphers and keys for manual systems before they came into use.
The decade of the 1930s also witnessed a resurgence of U.S. Army interest in cryptanalysis. In 1930, after the collapse of Yardley's New York "Black Chamber,"32 William F. Friedman was tasked to create an Army cryptologic capability in the office of the Chief Signal Officer. Starting with four civilian students whose names have become bywords in the U.S. cryptologic community -- Frank Rowlett, Solomon Kullback, Abraham Sinkov, and John Hurt -- Friedman began the slow and difficult training process which would ultimately lead to the compilation of War Department codes and ciphers and the solution of foreign military and diplomatic codes and ciphers.
It was to this embryonic work force that the Navy turned in 1931 for help against two cryptographic targets which at the time almost completely occupied OP-20-G's efforts -- Japanese diplomatic and naval communications. The introduction of the Blue Book, as the Japanese Navy Operations code was known, in February 1931 (replacing the Red Book) and an unexpected surge in cipher traffic on diplomatic circuits had created an immense work load for Navy cryptanalysts. This forced the Navy to realize that it could no longer handle both targets and to seek a division of effort with the Army, with which it would furnish intercepted traffic until the Army could develop its own collection capabilities.33
Not willing to give up all diplomatic communications, however, the Navy -proposed that the Army analyze all counterpart Army radio communications and all diplomatic radio communications except for those of the four major naval powers, England, France, Italy, and japan. This arrangement, the navy claimed, would help it reduce an estimated two-year time lag in breaking the Japanese Blue Book.34
For a number of reasons, negotiations were not immediately fruitful. A primary cause of the lack of progress in the negotiations was that Army intercept sites, when established in the U.S. or even those existing in the Philippines, would not be able to hear low-power military radio transmissions. This unpleasant fact made the Navy's proposition partially irrelevant except in China. There Station A could and did irregularly intercept both Chinese and Japanese ground forces communications, which were provided to Army analysts. Talks continued without resolution until 1933, when a tentative position was developed from presentation to the Joint Army-Navy Board under a much broader heading, "Joint Effectiveness of Army and Navy Communications Systems." The joint proposal encompassed not only COMINT but communications and communications security matters as well. Possibly in return for its promise of cooperation on COMINT target distribution as outlined in 1931, the Army obtained concessions from the Navy in several vital areas including training intercept operators and in preparing a COMSEC Annex to Army War Plans. in addition, the Navy agreed to provide training for enlisted communicators and communications officers.35
In 1933 the official aim of both the Navy and the Army in the negotiations could be summed up in two words: "Cooperation" and "uniformity." Uniform communications, uniform censorship rules in wartime, uniform authentication systems, and common recognition signals for aircraft, local defense forces, and defense districts were goals which motivated both sides. Since the Navy already had such tools in place within its framework of naval districts and the Army lacked such a structure, it clearly made sense for the Army to consider building on the Navy's experience.
Regarding COMINT matters, however, joint agreements were harder to resolve. The fragile system of cooperation on COMINT targets almost collapsed within days of its tentative approval when it was disclosed by the Army that, inexplicably, the State Department had completely rejected the proposal insofar as it pertained to the Army's collecting diplomatic communications. According to internal Navy correspondence, Army negotiators from the office of the Chief Signal Officer discussed the proposed division of effort with the chief of the Army's War Plans Division, who informed the State
Department. State's rejection of the plan was reported in a memorandum to DNC by OP-20-G on 10 April 1933. Despite State objections, however, some degree of cooperation between the Navy and Army seemed assured.36
Convinced that the OP-20-G work load was already excessive, Safford originated several appeals to Rear Admiral Leigh Noyes, DNC, between July and September concerning the pitfalls of this approach. in October 1940, for example, he advised Noyes that the Navy did not want to do German, Mexican, and Italian traffic. he also said that the Signal Corps had little to do if it did not copy high-powered diplomatic transmitters since its stations could not hear the relatively low-powered military radios. he advised Noyes that the Navy should relinquish the entire diplomatic target rather than agree to the proposed Mauborgne scheme.
Before the study could be undertaken, the Army General Staff ordered the Signal Corps to copy the diplomatic circuits of Japan, Germany, Italy and Mexico.41 Although it meant wholesale duplication of collection, this directive left little room for the two departments to negotiate (no doubt to Safford's immense relief) and led eventually to the recommendation of August 1940 in which the U.S. Navy became responsible for deciphering and translating Japanese diplomatic and consular service messages on odd days of the month and the Army on even days (see Chart A). This narrow and highly simplified arrangement at least relieved Safford of the specter of two conflicting translations of the same message being delivered to the president. It did not, however, as will be seen, relieve the Navy's cryptanalytic and linguistic workload, particularly in 1941 as the crisis between Japan and the United States deepened and the number of diplomatic messages to and from Japan increased. The recommendation was nevertheless approved on 3 October 1940.42
|Army and Navy Sites Authorized to Intercept Diplomatic Traffic, August 1940|
|Site Location||Site Designator||Number of Collectors||Site Location||Site Designator||Number of Collectors|
|Fort Monmouth, NJ||1||19||Winter Harbor, ME||W||8|
|Presidio, CA||2||9||Amagansett, NY||G||4|
|Fort Sam Houston, TX||3||14||Cheltenham, MD||M||20|
|Corozal, CA||4||20||Jupiter, FL||J||4|
|Fort Shafter, HI||5||19||Bainbridge Island, WA||S||12|
|Fort Hunt, VA||7||24||Heeia, HI||H||8|
From 1924 to 1940, U.S. cryptanalysts adopted a system of color designations for certain high-level Japanese cryptographic systems. The Japanese diplomatic machine ciphers were designated Red for the A machine and, in 1939, Purple for the B machine which replaced it at many embassies. In 1939, a naval attaché machine cipher was introduced. It was designated Coral by the U.S. and was in use until 1945.43 The Japanese Navy's main operational code was designated Reed until 1930, Blue until 1938, and Black until 1940, when its designation was changed to JN-25, the Fleet General Purpose System.44
The Japanese Navy also employed several other cryptosystems to conduct it business which were not swept into the U.S. system of color designations. At OP-20-G, for example, one worker decrypted all messages in the Japanese navy-merchant vessel liaison code.45 U.S. cryptanalysts read the code in its entirety from the fall of 1939 to the tenth of August 1941.46 Six other Japanese naval systems were intercepted regularly. Two of these -- an auxiliary ship cipher and a minor general-purpose system -- were not worked. A third, an intelligence code, was considered of little importance after its contents were discovered, and it was ignored. The three remaining systems were worked intensively. They were the Japanese naval administrative system, a materiel system, and the fleet general-purpose system. The administrative and materiel systems had similar encipherment forms, and both encipherments were broken from time to time. When this occurred, two workers were assigned to recovery of the underlying codes. Success in the administrative system led to a limited capability to solve the general-purpose code. The materiel code was worked during the spring of 1940 in an unsuccessful attempt to learn details about the performance characteristics of the battleships Yamato and the Musashi, superbattleships built in violation of existing treaties, which were launched in 1941 and 1942, respectively. Regrettably, all recoveries on Japanese naval systems before Pearl Harbor yielded cryptanalytic technical information rather than current intelligence.47
In the grips of a rapidly expanding work load, the limited number of skilled U.S. cryptanalysts and linguists made it impossible to produce current intelligence except in the diplomatic field.48 The explosive growth of Japanese diplomatic and naval cipher traffic (1200 percent growth between 1930 and 1935)49 continued in both volume and numbers of systems throughout the 1930s. By the end of 1942, the Japanese Navy employed fourteen different minor systems which generated over 40,000 messages per
year in addition to messages obtained from the general-purpose system which, by November 1941, had reached 7,000 messages per month.50
The outgoing Blue Code was never used without a cipher to be stripped off before the code could be reconstructed. Navy cryptanalysts Safford, Dyer, and Driscoll solved the Blue Code in 1933, making possible the important successes against the Imperial Fleet exercises in 1934 and 1935. Their success had followed what was possibly the most difficult cryptanalytic task ever undertaken by the United States up to that time. in Safford's opinion, Driscoll's work in solving the system may have been even more brilliant than the Army's subsequent solution of the Purple machine because "there were no cribs or translations to help out."51 The introduction of IBM "tabulating machines" against the Blue Book was also a major advancement at the time.
The JN-25 system required three books to operate: a code book, a book of random numbers called an additive book, and an instruction book. The original code book contained some 30,000 five-digit numbers which represented Kana particles, numbers, place-names, and myriad other meanings. A key characteristic of this system was that, when the digits in a group were added together, the total was always divisible by three. The book of random numbers consisted of 30 pages, each of which contained 100 numbers on a 10 x 10 matrix. These numbers were used as additives -- they were added to the code groups digit by digit without the carryover used in customary addition -- thus enciphering the code. The instruction book contained the rules for using the aperiodic cipher. The number of each page and the number of the line on the page where the selection of additives began served as "keys" which were included in each message at the beginning and end. This code subsequently became the most widely distributed and extensively used of all of Japan's naval cryptosystems.52
Using improved IBM card sorting equipment and newly developed analytic techniques and noting similarities to an earlier four-digit "S" system stolen from a consulate, Driscoll and her colleagues were soon stripping off daily keys and additives in the Able, or first cipher, and slowly reconstructing the code.53 After investing a year in attempting to understand its components, OP-20-G put aside all work against the current JN-25 cipher during the summer and fall of 1940 in favor of slow but steady progress toward actual reading of the underlying code. After keys were recovered on each new cipher, the traffic itself was filed for later study.
Though they were working on year-old traffic, the cryptanalysts recovered a segment of the Able code which led to discovery of pattern messages, such as medical reports, and stereotyped messages containing noon positions for convoys. On 1 October 1940, the Japanese introduced the fifth Able cipher (Able Five). It was quickly diagnosed by OP-20-G analysts. Once the new keying system was understood, Washington policymakers decided that all units, including Hawaii, should begin working on the current cipher in the hope that by January 1941 the first JN-25 message of the new year would be read on the same day it was sent. By December 1940, U.S. cryptanalysts had recovered the system of text additive, two systems of keys, and the actual code groups for the numbers 000 through 999. At this point the only factor which seemed to prevent complete exploitation of JN-25 was lack of manpower. Out of the total cryptanalyst population in Washington at this time (thirty-six in December 1940), only from two to five people could be spared to work on this still unreadable system.54
With the progress made on recovery of the new code values, U.S. officials believed that the combined efforts of all units would again bring the system close to the point of reading current traffic by early summer 1941. Code recovery continued to progress well. Throughout the summer and fall of 1941, new discoveries about the nature of the code were routinely committed to a Registered Intelligence publication (RIP) and given wide if slow-moving distribution to the field units.
The actual reading of current Japanese messages before Pearl Harbor, however, was not to be. U.S. cryptanalysis of the ciphers had outstripped the U.S. capability for code
recoveries. That is, OP-20-G and Corregidor (as well and London and Singapore) had not recovered enough of the basic code, and JN-25 decrypts could not be produced in time to play a part in U.S. and policy or military decisions during this crucial period. Thousands of intercepted Japanese Navy messages in JN-25 were not exploited because, as a result of manpower shortages and higher priorities, the underlying code values remained unrecovered.57
These proved to be costly factors indeed, because analysts at Hawaii, Corregidor, and Washington never discovered the vital information contained in the untranslated messages. We now know that they contained important details concerning the existence, organization, objective, and even the whereabouts of the Pearl Harbor Strike Force, the Japanese Navy's First Air Fleet. Hidden in these messages was the full magnitude of the enterprise planned by the Japanese to begin on 7-8 December 1941. Had these messages been read on a current basis, it is possible -- even probably, given the analytic skills so evident in these centers -- that the early course of the war would have been significantly altered. unfortunately, most of the U.S. Navy's cryptanalytic effort was devoted to another Japanese cryptographic problem: recovering the daily cipher, translating the texts, and reading the Japanese diplomatic messages.
31. History of Signal Security Agency, Series III.hh, CCH History Collection (classified).
32. Herbert O. Yardley, The American Black Chamber, originally published in book form 1 June 1931.
33. 29 October 1931 memorandum from DNC to CNO via DNI, "Allocation of RI Activities Between the Army and the Navy," Series VII.19, Box 4, Vol. 1, pre-1942, CCH History Collection (classified).
34. 29 October 1931 memorandum from OP-20-G to CNO via DNI, Series VII.19, Box 4, Vol. 1, CCH History Collection (classified).
35. 12 April 1933 memorandum for the Director of Naval Communications from J.W. McClaran, OP-20-G, Series VII.19, Box 4, Vol. 1, CCH History Collection (classified).
36. There is no record of this episode in the Department of State files.
37. RG 220, NA. Proceedings of the joint Board, National Archives, Joint Board Memorandum, 24 April 1933.
38. A series of unclassified memoranda between CNO, the Joint Board, and the Secretary of War during the period March-July 1933, Joint Board #319, Serial 516. RG 220, NA.
39. Interview Prescott H. Currier, Captain USN (Ret), 14 November 1980, by Robert Farley and Henry Schorreck, OH 39-80 (classified).
40. SRH-149, SRH-305, RG 457, NA.
41. OP-20-G memorandum Serial 051220 dated 25 July 1940, "Coordination of Intercept and Decrypting Activities of the Army and Navy," Series VII.18, Box 4, 19, CCH History Collection (classified).
42. See 14 February 1946 memorandum for OP-20-4 from Captain L.F. Safford, "Responsibility for Decoding and Translating Japanese intercepts," Series VII, Box 4, 19, CCH History Collection (classified).
43. "History of FNA 20" (CORAL), Vol. I. NSA Cryptologic Archival Holding Area (classified).
44. SRH-305, RG 457, NA, and SRH-149, RG 457, NA.
45. This code was stolen without the knowledge of the Japanese soon after its introduction in 1939. "History of OP-20-GYP," 2, Series IV.W.I.5.12, CCH History Collection (classified).
46. "History of OP-20-3-GYP," 2. Series IV.W.I.5.12, CCH History Collection (classified).
49. "Military Study Communication Intelligence Research Activities," United States Navy, 30 June 1937, SRH-151, RG 457, NA.
50. History of OP-20-GYP-1, Series IV.W.I.5.13, CCH History Collection (classified).
51. SRH-149, RG 457, NA; see also 7 December 1929 memorandum to DNI from DNC, "Radio Intelligence.: At that time the main purpose of the Research Desk as deciphering Japanese codes, Series III.G.0, CCH History Collection; Japanese traffic volumes, Howe monograph, Early Background of the U.S. Cryptologic Community, Series VII.15, CCH History Collection; Army-Navy DOE, 1931, Navy proposed to divide diplomatic communications based n availability of intercept and Naval power, Series VII.19, Box 4, Vol. I pre-1942, CCH History Collection (classified); Japanese Blue Book introduced February 1931, SRH-305, RG 457, NA.
52. "History of OP-20-GYP-1," Series IV.W.I.5.13, CCH History Collection. See also Edwin T. Layton, And I Was There, (New York: William Morrow and Co., 1985), 77.
53. "History of OP-20-GYP-1," Series IV.W.I.5.13, CCH History Collection (classified).
55. Ibid.; see also Layton, And I Was There, 77-78, 249.
56. Ibid.; see also "History of OP-20-GY Series IV.W.I.5.10-12:13, CCH History Collection (classified).
57. "History of OP-20-GYP-1" (classified). | <urn:uuid:0f5ce7fd-751e-44bc-a3c6-6d41feb844a1> | CC-MAIN-2016-26 | http://www.ibiblio.org/hyperwar/PTO/Magic/ComInt-1924-41/ComInt-2.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783403508.34/warc/CC-MAIN-20160624155003-00082-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.956282 | 3,993 | 3.171875 | 3 |
The backbone of this movement began with the temperance crusaders of the nineteenth century. Clergymen, politicians, business leaders, and social reformers, concerned about the nation's health, morality, and economic prosperity, blamed society's increased drinking for the deterioration of these mores. Evangelical Protestants thought that visible indicators of God's good graces, such as economic success and political liberty, had their foundation in sobriety (Kyvig, 6). These reformers began banding together to form close-knit societies aiming to push their social and political agendas, focused on enacting legislation that would curb widespread intemperance. The Prohibition Party (1869) and the Women's Christian Temperance Union (WCTU) (1873) became the first of these well-organized groups that put prohibition issues on many state and local ballots toward the end of the nineteenth century. In 1893, like-minded reformers joined together to assemble the powerful Anti-Saloon League, generating a dedicated political machine targeted at outlawing the consumption of alcohol (Kyvig, 6-7). As these temperance organizations became a "political force to be reckoned with" by the early twentieth century, Progressivism, a second cornerstone of the Prohibition movement, appeared on the political horizon (ibid.).
By 1903, the Progressives and temperance organizations had already made their presence known on a national scale. Fully one-third of the United States (at that time about thirty-five million people) lived under some sort of prohibitory legislation. The effect on national government came ten years later when Congress passed the Webb-Kenyon Act, outlawing the shipment of alcoholic beverages between wet and dry states. The dry forces had succeeded so well in filling the Senate and House of Representatives with their allies that a national act of prohibition was a near certainty. In December of 1917, the 18th Amendment easily passed through Congress and headed to the states for ratification (Lender, 129). As the Amendment left Washington, D.C., events in Europe assured its ratification.
The last and most immediate of the motivating factors behind the enactment of the 18th Amendment was World War I. Rabid patriotism and widespread wartime conservation spelled the end of legal alcohol manufacturing.
The language of the 18th Amendment was more simply worded than that of other amendments to the Constitution. The first section prohibited the manufacture, sale, and transportation of alcohol, as well as its importation into and exportation from the United States or any of its territories. The second section granted Congress and the governments of the individual states the power to enforce the new law concurrently. On the federal level, the power to enforce the Amendment came from the National Prohibition Act. Originally written by the ardent dry Wayne Wheeler, the president of the Anti-Saloon League, it found an enthusiastic sponsor in Minnesota Representative Andrew Volstead, from whom it acquired its popular name, the Volstead Act. Wets had expected a loose definition of alcohol, such as an allowance for beer containing 3.2% alcohol (as was the case in many of the states' older prohibitory legislation), but the Volstead Act proved far more stringent than any of them could have imagined. It defined "intoxicating liquors" as any beverage of more than .5% alcohol; the Volstead Act went so far as to regulate the use of sacramental wines in religious ceremonies and medicinal liquor (ibid.). Even the lightest wines and beers now found themselves outlawed by the Constitution. With this victory, the drys thought they could celebrate a triumph over their age-old foe once and for all, but a host of factors during the Prohibition Era led to its early demise in December of 1933.
Primarily, the notions many people had of personal liberty never succumbed to the power of the Constitution, as many prohibition crusaders had expected. Believing that American citizens would automatically obey the edict of America's most hallowed document, Congress grossly underestimated the enforcement effort that outlawing alcohol would require. Assuming there would be only a small number of violators of the Volstead Act, John Kramer, head of the newly created Prohibition Bureau, lobbied Congress for a mere $5 million a year to enforce the law. He estimated that, over time, his agency would require an ever smaller budget as more people got used to a dry republic. With that budget and a force of 1,526 agents, Kramer aimed to police the nineteen thousand miles of American coastline and borderlands as well as a population of nearly 110 million people - roughly one agent per 12 miles of coastline and borders, or one agent per 71,000 people (Lender, 149-150). The extreme lack of federal enforcement resources left American citizens with widespread access to alcohol of all types.
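As a quick arithmetic check on the enforcement ratios quoted above (the mileage, population, and agent counts are Lender's; the per-agent figures in the essay are rounded), the division works out as follows:

```python
# Sanity check of the Prohibition Bureau's enforcement ratios.
# All input figures are those cited from Lender (149-150).
coastline_and_border_miles = 19_000
population = 110_000_000   # "nearly 110 million"
agents = 1_526

miles_per_agent = coastline_and_border_miles / agents
people_per_agent = population / agents

print(f"{miles_per_agent:.1f} miles of coastline and border per agent")  # ~12.5
print(f"{people_per_agent:,.0f} people per agent")                       # ~72,000
```

Exact division gives roughly 12.5 miles and 72,000 people per agent, consistent with the essay's rounded figures of 12 miles and 71,000 people.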
People could go to a variety of places to get liquor. Not only was the local speakeasy available, but covert breweries remained active, along with traditional grassroots efforts to make alcohol with homemade stills and bathtub gin. Rum run in from the Caribbean and whiskey brought down from Canada also supplied the still-active thirst Americans had for liquor. However, nowhere was the business of booze more active than in the city.
With their vast networks of corrupted officials, tightly controlled territories, and loyal gang members, gangster bootleggers thrived in what they considered merely supplying a service. Al Capone, the gangster-film archetype and the most notorious of the Prohibition Era gangsters, publicized his brazen flouting of the law by saying, "Prohibition is a business, all I do is supply a public demand" (Kyvig, 26). The demand did not appear to be abating; as the cities grew, it actually seemed to be swelling. The growing city, a significant part of the gangster's life and livelihood, became the battleground where the fight to repeal prohibition began.
The new cosmopolitan city, a result of the rapid industrialization around the turn of the century, teemed with working-class people of various nationalities. This new untapped demographic was far too rich in votes for politicians striving for public office to ignore. Democrats, hoping to break the Republican stranglehold on the White House, shifted their platform to incorporate these men and, after the passage of universal suffrage with the 19th Amendment, women. Rising up from the Irish Lower East Side of Manhattan through the Tammany Hall political machine, Al Smith promoted the Democratic shift in policy by running on a wet platform.
Specifically, in Public Enemy, prohibition created an entire business of illegality that launched small-time, inner-city crooks into untold wealth and prominence. With distinct points of reference provided by date placards, we see the origins of Tom Powers (the lead character, played by James Cagney) as a prepubescent shoplifter in 1909. Drinking from a big bucket of beer (acquired inside the "family entrance" of a saloon), tripping girls on roller skates, and trying to sell stolen watches were the highlights of his mischievousness. As he grew older, his lawlessness grew more criminal, yet it did not translate into great financial reward. Following a failed heist at a fur trading company, Tom met Paddy Ryan, his bootlegging mentor. Though Ryan had been a legitimate businessman before prohibition, the scene immediately following the chaos of "Prohibition Eve" in 1920 depicts him as having big plans for his business in the succeeding dry era:
Following this decree, Tom pulls his first big robbery, organizing a crew to siphon beer from a liquor warehouse into a tanker truck, and the reward is almost immediate. The next scene depicts Tom, with a broad smile, buying his first suit. He trades the tweed hat and rags he had worn previously for a stylish, tailor-made three-piece suit and fedora.
After several decades of working towards a broad-based prohibition law, the temperance movement had finally succeeded in banning alcohol. However, the social ills that the Progressives railed against in the exploding urban centers had only begun. The 18th Amendment's passage may have rid the city of its legal saloons and pacified the Progressives intent on cleaning up cities, yet it created a business in illegal booze that allowed crime to flourish and criminals to prosper. Seeing this unusual prosperity after growing up on the streets, the urban gangster proved to be a unique representation of the glorified American ideal of the self-made man.
What Is Ordinary Time?
Ordinary Time encompasses that part of the year that does not fall within the seasons of Advent, Christmas, Lent or Easter. The Catholic Church celebrates two periods of Ordinary Time. The first period, Ordinary Time I, begins after the Feast of the Baptism of the Lord and ends the Tuesday evening before Ash Wednesday. Ordinary Time II runs from the Monday after Pentecost until Evening Prayer is said the night before Advent begins. Ordinary Time gets its name from the word ordinal, meaning “numbered,” since the weeks of Ordinary Time are expressed numerically. Depending upon the year, there are either 33 or 34 weeks of Ordinary Time.
Ordinary Time does not mean “plain,” and it is not meant to imply that we somehow get a break from practicing our faith. Ordinary Time celebrates the mystery of Christ in all its aspects. Many important liturgical celebrations occur during Ordinary Time, including the Solemnity of the Holy Trinity, the Solemnity of Corpus Christi, the Solemnity of the Sacred Heart, the Solemnity of the Assumption, the Solemnity of All Saints and the Solemnity of Christ the King. Ordinary Time invites us to contemplate the parts of Jesus’ life that were ordinary, much like our own lives, and inspires us to see the Father, the Son and the Holy Spirit in the most ordinary events and everyday activities of our lives. When we are able to see God in the most mundane aspects of our lives, we realize that nothing is in fact ordinary! The liturgical color for Ordinary Time is green, symbolizing hope and growth.
Enders' live attenuated measles-virus vaccine [1] has provided the key to control and possible eventual eradication of this most important disease of childhood. The numerous studies now recorded have shown the uniform effectiveness of the vaccine for inducing neutralizing and CF antibodies in susceptible children, and Krugman et al. [2] and Hoekenga et al. [3] have demonstrated conclusively that the vaccine given alone affords protection against the natural disease in epidemics. The vaccine virus, even though markedly attenuated, still has the drawback of causing significant clinical reaction, primarily in the nature of rash and fever. These reactions are sufficiently great to preclude general patient and physician acceptance. Consequently, further modification of vaccine reaction is needed.
Early efforts by our group [4] to modify the clinical response by administration of vaccine to babies who had maternal antibody were unsuccessful because of apparent total neutralization of the virus. Failing this, vaccination was
CHICAGO - In a study examining factors that may underlie the association between fructose consumption and weight gain, brain magnetic resonance imaging of study participants indicated that ingestion of glucose, but not fructose, reduced cerebral blood flow and activity in brain regions that regulate appetite, and that ingestion of glucose, but not fructose, produced increased ratings of satiety and fullness, according to a preliminary study published in the January 2 issue of JAMA.
"Increases in fructose consumption have paralleled the increasing prevalence of obesity, and high-fructose diets are thought to promote weight gain and insulin resistance. Fructose ingestion produces smaller increases in circulating satiety hormones compared with glucose ingestion, and central administration of fructose provokes feeding in rodents, whereas centrally administered glucose promotes satiety," according to background information in the article. "Thus, fructose possibly increases food-seeking behavior and increases food intake." How brain regions associated with fructose- and glucose-mediated changes in animal feeding behaviors translates to humans is not completely understood.
Kathleen A. Page, M.D., of Yale University School of Medicine, New Haven, Conn., and colleagues conducted a study to examine neurophysiological factors that might underlie associations between fructose consumption and weight gain. The study included 20 healthy adult volunteers who underwent two magnetic resonance imaging sessions in conjunction with fructose or glucose drink ingestion. The primary outcome measure for the study was the relative changes in hypothalamic (a region of the brain) regional cerebral blood flow (CBF) after glucose or fructose ingestion.
The researchers found that there was a significantly greater reduction in hypothalamic CBF after glucose vs. fructose ingestion. "Glucose but not fructose ingestion reduced the activation of the hypothalamus, insula, and striatum--brain regions that regulate appetite, motivation, and reward processing; glucose ingestion also increased functional connections between the hypothalamic-striatal network and increased satiety."
"The disparate responses to fructose were associated with reduced systemic levels of the satiety-signaling hormone insulin and were not likely attributable to an inability of fructose to cross the blood-brain barrier into the hypothalamus or to a lack of hypothalamic expression of genes necessary for fructose metabolism."
(JAMA. 2013;309(1):63-70; Available pre-embargo to the media at http://media.
Editor's Note: Please see the article for additional information, including other authors, author contributions and affiliations, financial disclosures, funding and support, etc.
Editorial: Fructose Ingestion and Cerebral, Metabolic, and Satiety Responses
Jonathan Q. Purnell, M.D., and Damien A. Fair, PA-C, Ph.D., of Oregon Health & Science University, Portland, write in an accompanying editorial that "these findings support the conceptual framework that when the human brain is exposed to fructose, neurobiological pathways involved in appetite regulation are modulated, thereby promoting increased food intake."
"... the implications of the study by Page et al as well as the mounting evidence from epidemiologic, metabolic feeding, and animal studies, are that the advances in food processing and economic forces leading to increased intake of added sugar and accompanying fructose in U.S. society are indeed extending the supersizing concept to the population's collective waistlines."
(JAMA. 2013;309(1):85-86; Available pre-embargo to the media at http://media.
Editor's Note: The authors have completed and submitted the ICMJE Form for Disclosure of Potential Conflicts of Interest and none were reported.
To contact corresponding author Robert S. Sherwin, M.D., call or email Helen Dodson. To contact editorial co-author Jonathan Q. Purnell, M.D., call Mirabai Vogt at 503-494-7986 or email email@example.com. | <urn:uuid:c0673639-c43a-445b-9538-8b3d2996943c> | CC-MAIN-2016-26 | http://www.eurekalert.org/pub_releases/2013-01/jaaj-ise122712.php | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783392099.27/warc/CC-MAIN-20160624154952-00094-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.898451 | 782 | 2.703125 | 3 |
The quiet creep of antibiotic resistance is an increasing danger to public health. Over the last several decades, the protection against deadly infections has become limited, and in some cases, non-existent with eight known pathogens resistant to available antibiotics. Aided by the inappropriate use of antibiotics, these so-called superbugs, once considered rare, are becoming common in healthcare facilities throughout the United States. To bring awareness to this health emergency, the Society for Healthcare Epidemiology of America (SHEA) is partnering with the Centers for Disease Control and Prevention (CDC) and others for Get Smart about Antibiotics Week.
“Over time our arsenal against infections has dwindled, but we continue to assume another, stronger antibiotic will be available to take its place. With little investment in new antibiotics, we must work to reevaluate how we use antibiotics we have available to prolong the efficacy of these medications,” says Sara Cosgrove, MD, MS, associate professor of medicine at Johns Hopkins University School of Medicine and a SHEA board member.
Drug-resistant bugs are associated with increased patient morbidity, mortality and higher healthcare costs spent on futile use of antibiotics and longer, more intensive hospital stays. One tool to combat the rise of antibiotic-resistant bacteria is antimicrobial stewardship programs and interventions. Antimicrobial stewardship helps prescribers know when antibiotics are needed and what the best treatment choices are for a particular patient to help improve the use of these drugs. The goal is to ensure patients receive antibiotics only when needed and in the safest way possible.
Recent studies published in the journal Infection Control and Hospital Epidemiology demonstrate the need for stewardship programs to provide safe, high-quality, and cost-effective care:
• Antibiotic use can put patients at risk of serious infections. Up to 85 percent of patients with potentially fatal Clostridium difficile-associated diseases have been exposed to antibiotics in the preceding 28 days.
• In a study in 128 veterans’ hospitals, switching from IV to oral antibiotics was found to be safe, to potentially reduce costs, and to increase hospital staff awareness of appropriate antibiotic use. Antibiotics administered by IV are also associated with longer hospital stays.
• Restricting ciprofloxacin in hospital units has been linked to a decrease in multi-drug resistant bacteria.
• By closely monitoring antibiotic use, an antimicrobial stewardship program at a university medical center cut antimicrobial costs by nearly $3 million. After the program was ended, costs increased again by $2 million within two years.
Visit the SHEA website for more information on the appropriate use and management of antimicrobials in all healthcare settings to help slow resistance and improve patient care. Among the resources available are a comprehensive resource page and an online training for health professionals. | <urn:uuid:c31f0368-340b-4a39-9377-86fa1d5b00b9> | CC-MAIN-2016-26 | http://www.infectioncontroltoday.com/news/2012/11/shea-supports-cdcs-get-smart-about-antibiotics-week-campaign.aspx | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395346.6/warc/CC-MAIN-20160624154955-00061-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.927137 | 571 | 3.28125 | 3 |
John Winthrop (12 January 1588 – 26 March 1649) led a group of English Puritans to the New World. He joined the Massachusetts Bay Company in 1629 and was elected its governor in October of that year. Between 1629 and 1648 he was voted out of the governorship and re-elected a total of 12 times. Although Winthrop was a respected political figure, he was criticized for his obstinacy regarding the formation of a general assembly in 1634.
Benson John Lossing, ed. Harper's Encyclopedia of United States History (vol. 10) (New York, NY: Harper and Brothers, 1912) | <urn:uuid:2d3697e1-37ae-4179-a6d5-aab13eac324b> | CC-MAIN-2016-26 | http://etc.usf.edu/clipart/57800/57803/57803_john_winthro.htm | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783400031.51/warc/CC-MAIN-20160624155000-00115-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.968963 | 128 | 3.90625 | 4 |
Some Florida school districts still use corporal punishment, choosing to spank misbehaving students with paddles. But at Holmes County High School in Bonifay, administrators have gone one step further. They have students make the paddles in woodshop class.
Though some may be offended by this practice, it's completely legal. School corporal punishment may have been banned in most of the country, but Florida is one of 19 states that still allows the practice.
The widespread ban has made wooden paddles difficult to find, principal Eddie Dixon told StateImpact, a joint project between local public media and NPR. To fix the problem, he has the woodshop instructor teach students to make paddles. They're even given specific dimensions.
There has been some movement at the federal level to implement a nationwide ban on school corporal punishment, but there seems to be little political backing. As such, states are free to continue the practice for as long as they wish.
However, this does not mean public schools can inflict physical punishment whenever and however they want. Administrators and teachers are still subject to the state's child abuse laws. Therefore, they must generally only use corporal punishment under reasonable circumstances, with a reasonable instrument, on a reasonable body part, and with a reasonable amount of force.
In other words, schools can have students make paddles, and they can use those paddles to administer a few mediocre smacks. But if they're not careful, an act of school corporal punishment can end up in criminal or civil court.
- At one Florida school, students make the paddles used in spankings (MSNBC)
- School Discipline History (FindLaw)
- Teacher's Aide Taped Kinder Student's Mouth Shut (FindLaw's Law & Daily Life) | <urn:uuid:49f78180-310c-4174-95fc-f9ec2395e76f> | CC-MAIN-2016-26 | http://blogs.findlaw.com/law_and_life/2012/03/fl-students-make-paddles-used-to-spank-them.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399106.96/warc/CC-MAIN-20160624154959-00185-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.959926 | 367 | 2.796875 | 3 |
Heel pain is a very common complaint, and plantar fasciitis is its most common chronic cause, comprising 11–15% of foot complaints requiring professional care among adults. In fact, recent research has reported that 1 in 10 people will develop plantar fasciitis during their lifetime. It is most common in middle-aged females and in young male athletes, and the incidence is higher among the athletic population generally.
The plantar fascia is a tight band of non-elastic tissue that spans from your heel bone to the base of your toes. It functions to maintain the shape of your arch and to control your foot during pronation (your foot collapsing inwards). Strain on the tissue from excessive and repetitive loading (as in long-distance running) can create inflammation, and pain will develop directly under the heel bone. The pain is greatest when the tissue has been allowed to shorten, such as after long periods of rest or after a night’s sleep. The first few steps when you get out of bed in the morning therefore rapidly lengthen the plantar fascia tissue, which is very painful and causes more inflammation and swelling. This initial pain generally subsides after several steps but will gradually increase throughout the day as you continue to walk and load the tissue.
Plantar fasciitis usually only involves one foot at a time, but up to 30% of cases involve both feet. Interestingly, tightness of the Achilles tendon is found in almost 80% of plantar fasciitis cases. Thus, there are some basic treatment recommendations that must be followed to alleviate the pain and symptoms.
There are several devices that can help to stretch out the Achilles tendon and it is critical that improved flexibility of this tendon becomes part of your rehabilitation routine. Research has shown that it takes approximately 3-4 weeks for the Achilles tendon to begin to lengthen. You should stretch the Achilles tendon at least 2 times per day and perform at least 3-4 stretches of 30 seconds during each session. The best times to stretch are first thing in the morning to help lengthen the Achilles and the plantar fascia tissue and then stretch again just before you go to bed.
Two other methods that have been shown to be effective are braces and supports that help stabilize the arch of the foot and decrease the amount of stretching to the plantar fascia. These arch support devices either wrap around the arch itself or help to support the ankle joint. Because those first few steps are so painful first thing in the morning, another good option to reduce the pain is a Night Splint. The purpose of Night Splints is to prevent the plantar fascia from contracting overnight and to promote Achilles stretching. The splint keeps the ankle joint in a flexed position and holds the plantar fascia in an elongated position. A recent research study found that optimal treatment of plantar fasciitis consisted of Night Splints along with supportive shoes, anti-inflammatory drugs, and stretching.
Research from our lab, the Running Injury Clinic, has also shown that over-the-counter orthotics by SOLE reduce strain to the plantar fascia by 35%. These orthotics are considered semi-custom devices since you heat them up in the oven, place them in your shoes, and they mold to the shape of your foot. If you simply placed them in your shoes without heat-molding, plantar fascia strain is only reduced by 24% so we recommend going through the molding process to optimize foot care and comfort.
By incorporating SOLE insoles into a comprehensive rehabilitation program that includes Achilles tendon stretching, Night Splints, and a modified exercise regime to reduce repetitive loading, you will hopefully find your plantar fasciitis pain and symptoms improving quickly. If not, we recommend seeing a foot and ankle specialist (a podiatrist) to discuss and confirm your condition.
Dr. Reed Ferber Ph.D., ATC
Director – Running Injury Clinic
Associate Professor, Faculty of Kinesiology, University of Calgary
The Running Injury Clinic is a world-leader in running-related research. Please visit our website at www.runninginjuryclinic.com for research concerning running injury prevention and treatment. | <urn:uuid:a1d69ad4-e0d1-49bc-939e-d8c01a5444d0> | CC-MAIN-2016-26 | http://blog.footsmart.com/plantar-fasciitis-tips-for-runners/ | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397428.37/warc/CC-MAIN-20160624154957-00156-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.94039 | 867 | 2.515625 | 3 |
Many common diseases, such as asthma, diabetes or obesity, involve altered interactions between thousands of genes. High-throughput techniques (omics) allow identification of such genes and their products, but functional understanding is a formidable challenge. Network-based analyses of omics data have identified modules of disease-associated genes that have been used to obtain both a systems-level and a molecular understanding of disease mechanisms. For example, in allergy a module was used to find a novel candidate gene that was validated by functional and clinical studies. Such analyses play important roles in systems medicine. This is an emerging discipline that aims to gain a translational understanding of the complex mechanisms underlying common diseases. In this review, we will explain and provide examples of how network-based analyses of omics data, in combination with functional and clinical studies, are aiding our understanding of disease, as well as helping to prioritize diagnostic markers or therapeutic candidate genes. Such analyses involve significant problems and limitations, which will be discussed. We also highlight the steps needed for clinical implementation.
In critically ill patients, glucose control with insulin mandates time- and blood-consuming glucose monitoring. Blood glucose level fluctuations are accompanied by metabolomic changes that alter the composition of volatile organic compounds (VOC), which are detectable in exhaled breath. This review systematically summarizes the available data on the ability of changes in VOC composition to predict blood glucose levels and changes in blood glucose levels.
A systematic search was performed in PubMed. Studies were included when an association between blood glucose levels and VOCs in exhaled air was investigated, using a technique that allows for separation, quantification and identification of individual VOCs. Only studies on humans were included.
Nine studies were included out of 1041 identified in the search. Authors of seven studies observed a significant correlation between blood glucose levels and selected VOCs in exhaled air. Authors of two studies did not observe a strong correlation. Blood glucose levels were associated with the following VOCs: ketone bodies (e.g., acetone), VOCs produced by gut flora (e.g., ethanol, methanol, and propane), exogenous compounds (e.g., ethyl benzene, o-xylene, and m/p-xylene) and markers of oxidative stress (e.g., methyl nitrate, 2-pentyl nitrate, and CO).
There is a relation between blood glucose levels and VOC composition in exhaled air. These results warrant clinical validation of exhaled breath analysis to monitor blood glucose levels.
Glucose; Monitoring; Volatile organic compound; Breath
Rationale: Molecular phenotyping of chronic obstructive pulmonary disease (COPD) has been impeded in part by the difficulty in obtaining lung tissue samples from individuals with impaired lung function.
Objectives: We sought to determine whether COPD-associated processes are reflected in gene expression profiles of bronchial airway epithelial cells obtained by bronchoscopy.
Methods: Gene expression profiling of bronchial brushings obtained from 238 current and former smokers with and without COPD was performed using Affymetrix Human Gene 1.0 ST Arrays.
Measurements and Main Results: We identified 98 genes whose expression levels were associated with COPD status, FEV1% predicted, and FEV1/FVC. In silico analysis identified activating transcription factor 4 (ATF4) as a potential transcriptional regulator of genes with COPD-associated airway expression, and ATF4 overexpression in airway epithelial cells in vitro recapitulates COPD-associated gene expression changes. Genes with COPD-associated expression in the bronchial airway epithelium had similarly altered expression profiles in prior studies performed on small-airway epithelium and lung parenchyma, suggesting that transcriptomic alterations in the bronchial airway epithelium reflect molecular events found at more distal sites of disease activity. Many of the airway COPD-associated gene expression changes revert toward baseline after therapy with the inhaled corticosteroid fluticasone in independent cohorts.
Conclusions: Our findings demonstrate a molecular field of injury throughout the bronchial airway of active and former smokers with COPD that may be driven in part by ATF4 and is modifiable with therapy. Bronchial airway epithelium may ultimately serve as a relatively accessible tissue in which to measure biomarkers of disease activity for guiding clinical management of COPD.
chronic obstructive pulmonary disease; gene expression profiling; biologic markers
The acute respiratory distress syndrome (ARDS) is a common, devastating complication of critical illness that is characterized by pulmonary injury and inflammation. The clinical diagnosis may be improved by means of objective biological markers. Electronic nose (eNose) technology can rapidly and non-invasively provide breath prints, which are profiles of volatile metabolites in the exhaled breath. We hypothesized that breath prints could facilitate accurate diagnosis of ARDS in intubated and ventilated intensive care unit (ICU) patients.
Prospective single-center cohort study with a training cohort and a temporal external validation cohort. Breath from newly intubated and mechanically ventilated ICU patients was analyzed using an electronic nose within 24 hours of admission. ARDS was diagnosed and classified by the Berlin clinical consensus definition. The eNose was trained to recognize ARDS in the training cohort, and diagnostic performance was evaluated in the temporal external validation cohort.
In the training cohort (40 patients with ARDS versus 66 controls), the diagnostic model for ARDS showed moderate discrimination, with an area under the receiver-operator characteristic curve (AUC-ROC) of 0.72 (95% confidence interval (CI): 0.63-0.82). In the external validation cohort (18 patients with ARDS versus 26 controls), the AUC-ROC was 0.71 (95% CI: 0.54-0.87). Restricting discrimination to patients with moderate or severe ARDS versus controls resulted in an AUC-ROC of 0.80 (95% CI: 0.70-0.90). The exhaled breath profile of patients with cardiopulmonary edema or pneumonia differed from that of patients with moderate/severe ARDS.
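For readers unfamiliar with the metric, an AUC-ROC of 0.72 means there is a 72% probability that a randomly chosen ARDS patient receives a higher classifier score than a randomly chosen control. This equivalence to the Mann-Whitney U statistic can be sketched in a few lines; the scores below are hypothetical illustrations, not the study's data:

```python
def auc_roc(case_scores, control_scores):
    """AUC-ROC as the probability that a case outscores a control (ties count half)."""
    wins = 0.0
    for case in case_scores:
        for control in control_scores:
            if case > control:
                wins += 1.0
            elif case == control:
                wins += 0.5
    return wins / (len(case_scores) * len(control_scores))

# Hypothetical eNose scores for five ARDS cases and five controls.
cases = [0.9, 0.8, 0.6, 0.5, 0.3]
controls = [0.7, 0.55, 0.45, 0.4, 0.2]
print(auc_roc(cases, controls))  # 0.72 for these illustrative scores
```

A value of 0.5 corresponds to chance discrimination and 1.0 to perfect separation, which is why the 0.80 achieved for moderate/severe ARDS represents better discrimination than the 0.71-0.72 obtained overall.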
An electronic nose can rapidly and non-invasively discriminate between patients with and without ARDS with modest accuracy. Diagnostic accuracy increased when only moderate and severe ARDS patients were considered. This implies that breath analysis may allow for rapid, bedside detection of ARDS, especially if our findings are reproduced using continuous exhaled breath profiling.
NTR2750, registered 11 February 2011.
ARDS; Exhaled breath; Electronic nose; Volatile organic compound; Sensitivity and specificity
Pandemics caused by novel emerging or re-emerging infectious diseases could lead to high mortality and morbidity worldwide if left uncontrolled. In this perspective, we evaluate the possibility of integrating global omics data in order to prepare for pandemics in a timely fashion. Such an approach requires two major innovations. First, data that are obtained should be shared with the global community instantly. The strength of rapidly integrating simple signals is exemplified by Google Flu Trends, which could predict the incidence of influenza-like illness based on online search engine queries. Second, omics technologies need to be fast and high-throughput. We postulate that analysis of the exhaled breath would be a simple, rapid and non-invasive alternative. Breath contains hundreds of volatile organic compounds that are altered by infection and inflammation. The molecular fingerprint of breath (breathprint) can be obtained using an electronic nose, which relies on sensor technology. These breathprints can be stored in an online database (a “breathcloud”) and coupled to clinical data. Comparison of the breathprint of a suspected subject to the breathcloud allows for a rapid decision on the presence or absence of a pathogen.
pandemic; exhaled breath; systems biology; diagnosis; metabolomics; metabolite profiling
Chronic mucus hypersecretion (CMH) is associated with an increased frequency of respiratory infections, excess lung function decline, and increased hospitalisation and mortality rates in the general population. It is associated with smoking, but it is unknown why only a minority of smokers develops CMH. A plausible explanation for this phenomenon is a predisposing genetic constitution. Therefore, we performed a genome wide association (GWA) study of CMH in Caucasian populations.
GWA analysis was performed in the NELSON-study using the Illumina 610 array, followed by replication and meta-analysis in 11 additional cohorts. In total 2,704 subjects with, and 7,624 subjects without CMH were included, all current or former heavy smokers (≥20 pack-years). Additional studies were performed to test the functional relevance of the most significant single nucleotide polymorphism (SNP).
A strong association with CMH, consistent across all cohorts, was observed with rs6577641 (p = 4.25×10⁻⁶, OR = 1.17), located in intron 9 of the special AT-rich sequence-binding protein 1 locus (SATB1) on chromosome 3. The risk allele (G) was associated with higher mRNA expression of SATB1 (p = 4.3×10⁻⁹) in lung tissue. Presence of CMH was associated with increased SATB1 mRNA expression in bronchial biopsies from COPD patients. SATB1 expression was induced during differentiation of primary human bronchial epithelial cells in culture.
Our findings that SNP rs6577641 is associated with CMH in multiple cohorts and is a cis-eQTL for SATB1, together with our additional observation that SATB1 expression increases during epithelial differentiation, provide suggestive evidence that SATB1 affects CMH.
This report summarizes the proceedings of the 14th workshop of the Genomic Standards Consortium (GSC) held at the University of Oxford in September 2012. The primary goal of the workshop was to work towards the launch of the Genomic Observatories (GOs) Network under the GSC. For the first time, it brought together potential GOs sites, GSC members, and a range of interested partner organizations. It thus represented the first meeting of the GOs Network (GOs1). Key outcomes include the formation of a core group of “champions” ready to take the GOs Network forward, as well as the formation of working groups. The workshop also served as the first meeting of a wide range of participants in the Ocean Sampling Day (OSD) initiative, a first GOs action. Three projects with complementary interests – COST Action ES1103, MG4U and Micro B3 – organized joint sessions at the workshop. A two-day GSC Hackathon followed the main three days of meetings.
The Genomic Standards Consortium (GSC) is an open-membership community that was founded in 2005 to work towards the development, implementation and harmonization of standards in the field of genomics. Starting with the defined task of establishing a minimal set of descriptions, the GSC has evolved into an active standards-setting body that currently has 18 ongoing projects, with additional projects regularly proposed from within and outside the GSC. Here we describe our recently enacted policy for proposing new activities that are intended to be taken on by the GSC, along with the template for proposing such new activities.
Asthma exacerbations are frequently triggered by rhinovirus infections. Both asthma and respiratory tract infection can activate haemostasis. Therefore we hypothesized that experimental rhinovirus-16 infection and asthmatic airway inflammation act in synergy on the haemostatic balance.
28 patients (14 patients with mild allergic asthma and 14 healthy non-allergic controls) were infected with low-dose rhinovirus type 16. Venous plasma and bronchoalveolar lavage fluid (BAL fluid) were obtained before and 6 days after infection to evaluate markers of coagulation activation, thrombin-antithrombin complexes, von Willebrand factor, plasmin-antiplasmin complexes, plasminogen activator inhibitor type-1, endogenous thrombin potential and tissue factor-exposing microparticles by fibrin generation test, in plasma and/or BAL fluid. Data were analysed by nonparametric tests (Wilcoxon, Mann Whitney and Spearman correlation).
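The nonparametric analyses above include Spearman rank correlation. As an illustration of what that statistic computes, here is a minimal pure-Python sketch; it is not the study's actual analysis code, and the example inputs are arbitrary:

```python
def rank(values):
    # Fractional ranking: ties get the average of their 1-based positions.
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of positions i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    # Spearman's rho is the Pearson correlation of the ranks.
    rx, ry = rank(x), rank(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

In practice one would use a statistics package (the study reports Wilcoxon, Mann-Whitney and Spearman tests); the point here is only that rho depends on the ordering of the measurements, not their magnitudes.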
13 patients with mild asthma (6 females, 19-29 y) and 11 healthy controls (10 females, 19-31 y) had a documented Rhinovirus-16 infection. Rhinovirus-16 challenge resulted in a shortening of the fibrin generation test in BAL fluid of asthma patients (t = -1: 706 s vs. t = 6: 498 s; p = 0.02), but not of controls (t = -1: 693 s vs. t = 6: 636 s; p = 0.65). The fold change in tissue factor-exposing microparticles in BAL fluid inversely correlated with the fold changes in eosinophil cationic protein and myeloperoxidase in BAL fluid after virus infection (r = -0.517 and -0.528 resp., both p = 0.01).
Rhinovirus-16 challenge led to increased plasminogen activator inhibitor type-1 levels in plasma in patients with asthma (26.0 ng/mL vs. 11.5 ng/mL in healthy controls, p = 0.04). Rhinovirus-16 load in BAL showed a linear correlation with the fold change in endogenous thrombin potential, plasmin-antiplasmin complexes and plasminogen activator inhibitor type-1.
Experimental rhinovirus infection induces procoagulant changes in the airways of patients with asthma through increased activity of tissue factor-exposing microparticles. These microparticle-associated procoagulant changes are associated with both neutrophilic and eosinophilic inflammation. Systemic activation of haemostasis increases with rhinoviral load.
This trial was registered at the Dutch trial registry (http://www.trialregister.nl): NTR1677.
Rhinovirus; Coagulation; Fibrinolysis; Asthma; Microparticles; Inflammation
Inhaled corticosteroids (ICS) reduce exacerbation rates and improve health status but can increase the risk of pneumonia in COPD. The GLUCOLD study, investigating patients with mild-to-moderate COPD, has shown that long-term (2.5-year) ICS therapy induces anti-inflammatory effects. The literature suggests that cigarette smoking causes ICS insensitivity. The aim of this study is to compare anti-inflammatory effects of ICS in persistent smokers and persistent ex-smokers in a post-hoc analysis of the GLUCOLD study.
Persistent smokers (n = 41) and persistent ex-smokers (n = 31) from the GLUCOLD cohort were investigated. Effects of ICS treatment compared with placebo were estimated by analysing changes in lung function, hyperresponsiveness, and inflammatory cells in sputum and bronchial biopsies during short-term (0–6 months) and long-term (6–30 months) treatment using multiple regression analyses.
Bronchial mast cells were reduced by short-term and long-term ICS treatment in both smokers and ex-smokers. In contrast, CD3+, CD4+, and CD8+ cells were reduced by short-term ICS treatment in smokers only. In addition, sputum neutrophils and lymphocytes, and bronchial CD8+ cells were reduced after long-term treatment in ex-smokers only. No significant interactions existed between smoking and ICS treatment.
Even in the presence of smoking, long-term ICS treatment may lead to anti-inflammatory effects in the lung. Some anti-inflammatory ICS effects are comparable in smokers and ex-smokers with COPD, while other effects are cell-specific. The clinical relevance of these findings, however, is uncertain.
Rhinovirus infections are the most common cause of asthma exacerbations. The complex responses by airway epithelium to rhinovirus can be captured by gene expression profiling. We hypothesized that: a) upper and lower airway epithelium exhibit differential responses to double-stranded RNA (dsRNA), and b) that this is modulated by the presence of asthma and allergic rhinitis.
Identification of dsRNA-induced gene expression profiles of primary nasal and bronchial epithelial cells from the same individuals and examining the impact of allergic rhinitis with and without concomitant allergic asthma on expression profiles.
This study had a cross-sectional design including 18 subjects: 6 patients with allergic asthma with concomitant rhinitis, 6 patients with allergic rhinitis, and 6 healthy controls. Comparing 6 subjects per group, the estimated false discovery rate was approximately 5%. RNA was extracted from isolated and cultured primary epithelial cells from nasal biopsies and bronchial brushings stimulated with dsRNA (poly(I:C)), and analyzed by microarray (Affymetrix U133+ PM Genechip Array). Data were analysed using R and the Bioconductor Limma package. Overrepresentation of gene ontology groups was assessed with GeneSpring GX12.
In total, 17 subjects completed the study successfully (6 allergic asthma with rhinitis, 5 allergic rhinitis, 6 healthy controls). dsRNA-stimulated upper and lower airway epithelium from asthma patients demonstrated significantly fewer induced genes, exhibiting reduced down-regulation of mitochondrial genes. The majority of genes related to viral responses appeared to be similarly induced in upper and lower airways in all groups. However, the induction of several interferon-related genes (IRF3, IFNAR1, IFNB1, IFNGR1, IL28B) was impaired in patients with asthma.
dsRNA differentially changes transcriptional profiles of primary nasal and bronchial epithelial cells from patients with allergic rhinitis with or without asthma and controls. Our data suggest that respiratory viruses affect mitochondrial genes, and we identified disease-specific genes that provide potential targets for drug development.
Asthma; Rhinitis; Epithelium; Gene expression; dsRNA
The link between upper and lower airways in patients with both asthma and allergic rhinitis is still poorly understood. As the biological complexity of these disorders can be captured by gene expression profiling we hypothesized that the clinical expression of rhinitis and/or asthma is related to differential gene expression between upper and lower airways epithelium.
Defining gene expression profiles of primary nasal and bronchial epithelial cells from the same individuals and examining the impact of allergic rhinitis with and without concomitant allergic asthma on expression profiles.
This cross-sectional study included 18 subjects (6 allergic asthma and allergic rhinitis; 6 allergic rhinitis; 6 healthy controls). The estimated false discovery rate comparing 6 subjects per group was approximately 5%. RNA was extracted from isolated and cultured epithelial cells from bronchial brushings and nasal biopsies, and analyzed by microarray (Affymetrix U133+ PM Genechip Array). Data were analysed using R and the Bioconductor Limma package. For gene ontology analysis, GeneSpring GX12 was used.
The study was successfully completed by 17 subjects (6 allergic asthma and allergic rhinitis; 5 allergic rhinitis; 6 healthy controls). Using correction for multiple testing, 1988 genes were differentially expressed between healthy lower and upper airway epithelium, whereas in allergic rhinitis with or without asthma this was only 40 and 301 genes, respectively. Genes influenced by allergic rhinitis with or without asthma were linked to lung development, remodeling, regulation of peptidases and normal epithelial barrier functions.
Differences in epithelial gene expression between the upper and lower airway epithelium, as observed in healthy subjects, largely disappear in patients with allergic rhinitis with or without asthma, whilst new differences emerge. The present data identify several pathways and genes that might be potential targets for future drug development.
Although the high mortality rate of pulmonary invasive aspergillosis (IA) in patients with prolonged chemotherapy-induced neutropenia (PCIN) can be reduced by timely diagnosis, a diagnostic test that reliably detects IA at an early stage is lacking. We hypothesized that an electronic nose (eNose) could fulfill this need. An eNose can discriminate various lung diseases through the analysis of exhaled volatile organic compounds (VOCs). An eNose is cheap and noninvasive and yields results within minutes. In a single-center prospective cohort study, we included patients who were treated with chemotherapy expected to result in PCIN. Based on standardized indications, a full diagnostic workup was performed to confirm invasive aspergillosis or to rule it out. Patients with no aspergillosis were considered controls, and patients with probable or proven aspergillosis were considered index cases. Exhaled breath was examined with a Cyranose 320 (Smiths Detection, Pasadena, CA). The resulting data were analyzed using principal component reduction. The primary endpoint was cross-validated diagnostic accuracy, defined as the percentage of patients correctly classified using the leave-one-out method. Accuracy was validated by 100,000 random classifications. We included 46 subjects who underwent 16 diagnostic workups, resulting in 6 cases and 5 controls. The cross-validated accuracy of the eNose in diagnosing IA was 90.9% (P = 0.022; sensitivity, 100%; specificity, 83.3%). Receiver operating characteristic analysis showed an area under the curve of 0.93. These preliminary data indicate that PCIN patients with IA have a distinct exhaled VOC profile that can be detected with eNose technology. The diagnostic accuracy of the eNose for invasive aspergillosis warrants validation.
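The cross-validated accuracy reported above uses the leave-one-out method: each subject is classified by a model trained on all the others, and the fraction classified correctly is the accuracy. The sketch below illustrates only that idea, using a hypothetical nearest-neighbour classifier on made-up two-dimensional "breathprints"; the study itself used principal component reduction of Cyranose 320 sensor data, not this classifier:

```python
def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def loo_accuracy(samples, labels):
    # Leave-one-out: predict each sample from its nearest neighbour
    # among the remaining samples; return the fraction correct.
    correct = 0
    for i, s in enumerate(samples):
        rest = [(euclidean(s, t), labels[j])
                for j, t in enumerate(samples) if j != i]
        predicted = min(rest)[1]  # label of the closest other sample
        if predicted == labels[i]:
            correct += 1
    return correct / len(samples)

# Hypothetical 2-D breathprints: cases cluster apart from controls.
prints = [(0.10, 0.20), (0.20, 0.10), (0.15, 0.25),
          (0.90, 0.80), (0.85, 0.90), (0.95, 0.85)]
labels = ["IA", "IA", "IA", "control", "control", "control"]
```

Because the held-out sample never informs its own prediction, leave-one-out gives a less optimistic accuracy estimate than classifying the training data itself, which matters in small cohorts like this one.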
Metagenomics is a relatively recently established but rapidly expanding field that uses high-throughput next-generation sequencing technologies to characterize the microbial communities inhabiting different ecosystems (including oceans, lakes, soil, tundra, plants and body sites). Metagenomics brings with it a number of challenges, including the management, analysis, storage and sharing of data. In response to these challenges, we have developed a new metagenomics resource (http://www.ebi.ac.uk/metagenomics/) that allows users to easily submit raw nucleotide reads for functional and taxonomic analysis by a state-of-the-art pipeline, and have them automatically stored (together with descriptive, standards-compliant metadata) in the European Nucleotide Archive.
Airway inflammation in asthma involves innate immune responses. Toll-like receptors (TLRs) and thymic stromal lymphopoietin (TSLP) are thought to be involved in airway inflammation, but their expression in both the large and small airways of asthmatics has not been investigated.
To analyze the expression of TLR2, TLR3, TLR4 and TSLP in large and small airways of asthmatics and compare their expression in smoking and nonsmoking asthmatics; to investigate whether TLR expression is associated with eosinophilic or neutrophilic airway inflammation and with Mycoplasma pneumoniae and Chlamydophila pneumoniae infection.
Using immunohistochemistry and image analysis, we investigated TLR2, TLR3, TLR4 and TSLP expression in large and small airways of 24 victims of fatal asthma, FA, (13 nonsmokers, 11 smokers) and 9 deceased control subjects (DCtrl). TLRs were also measured in 18 mild asthmatics (MA) and 12 healthy controls (HCtrl). Mycoplasma pneumoniae and Chlamydophila pneumoniae in autopsy lung tissue was analyzed using real-time polymerase chain reaction. Airway eosinophils and neutrophils were measured in all subjects.
Fatal asthma patients had higher TLR2 in the epithelial and outer layers of large and small airways compared with DCtrls. Smoking asthmatics had lower TLR2 levels in the inner and outer layers of the small airways than nonsmoking asthmatics. TSLP was increased in the epithelial and outer layers of the large airways of FA. FA patients had greater TLR3 expression in the outer layer of large airways and greater TLR4 expression in the outer layer of small airways. Eosinophilic airway inflammation was associated with TLR expression in the epithelium of FA. No bacterial DNA was detected in FA or DCtrls. MA and HCtrls had only a small difference in TLR3 expression.
Conclusions and Clinical Relevance
Increased expression of TLR 2, 3 and 4 and TSLP in fatal asthma may contribute to the acute inflammation surrounding asthma deaths.
lung; innate immunity; immunohistochemistry
The diagnosis of childhood asthma covers a broad spectrum of pathological mechanisms that can lead to similarly presenting clinical symptoms, but may nonetheless require different treatment approaches. Distinct underlying inflammatory patterns are thought to influence responsiveness to standard asthma medication.
The purpose of the PACMAN2 study is to identify inflammatory phenotypes that can discriminate uncontrolled childhood asthma from controlled childhood asthma by measures in peripheral blood and exhaled air. PACMAN2 is a nested, case–control follow-up study to the ongoing pharmacy-based “Pharmacogenetics of Asthma medication in Children: Medication with Anti-inflammatory effects” (PACMAN) study. The original PACMAN cohort consists of children aged 4–12 years with reported use of asthma medication. The PACMAN2 study will be conducted within the larger PACMAN cohort, and will focus on detailed phenotyping of a subset of the PACMAN children. The selected participants will be invited to a follow-up visit in a clinical setting at least six months after their baseline visit based on their adherence to usage of inhaled corticosteroids, their asthma symptoms in the past year, and their age (≥ 8 years). During the follow-up visit, current and long-term asthma symptoms, medication use, environmental factors, medication adherence and levels of exhaled nitric oxide will be reassessed. The following measures will also be examined: pulmonary function, exhaled volatile organic compounds, as well as inflammatory markers in peripheral blood and blood plasma. Comparative analysis and cluster-analyses will be used to identify markers that differentiate children with uncontrolled asthma despite their use of inhaled corticosteroids (ICS) (cases) from children whose asthma is controlled by the use of ICS (controls).
Asthmatic children with distinct inflammatory phenotypes may respond differently to anti-inflammatory therapy. Therefore, by identifying inflammatory phenotypes in children with the PACMAN2 study, we may greatly impact future personalised treatment strategies, uncover new leads for therapeutic targets and improve the design of future clinical studies in the assessment of the efficacy of novel therapeutics.
Asthma; Child; Phenotypes; Inflammation; Proteomics; Volatile organic compounds; Corticosteroids
Ideally, invading bacteria are detected as early as possible in critically ill patients: the strain of morbific pathogens is identified rapidly, and antimicrobial sensitivity is known well before the start of new antimicrobial therapy. Bacteria have a distinct metabolism, part of which results in the production of bacteria-specific volatile organic compounds (VOCs), which might be used for diagnostic purposes. Volatile metabolites can be investigated directly in exhaled air, allowing for noninvasive monitoring. The aim of this review is to provide an overview of VOCs produced by the six most abundant and pathogenic bacteria in sepsis, including Staphylococcus aureus, Streptococcus pneumoniae, Enterococcus faecalis, Pseudomonas aeruginosa, Klebsiella pneumoniae, and Escherichia coli. Such VOCs could be used as biological markers in the diagnostic approach of critically ill patients. A systematic review of existing literature revealed 31 articles. All six bacteria of interest produce isopentanol, formaldehyde, methyl mercaptan, and trimethylamine. Since humans do not produce these VOCs, they could serve as biological markers for the presence of these pathogens. The following volatile biomarkers were found for identification of specific strains: isovaleric acid and 2-methyl-butanal for Staphylococcus aureus; 1-undecene, 2,4-dimethyl-1-heptane, 2-butanone, 4-methyl-quinazoline, hydrogen cyanide, and methyl thiocyanide for Pseudomonas aeruginosa; and methanol, pentanol, ethyl acetate, and indole for Escherichia coli. Notably, several factors that may affect VOC production were not controlled for, including the culture media used, bacterial growth phase, and genomic variation within bacterial strains. In conclusion, VOCs produced by bacteria may serve as biological markers for their presence. Goal-targeted studies should be performed to identify potential sets of volatile biological markers and evaluate the diagnostic accuracy of these markers in critically ill patients.
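The strain-specific markers listed in this review lend themselves to a simple lookup: given the set of VOCs detected in exhaled air, report which pathogens have at least one of their specific markers present. The sketch below encodes the marker sets as reported above; the function name and the one-marker matching rule are illustrative only, not a validated diagnostic criterion:

```python
# Strain-specific volatile markers as reported in the review. The
# shared markers (isopentanol, formaldehyde, methyl mercaptan,
# trimethylamine) indicate bacterial presence but not the species.
STRAIN_MARKERS = {
    "Staphylococcus aureus": {"isovaleric acid", "2-methyl-butanal"},
    "Pseudomonas aeruginosa": {"1-undecene", "2,4-dimethyl-1-heptane",
                               "2-butanone", "4-methyl-quinazoline",
                               "hydrogen cyanide", "methyl thiocyanide"},
    "Escherichia coli": {"methanol", "pentanol", "ethyl acetate", "indole"},
}

def candidate_pathogens(detected_vocs):
    # Pathogens with at least one strain-specific marker in the sample.
    found = set(detected_vocs)
    return sorted(p for p, markers in STRAIN_MARKERS.items()
                  if markers & found)
```

A real system would weigh marker combinations and abundances rather than simple set membership, precisely because of the uncontrolled factors (culture media, growth phase, strain variation) the review notes.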
Smoking and inflammation contribute to the pathogenesis of chronic obstructive pulmonary disease (COPD), which involves changes in extracellular matrix. This is thought to contribute to airway remodeling and airflow obstruction. We have previously observed that long-term treatment with inhaled corticosteroids can not only reduce bronchial inflammation, but can also attenuate lung function decline in moderate-severe COPD. We hypothesized that inhaled corticosteroids and current smoking modulate bronchial extracellular matrix components in COPD.
To compare major extracellular matrix components (elastic fibers; proteoglycans [versican, decorin]; collagens type I and III) in bronchial biopsies 1) after 30-months inhaled steroids treatment or placebo; and 2) between current and ex-smokers with COPD.
We included 64 moderate-severe, steroid-naive COPD patients (24/40 (ex)-smokers, 62±7 years, 46 (31–54) packyears, post-bronchodilator forced expiratory volume in one second (FEV1) 62±9% predicted) at baseline in this randomized, controlled trial. 19 and 13 patients received 30 months of treatment with fluticasone or placebo, respectively. Bronchial biopsies collected at baseline and after 30 months were studied using (immuno)histochemistry to evaluate extracellular matrix content. Percentage and density of stained area were calculated by digital image analysis.
Thirty months of inhaled steroids increased the percentage stained area of versican (9.6% [CI 0.9 to 18.3%]; p = 0.03) and collagen III (20.6% [CI 3.8 to 37.4%]; p = 0.02) compared to placebo. Increased collagen I staining density correlated with increased post-bronchodilator FEV1 after inhaled steroids treatment (Rs = 0.45, p = 0.04). There were no differences between smokers and ex-smokers with COPD in percentages and densities for all extracellular matrix proteins.
These data show that long-term inhaled corticosteroids treatment partially changes the composition of extracellular matrix in moderate-severe COPD. This is associated with increased lung function, suggesting that long-term inhaled steroids modulate airway remodeling thereby potentially preventing airway collapse in COPD. Smoking status is not associated with bronchial extracellular matrix proteins.
The aim of this study was to assess cross-sectional and longitudinal correlations between urinary eosinophil protein X (uEPX) and other markers of asthma control and eosinophilic airway inflammation. Methods. We measured uEPX at baseline, after 1 year and after 2 years in 205 atopic asthmatic children using inhaled fluticasone. At the same time points, we assessed symptom scores (2 weeks diary card), lung function (forced expiratory volume in one second (FEV1)), airway hyperresponsiveness (AHR), and percentage eosinophils in induced sputum (% eos). Results. We found negative correlations between uEPX and FEV1 at baseline (r = −0.18, P = 0.01), after 1 year (r = −0.25, P < 0.01) and after 2 years (r = −0.21, P = 0.02). Within-patient changes of uEPX showed a negative association with FEV1 changes (at 1 year: r = −0.24, P = 0.01; at 2 years: r = −0.21, P = 0.03). Within-patient changes from baseline of uEPX correlated with changes in % eos. No relations were found between uEPX and symptoms. Conclusion. In this population of children with atopic asthma, uEPX correlated with FEV1 and % eos, and within-subject changes in uEPX correlated with changes in FEV1 and % eos. As the associations were weak and the scatter of uEPX wide, it seems unlikely that uEPX will be useful as a biomarker for monitoring asthma control in the individual child.
The Global Biodiversity Information Facility and the Genomic Standards Consortium convened a joint workshop at the University of Oxford, 27-29 February 2012, with a small group of experts from Europe, USA, China and Japan, to continue the alignment of the Darwin Core with the MIxS and related genomics standards. Several reference mappings were produced as well as test expressions of MIxS in RDF. The use and management of controlled vocabulary terms was considered in relation to both GBIF and the GSC, and tools for working with terms were reviewed. Extensions for publishing genomic biodiversity data to the GBIF network via a Darwin Core Archive were prototyped and work begun on preparing translations of the Darwin Core to Japanese and Chinese. Five genomic repositories were identified for engagement to begin the process of testing the publishing of genomic data to the GBIF network commencing with the SILVA rRNA database.
Publication bias jeopardizes evidence-based medicine, mainly through biased literature syntheses. Publication bias may also affect laboratory animal research, but evidence is scarce.
To assess the opinion of laboratory animal researchers on the magnitude, drivers, consequences and potential solutions for publication bias, and to explore the impact of size of the animals used, seniority of the respondent, working in a for-profit organization and type of research (fundamental, pre-clinical, or both) on those opinions.
All animal laboratories in The Netherlands.
Laboratory animal researchers.
Main Outcome Measure(s)
Median (interquartile range) strengths of beliefs on 5- and 10-point scales (1: totally unimportant to 5 or 10: extremely important).
Overall, 454 researchers participated. They considered publication bias a problem in animal research (7 (5 to 8)) and thought that about 50% (32–70) of animal experiments are published. Employees (n = 21) of for-profit organizations estimated that 10% (5 to 50) are published. Lack of statistical significance (4 (4 to 5)), technical problems (4 (3 to 4)), supervisors (4 (3 to 5)) and peer reviewers (4 (3 to 5)) were considered important reasons for non-publication (all on 5-point scales). Respondents thought that mandatory publication of study protocols and results, or the reasons why no results were obtained, may increase scientific progress but expected increased bureaucracy. These opinions did not depend on size of the animal used, seniority of the respondent or type of research.
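The survey results above are reported as medians with interquartile ranges, e.g. "7 (5 to 8)". For reference, a minimal sketch of one common way to compute these summaries; quartile conventions vary between packages, and this is not the authors' analysis code:

```python
def median(values):
    s = sorted(values)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2

def quartiles(values):
    # Median-exclusive method: split the sorted data at the median
    # and take the median of each half (one common IQR convention).
    s = sorted(values)
    n = len(s)
    lower = s[: n // 2]
    upper = s[(n + 1) // 2 :]
    return median(lower), median(s), median(upper)
```

Reporting the median with the first and third quartiles, as done here, is appropriate for bounded ordinal scales, where means and standard deviations can mislead.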
Non-publication of “negative” results appears to be prevalent in laboratory animal research. If statistical significance is indeed a main driver of publication, the collective literature on animal experimentation will be biased. This will impede the performance of valid literature syntheses. Effective, yet efficient systems should be explored to counteract selective reporting of laboratory animal research.
Variability in the extent of the descriptions of data (‘metadata’) held in public repositories forces users to assess the quality of records individually, which rapidly becomes impractical. Scoring records on the richness of their description provides a simple, objective proxy measure for quality that enables filtering to support downstream analysis. Crucially, such scoring should also spur improvements in metadata capture. Here, we introduce such a measure - the ‘Metadata Coverage Index’ (MCI): the percentage of available fields actually filled in a record or description. MCI scores can be calculated across a database, for individual records or for their component parts (e.g., fields of interest). There are many potential uses for this simple metric: for example, to filter, rank or search for records; to assess the metadata availability of an ad hoc collection; to determine the frequency with which fields in a particular record type are filled, especially with respect to standards compliance; to assess the utility of specific tools and resources, and of data capture practice more generally; to prioritize records for further curation; to serve as performance metrics of funded projects; or to quantify the value added by curation. Here we demonstrate the utility of MCI scores using metadata from the Genomes Online Database (GOLD), including records compliant with the ‘Minimum Information about a Genome Sequence’ (MIGS) standard developed by the Genomic Standards Consortium. We discuss challenges and address the further application of MCI scores: to show improvements in annotation quality over time, to inform the work of standards bodies and repository providers on the usability and popularity of their products, and to assess and credit the work of curators. Such an index provides a step towards putting metadata capture practices - and, in the future, standards compliance - into a quantitative and objective framework.
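The MCI as defined here - the percentage of available fields actually filled in a record - is straightforward to compute. A minimal sketch follows, with a hypothetical record and field list; the real calculation runs over GOLD records against standard-defined field sets:

```python
def mci(record, available_fields):
    # Metadata Coverage Index: percentage of the available fields
    # that are actually filled (non-empty) in a record.
    filled = sum(1 for f in available_fields
                 if record.get(f) not in (None, "", [], {}))
    return 100.0 * filled / len(available_fields)

# Hypothetical genome record scored against a made-up field list.
fields = ["organism", "isolation_source", "lat_lon", "collection_date"]
record = {"organism": "E. coli",
          "collection_date": "2012-05-01",
          "lat_lon": ""}  # present but empty, so it does not count
```

Note the design choice: an empty string counts as unfilled, so MCI rewards informative records rather than records that merely instantiate every field.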
Here we present a standard developed by the Genomic Standards Consortium (GSC) for reporting marker gene sequences—the minimum information about a marker gene sequence (MIMARKS). We also introduce a system for describing the environment from which a biological sample originates. The ‘environmental packages’ apply to any genome sequence of known origin and can be used in combination with MIMARKS and other GSC checklists. Finally, to establish a unified standard for describing sequence data and to provide a single point of entry for the scientific community to access and learn about GSC checklists, we present the minimum information about any (x) sequence (MIxS). Adoption of MIxS will enhance our ability to analyze natural genetic diversity documented by massive DNA sequencing efforts from myriad ecosystems in our ever-changing biosphere.
This report details the outcome of the 13th Meeting of the Genomic Standards Consortium. The three-day conference was held at the Kingkey Palace Hotel, Shenzhen, China, on March 5–7, 2012, and was hosted by the Beijing Genomics Institute. The meeting, titled From Genomes to Interactions to Communities to Models, highlighted the role of data standards associated with genomic, metagenomic, and amplicon sequence data and the contextual information associated with the sample. To this end the meeting focused on genomic projects for animals, plants, fungi, and viruses; metagenomic studies in host-microbe interactions; and the dynamics of microbial communities. In addition, the meeting hosted a Genomic Observatories Network session, a Genomic Standards Consortium biodiversity working group session, and a Microbiology of the Built Environment session sponsored by the Alfred P. Sloan Foundation.
Genomic Standards Consortium; microbiome; microbial metagenomics; fungal genomics; viral genomics; Genomic Observatories Network
Despite the availability of effective therapies, asthma remains a source of significant morbidity and use of health care resources. The central research question of the ACCURATE trial is whether maximal doses of (combination) therapy should be used for long periods in an attempt to achieve complete control of all features of asthma. An additional question is whether patients and society value the potential incremental benefit, if any, sufficiently to concur with such a treatment approach. We assessed patient preferences and cost-effectiveness of three treatment strategies aimed at achieving different levels of clinical control:
1. sufficiently controlled asthma
2. strictly controlled asthma
3. strictly controlled asthma based on exhaled nitric oxide as an additional disease marker
720 patients with mild to moderate persistent asthma from general practices with a practice nurse, aged 18-50 yr, on daily treatment with inhaled corticosteroids (more than 3 months of inhaled corticosteroid use in the previous year), will be identified via patient registries of general practices in the Leiden, Nijmegen, and Amsterdam areas in The Netherlands. The design is a 12-month cluster-randomised parallel trial with 40 general practices in each of the three arms. The patients will visit the general practice at baseline and at 3, 6, 9, and 12 months. At each planned and unplanned visit to the general practice, treatment will be adjusted with support of an internet-based asthma monitoring system supervised by a central coordinating specialist nurse. Patient preferences and utilities will be assessed by questionnaire and interview. Data on asthma control, treatment step, adherence to treatment, utilities and costs will be obtained every 3 months and at each unplanned visit. Differences in societal costs (medication, other (health) care and productivity) will be compared to differences in the number of limited activity days and in quality-adjusted life years (Dutch EQ5D, SF6D, e-TTO, VAS). This is the first study to assess patient preferences and cost-effectiveness of asthma treatment strategies driven by different target levels of asthma control.
Netherlands Trial Register (NTR): NTR1756
Formative assessments provide educators, parents, and students with feedback during the learning process to improve student achievement. They can be developed for all subjects and grades, and they are not limited to questions aligned to state standards in English Language Arts, Mathematics, Science, and Social Studies. Formative assessments can be used to guide instruction, optimizing student learning and supporting students' careers and lifelong goals.
Over the past few months, about 180,000 southerners have returned from the north of Sudan. Thousands more are still planning to come.
Many have returned home to help build the new nation. After decades of war, southern Sudan needs to be built almost from scratch, and there are enormous challenges. Most people still do not have access to clean water, sanitation, or adequate schools or healthcare.
Others are coming because of fear and uncertainty about what will happen to southerners in the north after the south secedes.
Many of the returnees have been living in very basic conditions, with no shelter, food or water for them on arrival. The influx has placed enormous strain on local communities, who were already struggling to find enough food, water and other necessities.
Oxfam's emergency response team -- with funding from the European Commission's Humanitarian Aid department, ECHO -- has been providing clean water, building latrines and running health campaigns to stop diseases spreading.
More about Oxfam's work in Sudan
Diogenes of Apollonia (5th c. B.C.E.)
Diogenes of Apollonia is often considered to be the last of the Presocratic Greek philosophers, although it is more than likely that Democritus was still active after the death of Diogenes. Diogenes’ main importance in the history of philosophy is that he synthesized the earlier Ionic monism of Anaximenes and Heraclitus with the pluralism of Empedocles and Anaxagoras. Diogenes serves as a sort of culminating point for Presocratic philosophy, uniting its differing tendencies toward emphasizing the absolute indivisibility or identity of reality with the equally absolute multiplicity of differing beings. Just as for Heraclitus, the truth for Diogenes was that one self-identical thing is all different things. By abiding by the Presocratic natural law that out of nothing comes nothing and into nothing, nothing goes, Diogenes proposed a definition of nature that identified it with life and explicitly affirmed that it is generated from itself. Diogenes’ main idea was that nature, the entire universe, is an indivisibly infinite, eternally living, and continuously moving substance he called, following Anaximenes, air. All the natural changes occurring throughout the universe—the various forms, the incalculable multiplicity the singular being takes—are one substance, air, under various modes. Air is also intelligent. Indeed, air is intelligence, or noesis in the Ancient Greek. Noesis is the purely intuitive, rational thinking that expresses and sustains all cosmic processes. As the self-causal power of rational, intuitive intelligence, air is also a god. When defining air solely as an atmospheric condition, as we do today, and in relation to the three other main elements, namely, fire, water, and earth, Diogenes’ air becomes the soul of singular beings. The soul is the source of every living thing’s sensitive ability to live, know, and thus also affect and be affected by other singular beings. 
The soul is also the way the absolute cosmic air identifies itself through a number of living differentiations as the means by which living creatures exhibit their differing degrees of temperature and density. Through the soul, air is sometimes rarer or more condensed, and likewise sometimes hotter or cooler. The soul is the life-principle that, when mixed with and operating through other aerated forms like blood and veins, allows for the living functions of all singular beings to remain self-sustaining until the necessary process of decomposition affects them. Such decomposition, however, is just another means for nature’s processes to continue to function insofar as each decomposed being is the simultaneous site for the next modification that air will engender and express through itself. Ultimately, for Diogenes, the essence of all reality, identified as intelligent and divine air, is that it is both nature and life, as nature and life are identical as one absolute substance.
Table of Contents
- Life and Work
- Substance Monism
- Intelligence and Divinity
- Cosmology and Physiology
- Influence and Historical Role
- References and Further Reading
The exact chronology of the life of Diogenes of Apollonia is unknown, but most accounts place the date of his acme somewhere around 460-430 BCE. It was once believed that he was from the Cretan city of Apollonia, but it is now thought that the Apollonia of which he was a citizen was the Milesian colony on the Pontus that was actually founded by the Presocratic philosopher Anaximander, and which is today the Bulgarian Black Sea resort town of Sozopol. It is also thought Diogenes lived for some time in Athens and that while there, he became so unpopular (being thought an atheist) that his life was in danger. Further proof of Diogenes’ probable residence in Athens is the parody we find of him in Aristophanes’ The Clouds, even though it is Socrates who is portrayed as holding Diogenes’ views. Diogenes Laertius writes, “Diogenes, son of Apollothemis, an Apolloniate, a physicist and a man of exceptional repute. He was a pupil of Anaximenes, as Antisthenes says. His period was that of Anaxagoras” (IX, 57). Theophrastus also mentions that Diogenes of Apollonia was ‘almost the youngest’ of the physical philosophers. It has been persuasively put forward that Diogenes Laertius was more than likely confused when he wrote that Diogenes of Apollonia was a pupil of Anaximenes, considering the agreed upon earliness and geographic location of Diogenes by most commentators. Like Anaximenes, however, Diogenes held that the fundamental substance of nature is air, but it is highly unlikely he could have studied with him. On the other hand, the view that Diogenes flourished in roughly the same period as Anaxagoras is uncontroversial.
There has been much debate over whether Diogenes wrote a single book or even as many as four. Only fragments of Diogenes’ work survive. A majority of the fragments that we have of Diogenes’ work come from Simplicius’ commentaries on Aristotle’s Physics and On the Heavens. Simplicius writes,
Since the generality of enquirers say that Diogenes of Apollonia made air the primary element, similarly to Anaximenes, while Nicolaus in his theological investigation relates that Diogenes declared the material principle to be between fire and air…, it must be realized that several books were written by this Diogenes (as he himself mentioned in On Nature, where he says that he had spoken also against the physicists—whom he calls ‘sophists’—and written a Meteorology, in which he also says he spoke about the material principle, as well as On the Nature of Man); in the On Nature, at least, which alone of his works came into my hands, he proposes a manifold demonstration that in the material principle posited by him is much intelligence. (Kirk, Raven, and Schofield: 1983, 435)
The debate is over whether On Nature is the one book that Diogenes wrote, covering many different yet interrelated topics (such as man, meteorology, and the Sophists), or whether On Nature, On the Nature of Man, Meteorologia, and Against the Sophists were four separate works. Diels, the early German collator of the Presocratic fragments, preferred the former option (DK 64B9), while commentators like Burnet (EGP 353) prefer the latter view. It is also entirely possible that Simplicius was either confused or misinformed in his reading of Diogenes, because the quotations of Diogenes’ work which he himself provides contain discussions, for example, on the nature of man, which should have been impossible if indeed he only had a copy of On Nature in his possession. At the same time, we have evidence from a work of the medical author Galen that a certain Diogenes wrote a treatise that dealt with a number of diseases and their causes and remedies. It is probable that this was Diogenes of Apollonia, because we have other reports from Galen (and Theophrastus) that Diogenes held views about diagnosing a patient by analyzing his tongue and general complexion. This evidence, along with his discussions regarding anatomy and the function of veins, leads to the probability that Diogenes was a professional doctor of some sort who could have produced a technical medical treatise. Another interesting piece of evidence that suggests Diogenes could have been a doctor is the methodological claim he makes regarding his own form of writing, which sounds very similar to what is said in the beginning of some of the more philosophical works in the Hippocratic corpus. Diogenes Laertius says that this was the first line of Diogenes’ book: “It is my opinion that the author, at the beginning of any account, should make his principle or starting-point indisputable, and his explanation simple and dignified” (Fr. 1).
Such a no-nonsense approach to writing was often championed by the early medical thinkers.
Following his own recommendation that an author should clearly state his purpose up front, Diogenes began his account of nature by explicitly establishing his principle, or starting-point. He writes:
My opinion, in sum, is that all existing things are differentiated from the same thing, and are the same thing. And this is manifest: for if the things that exist at present in this world-order—earth and water and air and fire and all the other things apparent in this world-order—if any of these were different from the other (different, that is, in its own proper nature), and did not retain an essential identity while undergoing many changes and differentiations, it would be in no way possible for them to mix with each other, or for one to help or harm the other, or for a growing plant to grow out of the earth or for a living creature or anything else to come into being, unless they were so composed as to be the same thing. But all these things, being differentiated from the same thing, become different kinds at different times and return into the same thing. (Fr. 2)
Diogenes was what we today call a ‘substance monist’. Substance monism is the idea that everything is one thing: all putatively different things essentially are one self-identical thing. Substance monism is an answer to the question, ‘what is, and how many are there?’ According to Diogenes, for anything to be, it must paradoxically be both identical to and different from the one, the thing that is, the one substance that is everything. The differences of things from the one thing that is, however, are never ‘proper,’ as Diogenes argues. That is to say, the differences of things are never substantial but only adjectival.
Now, while we do not find the term ‘substance’ in the fragments we have of Diogenes’ writing, the idea of a substance, and, moreover, the idea of substance monism, can help us understand what Diogenes meant when he said ‘all existing things are differentiated from the same thing, and are the same thing.’ A substance is what a thing is. It is the basic being of a thing; the essential reality a thing has to have in order for it be what it is. Things are substances if they essentially are the things they are. The essence of a substance is its own existence. This line of arguing was common to all the Presocratics because for them it was a natural law that out of nothing came nothing and into nothing, nothing went. To truly be, something had to be the essential source or cause of its own existence. Reality or being, therefore, for most of the Presocratics, and especially for Diogenes, is absolutely immanent to itself, and so all the differences there are in nature inhere in, or are internal to, it. This line of reasoning was an early version of what was to become the ontological argument. A Substance is a thing that exists because that is what it is: a thing that exists, a thing that exists on the basis of its own immanent self-sufficiency.
Diogenes was concerned with understanding what it is that makes a thing be what it is, what a thing’s substantial being is, and how many of these things or substances there really are. He wanted to know what makes a thing substantial. To understand what things are, what makes things be what they are, and how many of them there are, Diogenes simply observed both what he himself was composed of and what the primary qualities of everything he had ever experienced and thus thought about were. Like all the Presocratic philosophers, Diogenes’ chief observation was that all things are natural or physical. Diogenes observed that all things of this ‘present world-order’ are natural or physical elemental qualities such as earth, water, fire, and air. The observation that all things are natural or physical also implied that all things change, and that everything is moving in some degree, both growing and decaying, composing and decomposing, and speeding up and slowing down. For Diogenes, then, all things are physical and moving, for they are all natural and living. Therefore, the one self-identical substance that is in essence all different things is nature itself, which is the mobile, living, and absolutely physical identity of the universe. Furthermore, all the different things nature expresses of itself, or modifies via itself are variable forms of earth, water, fire, and air, which compose and decompose with each other in many ways as nature lives and moves. The elemental qualities of nature differ from each other only in degree and are in essence simply a variety of ways in which nature is identical to itself.
The observation that all things are physical, mobile, and different only in elemental degrees led Diogenes to note that if this is indeed the case then all things must be interrelated in some way. Relations, however, seem to demand some form of proper or substantial difference in order to occur. Diogenes was troubled by the apparent demands of proper duality implied by the living and flowing relations he observed as occurring throughout all of nature. The problem he had was that if all the things he observed relating throughout nature were really different from each other, then there was nothing in them or about them that made such relations even possible in the first place (for how could things truly relate that are really different from each other?) and thus, even more threateningly, everything he perceived as expressing a certain substantial identity was then utterly deceptive and false. In response to this dilemma, he noticed that if things relate in some degree, as they certainly seem to, there must be at least something they share, something in common between them that enables them to relate. That it is manifestly clear that things relate allowed Diogenes to assert the equally indubitable fact that there must be something between them they must all share that allows them to relate. If things were so different from each other that either they could not relate at all or that their relations brought about only their total fragmentation or annihilation, nothing in nature could grow or move or become in any way radically contrary to what he observed as happening in nature. For this reason, Diogenes posited that there must be some one thing, some self-identical substance that allows all the naturally different things to interact, relate, and compose and decompose with each other. 
Without a fundamental substance implicitly and inherently linking all things together, nothing would have a common ground to share and work upon or a situational medium through which to change and grow. Therefore, there must be a thing that makes all things relatable, a thing that allows all things to be different from each other to some degree, yet still be connected enough to each other to allow them to interact and compose and decompose with each other. This thing, for Diogenes, was going to have to be every where, all the time because there was nowhere at any time that he did not observe natural bodies moving, growing, and relating.
Substance monism, therefore, served not only to explain the absolute immanence and essential self-identity of nature to itself, it also explained how all the kinds of living, growing, and interacting of singular beings occur throughout nature. By sharing the common substance they all modify, all the different things of nature, all the elemental and formal means of composing and decomposing could relate, interact, and help and harm each other through the infinite and eternal process of natural or physical growth and decay. In other words, for Diogenes and his kind of substance monism, being is becoming, nature is nurturing, and all forms of movement, work, creation, destruction, and causality are so many ways one self-identical substance naturally lives the life of all its self-differentiated forms. For Diogenes, substance monism entails that nature is life and that, in essence, the universe lives. One absolutely physical identity underwrites all the apparent diversity.
Diogenes’ substance monism may seem radically opposed to what we believe today, especially with respect to our definitions of nature and life. Yet, even in Diogenes’ own time, his thinking was considered to be as peculiar and eclectic as that of many of the other Presocratics. Presocratic philosophy was often considered, in its own time and even today, to be neither religious nor scientific, but rather idiosyncratic and esoteric because of its emphasis on achieving the experience of a direct and immediate intuition of the essence of nature. Such an intuition defines the rarity and excellence of Presocratic wisdom. Like other Presocratics, Diogenes was a sage-like independent spirit who neither followed nor founded a school and who made use of the best elements of other philosophies he thought worthy of greater elaboration and which could yield him the wisdom he sought and loved. One such philosopher he borrowed from, as we mentioned, was Anaximenes. Like Anaximenes, Diogenes maintained that air is the one substance of which everything is made and of which everything is a mode. In his Refutation of All Heresies, Hippolytus reports,
Anaximenes…said that infinite air was the principle, from which the things that are becoming, and that are, and that shall be, and gods and things divine, all come into being, and the rest from its products. The form of air is of this kind: whenever it is most equable, it is invisible to sight, but is revealed by the cold and the hot and the damp and by movement. It is always in motion; for things that change do not change unless there be movement. Through becoming denser or finer it has different appearances; for when it is dissolved into what is finer it becomes fire, while winds, again, are air that is becoming condensed, and cloud is produced from air by felting. When it is condensed still more, water is produced; with a further degree of condensation earth is produced, and when condensed as far as possible, stones. The result is that the most influential components of generation are opposites, hot and cold. (Kirk, Raven, and Schofield: 1983, 145)
Diogenes agreed with Anaximenes and proposed that air is the one substance that is reality. Following Anaximenes, Diogenes argued that air is the essential identity of all different things and that all different things are just so many forms of condensed or rarefied air. Nature, as air, is an infinite and eternal process that, through its indivisible mobility and continuity, constantly becomes all the ways it comes to be and passes away through an absolute multiplicity of singular beings. All different things are momentarily denser or finer forms or modes of one ubiquitous air. Through Simplicius, Theophrastus tells us,
Diogenes the Apolloniate, almost the youngest of those who occupied themselves with these matters (that is, physical studies), wrote for the most part in an eclectic fashion, following Anaxagoras in some things and Leucippus in others. He, too, says that the substance of the universe is infinite and eternal air, from which, when it is condensed and rarefied and changed in its dispositions, the form of other things comes into being. This is what Theophrastus relates about Diogenes; and the book of Diogenes which has reached me, entitled On Nature, clearly says that air is that from which all the rest come into being. (Fr. 2)
Now, there is for us something obviously problematic about Diogenes’ thinking regarding air. The problem we have with trying to reconcile Diogenes’ thinking with what we know today is figuring out how ‘air’ can still be an absolutely cosmic, indivisibly infinite, and eternally living substance when it is limited to only the earth’s atmosphere. We understand air today to be reducible to other properties. To approach this problem it must first be understood what Diogenes meant by the term we are using. Aer in Ancient Greek was rooted in the verb ‘to blow, or breathe’ and the term often denoted a certain sense of loftiness and light, spirited movement. Aer was also associated with the wind, the sky, and brightness. What Diogenes meant by air was the celerity and rapidity of the light and fluid movement of nature’s waxing and waning, its constant condensing and rarefying, its expanding and contracting. Air, for Diogenes, is the gaseous fluidity of all living and natural phenomena. It is important to understand that by ‘air’ Diogenes did not intend the grand total of all the substantially distinct atoms of oxygen, nitrogen, argon and so on that compose our atmosphere, but rather the simple fact that all things are natural, living, and moving. Air, for Diogenes, was both the constant stirring of the atmosphere as a singular elemental formation, and also all the ‘inhalations’ and ‘exhalations’ of the planetary and celestial movements. Air expresses the becoming of being, the living of nature. A mobile movement, a movement conceived not as the attribute or property of an immobile substance, but rather as a substance itself, movement itself conceived as substance, is what Diogenes understood by air. Air is the indivisible body that is the universe, all that is: “this very thing [air] is both eternal and immortal body, but of the rest some come into being, some pass away” (Fr. 7). And of the rest that come into being and pass away, they are all ways air modifies itself. 
Atmospheric air is, therefore, another way absolute, substantial air (aer) becomes and expresses itself.
Diogenes, moreover, says that air is intelligence. The Ancient Greek term for intelligence is noesis. Noesis is not just intelligence in the sense of being sharp or smart. What Diogenes designated by noesis was the active power of a mind to immediately intuit and know what it thinks. Noesis is not so much a belief held by a mind, as it is the activity of thinking itself that is a mind. A mind is an actively thinking thing. Now, we might be wondering how the absolute cosmic substance, air, could also have an immediately intuitive and active mind, that is, how it could also be a thinking thing. First, it is important to keep in mind that everything was physical for Diogenes. Thinking was a physical process for him that was not limited to only organisms with brains. (There will be more on this in the next section.) In other words, thinking did not solely mean cognition for Diogenes. Air is intelligence itself; pure thought intuitively thinking itself. Just as all singular bodies are in air as modes or ways it modifies and transforms itself through condensation and rarefaction, so too are all minds, all intellects or intelligent beings, in air as modes or ideas through which it immediately intuits and thus thinks itself. If air is intelligence, or purely active thinking, and intelligence is thus the one indivisible body that imbues everything, then every singular body is also going to be imbued with mind. Second, Diogenes argued that intelligence was the power inherent to air with which it could absolutely and internally differentiate itself in a rational and measured fashion. We have already seen the four main elements of nature as an example of this rational and measured differentiation. Intelligence was for Diogenes a sufficient reason for all the differences of degree found throughout nature:
For, he [Diogenes] says, it would not be possible without intelligence for it [sc. the substance] so to be divided up that it has measures of all things—of winter and summer and night and day and rains and winds and fair weather. The other things, too, if one wishes to consider them, one would find disposed in the best possible way. (Fr. 3)
The intelligence and the soul, the thinking and the living of singular beings are modifications of substantial air-intelligence. Through the cessation of breathing, sensing, and knowing, living beings decompose and lose their intelligence, but only so there can be a simultaneous re-composition of air-intelligence elsewhere. Diogenes says, “Men and the other living creatures live by means of air, through breathing it. And this is for them both soul [that is, life principle] and intelligence, as will be clearly shown in this work; and if this is removed, then they die and intelligence fails.” (Fr. 7)
Diogenes also says that air is divine. Divinity designated natural power for the Presocratics, who also tended not to anthropomorphize their gods. Instead, a divinity for the first philosophers was more a natural force, usually an elemental power found permeating all of nature and imbuing it with all its creative and destructive power. Along with substance monism, pantheism—the idea that everything is divine, that God is all things—was an idea shared by many of the Presocratics. Diogenes’ substance monism definitely entailed pantheism. Air-intelligence is divine. Only a god could remain identical to itself while also rationally differentiating itself through an infinity of singular beings. Only a god as well could have the intuitive intelligence to actively and affirmatively know all the self-identical differentiations it expressed of itself. As Diogenes says, it is only nature conceived as an absolutely immanent and divine air-intelligence that could be “both great and strong and eternal and immortal and much-knowing” (Fr. 8). Diogenes summarized all these points wonderfully when he wrote:
And it seems to me that that which has intelligence is what men call air, and that all men are steered by this and that it has power over all things. For this very thing seems to me to be a god and to have reached everywhere and to dispose all things and to be in everything. And there is no single thing that does not have a share of this; but nothing has an equal share of it, one with another, but there are many fashions both of air itself and of intelligence. For it is many-fashioned, being hotter and colder and drier and moister and more stationary and more swiftly mobile, and many other differentiations are in it both of taste and of color unlimited in number. And yet of all living creatures the soul is the same, air that is warmer than the outside, in which we exist, but much cooler than that near the sun. But in none of living creatures is this warmth alike (since it is not even so in individual men); the difference is not great, but as much as still allows them to be similar. Yet it is not possible for anything to become truly alike, one to the other, of the things undergoing differentiation, without becoming the same. Because, then, the differentiation is many-fashioned, living creatures are many fashioned and many in number, resembling each other neither in form nor in way of life nor in intelligence, because of the number of differentiations. Nevertheless, they all live and see and hear by the same thing, and have the rest of their intelligence from the same thing. (Fr. 5)
Singular beings are not only composed of air, they also live and have intelligence by breathing air. The soul or life principle of all things is an absolute and divine air-intelligence that, in a sense, breathes through itself in all the forms it takes on. Air is both eternal and omnipresent as it takes on an unlimited number of forms. Like many of the Presocratics, Diogenes provides an account of how air modifies itself through a variety of physical compositions ranging from galaxies and solar systems to respiratory, circulatory, and cognitive systems. Diogenes provides us with a cosmogony that explains the creation of the earth and sun on the basis of the condensation and rarefaction of air. In the pseudo-Plutarchean Stromateis, which Eusebius preserved, it is stated that:
Diogenes the Apolloniate premises that air is the element, and that all things are in motion and the worlds innumerable. He gives this account of cosmogony: the whole was in motion, and became rare in some places and dense in others; where the dense ran together centripetally it made the earth, and so the rest by the same method, while the lightest parts took the upper position and produced the sun. (Kirk, Raven, and Schofield: 1983, 445)
Diogenes also made some cosmological observations. He gave an interesting account of heavenly bodies that included an attempt to explain meteorites.
Diogenes says that the heavenly bodies are like pumice-stone, and he considers them as the breathing-holes of the world; and they are fiery. With the visible heavenly bodies are carried round invisible stones, which for this reason have no name: they often fall on the earth and are extinguished, like the stone star that made its fiery descent at Aegospotami. (Kirk, Raven, and Schofield: 1983, 445)
There are many similarities between Diogenes’ cosmogony and cosmology and those of his fellow Presocratics. First, like many other Presocratics, he posits the existence of innumerable worlds. It makes sense that Diogenes asserts an immeasurable plurality of worlds because he places no restrictions on the number of differentiations and compositions air can take. Why wouldn’t there be a plethora of worlds littered throughout the universe, insofar as worlds are, by definition, just momentary formations of the universe (air) anyway? Second, it is from Anaxagoras that Diogenes likely borrowed the idea of a noetic substance forming a vortex within itself. Third, it was common in the Ionic tradition to describe the origin of the earth as the formation of more concentrated and denser material in the center of such a vortex. Likewise, the rarer material would go to the extremes of the vortex, following the law that differentiation is a symmetrical process whereby like follows like. Lighter air, therefore, tends towards greater heights and extremities, while denser air tends to concentrate into relative core positions. With respect to astronomical objects, it seems Diogenes said heavenly bodies were like pumice stone because pumice is both glowing and light, or ‘airy,’ and composed of translucent and very porous bubble walls, which are, once again, qualities that accommodate the substance Diogenes countenances.
From extrasolar objects and the solar system down to the earth itself, Diogenes continues to explain all physical and psychological phenomena as so many self-modifying processes of one substantial air. Within and through the atmospheric air of our planet, Diogenes addresses the thinking and sensing of particular organisms. The law of like following like is as applicable on earth as it is throughout the cosmos. In Theophrastus’ de sensu, Diogenes is reported to have a detailed theory of sensation and cognition based on the reception and circulation of air within and between singular beings. Each of the five senses is dealt with in terms of how it processes air. Degrees of intelligence or cognitive ability are also delineated by the amount and kind of air each being possesses. The differences between beings are defined by how swiftly, and with how much agility, they engender and circulate air. Some beings, for example, have more intelligence, or more complex brain activity, while others have, say, a better sense of smell. All kinds of perception, however, are ways that air processes and modifies itself.
Diogenes attributes thinking and the senses, as also life, to air. Therefore he would seem to do so by the action of similars (for he says that there would be no action or being acted upon, unless all things were from one). The sense of smell is produced by the air round the brain…Hearing is produced whenever the air within the ears, being moved by the air outside, spreads toward the brain. Vision occurs when things are reflected on the pupil, and it, being mixed with the air within, produces a sensation. A proof of this is that, if there is an inflammation of the veins (that is, those in the eye), there is no mixture with the air within, nor vision, although the reflexion exists exactly as before. Taste occurs to the tongue by what is rare and gentle. About touch he gave no definition, either about its nature or its objects. But after this he attempts to say what is the cause of more accurate sensations, and what sort of objects they have. Smell is keenest for those who have least air in their heads, for it is mixed most quickly; and, in addition, if a man draws it in through a longer and narrower channel; for in this way it is more swiftly assessed. Therefore some living creatures are more perceptive of smell than are men; yet nevertheless, if the smell were symmetrical with the air, with regard to mixture, man would smell perfectly….(Kirk, Raven, and Schofield: 1983, 448).
It seems that for Diogenes correspondence in perception entails a matching-up of the degrees of air within the brain with air that is being received through the sensitive faculties. Sensation itself is the reception of air by air and so is a mixing of airs through the aerated blood channels that are themselves oxygenated through respiration. (Diogenes also attempted an anatomy of the veins.) Usually, the reception of air by air takes place in an organism as an agitation or irritation of the sense organs and thus also the brain. An accurate or adequate perception is one in which there is a mutually interpenetrating coalescence of finer air flows within, between, and amongst the parts of organisms and the finer air received through sensations. This entails that a certain kind of affective or sensitive openness, which can be regarded as a susceptibility to finer air, allows for greater perceptual correspondences with the other kinds of air-composites. Such affective openness implies that one must come to pursue or avoid interaction with other air-composites in accordance with how they increase or decrease one’s respiratory and cognitive abilities. The trick is to have sensitive correspondences serve the rationally differentiated regulatory systems that allow organisms to survive and persevere. Overall, Diogenes was one of the first thinkers to emphasize the relationship between sensation, respiration, and cognition.
Theophrastus continues in his report of Diogenes’ thinking regarding sensation and cognition. Pleasure and pain are also definable by the sensitive reception and circulation of air.
That the air within perceives, being a small portion of the god, is indicated by the fact that often, when we have our mind on other things, we neither see nor hear. Pleasure and pain come about in this way: whenever air mixes in quantity with blood and lightens it, being in accordance with nature, and penetrates through the whole body, pleasure is produced; but whenever the air is present contrary to nature and does not mix, then the blood coagulates and becomes weaker and thicker, and pain is produced. Similarly, confidence and health and their opposites…Thought, as has been said, is caused by pure and dry air; for a moist emanation inhibits the intelligence; for this reason thought is diminished in sleep, drunkenness, and surfeit. That moisture removes intelligence is indicated by the fact that other living creatures are inferior in intellect, for they breathe the air from the earth and take to themselves moister sustenance. (Kirk, Raven, and Schofield: 1983, 448)
The key to cultivating a stronger intelligence, greater pleasures, and a good sense of taste (for the wise man is the sage, the sapiens, the one who tastes well) is to take in, breathe, and allow to permeate one’s organic structure the finer, lighter, drier, warmer, and swifter air. To breathe well is to live well. To stand erect, awake, warm-blooded, firm, and at attention is to manifest a stronger and more well-regulated and attuned disposition. Like Heraclitus, Diogenes advises that one must avoid excessive moistening. To become more god-like, more substantially identical with what one essentially is, one should actively, aggressively, and affirmatively seek out other aerated bodies of similar dispositions and compose well with them. Certain compositions lead to the reproduction of new organic forms. Since air is the vitality of its own natural and substantial existence, it will continuously reproduce itself through the distribution of its own aerated seeds. Indeed, air, understood as nature’s ubiquitous and eternal living, is constantly conceiving itself, impregnating and giving birth to its own various forms along gradients of denser or finer air.
Diogenes, it is worth mentioning, also had an interest in embryology. The self-conception of air takes place through the intermingling of aerated sperm and eggs. For Diogenes, life grows naturally and intelligently at all levels because of the aerated nature of blood and veins.
And in the continuation he shows that also the sperm of living creatures is aerated and acts of intelligence take place when the air, with the blood, gains possession of the whole body through the veins; in the course of which he gives an accurate anatomy of the veins. Now in this he clearly says that what men call air is the material principle. (Fr. 5)
The Eleatic philosophers were monists, believing that were there two things, we would have to say of one that it is not (the other). They thought, however, that one may not speak of what is not, as one would be speaking of nothing. The fact that there is only one thing in existence was thought to entail that change could not occur, as there would need to be two things for there to be the relata required for a causal relation. Diogenes seems to have agreed with the monistic aspect of the Eleatic philosophy while attempting to accommodate the possibility of change. His move was to claim that the one thing might be a causa sui, and that the change we experience is the alteration thereof. The substance best suited as the substrate was thought to be air, a view reminiscent of Anaximenes. One also finds, arguably, the influence of Anaxagoras, when one considers the claim that this substance is intelligence or nous. Finally, it is worth noting that the idea that the universe is a living being is broached in Plato’s Timaeus. And the idea of substance monism has had other advocates in the history of philosophy, the most famous perhaps being Benedict Spinoza.
There are no monographs on Diogenes of Apollonia in English. Unfortunately, Diogenes has been given rather brief attention throughout the secondary literature. Diogenes is usually addressed in chapters in books on the Presocratics.
- Barnes, Jonathan. The Presocratic Philosophers. London: Routledge & Kegan Paul (1 vol. ed.), 1982, 568-592.
- Burnet, J. Early Greek Philosophy. London: Black (4th ed.), 1930.
- Diels, H. “Leukippos und Diogenes von Apollonia.” RM 42, 1887, 1-14.
- Diller, H. “Die philosophiegeschichtliche Stellung des Diogenes von Apollonia.” Hermes 76, 1941, 359-81.
- Guthrie, W.K.C. The Presocratic Tradition from Parmenides to Democritus. Vol. II. Cambridge: Cambridge University Press, 1993, 362-381.
- Huffmeier, F. “Teleologische Weltbetrachtung bei Diogenes von Apollonia.” Philologus 107, 1963, 131-38.
- Jaeger, Werner. The Theology of the Early Greek Philosophers. Oxford: Oxford University Press, 1967, 155-171.
- Kirk, G.S., J.E. Raven, and M. Schofield. The Presocratic Philosophers. 2nd edn. Cambridge: Cambridge University Press, 1983.
- Laks, André. “Soul, Sensation, and Thought.” The Cambridge Companion to Early Greek Philosophy. Cambridge: Cambridge University Press, 1999, 250-270.
- Laks, André. Diogène d’Apollonie. Lille: Presses Universitaires de Lille, 1983.
- McKirahan, Richard D. Philosophy Before Socrates. Indianapolis: Hackett Publishing Company, 1994, 344-352.
- Shaw, J. R. “A Note on the Anatomical and Philosophical Claims of Diogenes of Apollonia.” Apeiron 11.1, 1977, 53-7.
- Warren, James. Presocratics. Berkeley: University of California Press, 2007, 175-181.
University College Cork
Archerfish tune their shots to universal properties of prey adhesion
Archerfish exhibit the remarkable ability to hunt for insects and other small terrestrial animals by firing precisely aimed streams of water that knock prey onto the water's surface. These water shots were once thought to be all-or-none in quality, but researchers have now discovered new levels of sophistication in the archerfish's hunting strategy that shed light on how this impressive predatory behavior has evolved. The findings are reported by Thomas Schlegel, Christine Schmid, and Stefan Schuster of the Universität Erlangen-Nürnberg in Erlangen, Germany, and appear in the October 10th issue of the journal Current Biology, published by Cell Press.
By employing high-resolution imaging of water streams fired during archerfish hunting, researchers have discovered that archerfish automatically tune the force they use to dislodge prey according to prey size, and that this strategy appears to be resistant to alteration by experience: It occurs even when the fish have been placed for two years in an environment that has been manipulated to make such tuning unnecessary for successful hunting. The findings suggest that the tuning aspect of the archerfish's hunting strategy is not as plastic in response to learning as might have been thought. Instead, the strategy may reflect the evolution of archerfish behavior in accordance with a recently discovered scaling law: Among animals such as flies and lizards, an animal's adhesive force--its natural tendency to stick to a surface--is closely proportional to the animal's size.
The researchers showed that for any given size of prey, the archerfish tune their attacks such that prey are hit with about ten times the force that adhesive organs of animals of that size could sustain.
The new work also revealed that the archerfish's hunting technique is metabolically costly and that the fish tune the force of their water shots by adjusting the mass of water in a shot, rather than altering the initial release pressure and speed of the shot. This turns out to be the most efficient way of adjusting force--by doubling the mass of water shot, the fish double the force applied to prey while only doubling the energetic cost of the shot; doubling the speed of the shot would quadruple the energetic cost.
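The mass-versus-speed trade-off follows from elementary mechanics: the momentum a shot carries scales with mass times speed, while its kinetic energy scales with mass times speed squared. A minimal sketch of the arithmetic (the numbers are illustrative, not measured archerfish values):

```python
def shot_impulse_and_cost(mass_kg, speed_m_s):
    """Return (impulse, kinetic_energy) for a water shot.

    Impulse (kg*m/s) is a rough proxy for the force delivered to prey;
    kinetic energy (J) is a rough proxy for the metabolic cost of the shot.
    """
    impulse = mass_kg * speed_m_s
    energy = 0.5 * mass_kg * speed_m_s ** 2
    return impulse, energy

base_i, base_e = shot_impulse_and_cost(0.001, 2.0)    # baseline: 1 g of water at 2 m/s
mass_i, mass_e = shot_impulse_and_cost(0.002, 2.0)    # doubled water mass
speed_i, speed_e = shot_impulse_and_cost(0.001, 4.0)  # doubled speed

# Doubling mass doubles both impulse and energetic cost; doubling speed
# also doubles impulse but quadruples the energetic cost.
```

Under this sketch, doubling the mass buys the same doubling of impulse as doubling the speed, but at half the energetic price, which matches the efficiency argument above.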
The researchers include Thomas Schlegel, Christine J. Schmid and Stefan Schuster of Universität Erlangen-Nürnberg, Institut für Zoologie II, in Erlangen, Germany.
Schlegel et al.: "Archerfish shots are evolutionarily matched to prey adhesion." Publishing in Current Biology Vol 16 No 19, R836-7. DOI 10.1016/j.cub.2006.08.082. www.current-biology.com
Last reviewed: By John M. Grohol, Psy.D. on 30 Apr 2016
Published on PsychCentral.com. All rights reserved.
CHAPEL HILL, NC — Roundworms and humans have more in common than you’d think. That’s why the first-ever integrated analysis of the molecular processes that control genome function in an animal has the potential to speed understanding of the molecular processes in human cells. A collaborative group of scientists across the world, including researchers at UNC, carried out the research.
“It’s like moving from a satellite view of a land mass to an on-the-ground perspective,” explains Jason Lieb, PhD, a senior author on the Dec. 24 Science paper. Dr. Lieb is a professor of biology in the UNC College of Arts and Sciences, and director of the Carolina Center for Genome Sciences.
“And since the C. elegans worm genome is much smaller than but functionally very similar to a human’s, we make discoveries quickly and then translate our findings to humans.” Lieb is a member of UNC Lineberger Comprehensive Cancer Center.
The paper, titled "Integrative Analysis of the Caenorhabditis elegans Genome by the modENCODE Project,” was authored by Lieb and other members of the model organism ENCyclopedia Of DNA Elements (modENCODE) Consortium, which is funded by the National Human Genome Research Institute (NHGRI), part of the National Institutes of Health. A companion Science paper was published reporting findings of a similar study in the fruit fly (Drosophila melanogaster). In addition, more than a dozen companion modENCODE papers are published online in the journals Nature, Genome Research and Genome Biology.
The roundworm and the fruit fly genome sequences were initially sequenced alongside the Human Genome Project. Their sequences are routinely compared to the human genome sequence, in experiments that rely on millions of years of evolution. It turns out that particularly important stretches of DNA in the genome are “conserved,” or retained throughout evolutionary history. The distant evolutionary connection between flies, worms and humans is what makes research in these model organisms relevant to human biology.
“These findings will enable scientists everywhere to carry out experiments in flies and worms to better understand the relationship between molecular and biological activities in these animals,” said NHGRI Director Eric D. Green, MD, PhD. “What we learn from these model organisms will contribute significantly to our understanding of health and disease in humans.”
Lieb studies chromatin, the protein superstructure that packages DNA and controls which sections of the genome are accessible to regulatory molecules that convert the genetic code into cellular action. “What we found was that the unique properties of the C. elegans chromosomes -- for example, the way their chromosomes behave during cell division and the way that their X chromosomes are regulated -- gave us new insight about the functions of proteins that are conserved in humans.”
The researchers examined the organization and structure of chromatin in the cells throughout the life stages of each organism. Strikingly, both groups -- those studying the worm and those studying the fly -- discovered specific chromatin signatures associated with the regulation of genes in their respective organism. Unique chromatin signatures were associated with distinct regions of the genome that turn genes on or off.
Other authors are from Cold Spring Harbor Laboratory, Cold Spring Harbor, N.Y.; Dana-Farber Cancer Institute, Boston, Mass.; European Molecular Biology Laboratory, Heidelberg, Germany; Fred Hutchinson Cancer Research Center, Seattle, Wash.; Harvard School of Public Health, Boston, Mass.; Lawrence Berkeley National Laboratory, Berkeley, Calif.; Max Planck Institute of Molecular Cell Biology and Genetics, Dresden, Germany; National Human Genome Research Institute, Bethesda, Md.; New York University, New York City, N.Y.; NimbleGen Systems, Madison, Wis.; Ontario Institute for Cancer Research, Toronto, Canada; Sloan-Kettering Institute, New York, N.Y.; Stanford University, Palo Alto, Calif.; State University of New York at Stony Brook; University of Toronto, Ontario, Canada; University of California- Berkeley, University of California- San Diego, University of California- Santa Cruz; University of Cambridge, Cambridge, UK; University of Michigan, Ann Arbor, Mich.; University of Washington, Seattle, Wash.; Vanderbilt University, Nashville, Tenn.; Weizmann Institute of Science, Rehovot, Israel; and Yale University, New Haven, Conn.
Media contact: Dianne G. Shaw (919) 966-5905, firstname.lastname@example.org
CLIPPINGS: COLLECT THEM OR LET THEM LIE?
Should you collect grass clippings when you mow? No. It’s almost never a good idea to collect clippings from your lawn for several good reasons:
- Clippings return a lot of nutrients to the lawn;
- They do not add to thatch;
- There’s no more room for them in landfills.
It’s true that for years it seemed like a good idea to bag grass clippings, but new research and environmental concerns have changed all that. Grass recycling now makes the best sense. Simply repeat this mantra before you mow: “Cut it high and let it lie.”
Lawns stay greener when the clippings stay
Clippings recycle as much as 15% of all the food value of the fertilizer applied. This means a lawn that recycles grass clippings will be greener and better fed than one where clippings are removed. And because grass clippings have a high water content, they break down quickly and return moisture and nutrients to the soil fast. Letting your clippings lie taps into the natural cycle of nature and saves you time and work.
Getting to the root of the thatch myth
Thatch is the layer of living and dead roots and stems that form on top of the soil. A small amount of thatch is a good thing, but when thatch builds up faster than the soil can break it down, all sorts of lawn problems start to crop up. The misunderstanding is that grass clippings add to this thatch. This isn’t true. Thatch is not composed of grass blades. Bagging the grass clippings does not reduce thatch buildup.
Caring for the environment we all share
Besides the direct benefits of leaving your clippings to feed your lawn, there’s the additional environmental benefit of keeping clippings from filling up our shrinking landfills. Most people who bag their lawn clippings put them out for the trash collector. This “trash” is usually put in plastic bags that don’t decompose. The result is that as much as 10% of landfill space has been taken up just from grass clippings alone. We’re running out of space to dispose of our trash, so recycling clippings naturally makes great sense.
Have questions about grass clippings?
Contact NutriGreen — your local fertilization & weed control experts.
This may sound a little unusual, but there exists a factory whose test track is on its rooftop. As it turns out, the factory behind this marvelous piece of engineering is Fiat’s factory, based in Lingotto (Italy).
Fiat’s Lingotto factory was completed in 1923. Unlike any other automobile factory to date, the factory featured a spiral assembly line that moved up through the building and a concrete banked rooftop test track.
Building of the Lingotto factory began in 1916, while World War I was still raging. Seven years later, in 1923, Lingotto was completed and ready to open for business.
It was the biggest automobile factory Europe had ever seen and was the second largest in the world. Upon its completion, Lingotto instantly became a symbol of Italy’s proud manufacturing history. Only Ford’s massive River Rouge Factory Complex could compare in size and scale.
Designed by engineer Giacomo Mattè-Trucco, the Lingotto factory was one of the first buildings of its size to rely heavily on reinforced concrete in the construction process. The five-story building featured a simple loop rooftop test track with two banked turns that consumed a 1,620-foot by 280-foot portion of the rooftop. The test track’s banked turns were constructed from an intricate series of concrete ribs, a construction technique that had rarely been used before Lingotto’s construction. It is safe to say the technique had never been used for a test track six stories in the air.
The original Lingotto rooftop test track can be seen briefly during the getaway sequence in the film The Italian Job (1969).
About The Charts and Nutrition Facts
- For accuracy, the calorie chart and fat chart are based on the biggest serving size available.
- These nutrition facts came directly from the USDA or manufacturer/restaurant.
- If you're using a calorie counter, remember that Fat, Carbs, and Protein calories are just close estimations based on the Atwater factors:
Fat: 9 cal/g Carb: 4 cal/g Protein: 4 cal/g
- Percent Daily Values are based on a 2,000 calorie diet. Please remember this when using this information to make healthy food choices for your diet.
Calories - One serving has a total Calorie count of 70 Calories. This breaks down as 0 Calories from Fat, 56 Calories from Carbohydrate, and 8 Calories from Protein. See the calorie chart below.
*Fat/Carb/Pro calories based on the Atwater (9/4/4) calculations.
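The Atwater breakdown above can be reproduced with a few lines of arithmetic. A minimal sketch using the gram values listed for this serving (the function name is mine, not from any nutrition library):

```python
# Atwater general factors, in Calories per gram.
ATWATER_CAL_PER_G = {"fat": 9, "carb": 4, "protein": 4}

def atwater_calories(fat_g, carb_g, protein_g):
    """Estimate the Calories contributed by each macronutrient (Atwater 9/4/4)."""
    return {
        "fat": fat_g * ATWATER_CAL_PER_G["fat"],
        "carb": carb_g * ATWATER_CAL_PER_G["carb"],
        "protein": protein_g * ATWATER_CAL_PER_G["protein"],
    }

serving = atwater_calories(fat_g=0, carb_g=14, protein_g=2)
# serving == {"fat": 0, "carb": 56, "protein": 8}
```

Note that the estimated parts need not sum exactly to the labeled 70-Calorie total, since the Atwater factors are only close approximations.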
Fat - This is a healthy food if you're watching your fat intake. There is no Fat in this food.
Cholesterol - Unfortunately, the amount of Cholesterol was not provided to us.
Carbohydrates - Total Carb count for one serving: 14 grams. Sugar: Unknown, Fiber: 1 gram, and Net Carbs: 13 grams (helpful to know if you're counting carbs).
Protein - For Protein, one serving of this food has a total of 2 grams.
Minerals - We were unable to obtain the Iron and Calcium quantities for this food. There are 10 mg of Sodium in this food.
Vitamins - This food may contain unknown quantities of Vitamin A and Vitamin C -- we could not obtain the values in this case.
Illustration: Gender disparity in pay.
Donna Grethen, Tribune Media Services
Minnesota women on average are paid $400,000 less than men in comparable jobs over a lifetime of work, the Center for American Progress reported in December 2008. Nationally, the biggest gap was experienced by women in management and financial occupations, followed by sales and professional ranks.
Minnesota's pay equity achievement
- Article by: Star Tribune Editorial
- April 10, 2013 - 10:30 AM
Today is Equal Pay Day — the day when, according to the National Committee on Pay Equity, the wages earned by an average working American woman since Jan. 1, 2012, finally equal the amount the average male worker earned through Dec. 31. For women on the receiving end of gender-based discrimination in compensation, it’s a day to seethe.
But if you’re a woman working for the state of Minnesota or one of its local governments, today is a day to celebrate. On average, your pay has kept pace with that of your male counterparts for several years.
Minnesota is one of only a handful of states to require equal pay for work of equal value in government employment, as measured by a job-specific point system. That’s been the law in state employment since 1982 and county and municipal employment since 1984. But achieving the law’s goal took many years. Full pay equity was finally achieved in state employment in 2010; it’s still a matter for regular, though typically minor, pay adjustments in local governments.
A well-deserved salute to that achievement and to the women’s equity crusaders who engineered it is planned at the State Capitol this morning. Among those expected to be on hand are Nina Rothchild, the founding director of the Council on the Economic Status of Women, and Minneapolis state Sen. Linda Berglin, the council’s first chair and pay equity legislation’s prime sponsor.
Pay equity was a major feminist breakthrough when it was signed into law by Republican Gov. Al Quie. But it wasn’t particularly controversial, Rothchild recalls. That’s because the council and its allies in both parties had been at work for six years documenting systemic gender discrimination in government employment. They had the advantage of a newly developed Hay system for scoring a job’s value, which could be used to spot patterns of bias in matters of hiring, promotion and compensation.
A task force reported in March 1982 that female-dominated jobs consistently were paid less than men’s jobs with the same Hay scores. For example, dining hall coordinators (a female-dominated job) made $300 per month less than an auto parts technician whose job measured the same on the Hay scale. A clerk typist took home $267 per month less than the identically scored delivery van driver.
Such disparities are “bad old days” stuff in Minnesota state government. But they remain the reality in the private sector, according to the latest semiannual wage report of the American Association of University Women. It found that in 2011, Minnesota women on average were paid 80 percent as much as average men — $40,416 per year, compared with $50,580 for men.
Private-sector employers often justify the persistence of gender disparities in pay by saying they are guided by the marketplace. But, as Rothchild noted, the American employment marketplace has always undervalued caregiving, which is a component of many jobs dominated by women.
Changing the private marketplace’s estimation of such work’s worth was one of the goals of pay equity crusaders 30 years ago. They sought to make government a model employer for women, hoping both to set government on firmer moral ground and to give other states and the private sector a competitive reason to follow suit.
Those hopes have been only partially realized. As recently as two years ago, some legislators and local governments sought to repeal pay-equity requirements, arguing that private-sector employers face no similar requirement.
Fortunately, that repeal effort died as Minnesota women spoke up. They told legislators that pay equity has been good for women, their families, their communities and the Minnesotans that female government workers serve on the job. A reprise of those arguments at the Capitol today is welcome, and not just for history’s sake.
© 2016 Star Tribune
Don’t Become a Victim
A type of cyber-scam known as “phishing” is on a dramatic upswing in Wisconsin, and I’ve heard from a number of consumers who thought I should warn readers about it. The scam starts with consumers receiving what on the surface appear to be e-mails from trusted organizations—their banks, familiar on-line companies, even government agencies.

One consumer, for example, received an e-mail informing him, “Your card was used by another person or stolen. It could happen if you have been shopping on-line, and someone got your billing information including your card number.” He was then told, “To avoid and prevent any billing mistakes and to refund your credit card, it is strongly recommended to proceed filling in the secure form on our site.” The consumer was referred to a website that looked authentic. However, he became suspicious and decided to complain about the e-mail to state Consumer Protection officials. They were able to verify that the site was a phony, designed to extract information such as Social Security numbers and then route the information to identity thieves.

Bank One recently alerted its customers that criminals are sending out e-mails asking for personal identifying information to maintain Bank One accounts. Consumers are asked to supply the information on a web site Bank One said looked very much like the bank’s legitimate website.

AOL customers have also been “phished.” Last November, the U.S. Department of Justice announced the successful criminal prosecution of a Virginia woman who sent fake e-mails to AOL customers announcing that they must update their credit card information to maintain their AOL accounts.

Other consumers received e-mails purporting to come from the Federal Deposit Insurance Corporation (FDIC) asking for verification of personal financial information—including bank account numbers—at a phony web site created to look like the real FDIC home page.

Wisconsin Consumer Protection spokesperson Glen Loyd told me his agency has received many complaints about fake websites, and several investigations are underway.

How do you protect yourself from phishing? First, ask your Internet service provider how you can screen unwanted e-mails that are commonly referred to as “spam.” Second, avoid e-mailing personal and financial information. If you get an unexpected e-mail from a company or government agency asking for your personal information, contact the company or agency cited in the e-mail using a telephone number you know to be genuine, or start a new Internet session and type in the web address that you know is correct. Third, report the e-mail to the Federal Bureau of Investigation at www.ic3.gov or contact Wisconsin Consumer Protection at 1-800-422-7128.

If you fall victim, notify your bank and credit card company immediately. Discuss with them whether your account should be closed and whether a fraud watch should be placed on your credit report. Be careful out in cyberspace!
Egypt has been exposed to many civilizations, such as the Greek, Roman, and Islamic ones. Egyptian marriage customs make it easy for a couple to get to know one another, since the two families meet often.
It starts with the suitor's parents visiting his fiancée's house to seek her family's approval of the marriage and to reach an agreement, which contains two main items: an amount of money, called the Mahr, paid by the suitor to his fiancée's family to help them furnish their daughter's new home, and a valuable piece of jewelry, called the Shabka, given by the suitor to his fiancée. The value of this gift depends on the financial and social standing of the suitor's family.
When the two parties complete the agreement, they set a date for the engagement party.
When the new couple's house is ready, the two families set a date for the wedding party.
On the night before the wedding day, relatives, friends, and neighbors get together to celebrate "the Henna Night".
The next day, the marriage contract is signed and registered. After sunset, the wedding party begins, and the couple wear their finest clothes and jewelry.
Apprentice Lineman Electrocuted
The National Institute for Occupational Safety and Health (NIOSH), Division of Safety Research (DSR), performs Fatal Accident Circumstances and Epidemiology (FACE) investigations when a participating state reports an occupational fatality and requests technical assistance. The goal of these evaluations is to prevent fatal work injuries in the future by studying: the working environment, the worker, the task the worker was performing, the tools the worker was using, the energy exchange resulting in fatal injury, and the role of management in controlling how these factors interact.
On November 4, 1987, a 30-year-old male apprentice lineman working as a member of a power line construction crew was electrocuted while installing a new length of overhead distribution conductor.
City police officials notified DSR concerning this fatality and requested technical assistance. During November 4-5, 1987, a DSR research team conducted a site evaluation, interviewed company officials and co-workers, and photographed the incident site.
Overview of Employer's Safety Program:
The victim was employed by a large power line construction company with more than 350 employees. The company has been in operation for 66 years and has a formal safety program. Both classroom and on-the-job training are provided to employees. The victim had been employed by the company for 2½ months but had not attended the formal company training program.
Synopsis of Events:
On the day of the incident, the victim was part of a five-man crew stringing a new circuit conductor beneath an existing 12,000-volt, 3-phase overhead electrical service. The crossbars for suspending the new circuit were mounted 5 feet below the energized conductors. This circuit was to be approximately 2,400 feet in length, supported by power poles at 200-foot intervals. The new conductor was being supported on crossbars mounted on the existing power poles. Due to the hilly, wooded terrain and two turns in the system, it was impossible to see from one end of the pull area to the other.
At the time of the incident, the victim was working at a trailer-mounted line tensioner. The victim was leaning over the side of the tensioner trailer and a co-worker was at the rear of the trailer. The new conductor was being pulled from the tensioner by a pulling rig located at the other end of the run. A "loop" developed in the new conductor between the tensioner and the nearest pole because of insufficient tension on the line. A second loop of cable formed on the spool when the cable struck the trailer axle, which prevented it from feeding properly from the spool. Tension on the new conductor increased as the pulling unit continued to operate. This caused the loosely strung new conductor to rise several feet between the supporting crossbars. The new conductor contacted an existing energized conductor, which was sagging approximately 10 feet below the elevation of the crossbar mountings for the new conductor. Current flowed through the new conductor and energized the tensioner trailer. The victim was electrocuted when his body provided a path to ground from the trailer. The co-worker was apparently struck on the foot by the second loop of the energized new conductor.
A supervisor standing several feet away from the trailer heard the co-worker cry out, and turned in time to see him fall backward down a steep embankment. The supervisor notified the operator of the pulling unit, via radio, to stop pulling operations. The supervisor ran to the victim lying on the ground near the trailer and began cardiopulmonary resuscitation (CPR). He continued CPR until advanced cardiac life support (ACLS) procedures were administered by rescue squad personnel. The victim was transported to a local hospital where he was pronounced dead. The co-worker received serious burns to the left foot.
Cause of Death:
The medical examiner reported that the victim had electrical burns on his stomach and right arm. Electrocution was cited as the cause of death.
Recommendation #1: The employer should perform a job hazard analysis of each project prior to initiating work, and communicate hazard information and control measures during work crew safety meetings.
Discussion: Each project differs in the scope of work to be accomplished, the makeup of the work crews, the physical layout of the job site, and the equipment required to perform the work. This uniqueness creates differing situations for exposure to job hazards. Therefore, the hazards associated with each work effort must be analyzed so that appropriate control measures can be planned and implemented. A serious safety hazard which existed at this job site, the potential that the new conductor being pulled would contact an existing energized conductor, was not recognized and, therefore, not controlled. Two factors combined to increase the potential: 1) a lack of communication during the line stringing operation, and 2) the sagging condition of existing energized conductors. These factors should have been identified prior to the initiation of work. Corrective measures to prevent the hazardous contact might then have been adopted and communicated to the crew.
Recommendation #2: Where new conductors are being installed near existing energized conductors, the employer should install guards, as necessary, to prevent inadvertent contact between new conductors and existing energized conductors.
Discussion: A system of guards, such as an inverted "U"-shaped configuration composed of utility poles erected between the two levels of conductors, could minimize the chance of contact during installation of the new conductors.
Recommendation #3: All equipment used in line-stringing operations should be grounded when work is being performed in proximity to energized power lines.
Discussion: Although work was being performed in proximity to existing energized power lines, neither the tensioner trailer nor the truck to which it was attached was grounded. Grounding of the units could help prevent electrocutions should inadvertent contact with energized conductors occur.
Recommendation #4: The feasibility of incorporating electrical isolation into the design of the tensioner trailer should be studied.
Discussion: In this incident, the new conductor made contact with conductive parts of the trailer as it exited the spool. This allowed the trailer to become energized when the new conductor contacted an energized conductor. If the new conductor and spool were electrically isolated from the body of the tensioner, inadvertent energization of the trailer and truck would be less likely, thereby enhancing worker safety.
Recommendation #5: The employer should train all employees in the identification, recognition, and control of electrical hazards prior to assigning them work on energized systems.
Discussion: Although the employer does provide formal classroom training to employees who work with or near energized electrical equipment and systems, the victim had been employed for 2½ months as an apprentice lineman without benefit of this formal training.
Americans have always been a people on the move—on rails, roads, and waterways (for travel through the air, visit the National Air and Space Museum). In the transportation collections, railroad objects range from tools, tracks, and many train models to the massive 1401, a 280-ton locomotive built in 1926. Road vehicles include coaches, buggies, wagons, trucks, motorcycles, bicycles, and automobiles—from the days before the Model T to modern race cars. The accessories of travel are part of the collections, too, from streetlights, gas pumps, and traffic signals to goggles and overcoats.
In the maritime collections, more than 7,000 design plans and scores of ship models show the evolution of sailing ships and other vessels. Other items range from scrimshaw, photographs, and marine paintings to life jackets from the Titanic.
"Transportation - Overview" showing 1 item.
- The Baldwin Locomotive Works was started as a sole proprietorship by Matthias W. Baldwin in 1831. The company operated the largest railroad engineering plant of its kind in the world. It is now out of business.
- Four scrapbooks containing items relating to the Baldwin Locomotive Works, including: blueprints, photographs, examples of company letterhead and blank company forms, clippings and articles, business records such as contracts and specifications, trade literature, and miscellany.
- Cite as
- Baldwin Locomotive Works Scrapbooks, 1867-1929, Archives Center, National Museum of American History
- 20th century
- 19th century
- Baldwin Locomotive Works
- Work and Industry, Division of, NMAH, SI
- Transportation, Division of, NMAH, SI
- Baldwin, Matthias W., industrialist
- Local number
- 2009.3088 (NMAH Acc.)
- Data Source
- Archives Center - NMAH | <urn:uuid:02789a2f-9c2e-4866-a979-5c6ba2373e61> | CC-MAIN-2016-26 | http://americanhistory.si.edu/collections/object-groups/transportation?edan_start=0&edan_fq=object_type%3A%22Articles%22&edan_fq=place%3A%22Pennsylvania%22 | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398516.82/warc/CC-MAIN-20160624154958-00022-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.916361 | 402 | 2.78125 | 3 |
All deliverables under the contract, which included a full area cell efficiency of a minimum of 20% and reliability tests according to IEC 61215, were submitted ahead of schedule and certified by the DOE’s National Renewable Energy Laboratory (NREL).
TetraSun’s new cell concept is based on a novel surface passivation technology that enables the use of 40 µm-wide copper electrodes instead of screen-printed silver metallization. Without demanding special equipment for the manufacturing process, the cell achieves Voc values in excess of 700 mV on monocrystalline Czochralski-grown silicon, the company says.
“Exceeding the performance of traditional heterojunction technology on 156mm cells – without the need for a transparent conductive oxide or special module assembly – is a significant advantage when it comes to high volume manufacturing of the TetraCell”, says Dr. Oliver Schultz-Wittmann, VP of Device Engineering at TetraSun.
Since its founding in 2009, TetraSun has raised $12 million from equity and strategic customer investors, which has helped the company bring its passivation and patterning technologies to “production ready” status, as evidenced by ongoing small scale production and deployments in the U.S. and Japan. In 2010, the company received a $2.3 million grant from the DOE to support the development of back-surface passivation research and development.
“This has always been about creating a transformational technology,” continued Schultz-Wittmann. “Technology that will allow for the highest efficiency at the lowest cost, enabling solar to become a real part of our energy supply. With the support of our investors and the DOE we’re now in small scale production and ready to realise the full promise of our technology.” | <urn:uuid:421cec20-1c34-4054-9257-5ae5fc1d66d0> | CC-MAIN-2016-26 | http://www.renewableenergyfocus.com/view/27093/nrel-certification-boost-for-tetrasuns-passivation-solar-pv-cell-technology/ | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395560.14/warc/CC-MAIN-20160624154955-00017-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.934253 | 374 | 2.671875 | 3 |
All parents want good oral health for their children, but the dentist's office can be a scary place. Anyone who has ever taken a child, especially a young one, to the dentist knows how painful it can be. And that's when all goes well. When the child has a cavity it makes the whole process 100 times harder.
In the past, cavities did not start until much later in childhood, perhaps around 10 or 12. However, a 2012 New York Times story uncovered the fact that cavities are now starting earlier than ever. Even preschoolers are coming to the dentist with several cavities in their baby teeth.
The New York Times reported that dentists across the country are seeing more three to five year olds with 6 -10 cavities or more. These kids are from all socioeconomic backgrounds and many of them had a level of tooth decay which made general anesthesia necessary.
Children at this age have a hard time tolerating the fillings, root canals, and other extensive dental repairs while awake, and thus have to deal with the risks that come along with anesthesia.
Kids do not have to get cavities, though. There are simple steps parents can take to promote good oral health and keep teeth, especially baby ones, cavity-free.
The first thing parents need to do is to take their children to the dentist. The American Academy of Pediatric Dentistry recommends taking a child to the dentist as soon as the first tooth erupts, or by his first birthday. They want to get children in early so they don't have more oral health problems later on.
Even though baby teeth will fall out anyway, they need to be protected. Untreated cavities can cause a multitude of orthodontic issues in the future, as well as more pain and expense than necessary.
Thankfully, cavities in children can be prevented and good oral health achieved. Here are six steps to keep kids cavity-free.
1. Take children to a dentist early and often. Twice a year is recommended.
2. Give kids fluoridated tap water to drink. Bottled water generally doesn't have any (or very little) fluoride.
3. Make sure kids have a balanced diet and limit juice and sugary drinks. | <urn:uuid:0a90ca73-3c9c-498d-a720-aca09363bbfd> | CC-MAIN-2016-26 | http://www.empowher.com/parenting/content/keep-kids-cavity-free-and-good-oral-health | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397567.28/warc/CC-MAIN-20160624154957-00199-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.976244 | 457 | 2.84375 | 3 |
February 21, 2013
Roger Knox and the Pine Valley Cosmonauts: Stranger in My Land
by Jason D. 'Diesel' Hamad
For untold millennia, music traveled from person to person. This was how it was preserved, stored only in the consciousness of those to whom it brought joy and transmitted organically via a never-ending game of telephone. One person might misremember a word or a note here or there and pass that on to the next generation of listeners. Another might take an old song and change it wittingly, adding his own mark on top of the tradition. Iteration after iteration after iteration the songs changed, morphed, lived, breathed, and sometimes were forgotten and died. This was the norm for almost the entirety of human history. This was folk music. This is where we came from.
The invention of written music changed things. Now it wasn’t necessary to hear a piece of music in order to learn it. Music could be transferred further than the reach of a voice and transmitted across time, exactly as transcribed. Part of the organic process was lost. Different artists might still give songs their own inflection, but a performance of Bach’s Brandenburg Concerto No. 1 in F major today sounds virtually the same as it might have in 1721. The folk tradition still lived, but it now had competition from the forces of musical conformity.
In 1877, Thomas Edison revolutionized the transmission of music once again. With the invention of the phonograph everything changed. Now, the organic process had become completely sterilized. Those first words Edison spoke into his new device, “Mary had a little lamb; its fleece was white as snow…” sound exactly the same today as they did when needle first touched wax. Now, not just music, but actual performances could be passed down to listeners across the world and across generations exactly as they were heard by those present at the creation. Again, the folk tradition survived, but it was pushed further down into the cultural consciousness, replaced by the profit-driven corporate impetus to sell as many records, tapes, CDs, or downloads as possible.
The folk tradition was replaced as the dominant means of transmitting music from one person to another, striking it a devastating blow. Children now didn’t turn to their parents or grandparents to learn the music of their culture, but had it handed to them over the radio waves or accompanied by the ching of a cash register. Music became monetized, began to conform to national and international standards, and many of the regional or cultural traditions that had relied on the folk tradition to keep them alive began to die.
Now, if you’ve followed the circuitous route of this so-called “music review” up to now, stop and reimagine the preceding paragraphs with each instance of the word “music” replaced by “culture.” This is not some theoretical thought experiment; it is exactly what has happened as our world becomes more and more homogenized, as traditions that have lasted for hundreds or thousands of years are subsumed by a cookie-cutter corporate culture that demands conformity in the name of profits. This is how tribal languages are lost and replaced by a broader lingua franca, how culinary traditions are replaced by Starbucks on every corner and how folktales are replaced by the latest Hollywood blockbusters. This is how the entire cultural tradition of humanity dies and is replaced by bland sameness. This is the juggernaut, the great, slow, inexorable genocide that is the story of the modern world. It is the quiet genocide, the forgotten genocide, the one in which lives may or may not be lost, but ways of life certainly are, pulverized beneath the weight of the one true culture.
But there are those who—knowing they could never turn the juggernaut—fought to save whatever they could from beneath its crushing weight. Ironically, in the battle to preserve musical traditions it was recording—the technology that struck many of these traditions a fatal blow—that came to the rescue. A few visionaries like John and Alan Lomax and Moses Asch realized that with the traditions themselves dying, the only way to preserve their artifacts—the songs—for future generations was to record them, and so they dedicated their lives to capturing as much of the world’s fading musical traditions as they could. Even if the traditions themselves were lost forever, their legacy could be maintained for future generations, if only in a dusty museum.
More than just preserving a particular song and a particular performance by a particular person, these archives allow the folk tradition to continue in a modified form. Every time some new listener discovers one of these treasures, they have the ability to incorporate and reinterpret it for themselves, whether they merely sing in the shower or are international recording stars. Every time someone new sings “Goodnight, Irene,” “This Land Is Your Land” or any of the thousands upon thousands of songs that have been thusly preserved, the dream of the Lomaxes and Asches of the world is made manifest. And every time these new singers pass along the music they’ve learned, complete with their own additions and reinterpretations, new life is breathed into the culture that created them in the first place.
Roger Knox (right)—as much an activist as a native Australian country music legend—collaborated with American Jon Langford (left) and a host of talents spanning two continents to create this powerful collection of aboriginal country and Western music.
A project of this type has recently been completed by Roger Knox, an aboriginal singer known as Australia’s “Black Elvis” but closer in spirit to Billy Bragg or the Man in Black himself, with an activist streak that often leads him to perform in prisons. Accompanied by Jon Langford of the Mekons and Waco Brothers along with a coterie of musical artists from around the world collectively billed as the Pine Valley Cosmonauts—including country great Charlie Louvin in what may be his last recording—Knox has just released Stranger In My Land, an album of covers focusing on country songs written by his native Australian brethren. This group—among the most culturally endangered in the world—adopted the country music that was imported along with American servicemen during the Second World War and made it a vehicle for self-expression, often writing about the alienation they felt as scions of a lost society being forced to assimilate to the culture of their conquerors, thus co-opting a foreign artistic form for their own purposes. Whether overtly political or merely observational, these songs became some of the most powerful means to articulate the story of an all-too-often voiceless people. Many of these singer-songwriters rarely or never recorded, and so their music might be completely lost without such caretakers as Knox, making Stranger In My Land an incredibly important project, perhaps the last chance to preserve these Australian cultural treasures.
Stranger isn’t just an intellectual exercise, though. It is filled with well-executed, catchy songs exhibiting terrific musicianship and centered around the warm, comfy baritone of Knox’s voice. And while many of the stories told in these songs are serious and focus on deep, often saddening concepts, there are bright and comic moments mixed in to balance the listener’s emotions. All of these factors contribute to making this collection an enjoyable experience, even as it fulfils its curatorial and educational missions.
The album starts with a rocking tune called “The Land Where the Crow Flies Backwards,” written by the hard drinkin’ Dougie Young, who sometimes worked as a cowhand when he wasn’t in jail. This outback ranch life is the setting for the song, but by no means its limit. About the same time Mohammed Ali was declaring, “Black is beautiful,” Young was proclaiming his own take on racial identity:
Yes I’m tall, dark and lean, every place I’ve been
The white man calls me Jack.
It’s no crime; I’m not ashamed
I was born with my skin so black.
Likewise, it describes the Western settlement of native lands and even touches on the great atomic power:
Now they laugh in my face;
They say I’m a disgrace;
They say I’ve got no sense.
The white man took this country from me;
He’s been fightin’ for it ever since.
These governments and presidents they arguin’,
Every day they try to start a brawl,
And if there’s going to be a nuclear war
What’s gonna happen to us all?
Knox presents this sharp social commentary in a wrapping of electrified country groove, making it a pill much easier to swallow. The song has a distinct backbeat and is layered with guitars and steel piled upon one another, and occasionally breaking out for a bit of dueling action (with Dave Alvin supplying the blazing wah-wah lines). It’s a damn infectious tune that belies the intricate critiques of its lyrics.
The title song, “Stranger in My Country,” was written by Vic Simms and originally recorded while he was serving time at Bathurst Jail. It confronts the massive alienation felt by members of native groups all around the world who have been turned into second-class citizens in their own conquered homelands:
Well, forgive me if it seems as though, but I am not mistaken,
‘Cos I’m the one that you forgot after my land was taken.
In early years we were put down, cast aside as vermin,
Women men and kids shot down then paid by bible sermon.
Featuring, among others, the Sadies and punk bluesman Andre Williams, the song clips along at a pace that makes it hard not to toe-tap along.
In contrast to many of the album’s other selections, which mask their commentaries behind country catchiness, “Wayward Dreams” has a powerful, epic feel. The driving bass marches along beneath the layers of strummed strings, chilling violin and operatic backing vocals that set the scene of a Morricone-scored Western faceoff, all as Knox bellows his discontent:
It just so happens that we’re all a part of this land so large and free.
Why can’t people realize how the black man wants to be?
Maybe we can’t walkabout the way we used to do,
But we’d like to be on equal terms and believe in democracy, too.
Yes we are a part of this vast and peaceful place.
Freedom then was a common thing for whole aborigine race.
Why the white man tried to change a way of life, it seems,
Is because it would not fit into their wayward scheming dreams.
Written—again in prison—by Knox’s oft-times concert partner Bobby McLeod, this song, advocating reconciliation and revolution of the mind rather than by the fist, is the best track in the collection.
Another epic, almost operatic track, “Warrior in Chains” would have been a perfect fit for one of Johnny Cash’s American Recordings albums, with its tale of a man fighting to maintain his humanity in a prison cell. The only song not written by one of Knox’s aboriginal kin, it was instead penned by a brother from another mother, native Canadian folksinger Daniel Beatty, whom Knox met while touring prisons in that country. Struck by the similarities of the indigenous people on both continents, Knox felt it fit right in on this collection. Centered around a prisoner’s sung-out prayer, it is a classic redemption song if there ever was one:
He sang, “Play for me the song of thunder;
Bring to me a dream.
I left my youth behind me
In all the places I have been.
Let your black clouds open over me.
Oh, cleanse me with your rain.
Let your four winds bring some freedom
To the warrior in chains.”
Both more recent and better known than many of the other selections on this album, “Took the Children Away” was a hit for Archie Roach in 1990. Presented by Knox in a rather sweet incarnation with full organ, weeping steel and lovely harmonies by Kelly Hogan, it tells the story of Australia’s program to forcibly remove aboriginal children and place them in boarding schools to assimilate them into the predominant culture. Similar to policies initiated by the American government starting in the late 19th century, Australia’s removal programs ended in the late 1960’s, while many “American Indian boarding schools” didn’t close until the late 80’s or early 90’s. Roach himself was placed in one of these schools, as was Knox’s own mother, an emotional burden that seems to come through in his vocals as he sings lines like:
The welfare and the policeman
Said, you’ve got to understand,
We’ll give them what you can’t give,
Teach them how to really live.
Teach them how to live, they said;
Humiliated them instead,
Taught them that and taught them this,
And others taught them prejudice.
You took the children away,
The children away,
Breaking their mother’s heart,
Tearing us all apart,
Took them away.
Written by Denis “Mop” Conlon of Mop and the Drop-Outs, “Brisbane Blacks” examines another problem common to both the Australian and American indigenous populations: alcoholism. This song goes beyond simply blaming economic woes for the addiction, digging into the irony of the native Australians’ situation:
Everyday, each passing day, our culture slowly dies,
Like a piece of paper thrown onto a fire.
Now all we’ve got is ancient weapons as our only trade.
Compared to all the immigrants, look how much we made.
You look down through your noses to see
The black man problem down at your feet,
With weary eyes looking up at you
Waiting for the message to get through.
With all the talk these days about the “immigration problem,” this is a poignant reminder that those shouting the shrillest often have the least ground to stand on (at least ground they didn’t steal). Described as a “great big militant sing-along,” that’s exactly what it is, a light-feeling presentation of a heavy subject, all wrapped up in the Sadies’ country rock flair.
Stranger In My Land is a powerful collection full of pointed commentaries on the state of life for Australia’s Aborigines. Its true brilliance, however, is that these commentaries are presented in such an ear-pleasing manner, within songs that would be enjoyable no matter what their topic. And with a Shakespearean understanding for the timing of comic relief, there are moments of frivolity and nostalgia interspersed throughout, making it an even more entertaining experience. There’s no doubt that Roger Knox and the Pine Valley Cosmonauts’ Stranger In My Land is an important album, but it is also just a great listen that no fan of country music—no matter which hemisphere he or she calls home—should miss. | <urn:uuid:584c1b05-63aa-4edf-bc34-faaf534faa1b> | CC-MAIN-2016-26 | http://nosurfmusic.com/thenosurfreview/reviews/rogerknox-strangerinmyland/index.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395546.12/warc/CC-MAIN-20160624154955-00140-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.963983 | 3,284 | 2.5625 | 3 |
So how did they do it? A botnet is just the overly sci-fi name for a bunch of computers that are controlled by a central command-and-control structure. The number one challenge for botnet operators is hiding their command-and-control servers to avoid being taken down (the chances of actually being arrested are pretty close to nil). The Torpig botnet uses an increasingly popular technique where client machines try dialling into a set of pre-determined domain names and accept the first server to respond as the botmaster.
This is where the UCSB researchers moved in - they took over the Torpig botnet by sneakily claiming the domain name that was next in line to be the command-and-control server. The botmasters behind Torpig had not claimed all the domain names that their victims were meant to dial into, either to save money or because they didn't see this coming. In any case, the UCSB team found itself in control of a botnet with hundreds of thousands of hosts.
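The rendezvous scheme the researchers exploited can be sketched in a few lines. This is a hedged illustration, not Torpig's real algorithm: the derivation function, domain names, and port below are all invented, but the shape of it (a deterministic, ordered candidate list plus a first-responder-wins probe) matches the mechanism described above.

```python
import socket

def candidate_domains(week, year):
    """Derive a deterministic, ordered list of fallback rendezvous domains.

    Both the bots and the botmaster compute the same list, so whoever
    controls the first registered-and-responding name wins.
    (The arithmetic here is a made-up stand-in for the real derivation.)
    """
    seed = week * 31 + year
    return [f"cc-{(seed + i) % 1000:03d}.example.com" for i in range(3)]

def find_controller(domains, port=80, timeout=2.0):
    """Return the first domain that accepts a TCP connection, else None."""
    for name in domains:
        try:
            with socket.create_connection((name, port), timeout=timeout):
                return name  # first responder is accepted as botmaster
        except OSError:
            continue  # unregistered or unreachable -- try the next candidate
    return None
```

Because anyone who reverse-engineers the bot can compute the same candidate list, a defender who registers a name earlier in the list than any name the operators have claimed silently inherits the bots' connections, which is exactly the opening the UCSB team used.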
Don't try this at home. The researchers cooperated with law enforcement and other entities to avoid legal problems. This appears to have helped them steer clear of the hot water the BBC found itself in a few weeks ago for actually purchasing a botnet from criminals.
Botnets and the Hype Cycle
You've probably heard botnets talked about on the evening news. Botnets are one of the most successfully marketed pieces of the information security industry's FUD cycle.
But how bad is the botnet problem in reality? Not as bad as previously thought, according to the UCSB team. Previous studies have counted IP addresses rather than actual hosts when estimating the size of a botnet. Getting from IP addresses to actual machines is tough - DHCP leads to overcounting, NAT to undercounting, and there are many other factors at play. In the botnet the UCSB team analyzed, they counted 182,900 hosts versus 1,247,642 IP addresses, and there is evidence that IP addresses generally overcount actual machines.
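To see how DHCP churn alone can inflate IP-based estimates, here is a toy simulation. All parameters (host count, observation window, daily re-lease probability) are invented for illustration and are not derived from the Torpig data.

```python
import random

def simulate_ip_overcount(num_hosts=1000, days=10, rechurn_prob=0.3, seed=42):
    """Estimate how DHCP lease churn inflates IP-based botnet size
    estimates. Each infected host keeps its IP from one day to the next
    with probability (1 - rechurn_prob); otherwise it receives a fresh
    address. An observer counting distinct IPs over the whole window
    overcounts the true host population."""
    rng = random.Random(seed)
    next_ip = 0
    seen_ips = set()
    host_ips = []
    for _ in range(num_hosts):          # day 1: every host has some IP
        host_ips.append(next_ip)
        seen_ips.add(next_ip)
        next_ip += 1
    for _ in range(days - 1):           # remaining days: leases churn
        for h in range(num_hosts):
            if rng.random() < rechurn_prob:
                host_ips[h] = next_ip
                seen_ips.add(next_ip)
                next_ip += 1
    return num_hosts, len(seen_ips)

hosts, ips = simulate_ip_overcount()
print(hosts, ips)  # distinct IPs far exceed the true host count
```

Even with modest churn, the distinct-IP count runs several times the real host count, which is exactly the direction of error the UCSB team describes.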
But in many security reports IP addresses and computers are treated synonymously - the latest McAfee report actually contains the sentence "In this quarter we detected nearly twelve million new IP addresses, computers under the control of spammers and others". Arrghhhh...
Coverage of the UCSB work in the MSM did not mention the overcounting. "Botnets smaller problem than originally thought" doesn't make much of a headline...
So I'm part of a botnet, so what?
Good question. Theoretically, a botmaster could read your email and abuse your other accounts to their heart's desire. In fact, the UCSB researchers performed a keyword analysis of their victims' emails (not sure how they got the legal clearance to do that...). But they are probably the only ones who bothered reading those emails. Botmasters want control of computers to make money and not to read about your date last Saturday. When someone breaks into your house they steal your valuables, not your diary.
Most online accounts and credit cards do not hold their users liable for fraudulent charges. In this way botnets operate a lot like insurance fraud or old-school credit card fraud. They are an annoyance that creates an indirect cost for everyone, but a cost that is sufficiently low that people are willing to bear it. We live in a society where people want to be able to use a 16 digit number they have given out hundreds of times to pay for stuff. If that means that everything costs 1% more to deal with fraud, so be it.
Brian Krebs (who should be on your reading list if he isn't already) posted a piece today about the dangers of allowing your PC to be compromised. Reading through his list of spam, click-through fraud, DoS attacks, and the like, I couldn't get past the feeling: dangerous for society - yes; dangerous for the user - not really. As for some of the more nefarious password-stealing stuff, there is little to no evidence so far that botnets are actually using stolen credentials for anything other than impersonal, bulk misuse. This isn't great for society, but it isn't something the average user is going to care about.
Seems like just the kind of situation that calls for Uncle Sam (or Uncle Barroso)...
Laying Down the Law
The UCSB authors fault registrars for not being sufficiently responsive to requests to take down botnets. While ISP responsibility for content and traffic is a tricky political issue, the content industry has been very successful in forcing ISP accountability for peer-to-peer traffic on their networks. Of course the content industry has a bunch of well-paid folks in Washington, Brussels, and other corridors of power pushing their agenda. Botnets do not directly affect an entire industry's bottom line, and so there is no lobbying effort to move responsibility from the client to the registrars and ISPs.
This could change significantly if the national security angle of botnets takes flight. The apparent role of botnets in Internet disruptions during the Russia-Georgia conflict last year, allegations of Chinese cyber-espionage, and frequent stories in the press about the vulnerability of critical infrastructure have attracted the attention of US policy makers. There are even signs that countries like China - long considered a safe haven for hackers - are taking regulatory steps to address botnets.
Regulatory measures will not completely address the botnet issue, but they would significantly change the risk/time-invested/reward ratio. Botnets take a high degree of technical expertise to set up and are of only limited value. A tighter regulatory regime could significantly reduce the incentive for botmasters.
You often hear about user education in botnet/information security stories, which all too often is vendor-ese for user indoctrination to buy security products. But the UCSB researchers - who have done a great piece of research and aren't selling anything - also focus on user education as a solution to the botnet issue. Their statement that the "malware problem is fundamentally a cultural problem" places the onus for preventing complex and sophisticated criminal activity on the people least capable of preventing it.
It would be nice if all users were capable of being system administrators. For enterprise users, it is fair to expect a minimal level of technical skill. But the truth is that the technical measures a home user needs to take to secure his or her computer are simply beyond the grasp of a significant portion of Internet users. The stuff you can educate home users about - choosing better passwords, not recycling passwords, etc. - is not going to make a real dent in the botnet problem.
When Rahm Emanuel assumed office as Mayor of Chicago, he inherited very tough challenges — heavier than the politics-as-usual passing of the baton from one administration to the next. He inherited a legacy of violence that had been generations in the making. That legacy can make the violence seem intractable, but it isn’t.
(MORE: Read this week’s TIME Magazine cover story “Chicago Bull,” available to subscribers here)
I grew up in Chicago, but I spent much of my professional career working to control epidemics in Africa and Asia for the World Health Organization and others. When I returned home in 1995, I saw violence in Chicago spreading in the exact same patterns as diseases like tuberculosis and HIV. At the time, people used the term “violence epidemic” as a metaphor, but I and others saw parallels that could be scientifically documented. Maps and graphs that chart the spread of violence look almost identical to those that chart infectious diseases, with maps showing clusters and graphs showing wave upon wave. Properties of transmission, though just as invisible as their microbial counterparts, can be witnessed spreading from one individual to another, one community to the next.
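The wave-shaped curves described above are the signature of any contagious process, and a standard susceptible-infected-recovered (SIR) model reproduces them. The sketch below uses invented parameters purely for illustration; it is not Cure Violence's methodology.

```python
def sir_waves(beta=0.3, gamma=0.1, s0=0.99, i0=0.01, steps=200):
    """Discrete-time SIR model. Returns the incidence curve (new cases
    per step), which rises to a peak and then declines - the wave shape
    characteristic of contagious processes."""
    s, i = s0, i0
    incidence = []
    for _ in range(steps):
        new_cases = beta * s * i   # transmission: contacts between S and I
        s -= new_cases
        i += new_cases - gamma * i # infected recover at rate gamma
        incidence.append(new_cases)
    return incidence

curve = sir_waves()
peak = max(range(len(curve)), key=curve.__getitem__)
print(peak)  # incidence climbs to a peak, then declines: a wave
```

Lowering beta in this model (the transmission rate) flattens and delays the wave, which is the quantitative version of "interrupting transmission."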
It has taken years of investigation to validate these observations. For example, brain research tells us that brain cortical patterns are involved in copying behavior, and that damage to the limbic system can occur by victimization. These are some of the ways in which the contagion occurs. Some of these effects can make someone lose their temper quickly and respond to a situation aggressively. They turn yesterday’s victim or witness into tomorrow’s aggressor.
The good news is that once we recognize violence as a contagious process, we can treat it accordingly, using the same methods that successfully contain other epidemics: interrupting transmission, and changing behavior and norms. Cure Violence and its partners have been putting this public health approach to violence into practice in Chicago, Baltimore, New York, Philadelphia, New Orleans and more than 15 cities and 8 countries by putting specially selected workers into communities to interrupt violence and encourage behavior change through outreach. Research conducted by the U.S. Justice Department, the Centers for Disease Control, Johns Hopkins University and others has credited this approach with dramatically reducing shootings and killings in neighborhoods where violence had been epidemic. The Institute of Medicine — the health arm of the National Academy of Sciences — and the U.S. Conference of Mayors have recognized the importance of using this public health model to prevent the spread of violence.
Mayor Emanuel comes from a family of doctors. He understands health very well and he values the role that the public health sector — working alongside law enforcement — can play in reducing shootings and killings, not just to help individuals but to reduce violence across an entire community. He also wants results. And we’re beginning to see them.
Woodlawn is one of the communities hit hardest by violence last year. It is also one of the communities where Mayor Emanuel invested in a comprehensive anti-violence strategy that includes law enforcement and public health. We have already seen a 100 percent reduction in homicides there this year. Based on data from the Chicago Police Department from January to April 2013, there has been a 40 percent reduction in shootings and killings across the 14 communities where both law enforcement and public health strategies are being used. Every shooting that does not happen helps to create a new legacy for Chicago and for every community that is plagued by violence. Perceptions are hard to change. But we can save lives. And because we can, we must.
Gary Slutkin is a physician and founder and executive director of Cure Violence. The views expressed are solely his own.
Click here to read editor-at-large David Von Drehle’s full magazine story on Chicago and Mayor Rahm Emanuel available exclusively for TIME subscribers.
Not a subscriber? Subscribe now or purchase a digital access pass.
A sinus toothache, often described as a referred pain because the throbbing occurs in a tooth or teeth rather than at the actual site of infection, can plague a person for as long as the contagion rages. The solution of course, is to cure the sinusitis. Maybe.
To understand the whole toothache sinus problem we need to ask some basic questions:
What do sinuses do?
Basically, your sinus cavities purify and moisten your inhaled air, lighten the weight of your head, act as echo chambers to enrich your voice with deeper, resonant qualities or act like bubble wrap for the brain. Those are some of the theories anyway, proposed by the American Rhinologic Society.
They also get clogged up with mucus from bacterial and fungal infections and cause sinus toothache for too many people ... not that we consider that an actual function. But remember that just about any moist environment in your body can become infected -- toothache sinus issues are practically bound to occur!
Why do sinus infection toothaches occur?
When infected, the maxillary sinuses in the cheeks put pressure on the upper jawbones. Because of this proximity, the inflamed sinus cavity also presses on the roots and nerves of the teeth, causing a sinusitis toothache.
What are the symptoms of sinus toothache?
Pressure under the eyes, coughing or sneezing are the main precursors to a sinus infection toothache. Watch for dental pain coming from cold, chewing and percussion. You'll probably feel a sinus toothache in your upper jaws and in one or more teeth, usually on one side of the mouth.
Acute sinusitis lasts from days to weeks, while chronic sinusitis stretches on for over two months and reoccurs. But symptoms for acute sinusitis tend to be more severe than chronic infection symptoms.
How do you avoid toothache sinus problems?
A sinus infection usually follows a cold. Allergies, nose injuries and deviated septums may also trigger bouts of infection and eventual sinus toothache, as do lowered immune systems, according to Penn State University's College of Medicine.
Some other perpetrators? Frequent air travel, deep scuba diving and believe it or not, a tooth abscess. In this rare case, bacteria spread from the roots of your teeth causing sinus infection and a sinusitis toothache.
Does that work in reverse? Can sinusitis cause tooth decay?
According to Columbia University's College of Dental Medicine, sinus problems may cause dental damage beyond temporary sinus toothache pain in an indirect fashion.
Medications may cause xerostomia, or dry mouth. And we need our saliva to clean plaque-building bacteria from our teeth. Mouth breathing because of sinus infection also causes xerostomia.
So now we have a chicken and egg question: Did the tooth decay cause sinusitis, or did the sinusitis toothache and decay come from a sinus infection? Ask yourself if you had dental pain or abscess first or if you experienced sinus toothache after you noticed upper respiratory problems. Talk to your dentist in either case.
What You Can Do for Toothache Sinus Pain
If a sinus infection toothache flares up, your physician can recommend antihistamines, decongestants or antibiotics for the sinusitis. Inhale steam or place warm, moist washcloths on your face.
For the sinusitis toothache itself, brush twice a day, floss before going to bed, use an antiseptic mouth rinse and stop smoking. You might also try a softer food diet and chewing on the less painful side of your mouth until your infection clears up. Severe toothache sinus issues may warrant taking an over-the-counter pain killer for general relief.
Solving the Sinus Toothache Problem
Your dentist should check for infections caused by sinusitis or for sinus infections causing tooth decay, particularly if you are a chronic sufferer. And if you're experiencing dental decay anyway, any bout of sinusitis may cause toothache sinus pain.
To learn more about the way each condition affects the other, visit the dentist.
Call us at 1-866-970-0441 and we'll find a great dentist near you to root out your sinus toothache problem.
Stephen E. Sachs, Duke University School of Law, has a new essay: Conflict Resolution at a Medieval English Fair. It appears in LA RÉSOLUTION DES CONFLITS EN MATIÈRE DE COMMERCE TERRESTRE ET MARITIME (LA RÉSOLUTION DES CONFLITS: JUSTICE PUBLIQUE ET JUSTICE PRIVÉE: UNE FRONTIÈRE MOUVANTE, Albrecht Cordes, ed., 2012. Only the abstract is posted:
Recent studies of commercial conflict resolution have emphasized the role of informal norms and extralegal incentives as compared to the formal legal system. Yet the merchants who frequented medieval English fairs, whose example has been invoked as a precedent for modern dispute resolution, may not have fit this model. These merchants frequently litigated before the courts of the fairs, local tribunals of general jurisdiction that retained formal procedures and traditional methods of proof. Why did these traders rely on existing authorities rather than their own private institutions? And why did they appear before local tribunals, rather than alternative fora such as the English royal courts?
This essay examines the records of the fair court of St. Ives, one of England’s largest and best-documented fairs in the late thirteenth and early fourteenth centuries. It argues that the fair court managed to attract litigants in the face of jurisdictional competition through an effective alignment of legal and extralegal incentives. The court offered not only reputational sanctions, but also the coercive process necessary to govern a heterogeneous trading community. Although it lacked the reach and authority of a royal court, it offered merchants greater speed and flexibility in the application of specific customs, relying on community knowledge rather than official fact-gathering. The fair court of St. Ives provides an illuminating example of the interaction of law and society, demonstrating how fragile legal systems can succeed by making use of, and coordinating with, extralegal norms and incentives to accomplish official ends.
The figure shows a conical pendulum, in which the bob (the small object at the low end of the cord) moves in a horizontal circle at constant speed. (The cord sweeps out a cone as the bob rotates.) The bob has a mass of 0.040 kg, the string has a length L = 0.90 m and negligible mass, and the bob follows a circular path of circumference 0.94 m.
(a) What is the tension in the string?
(b) What is the period of the motion?
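A numerical check of the standard conical-pendulum analysis can be sketched in Python, assuming g = 9.8 m/s² (the figure itself is not reproduced here): the vertical component of the string tension balances gravity, while the horizontal component supplies the centripetal force.

```python
import math

# Given values from the problem statement
m = 0.040   # bob mass, kg
L = 0.90    # cord length, m
C = 0.94    # circumference of the circular path, m
g = 9.8     # assumed gravitational acceleration, m/s^2

r = C / (2 * math.pi)            # radius of the horizontal circle
sin_theta = r / L                # cord angle measured from the vertical
cos_theta = math.sqrt(1 - sin_theta**2)

# Vertical balance: T * cos(theta) = m * g
tension = m * g / cos_theta

# Horizontal: T * sin(theta) = m * omega^2 * r, which reduces to
# omega^2 = g / (L * cos_theta), so the period is:
period = 2 * math.pi * math.sqrt(L * cos_theta / g)

print(f"tension = {tension:.3f} N, period = {period:.2f} s")
```

With these inputs the tension comes out to roughly 0.40 N and the period to about 1.9 s.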
Location: Northern Greece, Macedonia
The vardar is a cold northwesterly wind blowing from the mountains down to the valleys of Macedonia. A type of ravine wind, enhanced by a channelling effect while blowing down through the Moravia-Vardar gap, bringing cold conditions from the north to the Thessaloniki area of Greece. Most frequent during winter, it is blowing in the rear of a depression when atmospheric pressure over eastern Europe is higher than over the Aegean Sea. In general, the vardar is similar to the mistral wind.
A strong vardar event occurred on January 12 and 13, 2003. Compare the significant drop in maximum temperature of 17°C (from 19°C down to 2°C) with the drastic shift in prevailing wind direction from easterly to northerly and northwesterly, and note the wind speed and peak gusts recorded at Thessaloniki during this period.
At this week’s State Board of Education meeting the usually mundane topic of transportation was addressed, revealing some serious issues relating to transportation at charter schools. Under North Carolina law, charter schools are exempt from statutes and rules applicable to traditional public schools. The purpose of this law is to allow for innovation and to let charters circumvent teacher licensing standards. Oddly, this law also exempts charters from the school bus safety regulations that are followed by public schools.
Derek Graham, chief of the Department of Public Instruction's Transportation Division, stated that currently only 40 of 98 charter schools (41%) provide transportation for their students even though they do receive funds for transportation. This forces students who cannot provide their own transportation to be excluded from these schools and contributes to the higher levels of segregation found in North Carolina charter schools. Mr. Graham was concerned that if charter school buses were held to the same standards as public school buses, the few charters that do provide transportation would stop, because most of the buses they use are retired public school buses that no longer make the grade.
However, as Chairman of the State Board of Education William Harrison rightfully pointed out, student safety comes first and there is really no way to get around that. Most of the regulations that have been enacted were in response to accidents and incidents involving buses in the past that nobody wants to see repeated. The Board now faces a no-win situation and must decide whether they should effectively decrease the already minimal level of transportation services at charter schools or push student safety by the wayside.
BOSTON -- The rollout of antiretroviral therapy in rural South Africa has resulted in a rapid recovery of life expectancy for HIV-infected individuals, with women appearing to have benefited more than men, researchers reported here.
Since 2004, overall longevity for childhood survivors of HIV-infection in the region has risen from about age 50 to age 60. However, in men longevity has gone from 46 to 55 years, while in women it has increased from 50 to 64 years, said Till Barnighausen, MD, PhD, from Harvard School of Public Health in Boston, and colleagues.
"We see that the benefits of HIV-therapy in rural South Africa are highly unequal," Barnighausen said at the annual Conference on Retroviruses and Opportunistic Infections."Men are being left out."
In a rural area northeast of Durban, South Africa -- considered the epicenter of the HIV epidemic in that country -- researchers have tracked more than 90,000 individuals as to their HIV status, socioeconomic status, and a host of other demographic and health variables.
In the current study, researchers followed outcomes among 52,964 women and 45,688 men. There were 6,140 deaths among the women in the study for a mortality rate of 1.85 per 100 person-years compared with 6,150 deaths among men (mortality rate 2.17 per 100 person-years).
There were 3,729 HIV-related deaths among women (HIV-mortality rate 1.12 per 100 person-years) compared with 3,500 HIV-related deaths among men (HIV-mortality rate 1.23 per 100 person-years), Barnighausen reported.
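As a consistency check on the reported figures (my own back-of-the-envelope calculation, not part of the study), the person-time underlying each rate can be recovered from the deaths and the rate, since a rate per 100 person-years equals deaths divided by person-years, times 100.

```python
def implied_person_years(deaths, rate_per_100_py):
    """Back out the person-years of follow-up implied by a reported
    mortality rate expressed as deaths per 100 person-years."""
    return deaths / rate_per_100_py * 100

# Figures as reported for the study cohort
women_all = implied_person_years(6140, 1.85)   # all-cause, women
men_all   = implied_person_years(6150, 2.17)   # all-cause, men
women_hiv = implied_person_years(3729, 1.12)   # HIV-related, women
men_hiv   = implied_person_years(3500, 1.23)   # HIV-related, men

print(round(women_all), round(women_hiv))
print(round(men_all), round(men_hiv))
```

The all-cause and HIV-specific figures imply nearly identical person-time for each sex (about 332,000 person-years for women and 284,000 for men), which is what one would expect if both rates were computed over the same follow-up, so the reported numbers look internally consistent.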
He stated that the age gap in longevity between men and women beginning in 2001 was almost 7 years, but that gap decreased to less than 4 years by 2006.
Since then the gap has steadily increased to its present differential of more than 8 years.
"Men do benefit from the rollout of antiretroviral therapy, but not as much as women," Barnighausen said at a CROI press briefing. "We need to think about what this difference means, and what needs to be done to potentially rectify this differential."
He said that his research team determined that women are 2.5 times more likely to utilize antiretroviral therapy than men, and that may explain part of the story. "Being a woman appears to give you a potential advantage in gaining a closer-to-normal life expectancy," he said.
The average life expectancy for women without HIV in rural South Africa is 75 years; for men without HIV, it is 65 years.
When the researchers scrutinized the cause of death in the study population, they found that 70% of the men who died from HIV-related diseases succumbed to those illnesses without ever having accessed antiretroviral treatment.
"The health system never had a chance to put them on antiretroviral therapy," Barnighausen said. "The same number for women is 40%. That differential is powerful."
Barnighausen told MedPage Today that the discrepancy between men and women in longevity might be driven by patterns of healthcare. That is, women are often seen earlier than men because of treatment for pregnancy and other women's health matters. "Some of the differential can be explained by this," he said, "but the largest part of the differential is not explained by this phenomenon."
"We have thought about what those other factors may be," he said. "This is pure speculation, but it may be due to the role of masculinity in the Zulu culture that men are not expected to seek help. There is also the fact that the health system is nurse led, and there is an attitude that men are perpetrators of HIV and women are victims, so that women are more deserving of treatment."
Press conference moderator James McIntyre, MD, executive director of Anova Health Institute, Parktown in Johannesburg, South Africa, told MedPage Today the reason men are not gaining longevity at the same rate as women is "multifactorial. Men do not have health-seeking behavior; there is no facilitation of that behavior by making clinics accessible -- if men are working, where do they go? The way men are treated in clinics is very bad."
He said that most studies of access to HIV therapy in sub-Saharan Africa show that women seek treatment more than men. "I think you would see this same situation played out across the entire region," he said.
McIntyre noted that in the rural area where the study was performed, traditional values may play a larger role in healthcare access than in urban areas.
"Despite these sex difference, the rebound in longevity since the rollout of antiretroviral therapy is absolutely remarkable," he said.
Barnighausen reported no relevant relationships with industry.
McIntyre reported no relevant relationships with industry.
Conference on Retroviruses and Opportunistic Infections. Source reference: Barnighausen T, et al. "Unequal benefits from ART: A growing male disadvantage in life expectancy in rural South Africa" CROI 2014; Abstract 150.
This is Claudius' first "set" speech; he looks good, explains the quickness of the marriage while the country was supposedly still in mourning, handles the major problem that was hinted at in scene 1, takes care of Laertes well, and finally turns to a problem within the royal family. His language is masterly. His oxymorons, such as "defeated joy" and "mirth in funeral and dirge in marriage," allow the audience to focus on whatever emotion they want to find. Those who are still mourning will hear "defeated", "funeral", and "dirge"; those who are celebrating the wedding will hear "joy", "mirth", and "marriage." The language is appropriate for the formal, ritualistic situation.
In successful Machiavellian manner, Claudius has figured out a way to handle the threat of Fortinbras at the cheapest possible cost.
Claudius flatters Laertes by using his name often; his use of the first name suggests a more informal relationship with Laertes and Polonius. During this speech, Claudius calls himself "The Dane," asserting his identity as the King.
Hamlet's language, like his costume, is strikingly different from the formal, ritualistic quality of the King's speech; here, Hamlet is cynical and sarcastic. His pun suggests that he is closer in kin to Claudius than he would like to be, yet he does not believe Claudius feels kindly towards him. Notice that the pun would have worked as well if, instead of "kind," Hamlet had said "king." He is still the Prince, not the King as he perhaps should have been through the law of primogeniture as the son of the Old King.
Hamlet acknowledges that his appearance could be faked, but his use of the "inky cloak" truly denotes his inner feelings, and he would certainly not fake anything. He is claiming to be the True in a world of Counterfeits. This suggests that Hamlet believes that appearance and reality are one and the same; later, after seeing the ghost, he changes his mind and acknowledges that "a man may smile and smile and be a villain." But it comes to him as a revelation or epiphany when his suspicion of Claudius' villainy is confirmed by the ghost. This first viewpoint is closer to the Renaissance Christian Humanist than the Machiavellian. The view that a man may smile and be a villain is more of a Machiavellian view.
Claudius tries to sound friendly but is very critical of Hamlet, here suggesting that his grief makes him unmanly. Claudius lectures Hamlet, saying about the same thing that Gertrude did, but suggesting that Hamlet's actions are against heaven. Of course, when we realize that Claudius killed Old Hamlet, this speech in retrospect is the ultimate in hypocrisy. In scenes 2, 3, and 5 of this act, fathers (or in this case a step-father) lecture their children, telling them what to do.
Again, in a smart Machiavellian move, Claudius tries to forestall Hamlet's potential ambition to be King, which might find supporters among the court or the people, by announcing that Hamlet is his heir. In Denmark the succession to the throne is decided by the court, not the way England would use, the oldest son inheriting the crown. However, this would be an issue to the Elizabethan audience who was very concerned about whether there might be civil war after her death if she did not have an heir or clearly name her successor.
Hamlet acknowledges Gertrude's position in his family as deserving of duty, but ignores Claudius. But Claudius, being the smart manipulator, can turn this answer into a good way to "win" in front of the court. He quickly leaves the stage before this impression can change.
Hamlet gives us the first view of death, that it would be relief to one who is deep into grief. But he also verbalizes the view that suicide is a sin. He sees the world as a declining world, going from the ideal of the past when his father was King to the corruption of the court by Claudius.
Memory brings the past into his awareness and makes him sad because it seems so much worse. How one is remembered and for how long after his death is a theme that Hamlet wrestles with throughout the the play.
Hamlet is as upset about his mother's remarriage as he is about his father's death. His mother's willingness to marry Claudius undercuts Hamlet's view that his mother and father loved each other deeply.
This is an ironical statement; Hercules accomplishes 12 mighty labors, yet Hamlet cannot even perform one task given him by the ghost. Hamlet is also suggesting that he himself is part of this declining world; he is cynical even about himself.
Hamlet tests everyone. He wonders how Horatio might have heard that first speech of Claudius'; is he still grieving for the dead king or celebrating the marriage of the queen.
Again Hamlet tests Horatio. Here he tries to use a falsehood to reveal that Horatio's tale of seeing a ghost of his father was just made up, but Horatio corrects him.
This world view is a remnant of the Renaissance Christian Humanist view that God has a plan and he will revenge all wrongs and bring wrong-doings to light. Here Hamlet is willing to wait on God's timing to reveal why things are so rotten in Denmark.
About ten years ago, astronomers discovered a single dwarf planet, which they named Sedna, beyond any then-known object in the solar system. Its discoverers half suspected it was a freak of nature. It wasn't, and isn't. Another astronomical team now reports a new object, a companion to Sedna. They say further this new object won't be the last such object anyone discovers. But they miss one important implication. Other teams will likely confirm their opinion by discovering a third Sedna, and a fourth, and many more. When that happens, astronomers will have to admit the solar system is significantly heavier than they suppose. And that extra weight, far from supporting their conventional models, undermines them.
A second Sedna
This second Sedna has no name yet. The team under Dr. Chadwick W. Trujillo designates it 2012 VP113. In their letter to the journal Nature, they give its perihelion, or closest approach to the Sun: 80 AU. Sedna, at 76 AU, comes somewhat closer. 2012 VP113 measures 450 kilometers across, about half the size of Sedna.
The detection of 2012 VP113 confirms that Sedna is not an isolated object; instead, both bodies may be members of the inner Oort cloud, whose objects could outnumber all other dynamically stable populations in the Solar System.
Trujillo uses the term inner Oort cloud to describe a population of objects between the Kuiper belt (technically from 30 to 50 AU) and the hypothetical outer Oort cloud (the "Oort cloud" of most discourse), beginning at 10,000 AU.
Alicia Chang of the Associated Press interviewed several other astronomers, including members of the team that discovered Sedna. They agree with Trujillo. Chang quotes David Rabinowitz of Yale University, a co-discoverer of Sedna:
Sedna is not a freak. We can have confidence that there is a new population to explore.
All this is perfectly true. But Trujillo, Rabinowitz, and the others now hailing 2012 VP113 all miss a vital point: if an "outer Kuiper belt" exists, with many objects of this size, the Solar System is significantly heavier than astronomers have thought. The added mass then solves one of the few problems for an alternate theory of the origin of comets. According to that theory, comets did not form in a "cloud," waiting for objects inside it to "perturb" them into falling toward the sun. Instead, comets formed from material launched from earth in the most violent event the earth has ever known, the one event deserving of the name cataclysm.
Competing theories on comets
Today, the favorite theory on where comets came from is that they exist in a cloud, 10,000 AU to 50,000 AU from the sun. They originally formed among or near the gas giants. Some, passing close to the gas giants, fell into "short period" orbits. Others fell outward, to the present Oort cloud region.
Walter T. Brown, PhD, offers a radically different theory. Comets did not form in any "nursery" among the gas giants. Instead, a subcrustal ocean on Earth broke containment and shot straight up into the air, carrying large amounts of rock and mud with it. Most of this water fell back to earth, and this is the water of the Global Flood. But at least one percent of the earth's mass stayed in space, and accreted to form the comets, asteroids, and meteoroids we know today.
Brown has always acknowledged one problem with his theory. The Solar System has a large population of comets with aphelia (farthest distances from the sun) of up to 50,000 AU. That, of course, is the outer limit of the Oort cloud. By conventional estimates of the size of the solar system, these comets should have periods ("years") of about 4 million years. Yet the theory depends on a violent event happening about 5300 years ago.
Brown has always suspected the Solar System has more mass than the total catalogued amount. He also suspected astronomers would find this extra mass beyond the Kuiper Belt. Here's why: orbiting mass beyond any one object exerts no pull on that object. But once the object moves beyond that mass, it is now subject to the cumulative pull of that extra mass and everything else inside it. So an object could achieve an aphelion of 50,000 AU, yet have a resultant period far shorter than 4 million years. With enough mass, such a comet could have a resultant period well shorter than the 5300 years that elapsed since the Global Flood.
And now, conventional astronomers report they have every reason to find this mass, about where Brown said they would.
Brown cites Iorio's 2007 paper, showing how heavy the Kuiper Belt must be:
There are at least 70,000 Trans-Neptunian Objects (TNOs) with diameters larger than 100 km in the 30-50 AU region.
And now, Trujillo and Rabinowitz eagerly wait for their colleagues to discover other, equally heavy objects, beyond 50 AU.
In other news: a ringed Centaur
Sedna's new companion wasn't the only big news to break in the journal Nature. Braga-Ribas and company reported finding rings around another object, 10199 Chariklo.
Chariklo is a centaur, a satellite of the Sun, between Jupiter and Neptune, that has some of the qualities of an asteroid, and some of a comet. Chariklo is the largest of that class of objects. So if any centaur would have a ring system, Chariklo would.
But: how could Chariklo, or any object that small, have rings?
Brown discussed the problem with this Examiner this morning:
Astronomers used to scoff at the idea that asteroids could have moons. When the first astronomer reported seeing one asteroid orbiting another, his colleagues at first laughed at him.
They aren't laughing now. But they ignore this key fact: orbital capture cannot take place without something to slow down the object to be captured. That something, says Brown, was water vapor.
During the Global Flood, a lot of the water that escaped into space simply evaporated. It persisted as a gas. That was enough to slow down small objects, so that larger objects could capture them.
He also said those objects could never form unless all the material was falling together.
Brown confidently predicted the discovery of more centaurs with rings, and asteroids and Kuiper Belt objects with moons. More to the point, he welcomes the eventual discovery of more companions for Sedna, enough to solve the riddle of how a comet, forming from matter launched from earth, could itself launch into an orbit carrying it to 50,000 AU and maybe beyond. That solution would make the Oort cloud unnecessary.
Update, March 28, 2014: Dr. Walt Brown reminded this Examiner today of a passage in his book that addresses the aphelion of long-period comets:
The distance (50,000 AU) is in error. Comets more than about 12 AU from the Sun cannot be seen, so both the distances they have fallen and their orbital periods must be calculated from the small portions of their orbits that can be observed. Both calculations are extremely sensitive to the mass of the solar system. If this mass has been underestimated by as little as about 17 parts in 10,000 (about the mass of two Jupiters), the true distance would be 585 AU and the period only 5,000 years. | <urn:uuid:a1f1a657-8a73-4d6c-82f7-460235664d12> | CC-MAIN-2016-26 | http://www.examiner.com/article/sedna-has-company-and-lots-of-it?cid=rss | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783400031.51/warc/CC-MAIN-20160624155000-00129-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.952223 | 1,596 | 3.625 | 4 |
The Nano Membrane Toilet, invented at Cranfield University in the United Kingdom, has been designed to separate solid waste and urine. The solid waste is incinerated to generate heat that powers the toilet as well as small appliances, and the urine is filtered into potable water.
“By having to defecate in the open air, especially at night, many women and girls run the risk of rape and sexual assault.”
Though not designed specifically to tackle the needs of women and girls, if successful, this waterless toilet will do so in a number of ways for the millions who lack access to a basic, hygienic toilet.
Firstly, women and girls will welcome a toilet that provides the indoor privacy and the clean water necessary to address menstrual needs. At least 500 million lack adequate facilities for managing menstrual hygiene, according to a joint UNICEF and WHO report published earlier this year.
That is the situation at home, but the problem is identical in schools: many girls, especially in Sub-Saharan Africa, drop out once they reach puberty because sanitation facilities are inadequate to cope with menstruation.
Secondly, women and girls in poor countries are often tasked with fetching water for domestic use, often spending at least half an hour on a single trip. They would welcome a toilet that also produces clean water — with added benefits related to health and hygiene, for which women are normally responsible.

Thirdly, a clean private facility that is either in or close to home may even prevent violence. By having to defecate in the open air, especially at night, many women and girls run the risk of rape and sexual assault. And if the waterless toilet is placed in refugee camps, perhaps the energy generated by the solid waste could power a light bulb for extra security.
There are plenty of cheap and easy-to-build toilet designs on offer. But if this one works, the benefits it could bring for women are significant.
Henrietta Miers has worked across Africa and Asia as a gender and social development consultant for 20 years, specialising in gender policy. She is senior associate of WISE Development, a consulting company that focuses on boosting economic opportunities for poor women. | <urn:uuid:27e87265-1460-42ae-a375-12824aa2f199> | CC-MAIN-2016-26 | http://www.scidev.net/global/technology/analysis-blog/nano-membrane-toilet-help-women-hygene.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396872.10/warc/CC-MAIN-20160624154956-00044-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.961725 | 452 | 3.71875 | 4 |
It may sound like science fiction rather than cutting edge science, but Smartphones are now poised to revolutionize the healthcare industry. This is especially true for lifestyle types of diseases such as diabetes or obesity.
Soon, smart apps will be able to monitor blood sugar before and after a meal, amount of exercise, stress, caloric intake and other measurements of how a body is doing. This information can empower individuals who want to take charge of their own health, and, if desired, it can also be passed on to doctors, who can weigh in on preventative care.
According to CNN, “Currently, health care systems focus primarily on chronic ill health, rather than the preventative measures of living a healthy lifestyle, but this may be the adjustment that both the health care system and insurance companies make as a result of digital health devices.
“The ultimate health care applications will allow us to monitor our body and use the insight from that data to create an actionable, preventative health solution, such as an individual exercise or nutritional plan, in addition to devising a personal cocktail of medicine.”
Retinal displays will perform retinal scans, while heart rate, blood pressure and other bodily health indicators can be continuously monitored if desired. As the U.S. struggles to cut down on healthcare costs, leading-edge Smartphone technology will save the day. And that day is very soon at hand.
HTML as common glue
Netscape and Opera use namespace URIs
Both Netscape and Opera recognize HTML elements in the namespace "http://www.w3.org/TR/REC-html40". (To conform with XHTML, it should be "http://www.w3.org/1999/xhtml", but Opera doesn't yet support that.) This means that the crucial img elements are available for Web page layout, along with HTML tables.
Microsoft uses fixed html: prefix
Microsoft will let you use any URI you like, provided that the prefix used on the elements is html:. As a result, declaring
xmlns:html="http://www.w3.org/TR/REC-html40" allows documents which use both XML and HTML.
Moving toward an XHTML base?
XHTML is the W3C's effort to place the HTML vocabulary on an XML foundation. It's far more formal than the ad hoc approach outlined above, but it could conceivably build on this approach.
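A minimal sketch of the namespace mechanics described above, using Python's standard-library XML parser on a hypothetical document (the element names and data are invented for illustration). Elements declared under the HTML 4.0 namespace URI can be picked out by that URI, regardless of the prefix used:

```python
import xml.etree.ElementTree as ET

# Hypothetical mixed-vocabulary document: XML data plus HTML elements
# qualified by the HTML 4.0 namespace URI discussed above.
DOC = """\
<report xmlns:html="http://www.w3.org/TR/REC-html40">
  <title>Quarterly figures</title>
  <html:table>
    <html:tr><html:td>Q1</html:td><html:td>42</html:td></html:tr>
  </html:table>
  <html:img src="chart.png"/>
</report>
"""

root = ET.fromstring(DOC)
HTML_NS = "{http://www.w3.org/TR/REC-html40}"

# ElementTree expands each prefixed tag to {namespace-uri}localname,
# so HTML elements are recognized by URI, not by prefix.
html_elements = [el.tag[len(HTML_NS):] for el in root.iter()
                 if el.tag.startswith(HTML_NS)]
print(html_elements)  # ['table', 'tr', 'td', 'td', 'img']
```

A document that bound a different prefix to the same URI would match identically, which is the point of namespace-aware processing; Microsoft's fixed html: prefix, by contrast, keys on the prefix itself.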
Caeno (Diod. 5.76), a city of Crete which, according to the legend of the purification of Apollo by Carmanor at Tarrha, is supposed to have existed in the neighbourhood of that place and Elyrus. (Comp. Paus.) The Cretan goddess Britomartis was the daughter of Zeus and Carma, granddaughter of Carmanor, and was said to have been born at Caeno. (Diod. l.c.) Mr. Pashley (Trav. vol. ii. p. 270) fixes the site either on the so-called refuge of the Hellenes, or near Hághios Nikólaos, and supposes that Mt. Carma, mentioned by Pliny (21.14), was in the neighbourhood of this town. (Comp. Hoeck, Kreta, vol. i. p. 392.)
These are stories Report on Business is following Wednesday, July 16, 2014.
Minimum wage effectively flat
Real minimum wages in Canada basically haven’t budged from almost four decades ago.
A report today from Statistics Canada, which looks at the weighted average among the provinces, comes amid a debate over inequality, high unemployment among the nation’s youth, and accusations of abuse of the Temporary Foreign Workers program.
According to the statistics agency, the average minimum wage in Canada was $10.14 an hour last year. And when you translate the 1975 equivalent into 2013 dollars, it was “almost identical” at $10.13.
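The inflation adjustment behind that comparison is a simple ratio of price levels: scale the old nominal wage by current CPI over historical CPI. A minimal sketch; the wage and CPI figures below are illustrative placeholders, not Statistics Canada's actual series:

```python
# Converting a historical nominal wage into constant (2013) dollars by
# scaling with the ratio of consumer price index (CPI) levels.
# NOTE: the wage and CPI values below are hypothetical, for illustration.
def to_2013_dollars(nominal_wage, cpi_then, cpi_2013):
    """Return the wage re-expressed at 2013 price levels."""
    return nominal_wage * cpi_2013 / cpi_then

real_wage = to_2013_dollars(2.60, cpi_then=31.6, cpi_2013=122.8)
print(round(real_wage, 2))  # 10.1
```

The same ratio, applied year by year with the actual CPI series, produces the constant-dollar track the agency describes.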
Having said that, it certainly varied over the course of almost 40 years, dipping to $7.53 between 1975 and 1986, then rising to $8.81 in 1996.
Up until 11 years ago, it held stable at about $8.50, and has climbed since then.
“In other words, after inflation, Canada’s lowest-paid workers gained only a penny an hour over the past four decades,” said Erin Weir, an economist with the United Steelworkers.
“With many provinces now indexing their minimum wages to inflation, this problem should not repeat itself,” he added. “However, further increases above the inflation rate are needed to actually make low-paid workers better off than four decades ago.”
One of the troubling findings in the Statistics Canada report is the rise in the number of Canadians earning just minimum wage.
The proportion of paid workers earning the minimum stands at about 6.7 per cent, compared to 5 per cent in 1997. The bulk of the jump occurred between 2003 and 2010, according to the agency.
“To some degree, the increase in the proportion of minimum-wage employees during those years was the result of increases in the minimum-wage rate in many provinces,” Statistics Canada said.
“That is because a portion of those who were paid just above the former minimum rate became paid at the new, revised rate and joined the group of minimum-wage earners.”
Consider this statistic: The proportion of young workers, between the ages of 15 and 19, who earned the minimum rose to 45 per cent in 2010 from 30 per cent in 2003, while those who earned between minimum and 10 per cent higher fell to 21 per cent from 31 per cent.
“Young employees, less educated employees, part-time employees and those working in service industries were most likely to be paid minimum wage,” Statistics Canada said.
Indeed, half of those aged 15 to 19 earned the minimum last year, compared with 13 per cent of those in the 20-to-24 age bracket.
There are huge differences among the provinces, as well.
Alberta, home to the oil industry, for example, boasted the lowest percentage of workers earning the minimum, at just 1.8 per cent last year. Prince Edward Island had the highest rate, at 9.3 per cent.
Another interesting fact is the change in the ratio of minimum wage to average hourly earnings. This, too, has been on the rise, to 46 per cent last year from 41 per cent in 2005.
“This is because the average minimum wage rose faster than the average hourly earnings (in constant 2013 dollars),” Statistics Canada said.
Barrick Gold Corp.'s chief executive officer Jamie Sokalsky is stepping down in September, the company said today in a surprise shake-up that will nix the CEO role and promote two senior managers to the roles of co-president.
The move comes less than three months after Barrick’s new executive chairman John Thornton took the reins of the world’s largest gold producer and is a blow to Mr. Sokalsky, who was instrumental in steering the company through the toughest year in its history, The Globe and Mail's Rachelle Younglai reports.
Mr. Sokalsky, who rose through the ranks at Barrick to become its chief financial officer, was appointed chief executive in 2012 when the company’s former CEO Aaron Regent was ousted amid a weakening share price.
In the following two years, he helped put Barrick on sounder financial footing by raising funds to pay down the company’s debt load, suspending a key gold project in the Andes and selling off underperforming mines.
Time Warner surges on rejected bid
Shares of Time Warner Inc. are surging this morning after news that Rupert Murdoch offered recently, though unsuccessfully, to buy the media giant for $80-billion (U.S.).
"21st Century Fox can confirm that we made a formal proposal to Time Warner last month to combine the two companies," the Murdoch enterprise said in a statement today after reports in The New York Times were followed by other news organizations.
"The Time Warner board of directors declined to pursue our proposal. We are not currently in any discussions with Time Warner."
Time Warner also confirmed the bid today, saying the Murdoch Group’s offer was $32.42 in cash for 1.531 of a Twenty-First Century Fox share.
It rejected the advance, it said, because it believes its own planned course will deliver results, and that there’s “significant risk and uncertainty” surrounding Twenty-First Century Fox non-voting shares, as well as its “ability to govern and manage” a combined company of that heft.
“The board is confident that continuing to execute its strategic plan will create significantly more value for the company and its stockholders and is superior to any proposal that Twenty-First Century Fox is in a position to offer,” it said.
Still, the Times said, last month's “bold approach” could well put Time Warner into play.
Investors certainly seem to think so, driving up the stock.
Poloz trims outlook
The Bank of Canada trimmed its economic outlook today as it made no change in policy.
As The Globe and Mail’s David Parkinson reports, the central bank projected that economic growth will average about 2.25 per cent during 2014-16, and that the economy now won’t reach full capacity until mid-2016.
The Canadian dollar dipped as Bank of Canada Governor Stephen Poloz and his colleagues released their rate statement and monetary policy report, though later shot higher.
The central bank held its so-called neutral bias, which means it’s giving no signal to the markets on where interest rates could be headed, or when.
“Given the downgrade to the global outlook, economic activity in Canada is now projected to be a little weaker than previously forecast,” it said.
It said, though, that it still expects the lower Canadian dollar, which has eroded since Mr. Poloz took the helm of the central bank, and forecast stronger global demand, will lead to his hoped-for rebound in exports.
“For now, the growth outlook still hinges on getting exports and capital spending going, and the former is then linked in part to a weaker Canadian dollar, a clear sign that the bank will lean against any further C$ appreciation,” said chief economist Avery Shenfeld of CIBC World Markets.
- David Parkinson: Bank of Canada cuts economic outlook
- Bank of Canada's shifting tone 'undercut' Canadian dollar in five ways: BMO
- Infographic: The loonie and factory jobs
- Video: Why the Canadian dollar will ultimately tumble again
- Infographic: Copper and loonie: A 'curious' connection
- David Parkinson in ROB Insight (for subscribers): This dollar bull rally has no horns
- Brian Milner: Poloz faces dilemma as loonie strengthens
Rogers to demand order
In the wake of a Supreme Court decision last month that upheld Canadians’ right to online privacy, Rogers Communications Inc. says it will now require a court order or warrant for all law enforcement requests for customer information.
As The Globe and Mail's Christine Dobby reports, the Supreme Court of Canada ruled in June that police require judicial authorization before asking Internet providers for basic information that would identify their customers, including in cases involving child exploitation.
The court made it clear that police must obtain a search warrant, even if they are asking only to obtain the name and home address of a consumer who has signed up for Internet use.
Toronto-based Rogers, one of Canada’s biggest providers of Internet, wireless and cable services, said Tuesday it will adjust its policies to comply with the decision.
Factory sales rise
Canada’s factories scored their fourth gain in five months in May as sales climbed 1.6 per cent.
That increase, Statistics Canada said, was largely on the back of the oil, coal and auto sectors.
Over all, sales in May increased in 11 of the 21 industries measured, representing about 61 per cent of all manufacturing.
Notably, Ontario chalked up hefty increases, with sales climbing 2.3 per cent.
That’s important given that last week’s employment report showed factory jobs in Canada’s most populous province at their lowest level on records dating back to the mid-1970s.
Manufacturing sales in the province, however, are now at their highest since the summer of 2008, before the recession. That’s largely thanks to the auto industry.
Inventories across Canada, meanwhile, slipped 0.6 per cent, for the first drop in five months.
The inventory-to-sales ratio, or the time it would take to exhaust inventories amid constant sales, fell to its lowest since late last year.
Unfilled orders dipped 0.5 per cent, and new orders 0.1 per cent.
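The inventory-to-sales ratio defined above is just inventories divided by one month's sales. A tiny sketch with made-up figures:

```python
# Inventory-to-sales ratio: months needed to exhaust current inventories
# if monthly sales stayed constant. The figures below are invented.
def inventory_to_sales(inventories, monthly_sales):
    return inventories / monthly_sales

ratio = inventory_to_sales(71.5, 51.1)  # e.g. $71.5B stock, $51.1B monthly sales
print(round(ratio, 2))  # 1.4
```

A falling ratio, as reported here, means inventories are lean relative to the pace of sales.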
- Canada's May factory sales jump to near record high
- Unemployment rate climbs to 7.1% as Ontario hit hard
Beijing should feel ‘vindicated’
There are always questions about Beijing’s official numbers, but today’s reading of second-quarter economic growth is still a strong sign.
China’s economy expanded in the three-month period by 7.5 per cent, according to the official data, a slightly faster pace than the first quarter’s 7.4 per cent.
“This should assuage hard-landing fears and leave policy makers feeling vindicated in their decision not to pursue more forceful stimulus,” said Julian Evans-Pritchard of Capital Economics.

“Looking ahead, we still expect growth to slow slightly during the second half of the year and are keeping our forecast for 2014 unchanged at 7.3 per cent,” he added in a research note.
“Today's data demonstrate that policy makers have plenty of room to ease policy and shore up growth if necessary. We expect that further targeted measures may be rolled out to offset continued weakness in the property sector.”
- Onex strikes $1.3-billion deal to acquire York Risk Services of U.S.
- Bombardier hits 500 deal milestone for Q400 and C Series
- Bank of America profit tumbles 43 per cent
- British pay growth slows to record low even as jobless rate falls | <urn:uuid:56bcca41-7844-445d-89f5-9da44373665c> | CC-MAIN-2016-26 | http://www.theglobeandmail.com/report-on-business/top-business-stories/real-minimum-wages-in-canada-havent-budged-in-almost-four-decades/article19630636/?cmpid=rss1 | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395992.75/warc/CC-MAIN-20160624154955-00008-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.953876 | 2,325 | 2.90625 | 3 |
Millions of Americans depend on sunscreen. Protecting our skin from harmful ultraviolet rays is important—UV light can cause skin cancer and prematurely age the skin. Yet, beyond the knowledge that sunscreens can prevent sunburns, for many years little has been known about their safety and efficacy. As researchers have looked at sunscreens more closely, some concerning information has come to light—leaving many of us utterly confused about how to best protect our health. Fortunately, we’re beginning to understand more about which sunscreen ingredients are safe, and with a little education we can feel good about what we’re putting on our family’s skin.
The Sunscreen Facts
Ultimately, a little sun is good for us—a complete lack of sun exposure can lead to a deficiency in vitamin D. We should all aim to absorb about 20 minutes of sun each day. Yet at the same time, we need sunscreen to protect ourselves from the potentially harmful effects of overexposure.
Unfortunately, some of the most popular sunscreens on the market may not be as effective as we’d like and may even be harmful. Scientific evidence shows that some sunscreens only protect against UVB radiation rather than the more harmful UVA. What’s worse, many conventional sunscreens have been found to contain potential carcinogens including oxybenzone, benzophenone and retinyl palmitate. Research shows some of these sunscreen ingredients may actually promote skin cancer growth and free radical cell production in the body—some experts caution that exposure to these toxic ingredients may completely erode the benefits of sunscreen.
Armed with this knowledge, it’s wise to seek out sunscreens that are safest for our skin instead of just buying whatever is cheapest at the local drugstore. To learn how to choose the best sunscreens, read “Sunscreen Shopping Tips” later in this article, or visit the Environmental Working Group's Guide to Sunscreen. Or if you wish to take matters into your own hands, you can make your own sunscreen at home using the Homemade Sunscreen Recipe from Katie, the Wellness Mama. The basic ingredients should be available at your local health-food store; all you need to do is mix it up!
Sunscreen Shopping Tips
• Avoid sunscreens with vitamin A—generally listed as “retinyl palmitate” or “retinol” on labels. It may speed up the development of cancer on skin exposed to sunlight.
• Avoid sunscreens that contain oxybenzone. Found in 80 percent of chemical sunscreens, oxybenzone can penetrate the skin, causing allergic skin reactions and disrupting hormones.
• Skip sunscreens with insect repellent. While sunscreens should be applied liberally, repellents should not.
• Opt for creams instead of sprays and powders, which can emit tiny particles that may not be safe to breathe in.
• Beware of sunscreens with SPFs higher than 50. Many researchers suspect SPFs higher than 50 do not provide additional protection, creating a false sense of security.
• Look for mineral-based natural sunscreen with zinc and titanium dioxide listed as active ingredients, usually in the form of nanoparticles. The Environmental Working Group favors these two ingredients for their superior UVA protection—while no ingredient is completely effective or without hazard, these ingredients have shown less than .01 percent or no skin penetration and are stable in the presence of sunlight.
Try these editor-recommended products, which are also recommended in the Environmental Working Group's 2013 Sunscreen Guide.
A little goes a long way with this Sunny Face Natural Sunscreen, SPF 30, from Goddess Garden Organics, made with nourishing aloe and sunflower oil. $18, goddessgarden.com
Lightly scented with organic lavender, this Broad Spectrum Sunscreen Cream, SPF 30, from Badger is super moisturizing and safe. $16, badgerbalm.com
Indulge in the subtle scent and healing benefits of coconut with the Broad Spectrum Sunscreen, SPF 30, from True Natural. $20, truenatural.com
Kiss Your Kids
Protect the little ones in your family with the gentle, water-resistant Obsessively Natural Kids Natural Mineral Sunscreen, SPF 30, from Kiss My Face. $15, kissmyface.com
Use this Mineral Sunscreen, SPF 32, from Dolphin Organics for everyday use. It's easy to apply and fragrance-free. $20, dolphinorganics.com
Gina DeBacker is the associate editor at Mother Earth Living, where she manages the health section of the magazine. | <urn:uuid:5404fc8c-ac82-49f0-a6a2-b953cd8c93d9> | CC-MAIN-2016-26 | http://www.motherearthliving.com/health-and-wellness/natural-beauty/sunscreen-ingredients-zmez14mjzpit.aspx | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783392159.3/warc/CC-MAIN-20160624154952-00127-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.899993 | 977 | 3.21875 | 3 |
In this movie regarding the life of St. Therese of Lisieux, the girl who wants to become a saint, there is a scene wherein she changes the color of her clothing and headdress after participating some time at the monastery. What is the meaning of this color change that marks her becoming a nun?
The first official stage of religious life is the novitiate, the prescribed period of training and proving that novices must successfully complete before being admitted to vows. It usually lasts one year.

This is the period a member of a religious community undergoes prior to taking vows (poverty, chastity and obedience), in order to discern whether she or he is called to vowed religious life.
The novice's habit is often slightly different from those of professed members of the order. For instance, in communities of women that wear a dark veil over the head, novices often wear a white one (like the case of the Carmelite Order). | <urn:uuid:16f6dcc5-aa4f-49f0-b811-b87a3f91b54d> | CC-MAIN-2016-26 | http://christianity.stackexchange.com/questions/25366/why-do-carmelite-nuns-change-headwear | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399425.79/warc/CC-MAIN-20160624154959-00071-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.968323 | 234 | 2.9375 | 3 |
JERUSALEM, Israel -- In a pre-construction excavation in Jerusalem, a team of Israeli archaeologists unearthed a section of a major road between the Mediterranean coastal city of Jaffa and Jerusalem.
Israel Antiquities Authority (IAA) archaeologists discovered the ancient thoroughfare in the northern Jerusalem neighborhood of Beit Hanina where city engineers plan to lay a drainage pipe, the IAA said in a press release Tuesday entitled "Greetings from the Roman Empire!"
"This is the first time we have encountered such a finely preserved section of the road in Jerusalem," said an enthusiastic David Yeger, director of the excavation.
The team uncovered curbstones on both sides of the 8-meter-wide (about 9-yard) road, built with large flat stones snugly fitted to make a pleasant walking surface.
Yeger said some parts of the road were excavated previously, but this section is the most beautifully preserved.
"The Romans attached great importance to the roads in the empire," he said in the IAA press release.
According to Yeger, the Roman occupants invested a lot of money and used the most advanced technology of its time "to crisscross the empire with roads."
This particular section of road is one of an "imperial network of roads" leading from the coastal plain to Jerusalem.
“These [roads] served the government, military, economy and public by providing an efficient and safe means of passage,” he explained. “Way stations and roadside inns were built along the roads, as well as fortresses…to protect the travelers.”
Undaunted by the drone of traffic on nearby highways, the distinctive call of an endangered California clapper rail rises from somewhere deep in the LaRiviere salt marsh along the border of Fremont and Newark. The sounds make Florence LaRiviere, the woman for whom the restored marsh was partly named, smile with delight.
Although Florence, in her late 80s and suffering from failing eyesight, sees only glimpses of her beloved marshland, the sound is proof that her 47-year crusade to restore the wetlands, marshes and salt ponds of San Francisco Bay is working.
"It takes persistence," Florence says. And of that, she has plenty.
Plodding along the freeways that skirt the bay, many of us don't give much thought to the fragile coastlines, and yet their survival is crucial to the millions of creatures that live in the mud and salty marshes, and the millions more that pass through twice a year on migrations.
They also are critical to human survival. The wetlands and marshes are our first line of defense against a rising sea level.
They act as sponges, reducing the impact of high tides and floods. They also purify the water.
Almost 85 percent of the bay's original marshes and shorelines, conservationists say, have been changed by development and commercial salt operations.
Some of the areas have been mined and abandoned to the point that they now resemble moonscapes, vast stretches of encrusted white nothingness. Except, of course, life does exist there, and with the help of Florence and her Citizens Committee to Complete the Refuge, life can thrive there.
Florence and her late husband, Philip, cofounded the all-volunteer group in the late 1960s when they saw what was happening. They initially focused on the salt marshes, but realized that the entire bay needed to be preserved and enriched.
The volunteers went door-to-door, handing out fliers and educating the public. They spent countless hours at city planning and council meetings, reading environmental impact statements until their vision blurred, and talking reason to anyone who would listen and quite a few who wouldn't.
Eventually, U.S. Rep. Don Edwards joined the efforts and got congressional approval to purchase the lands for what would become the first national urban wildlife refuge, later named the Don Edwards San Francisco Bay National Wildlife Refuge.
The result of their work is a joy to see. The "wildlife island in an urban sea" supports migratory birds and year-round residents. It also is home to the endangered salt marsh harvest mouse, which drinks salt water and lives on a diet of pickleweed -- a salt marsh stalwart.
Those moonscapes have been transformed into lush marsh, rippling with life.
The work is far from finished. The Citizens Committee's goal is to restore the available bay shoreline and marshes to wildlife habitat.
For years, the Citizens Committee -- still small and still dedicated -- has been trying to bring an area known as Area 3 and Area 4, Whistling Wings and Pintail Duck clubs in Newark, into the refuge, but developers have plans for luxury homes and a golf course.
The group has mounted a lawsuit seeking to protect these valuable lands. It is a fight they will not give up on, even though it seems a never-ending battle.
If you're interested in the Citizens Committee preservation efforts, it could use your help and your tax-deductible donations. Go to www.bayrefuge.org or mail to P.O. Box 50991, Palo Alto, CA 94303.
Joan Morris' column runs five days a week in print and online. Contact her at email@example.com.
Studies Examine the Nutritional Value of Human Milk
To Christina Valentine, MD, MS, RD, medical director for neonatal nutrition services at Nationwide Children’s Hospital and principal investigator in the Center for Clinical and Translational Research at The Research Institute, the catchphrase “milk does a body good” is more than an ad slogan; it’s a nutritional fact for newborn babies.
While other physician scientists examine how a baby’s aerodigestive system functions in order to improve how that baby feeds, Dr. Valentine focuses on the nutritional aspects of babies’ meals. Pictured below, Dr. Valentine works with a new mom in the NICU to explain the value of human milk.
Dr. Valentine says nutrition is vital, especially in the first couple of weeks of life, as good nutrition leads to healthy weight gain for the baby. Without proper weight gain, newborns are at risk for delays in mental development and for conditions such as cerebral palsy.
One of the most surefire ways to ensure a baby properly gains weight is for the baby to receive human milk.
“Human milk is medicine,” said Dr. Valentine. Research has shown that if as little as half of a baby’s feedings are human milk, the baby is significantly less likely to develop devastating diseases of infancy such as necrotizing enterocolitis, which destroys the bowel. This is one of the reasons Nationwide Children’s works closely with The Mother’s Milk Bank of Ohio, one of 10 operations in the country that collect, screen, process and distribute human milk donated by lactating mothers.
Dr. Valentine’s recent research has shown the nutritional value of donor milk as provided by milk banks. “We found that for the preterm infant, protein and the fatty acid Docosahexaenoic acid (DHA) may be limited and the baby either needs supplementation in their diet or through mom’s diet,” said Dr. Valentine. To expand on the Ohio donor milk research, Dr. Valentine is now investigating whether milk from the other national milk banks yields similar findings.
In addition to milk quality, Dr. Valentine’s research team aims to help mothers with the process of feeding their baby, as not all new mothers can properly breast feed. “We see many mothers who want to produce milk for their babies, but are physically unable to due to the early delivery of their premature infant,” said Dr. Valentine. Using mice models in collaboration with mentor Lynette Rogers, PhD, in the Center for Perinatal Research, Dr. Valentine’s team is studying how inflammation and conditions like diabetes, preeclampsia and obesity may affect the function of the mammary glands. “The goal is to identify inflammatory markers that could help determine early which mothers might have trouble lactating and find ways to help them,” said Dr. Valentine.
All of this research will benefit from new technology in Nationwide Children’s main campus neonatal intensive care unit. The team now has access to a PEA POD Infant Body Composition Machine, one of 50 in the world. PEA POD provides a noninvasive way to measure body composition in the smallest of infants. “We are now able to look at how our feeding protocols affect fat and muscle distribution in our patients. We hope to examine how human milk and increased amino acids impact body composition in babies compared with formula-fed babies,” said Dr. Valentine.
By: Bruce LaFontaine
Well researched, this volume features 28 realistic, ready-to-color drawings that trace the experiments and activities of Wilbur and Orville Wright from their initial glider tests (1900) and the flight of the Wright Flyer I in 1903 to a picture of Orville at the controls of an aircraft in 1947 — one year before his death. Free Teacher's Manual available. Grades: 3–5.
Boost: Seriously Fun Learning!
Keeping children entertained and engaged is the key to learning, and the Boost series offers a wide range of fun-filled coloring and activity books that help teach a variety of basic skills. Each title is targeted to a specific grade range and carefully aligned with the Common Core State Standards, which are listed at the bottom of each page.
Availability: Usually ships in 24 to 48 hours
Grade level: 3–5 (ages 8–11)
Dimensions: 8 1/4 x 11
The middle cerebellar peduncle is one of the three stem-like structures that connects the brain's cerebellum to the pons. It is also the largest of such structures. The middle cerebellar peduncle gets its "middle" prefix from its placement. It is located between the topmost peduncle, which is called the superior cerebellar peduncle, and the bottommost peduncle, which is referred to as the inferior cerebellar peduncle. The middle cerebellar peduncle is also known as the brachia pontis.
Known as the "little brain," the cerebellum is a distinct region located at the rear and bottom of the brain. It is primarily responsible for regulating the body's movement. The middle cerebellar peduncle receives a copy of the data used for motor control via its connection between the cerebellum and the pons. Located below the peduncles, the pons comprises the upper part of the brainstem.
The middle cerebellar peduncle is mainly composed of fibers derived from its site of origination, the pontine nuclei. Also called the griseum pontis, the pontine nuclei are the part of the pons that helps regulate motor activity. From there, the middle peduncle crosses the midline of the pons' ventral part, called the basilar pons. The stem-like structure then emerges on the opposite side of the brainstem, curving along the pons' dorsal side, referred to as the pontine tegmentum. The peduncle ends its course when it enters the cerebellum.
There are three fasciculi, or bundles, into which the fibers of the middle cerebellar peduncle are organized. They are called the superior fasciculus, the inferior fasciculus and the deep fasciculus. Each fiber that composes the peduncle is centripetal, meaning that it transmits nerve impulses toward the brain.
The superior fasciculus is the topmost fiber bundle of the middle cerebellar peduncle. Originating from the pons' upper transverse fibers, the superior fasciculus is primarily dispensed to the small lobes of the cerebellar hemispheres. These are the two lateral lobes that comprise the cerebellum.
Traveling under the superior fasciculus is the inferior fasciculus. This particular bundle consists of the pons' lowest transverse fibers. It is dispersed to the cerebellum vermis that unites the hemispheres.
The pons' deep transverse fibers make up the deep fasciculus of the middle cerebellar peduncle. Part of it is covered by the two aforementioned fasciculi. The fibers of the deep fasciculus go to the upper front cerebellar lobes.
Kids are curious about absolutely anything, so that's what the books in the non-fiction picture book category will be about--absolutely anything! As long as it's true and factual, of course. (it's gotta be "not fiction", after all!). Science, art, history, sports, current events--and more--are all fair game, from slice-of-life biographies and other true stories kids will read beginning-to-end, to list books and other compendiums of information that will delight the browsers in the crowd.
We're looking for fresh subjects or old favorites approached in a new way; writing that sizzles and sings; illustrations that make you say, "Wow."; and all of this between covers young readers (and those read to!) will want to open again and again.
Non-fiction picture books will be 48 pages or less and aimed at younger readers. Non-fiction books 48 pages or more, with longer, denser text divided into chapters, belong in the Middle Grade/Young Adult Nonfiction category.
--Fiona Bayrock, organizer
Please leave a nomination -- including author and title -- in the comments below. One nomination per person, per category, please.
The Rev. George Washington Gale had a dream in the 1830s that led to the establishment of Galesburg, Ill., and the formation of a manual labor college called Knox. The first 25 settlers arrived in 1836 and built temporary cabins in Log City near what is now Lake Storey.
One of Gale’s objectives for Knox College was to offer equal educational opportunities for both men and women. In 1843 a Female Seminary Building was in operation near the intersection of Seminary and Simmons Street. Unfortunately the structure burned to the ground shortly after opening. An Academy Building was then built on the Public Square area in 1847 with the upper floor devoted to educational facilities for Knox College female students.
Seven years later in 1854 Gale’s fledgling town began to flourish when the CB and Q Railroad began operations. Knox College donated land needed for the railroad’s expansion. The college grew to 66 women and 51 men enrolled in full-time classes. In 1857 Knox College expanded with the construction of two major buildings that remain viable today.
This is something I did during my teaching diploma. It is helpful to learn what a child understands about scientific concepts so that you can plan experiments to challenge their misconceptions and improve their understanding.
We went around the house and found a variety of objects and set out to test whether they would float or sink.
First we made some predictions. My son divided the objects into ones he thought would float and ones that he though would sink …
Float – marble, toy pig, plastic spoon, pom pom, syringe, ping pong ball, lid, centre from a tape roll, 5 cents
Sink – golf ball, shell
Then we tested each object and found that two of our predictions were wrong – the marble and the 5 cent piece sink.
As we tested I asked a few questions.
Why do some things sink?
“things that sink are too heavy”
Once we had tested everything we started to experiment with getting floating objects to sink and sinking objects to float.
How could we make the plastic lid sink?
“make it heavier. If we put the golf ball in it, it sinks.”

Could we get these things that sink to float?

“we need to pump it up with air or not make it heavier (he means we need to make it lighter but didn’t know the right word). Put it in a boat.”

Boats are heavy. Why do boats float?

“They are pumped up with air”

Scientists use Archimedes’ principle to determine whether objects will float or sink. Objects are said to be floating if they are supported by the water (that is, they do not have to float at the surface). For a floating object, the mass of the material equals the mass of the water displaced by the object. Dense objects cannot displace enough water to counterbalance their weight. Objects made of a material denser than water (such as a steel boat) can still float if they contain air, so that the mean density is less than that of water.
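The density comparison at the heart of Archimedes' principle can be sketched in a few lines of code. The numbers below are illustrative assumptions for a golf ball, not measurements from our experiment:

```python
# Archimedes' principle sketch: an object floats when its mean density
# (mass / volume, air pockets included) is at most the density of water.

WATER_DENSITY = 1000.0  # kg per cubic metre, fresh water

def floats(mass_kg, volume_m3):
    """True if the object's mean density does not exceed water's."""
    return mass_kg / volume_m3 <= WATER_DENSITY

# A solid golf ball: roughly 46 g in roughly 40 cm^3,
# a mean density of about 1150 kg/m^3 -- it sinks.
print(floats(0.046, 0.000040))   # False

# The same mass reshaped into a "boat" enclosing ten times
# the volume has a mean density of about 115 kg/m^3 -- it floats.
print(floats(0.046, 0.000400))   # True
```

This is the same trick the plasticine experiment relies on: the material stays the same, but the boat shape encloses air and lowers the mean density below that of water.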
Some of the earlier understanding that children need to develop about floating and sinking to understand the scientific explanation are …
- whether something floats depends on the material it is made of, not on its size
- objects float if they are light for their size and sink if they are heavy for their size
- an object can be light for its size if it contains air, such as a hollow ball
- materials with a boat shape will float because they effectively contain air
- water pushes up on objects with an upthrust force
So, my son understands that floating or sinking does not depend on the size of the object, but is still coming to grips with whether or not something is light or heavy for its size. I would like to give him some plasticine to play with in the water (a ball of plasticine will sink, but it can be made into a boat shape that will float), so that he can experiment with changing the shape to make the plasticine float.
Our play with floating and sinking is an example of how play builds children’s understanding. I doubt my son would understand much about floating and sinking if I had told him, rather than let him experiment for himself (not just this time but many times in his life).
What are your child's ideas about floating and sinking? Have you asked them? Or perhaps you could ask them about another scientific concept.
Lion’s a real bully, but he may have met his match when wily Rabbit takes him on.
Tired of Lion’s bullying but not brave enough to confront him, all the animals advertise for someone to “make Lion stop bullying us.” A bear, a moose and a tiger respond, but Lion quickly defeats each. When Rabbit arrives, Lion’s confident he’ll win and tells Rabbit to pick the contest, so Rabbit chooses a marshmallow-eating competition and wins. Disgruntled, Lion complains he was sick, so Rabbit offers a quiz contest. Rabbit wins this, as well as hopping and painting competitions, but as Lion always has some excuse for losing, Rabbit tells him to choose a final competition. Knowing he’s faster, stronger and a better climber, Lion suggests a race to the top of the mountain, but no matter how fast Lion runs, clever Rabbit always seems to get ahead. Precise, digitized pencil illustrations utilize simple lines, patterns and colors to highlight Lion’s mean and silly bullying antics, his prowess in competitions against the bear, moose and tiger, and his humiliating defeats against wily Rabbit. Readers with sharp eyes will be rewarded with numerous amusing visual details, including hidden hints about how Rabbit outwits Lion.
A droll, nonthreatening tale of bullying in the guise of a modern fable. (Picture book. 4-8)
Claims/realities: Chapter 4
Claim: “Actually, if we really look at gorillas [vegetarian animals] et al., what we find are animals that contain the fermentative bacteria necessary to digest cellulose. We humans contain no such thing. This man writes books about diet without knowing a thing about how humans actually digest (p141).” On the next page she cites a chart that says humans have no bacteria in their stomach.
Reality: Humans currently have over 130 known bacterial species in the stomach. Barry Marshall and Robin Warren won a Nobel Prize in 2005 for their research in this area. Keith’s information here came from a chart from 1975 (see below) and second- and third-hand analyses done by Eades and Eades and the Weston A. Price Foundation people. Additionally, the fact that we don’t have an enzyme to break down cellulose does not, in any way whatsoever, mean we don’t need cellulose. Keith uses this characteristic of cellulose to claim that we don’t need and weren’t “meant” to eat cellulose. In reality, cellulose is one of our most important sources of fiber. If it broke down in the stomach, our intestines wouldn’t move because they would have no bulk… we wouldn’t poop. Here’s a primer on some things that can happen if you don’t get enough fiber. “This lady writes books about diet without knowing a thing about–” oh wait, that would be obnoxious.
Claim: Humans are carnivores, here’s a chart to prove it (pp. 142-3).
Reality: This is a classic comparative anatomy chart. Here’s one that makes it look like humans are “naturally” vegetarians, and if you combine them, you can probably get a chart that makes a good argument for how humans are “naturally” omnivores. Here’s a good article about how such charts are deceiving and how we don’t really know what humans “naturally” are. Keith’s chart is from The Stone Age Diet, a book self-published by Dr. Walter Voegtlin in 1975 – that’s 35 years ago. And self-published books not only don’t need peer review or feedback, but don’t technically even need an editor, a manuscript reader, a consultant, or anyone else besides the author to decide what should be published. So it was already a dubious book when it came out. As you might guess, tons of research has since been done that severely complicates his theories about meat and plant eating (see all of our chapter 4 discussions, and do your own research). This diet was a fad in the mid-70s and became faddy again in the 2000s, in part due to this inconclusive yet fairly well-publicized study.
Claim: “If the getting of food, of life, means we are destined for sadism and genocide, then the universe is a sick and twisted place and I want out. But I don’t believe it. It hasn’t been my experience of food, of killing, of participating. When I see the art that people who were our anatomical equals made, I don’t see a celebration of cruelty, an aesthetic of sadism. No, I wasn’t there when the drawings were made and I didn’t interview the artists. But I know beauty when I see it. And the artists left no question about what they were eating. Besides their drawings, they also left weapons, including blades for killing and butchering (p144).”
Reality: By now, hopefully we realize that mainly this isn’t even a “claim”, it’s a subjective anecdote about Keith’s internal eating experience. As for cavemen leaving “no question” about what they ate, this is simply wrong. Paleontology is all question and speculation. Since time machines don’t exist, there is no way to truly prove anything in paleontology, even more so than in many of the other sciences. This is partly why it’s an exciting science, and partly why the paleontologists who are worth listening to are carefully trained not to create overarching, unsubstantiated narratives based on cave paintings, like “all humans should eat meat” or “no one ever ate meat”. This kind of use of the social sciences is biological determinism, which is related to sociobiology. Generally, radicals, especially feminists, have noticed and criticized these methods of logic, which have historically been employed by fundamentalist Christians, eugenicists, racists, misogynists, anti-Semites, and others who dismiss loaded, complicated political and social issues by claiming that all correct human behavior is based in biology. This is what Keith’s sources do. This is the practice of using science as scientism (a dogmatic and simplified faith in science) versus using science for the critical and useful tool that it is. Keith, a second-wave radical feminist, apparently either missed or is willfully ignoring how one of the most significant and successful movements inside second-wave radical feminism included a huge, substantiated critique of this kind of science. You can read about this in any intro to women’s studies textbook. See also “paleofantasies” and the myth of the three Ns.
Claim: “One version of the vegetarian myth posits that we were ‘gatherer-hunters’, gaining more sustenance from plants gathered by women than from meat hunted by men. This rumor actually has an author, one R.B. Lee, who concluded that hunter-gatherers got 65 percent of their calories from plants and only 35 percent from animals (p146.)”
Reality: First off, this “one R.B. Lee” who started a “rumor” is one of the most well-respected and influential living anthropologists, Professor Emeritus of Anthropology at the University of Toronto, and the editor of The Cambridge Encyclopedia of Hunter-Gatherers. It’s probably safe to assume she has not read any of his numerous academic opuses, since she only quotes a second-hand analysis. We don’t want to be redundant about Keith’s resources, but suffice it to say, she goes on to use her usuals here plus an article written by Dr. Loren Cordain of the PaleoDiet Brand in an attempt to debunk him. She then uses pages more of anecdote about not feeling good when she was a vegan and how, if you don’t believe her, you, too, should see how you feel after eating beans (pp. 147-8). In any case, Dr. Lee’s studies present information and possible, though ultimately not provable, conclusions. Keith and her resources present pseudo-information plus rampant, unapologetically biased interpretation. Again, this is biological determinism.
Claim: Lectin might be damaging to our digestive tracts, we aren’t really sure (pp147-9), so this is another reason we aren’t meant to eat plants.
Reality: First off, her citations in this lectin discussion are all from our friends Eades and Eades, Davis, and Cordain (see above)–as are the rest of her claims in this chapter about how wheat causes health problems from indigestion, to arthritis, to multiple sclerosis, to schizophrenia. “According to Drs. Eades” almost functions as a catch-phrase in this chapter. She offers a hyperbolic disaster scenario about lectins, but her discussion of lectins’ known, unknown, and potential roles–and the research that has and hasn’t been done on them–is so limited as to basically be useless. Second, let it be noted that lectins are found in meat and dairy foods, not just plants. Thirdly, in the whirl of her hyperbole, Keith conveniently doesn’t mention things like the fact that lectins, specifically ones from plants, might be able to help/cure cancer. See these peer-reviewed studies:
Lectins as bioactive plant proteins: A Potential Cancer Treatment
Lectins: from basic science to clinical application in cancer prevention
Diet and colorectal cancer: An investigation of the lectin/galactose hypothesis
We’re not saying there are no potential problems with lectins. We’re just trying to round out the discussion.
Claim: Vegans can’t get Vitamin D (p180).
Reality: Vitamin D is hard to come by in food. It seems to occur nowhere in plant foods, except for certain mushrooms, and in only a very small handful of animal foods. Some types of fish contain Vitamin D, and small amounts are found in beef liver and chickens’ eggs. In no food is it abundant. No matter what your diet, unless you survive on certain types of fish, you probably get the bulk of your Vitamin D from either A) fortified foods–fortified cow milk and other dairy; fortified fruit juices; fortified cereals, vitamins, etc. or B) the sun–human skin synthesizes Vitamin D from sunlight. It’s not totally clear how much sun exposure one needs in this regard, and seasonal changes and geography play a role, especially in places with extreme weather. It’s worth looking into this based on where you live. The Vitamin D Council writes, “The skin produces approximately 10,000 IU vitamin D in response [to] 20–30 minutes summer sun exposure—50 times more than the government’s recommendation of 200 IU per day!” They also write that people who don’t have regular sun exposure would have to take a 5000-IU Vitamin D supplement daily to catch up… that’s the equivalent of 50 glasses of fortified milk a day. So let’s look at the source Keith points to for her claim that vegans are sick from lack of vitamin D: an article called “Dietary Intake of Vitamin D in Premenopausal, Healthy Vegans was Insufficient to Maintain Concentrations of Serum 25-hydroxyvitamin D and Intact Parathyroid Hormone Within Normal Ranges During the Winter in Finland”. Now, this might be something to consider if it’s winter and you are a premenopausal Finnish vegan. But it cannot be generalized to all vegans, nor does it follow that, if this is indeed a problem, eating meat would be the remedy. In fact, this study shows that people in general living in arctic climates might not get enough D, and would benefit from supplements.
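A quick sanity check on the arithmetic in the quoted IU figures. The per-glass fortification level (roughly 100 IU) is our assumption, implied by the "50 glasses" comparison but not stated in the article:

```python
# Figures from the quoted passage; the per-glass fortification
# level (~100 IU) is an assumption used to check the comparison.
skin_synthesis_iu = 10_000      # from 20-30 min of summer sun, per the quote
daily_recommendation_iu = 200   # the government recommendation cited
supplement_iu = 5_000           # daily supplement suggested for no-sun cases
milk_iu_per_glass = 100         # assumed fortification per glass

print(skin_synthesis_iu / daily_recommendation_iu)  # 50.0 -> "50 times more"
print(supplement_iu / milk_iu_per_glass)            # 50.0 -> "50 glasses"
```

Both ratios come out to 50, so the passage's two "50" claims are at least internally consistent under that assumed fortification level.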
Keith states, “It is possible to get vitamin D from ingested sources alone, which is how humans survive in the arctic.” This isn’t true. Lots of different people all over the world might have to take Vitamin D supplements.
Claim: “In every cell your body makes the sugar it needs, therefore there’s no need for carbohydrates and in fact carbs don’t actually exist…. There is no such thing as a necessary carbohydrate. Read that again. Write the Drs. Eades, ‘the actual amount of carbohydrates required by humans for health is zero.’ ” (p 154.)
Reality: Compare this simplistic and sensationalist claim, made by a couple of proponents of brand-name diets, with over three thousand research studies on the microbiology of carbohydrates. Keith’s entire discussion about carbohydrates and sugar is Eades-based, as is almost the entire ensuing discussion about diabetes. It’s redundant at this point to talk about how problematic the Eades are, so please refer back to our previous discussions. Our only guess is that Keith, following the Eades, is attempting to reframe what has otherwise been a very medically useful paradigm regarding macronutrients. Their reframing is not based on anything reliable and seems to have pretty serious bias/ideology backing it.
Claim: Eating a high-carbohydrate diet can destroy your stomach by giving you gastroparesis. Keith knows, because she gave it to herself (p. 159.)
Reality: To back this claim, Keith cites a no-longer-available internet article from her favorite place, the Weston A. Price Foundation’s website. Keith came to this diagnosis with the help of a doctor who works with “recovering vegans”. We haven’t been able to find information that says gastroparesis is caused by carbohydrates, though there is a lot of information about how eating a low-carb diet can help it. These are two different things. In any case, no matter how many times Keith says it, veganism is not interchangeable with a high-carb diet.
Claim/implication: “Before we go even further, do you even know what cholesterol is?” (p162).
Claim: “The Lipid Hypothesis—the theory that ingested fat causes heart disease—is the stone tablet that the Prophets of Nutrition have brought down from the mountain. We have been shown the one, true way: cholesterol is the demon of the age, the dietary Black Plague, a judgment from an angry God, condemning those who stray into the Valley of Animal Products with disease. That at least is what the priests of the Lipid Hypothesis declared, having looked into the entrails of … rabbits” (pp160-1.)
Reality: In her classic manner, and in what some say is the classic manner of the Weston A. Price zealots, Keith goes on for pages and pages making claims regarding “cholesterol panic” and “supposed” information regarding cholesterol’s dangers that go against thousands upon thousands of studies and meta-studies from around the world (not just one study done on a rabbit, as she sensationalistically states). She makes these claims based on these resources, including, mainly, the highly questionable Anthony Colpo, whose only expertise is in weight training. That’s three or so wildly dubious sources against thousands and thousands of international studies about how complicated cholesterol and microbiology are, how dangerous too much animal-based cholesterol can be (as opposed to the cholesterol that is naturally manufactured in the human liver – if you really don’t “even know what cholesterol is”, here’s a link where doctors explain it to kids), and so much more. We don’t know what else to say. How can throwing all this away, literally not giving it one paragraph of attention in exchange for giving attention to a handful of people who have no expertise, be a reasonable, helpful, or safe move? We can’t go through all these studies and all this counter-information for you here… there’s literally too much. We trust that you’ll do your own research.
“Not to put too fine a point on it but, duh?” -Lierre Keith, p. 161. Wow. Seriously? Classy.
Claim: Vegans don’t get omega-3s (all over the book.)
Reality: There are many vegan sources of omega-3s, including flax seed, pumpkin seed, canola oil, hemp, walnuts, etc. It is easy to, say, buy a bottle of flax oil and put a little in your food, or toss some pumpkin seeds into your salad. Vegetarian supplements are also extremely easy to come by.
Claim: Vegans get no B12 (all over the book.)
Reality: False. Though it is hard to come by in plant foods, B12 is extremely easy to supplement, and many foods are fortified with it (both plant and animal foods). Keith’s resources here are, again, the Weston Price Foundation, highly selective information, and unsubstantiated personal anecdote. She has, again, completely simplified the issue of how people– meat eaters and vegetarians alike– obtain or do not obtain B12. Here is a wonderful article that discusses B12 specifically in relation to Lierre Keith’s claims. Please read it.
Claim: There are no plant sources for tryptophan. This can cause depression, eating disorders, schizophrenia, and other serious mental problems (see discussions in chapters 1 and 4.)
Reality: False. Tryptophan is found in many plant sources, including potato, banana, wheat flour, sesame, sunflower seeds, spirulina, raw soy, rice, and oats.
Claim: There are no plant sources for saturated fat. This means vegans don’t absorb essential nutrients like tryptophan and fat-soluble vitamins (see discussions in chapters 1 and 4.)
Reality: False. There are so many plant sources of saturated fat. They include various oils, avocado, coconut, nuts, and nut butters. Many nutrition experts say these are actually among the best sources of saturated fat, because they aren’t generally accompanied by the more problematic fats found in many animal products.
Claim: “Listen to your body, reader, a listening that must make your body known to you, less mysterious and more beloved” (p 153.)
Reality: Keith only wants you to listen to your body if it tells you the things she’s telling you. If it tells you something different, you’re stupid and you do not possess an adult mind. We wish we were being flip or exaggerating, but, no matter what you think of her, Keith makes it really clear that this is where she’s coming from.
Claim: Meat is good for you and being vegan isn’t.
Reality: All ethical issues aside: There are bodies upon bodies of research from widely divergent organizations and agencies showing that vegetarian and vegan diets can be extremely healthy. There are bodies upon bodies of research from widely divergent organizations and agencies showing that eating meat and dairy can be extremely harmful. There are certain things you should do to be a healthy vegan/vegetarian, like be mindful of your B12 intake. If you’re intent on eating meat, there are lots of things– probably many more things– to be mindful about. Again, there is no way we can go over all of this information. This isn’t to claim that veganism is the one “natural” diet– if anything, we are trying to get across that all diets are imperfect because evolution and adaptation are imperfect, that there is no one “correct” way to relate to our human bodies, and that lots of people choose veganism for very complicated, valid reasons and execute it in a healthy way.
You don’t have to make the same choices we make. We just ask that you will be as critical and objective a thinker as possible, and no matter what your diet, do your own research if you are going to read this book–because a lot of it is straight-up wrong. Lierre Keith is not a doctor or nutritionist and neither are most of her sources! It is necessary and radical to be critical of scientific paradigms, but this by no means equals throwing away carefully established scientific ideas and methods. The following is one of the most critical points we’re going to make in this blog, so we’re going to make emphatic keyboard choices:
PLEASE DO NOT USE THIS BOOK AS A BASIS OR GUIDE TO MAKE DIETARY CHANGES! INSTEAD, READ ALL YOU CAN FROM THE MOST DIVERGENT AND OBJECTIVE SOURCES POSSIBLE! IF YOU CAN, FIND DOCTORS YOU TRUST, WHETHER “WESTERN” OR “HOLISTIC” OR BOTH– ASK ABOUT THEIR EXPERTISE AND TRAINING AND THEIR PARADIGM. DON’T JUST SEEK OUT PALEO-DOCTORS IN ORDER TO VALIDATE YOUR INTEREST IN THE PALEO-DIET, FOR INSTANCE. | <urn:uuid:1346d337-ba9c-49d4-b0c8-f3a9bfd4b3d5> | CC-MAIN-2016-26 | https://eatingplantsdotorg.wordpress.com/category/reasons-to-go-vegan/ | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783403502.46/warc/CC-MAIN-20160624155003-00054-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.944802 | 4,237 | 2.546875 | 3 |
Spending less time sitting and more time standing lowers blood sugar, cholesterol, and weight — all of which translates into a lower risk for heart disease. So says a study of Australian adults published in the July 30 issue of the European Heart Journal.
As part of the Australian Diabetes, Obesity, and Lifestyle Study, researchers monitored the activity levels of roughly 700 adults to determine how much time these volunteers spent sitting, standing, walking slowly, and walking at a moderate to vigorous pace. Results confirmed that sitting worsens risk factors for heart disease.
Every two hours a day spent sitting was associated with an increase in weight and waist size, as well as in levels of blood sugar and cholesterol. As you might expect, time spent walking rather than sitting not only lowered cholesterol and blood sugar levels, but also reduced waist size and weight. Perhaps more surprisingly, simply substituting two hours of standing for sitting also improved blood sugar and cholesterol levels.
Many studies suggest that the more we sit, the more we’re likely to develop heart disease and other illnesses, including diabetes and cancer. And, whether it’s sitting at the computer to get some work done or on the couch watching TV, too many hours spent on our bottoms increases the risk of dying from any cause — even among people who exercise regularly. Why? When your body is still for too long, you stop using blood sugar efficiently. You lose muscle, mobility, and flexibility. Over the long term, you may gain weight or develop depression.
“Regular exercise is an excellent way to prevent cardiovascular disease and improve overall health. One of the biggest perceived barriers to increasing physical activity is time. We always tell people that a little exercise is better than none, and this study reminds us that simple, doable approaches make a difference,” says Dr. Gregory Curfman, a cardiologist and editor in chief of Harvard Health Publications.
Even if your job or your lifestyle doesn’t keep you on your feet, most of us can find ways to stand more. For example, try standing when you are
- waiting for the bus or train
- folding clothes or sorting the mail
- watching TV
- preparing a meal
- talking on the phone.
You can even build standing into your day at the office. See if colleagues will agree to a true “standing meeting.” Get up and walk to your coworker’s office rather than relying heavily on email. And instead of making yourself comfortable in that extra chair, stand while you talk.
You might also ask your employer about purchasing a standing desk. If that option is too expensive, rig one up on your own. You can find good, inexpensive ideas on the Internet. “Decreasing time sitting while at work is a simple, manageable adjustment that may add important health benefits,” says Dr. Curfman.
"The Lottery" is an allegorical story that may be read in various ways, as a criticism of patriarchy, mindless violence in society, the darker side of tradition and ritual, or the capitalist social order. This broad scope is created by ambiguity in concrete details; anchoring the narrative too firmly in the real world would limit its power and curtail its range. (The precise date given in the first line of the story prevents it becoming too vague.)
In fact, the village in question is modeled closely after the settlement where Jackson was living at the time, North Bennington, Vermont. Jackson explained,
I hoped by setting a particularly brutal ancient rite in the present and in my own village, to shock the story's readers with a graphic dramatization of the pointless violence and general inhumanity in their own lives. ("The Lottery: Historical Context")
Given the general tenor of the story, any precise reference to where it was supposed to have taken place would also have been irresponsible. Jackson later commented,
....what [readers] wanted to know was where these lotteries were held, and whether they could go there and watch. ("On the Morning of June 28, 1948, and 'The Lottery'")
She used a number of details from life in her home town to make the narrative vivid and concrete, but if she had gone further and supplied the town's name, she would have turned the story into a slanderous attack on her neighbours, at least by implication.
A randomised-controlled clinical trial to evaluate the effectiveness and safety of combining nicotine patches with e-cigarettes (with and without nicotine) plus behavioural support, on smoking abstinence
Tobacco smoking is one of the greatest modifiable risks to health worldwide. Each year millions of people die prematurely as a result of tobacco smoke exposure. Most smokers want to quit and many try each year. However, the chances of long-term cessation are low, especially for highly dependent smokers. In an effort to help more smokers achieve the goal of quitting for life, the Addiction Research Programme investigates novel approaches to supporting people to change their behaviour.
These studies test the effectiveness of novel delivery systems, variations of delivery of standard treatments, or novel treatments. The research team includes public health physicians and smoking cessation researchers, and has many collaborative networks nationally and internationally. | <urn:uuid:d4dd84e7-9e9e-482d-9b66-a5e1b10b9c39> | CC-MAIN-2016-26 | http://www.nihi.auckland.ac.nz/page/our-addiction-research | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396887.54/warc/CC-MAIN-20160624154956-00154-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.948297 | 168 | 2.609375 | 3 |
Millions of years from now, will the geological record of Earth's history display evidence of a "human" epoch? Colin Waters and colleagues have accumulated a massive body of data that suggests the Anthropocene epoch is a geological phenomenon that can be identified in the stratigraphic record. Until now, the Anthropocene as a period distinct from the previous Holocene epoch has been more of an idea used to demonstrate the massive impact that humans have had on the planet's atmosphere and species. But Waters and colleagues now say in this Review that human activity has left a pervasive and persistent signature on Earth that warrants recognition as a new geological time unit. The long-lasting and widespread geological record of the Anthropocene will include stratigraphic layers full of uniquely human products such as concrete, elemental aluminum, and plastics, for instance. Carbon particulates from atmospheric pollution, distinctly high levels of nitrogen and phosphorus from fertilizers and pesticides, and the radionuclide fallout from nuclear weapons also stand as global markers of the Anthropocene. A vast reshaping of coastal sedimentation and widespread species extinction might also be used in the distant geological future to mark the epoch, the scientists say. The starting date for the Anthropocene is still under review, but may be somewhere around 1950 C.E., at the start of the nuclear age and the mid 20th century acceleration of population growth, industrialization, and mineral and energy use. "Not only would this represent the first instance of a new epoch having been witnessed firsthand by advanced human societies, it would be one stemming from the consequences of their own doing," the authors write. 
| <urn:uuid:ab0ca51d-13b5-42df-9bf4-9356a7e8c4ed> | CC-MAIN-2016-26 | http://www.eurekalert.org/pub_releases/2016-01/aaft-nmp010416.php | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396949.33/warc/CC-MAIN-20160624154956-00078-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.929414 | 321 | 3.328125 | 3 |
A grizzly bear has been photographed in the North Cascades National Park for the first time in more than four decades, confirming estimates of a small population of “ursus horribilis” in mountains of northern Washington.
“It’s a relief after years of waiting: God, I thought they were like vampires, and that you couldn’t get them on camera,” said Mitch Friedman of Conservation Northwest, which has championed grizzly and wolf recovery in the “American Alps.”
Grizzlies once ranged through the Cascades of the Northwest and Sierra Nevada mountains of California: A grizzly decorates California’s state flag, although the bears have been extinct in the Golden Bear State for nearly 90 years.
The only healthy populations of grizzlies remaining in the “lower 48” are in Montana’s Glacier National Park and nearby wilderness areas, and in the Greater Yellowstone Ecosystem in Wyoming.
The North Cascades grizzly was sighted and photographed last October. It took the photographer half a year to contact the National Park Service with the news.
The last confirmed grizzly sighting was in the 1960’s when a bear was shot by hunters in the remote valley of Fisher Creek south of Diablo Lake. In 1968, the valley became part of the newly created North Cascades National Park.
The U.S. Fish and Wildlife Service has estimated a population of as many as 10 grizzlies in the North Cascades. A few bears are also believed to reside in the Salmo-Priest Wilderness Area, in the southern Selkirk Mountains where the borders of Washington, Idaho and British Columbia come together.
Grizzlies were sighted in the 1950’s near the east end of the North Cascades in what is now the Pasayten Wilderness Area. A veteran outdoorsman named Bill Louden — namesake of a small lake in the Pasayten — memorably rode over a ridge to discover a grizzly lumbering up the other side. Luckily, the wind was blowing toward Louden.
The exact location of the latest sighting is not being disclosed. People have shot animals in a newly reestablished wolf pack in upper reaches of the Methow Valley near the east boundary of the North Cascades National Park. | <urn:uuid:7e2f74c5-de3f-4e29-b815-cfe939cb96c1> | CC-MAIN-2016-26 | http://blog.seattlepi.com/seattlepolitics/2011/07/01/grizzly-photographed-in-north-cascades/ | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393997.50/warc/CC-MAIN-20160624154953-00165-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.949466 | 478 | 2.890625 | 3 |
Order this book
Department of Sociology, University of Oxford.
John Casti's Would-be Worlds is a non-technical introduction to computer simulation modelling. The book essentially consists of short summaries of many projects that have used computer simulation. Although these examples deal with a wide range of topics - from biology to the stock market - there is a significant amount of social science content.
The main argument of the book is that computer simulation modelling represents a major scientific advance because it allows us to study the mechanisms of complex systems: systems that arise from the interaction of their parts. Casti argues that computers can be used to create "artificial worlds". These artificial worlds can then function as laboratories that allow simulated experiments about the real world. Casti's position is summarised in the following quotation:
"For the first time in history we are in a position to do bona fide laboratory experiments on these kinds of complex systems ... now, thanks to the availability of affordable, high-quality computing capabilities, we can actually construct silicon surrogates for these complex, real-world processes. We can use these surrogates as laboratories for carrying out the experiments needed to be able to construct viable theories of complex physical, social, biological, and behavioral processes. In many ways this leaves us in the same position that physicists were in at the time of Galileo. We now have an essential tool that can be used to create theories of complex systems, theories that will ultimately compare favorably with the theories of mechanical processes that Newton and his successors developed to describe particle systems." (p. 35)
Although Casti acknowledges that a "decent" theory of complex systems is "not even close", he argues that computer simulation will eventually lead the way. The book's five chapters set out to show this to the reader.
The first chapter gives a brief description of computer simulation, model types and model assessment. Although the description is interesting and easy to understand, it does little to help the thesis that computer simulation provides answers to otherwise unanswerable questions. For example, the book starts by discussing a computer game that simulates the NFL tournament (professional American football). Individual agents in the computer game are assigned scores based on the individual statistics of real life players. As in the real game, the team is theoretically as good as the sum of its players' statistics and their interaction. While the example is intriguing, it would be foolish to use the computer game to determine probable outcomes of real football matches because there are many intangibles that it does not include (e.g. coaching strategies, team morale etc). Moreover, even the statistics used in the program are surely not completely reliable, especially for inter-conference or inter-divisional games, since each team plays mostly within its own division and has a unique schedule. Apparently unaware of these limitations, Casti uses the game to assess the outcome of the 1996 Super Bowl between San Diego and San Francisco by performing 100 simulations of a game between the two teams. Very few would have expected the 1996 Super Bowl to be close, something acknowledged by Casti himself, and the betting odds favoured San Francisco by 19 points. The simulation resulted in San Francisco winning only 54% of the matches with a 95% confidence interval of 6.67 (3.93 for the margin of victory, far off the mark of the actual 23 point victory for San Francisco). Rather than acknowledge that the simulation worked poorly, Casti states, "As the final score was 49 to 26 - a 23-point margin - we see what an anomaly, statistically speaking, this game was." (p. 217) It seems here that Casti knows more about the "would-be world" of American football than its "real world".
Although not his intent, this exemplifies how results from computer simulations can be misleading when not enough consideration is given to the mechanisms that operate in the real world. Unfortunately, Casti did not seize the opportunity to discuss this.
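The statistical procedure Casti reports - run the matchup many times, then quote a win percentage with a 95% confidence interval - is ordinary Monte Carlo estimation, and it is easy to sketch why 100 runs tell us so little. The coin-flip simulate_game below is a stand-in assumption (Casti's surrogate models plays and players, and its internals are not described in the review), but the confidence-interval arithmetic is the standard one for a proportion:

```python
import random

def simulate_game(p_win=0.54):
    # Stand-in for one simulated match: True if the favoured team wins.
    # A real surrogate models plays and players, not a single coin flip.
    return random.random() < p_win

def win_rate_with_ci(n_games=100, seed=0):
    random.seed(seed)  # fixed seed so the "experiment" is repeatable
    wins = sum(simulate_game() for _ in range(n_games))
    p = wins / n_games
    # Normal-approximation 95% confidence interval for a proportion
    half_width = 1.96 * (p * (1.0 - p) / n_games) ** 0.5
    return p, half_width

p, hw = win_rate_with_ci()
print(f"estimated win rate: {p:.0%} +/- {hw:.1%}")
```

With only 100 runs the half-width of the interval is close to ten percentage points, which is one more reason a 54% simulated win rate says very little about the outcome of a single real game.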
Despite this example Casti is clearly aware of the problems associated with model assessment. He states, "As with all computer exercises, it's garbage-in, garbage-out, and the faith we place in the model's answers is inversely proportional to the amount of garbage that goes in." (p. 31) He also argues, "In short, the standard of judgment as to whether the model is good or bad is grounded in how well the model answers our questions about the real world of people, places, and things." (p. 46) Unfortunately, however, later in the same chapter he states, "Finally, the realism of a simulation is measured by the realism of the process, as opposed to the realism of the data." (p. 75) It would be unclear to any novice what this means. A little more elaboration would have helped.
Chapter two discusses simulation models that have been used to study the stock market, geopolitics and biological evolution. I agree with Casti that financial markets are good examples of complex systems that are difficult to understand using conventional methods and his discussion of Arthur and Holland's surrogate market is interesting. Since it is impossible to perform real-world experiments on the stock market, computer simulations with realistic assumptions may provide some insight. The chapter is less convincing in its discussion of Richard Dawkins' simulation of how genetic mutation and natural selection interact to make new organisms. Unfortunately, not enough information is provided to understand exactly how the process of developing a computer organism can be used to understand the development of real, carbon-based organisms. Casti's states that what "... immediately catches the eye about this biomorph is its uncanny structural resemblance to a real-life organism called radiolaria, which includes things like the amoeba." (p. 41) This is ridiculous. A completely different biomorph could and would evolve if different criteria were entered into the model.
Chapter three, which discusses the "science of surprise" is particularly good. Here Casti lists five surprise generating mechanisms associated with computer simulation: paradoxes, instability, uncomputability, connectivity and emergence. Paradoxes result from false assumptions about a system leading to inconsistencies between what we expect the model to show and what it actually tells us. Instability refers to the fact that even apparently stable systems can be sensitive to small disruptions. Uncomputability occurs when the processes under investigation are not rule-based. Quite rightly, Casti acknowledges here that there is "... no a priori reason to believe that any of the processes of nature and humans are necessarily rule based." (p. 89) Connectivity refers to findings of unexpected interconnectedness between elements of the system. Emergence implies interactions among elements that are unexpected when the individual elements are considered individually. Also of interest to social scientists in chapter three are the discussions of Arrow's Impossibility Theorem and Arthur's assessment of rational expectations using computer simulation.
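Instability, the second of these mechanisms, has a textbook illustration that is not from Casti's book but makes the point in a few lines: the logistic map. Two starting values that differ by one part in a million soon produce trajectories that disagree completely:

```python
def logistic_trajectory(x0, r=4.0, steps=30):
    """Iterate the logistic map x -> r * x * (1 - x)."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.300000)
b = logistic_trajectory(0.300001)  # perturbed by one part in a million
max_gap = max(abs(x - y) for x, y in zip(a, b))
print(f"largest gap between the two trajectories: {max_gap:.3f}")
```

At r = 4.0 the map is chaotic, so the microscopic perturbation is amplified at nearly every step; this is the sense in which apparently simple, deterministic systems can be sensitive to small disruptions.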
The main concern of Chapter four is how learning mechanisms can be incorporated into computer simulation models. This chapter provides the most convincing practical application of computer simulation, called TRANSIMS. TRANSIMS is a program used by traffic planners in Albuquerque, New Mexico to assess the impact of new road construction on traffic patterns. The chapter's discussion of Sugarscape, which models the evolution of cultural phenomena, is also interesting but less believable. Casti is once again too evangelistic when discussing Tierra, a program that emulates the development of a living organism. He states, "January 4, 1990, a day to be remembered: the day when the first noncarbon-based life form came bubbling up out of the computer machine of Tom Ray, a naturalist from the University of Delaware." (p. 162)
Chapter five places the philosophy of computer simulation in the context of scientific knowledge. Casti argues that despite examining surrogate worlds instead of the real world, simulation is a scientific enterprise because information gathering is based on a set of rules that are reliable, objective, explicit and public. The chapter argues that when considering the limits of scientific knowledge we must consider three worlds: the physical, the mathematical and the computational. Casti states, "It is the relationship among these very different universe that must be kept uppermost in mind if there is to be any hope of creating a viable theory of the limits of scientific knowledge." (p. 202)
The book was clearly written as an introduction to computer simulation for the beginner. Still, it does not provide enough detail for even the beginner to understand how these models are constructed and evaluated. Although there is a good bibliography for each chapter, pointing the reader to related studies, the book would have been better if it had fewer examples, giving them much more elaboration. As a result, the main argument that computer simulation is changing the frontiers of science is not very convincing. I am not denying the potential impact of computer simulation. In the light of the evidence he provides, however, Casti has definitely overstated its importance. Despite my criticisms, Would-be Worlds is interesting and does make one think of the vast range of areas in which computer simulation could potentially play an important role.
© Copyright Journal of Artificial Societies and Social Simulation, 1999 | <urn:uuid:d34abc6f-c7d2-4d11-9c35-a84bb945ddd3> | CC-MAIN-2016-26 | http://jasss.soc.surrey.ac.uk/4/1/reviews/andersen.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397562.76/warc/CC-MAIN-20160624154957-00091-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.949821 | 1,856 | 3.15625 | 3 |
In the middle ages a variety of breads were baked in Scandinavia. In Finland people ate a fermented and leavened rye bread, baked in an oven. In Sweden they made a bread known as flat bread or thin bread, made from corn and oats and baked on an iron slab over an open fire.
There was another kind of bread which was known as famine bread. The population was largely of farmers who were exposed to the forces of nature - utterly dependent on the weather. Each crop failure constituted a deadly threat. In the middle ages, and for centuries after, any part of the country where the harvest failed simply starved, abandoned by all others. It was impossible to shift grain from one province to another - roads were almost non-existent and effective means of transport lacking. At such times the peasant farmer turned to the forest for his bread.
As far back into the past as research can probe, bark bread formed part of our ancestors' diet, and for long periods at a time it was the daily bread on their tables. The 1200's were among the most catastrophic in medieval history. A severe winter and icy temperatures joined forces with famine. In the harsh northern climate the winters were the testing time of people.
In the first years of the 1440's Finland was smitten by repeated crop failures. The Finn's main source of bread - rye - was ruined by drought or frost. Dry weather in spring and summer reduced the sowing, and early frosts in the autumn hindered the harvest work. Many farms were deserted as a result of those hard years.
During the years 1596-98 our forefathers were plagued by what is the longest famine in historical times. In 1596 the people who lived wholly on the fruits of the soil had been overwhelmed by a general crop failure. Spring had come early that year. Sowing could take place in favorable weather and everything pointed to a good harvest. But then came a tremendous delayed spring flood, drenching the fields and submerging them so long that the seed was ruined. Spring and summer brought a lasting wetness. Day after day the heavy rain fell. Clothes rotted on the farmers' bodies as they worked in the wet weather. No dry hay could be brought in. It rotted and turned moldy in the barns. And the cattle, affected by the ruined fodder, sickened and died by the hundreds. The meat could not even be used to feed dogs and cats.
As autumn came and supplies ran out, no new grain could be harvested. The bins and pork barrels were empty. People looked for every possible substitute for their normal diet. They ate bark, buds, leaves, husks, nettles, hay, straw and roots. They ground up bones for flour. Their bodies became weak. People became too limp to do heavy work. They collapsed as they worked.
During that winter and the following spring, innumerable victims starved to death. Unburied corpses could be found everywhere, indoors and outdoors, in barns and sheds, on roads and paths. Bodies lying out in the open were eaten by stray dogs. Corpses were found with fistfuls of hay and tufts of grass in their mouths.
People stole everything edible they could find. There are written records in Finland of hungry little children who chewed up their own fingers. A disease known as "blodsot" (blood sickness), caused by malnutrition, brought many people to the grave.
Two more years of crop failure followed. All parts of the country were affected. Bark bread became the daily bread. For bark bread, the membrane immediately under the rough bark was used. This inside layer was scaled off with an iron scraper. All members of the family went out and scraped the trees. Then began the process of preparing the bark.
First it was hung up to dry in the open air; then beaten or crushed; then ground into flour. The bark membrane is very thin, and many trees had to give up their skins before there was enough to make dough. Time and patience were needed. The bark was collected in summer, before the end of July. Thereafter the trees might dry up and wither away. The pine tree was usually chosen - and was the most beautiful.
Bark bread was more frequently eaten in Finland than in any other Nordic country. Villages were more isolated, and help was out of the question. Even during the 20th century wars, when food was rationed, the Finns mingled bark in their bread. The bread tasted sour to the tongue and had no flavor.
Our ancestors also used other ingredients for their famine bread. They chopped up husks and straw and used mosses of all kinds. During the many famine years, bark bread has saved numerous people from starving to death.
© June Pelo
As the twentieth anniversary of the Rwandan genocide approaches, the international community once again faces questions about the definition of genocide, how and when to intervene to protect civilians and at what cost. The enduring images of human cruelty and brutality that the genocide produced illustrate how hate speech and ethnic violence, combined with international inaction, can spiral out of control. In the space of 100 days, 800,000 people were killed, 200,000 were raped, and two million people fled the country.
Both during and after the genocide, major powers were criticised for their inaction. As well as failing to strengthen the size and mandate of the United Nations peacekeeping force, it took weeks for major powers to decide whether the massacres should be classed as genocide. In the meantime, the killings spread far beyond the capital to the centre and south of the country.
After the genocide, leaders of national governments and international institutions expressed their regret at their responses to the crisis in Rwanda. In 1998, US President Bill Clinton apologised for not acting and in 2004, UN Secretary-General Kofi Annan said he personally could have done more to stop the genocide. In 2005, UN member states unanimously agreed that they had a collective ‘responsibility to protect’ those threatened by genocide and other mass atrocity crimes.
Yet, here we are again. Since December 2013, the situation in the CAR has spiralled out of control. Today, vigilante Christian militias, known as anti-balaka, are carrying out daily attacks on the Muslim minority. The Muslim minority, about 15% of the country's population, has been targeted by Christians who suffered when the predominantly Muslim Seleka rebels overthrew the government in March 2013. According to Human Rights Watch, Seleka rebels committed mass atrocities during and after the coup. Despite the disbanding of the rebel group in September 2013 by its leader and former CAR president Michel Djotodia, executions, rapes and looting continued. Thousands of people have been killed by the anti-balaka, 650,000 have been displaced, over 290,000 have fled to neighbouring countries, and 2.2 million – nearly half of the population of the CAR – need humanitarian aid.
Navi Pillay, the UN High Commissioner for Human Rights, recently described the situation in the CAR as “dire”, with the inter-communal violence remaining at a “terrifying level”. The UN’s refugee agency has described the situation as a “humanitarian catastrophe of unspeakable proportions. Massive ethno-religious cleansing is continuing”. Levels of hate speech and propaganda are high, and comparisons have been made between the press in the CAR and Radio Mille Collines, the primary radio station fanning the flames of genocide in Rwanda. Children have been decapitated, victims have been mutilated and burnt, and there are reported cases of killers eating the flesh of their victims. Pillay stressed that although the CAR has received international attention, it is far from commensurate with the needs.
It is clear that the current 6,000 African-led International Support Mission to the Central African Republic (MISCA) peacekeepers and the 2,000 French troops of Operation Sangaris cannot on their own protect the civilian population in the country. The UN Secretary-General Ban Ki-moon has pushed for the deployment of peacekeepers since November 2013, when he told the Security Council that a UN force of up to 9,000 troops and 1,700 policemen would be needed. Since then, he has increased the request to 12,000 and called on the international community to deploy within weeks. However, deploying a Security Council-mandated peacekeeping force will take at least six months and is not expected until September 2014 at the earliest. As a stop-gap measure, on 10th February, the EU committed itself to the rapid deployment of 1,000 soldiers, in addition to supplies and equipment. Despite repeated requests from high-level officials, including EU foreign policy Chief Baroness Catherine Ashton, the troops have not materialised.
Some of the hesitation is linked to the cost of funding a peacekeeping mission. The cost of a 10,000-20,000-strong peacekeeping force can, based on precedent, rise to $1 billion a year. However, it is possible that the economic, humanitarian and development impact of international inaction would cost more. Amid warnings from the UN that the proliferation of hate speech and the collapse of law and order are likely precursors to grave human-rights abuses, including genocide, it remains to be seen whether the international community has indeed learned from its mistakes in Rwanda. Time will tell if the UN and its most powerful member states can move away from declaring regret over actions not taken and give meaning and sincerity to the ‘responsibility to protect’ that is so often espoused.
Hanna Ucko Neill is Global Conflicts Analyst at the International Institute for Strategic Studies (IISS).
The winter flounder, Pseudopleuronectes americanus (also known as black back or lemon sole), is a valuable commercially and recreationally fished flatfish. Native to western Atlantic waters, winter flounder are a common North American flatfish inhabiting sandy or muddy bottoms between Newfoundland, Canada and Georgia, USA. Adults prefer water temperatures of 12-15°C; they live inshore in the fall and winter, spawning in relatively shallow waters in the spring or early summer, then often (although not always, depending on food availability, water temperatures and possibly light intensity) migrate offshore for the warmer months, where they are found to depths of 50 fathoms. Unlike the eggs of other flounder species in the same area, winter flounder eggs sink to the bottom, usually in clusters.

The winter flounder is a right-eyed ("dextral") flatfish that typically grows to 3-4 pounds and 58 cm (larger on Georges Bank) and lives up to 15 years. Small-mouthed, winter flounder feed opportunistically on small invertebrates and larval fish, using primarily vision to locate food. Young and adult flounder are sensitive to waters high in sediment, which restricts feeding, and the species is vulnerable to human activity because it spends much time in shallow water.

Winter flounder are managed as three distinct natural groupings in the US (Gulf of Maine, southern New England-Mid-Atlantic, and Georges Bank) and three in Canadian waters (western Scotian Shelf, eastern Scotian Shelf, and the southern Gulf of St. Lawrence). The species has been heavily harvested; NOAA reports that the US fishery has severely declined and, although the most recent assessment has a high degree of uncertainty, all three US stocks are likely overfished.
(Bigelow and Schroeder 2002; Decelles and Cadrin 2011; Hendrickson, Nitschke and Terceiro 2006; NOAA Fishwatch; Pereira et al. 1999; Wikipedia 2011)
Discover the cosmos! Each day a different image or photograph of our fascinating universe is featured, along with a brief explanation written by a professional astronomer.
2012 February 8
Explanation: This moon is shining by the light of its planet. Specifically, a large portion of Enceladus pictured above is illuminated primarily by sunlight first reflected from the planet Saturn. The result is that the normally snow-white moon appears in the gold color of Saturn's cloud tops. As most of the illumination comes from the image left, a labyrinth of ridges throws notable shadows just to the right of the image center, while the kilometer-deep canyon Labtayt Sulci is visible just below. The bright thin crescent on the far right is the only part of Enceladus directly lit by the Sun. The above image was taken last year by the robotic Cassini spacecraft during a close pass by the enigmatic moon. Inspection of the lower part of this digitally sharpened image reveals plumes of ice crystals thought to originate in a below-surface sea.
Authors & editors: Jerry Bonnell (UMCP)
NASA Official: Phillip Newman. Specific rights apply.
A service of: ASD at NASA / GSFC & Michigan Tech. U.
Everyone loves “free”, and everyone loves wireless Internet access. Combining the two, what could possibly go wrong?
When talking about broadband Internet, we often hear about the cursed “last mile” thrown around. Though it’s not necessarily a true mile, the term represents the wiring that connects an individual subscriber to the closest “central office” (or CO). The CO is a facility that takes all the incoming residential, business, and government lines, and connects them to a much faster network, capable of handling significantly more data at much higher speeds. Upgrading the connection from the CO to the next step along the line isn’t cheap, but it’s very inexpensive when compared to upgrading all the lines from the CO to the individual subscribers.
Some people use traditional copper phone lines to deliver data over DSL, ISDN, T1, or a similar technology. Others get their data access provided over coaxial cable lines. The lucky ones have fiber run right to their home or office. There are even some who get their data delivered over the air via proprietary wireless networks.
There is one other way to handle the “last mile”, and many see it as the perfect solution: municipal WiFi.
Many people are enamoured with the idea of “free WiFi”. Let’s be frank. It’s not free at all. A wireless router costs money. The cabling to that router costs money. The data access to that router costs money. The electricity needed to run it costs money. The weather-proof housing that it’s enclosed within costs money. Installing it costs money. Maintaining it costs money.
Since WiFi has a relatively short reach, you’ve got to multiply all of those costs by each node on the network. Running the numbers, that’s usually several hundred dollars per node for the initial installation. Maintenance and the cost of supplying the data and electricity are variable — but they’re not cheap.
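To make the point concrete, here is a rough back-of-the-envelope sketch in Python. Every figure in it (coverage radius, install cost, monthly upkeep) is an illustrative assumption, not data from any real deployment.

```python
import math

def nodes_needed(area_km2, node_radius_m=100.0):
    """Rough count of WiFi nodes to blanket an area, assuming each node
    covers a clean circle of the given radius (real deployments need
    overlap and extra nodes for obstructions, so this is a lower bound)."""
    node_area_km2 = math.pi * (node_radius_m / 1000.0) ** 2
    return math.ceil(area_km2 / node_area_km2)

def total_cost(area_km2, install_per_node=500.0, monthly_per_node=50.0, years=5):
    """Installation cost plus ongoing data/electricity/maintenance per node."""
    n = nodes_needed(area_km2)
    return n * (install_per_node + monthly_per_node * 12 * years)

# A modest 10 km^2 town under these hypothetical figures:
print(nodes_needed(10.0))        # 319 nodes, as a lower bound
print(round(total_cost(10.0)))   # roughly $1.1 million over five years
```

Even under these generous assumptions, the bill runs to seven figures before a single resident connects, which is the money that "free" has to come from.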
Nothing is free. Those who tell you differently are trying to sell you something.
The other side of the argument is that access is free, not the costs behind it. So ask yourself the question: how does something that’s “free” get paid for?
If you want WiFi in your home or office, it’s you who has to pay for everything. Those costs don’t go away when municipalities provide them. Instead, they pay for them through things like taxes, fees, surcharges, levies, and other fancy government terms that essentially mean you don’t have a choice. Whether you use it or not. Whether you buy your own or not. You have to pay for someone else to access the Internet for “free”.
Some businesses offer “free WiFi” to their customers. Sometimes even editors at Pocketnow park at such places to bring high-quality news, articles, and editorials to you, our readers. These businesses may tell you that it’s “free WiFi”, but what they really mean is that “access to our public WiFi network is included with the cost of your meal”. Again, the costs have to be covered somehow.
Most of the businesses around me that offer WiFi to their customers all suffer from a few issues. First, they typically put you behind a web-based portal through which you must agree to terms and conditions of acceptable use. This covers their collective keisters, but means that although your status indicator may show that you’ve got “full bars”, you might actually have no Internet connection. If you’re waiting for someone to Skype with you and the call never comes, it might be because you’re not really online. Oh sure, you’re connected to the network, just not to the Internet — and you have no way of knowing until you try and access the Internet.
Next is performance. When I go to lunch to escape the monotony that is the life of being a Webmaster (my other job), I often like to open up Netflix and catch an episode of Dirty Jobs with Mike Rowe, or re-runs of Firefly. That’s a perfect use of a restaurant’s WiFi! I seek out those establishments that offer “free WiFi”, and buy their food so I can watch my shows.
Most of the time the experience is lacking. Shows stutter and buffer. Video quality is “low” — and that’s putting it kindly. Running Speedtest.net quickly shows the problem: their speed is slow. How slow? I live in Northern Utah. Cellular data speeds out here aren’t bad, but they’re nothing particularly noteworthy. Cellular data speeds are usually at least double the speeds that I can get over a restaurant’s WiFi. That’s sad.
Stop and think about that for a moment. These food joints have a vested interest in making sure they are attracting customers into their establishment. If they don’t, those people will go next door or down the street. They also have a fixed number of people who can visit at any given time. You’ve seen those “Maximum Occupancy” signs they have to post? Ironically, that’s the maximum number of people that can occupy the business at any given time. In other words, the business knows what the maximum number of people is that could possibly use their WiFi at any given time — and they still fail to provide adequate coverage and speed.
Compare this to a city. Although the City may know how many residences and businesses are under a particular bubble, there’s no way to know what the maximum number of people under that bubble could be, let alone to account for it. What’s even worse is that, unlike getting up and going to the restaurant next door, you can’t easily pick up and move to the next town when your municipal WiFi isn’t up to speed.
If we haven’t made the case already, let’s look at security. WiFi is a pretty widely known and used standard. It’s not all that secure. The original security protocol, “Wired Equivalent Privacy” or WEP for short, isn’t secure at all. It’s easily hacked.
WPA is much the same. WPA2 is quite a bit better, but what you really need is a Radius server and the highest encryption available. Remember, this is your personal information that’s literally flying around the air, and anyone with an antenna can pick it up. Whether or not they can read it depends on how much security your smartphone or tablet and the access point provide.
We do our banking online. We share pictures of our children online. We post our vacation notes online. We store our calendar online. We do our shopping online. This is very private data that we don’t want to broadcast to the world (well, except that we do, we just call it “Social Networking”, but that’s another topic entirely). Security is of paramount importance, but because setting up a secure encryption key is so “difficult” or “inconvenient”, many WiFi networks are run wide open, ready for anyone to eavesdrop on our most intimate conversations.
Are you the kind of person that turns off the geotagging feature of your phone’s camera to help maintain your privacy? Do you turn off your GPS because you don’t want websites and who knows who else to know where you are? Yeah, me too. Unfortunately that’s not going to do it! Cell towers have been used since long before smartphones were around to triangulate our positions. Using signals from as little as one node on one tower “they” could know roughly where we were. Add more nodes on more towers and this data got even more accurate.
Cell towers cover a wide area compared to WiFi. Once “they” know which WiFi node you’re connected to, your location is much more accurate than trying to figure that out from cellular data. They’re doing it today — and telling us it’s a “feature”. It is a feature. WiFi location finding is usually much faster and significantly less costly when talking about power consumption on our mobile devices. Even if we have WiFi turned off, modern operating systems usually have a feature to “peek” at WiFi every once in a while to get a rough location without hammering our batteries. Why do they do this? What’s around you is much more relevant to you than stuff that’s half a continent away. It’s a “feature”!
Even still, it’s your location. It’s your movements. It’s your data, and “they” need to respect that — but you need to know what you’re giving them to help “pay” for “free”.
Once municipal WiFi is a reality, just look at what law enforcement will be able to do. Let’s say there’s a robbery in town and someone dies. The police know from your device’s MAC address (which can be spoofed, by the way) that you (or someone pretending to be you) were in the area. The perp is roughly your height, your weight, and has the same color skin as you. Guess what, you just became a suspect.
The next day police officers will show up at your work, handcuff you in front of your coworkers and, after having shown their warrant to your boss, take you “downtown”. You’re not going to get paid for those hours that you’ll spend being “interrogated”. Your coworkers will pass judgement on you, and you may lose your job. Ultimately, you’ll be released because you have an alibi, but even though you weren’t imprisoned, damage has already been done. Why? Because your phone was near the scene of the crime. Not you… your phone. No one needed to see you there. In fact, no one did.
The other side of that argument is “if it helps solve just one crime” or helps put “just one criminal behind bars” isn’t it “worth it”? No. No, it’s not. One of the reasons these United States have our Fourth Amendment is to ensure security from unreasonable searches and seizures, to enshrine our Right to Privacy in an Amendment to our Constitution — the document that is the very basis of our entire government. So while we don’t want to see a violent person roaming the streets, taking away our privacy in order to do so is too high a price to pay. We’ll all be suspects all the time, just like the Salem Witch Trials. We don’t want to repeat those dark days.
Let’s say you’re looking for a new backpack for the camping trip you’re going to take this summer. Your spouse also wants you to get some canning supplies so this year you’ll be able to save some of those apricots from grandma’s old tree in the backyard. Guess what? You just looked for “big bags” and “pressure cookers”. You know what that makes you? You’re a terrorist, just like the ones who carried out the attack in Boston.
Congratulations! Your home will be visited by some very heavily armed men who will break down your door, scare your wife and kids, and may end up killing you in the process because you didn’t know what was going on and were somehow “resisting”.
When a government agency has access to your browsing information, they’ll concoct all kinds of crazy ideas. They’re only doing it to “protect the children”, so it’s okay, even if a few innocent people get killed along the way, right? No! It’s not.
I know, I know. This sounds crazy and could never possibly happen, but it already has. We’re seeing stories in the papers increasing every day where over-zealous law enforcement agencies use some online data to justify a raid of an innocent person — and people get hurt or killed.
Municipal WiFi is a bad idea
Hopefully we’ve covered the major points behind why municipal WiFi is a bad idea. There are many, many more, but these are a few of the biggies. You don’t have to agree with our conclusions, but we hope that we’ve opened your eyes to what could potentially happen so you can take whatever measures you feel are right to protect your data, yourself, and your loved ones.
Now it’s your turn. Do you have other concerns that we didn’t include? Are we being paranoid and need to add another layer to our tin-foil hats? Perhaps you’re glad that someone finally had the nerve to say what you’ve been thinking. Whatever your opinion is, we want to hear it! Head down to the comments and let us know what you think about municipal WiFi. | <urn:uuid:3542f74c-dc4c-45c6-a8d9-22d89d531672> | CC-MAIN-2016-26 | http://pocketnow.com/2014/05/08/municipal-wifi | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783404405.88/warc/CC-MAIN-20160624155004-00076-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.959769 | 2,806 | 2.703125 | 3 |
Exercise and sports nutritionist Dr Mahenderan Appukutty from the Faculty of Sports Science and Recreation, Universiti Teknologi MARA, Shah Alam, shares what athletes can do to fuel themselves for better performance.
Do: Consume a Balanced Diet
There are three essential macronutrients for runners:
1. Carbohydrates are converted to glucose and then stored in your muscles as glycogen. This gives you energy, but it gets used up quickly – in the first 20 minutes of running! After this, your body will turn to its fat stores for energy instead.
2. Protein is essential to develop muscles and maintain healthy tissue. Make sure you consume enough protein in a balanced diet; after running, taking protein in the form of milk or protein drinks will help your body repair wear and tear of muscles and aid recovery.
3. Fat is needed to fuel exercise too, as well as for other body requirements. Fat serves as the primary source of energy at rest and during low-intensity exercise. If your body doesn’t contain enough fat, says Dr Mahenderan, it will use up the carbs quickly and burn protein instead, which is needed for healthy growth and regeneration of muscles. It is important to know the amounts and types of dietary fats found in foods.
Don’t…neglect any of these macronutrients. Macronutrients need to be balanced at all times:
- Higher carbohydrate/protein intake typically means lower fat intake
- Severe restriction of fat intake is not recommended
- Dr Mahenderan says the average breakdown should be 70% carbohydrate, 15-20% protein, and the remaining 10-15% fat. The exact balance also depends on the type of sport and the training cycle, which calls for individual nutrition advice.
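As a sketch of that arithmetic, the snippet below converts a daily calorie target into grams using the 70/15/15 split quoted above and the standard energy densities of 4 kcal/g for carbohydrate and protein and 9 kcal/g for fat. The 2,500 kcal example target is purely illustrative; any individual plan should come from a qualified nutritionist.

```python
# Standard energy densities: 4 kcal/g for carbohydrate and protein, 9 kcal/g for fat
KCAL_PER_GRAM = {"carbohydrate": 4, "protein": 4, "fat": 9}

def macro_grams(daily_kcal, carb_pct=0.70, protein_pct=0.15, fat_pct=0.15):
    """Turn a daily calorie target and a macronutrient split into grams per day."""
    split = {"carbohydrate": carb_pct, "protein": protein_pct, "fat": fat_pct}
    assert abs(sum(split.values()) - 1.0) < 1e-9, "split must total 100%"
    return {name: round(daily_kcal * pct / KCAL_PER_GRAM[name], 1)
            for name, pct in split.items()}

# Example: a hypothetical 2,500 kcal training day at a 70/15/15 split
print(macro_grams(2500))  # {'carbohydrate': 437.5, 'protein': 93.8, 'fat': 41.7}
```

Adjusting the percentage arguments lets the same helper model the 15-20% protein and 10-15% fat ranges mentioned above.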
Do: Plan for Fuel Before and During the Race
Carbohydrate-containing foods have different effects on blood glucose levels. Foods with a low glycaemic index (GI) cause a slower release of glucose into the blood, whereas foods with a high GI raise blood glucose quickly. It has been suggested that low-GI foods are useful in the pre-event meal because they release glucose steadily during exercise, maintaining blood glucose levels for a longer period. Dr Mahenderan recommends a low-GI meal approximately 4 hours before a run or exercise and a lighter snack about 1-2 hours before. This allows sufficient time for the body to convert and absorb the energy it needs. Avoid fatty foods before running or exercise, as they slow digestion. During the run, use gels and sports drinks, as they are absorbed more quickly. As exercise intensity increases (as a percentage of VO2max), the percentage of energy provided by fat metabolism decreases and the percentage from carbohydrate metabolism increases.
Don’t… overlook the importance of planning for nutrition and energy before and during the race; your strategy should be fine-tuned during training, as it is not advisable to try anything new during a competition.
Do: Hydrate and Replenish
Everybody has different hydration needs; Dr Mahenderan recommends weighing yourself before and after training or running to see how much fluid you’ve lost and estimate how much you need to replace. In addition, an electrolyte drink is an important source of fluids and energy while exercising or running; the carbohydrates found in sports drinks help to replenish your body’s energy supply for better performance. For long training sessions or marathons, refuel at regular intervals; when the weather is particularly hot, increase your fluid and electrolyte intake to compensate for increased sweating.
Don’t… wait until you are thirsty or tired before refuelling; by the time you feel thirsty, you are already dehydrated!
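The weigh-in method Dr Mahenderan describes can be sketched in a few lines of Python. The assumption that 1 kg of body-mass change corresponds to roughly 1 L of fluid is standard; the 1.5x replacement factor is a common rule of thumb, not a figure from this article.

```python
def sweat_loss_litres(pre_kg, post_kg, fluid_drunk_l=0.0):
    """Estimate sweat loss from before/after weigh-ins, assuming 1 kg of
    body-mass change is roughly 1 L of fluid, and adding back anything
    drunk during the session."""
    return (pre_kg - post_kg) + fluid_drunk_l

def replacement_litres(loss_l, factor=1.5):
    """Fluid to drink afterwards; the 1.5x factor is a rule of thumb
    to cover ongoing urine losses during recovery."""
    return round(loss_l * factor, 2)

# Hypothetical session: 70.0 kg before, 68.9 kg after, 0.5 L drunk en route
loss = sweat_loss_litres(70.0, 68.9, fluid_drunk_l=0.5)   # about 1.6 L lost
print(replacement_litres(loss))                           # 2.4
```

Tracking this over several sessions in similar weather gives a personal baseline, which is more reliable than waiting for thirst.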
Do: Educate Yourself
Know what you are consuming: read the nutrition labels and work with a qualified nutritionist or dietician to customise a plan that’s suitable for your specific needs. For example, all isotonic drinks provide electrolytes; key words to look out for are potassium, sodium, and carbohydrates for energy. Dr Mahenderan also urges active individuals to meet their micronutrient requirements, such as calcium, which supports muscle contraction and helps prevent muscle cramps, and iron, which assists the body’s ability to transport oxygen and supports a healthy immune system.
Don’t… rely on unproven remedies or products without scientific evidence.
Do: Take Time for Recovery
The importance of recovery nutrition depends on the type and duration of exercise just completed.
Immediately after a race, within 60-90 minutes, help your body to heal with protein and carbohydrates; this allows your body to repair the tissue wear and tear resulting from prolonged exertion. Recovery nutrition mainly serves to replenish the fuel (glycogen) stores used during the training session or competition, deliver protein to assist with muscle repair and synthesis, and restore fluid and electrolytes lost in sweat. Dr Mahenderan recommends milk as a wonderful and wholesome recovery food, as it contains the protein, carbohydrates, fat and micronutrients your body needs post-race. A recovery plan should be periodised and tailored to meet individual goals.
Don’t… force your body to start training immediately after a run. Even experienced runners who are fit and in good health experience fatigue for several days afterward; taking a week to rest will provide a physical and psychological break before you begin training again. Rest is very important: too few rest and recovery days can lead to overtraining syndrome, a difficult condition to recover from.
Why do some letters have crowns (Keter)? What is the significance of the crowns on some Hebrew letters?
Every letter in the Hebrew alphabet contains within it certain spiritual powers. The ‘tagin’ (the small lines going up above the letters) represent spiritual powers that are associated with the letter, but are above the letter, rather than in the letter. In the case of the letters with three lines sticking up, the crowns that you refer to, they indicate a great amount of that power being outside of the letter, rather than an integral part of it.
Asia-Pacific Forum on Science Learning and Teaching, Volume 5, Issue 2, Article 8 (Aug., 2004)
Winnie Wing Mui SO
Assessing primary science learning: beyond paper and pencil assessment
Different strategies for the assessment of science learning
The assessment strategies discussed here are mainly continual assessment strategies that allow teachers to understand the progress of their pupils. These strategies have common features that differ from those of traditional strategies. First, they are less judgmental and more descriptive in the information that they provide to both teachers and learners on avenues for improvement. Second, they are not concerned solely with correct or incorrect answers, but place more emphasis on how well pupils perform. These strategies provide a general picture of what pupils understand, what they are able to do, and how they apply the knowledge that they have learned.
There are a variety of strategies and opportunities for teachers to choose from in measuring the progress of individual pupils in different aspects of science learning, some of which are more appropriate than others, depending on the area of science being covered and the age range of the pupils (Hollins & Whitby, 1998). The assessment strategies that are available to assess the science learning of primary pupils include performance-based assessment in science projects and investigations, science journal writing, concept maps, portfolios, and questions and answers. Hughes and Wade (1996) suggest that it is important that a variety of methods be used, because pupils may demonstrate their abilities differently with different approaches. For example, some pupils may perform better in "public" tasks such as oral discussion, and others may do better in "private" tasks, such as writing.
a) Performance-based assessment in science projects and investigations
The message that science is not only a body of knowledge but also a way of working seems to have reached teachers, but has not yet trickled down to pupils (Goldsworthy & Feasey, 1994). Although the processes of science are stressed, the continuous emphasis on subject knowledge in assessment has not allowed pupils to grasp the equal importance of science knowledge and science processes.
Science investigations and projects require pupils to explore science issues that they are interested in, or to apply science knowledge in designing things or finding ways to solve problems in everyday life. Diffily (2001) suggests that any science topic can become the focus for an investigation or a project. Any group of elementary pupils can learn to come to a consensus about a topic to study, conduct research, make day to day decisions about locating resources, organize what is being learned, and select a way of sharing with others what they have learned. Farmery (1999) recommends that investigations should be chosen carefully for primary school pupils. They should be adequately resourced, be easily adaptable, and be relevant to the curriculum so that they are assessable.
So and Cheng (2001) find that the multiple intelligences of Hong Kong primary pupils are developed through science projects. Active participation in science projects can help to sharpen the observation and thinking skills of pupils, cultivate their creativity, strengthen their exploration and analytical skills, facilitate their understanding of the relationship between science, technology, and society, and promote their desire to invent and explore.
Reinhartz and Beach (1997) suggest that it is often helpful to develop a set of criteria, or a grading rubric, for the evaluation of the responses and performance of pupils with performance-based assessment tools. The two performance-based assessment rubrics that are suggested in Demers' (2000) article are here merged to provide a clear picture of how the progress of pupils in observation skills, classification, and other areas of performance might be assessed (Table 1).
Highest level of performance
- Process of inquiry: Observations show evidence of careful study using multiple senses when appropriate. Descriptions contain intricate details. Classification systems clearly reflect careful observations made. System includes all samples provided.
- Evidence of inquiry: Questions are clearly identified and formulated in a manner that can be researched. Evidence and explanations have a clear and logical relationship. Methods of study generate valid data that addresses the question. Variables are controlled. Conclusions are based upon results and clearly explained.
- Depth of understanding: Scientific information and ideas are accurate and thoughtfully explained. Patterns and trends are identified, discussed, and extended through interpolation and extrapolation.
- Communication: Scientific information is communicated clearly and precisely and may include expressive dimensions.
- Presentation: Presentation is effectively focused and organized. Sentences are both complex and grammatically correct. Core words are spelled correctly. Punctuation is used appropriately. Script is neat and easy to read.

Second level
- Process of inquiry: Observations show evidence of careful study but are relegated to one sense. Descriptions are clear enough for samples to be accurately identified by another scientist. Classification systems are based upon the observations made.
- Evidence of inquiry: Questions are clearly identified. Evidence and explanations have a logical relationship. Methods of study generate data that is related to the question. Variables are controlled. Conclusions are based on results.
- Depth of understanding: Scientific information and ideas are accurate. Patterns and trends are identified.
- Communication: Scientific information is communicated clearly.
- Presentation: Presentation is focused and organized. Sentences are grammatically correct. Most of the words, including the core words, are spelled correctly. Punctuation is used appropriately. Script is easy to read.

Third level
- Process of inquiry: Observations reflect the obvious characteristics of samples provided. Descriptions lack intricate detail. Classifications do not necessarily reflect observations made, and may not include all samples provided.
- Evidence of inquiry: Questions are implied. Evidence and explanations have an implied relationship. Methods generate data related to the question. Variables are not controlled. Conclusions are related to the data.
- Depth of understanding: Scientific information has occasional inaccuracies or is simplified. Patterns and trends are suggested or implied.
- Communication: Scientific information has some clarity.
- Presentation: Presentation has some organization and focus. Sentences make sense but may contain grammatical errors. Text includes frequent spelling and punctuation errors. Script is legible.

Lowest level
- Process of inquiry: Observations lack clarity and detail, and are not clear enough to be interpreted by another scientist. Classification system is not based on observable characteristics and does not include all of the samples provided.
- Evidence of inquiry: Questions are unclear or absent. Evidence and explanations have no relationship. Methods generate questionable data. Variables are not controlled. Conclusions are unclear or unrelated to the data.
- Depth of understanding: Scientific information has major inaccuracies or is overly simplified. Patterns and trends are unclear or inaccurate.
- Communication: Scientific information is unclear.
- Presentation: Presentation lacks focus and organization. The number of incomplete sentences and grammatical errors render the text difficult to interpret. Spelling and punctuation errors are prevalent. Script is illegible.
Table 1: Performance-based assessment rubric (Source: Demers, 2000, pp. 27-28)
Teaching for progression in experimental and investigative science is very difficult (Crossland, 1998). Crossland attempts to show how an aide-mémoire, laid out on one side of A4 paper, helps teachers to focus their short-term planning in terms of the curriculum and formative assessment. At the same time, he also shows the "pupil contribution" component, which provides very useful guidelines for the assessment of the progression of pupils in experimental and investigative science (Table 2). In addition, Farmery (1999) explains the development of a model for ensuring progression in experimental and investigative science. Table 3 shows an extract from this model that demonstrates the possible progression in "obtaining evidence" in experimental and investigative science.
Level / Pupil contribution

Level 1:
Observe using senses, talk, and draw
Help the teacher...
Make comparisons between observations and expectations
Respond to suggestions and, with help, make their own...
Use the simple equipment provided ...
Describe and compare ...
Record (in simple tables)...
Compare results with expectations...
With some help, carry out a fair test
Respond to suggestions and make their own ...
Simple predictions that can be tested...
Measure using a range of equipment...
With some help, carry out a fair test...
Record in a variety of ways...
Explain observations and any patterns arising out of their results ...
Say what they found out...
Recognize the need to carry out a fair test through description or action
Make predictions with a reason based on similar experience ...
Recognize the need for a fair test by descriptions or actions ...
Select equipment ...
Make a series of observations/measurements adequate for the task...
Present findings clearly in tables and bar charts...
With help, plot graphs to find patterns and to relate conclusions to scientific knowledge and understanding...
Carry out a scientific test in simple contexts involving only a few factors
Identify the crucial factors...
Prediction based on scientific knowledge and understanding...
Select and use apparatus with care...
Measure and record with care...
Ongoing interpretation of the results...
Present data as line graphs...
Draw conclusions that are consistent with the evidence...
Begin to relate conclusions to scientific knowledge and understanding...
Table 2: Pupils' contribution in experimental and investigative science (Source: Crossland, 1998, p.19)
Obtaining evidence

Level 1: Children use familiar equipment independently (e.g., weighing scales, ruler) and use unfamiliar equipment with support. They make observations of at least one event within each part of the investigation, make measurements with some degree of accuracy, and occasionally repeat measurements to check if they have the same results as someone else.

Level 2: Children use a range of equipment independently (e.g., ruler, tape, weighing scales, thermometer). They understand the need for detailed observations, identify and take relevant measurements, and recognize the need to repeat measurements to check accuracy.

Level 3: Children use a wide range of equipment independently and accurately. They begin to realize the need to make a series of observations, recognize the need to make a series of measurements, and repeat measurements to check accuracy.

Level 4: Children use a range of equipment, including instruments with fine divisions. They make a series of relevant and detailed observations, make a series of measurements with increasing precision, and repeat observations and measurements and begin to offer simple explanations for any differences recorded.
Table 3: Pupils' progression in "obtaining evidence" in experimental and investigative science (Source: Farmery, 1999, p.14)
b) Science journal writing
Pupils record procedures and results from investigations and observations, hypotheses, and inferences about science phenomena (Lowery, 2000). Free writing and drawing can also be used when the concept area involves possible long-term changes, and pupils should make regular observations (Hollins and Whitby, 1998). By creating journals, pupils are able to depict their way of seeing and understanding phenomena through their own lens of experience (Shepardson, 1997). The value of drawing and writing science lies in its potential to assist pupils to make observations, remember events, and communicate understandings (Shepardson & Britsch, 2000). Hollins and Whitby (1998) find that drawings and diagrams in response to a particular question are particularly revealing and informative when pupils add their own words to them, that is, annotations can help to clarify the ideas that a drawing represents.
Science journal writing with writing or drawings captures a dimension of conceptual understanding that is different from other types of assessment. Science journals can serve as diagnostic tools for informing practice, because they convey the understanding of pupils and so provide a window through which to view this understanding (Doris, 1991).
Shepardson and Britsch (1997) examine the ways in which science journals serve as a tool for teaching, learning, and assessment. They also discuss what science journals can say about what pupils are learning (Shepardson & Britsch, 2000). However, Shepardson and Britsch find that journals written by pupils on the topic of mixing and separating five different materials (clay, silt, sand, pebbles, and gravel) give no indication that the pupils have understood why sand and pebbles could be mixed and separated, and only show that it happened. Thus, journal writing might only indicate that pupils have learned the activity but not that they have learned the science. Therefore, Shepardson and Britsch remind teachers to employ multiple modes of assessment.
The ways that are suggested by Shepardson and Britsch (2000) to assess journals are simplified to help teachers to use journals as a meaningful tool for the assessment of the science learning of pupils.
- Determine whether pupils are representing the science activity or an understanding of the science.
- Look for differences between content understanding and science processes.
- Note which medium the primary pupil uses (i.e., drawing or writing).
- Look for details that indicate an understanding of the characteristics of objects or phenomena.
- Look at the ways in which the graphic context indicates the development level of pupils.
- Note the grammatical complexity of the writing.
The assessment logs in Table 4, which have been adopted and modified from Shepardson and Britsch (2000), can be used by teachers to monitor the performance of pupils in journal writing and drawing skills.
Assessment logs Pupil performance
Representing science activity/understanding science
Content understanding/science processes
Drawing/writing/drawing and writing
Grammatical complexity of the writing
Table 4: Assessment logs to monitor the performance of pupils in journal writing and drawing skills (Source: Adopted and modified from Shepardson & Britsch, 2000, p.32)
c) Concept maps
The use of concept maps in teaching and learning was initiated and developed by Novak and Gowin (1984). Concept maps measure or reflect more complex levels of thinking in the same way that science journals, science projects, science investigations, and other performance-based assessment methods do. In comparison with other assessment methods, however, concept maps are quicker, more direct, and considerably less verbal than essays or other types of written work. The visual nature of concept maps helps pupils to organize their conceptual framework (Willerman & MacHarg, 1991). White and Gunstone (1992) note that concept maps portray a great deal of information about the quality of learning and the effectiveness of the teaching. Stow (1997) states that concept mapping is a useful tool to help pupils to learn about the structure of knowledge and the process of knowledge production or meta-knowledge.
The use of concept mapping as an elicitation and assessment tool has been widely discussed (Atkinson & Bannister, 1998). Concept mapping has been shown to allow links to be made between concepts, and thus reveals scientifically correct propositions and misconceptions. The concept maps that are devised by pupils reflect their own ideas and understanding, and so cannot be marked wrong or right (Comber & Johnson, 1995), even if their ideas do not match with what is regarded as scientifically correct. Atkinson and Bannister (1998) have discovered that concept mapping can be a useful assessment tool, even with very young children.
By looking at the maps that were drawn by the pupils in Stow's article (1997), it is possible to see how the pupils' understanding of mapping, and of the water cycle topic that is the subject of the maps, has developed. One pupil drew a fairly well connected map before the investigations (Figure 2) that seems to show a mixed understanding of the concepts involved. After the investigations, the same pupil's map is significantly more sophisticated (Figure 3), and shows a far greater range of connections and a greater understanding of the grammar that is needed to complete the connections. It demonstrates a clearer understanding of the concepts that are involved; for example, evaporation is linked to condensation, and also to the sun. The motivational benefits of the comparison of the two maps and the pupil's self-evaluation of their progress are clear. The opportunity that concept mapping provides for pupils to examine the progress of their own learning is instrumental in the encouragement of meaningful learning. The mapping and subsequent evaluation provide a framework of reference within which pupils can analyze their own thinking, which enables them to identify their strengths and weaknesses and set themselves future learning targets.
Figure 2: A pupil's concept map before carrying out the activity (Stow, 1997, p.13)
Figure 3: A pupil's concept map after carrying out the activity (Stow, 1997, p.13)
Concept maps serve both formative and summative purposes in the assessment of student science learning. Over the past twenty-five years, concept maps have been adopted by thousands of teachers in elementary and secondary schools (Edmondson, 1999). The following are comments from science educators on the advantages of using concept maps as assessment tools.
- The concept map can be read quickly and can show a large body of information in a concise and clear manner (Comber & Johnson, 1995).
- The advantage of concept maps is that they are formative, and can be completed quickly (Hollins & Whitby, 1998).
- Concept maps may be used in classroom activities to provide students with immediate feedback about the depth of their understanding, or to assess learning from specific instructional units that might not otherwise be reflected by paper and pencil tests (Markow & Lonning, 1998).
- Concept mapping is unique in comparison to traditional achievement tests, the limitations of which render them inadequate for tapping certain characteristics of knowledge structures (Hoz, Tomer, & Tamir, 1990).
- Trowbridge and Wandersee (1996) use concept maps to analyze differences in the comprehension of pupils and find concept mapping to be a highly sensitive tool for measuring changes in knowledge structure, and particularly for depicting changes in pupils' selection of superordinate concepts.
- Wallace and Mintzes (1990) use concept maps to document conceptual change in biological concepts, and the concept maps of the pupils in their study reveal significant and substantial changes in the complexity and propositional structure of their knowledge base.
- Concept maps are particularly helpful in representing qualitative aspects of learning. They may also be used by teachers to evaluate learning. They are meta-cognitive tools that can help both teachers and pupils to better understand the content and process of effective, meaningful learning (Edmondson, 1999). Concept maps are tools for representing the interrelationship between concepts in an integrated, hierarchical manner.
Novak and Gowin (1984) suggest that teachers could construct a "criterion map" against which the maps of pupils could be compared, and the degree of similarity between the maps could then be given a percentage score. White and Gunstone (1992), however, argue that although scoring is not helpful for formative assessment, it becomes more sensible when concept maps are used in summative assessment. There are various other schemes for scoring concept maps, but most of them are variations of the scheme that is outlined by Novak and Gowin. Markham, Mintzes, and Jones (1994) modified Novak and Gowin's scheme to include three more observed aspects of concept maps for scoring: the number of concepts, which is evidence of the extent of domain knowledge; concept relationships, which provide additional evidence of the extent of domain knowledge; and branching, which provides evidence of progressive differentiation. Table 5 shows a summary of the schemes that are suggested by Novak and Gowin (1984) and Markham, Mintzes and Jones (1994).
Novak and Gowin (1984): criteria for evaluating concept maps, with scoring
- Validity of relationship: 1 point for each valid relationship
- Levels of hierarchy: 4 points for hierarchy
- Validity of the propositions and cross-links: 10 points for each cross-link
- Use of examples: 1 point for each example

Markham, Mintzes and Jones (1994): criteria for evaluating concept maps, with scoring
- Concept relationships, which provide additional evidence of the extent of domain knowledge: 1 point for each valid relationship
- Hierarchies, which provide evidence of knowledge: 5 points for each level of hierarchy
- Cross-links, which represent evidence of knowledge integration: 10 points for each cross-link
- Examples, which indicate the specificity of domain knowledge: 1 point for each example
- Number of concepts, as evidence of the extent of domain knowledge: 1 point for each concept
- Branching, which provides evidence of progressive differentiation: 1 point for each branching, 3 points for each successive branching
Table 5: A summary of the schemes for scoring concept maps
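To make the mechanics of such point-based schemes concrete, the short sketch below totals a score from per-criterion counts tallied by a teacher. The point values follow the Novak and Gowin column of Table 5 as printed; the function name and input format are illustrative assumptions, not part of any published instrument.

```python
# Illustrative scorer for a Novak and Gowin-style concept map scheme.
# Point values are taken from Table 5 as printed; the names here are
# hypothetical conveniences.

NOVAK_GOWIN_POINTS = {
    "valid_relationships": 1,   # 1 point for each valid relationship
    "hierarchy_levels": 4,      # points awarded for hierarchy
    "cross_links": 10,          # 10 points for each cross-link
    "examples": 1,              # 1 point for each example
}

def score_concept_map(counts, points=NOVAK_GOWIN_POINTS):
    """Total the score for a map from per-criterion counts.

    `counts` maps criterion names to how many instances a teacher
    identified on the pupil's map; missing criteria count as zero.
    """
    return sum(points[criterion] * counts.get(criterion, 0)
               for criterion in points)

# A map with 8 valid relationships, 2 hierarchy levels, 1 cross-link
# and 3 examples scores 8 + 8 + 10 + 3 = 29 points.
total = score_concept_map(
    {"valid_relationships": 8, "hierarchy_levels": 2,
     "cross_links": 1, "examples": 3}
)
print(total)  # 29
```

Following Novak and Gowin's suggestion, a teacher could divide a pupil's total by the total scored by a criterion map to obtain a percentage score.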
There are other suggestions for the scoring of concept maps. Trowbridge and Wandersee (1996) suggest a concept map "performance index," which they describe as a compound measure that includes the pupil's concept map scores, the difficulty level of each map produced, and the total number of maps submitted. Rice, Ryan, and Samson (1998) developed a method of scoring concept maps that is based on the correctness of the propositions that are outlined in a table of specifications of instructional and curriculum goals. They find high correlations between concept map scores and scores in multiple-choice tests that are aimed at assessing the same instructional objectives. Edmondson (1999) suggests that the scores for particular attributes of concept maps could be used as a basis for a comparison of the extent to which different dimensions of understanding have been achieved. The purpose of such assessment is for teachers to make adequate provision for pupils' learning to further develop their understanding.
Although there are many suggestions for the scoring of concept maps, there are also criticisms of these scoring systems. Regis, Albertazzi, and Roletto (1996) therefore suggest a shift in emphasis and focus toward the assessment of changes in the content and organization of concept maps over time.
d) Portfolios

Spandel (1997) asserts that any collection of student work, which includes tests, homework, and laboratory reports, can be included in a portfolio as representative samples of student understanding. Portfolios provide examples of individual student work, and can indicate progress, improvement, accomplishment, or special challenges (Lowery, 2000). Portfolios should be a collection of many meaningful types of materials that provide tangible proof of the progress of a pupil (Reinhartz & Beach, 1997). As part of the portfolio exercise, Buck (2000) has pupils pick out their best work from a unit and describe what the pieces of work reveal about what they have learned. Vitale and Romance (2000) focus on the value of portfolios as measures of understanding in natural science, and further suggest that portfolios might be defined as collections of student work samples that are assumed to reflect the meaningful understanding of the underlying science concepts. They highlight that portfolio activities and tasks are open-ended, and constructively require pupils to use and apply knowledge in ways that demonstrate their understanding of science concepts.
Portfolios are one of the assessment measures that were recommended in the recent curriculum reform in Hong Kong: "The portfolio is used to contain students' evidence of learning. During the processes, pupils make their own judgment and select the artifacts (observation sheets, questionnaire and interview results, art produced, etc.) that best meet the criteria for excellence and personal improvement" (Curriculum Development Council, 2000, p.16). As a form of authentic assessment, portfolios are considered by their advocates to offer teachers a more valid means of evaluating student understanding than traditional forms of testing (Jorgenson, 1996).
Within science classrooms, a wide range of products can be included as work examples in student portfolios. The emphasis should be on products that reflect the meaningful understanding, integration, and application of science concepts and principles (Raizen, Baron, Champagne, Haertel, Mullis & Oakes, 1990). These include reports of empirical research, analyses of societal issues from a sound scientific view, papers that demonstrate an in-depth understanding of fundamental science principles, the documentation of presentations that are designed to foster the understanding of science concepts for others, journals that address a pupil's reflective observations over an instructional time span, and analytic or integrative visual representations of science knowledge itself in the form of concept maps.
Vitale and Romance (2000) suggest the development of guidelines for the evaluation of portfolio assessment products. The evaluation of the portfolio by the teacher should be a clinical judgment with two considerations. The first is the degree to which the relevant conceptual knowledge is represented accurately in the portfolio product, and the second is the degree to which the portfolio product meets the specified performance outcomes, which include the degree to which the relevant concepts are used on an explanatory or interpretative basis by pupils. Thus, there is no need to develop numbered scoring systems or rubrics, because they are not specific enough to provide evidence of meaningful student learning.
e) Questions and Answers
Open-ended questions mimic good classroom strategies and encourage thinking (Lowery, 2000); they help teachers to understand how pupils go about finding an answer, solving a problem, or drawing a conclusion. Hughes and Wades (1996) also suggest that both open-ended and closed questions might be asked to gain information about pupils' investigational abilities. Some examples of open questions are:
- What questions are likely to be asked about the cars and the slopes?
- How did you ensure that you carried out a fair test?
- How do your observations compare with your prediction?
Hughes and Wades (1996) acknowledge the flexible nature of one-to-one or group questioning. These techniques enable supplementary questions to be asked to clarify what was really meant by a child's vaguely worded response, or to verify whether details omitted from a written account were due to forgetfulness, laziness, or a lack of understanding and ability.
However, Black and Wiliam (1998) opine that the dialogue between teacher and pupils that arises when the teacher asks questions is unproductive in the assessment of learning. There may be a lowering of the level of question to facts that require very little thinking time, and the dialogue only ever involves a few pupils in the class. To enable thoughtful, reflective, and focused dialogue between teacher and pupils to evoke and explore understanding, Black and Wiliam (1998) suggest that teachers should:
- give pupils time to respond, ask them to discuss their thinking in pairs and in small groups, and ask a representative to speak on behalf of a small group;
- give pupils a choice between different possible answers and ask them to vote on the options; and
- ask all of the pupils to write down an answer and then read out a selected few.
In addition to questions from teachers, Watts, Barber, and Alsop (1997) assert that the questions of pupils can be very revealing about the way that they think, their worries and concerns, what they want to know and when they want to know it. Gibson (1998) uses a similar technique in his study, but with an emphasis on the answers of pupils to their own questions. The process of asking questions is emphasized, and the construction of meaning should be continuous. Asking pupils to generate questions on a regular basis also shows their development, as the questions really start to probe the big issues, or narrow the topic down to very specific queries. Gibson shows a sample of the range of pupils' answers in his article. The small selection of pupils' explanations clearly shows their thinking, and is possibly even more revealing than their questions. Gibson states that the answers that pupils give to their own questions can be a valuable learning and assessment tool. Although some of the pupils in his study showed a shallower understanding than others, the answers of all of the pupils give an insight into how they are developing. This "Any answer" session that is suggested by Gibson can follow on from sessions that are designed to generate questions, either before or after practical investigations, and will reveal the thinking of the pupils and their ability to make a hypothesis.
Copyright (C) 2004 HKIEd APFSLT. Volume 5, Issue 2, Article 8 (Aug., 2004). All Rights Reserved. | <urn:uuid:f6971441-886f-477e-86d8-116db8f04543> | CC-MAIN-2016-26 | http://www.ied.edu.hk/apfslt/v5_issue2/sowm/sowm6.htm | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397864.87/warc/CC-MAIN-20160624154957-00005-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.940699 | 5,638 | 3.5625 | 4 |
Expected lifetime in an Exponential Distribution
Date: 10/28/1999 at 07:11:16
From: Kevin Williams
Subject: Exponential distribution

The lifetime X of an electronic component has an exponential distribution such that P(X <= 1000) = 0.75. What is the expected lifetime of the component?
Date: 10/28/1999 at 09:35:28
From: Doctor Mitteldorf
Subject: Re: Exponential distribution

Dear Kevin,

They're telling you that the probability has an exponential form. What does that mean? P(x) proportional to exp(-ax) seems to be the logical interpretation. Then we could figure out a from the specification that P(X <= 1000) = 0.75.

But even this is a bit ambiguous. How do we interpret P(x)? Does it mean, for example, that the probability that the part lasts x hours or less is proportional to exp(-ax)? Well, it can't mean that. Obviously, the probability that it lasts (1000 hours or less) is GREATER THAN the probability that it lasts (500 hours or less), because the former subsumes the latter. The sensible way to interpret the equation is that the probability of the part failing at any given moment time = x is proportional to exp(-ax). The reason that the probability goes DOWN with increasing time is that there are fewer and fewer parts left operational that have an opportunity to fail as the time grows longer.

So we'll interpret the question to mean the only thing it can mean sensibly: that the probability of failure at any moment time = x is proportional to exp(-ax).

How do we find a? Well, before we can even think about that, we need to find the constant of proportionality. We can do that by saying that the probability of failure for all times 0 to infinity must add up to 1. Therefore, if p(x) = C*exp(-ax), we can evaluate C by demanding that the integral of p(x) from 0 to infinity should equal 1. Do this first. Solve for C.

The next step is to find a. You can do this, now that you know the constant of proportionality, by taking another integral and setting that integral equal to 0.75. What integral is that?

Now that you know a, you can take the final step and calculate the mean failure time. That is the same as the expected lifetime. To calculate the expected lifetime, we take the integral of x*p(x), again from x = 0 to x = infinity.
This is like saying: take a weighted average of all the times that it could fail (x), using the probabilities associated with those times (P(x)) as the weights. That's a lot of steps. Let me know if you get stuck anywhere along the way; and if not, will you write back and let me know what you got for an answer? - Doctor Mitteldorf, The Math Forum http://mathforum.org/dr.math/
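For readers who want to check their work afterwards, the three steps Dr. Mitteldorf outlines can be carried through numerically. (This worked solution is an editorial addition, not part of the original exchange.)

```python
import math

# p(x) = C*exp(-a*x) on [0, infinity).
# Step 1 (find C): the integral of C*exp(-a*x) from 0 to infinity is C/a,
# so normalisation (total probability = 1) forces C = a.
# Step 2 (find a): P(X <= 1000) = integral of a*exp(-a*x) from 0 to 1000
#                = 1 - exp(-1000*a) = 0.75, so a = ln(4)/1000.
a = math.log(4) / 1000
assert abs((1 - math.exp(-1000 * a)) - 0.75) < 1e-12  # sanity check

# Step 3 (expected lifetime): the integral of x*a*exp(-a*x) from 0 to
# infinity is 1/a (integrate by parts).
mean_lifetime = 1 / a
print(round(mean_lifetime, 2))  # about 721.35 hours
```

So the expected lifetime is 1000/ln(4), roughly 721 hours, which is sensibly less than 1000 given that three quarters of components fail by then.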
Search the Dr. Math Library:
Ask Dr. MathTM
© 1994-2015 The Math Forum | <urn:uuid:495b2b8e-5dfa-4676-b74c-f5360a2d5a66> | CC-MAIN-2016-26 | http://mathforum.org/library/drmath/view/52183.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396875.58/warc/CC-MAIN-20160624154956-00034-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.937592 | 638 | 2.515625 | 3 |
Fact File 2016
Statistics brought alive
Online Expansion Pack
(1 year) £49.95+vat
Have you got easily understandable facts and figures at your fingertips?
Fact File gives students the essential facts behind the issues.
It's an accessible, colourful and user-friendly resource which is easy to use.
All pages are copiable and available to view online, download and print.
For UK & international statistics about:
Britain & its citizens; Charity; Education; End of life; Environment; Family & relationships; Finance; Food & drink; Health; Internet & media; Law & order; Religion; War & conflict; Wider world; Work
"An amazing array of carefully selected, topical social information - ideal for so many teachers & subjects. It is so useful for school librarians to have online access. To be able to search and view the entire archive is wonderful for schools!"
Geoff Dubber, SLA, Education Libraries and Learning, Nov 2015
"An excellent resource with ready to use and easy to understand information. Very user friendly ... it would appeal to a wide variety of teachers – English, maths, geography, science, history, sociology, PE, health and economics. A definite must have for all schools."
TES review 5 STAR RATING
"An assemblage of facts and figures themed for maximum school usefulness. The variety of presentation is terrific."
The School Librarian
The Online Expansion Pack gives you 1 year's access to any printed editions of Essential Articles and Fact File that you have purchased. This means you and your students can search the books in one place and view the contents online.
The Online Expansion pack also includes a site licence allowing you to use the Complete Issues website on multiple computers at once and also allows students to login at home.
For best value become a Complete Issues subscriber today.
If you subscribe to Fact File you will save £10 on the book this year, and in the future you will receive the new volume each year automatically. We are so confident that you will like and use Fact File that even subscription copies are sent out on approval and can be returned.
When you subscribe to Complete Issues, which includes both Fact File and Essential Articles, you get 10% discount off all our resources - click here.
Subscription price: £39.95
For best value: get Fact File as part of Complete Issues! | <urn:uuid:306b7cc3-96bd-46ea-9b6e-561d68f66d56> | CC-MAIN-2016-26 | http://www.carelpress.co.uk/factfile/index.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783402746.23/warc/CC-MAIN-20160624155002-00101-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.889943 | 512 | 2.625 | 3 |
Mammography is a type of medical imaging that uses x-rays to capture images (mammograms) of the internal structures of the breasts. Quality mammography can help detect breast cancer in its earliest, most treatable stages; when it is too small to be felt or detected by any other method.
The two types of imaging currently used for mammography are:
- Screen-film mammography, in which x-rays are beamed through the breast to a cassette containing a screen and film that must be developed. The image is commonly referred to as a mammogram.
- Full-field digital mammography, in which x-rays are beamed through the breast to an image receptor. A scanner converts the information to a digital picture, which is sent to a digital monitor and/or a printer.
Mammography uses x-rays to produce an image of the breast, and the patient is exposed to a small dose of radiation. The Mammography Quality Standards Act (MQSA) established baseline standards for radiation dose, personnel, equipment, and image quality.
The benefits of mammography in detecting breast cancer at an early stage outweigh the risks of radiation exposure. In some cases, early detection of a breast lump may mean that chemotherapy is unnecessary. | <urn:uuid:74efbef1-f5b7-48f4-9924-af72711cafa8> | CC-MAIN-2016-26 | http://www.fda.gov/Radiation-EmittingProducts/RadiationEmittingProductsandProcedures/MedicalImaging/MedicalX-Rays/ucm115355.htm | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396959.83/warc/CC-MAIN-20160624154956-00112-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.940772 | 250 | 4.03125 | 4 |
It started with a trickle, a few thousand families moving across a badly demarcated border. In the summer of 2011, there were 5,000 refugees in Lebanon. Most had moved in with family members, some were staying in mosques or abandoned schools. That their stay would be temporary was unquestionable at the time. But it was not to be.
At al-Ibna school in Wadi Khaled, where 85 people were living, the sign outside had the word “refugees” scribbled out with a key and replaced with “visitors”.
Now Lebanon is hosting over a million refugees. By the end of the year, the UN expects the number to hit 1.6 million. That means Lebanon's population of four million will have swelled by almost 40 per cent. Three years after the start of the conflict, more than nine million people, almost half of Syria's population, are on the move. More than 2.5 million Syrians have streamed across Syria's borders into Lebanon, Jordan, Turkey and Iraq. The UN predicts that if the conflict continues, Syrians will become the largest refugee population in the world.
A map of Lebanon that uses red dots to indicate where Syrian refugees have settled down is a sea of red. And Lebanon’s infrastructure was already straining to supply water and electricity before the tidal wave of refugees hit. Official camps are still banned; the myriad of tents along the roads are referred to as informal tented settlements. The refugees often live in cramped conditions, with up to a dozen people sharing a single room or tent. Medical concerns are rife. The first suspected case of polio among refugees in Lebanon was detected this week.
Many new arrivals have been displaced multiple times within Syria before deciding to leave. “We were displaced four times before we came here,” says Abu Uday, a soldier who defected and moved his family to Lebanon last month. Every time they would move, the bombing would follow them. The family of five now live in a brightly coloured container, part of a new informal settlement in the no-man’s-land between Syria and Lebanon, outside of Arsal. That border town has tripled in size because of the refugees and is full. “I don’t know who is right anymore, the regime or the rebels,” his wife says.
Throughout the region, Syrians are streaming across the border. In Jordan, Zaatari camp with its 100,000 inhabitants is the second-largest refugee camp in the world and the kingdom’s de facto fourth-biggest city. “It’s a temporary city,” says its “mayor”, UNHCR’s Kilian Kleinschmidt, of the sprawling construction in the desert. But how temporary, nobody knows. The camp started out as just tents; now longer-term residents live in containers and people have even built gardens with fountains. The main shopping street is called the Champs-Élysées; there are over 2,500 shops in the camp, including cafés, wedding-dress rental shops and even a nightclub.
Although Jordan has tightened its borders, more refugees keep coming. Azraq camp, which will be able to accommodate 130,000 people, will open on 30 April. The site, situated in the middle of the desert 60 miles east of Amman, has newly paved roads, a police station, two schools and a hospital; families will live in containers. The UN has called the opening “timely” as 600 Syrians are entering Jordan daily. The country is already hosting almost 585,000 refugees; 80 per cent live outside of the camps, mostly in urban areas throughout the country. In Iraq, which is experiencing its own uptick in violence, there are 225,000 Syrian refugees. Turkey, where the government has taken sole responsibility for the camps, has more than 630,000.
Most refugees settle in an urban environment. The work that is available to them is mostly of the manual, badly paid kind. Professionals often find themselves barred from practising their profession. For example, the medical syndicate in Lebanon has closed ranks and charges an admission fee of over £200,000 to Syrians.
As the conflict continues and their savings run out, a growing number of refugees are forced to rely on aid for their survival.
But the regional Syria appeal, which at over £2.5bn is the largest in UN history, has been only 14 per cent funded for 2014. UNHCR has had to cut off a third of its beneficiaries in Lebanon over the past year, at least in part due to funding constraints.
Yet over the past three years, refugees from across the country have been united in one thing. Their dream is to return to Syria, no matter how little remains of the life they used to lead there. | <urn:uuid:d3b634af-2e96-472b-b647-6b6f80f24a63> | CC-MAIN-2016-26 | http://www.independent.co.uk/news/world/middle-east/refugee-crisis-syrians-are-scattered-but-united-by-the-same-dream-9193597.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.2/warc/CC-MAIN-20160624154951-00134-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.977933 | 1,005 | 2.578125 | 3 |
Lily Workneh, HuffPost Black Voices
The State of Black New Orleans report released by the National Urban League on Wednesday gives clear context for the many ongoing challenges black residents of New Orleans face in the quality of their day-to-day lives.
“[It’s a] commemoration and continuation,” Marc Morial, CEO of the National Urban League and the city’s former mayor, told The Huffington Post about New Orleans now. “It’s about recognizing and remembering those who died. It’s about patting on the back the good things that have happened but continuing to recognize all of the issues our report highlights,” he added.
The comprehensive report examines the poor conditions of black life in New Orleans based on data mostly from 2013 and also gives guidance on how to overcome them. It addresses the inadequacies across a series of statistics that disproportionately affect black New Orleanians across categories including median income, unemployment, health care conditions, education and economic and crime levels, among others.
There are 100,000 fewer African Americans living in New Orleans now than there were when Katrina hit, the New York Times reports. Population change or not, many of these issues were concerning challenges to the city’s black residents prior to Katrina’s landfall — and some were only exacerbated in the aftermath of its devastation.
Among the most glaring disparities between the city’s black and white residents the report highlights is that of median household income. In 2005, the median income in New Orleans for African-Americans was $23,000. In 2013, the most recent year with available data, median income had only increased to $25,000. Meanwhile, median income for white city residents in 2005 was $49,000. In 2013, that number jumped to $60,000.
The most recent data shows that 52 percent of black males in New Orleans are unemployed. In comparison, that number stood at 48 percent in 2000.
“This single statistic is potentially the single largest contributor to many of our societal ills,” the report reads.
In 2005, the overall black unemployment was 18.5 percent — for white residents it was 4.1 percent. By 2013, black and white unemployment rates stood at 13.6 percent and 4.6 percent, respectively. (The national unemployment rate as of July stood at 5.3 percent.)
Child Poverty Rates
With income rates low and unemployment rates high, the dots connect to help explain why more than 50 percent of the city’s black children under the age of 18 live in poverty. That percentage equates to more than 45,000 children and has grown since 2005 when the black youth poverty rate was 44 percent.
“The child poverty gap creates a cause for an even greater concern,” Erica McConduit-Diggs, the president of the Urban League Chapter of Greater New Orleans, told HuffPost. “But it makes sense, right? When you don’t have a job, or even for those that do have a job, you’re actually not earning a livable wage, so it affects the kids that are still living in poverty.”
Education in New Orleans is one of the marginally hopeful examples of some of the improvements that have taken place since the storm. Ten years ago, nearly 75 percent of the schools in New Orleans were failing by state standards, according to the report. That number now stands at 7 percent, while the high school graduation rate has increased from 54 percent to 73 percent. The city is now the national leader in black male graduation rates.
Some of this success can be attributed to the $1.8 billion invested in repairing many of the schools that were demolished in the storm. But there have also been quite a few initiatives to help get New Orleans’ youth on track.
Higher education rates for women have increased in the city, too — 21 percent of black females earned bachelor degrees or higher in 2013 (compared with 19 percent in 2005). In contrast, 13.7 percent of black men are earning the same degrees (down from 19 percent in 2005).
“It goes to show you various different dynamics that our kids are experiencing in their households,” Jamar McKneely, CEO and Co-founder of InspireNola charter schools, which represents two faculty-run schools, told The Huffington Post. “They’re just looking for somebody who can believe in them to provide a safe, secure atmosphere. Somebody they can just talk to and… give them a sense of normalcy that, as a black male, as a black female, that we can want more.”
“It’s a whole process of being able to just really expose the youth and get them to kind of see something different, to want more from themselves,” McKneely added. “Otherwise, every day on our news, we’re seeing killings.”
New Orleans’ high crime rate is still deeply concerning. The city has a higher incarceration rate per capita than any other city in the country, and nearly 90 percent of its prisoners are black, according to the report.
Reform efforts are well underway, McConduit-Diggs said. “The city has aggressively taken on the challenge of reducing the murder rate at the same time the community has collectively engaged in a conversation about the adequate and appropriate use of our jail, and making it more constitutional,” she said.
These days, summons are more often issued for petty offenses and the jails are used for more violent offenders. “In 2005 when Katrina hit we had about 6,300 inmates in our prison — that’s a lot,” McConduit-Diggs said. “Today our average daily rate of inmates stay is about 1,900, that’s a drastic reduction.”
Focusing On The Next 10 Years
Half of white New Orleans residents say their quality of life in their local community is better, while nearly half of black residents say it is worse, according to a new study conducted by the Public Policy Research Lab at Louisiana State University.
The 10-year-anniversary marks a moment of reflection for black residents on reform, recovery and the resiliency to continue to bounce back.
“As difficult as Katrina was for us, it provided a platform for us to have more substantive and policy oriented conversations about who we want to be,” McConduit-Diggs said.
“It’s not just a matter of let’s go back to where we were. It’s where do we want to be.” | <urn:uuid:127d2795-c37a-4ce9-ad32-119709c9b32a> | CC-MAIN-2016-26 | http://communityjournal.net/tag/katrina/ | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396029.85/warc/CC-MAIN-20160624154956-00101-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.957723 | 1,375 | 2.765625 | 3 |
As the season starts to change, calves will begin to suffer from the cold sooner than one might think. Even at a temperature of 60 degrees F, cold stress can impact the growth and health of dairy calves.
“At temperatures of 60 degrees F we may be comfortable, but our calves start to divert energy away from growth and immune function to regulate body temperature,” says Dr. Dari Brown, director of livestock young animal marketing with Purina Animal Nutrition LLC. Calves become cold stressed at fairly moderate temperatures because they have a higher surface-area-to-bodyweight ratio than older animals.
As temperatures decrease, energy requirements of the calf start to increase. In fact, for every degree the temperature drops below a calf’s lower critical temperature, the amount of energy needed for maintenance increases by 1 percent.
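The 1-percent-per-degree rule above is simple arithmetic. The sketch below assumes a lower critical temperature of 60 degrees F (the cold-stress threshold the article cites); in practice, the lower critical temperature varies with the calf's age, coat, and bedding.

```python
def maintenance_energy_multiplier(ambient_f, lower_critical_f=60.0):
    """Return the factor by which a calf's maintenance energy
    requirement grows in the cold: +1% for every degree F the
    ambient temperature falls below the lower critical temperature,
    per the rule of thumb above."""
    degrees_below = max(0.0, lower_critical_f - ambient_f)
    return 1.0 + 0.01 * degrees_below

# Example: at 40 F, a calf with a 60 F lower critical temperature
# needs roughly 20% more energy just for maintenance.
print(maintenance_energy_multiplier(40))  # 1.2
```

Warmer weather leaves the multiplier at 1.0, since no extra energy is needed above the lower critical temperature.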
If calves don’t receive the energy they need, resources are diverted from growth. When energy is diverted from growth, calves will not gain weight and they become more susceptible to diseases like pneumonia and scours; calves also could die.
Financial losses also begin to mount from treatment costs, poor growth — which delays age at first calving — and future milk production potential.
Don’t let energy be a limiting factor to your calves’ performance this winter; instead implement a feeding program that supports these increased energy demands. Added energy can be provided to the calf by adding a third feeding of milk replacer (preferably late in the evening) and increasing the amount of starter offered.
Seasonal formulations of milk replacer are now available and calf starter will soon be available; both are designed specifically to meet the needs of calves during inclement weather. | <urn:uuid:88675a75-c80e-4a41-ba34-ad4988a0560d> | CC-MAIN-2016-26 | http://www.dairyherd.com/dairy-resources/calf-heifer/Will-you-meet-your-calves-needs-this-winter-173117321.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395166.84/warc/CC-MAIN-20160624154955-00041-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.928181 | 344 | 2.796875 | 3 |
Charles Cornwallis, 1st Marquess Cornwallis
by Roberts, after James Barry
stipple engraving, circa 1775-1800
4 3/8 in. x 3 1/4 in. (112 mm x 84 mm) paper size
Bequeathed by (Frederick) Leverton Harris, 1927
- James Barry (1741-1806), Painter. Artist associated with 10 portraits, Sitter in 8 portraits.
- Roberts (active before 1841). Artist associated with 1 portrait.
Act of Parliament extends inventor James Watt's patent (first granted in 1769) and the first steam engines are built under it. First known building society, Ketley's Building Society, is established in Birmingham by Richard Ketley, landlord of the Golden Cross Inn.
Art and science
First performance of Richard Brinsley Sheridan's play The Rivals at the Covent Garden Theatre in London. Artist J.M.W. Turner is born. Satirist James Gillray's first engravings and etchings are published. Navigator Captain Cook publishes his discovery of a preventive cure against scurvy, in the form of a regular ration of lemon juice.
War of American Independence begins with British defeat at Lexington and Concord and lasts until 1783. British achieve a narrow and costly victory over the Americans at the Battle of Bunker Hill. Edmund Burke delivers a speech to the British Parliament on conciliation with the American colonies. First performance of Pierre Beaumarchais' comedy The Barber of Seville in Paris. Pope Pius VI succeeds Pope Clement XIV as the 251st pope.
There has always been a reciprocal relationship between technology and film style. The development of different types of lighting equipment and the introduction of new film stocks have both expanded the range of lighting methods and effects available to the cinematographer. Many types of lighting units were first developed for nonfilmic uses, such as street lighting or searchlights. Only later was their potential for producing cinematic lighting effects explored. Although certain styles of film lighting arose in response to technologies that already existed, many other technical innovations were the result of experiments by enterprising cinematographers and gaffers. In some instances, the name of a certain lighting effect has derived from its first use in film. One example is the "obie," a small spotlight that was designed by the cinematographer Lucien Ballard (1908–1988) during the filming of The Lodger (1944) in order to conceal the facial scars of actress Merle Oberon. The history of film lighting is a complex chronicle of intersecting influences involving technological and aesthetic innovations, periods of relative stasis, and the gradual development and refinement of existing techniques.
The lighting techniques used in the early cinema of the late 1890s and the first years of the twentieth century were astonishingly primitive in comparison with those used in still photography. Filmmakers of that era did not adopt the range of artificial lighting that was already standard equipment in photographic studios and widely used by photographers to enhance the aesthetic appearance of their work. Instead, filmmakers relied almost entirely on bright daylight. For this reason, when films were not shot on location they were filmed on rooftop sets, or else in studios built with either an open air design or a glass roof. Thomas Edison's famous Black Maria studio, built in 1892, was based on a rotating structure that allowed its glass roof to be maneuvered to follow the direct sunlight. A greenhouse-like studio built by the French filmmaker Georges Méliès (1861–1938) in 1897 that featured both glazed roof and walls and a series of retractable blinds proved to be an influential model for the design of later studios. The availability of many hours of bright sunlight was so important to early filmmakers that it has often been cited as one of the reasons that the American film industry shifted its base from New York to California (although other reasons, such as the wide range of landscapes California could offer for location shooting, also were important).
The use of daylight as the main source of illumination provided visual clarity. It did not allow as many opportunities to create dramatic effects as artificial lighting did, however. Nor did it permit indoor or night-time cinematography. The first uses of artificial lighting have been traced back as far as 1896, when the pioneering German filmmaker Oskar Messter (1866–1943) opened his indoor studio in Berlin. By 1900 the Edison studio in America had begun to make regular use of artificial light to complement naturally available light. Examples of this practice can be found in Why Jones Discharged His Clerks (1900) and The Mystic Swing (1900). Although the use of artificial lighting was initially confined to replacing or augmenting sunlight in order to provide a clear image, by 1905 filmmakers had begun to explore the creative possibilities of artificial light. In spite of the fact that the technology had long been available, the potential value of harnessing it to further the aesthetic development of film style does not appear to have been recognized in the early cinema.
Two main sources of artificial light were used at this time. One source was arc lights, which produced illumination by means of an electric spark jumping between two poles of carbon. The other was mercury vapor lights, which worked in a way similar to modern fluorescent lighting tubes. These sources allowed the creation of directional lighting, meaning that a chosen area of the set could be lit more brightly than the other parts. As the practical and aesthetic benefits of electric lighting came to be accepted both in America and abroad, some producers adopted it as their primary source of lighting, and the first "dark studio" opened in Turin, Italy, in 1907.
In America, experiments with lighting effects continued, both indoors and out. A range of new techniques were discovered, although no significant technological innovations appear to have been introduced until the 1910s. The director D. W. Griffith (1875–1948) and his cameramen were particularly active in their exploration of lighting effects, which can be found in such films as Pippa Passes (1909), The Thread of Destiny (1910), and Enoch Arden (1911). The last of these is often cited as the film that introduced a significant new technique: the creation of a soft lighting effect on faces by using reflectors to redirect strong backlight. The innovation was claimed by the cameraman Billy Bitzer (1872–1944), although questions have been raised as to whether he was really the first to use this strategy. In the mid-1910s, Griffith also began to make increasing use of high contrast lighting that cast deep shadows across characters and sets. This style had emerged a few years earlier in the Danish and German cinemas. Due to its earlier use by the famous Dutch painter, it is sometimes known as Rembrandt lighting, a term attributed to the Hollywood director Cecil B. DeMille (1881–1959), who used the technique in films such as The Warrens of Virginia (1915) and The Cheat (1915).
During the latter half of the 1910s, filmmakers adopted two significant new techniques, both derived from other art forms. One was the use of carbon arc spotlights, which had previously been used in theater and which allowed a strong light to be directed from a distance onto a particular actor or area of the set. The other was the use of diffusing screens, which already belonged to the repertoire of the still photographer. Diffusers could be used to transform a hard light into a soft light that did not cast such severe shadows. The increasing use of soft lighting techniques, whether they relied on reflectors or diffusers, had particular benefits for facial lighting. Soft lighting produced more flattering effects and, with the rise of the star system during this decade, it was becoming ever more important to make the actors look attractive.
The range of lighting sources that were used in film, and a growing appreciation of their potential to create specific effects, encouraged the development of more sophisticated lighting styles. It became common to use a combination of several lights to create a pleasing aesthetic that flattered the appearance of the actors and the sets as well as serving the film's narrative requirements. One of the best known lighting setups is the so-called three-point system, which was used primarily for figure lighting. The brightest of the three lights was the "key" light, which was directed toward the actor's face from the front-side. If this light were used on its own it would leave one side of the face in virtual darkness and cause the actor's nose to cast a large, unflattering shadow. To prevent this from happening, a second softer light known as the "filler" light was directed at the other side of the face. This light was normally positioned close to the camera, on the opposite side from the key light. It helped to balance the composition, reducing the dark shadows cast by the key light while preserving the facial sculpting. A third "backlight" was positioned behind the actor in order to create a halo of light around the hair. This served to separate the actor from the background and also helped to emphasize the fairness of blonde hair, which did not otherwise show up well on the monochromatic film stock that was used until the late 1920s.
A third type of light that came to be used in conjunction with the arc and mercury vapor lights was the incandescent light, which used a glowing metal filament, much like most modern domestic lighting. The cinematographer Lee Garmes (1898–1978) claimed to have used this type of light as early as 1919, although its first use is more commonly identified in Erich von Stroheim's Greed (1924), which was photographed by Ben Reynolds (c. 1891–1967) and William Daniels (1901–1970). Whatever the case, it was not until the introduction of panchromatic film stock in 1926 that it came into common use, when it was found that the color temperature of incandescents, or "inkies," was better matched to this stock than was that of the arc lights. Studios were quick to embrace the benefits of incandescents, as these lights required less electrical power and less manpower than other forms of electrical lighting. It was widely predicted that their use could halve the cost of film lighting as well as significantly reduce the amount of time spent in setting up and operating lights during the film shoot. A further decisive factor in the wide adoption of incandescent lights was the temporary abandonment of arc lighting with the coming of sound. Filmmakers discovered that the humming noise emitted by arc lights was picked up by recording equipment. Only in the early 1930s, after a way was found to silence them, were arcs reintroduced as a supplement to the incandescents that had taken their place as standard studio equipment.
The wide range of easily governed incandescent spotlights introduced in the 1930s allowed an ever more precise control of lighting effects. Complex systems were designed to ensure that every detail of the image was carefully governed. In his 1949 textbook, Painting with Light , the Hollywood cinematographer John Alton (1901–1996) described an eight-point system for close-up lighting (p. 99). It was based on the three-point system described above but included some extra lights that helped to improve the aesthetic effect. Three were directed at the actors: an "eyelight," which brought out a sparkle in the actors' eyes; a "clothes light," which showed up the details of their costumes; and a "kicker light," which added further definition to their hair and cheekbones and was normally positioned between the backlight and the filler light. Additionally, a "fill light" provided diffused lighting for the entire set while a "background light" illuminated the set behind the actors.
Around 1947 a new lighting aesthetic was introduced that had arisen in response to the techniques used for shooting newsreels during World War II. Shooting combat footage did not allow filmmakers any opportunities to create complicated lighting setups; instead, they had to rely on daylight, or else on a handful of powerful lights that provided a general illumination. The photofloods first introduced in 1940 were ideal for this purpose. Some fictional films began to emulate this rough and ready aesthetic. A wave of documentary-like thrillers ensued, which eschewed such complicated schemes as the eight-point lighting system in the service of greater realism. Many of these, such as Boomerang (1947) and Call Northside 777 (1948), were based on real events and filmed on location.
The 1950s saw a further erosion of the dominance of the lighting techniques that had characterized films of the 1930s and 1940s. One reason for this was the growing popularity of color filmmaking. The range of different hues meant that fewer lights were needed to differentiate between one surface and another. The backlight, which had been used to separate figures from the background plane, passed into near redundancy for a time. It still had other uses, though, one of which was to illuminate rainfall, far more visible when lit from the rear than when lit frontally. Some of the other changes in lighting technique during the 1950s can be attributed to the rapid expansion of television production. Television relied heavily on the use of live, multi-camera shooting on a studio stage. The lighting style that best suited this mode of production was one that offered a bright, even illumination of the whole set. Even though theatrical films continued to light shots with greater individual care than did TV productions, the high-key style associated with television became a widely accepted norm.
Regarded as one of Hollywood's most eminent cinematographers, John Alton is best known for his work in film noir during the 1940s and 1950s. His contribution to more than a dozen noirs helped to define their characteristic style of high-contrast black-and-white photography. Alton was also responsible for some very fine work in color, and he received an Oscar ® for the ballet sequence of the lavish musical An American in Paris (1951). His enduring reputation was cemented further by the publication of his classic textbook Painting with Light in 1949, the first book on lighting technique by a Hollywood professional and still one of the most revealing and readable.
Alton's work is characterized by a tendency to use as few lights as possible, an approach that allowed him to create arresting images both quickly and cheaply. The speed with which he worked and his refusal to follow in the established traditions of lighting technique reportedly made him extremely unpopular with other cinematographers and lighting crew members. Nevertheless, his economical working practices and the innovative effects he achieved made him the cinematographer of choice for such renowned directors as Anthony Mann, Vincente Minnelli, Richard Brooks, and Allan Dwan.
John Alton entered the film industry as an MGM lab technician and soon became a cameraman, working for some years in Europe and then in Argentina before returning to Hollywood. The film that first propelled him to the status of an A-list cinematographer was T-Men (1947), although he had previously racked up well over forty credits. T-Men was the first of his six collaborations with Mann, which would later include Raw Deal (1948) and Border Incident (1949). While it is considered one of the first "documentary-style" noirs, at times Alton's highly stylized lighting aesthetic anticipates his most famous work: The Big Combo (1955).
Like most of the films on which he worked, The Big Combo was a low-budget affair whose apparent production values were greatly elevated by the accomplished lighting technique. Alton's sparse lighting sources sometimes bathed faces in light against backdrops of blackness, or else concealed them in deep shadow. In the final shot, now seen as one of noir's most iconic images, he silhouetted the characters against a dazzling white haze. In this scene, as elsewhere, the set dressing is virtually insignificant since the players act out their parts in a world delimited by little other than darkness and light. For the seventeen-minute ballet sequence of An American in Paris Alton used some of the same techniques including silhouetting and deep shadows. These effects were sometimes used to draw attention away from cuts, producing dramatic results. Throughout the sequence, the rapid shifts between different lighting effects and colors within a single shot are dazzling.
T-Men (1947), Raw Deal (1948), He Walked by Night (1948), An American in Paris (1951), The Big Combo (1955), Visions of Light: The Art of Cinematography (1992)
Alton, John. Painting with Light. Berkeley: University of California Press, 1995. Originally published in 1949. The 1995 edition includes a detailed introduction by Todd McCarthy.
In the 1960s and 1970s further changes in the dominant lighting styles of American cinema derived their main influences from trends in European filmmaking. The films of the French New Wave and, in particular, the work of the cinematographer Raoul Coutard (b. 1924), proved especially influential. Coutard first used his trademark technique of "bounced light" when photographing Jean-Luc Godard's Le Petit Soldat (1963). It entailed directing photoflood lights toward the ceilings of interiors so that a bright, even light was reflected down onto the scene. This technique came to be widely emulated. A contrasting trend of the late 1960s and 1970s saw many color films adopt a darker, more low-key style than had been used in earlier years. This aesthetic was integral to the somber and pessimistic tone of the narratives that flourished in this era, and Bruce Surtees's work for Clint Eastwood can be seen to typify this vogue.
The most significant change of the late twentieth century was the introduction of HMI (hydrargyrum medium-arc iodide) lights. The HMI was a form of arc lamp centered on halogen gas enclosed within quartz, and it had the same color temperature as sunlight. After some initial reliability problems were solved, HMIs became increasingly popular throughout the 1980s. They remain one of the most popular forms of film lighting today, for both indoor and outdoor cinematography, as they are easy to use and consume relatively little power for the amount of light they produce.
At the beginning of the twenty-first century, the advent of digital cinema began to have a significant impact on the lighting requirements for certain types of filmmaking. While most theatrical features continue to be produced on 35mm film, which requires far higher levels of light than does the human eye, digital cameras are able to produce a clear image with a very low level of available light. This facility has proved especially popular with documentary filmmakers, as even indoor scenes can now be shot without additional lights. For compositional purposes, supplementary lighting is often preferred, however. Digital filmmaking using available light also has gained favor with filmmakers wishing to adopt a documentary style in the service of enhanced realism, as in the case of Michael Winterbottom's 9 Songs (2004), a digital feature that was shot entirely on location using only available light.
Fashion in lighting style has varied considerably over the years. Nevertheless, in spite of this historical variation, certain conventions concerning lighting styles have developed.
In Painting with Light, John Alton identified three main lighting aesthetics that he designated "comedy," "drama," and "mystery." Comedies, he argued, should be brightly lit with low contrasts in order to create an overall mood of gaiety; dramas should vary their lighting schemes according to the tonalities of the narrative situation; while mystery lighting, used in thrillers and horror films, is characterized by a low-key approach that swathes much of the set in deep shadow. Countless films confirm the dominance of this way of thinking, from the cheerfully illuminated comedies Way Out West (1937) and Les vacances de Monsieur Hulot (Monsieur Hulot's Holiday, 1953), to the moody chiaroscuro of horror movies like The Black Cat (1934) and La Maschera del demonio (Black Sunday, 1960). The continued relevance of this model is borne out by a project at the University of Central Florida where researchers in the Department of Computer Science have made significant headway in developing a computer system to identify film genres in contemporary American cinema. The programmers used lighting as one of the four formal criteria by which to differentiate genres (the others being color variance, average shot length, and the level of movement within the frame). Such a measurable relationship between lighting and different kinds of narrative shows the extent to which filmmakers have adopted lighting as an important narrational tool, and emphasizes the
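As a rough illustration of this kind of approach (this is not the UCF researchers' actual system; the feature encoding and the centroid values below are invented for demonstration), a film can be reduced to numbers for lighting key, color variance, average shot length, and motion, and then assigned the genre whose typical profile it most closely matches:

```python
import math

# Each film is summarized by four computable features, following the
# criteria described above: lighting key (0 = low-key/dark, 1 = high-key/
# bright), color variance, average shot length in seconds, and motion level.
# The centroid values are hypothetical, chosen to echo Alton's three
# lighting aesthetics.
GENRE_CENTROIDS = {
    "comedy":  (0.9, 0.7, 6.0, 0.4),  # brightly lit, low contrast
    "drama":   (0.5, 0.4, 8.0, 0.3),  # lighting varies with the narrative
    "mystery": (0.2, 0.2, 7.0, 0.5),  # low-key, deep shadow
}

def classify(features):
    """Return the genre whose centroid is nearest in Euclidean distance."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(GENRE_CENTROIDS, key=lambda g: dist(features, GENRE_CENTROIDS[g]))

# A brightly lit, colorful, fast-cut film lands nearest the comedy centroid.
print(classify((0.85, 0.65, 5.5, 0.45)))  # → comedy
```

A real system would learn such profiles from labeled films rather than fix them by hand, but the sketch shows how measurable formal criteria can stand in for genre conventions.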
Bordwell, David, Janet Staiger, and Kristin Thompson. The Classical Hollywood Cinema: Film Style and Mode of Production to 1960. New York: Columbia University Press, 1985.
Higham, Charles. Hollywood Cameramen: Sources of Light. Bloomington: Indiana University Press, 1970.
LoBrutto, Vincent. Principal Photography: Interviews with Feature Film Cinematographers. Westport, CT: Praeger, 1999.
Lowell, Ross. Matters of Light and Depth: Creating Memorable Images for Video, Film, and Stills through Lighting. Philadelphia: Broad Street Press, 1992.
Malkiewicz, Kris. Film Lighting: Talks with Hollywood's Cinematographers and Gaffers. New York: Prentice-Hall, 1986.
Rasheed, Z., Y. Sheikh, and M. Shah. "On the Use of Computable Features for Film Classification." IEEE Transactions on Circuits and Systems for Video Technology 15, no. 1 (2005).
Salt, Barry. Film Style and Technology: History and Analysis. 2nd ed. London: Starword, 1992. Original edition published in 1983.
Young scientists explore nature during new day camp
JANESVILLE—Armed with clipboards, notepaper, pencils and magnifying glasses, several young scientists spent Tuesday morning learning about metamorphosis.
They searched for different stages of a butterfly in the prairie, Thomas Jefferson, Jungle and Pollinator's Paradise gardens at Rotary Botanical Gardens. When they happened upon what they were searching for, they were encouraged to chronicle their findings.
"Write it down, we are scientists taking notes of all the creatures we see," said Kris Koch, the gardens' education coordinator.
The activities all were part of a new Nature Explorer Day Camp, offered to children ages 6-11 through a partnership between the gardens and Janesville Leisure Services.
As soon as campers arrived on the prairie Tuesday, Koch introduced them to milkweed, asking them to examine the plants to see if they could find any eggs.
"I see one!" one girl shouted as the others grouped around her for a glimpse.
As part of the hands-on outdoor adventure camp, eight young sleuths found holes in leaves that caterpillars had eaten, aphids, a ladybug, crickets and grasshoppers.
"It's been common practice of Janesville Leisure Services to bring their day campers to Rotary Botanical Gardens for field trips every summer, and Rotary Botanical Gardens Education has wanted to delve into day camp for a long time," Koch said. "Hopefully this will lead to expanded day camp and other program opportunities for 2015."
Koch explained the value of the camp.
"Kids need to have hands-on opportunities to get up close and personal with the wonders of nature," she said.
The camp's goal is to inspire children to slow down and pay attention to the complex systems functioning around them.
“We're all connected, and by fostering connections between kids and nature we show them we are part of a really big picture and not the only part,” Koch said.
Koch's hope for the camp is that attendees gain a sense of how awesome nature is, and that they become inspired enough to want to spend more time outdoors learning through play.
The camp is an “opportunity to engage a child's sense of wonder without a constant barrage of technological stimulation,” Koch said.
That's why Anne Wanke, Janesville, enrolled her 9-year-old grandson, Christopher Mullen, in the camp.
"He needs to get off the computer," she said. "This is a good way of getting hands-on science and the science component he'll need for the STEM reading program at Hedberg Public Library."
With education being a critical component of its organizational mission, Koch said the gardens would continue to collaborate with community partners, expand offerings geared toward youth and families and install interactive components to enhance experiences for all visitors. | <urn:uuid:8968d631-d7b3-482c-96cd-fccfc0f34cf3> | CC-MAIN-2016-26 | http://www.gazettextra.com/20140806/young_scientists_explore_nature_during_new_day_camp | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783400572.45/warc/CC-MAIN-20160624155000-00081-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.958582 | 601 | 2.578125 | 3 |
Arctic sea routes could open up for shipping by mid-century
The Canadian Coast Guard icebreaker Louis S. St-Laurent makes its way through the ice in Baffin Bay, Thursday, July 10, 2008. (Jonathan Hayward/THE CANADIAN PRESS)
Published Monday, March 4, 2013 2:06PM EST
Last Updated Monday, March 4, 2013 4:42PM EST
Shipping vessels could be cutting directly through the Arctic by mid-century, as sea ice melts and new routes open up to commercial traffic for the first time in known history, according to a new study from two UCLA scientists.
The researchers combined sophisticated climate modelling with northern shipping route information for ships passing from the Pacific to the Atlantic.
This is the first time such mapping has been done to show how melting sea ice will open the previously frozen frontier to commercial traffic between the years 2040 and 2059.
Their findings were published Monday in the journal Proceedings of the National Academy of Sciences Plus.
"It's both an exciting and worrisome prospect because you're talking tourism, cruise-liners, fishing ships,” said lead researcher Laurence C. Smith. “And as these waters open up the incentive will be there, the motivation will be there to explore them with common ships.”
Smith, a professor of geography at the University of California, Los Angeles and author of "The World in 2050," co-authored the report with Scott R. Stephenson, a Ph.D. candidate at UCLA's geography department.
- Standard open-water vessels will be able to navigate through the Arctic by mid-century.
- Polar Class 6 vessels, which are strengthened against ice, will be able to take a direct line from the Pacific to the Atlantic by cruising directly over the North Pole -- a route that would be 20 per cent shorter than current routes.
- Canada's treacherous Northwest Passage will become a viable route for Polar Class 6 vessels.
- The Northern Sea Route along the coast of Russia would become navigable for standard ships.
The researchers used the month of September, when Arctic sea ice is typically at its lowest point, as their benchmark for predicting future trends.
Using seven different climate models, Smith and Stephenson looked at forecasts that incorporated both medium-low and high increases in carbon emissions, and the corresponding rise in global warming. Interestingly, the result was the same under either model.
"The bottom line is once the ice thins enough, further warming doesn't much matter at least for the Polar Class 6 vessels," Smith said. "And even for the open water vessels the take-home message is the same -- the Northwest Passage opens up half of the time on average, and the Northern Sea Route is fully viable to open water vessels regardless of which climate change scenario you assume."
The predictions, if proven to be true, could result in significant changes to the shipping industry.
Today, the Northwest Passage is theoretically navigable only one out of every seven years, making it too unpredictable and dangerous as a shipping route for commercial operations. However, according to the study the route will be open every other year by mid-century, vastly increasing its viability.
That would have political implications as well. While Canada considers the Northwest Passage to be part of its sovereign territory, the U.S. and other nations consider it to be an international strait. However, until now the point has been largely moot because the waterway has been almost entirely unnavigable.
If that were to change, the disagreement could quickly become a sore point between the two nations.
"This study suggests this issue will need to be resolved in the coming years because this route will become viable, especially for light ice breakers and eventually even to open water vessels," Smith said.
If shipping traffic increases in the Arctic, Canada would likely have to increase its presence and patrols in the region as well, he said.
Smith said the modelling shows the Arctic will remain unfriendly to shipping traffic during the winter, and there is no danger that climate change will put the Suez Canal out of business anytime soon. | <urn:uuid:702df920-5aa9-42a1-b271-4dcc8e7ee668> | CC-MAIN-2016-26 | http://www.ctvnews.ca/sci-tech/arctic-sea-routes-could-open-up-for-shipping-by-mid-century-1.1180836 | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397795.31/warc/CC-MAIN-20160624154957-00086-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.96185 | 838 | 2.984375 | 3 |
You might imagine that having two of some organs is redundant. We have two lungs, two kidneys, two eyes -- each doing the same job at the same time. But Dr. Tony Neff, a professor of anatomy and cell biology at Indiana University - Bloomington, warns against downplaying the role of duplicate organs. It takes both organs in those sets to carry out their job fully; although one can function alone, the process it carries out will not be done at full capacity, and the rest of the body suffers. For example, you can see with only one eye, but the eyes' function of providing depth perception will suffer and you'll bump into things much more frequently seeing with one eye than you would with two.
So if you need both lungs to function at full capacity, what would happen if you had an extra heart? Would the performance of the processes it carries out double?
Not at first, says physiologist Bruce Martin, a colleague of Dr. Neff's at Indiana University. Your body is a system, and it's built so that the system is always functioning at its full capacity. When the system is attacked -- for example, through starvation -- all parts of the system suffer at the same rate. Conversely, when one part breaks down, the whole system suffers. If your lungs become irreparably damaged -- say, through emphysema -- the rest of the system will slow down to accommodate the broken part.
So since your system is already functioning at full bore, the addition of an extra heart wouldn't do much. But your system also possesses potential function, as seen in the muscles, when they're called upon to act beyond their normal capacity, like in the case of hysterical strength. We can train our bodies to function at higher levels, like athletes do. Since the heart pumps blood to the muscles, with a second heart your muscles would eventually grow stronger with time. Once the rest of the system is used to having a second heart, a person could grow stronger and have more endurance [source: Martin].
But the same can't be said for your brain. The brain is already getting more than enough blood to it, so it wouldn't function at a higher level, theorizes Dr. Martin.
Interestingly, when we are in the embryonic stage of development, we actually do have two hearts. The heart primordia (the earliest rudiments of the developing heart) are actually two structures, which eventually fuse together into one heart with four chambers. Embryologists in the 1920s and '30s kept the heart primordia from fusing in embryonic frogs, and the frogs that grew up developed two hearts. The same also goes for our eyes. We begin with one primordium of the eye, which eventually separates to form two. If the primordium is kept from splitting, one central eye develops, like a cyclops, says Dr. Neff.
So it's theoretically possible for us to develop two hearts. And if we could determine how to use both fully, we could also advance ourselves into a species of super-strong, intellectually average beings. But wouldn't tampering with our own evolution as a species be dangerous?
"We've already taken ourselves out of evolution," says Rutgers' Susan Cachel. "[Humans are] all effectively tropical animals, and through our use of technology, like winter clothes, we've shielded ourselves from the effects of cold weather."
So we've beaten natural selection by the elements. We'll see what we can achieve with two hearts.
For more information on physiology, evolution and related topics read the next page. | <urn:uuid:45845747-fd8d-421a-bfbf-6fef404c9426> | CC-MAIN-2016-26 | http://health.howstuffworks.com/human-body/systems/circulatory/two-lungs-one-heart1.htm | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396945.81/warc/CC-MAIN-20160624154956-00145-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.961964 | 739 | 3.296875 | 3 |
Amazing Miracles: Supernatural Intervention in a Natural World
Amazing miracles are events in the natural world brought about by the intervention of supernatural agency. Throughout the Bible, miracles are used by God to visually represent His divine power and authority over man and nature. From time to time, God also empowered His followers to use miracles in order to authenticate their commission as teachers and writers on His behalf. In the Gospel accounts of the New Testament, Jesus used miracles to fulfill Old Testament prophecy and confirm His deity.
Amazing Miracles: Suspension of Natural Law is Not Unreasonable
Amazing miracles are represented by four Greek words in the New Testament of the Bible: Semeion (a "sign"), Erga ("works"), Dunameis ("mighty works"), and Terata ("wonders"). Since miracles fall outside material and mathematical explanation, man cannot take credit for them. Therefore, by definition, miracles declare the presence, authority, power and glory of a supernatural God.
The suspension (or violation) of natural laws involved in Biblical miracles is really no different than what we witness on a day-to-day basis. There are inherent natural forces represented by the laws of physics, chemical properties and mathematical formulae, and there are volitional forces that can interact or counteract the natural ones. For instance, the laws of gravity that hold a rock to the ground are not suspended (or violated) when a boy counteracts gravity by applying a greater physical force to pick up and throw the rock. The same logical concept is true when we witness Jesus walking on water or turning water to wine. He merely applies a volitional force outside what we know as the natural laws within our four material dimensions.
Basically, the laws and formulae that underlie the complex universe have been "interpreted" by men living within that same universe. The universal assumption of mankind is that the laws and formulae inherent in the universe have been created by, and are under exclusive control of, the natural universe itself. Once we establish the potential for a supernatural dimension beyond the viewable and knowable forces of nature, we comprehend the possibility (and ultimately, the reality) of phenomena such as miracles. Many scientists reject the Biblical miracles because they can't apply scientific tests such as observation and replication. However, a miracle such as the resurrection of Jesus is by definition an unprecedented event. No scientist can reproduce this event in a laboratory. Therefore, "science" cannot be the final word as to the historical credibility of Biblical miracles.
Like all other historical events, the credibility of a Biblical miracle should be viewed in accordance with standard rules of evidence, weighing factors such as the veracity of the recorded account and the credibility of the eyewitnesses to the miracle event. With a bit of earnest investigation, we find the witnesses to the miracles of Jesus to be competent, and their testimony trustworthy. First, many of the witnesses were still alive when the written accounts were published and distributed. We now know that a fairly short period elapsed between Jesus' miracles and the writing of the gospel accounts. This period was not long enough to allow for the development of myths. Many eyewitnesses were still alive to correct any untrue or legendary miracle accounts. Second, the eyewitnesses to the miracles were simple men of character. The historical record shows that the apostles and many disciples were considered credible and reliable witnesses. More dramatically, all of these eyewitnesses were willing to give up their lives rather than deny their testimony. Third, there were many hostile witnesses to the life and miracles of Jesus. The record shows that none of the Jewish religious leaders disputed the miracles they saw. Rather, they saw Jesus' miracles as a threat, and focused on stopping Jesus' miraculous public ministry.
Amazing Miracles: Judge Them for Yourself
Amazing miracles cannot be dismissed as a scientific presupposition if we establish that the existence of a supernatural Creator is possible. Once we accept the possibility of God, each miracle event must be judged like any other historical event, based on standard rules of evidence and eyewitness testimony. The miracles of Jesus are contrary to 21st Century experience, but that does not establish that they were contrary to the experience of those who witnessed them approximately 2000 years ago. Today, all of us believe numerous facts and events outside our experience, solely based on the reliable and trustworthy testimony of others.
It's fun to imagine what the day was like for the Pilgrims and Native Americans during the first Thanksgiving. There are numerous historical accounts that paint a broad picture of the clothes they must have worn and the foods they might have eaten. There are even clues as to the entertainments they may have had. No, there was not football at the end of a huge turkey dinner, but they likely did have games, song and storytelling after a feast of fish and deer and small game.
However the historical record is lacking in one aspect of that first three day Thanksgiving. The weather. Perhaps David Ludlum has provided the most amazing chronicle of the earth's historical weather of any writer. I heard him speak once and I can tell you that the number of obscure facts he retained was truly flabbergasting. In his wonderful book Early American Winters he scours written accounts of the times to fashion a good guess as to the weather report for the first Thanksgiving.
There was much written about the brutal weather that greeted the Pilgrims' arrival in Massachusetts. Frigid winter storms kept the settlers on their boat for weeks before the weather broke. Accounts hold that the winter of 1620-21 ended mild after the hard start. There were several written accounts of the snow depth and frozen soil and windy conditions.
However little was written about the weather on the days of Thanksgiving. Edward Winslow, a Separatist who traveled on the Mayflower did write that the winter was characterized by "remarkable mildness". But it was the lack of notation regarding the weather that suggests that it was uneventful.
The first Thanksgiving was likely held in the first weeks of October, which is climatologically a comfortable time in New England. It would be over 100 years later that George Washington would change the official date to November. So with the clues given and the historical weather records, it would be a reasonable guess that the first Thanksgiving was not too hot, or too cold, or too rainy, or too sunny, or too cloudy, but rather just average and probably in the 60s.
Some areas of the world such as England, the Benelux countries, and the Eastern United States have had considerable experience with these apple varieties having three sets of 17 chromosomes (51 total): triploids.
Unlike the more common diploid (two sets of chromosomes, 34 total), these triploids produced much bad pollen and many defective ovules; however, for nearly 300 years they have been important to the apple grower because of their quality and often large size.
My former horticulture teacher will probably turn over in his grave when I call these sterile, but for practical terminology we are not far from wrong, so let us do it; however, the old Tompkins County King variety does have some self-fertility despite being triploid.
Perhaps a little apple history is in order before getting to Jonagold. On my experimental orchard, there have been 13 triploid varieties at one time or another.
The oldest probably is Gravenstein (triploid) with a known history to about 1670 in South Denmark, but almost surely an Italian variety whose scions were given to the Duke of Gravenstein in the middle of the 17th century.
Then there is Ribston Pippin long the classic apple of England before its seedling (Cox Orange) was widely grown. Our new Gala (a diploid) variety has Golden Delicious, Kidd’s Orange Red, Cox Orange and Ribston Pippin in its genetic background. No wonder it tastes so good.
Ribston Pippin (triploid) was planted in 1708 at Ribston Hall, Yorkshire and the original trunk did not die until 1835. It then sent up a new shoot and on the same root lived until 1928, 220 years!
Bramleys Seedling (triploid) is yet quite important commercially in Ireland and England as a cooking apple, and is one of my favorite pie and sauce apples, with its one percent acid and huge size. The original tree was planted probably in 1813 and was still alive in 1956 at Southwell, Notts and may still be living 171 years later. A famous triploid which has no peer as a cooking apple.
Belle de Boskoop (triploid), commercially important in Holland, Belgium, Denmark, etc., with many modern offspring, is an old Dutch variety still grown in backyard orchards.
Early American triploid seedlings include Baldwin, the most important commercial red apple in the Northeast for 75 years until a few days of minus 40 degrees F killed them out in 1933-34. They were replaced by McIntosh.
Rhode Island Greening (triploid) is still grown commercially in New York, and Stayman (triploid) resulted from a seed of Winesap planted in 1875.
Some of the newer apple crosses are also triploids – notably Jonagold, Mutsu (Crispin), Spigold, and Suntan (European).
Mutsu (Crispin) and Spigold often have apples too large unless sold in gift boxes, if one leaves the king blossom* on. I have seen Spigold picked from the king bloom only, that ran 35 to 40 per box but were still of good flavor. What a sight on a dwarfing tree.
Jonagold also sizes well and fruiting the king bloom is not that important unless one wants considerable apples in the 64 to 88 size. There are at least two ways to pollinize ‘Jonagold’ depending a great deal on the apple growing area.
First, no matter what geographical area you want to grow them in, you cannot use Golden Delicious as a pollen source. Golden Delicious is cross-unfruitful with Jonagold.
Remember, we usually need two other varieties with viable pollen to pollinize Jonagold. Don't use just one variety, because even though that variety may give a good fruit set to the Jonagold, what is pollinizing the pollinizer? If you want apples from your pollinizers, Jonagold won't do it. It's virtually sterile.
Depending somewhat on climate, Jonagold is usually about in the middle to the late part of the mid-bloom. Suggested varieties as pollinators are Akane and Spartan.
For the latter part of the bloom, Melrose and Paulared would cover it well. Akane and Melrose would be my choice, as Melrose is an outstanding keeper and an all-purpose apple.
Another alternative is for those who like Yellow Newton. Since Yellow Newton is quite self-fertile and has a good overlap in bloom period, it would work fairly well. But triploid should normally have pollen from two sources.
However we do it, there is a way for any area – but doing our homework is important.
*The king blossom is the central, first-opening blossom in the flower cluster of apples. It will produce the largest fruit. Many times all other blooms and/or small fruit, but the king, are removed during the thinning process.
CHILD Protection & Child Rights » Vulnerable Children » Children's Issues » Children without Parental care
According to UNICEF, children worldwide lose their parents in conflict, or due to poverty, disability, HIV/AIDS. Hence there is a large population of children that grow up without one or both of their parents. Children without parental care are at a high risk of abuse, exploitation and neglect. Large numbers of children end up in institutional care. Inadequate individual care in institutions can socially and emotionally impair children. About 1.5 million children in the Central and Eastern Europe and the Commonwealth of Independent States live in public care institutions. In Europe and Central Asia, over 1 million children live in residential institutions. In 2003 there were an estimated 143 million orphans in 93 countries of sub-Saharan Africa, Asia, and Latin America and the Caribbean. Asia has the highest number of orphans due to all causes, with 87.6 million children.
Children may be deprived, temporarily or permanently, of parental care for many reasons, including the illness, death or imprisonment of parents, separation due to migration or armed conflict, removal by child welfare authorities and/or the courts based on the child's best interests, detention of the child, or the child's own initiative to leave home.
In India the child parent relationship is often seen as one of obedience of a social order more so than a right of the child. Hence when a child is separated from his/her parent it is not viewed as the duty of the state to provide that child with a family environment. None the less adoption is supervised by the state, but India does not have a long term foster care or alternate care system outside of institutionalisation.
UNICEF estimated that there were 25 million orphaned children in India in 2007. Another study estimates there are about 44 million destitute children and over 12 million orphaned and abandoned children in India, yet there are only 5,000 (0.04%) adoptions every year. The institutions for children in conflict with the law host about 40,000 children. The wide gap that exists in the knowledge of and attitude towards child adoption and intention to adopt a child between people from different socio-economic backgrounds exposes the need for the state to promote child adoption and to create a system of non-institutional care for children above the adoption age. Adoption in India comes under the provisions of three acts and is carried out centrally by the Central Adoption Resource Authority (CARA):
- The Hindu Adoption and Maintenance Act 1956
- The Guardians and Wards Act 1890
- The Juvenile Justice Act 2000 | <urn:uuid:de013f50-46f7-485b-b80d-93d33e11682b> | CC-MAIN-2016-26 | http://www.childlineindia.org.in/children-without-parental-care-india.htm | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397795.31/warc/CC-MAIN-20160624154957-00048-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.939458 | 521 | 3.71875 | 4 |
News item: the estimated cost of building California's high-speed rail system has increased, again, to at least $60 billion, perhaps as much as $80 billion, and maybe more, according to the Mercury News.
This comes on top of news that ridership of the system likely will be lower than previously estimated.
How much is that? Put it this way, space travel is much, much cheaper than a fast train to San Francisco.
The annual budget of NASA is $18 billion. Estimates of a manned mission to Mars are between $30 and $40 billion -- half as much as California high-speed rail.
Right now, I can't get to Mars. But I have a host of ways -- flying, driving, taking the bus, or taking a slow train -- of getting from my home in LA to the Bay Area.
So why not take that high-speed rail money and use it instead for a state space program? Yes, space travel is notoriously expensive. But a space program would be cheaper and more likely to produce innovation and economic breakthroughs than high-speed rail.
And if it seems outlandish for a state with California's fiscal problems to start its own space program, just think about what that says about the practicality of high-speed rail.
Or to put it another way: If we're going to spend that kind of money on a transportation program, shouldn't it take Californians out of low earth orbit? | <urn:uuid:f26e4bd4-a51c-45f9-88e9-53b6e4d7c0e1> | CC-MAIN-2016-26 | http://www.nbcsandiego.com/blogs/prop-zero/Could-High-Speed-Rail-Take-Us-to-Mars-127453243.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398516.82/warc/CC-MAIN-20160624154958-00159-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.972232 | 300 | 2.65625 | 3 |