(Photo: Associated Press)
Yellowstone National Park is in the northwestern corner of Wyoming, along with a little bit spilling over into Idaho and Montana. At 3,468 square miles, it's larger than the states of Delaware and Rhode Island together.
(Image: National Park Service)
Yellowstone rests on a geological hot spot, specifically a volcanic caldera. The enormous heat contained beneath the Earth's surface bubbles up through geysers and hot springs.
The geothermal heat can damage human structures on the surface. This photo from the Associated Press shows a strip of Firehole Lake Drive. The subsurface heat has melted the asphalt.
P.S. Harry Turtledove, a fantasy and alternate history novelist, wrote a trilogy about life in the United States after a massive Yellowstone eruption. The first book is called Supervolcano: Eruption.
Our scabies medicated products are recommended for anyone two years of age and up.
- Scabies is a widespread and troublesome disease of the skin, caused by minuscule mites.
- The scabies mite brings about the infection by burrowing into the skin to lay its eggs.
- The mite is practically invisible to the sufferer, and being troubled with scabies is a common occurrence.
- A female scabies mite digs only into the very first layer of the skin to lay her eggs. She eats the skin as she tunnels and remains in the burrow for her lifetime.
- Once settled in the burrow, the scabies mite begins laying eggs almost immediately and continues to do so several times each day for up to two months.
- The eggs hatch in a few days, and the young mites emerge from the burrow to feed near a hair follicle.
- In as few as four days the scabies mite reaches maturity and searches for a mate, at which point the female repeats the process, burrowing into the skin to lay her eggs.
- The scabies patient may notice small bites or pimples in the first instance, and it is worth being aware that the mites breed in warm, moist areas.
- Scabies will most frequently be seen in the armpits or on the chest, and in the genital area, the fingers, and anywhere where jewellery creates a warm enclave.
- Parts of the body where there are creases in the skin are often infected with the scabies mite and are common spots for the condition to be found.
- Those infected with Scabies could notice itching - often very intense and most commonly at night - and the appearance of a red rash, and will be inclined to scratch the area affected.
- In youngsters it is usual for the scabies mite to appear on the soles of the feet and the palms, and maybe also on the scalp, while in babies it is often the neck and head that are most often affected.
- Itching and irritation are caused by an allergic reaction that the body has to the presence of the scabies mites, and can sometimes be very painful indeed.
- As the infection spreads, the sufferer may notice hardening of the skin, with crusty and scaly areas appearing in time.
- In people with particularly sensitive skin, or those with a serious scabies infection, nodular scabies may be the result.
- Nodular scabies comes about when debris left behind by the mite becomes embedded under the skin.
- As scabies may become severe and painful if left untreated, it is essential that the right treatment is given for the specified time.
- The elderly and others with weak immune systems are most likely to suffer from severe cases of scabies, and should be careful as a result.
- Like many similar complaints, scabies is notably contagious, and direct contact with a patient is not necessary to become infected.
- Sleeping in a bed or resting in a chair that has been inhabited by a sufferer can bring about scabies infection, as can close contact with the individual.
- Scabies is sometimes found in nursing homes where the elderly reside, and among those who work in such care settings and come into contact with sufferers.
Last update: 01:12 PM Thursday, April 2, 2009
It is hard to overstate the importance of The Lord of the Rings to the development of modern fantasy. The work, the subject of cultish ardor in the first years following publication, rocketed to worldwide popularity in the mid-1960’s. Counterculture readers embraced its exaltation of nature and simple living above progress and the will to power; fans of adventure stories were captivated by its headlong pace; and scholars began to appreciate the extraordinary craft with which Tolkien, over a period of decades, had constructed his imagined world. It is not too much to say that virtually every subsequent fantasy writer owes Tolkien a substantial debt, either directly as inspiration (dozens of lesser works are clearly modeled on Tolkien’s) or indirectly for having vastly expanded the audience—and market—for adult fantasy.
The Lord of the Rings has accumulated a substantial body of scholarship, from the appreciative work of such early enthusiasts as W. H. Auden through more recent formalist approaches. The book has also generated a popular companion literature in the form of reference works, illustrative texts, glossaries, and assorted “guides.” The Silmarillion, Tolkien’s own lengthy mythology of Middle-earth, was published in 1977 but failed to attract a large readership.
Interpretation of so massive a work is a daunting task. Tolkien took pains to refute the popular early views that the Ring was meant to suggest the atomic bomb and that the East-West struggles of Middle-earth were modeled on the political order of either World War II or the Cold War. He noted that he had begun the story decades in advance of such developments and added that he “cordially” disliked such allegory. He further denied that The Lord of the Rings had an intended “meaning,” asserting that in writing it he had wished primarily to tell a riveting story that would enthrall readers. The prolonged popularity and enduring influence of his masterwork attest his success.
The following information is provided by St. Mary’s Food and Nutrition Services to explain the diet ordered for you by your physician. If you desire further information or would like to speak with a dietitian, please contact 706-389-3660 ext. 3660.
The regular diet can also be referred to as a general or normal diet. Its purpose is to provide a well-balanced diet and ensure that individuals who do not require dietary modifications receive adequate nutrition. Based on the Dietary Guidelines and the Food Guide Pyramid, it incorporates a wide variety of foods and adequate caloric intake.
The mechanical soft diet consists of foods soft in texture, moderately low in fiber, and processed by chopping, grinding or pureeing to be easier to chew. Most milk products, tender meats, mashed potatoes, tender vegetables and fruits and their juices are included in the diet. However, most raw fruits and vegetables, seeds, nuts and dried fruits are excluded.
To leave little residue in the GI tract, this short-term diet provides clear liquids that supply fluid and calories without residue. It is often used with acute illness, before and after surgery, and other procedures such as x-ray, CT scan, etc. It includes coffee, tea, clear juices, gelatin and clear broth.
As a transition between clear liquid and a soft or regular diet, this plan provides easily tolerated foods. The diet includes milk, strained and creamed soups, grits, creamed cereal and fruit and vegetable juices. We also serve scrambled eggs because of their high water content and because they are an excellent source of protein.
This diet can serve as a transition between a full liquid and a regular diet by providing foods low in fiber and soft in texture. Most raw fruits and vegetables, nuts, seeds, coarse breads and cereals are avoided. Milk, lean meats, fish, most forms of potatoes and white breads are served on this diet plan.
This type of diet limits fiber, a kind of carbohydrate found in some plant-derived foods. The diet limits intake to around ten grams of fiber daily and is designed to minimize the frequency and volume of residue in the intestinal tract.
Sodium controlled diets are usually prescribed for patients with hypertension and for those with excess fluid accumulations. Intake of commercially prepared foods such as cured or smoked meats, canned vegetables and regular soups as well as buttermilk, salt and salty foods are limited or avoided. White milk, fresh or frozen meats, unsalted vegetables and fruits and low sodium foods are included.
This diet is often prescribed for patients with gastrointestinal disorders or excessive body weight. It limits the intake of fatty food such as margarine, mayonnaise, dressings, oils and gravies. The diet usually includes whole wheat breads, lean cuts of meat, skim milk, low-fat cheese products, eggs, vegetables, and other food items prepared without extra fat.
Lowering blood cholesterol can reduce your risk of heart disease. Cholesterol is found only in foods of animal origin. Certain oats, beans, and fruits are actually effective at lowering cholesterol levels in the body. A cholesterol-restricted diet limits the intake of meats, poultry, fried foods, egg yolks, and whole milk products. Food high in saturated fat and trans fatty acids such as palm kernel oil, coconut oil, margarine, and shortening are also limited. The diet includes skim milk, lean meats, fruits, vegetables, and whole grain products.
This diet varies widely depending on personal choice. It may include only plant foods: grains, vegetables, fruits, legumes, nuts, seeds, and vegetable fats. Some variations are designed to be lower in cholesterol and saturated fat and higher in dietary fiber. Thus, the diet may be helpful in reducing the risk of heart disease and cancer.
A diabetic diet varies from patient to patient depending on the type and intensity of the diabetes, the patient’s personal history, and individual nutrient needs. The Exchange List for Meal Planning establishes the serving sizes and amount of carbohydrates per meal based on calorie recommendations. Meals are basically like those found on a regular menu, but carbohydrate servings are carefully controlled and small snacks may be included in the meal plan. Carbohydrates include starches, starchy vegetables, juice, fruit, milk, and sugars.
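To illustrate the arithmetic behind exchange counting (one carbohydrate exchange is standardly counted as 15 grams of carbohydrate; the 60-gram meal allowance below is only a hypothetical example, since actual targets are set by a dietitian), a short sketch:

```python
GRAMS_PER_EXCHANGE = 15  # one carbohydrate exchange = 15 g carbohydrate

def carb_exchanges(carb_grams_per_meal):
    """Convert a meal's carbohydrate allowance into exchange units."""
    return carb_grams_per_meal / GRAMS_PER_EXCHANGE

# A hypothetical plan allowing 60 g of carbohydrate per meal:
print(carb_exchanges(60))  # 4.0 exchanges
```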
A renal diet is carefully planned with special consideration of nutrients, and it is often adjusted as kidney disease progresses. A renal diet may serve the purpose of attempting to slow down the progression of renal dysfunction. If dialysis treatments are not being taken, the doctor may restrict potassium intake from foods such as potatoes, tomatoes, oranges, and bananas. A phosphorus restriction may limit the intake of milk and dairy products, dried beans and peas, whole grain breads and cereals, coffee, tea, and “dark-colored” soda beverages.
Help build a sustainable US food system by putting USDA data into the hands of farmers, researchers, and consumers.
Being a farmer in America is not for the weak of heart. For decades, it has been a game of both chance and experience, making farming a difficult industry to enter. In fact, the number of entry-level farmers has fallen by 30% since 1987 (http://www.cfra.org).
For those that do enter this industry, farming is becoming an increasingly data and technology driven activity. Known as “precision farming” or “precision ag,” farmers are now utilizing data from satellites, market reports, weather forecasts, surveys, and sensors that provide on-demand GPS monitoring and mapping tools. (http://www.usatoday.com)
Still, it’s not enough.
American farmers need more data in order to create a sustainable food system for the United States. They need to analyze the food supply coming from farms and ranches and the economics of consumer demand. They need to know how yields have changed over time so they can prepare for and predict future crops. They need to know what is growing well in their area and what isn’t. Similarly, consumers and researchers want to know where their food is coming from and how we can make US agriculture more sustainable.
The USDA has a tremendous amount of food supply, economic demand, and remote sensing data through its National Agricultural Statistics Service (NASS) and Economic Research Service (ERS); the challenge is to explore how to make this data accessible and provide insights for potential users.
Help create a sustainable, competitive, and healthy US food system. Use USDA data to create working, interactive applications to get farmers the information they need – and help feed America.
This challenge is open to:
- Individuals (who have reached the age of majority in their jurisdiction of residence at the time of entry)
- Teams of eligible individuals
- Organizations (up to 50 employees)
- Large Organizations (with over 50 employees) may compete only for the non-cash Large Organization Recognition Award.
- Employees of Large Organizations (with over 50 employees) are eligible as long as they enter the competition independent of their company and meet all other requirements.
Employees of the USDA, Microsoft, ChallengePost and contractors currently under contract work for USDA, Microsoft, or ChallengePost are not eligible.
What to Create: Submit a working, interactive application that integrates one or more of the required USDA datasets.
Static data visualizations will not be eligible. Applications must include interactive functionality (e.g. the user can change parameters to update the visualization and/or result).
Eligible platforms:
- Smartphone or tablet (iOS, Android, Blackberry, Kindle, Windows 8 Mobile)
- Web (mobile or desktop)
- Desktop (Windows PC, Mac Desktop)
- Software running on other publicly available hardware (including, but not exclusive to, wearable technology, open source hardware, etc.)
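As a minimal sketch of the interactive-functionality requirement above (the sample records and field names here are hypothetical stand-ins for real NASS yield data, not the actual dataset schema), an app on any of these platforms would recompute its result whenever the user changes a parameter:

```python
# Hypothetical records standing in for NASS corn-yield data (bu/acre).
SAMPLE_YIELDS = [
    {"state": "IA", "year": 2010, "bu_per_acre": 165},
    {"state": "IA", "year": 2011, "bu_per_acre": 172},
    {"state": "IA", "year": 2012, "bu_per_acre": 137},
    {"state": "NE", "year": 2010, "bu_per_acre": 166},
    {"state": "NE", "year": 2012, "bu_per_acre": 142},
]

def yield_trend(records, state):
    """Recompute the yield change for whichever state the user picks;
    an interactive app would call this on every parameter change."""
    rows = sorted((r for r in records if r["state"] == state),
                  key=lambda r: r["year"])
    if not rows:
        raise ValueError("no records for state " + state)
    first, last = rows[0], rows[-1]
    return first["year"], last["year"], last["bu_per_acre"] - first["bu_per_acre"]

print(yield_trend(SAMPLE_YIELDS, "IA"))  # (2010, 2012, -28)
```

A real submission would swap the sample list for live USDA data and wire the `state` parameter to a UI control.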
Supplemental Material: You must submit a demo video (hosted on YouTube, Vimeo, or Youku) that walks through the main functionality of the application via screencast or video. You must also submit a text description and at least one image/screenshot of your working application.
Testing: You must make your app available for testing by providing a link to access your installation file, an uploaded installation file, a beta distribution build, etc. See full testing access options.
New & Existing Solutions: Apps may be newly created or pre-existing. If the submitted app existed prior to the competition’s submission start date, it must have been updated to integrate the required USDA data during the submission period.
How to enter
- Click “Register” to sign up for important challenge communications.
- Visit the Resources page for a list of the eligible USDA datasets to get you started.
- Create your app!
- Shoot your demo video and take screenshots of your functioning app.
- Provide a way for us to access your app.
- Get started on your draft and submit early!
Dr. Debra Peters
Senior Advisor for Earth Observations, U.S. Department of Agriculture, Office of the Chief Scientist and Research Scientist, USDA Agricultural Research Service
Chief of the Research Support Branch at USDA’s Economic Research Service
Head of NASS’ Spatial Analysis Research Section in Fairfax, Virginia
Environmental Scientist, Microsoft
Deputy Managing Director, Microsoft Research Outreach
Dr. Brian Lutz
Third Generation Farmer, Ray Lutz Farms
Quality of Idea
Includes creativity and originality of the idea.
Implementation of Idea
Includes the quality of the design and user experience, as well as the level of analysis difficulty of the required data.
Clarity and Accuracy of Solution
Includes the completeness of the documentation and the accuracy of the data usage.
Includes the extent to which the application could be useful to researchers, agricultural workers, etc.
The action of bending or the condition of being bent, especially the bending of a limb or joint: flexion of the fingers
More example sentences
- During flexion, the spinal cord lengthens.
- The wound was closed in layers, and a compressive dressing was applied with the ankle in slight plantar flexion.
- The patient should be supine with some flexion of the dorsal spine to relax the tension of the anterior abdominal wall.
Early 17th century: from Latin flexio(n-), from flectere 'to bend'.
For editors and proofreaders
Line breaks: flex¦ion
It won’t be long before spring is here, pushing dormant leaf and flower buds into action. Some important orchard chores need to be accomplished before these sleeping buds begin to swell. This includes treating peach trees for scale with a dormant oil spray and the annual pruning of peaches.
Dormant Oil Sprays
Winter time is prime time to apply a dormant oil spray to deciduous fruit, nut and certain landscape trees and shrubs to control scale and other insect pests. Horticultural oils are highly refined petroleum products for controlling scale, mites and other overwintering insects and their eggs on plants. Horticultural oils work mainly by coating pests with a suffocating film of fine oil. Their toxic action is more physical than chemical and is short-lived.
Horticultural oils for controlling insect pests have been around for many decades. Initially, their only use was as a dormant spray on deciduous trees since early formulations could injure plants when applied during the growing season. Great advances in formulations have been made and horticultural oils are now safely used, when manufacturer’s directions are followed, during both the growing and dormant season.
Horticultural oils have several advantages over conventional pesticides. They have a wide range of activity against scale, mites, and other insects and their eggs. There is little or no resistance to oils by pests. A major advantage is that oils are usually less harmful to beneficial insects and predatory mites than other insecticides with longer residual activity. Oils are safe to handle and relatively harmless to humans, animals and birds, leave little or no residue on crops, and some formulations can be used by organic farmers in the TDA Organic Certification Program, if they are OMRI listed.
Some potential disadvantages of horticultural oils include injury to weakened or stressed plants when used during the growing season. Therefore, time applications during the growing season to avoid high temperatures, drought conditions, and prolonged wind, and don’t spray plants severely weakened by insect or disease. Always read and follow labels carefully to avoid problems.
Scales are the most serious pest to control in the winter time. Scales are tiny, sucking insects that attach themselves to tree limbs and branches with thin, smooth, tender bark. They suck sap from the plant and a heavy infestation of scale insects can weaken and kill branches or entire trees. Each scale insect covers itself with a waxy, protective material. This waxy armor makes scale difficult to control with most insecticides.
Scale is often overlooked because they often blend in to branch and bark color. Look for the presence of small bumps that can be flicked off with a fingernail.
Most fruit trees including peaches, plums, apricots, apples and pears can have scale infestations. Pecans also benefit from an oil spray, although phylloxera are more of a problem than scale on pecan trees. If pecan leaves and leaf stalks had galls or large bumps on them last year – an indication of phylloxera infestation – then an oil horticultural oil application can help reduce the reoccurrence.
Late January or February, shortly before bloom or leafing, is the best time to apply oil. Scale insects grow weaker through the winter and are more vulnerable to the suffocating oil film if it is applied late in the dormant season. Do not apply dormant oil after trees have begun to bloom or leaf out.
Mix oil according to label directions. Dormant oil works best if applied when the temperature is above 55 degrees, although it can be applied when the temperature is between 40 and 70 degrees F.
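The label's dilution rate drives a simple volume calculation. As a sketch (the 2 percent rate used here is only a hypothetical example; the rate printed on your product's label is what counts):

```python
FL_OZ_PER_GALLON = 128

def concentrate_needed(tank_gallons, label_rate_percent):
    """Fluid ounces of oil concentrate for a spray tank,
    given the dilution rate printed on the product label."""
    return tank_gallons * FL_OZ_PER_GALLON * label_rate_percent / 100

# A 3-gallon sprayer at a hypothetical 2 percent rate:
print(round(concentrate_needed(3, 2), 2))  # 7.68 fl oz
```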
Good spray coverage on the upper and lower sides of branches is critical for effective control. Use sufficient volume of solution to thoroughly wet limbs and bark.
Trees with a really severe scale infestation may need two oil sprays. If a second application is needed, wait at least 3 weeks between sprays. It can be difficult to tell if scales are dead since they don’t move around, and dead scales don’t fall off the bark or leaves. Take a knife blade or your fingernail and press on a scale. If a bit of “juice” comes out, it is alive. If the scales are flaky and dry, then they are dead. Scales with tiny holes have been parasitized by small wasps.
For more on using oils as pesticides, click here to go to an online publication by that title.
Proper pruning of peach trees, starting from the day they are planted, helps keep trees to a manageable harvest height and maintains the productivity of trees.
Newly planted peach trees should immediately be cut back to about two to three feet tall. Several branches will develop near the cut during the spring. During the summer, select three or four well-spaced, wide-angled branches to form a bowl-shaped framework of scaffold branches. Cut these back about 18 to 24 inches from the main trunk to force side shoots. These new shoots will become the secondary main branches from which most fruit wood will be produced. Remove all suckers sprouting from the base of the tree.
The ideal shape for training a peach tree is an open center, like a bowl, with three or four major branches radiating out from the trunk like spokes on a wheel. Plums can have a central main trunk.
Training during the next few years after planting depends on the tree’s rate of growth.
Remove vigorous, upright shoots and larger branches that grow into the open, bowl-shaped center of the peach tree. Leave enough short, leafy growth and fruiting branches on the interior to prevent sun scald of the main scaffold branches.
On bearing trees, clip the secondary main branches and other branches to maintain a practical tree height, about 7 to 8 feet tall. Fruit are produced on 1-year-old shoots, so there must be plenty of new growth every year. The 2012 peach and plum crop will be produced on wood that is produced this year. Yearly pruning helps stimulate this new growth. Thin out crowded shoots that will receive little sunlight. Remove low shoots that are in the way and which might sag to the ground under a heavy crop load.
Yearly pruning is needed to keep the center of the peach tree free of excessive growth. Light is critical for development of fruit wood and flower buds, and good air circulation helps reduce disease problems.
Issue Theme: Staying Human
English Leadership Quarterly, Volume 35, Number 4, April 2013
Stories separate us from the other animals on our planet. It is through stories that we learn how to survive, cooperate, empathize. But as technology becomes more pervasive in our lives, how do we keep the humanities at the forefront of our thinking? Current educational initiatives like Common Core and STEM licensing may actually provide opportunities to integrate the humanities in STEM-related instruction and help us keep our human-ness at the center of our thinking and our teaching. Contributors Patsy Callaghan, Brandon Sams, and Sandra Jane Greek help us navigate these educational waters and look with hope at our possible futures.
Copyright © 1998-2016
National Council of Teachers of English. All rights reserved in all media.
Now that the jury has returned its verdict on the very real threat of climate change, we must focus on what needs to be done. The recent report of the Intergovernmental Panel on Climate Change (IPCC) should make climate-sceptics like US president George Bush blush. It establishes that the concentration of greenhouse gases in the atmosphere has increased manifold; that this increase is due to human activity and is leading to the warming of the climate system, evident in increases in global air and ocean temperatures, widespread melting of snow and ice, and rising sea levels. All that was feared is coming true, in our lifetime.
We are devastatingly vulnerable. Recent studies in the Himalaya by Indian scientists confirm glaciers are receding, at unnatural rates. This means our northern rivers, fed by glacier melt, will first see floods and then shortages of freshwater flows. We will also see more heat waves, more extreme events—floods—and loss of crop productivity. And we are not talking about the far future, but changes as early as 2030.
The industrialised North has done little to reduce emissions so that we can grow. The growing cry is that India and China must join in cutting emissions. The global media is replete with images of polluting factories in China, of growth in India, of reasons these countries must take on emissions reduction. The cry is becoming a scream. George Bush and his ilk have always argued that the Kyoto Protocol was fundamentally flawed because it excluded big polluters like China or India. Now, even our very own Davos-returned glitterati are making similar noises: “So what if we did not create the problem in the first place? So what if our emissions are much lower than the rich north, historically and even presently? We must join in”.
A re-cap is in order here. In 1991, as the world was first learning about climate change, my colleague Anil Agarwal and I published a report, provocatively called Global warming in an unequal world: a case of environmental colonialism . We established that the Northern countries had a natural debt—borrowed emissions from future generations—and like the financial debt of the poorer countries it needed to be paid. We also put forward the proposition of trading in the unused emissions of the South so that it would provide these countries with incentive to invest in technologies that were energy efficient or low in carbon emissions. But so that the trading system would not discount the price of carbon, we suggested a system of per capita rights and entitlements to be established.
Our logic was used in the climate change negotiations. But instead of a regime built on rights of individual nations over the global commons, a compromised deal was struck to set up the clean development mechanism (CDM). It was established on the principle of the emission-indebted North paying Southern countries to invest in cleaner technologies. But its rules were made (as we had cautioned) to ensure the North got the cheapest deal to reduce emissions. CDM today is profitable for certain companies of the South, but is not leading to real and effective change. It is not made to pay for the transition needed in our countries. It pays peanuts and gets monkeys.
In all this, the Indian government is lost. It believes that if it raises the spectre of climate change, it will be forced to take on commitments. It knows that growth of greenhouse gas emissions is linked to economic growth. This is why the rich world has found it difficult to substantially cut and restructure its economies. Therefore, it finds comfort in ostrich-like behaviour: don’t fuss about climate change; don’t do anything that will rock the boat or force scrutiny on our emissions. Simultaneously, play the CDM game, which is benefiting a few industrialists. Don’t worry. Be happy.
This is not the time for weak-hearted and mild reactions. The challenge of climate change means we demand much more than the pusillanimous actions the rich are prepared to undertake.
One, we must be strident in demanding deep cuts in emissions from the rich world. We must put forward the best science that shows adverse impacts on us, our economies and our people, to explain the costs of the rich world’s emissions, particularly for the poor of the world.
Two, we must use our good offices, or bad ones, to insist that the US and Australia take on emission reductions. We must immediately walk out of the dirty deal we have signed with the US and Australia, innocuously called the Asia Pacific Partnership. This deal was and is designed to destroy the multilateral agreement on climate change. It will give the world’s dirtiest polluters a way out. It is unacceptable that we should be party to it.
Three, we must be willing to engage in climate reduction targets. Not by taking on commitments, but by re-designing the clean development mechanism for effective action. We should examine sectors in our own economy where we can take advantage of clean technologies and reduced cost of fuel—power, public transport, energy-intensive sectors like steel or cement. We should benchmark the existing technology and the cost of where we can go with the best and most energy-efficient and alternative technology in the world. We should demand this be paid for, so that we do not invest today in something that will make the world more insecure.
Four, we should create an internal entitlement system at the national level. The rich in India also overuse the share of the climate quota. The investments in low carbon technologies must be used to provide alternative energy and economic options for the poor, who under-use their share of the global commons and provide the rich ‘space’ to exhale.
The Veggie team at Desert RATS
Creating a space farm is such a common assumption that SciFi writers almost routinely include some kind of plant growth or space farm area in any show that involves long-distance space travel or space-based colonies. Off the top of my head, I can think of an episode of Doctor Who, the film Sunshine, and the New Yorker story "Lostronaut."
But growing plants is hardly straightforward. Indeed, straightness is one of the problems: Plants rely on both light and gravity to orient themselves, so their roots grow down and their stems grow up. But then there’s the problem of providing the right levels of humidity, ensuring the water actually goes down to the roots in a zero-G environment, providing enough nutrients, and doing it all in a space- and energy-efficient way.
To solve the problems of growing plants in space, Orbital Technologies Corporation has been working on “deployable vegetable production units” or, as they’re more affectionately called, Veggies. The latest iteration was based on astronaut food containers, and offers astronauts a way to grow plants as a hobby during their free time, as well as give NASA a chance to experiment on the problems of growing plants in microgravity.
Each Veggie unit is a box offering 0.13 square meters of growing area (about 1.4 square feet). Astronauts inject water into a foam base with a hypodermic needle, and light is provided by multi-colored LEDs from above. The boxes can grow small herbs or other plants that can supplement astronauts' food supply.
NASA gave the unit a short demo at Desert Research and Technology Studies (Desert RATS), a two-week research trip to test all sorts of different space equipment that ended on Friday. Orbitec officials said they expect the units to get a test run on the International Space Station sometime soon. NASA may even try putting some of the units into a centrifuge to test out different levels of microgravity and its effect on growing time.
Dr. Adolphus H. Noon arrived in Tucson in October 1879, with his oldest son Alonzo and a friend. Noon was looking for a place to settle, where he could set up a medical practice and also do some mining. To pay for the trip, he wrote articles for the Chicago Tribune, describing the business prospects in southern Arizona from Arivaca to Tombstone. “I breathe more freely in these mountains of Arizona,” Noon confided in his readers.
He located a homestead site about seven miles south of Arivaca near Oro Blanco and brought the rest of the family out including his wife Emma in 1880. They built an adobe house and office and Noon prospected and built his medical practice. He was elected recorder of the Oro Blanco Mining District and served as a notary public and a school trustee. He supported construction of the school building and assisted in attracting qualified teachers.
His five sons learned how to make adobes and build houses. Often Noon received cattle and chickens in payment for his medical services, so he developed a ranch. The boys learned to ride, rope and butcher, with all the accompanying skills of making ropes, leather work, and preserving meat.
The Noons’ daughter Sarah Elizabeth was born in 1883. Noon’s wife Emma brought an organ to Arizona and it provided entertainment and spiritual support during the hard times. Emma consoled other women who were homesick in the mining camps. She had been born into a pioneering South African family, originally from England, as was Adolphus.
The Noon family homesteaded as soon as possible, proving up in 1892. They planted trees, including a mulberry tree that is still there. The children attended the school at Oro Blanco, and each began to specialize in what they liked best, which turned out to be their life’s work. Alonzo and Arthur were the ranchers of the family, but they also prospected. Alonzo married Annie McClenahan and they moved to Pena Blanca Canyon where Alonzo was the postmaster.
Subsequently they moved over the mountain to the east of Oro Blanco where Alonzo built an adobe house with rock chimney, the latter being all that is left in the canyon that bears its name.
Alonzo passed away in 1903. The second son, Adolphus S. Noon, became a master mechanic and a civil engineer. In the 1890s he moved to Nogales where he married Anna Menzel and had a blacksmith shop. He did the first restoration work on the Tumacacori Mission, using the adobe making skills he had developed as a boy back in Oro Blanco.
Arthur Noon prospected in Mexico before he took over the ranch in 1903. He married Martha Clayton, who had come to teach in Oro Blanco. His descendants still own the family ranch and some of the mining claims.
The fourth son, Edward E. Noon, began work in a mine at the age of 14 and by the age of 19 was a shift boss at the Montana Mine. He studied mining engineering at the University of Arizona in 1895. He married Estelle Barnard and lived for many years in Mexico before returning to run the Yellow Jacket Mine and then moved to Nogales.
The fifth son, Samuel Frederick, taught himself law and passed the Arizona bar exam. He became the clerk of the court and later the Santa Cruz County attorney. He married Natalie Bonsall and they moved to San Diego where he practiced law.
Adolphus received most of his medical training through apprenticeships, only attending the College of Physicians and Surgeons in San Francisco for one year in order to receive a certificate and license. Formal education was emphasized, however, for the subsequent generations, and at one point the Noon family received an award for having more members attend the University of Arizona than almost any other family.
In the early 1890s the ranchers experienced their first bad drought. The 1870s and early 1880s had been a rainy period. During the panic of 1893, silver prices dropped drastically so that mining became unprofitable. So the elder Noon family moved to Nogales and Adolphus set up a medical practice.
Adolphus served on the first Board of Supervisors when Santa Cruz County was formed in 1899. He served as a representative to the Territorial Legislature in 1901 and as mayor of Nogales in 1910. The Noons were involved with the establishment of St. Andrew’s Episcopal Church. During this time he continued to practice medicine and retained his interest in the Oro Blanco mining district.
Emma Noon died in Nogales in 1917. Adolphus lived until 1931, practicing medicine almost until the day he died at the age of 94.
— Jane Eppinga. Photo courtesy of Mary Noon Kasulaitis.
Deterministic and Random Signal Classifications
A signal is classified as deterministic if it's a completely specified function of time. A good example of a deterministic signal is a signal composed of a single sinusoid, such as

x(t) = A cos(2πf0t + φ)

with the signal parameters being: A is the amplitude, f0 is the frequency (oscillation rate) in cycles per second (or hertz), and φ is the phase in radians. Depending on your background, you may be more familiar with the radian frequency ω0 = 2πf0, which has units of radians/second. In any case, x(t) is deterministic because the signal parameters are constants.
Hertz (Hz) represents the cycles per second unit of measurement in honor of Heinrich Hertz, who first demonstrated the existence of radio waves.
A signal is classified as random if it takes on values by chance according to some probabilistic model. You can extend the deterministic sinusoid model x(t) = A cos(2πf0t + φ) to a random model by making one or more of the parameters random. By introducing random parameters, you can more realistically model real-world signals.
To see how a random signal can be constructed, write x(t, θ) = A(θ) cos(2πf0t + φ(θ)), where θ corresponds to the drawing of a particular set of values from a set of possible outcomes. Relax; incorporating random parameters in your signal models is a topic left to more advanced courses.
To visualize the concepts in this section, including randomness, you can use the IPython environment with PyLab to create a plot of deterministic and random waveform examples:
In : t = linspace(0,5,200)
In : x1 = 1.5*cos(2*pi*1*t + pi/3)
In : plot(t,x1)
In : for k in range(0,5):   # loop with k = 0,1,...,4
...:     x2 = (1.5 + (rand(1)-0.5))*cos(2*pi*1*t + pi/2*rand(1))   # rand() is uniform on (0,1)
...:     plot(t,x2,'b')
...:
The results are shown here, using a two-panel PyLab subplot to stack the plots.
Generate the deterministic sinusoid by creating a vector of time samples, t, running from zero to five seconds. To create the signal, x1 in this case, these values were chosen for the waveform parameters: A = 1.5, f0 = 1 Hz, and φ = π/3.
For the random signal case, A is nominally 1.5, but a random number uniform over (–0.5, 0.5) is added to A, making the composite sinusoid amplitude random. The frequency is fixed at 1.0 Hz, and the phase is uniform over (0, π/2). Five realizations of the random sinusoid are created using a for loop.
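The same experiment can also be run as a self-contained script outside the IPython shell. The sketch below assumes only NumPy (the plotting calls are omitted) and uses a seeded generator so the five "random" realizations are repeatable:

```python
import numpy as np

rng = np.random.default_rng(0)  # seeded so the random draws repeat

t = np.linspace(0, 5, 200)                         # time samples over 0..5 s
x1 = 1.5 * np.cos(2 * np.pi * 1 * t + np.pi / 3)   # deterministic sinusoid

# Five realizations: amplitude uniform on (1.0, 2.0),
# phase uniform on (0, pi/2), frequency fixed at 1 Hz
realizations = []
for _ in range(5):
    A = 1.5 + (rng.random() - 0.5)
    phi = (np.pi / 2) * rng.random()
    realizations.append(A * np.cos(2 * np.pi * 1 * t + phi))

print(len(realizations), x1.shape)
```

Because A never exceeds 2.0, every realization stays within ±2.0, while the deterministic signal stays within ±1.5.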
Gary S. Messinger. The Battle for the Mind: War and Peace in the Era of Mass Communication. Amherst: University of Massachusetts Press, 2011. xi + 293 pp. $28.95 (paper), ISBN 978-1-55849-853-2; $80.00 (cloth), ISBN 978-1-55849-852-5.
Reviewed by Ross Collins (North Dakota State University)
Published on H-War (March, 2012)
Commissioned by Margaret Sankey
War and Public Persuasion
Gary S. Messinger takes on an ambitious project: a study of ways that people have tried to influence the human mind from the beginnings of mass media to the present. Does the work cover democratic government influence? Yes, it does. Newspapers? Yes, it does. It also covers movies, radio, television, dictatorships, the army, and industry. It examines these topics across wartime and interwar periods from 1850 onward--although most of the book examines the world after 1914. The world that is examined is, indeed, the entire Western world. Even part of the non-Western world is addressed, as the author considers Japan as well. Despite this enormous sweep, Messinger has produced a book of modest size. Its 293 pages should appeal to students of media and propaganda as well as general readers interested in knowing more about how mass minds have been led in war, in prewar, and in postwar.
I have not used the word “propaganda” in reflecting the author’s aim. This is because the author does not himself describe his approach in this manner. In the preface, he explains that his historical research can illuminate the origins of our present-day situation. Those pre-World War I origins coalesced into what he calls the “industrialization of communication,” and this is a key approach that the author chooses to help readers understand “the battle for the mind.” Both peacemakers and warmongers have used mass communication, he writes, as tools for goals both pacifist and militarist. “Important clues to the future of war and peace lie in understanding how they have already been shaped by technologically amplified communication directed at billions of people” (p. xi).
Toward this end, Messinger devotes most of the text to the world wars, the interwar period, and the postwar period as reflected by authorities’ attempts to influence people’s minds. This helpful factual material does address the development of modern propaganda. In particular, the first three chapters offer quite a bit of information regarding its development before 1939. It does not, however, offer much interpretation regarding this period. Perhaps this is unavoidable. So much is covered, in so many countries, considering so many media that the author has space to offer general statements, but usually little possibility to follow up with more thorough explanation.
Some of the more arbitrary statements need a little support. For example, the author notes, “To increase the sale of newspapers, the press exaggerated the possibility of subversion by external sources” (p. 54). Does he mean the entire press? Perhaps research can show this is true--although one doubts that. Our skepticism needs to be addressed with historical evidence, here and in many other places throughout the book.
For example, the author assails U.S. newspapers for “depriving them [readers] of information that could have helped them make intelligent judgments” regarding the Russian Revolution (p. 64). But blaming the American press for purposely distorting coverage of the events of 1917-20 is simplistic. We need to recall the optimism Americans had regarding the Alexander Kerensky government, and the feeling of betrayal as Russia pulled out of a war that, in 1917, Americans felt was against an evil, worldwide threat. The United States knew Communism was extrinsically anticapitalist, and in the United States its sympathizers were launching riots and bombings. Walter Lippmann and Charles Merz criticized the New York Times, the author notes, but they wrote in 1920. Today we can more easily assess the purported failings of journalism nearly a century ago. To be fair, the nuance of historical opinion regarding this era at least needs to be addressed briefly, instead of flatly stating that the press failed.
Similarly, the author notes, regarding 1930s radio, that the Orson Welles broadcast of H. G. Wells's War of the Worlds caused a “national panic” (p. 55). Some recent journalism history research actually disputes this. I acknowledge that in a book of this expanse it is difficult to consider every historical debate regarding the significance of past media events. But sometimes it is worth noting that historians disagree, if for no other reason than to alert the reader to varying interpretations.
We see greater evidence of this tendency as Messinger moves to contemporary affairs, where he is at his most confident. The author’s analysis of Ronald Reagan’s success includes speculation still controversial--but it is devoid of research references. Most of the author’s critical view of the second George Bush administration’s propaganda attempts also are speculative, and do not seem to be based on research. I tend to believe that the author’s viewpoint is correct, but historical speculation needs to be based on sources. These sources would strengthen the book if indicated in a bibliography. We need one. Historical writing is based on primary and secondary material, and so usually historians list these to demonstrate credibility.
A reader would be able to follow a more clear focus for the text if the author provided orientation through a separate chapter discussing what he and other scholars have said about the battle for the mind. In fact, while this is a nice turn of phrase, it is clear from the text that we are talking about propaganda. What is “propaganda”? The term has been extensively debated over the last century. Is the daily newspaper propaganda? Messinger seems to suggest it is--but at what point does it become propaganda? Dictators controlled media in the 1930s. But democracies did not. Is the latter propaganda in the same way as the former? What about commercial radio? Were such politicians as David Lloyd George and Mahatma Gandhi propagandists, as suggested on page 57? Or is the propaganda only in the media reports of what they said? These are different concepts, and cannot be conflated into one single entity.
Messinger does help us by introducing the idea of “bio-behavioral” as it defined Nazi propaganda. This explains propaganda as a way to reach into human nature to manipulate opinion. But as it relates to mass media research, “bio-behavioral” is an unfamiliar term. It may very well exist in psychological literature. We have no way of knowing that without references or a bibliography. It is certainly all right for a senior scholar to introduce a new concept to help explain Fascist and Communist propaganda of the 1930s, but if he does, it perhaps needs to be explained in considerable detail with ample justification in the literature. In this case, it could even serve as a way to focus the otherwise amorphous idea of persuasion, propaganda, or other influence as practiced throughout world media and national leaders.
And what about the power of that propaganda? Is it as powerful as people have often presumed? Researchers have had quite a bit to say about media influence since 1945. Messinger briefly discusses the more famous of the scholars, Marshall McLuhan, Noam Chomsky, and Jacques Ellul, regarding mass media and persuasion. But other researchers even prior to World War II were suggesting limits to the power of mass media persuasion. We could benefit from discussion of the (now generally discredited) magic bullet theory, and the (generally still accepted) agenda-setting theory--ideas that go to the heart of this book’s premise, it seems. In fact, studies since the 1960s tend to reveal limits to the persuasive power of mass media.
The author’s conclusion could have helped the reader tie up this sprawling study of propaganda, persuasion, mind-battle, bio-behavior, or whatever term we would prefer to use in this work. An attempt to write about an ambiguous concept across all media, through all countries, through a century can leave a diffused result. A final chapter could have helped readers to see common threads--if such common threads exist.
Still, the author has produced a well-written work of the kind of reach only a mature scholar can attempt. Messinger entertains us with provocative theses and reminds us of significant facts. While the author does declare a historical approach, however, it seems primarily to be a synthesis of published work. Few primary sources are reflected in the endnotes. This produces somewhat of a textbook, and the author perhaps intends it to be useful to students of mass media history. Yet so much is covered that so little can be said. Perhaps the publisher is to blame for asking too much of the author. Limiting the work to only newspapers, or only the United States, or only dictatorships, or only the postwar period, or only the military, would have offered the author the luxury of expanding and focusing the historical record and debate.
Note for future editions: General Joseph Stilwell’s name is misspelled (p. 131); Milo Radulovich’s name is misspelled (p. 153).
If there is additional discussion of this review, you may access it through the network, at: https://networks.h-net.org/h-war.
Ross Collins. Review of Messinger, Gary S., The Battle for the Mind: War and Peace in the Era of Mass Communication.
H-War, H-Net Reviews.
This work is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 United States License.
One of Britain's rarest butterflies has returned to a spot where it has not been seen for more than 40 years.
The Adonis Blue is mostly found in southern England
The Adonis Blue, classified as a priority species, is usually only found at a few places in southern England.
But it has returned in numbers to a former site in the Cotswolds, Gloucestershire, after a National Trust campaign to restore its habitat.
The insect's numbers were decimated 50 years ago when a lot of its natural habitat, chalk grassland, was lost.
The Adonis Blue likes to live in habitats with short grass, and it is unusual for the butterflies to fly far from their home base.
When the rabbit-killing disease Myxomatosis broke out in the 1950s, the lack of rabbits meant grass grew too long and the Adonis Blue's former habitats became unsuitable.
But now large numbers of the species have moved back to its former home around Rodborough and Minchinhampton Common, as trust officers have brought in cattle to keep the grass down.
Matthew Oates, butterfly expert and adviser for the National Trust, said: "Never underestimate a butterfly.
"We think that the Adonis Blue may be benefiting from milder winters and hotter summers and that it should produce a bumper brood this August and September.
"It is one of our loveliest butterflies and we are delighted to have it back in the Cotswolds."
Particle Physics 1.2
OnScreen Particle Physics simulates a particle-detection chamber, the device used by researchers for gazing into the fundamental properties of matter at high-energy accelerator facilities, such as Fermilab.
OnScreen Particle Physics is primarily aimed at teachers looking for a way to introduce modern physics into their classrooms from a "how-we-know" perspective, but it could be of interest to anyone who wants to know how particle research is done.
By enabling students to carry out simulated experiments, OnScreen Particle Physics presents modern physics as a human activity, not just a set of facts.
It is accessible to students at various levels, as no advanced mathematics is required.
Washington, DC--Today scientists unveiled the first high-resolution map of the carbon stocks stored on land throughout the entire country of Perú. The new and improved methodology used to make the map marks a sea change for future market-based carbon economies. The new carbon map also reveals Perú's extremely high ecological diversity and it provides the critical input to studies of deforestation and forest degradation for conservation, land use, and enforcement purposes. The technique includes the determination of uncertainty of carbon stores throughout the country, which is essential for decision makers. The mapping project is a joint effort among the Carnegie Airborne Observatory (CAO), led by Carnegie's Greg Asner, the Ministry of Environment of Perú, and Wake Forest University.
Historically two obstacles have slowed accurate carbon inventories at national scales. The first is the inadequate resolution of satellite mapping data and the second is the inaccuracy of on-the-ground surveys. These barriers must be overcome to support policies and markets that depend on timely knowledge of where carbon is stored on land. With its huge range of environments from cold Andean deserts to hot Amazonian rainforests, Perú is an ideal country for advancing high-tech carbon inventories.
Asner remarked: "The international community wants to use a combination of carbon sequestration and emissions reductions to combat climate change. Some 15% of global carbon emissions result from deforestation and forest degradation, which releases carbon dioxide to the atmosphere as trees are destroyed. Our cost-effective approach allows us to accurately map the carbon in this incredibly diverse country for the first time. It opens Perú's door to carbon sequestration agreements and is an enormous boon to conservation and monitoring efforts over vast areas for the long term."
The critical resolution for carbon monitoring is the hectare (2.5 acres). It is the world's most common unit of land tenure and policy enforcement, yet very few countries have advanced their carbon monitoring efforts at such high resolution. The team integrated airborne laser mapping technology using the Carnegie Airborne Observatory with field data, and coupled them with publicly available satellite imagery to scale carbon inventories up to the national level. The CAO sweeps laser light across the vegetation canopy to image it in 3-D, enabling the determination of the location and size of each tree at a resolution of 3.5 feet (1.1 meter). By combining the CAO laser information with satellite maps of forest cover, deforestation, and other environmental variables generated by the Peruvian Environment Ministry's Directorate of Land Management, a cost-effective means to monitor the country into the future has been established.
The new map reveals that the total aboveground carbon stock of the country is currently 6.9 billion metric tons. But the carbon stocks vary by region and land ownership. The average carbon density for Peruvian rainforests is 99 metric tons of carbon per hectare, with the maximum density of 168 metric tons of carbon per hectare. The largest stocks are in the northern Peruvian Amazon and along the Brazil-Perú border. Regions of deforestation, such as Puerto Maldonado, where gold mining has ravaged the area, had low to no carbon storage. The team also assessed 174 protected areas, finding that for every hectare of forest put into protection, an average 95 metric tons of carbon are stored on land, with even more carbon sequestered below the soil surface.
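As a rough plausibility check on these figures (my own back-of-the-envelope arithmetic, not a calculation from the report), dividing the national stock by the average rainforest density gives the order of magnitude of forested area the numbers imply:

```python
# Figures reported in the study
total_stock_tc = 6.9e9     # national aboveground carbon stock, metric tons C
avg_density_tc_ha = 99     # average rainforest density, metric tons C per hectare

# If all of the stock sat at average rainforest density, the implied area is
# an upper-bound estimate, since non-forest land stores less carbon per hectare.
implied_area_ha = total_stock_tc / avg_density_tc_ha
print(round(implied_area_ha / 1e6, 1), "million hectares")
```

That works out to roughly 70 million hectares, on the order of Perú's actual Amazon forest cover, so the headline figures are mutually consistent.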
Miles Silman, report coauthor from Wake Forest University, added: "The Carnegie map is a monumental effort--from field to remote sensing to computation--that honestly lays out the methods, predictions, and their reliability for each hectare in Perú. Now every person in private enterprise and decision makers in regional, local, and national government has an estimate of carbon content for every place in Perú. It should ignite the imaginations of ecologists and earth scientists, and provide a road map for decision makers. The report also adds another exclamation point to the value of protected areas. If you choose carbon as your currency, parks in Amazonian Perú are the banks, and the bigger the area, the closer it gets to being Fort Knox."
Peru's Carbon Quantified: Map http://carnegiescience.
Carnegie and Peruvian researchers quantified the carbon stocks throughout the entire country of Peru, shown here. Red is highest carbon, dark blue lowest. Image courtesy Greg Asner
This research work was supported by an inter-institutional working agreement between the Carnegie Institution, Department of Global Ecology and the Peruvian Ministry of Environment, Directorate of Land Management. The study was funded by the John D. and Catherine T. MacArthur Foundation and the Gordon and Betty Moore Foundation.
The Carnegie Airborne Observatory is made possible by the Avatar Alliance Foundation, John D. and Catherine T. MacArthur Foundation, Grantham Foundation for the Protection of the Environment, Gordon and Betty Moore Foundation, W. M. Keck Foundation, the Margaret A. Cargill Foundation, Mary Anne Nyburg Baker and G. Leonard Baker Jr., and William R. Hearst III.
The Department of Global Ecology was established in 2002 to help build the scientific foundations for a sustainable future. The department is located on the campus of Stanford University, but is an independent research organization funded by the Carnegie Institution. Its scientists conduct basic research on a wide range of large-scale environmental issues, including climate change, ocean acidification, biological invasions, and changes in biodiversity.
The Carnegie Institution for Science has been a pioneering force in basic scientific research since 1902. It is a private, nonprofit organization with six research departments throughout the U.S. Carnegie scientists are leaders in plant biology, developmental biology, astronomy, materials science, global ecology, and Earth and planetary science.
Awendaw was incorporated in 1992, and this building is a gathering place for its citizens, and serves as office space for town officials.
Prior to desegregation, this building served as a school for white children. Through the 1950s, African-American children were not allowed to ride the bus, nor attend the school, so they had smaller schools dispersed throughout what is currently Awendaw.
However, children of all races in Awendaw have enjoyed having the Francis Marion National Forest as their backyard. We interviewed Awendaw’s Town Clerk Sam Brown, who said that hide-and-seek, tag, riding bikes out to Cape Romain, and pick-up games of baseball and basketball were all common things for kids to do in Awendaw.
“We played hard,” the Awendaw native says. The only thing that would slow Sam and his friends down was an occasional cool-off period in the hot mid-summer afternoons.
This building is now used as a Town Hall, and is a place where Awendaw citizens debate important issues affecting their community – such as affordable public services and new development in the area.
Which modern-day countries did the Roman Empire comprise?
The principal modern day nations of the Roman Empire are in bold print. Other countries which only saw some form of Roman occupation, or of whose effective membership of the empire I am unsure of, are listed in normal print.
|Czech Republic||The initial conquest of German territories up to the river Elbe under emperor Augustus may well have included a small part of the Czech Republic. Also the campaigns of emperor Marcus Aurelius most likely conquered a considerable amount of Czech territory, though these gains were abandoned by his son Commodus without ever being recognised as a province.|
|Slovakia||Slovakia was home to several forward positions of the Roman frontier system. Also the campaigns of emperor Marcus Aurelius most likely conquered considerable parts of Slovakia, though they were abandoned by his son Commodus without ever being recognised as a province.|
|Georgia||With the annexation of the ancient kingdom of Armenia by emperor Trajan part, if not all, of modern day Georgia may have become part of the empire.|
|Armenia||With the annexation of the ancient kingdom of Armenia by emperor Trajan all of modern day Armenia will have become part of the empire.|
|Azerbaijan||With the annexation of the ancient kingdom of Armenia by emperor Trajan part, if not all, of modern day Azerbaijan will have become part of the empire.|
|Kuwait||Whether any part of northern Kuwait belonged to the short-lived province of Mesopotamia, created by emperor Trajan, I am unsure, though it may well have done.|
|Saudi Arabia||With emperor Trajan's annexation of part of the kingdom of Nabatea as the Roman province of Arabia Petraea, a small part of the Red Sea coast of Saudi Arabia became part of the Roman empire.|
|Sudan||To what extent the Roman province of Aegyptus extended into Sudan I am unsure. It must, however, have extended some way into this country to border on the kingdom of Nubia.|
|Palestine||Not yet an internationally recognised nation, Palestine is situated on Jordanian and Egyptian territory which is currently occupied by Israel.
It would have been part of the Roman province of Judaea. | <urn:uuid:385f5cc3-6f7e-4e39-a44b-50f178b161cf> | CC-MAIN-2016-26 | http://www.roman-empire.net/maps/empire/extent/rome-modern-day-nations.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395621.98/warc/CC-MAIN-20160624154955-00076-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.893797 | 468 | 3.1875 | 3 |
A 12x20 shed is a very large shed; it is on the border of being a garage. When building a shed the idea is to keep things simple, but a shed as large as a 12x20 starts to complicate the construction process simply by its sheer size. Everything from the foundation to the roof becomes more complicated. The roof trusses for a 12-foot-wide shed need a few extra additions to make sure that they are strong and adequately cover the shed. There are a few construction techniques that can make the trusses for larger sheds stronger. It is easiest to explain these shed-building techniques while describing the various parts of a roof truss.
The Truss Top Chord
Every roof truss has a board at the top that supports the roof decking. This board is called a top chord. The angle of this part of the roof truss is determined by the slope of the roof.
The Truss Bottom Chord
The bottom chord of the truss rests on the walls and spans from wall to wall. The truss top chord typically rests on the bottom chord, and the two chords are attached to each other using a gusset.
A truss gusset is the plate that holds the truss members together at their connections. When building trusses on the job site the gusset is usually made using pieces of ½” O.S.B. cut several inches larger than the truss chords so the joint can be adequately covered and nails can be put through the gusset into the truss chords. There are three key points to remember when attaching gussets.
- Make sure that the gusset material is substantially larger than the joint so that the nails do not split out the truss members.
- Make sure that a gusset is placed on each side of the truss joint, like a truss sandwich.
- Make sure that the nails are far enough from the ends of the boards so they do not split out and weaken the wood.
Truss King Posts
When building trusses for the larger span that a 12x20 shed requires it is important to install a king post in the center of the truss. The king post is attached to the truss using a gusset just like all the other connections. This important structural member is simply a piece of 2x4 lumber that is centered on the truss and is installed directly under the ridge down to the bottom chord. The top is cut with the angles of each side of the truss and the bottom is flat to sit on top of the bottom chord.
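The cut lengths and angles for these members follow from simple right-triangle geometry. The sketch below is a rough illustration only (the 12 ft span and 4/12 roof pitch are hypothetical example values, and it ignores overhangs, board thickness, and ridge details, so it is no substitute for a real plan):

```python
import math

def truss_geometry(span_ft, pitch_rise, pitch_run=12):
    """Estimate member dimensions for a simple symmetric gable truss.

    span_ft    -- wall-to-wall distance the bottom chord covers
    pitch_rise -- roof pitch, expressed as rise per pitch_run of run
    Ignores overhangs, board thickness, and ridge blocking.
    """
    run = span_ft / 2                            # horizontal run of one top chord
    rise = run * pitch_rise / pitch_run          # vertical rise at the ridge
    top_chord = math.hypot(run, rise)            # top chord length, wall to ridge
    angle = math.degrees(math.atan2(rise, run))  # cut angle off horizontal
    return {
        "rise_ft": round(rise, 2),
        "top_chord_ft": round(top_chord, 2),
        "cut_angle_deg": round(angle, 1),
        "king_post_ft": round(rise, 2),          # king post runs bottom chord to ridge
    }

# Hypothetical example: a 12 ft span with a 4/12 roof pitch
print(truss_geometry(12, 4))
```

For a 12 ft span at a 4/12 pitch this gives roughly a 6.32 ft top chord cut at about 18.4 degrees, with a 2 ft king post.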
If each of these truss parts is used correctly, the larger trusses needed for a 12x20 shed plan should have plenty of strength to handle regular roof loads. If your area has extreme weather conditions like heavy snow loads or high winds, you should consult a structural engineer and your local building department to find out if there are other design elements that need to be added to your large shed design.
National Water-Quality Assessment (NAWQA) Program
Model findings show that 22 watersheds are reliably placed in the top 150 category for total phosphorus delivered to the Gulf of Mexico with 75 percent certainty. A total of 573 watersheds for total phosphorus yields are reliably placed outside of the top 150 category with 75 percent certainty. Although probabilistic ranking of watershed nutrient yields can be used as one tool to identify watersheds with the highest nutrient yields delivered to the Gulf of Mexico, this large-scale approach may not address nutrient management needs to protect streams and reservoirs at a local scale. | <urn:uuid:469e1727-2f9e-4197-850e-5850191cc4c9> | CC-MAIN-2016-26 | http://water.usgs.gov/nawqa/sparrow/nutrient_yields/total_p.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397695.90/warc/CC-MAIN-20160624154957-00086-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.89412 | 132 | 2.828125 | 3 |
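The "reliably placed ... with 75 percent certainty" phrasing reflects a probabilistic ranking. As a minimal sketch of that idea, assuming (hypothetically) that model uncertainty is expressed as Monte Carlo samples of delivered yield per watershed — the actual SPARROW procedure may differ:

```python
def rank_certainty(samples_by_watershed, top_n=150):
    """Fraction of Monte Carlo draws in which each watershed's delivered
    yield ranks within the top_n of all watersheds."""
    names = list(samples_by_watershed)
    n_draws = len(next(iter(samples_by_watershed.values())))
    in_top = {name: 0 for name in names}
    for i in range(n_draws):
        order = sorted(names, key=lambda w: samples_by_watershed[w][i], reverse=True)
        for w in order[:top_n]:
            in_top[w] += 1
    return {w: count / n_draws for w, count in in_top.items()}

def classify(probs, certainty=0.75):
    """Split watersheds into reliably-in and reliably-out groups at the
    stated certainty level; everything else stays unclassified."""
    reliably_in = [w for w, p in probs.items() if p >= certainty]
    reliably_out = [w for w, p in probs.items() if p <= 1 - certainty]
    return reliably_in, reliably_out

# Toy data: four draws of delivered yield for five invented watersheds
samples = {"A": [10, 11, 12, 10], "B": [9, 9, 9, 9],
           "C": [1, 2, 1, 2], "D": [9.5, 1, 1, 1], "E": [0, 0, 0, 0]}
probs = rank_certainty(samples, top_n=2)   # A: 1.0, B: 0.75, D: 0.25
print(classify(probs))
```

Watersheds whose in-top probability clears the 0.75 threshold are "reliably placed in" the top group; those at or below 0.25 are "reliably placed outside" it.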
Spain and Portugal.
1835 (undated) 17 x 21 in (43.18 x 53.34 cm)
1 : 2304000
This is a beautiful map of Spain and Portugal from Sidney Hall's extremely scarce 1835 New General Atlas. It covers the entirety of Iberia including the Balearic Islands of Minorca, Majorca, and Ibiza. Both countries are covered in full, with Spain being divided into its various semi-autonomous provinces. The Strait of Gibraltar is also noted, along with Tangier in Africa. Several battlegrounds are identified, including the 1808 Battle of Vimeriro, the 1797 Battle of Cape St. Vincent (here noted as Sir J. Jervis), and of course the 1805 Battle of Trafalgar, among others. Towns, rivers, mountains, railroads, canals, marshes, forests, and various other important topographical details are noted. Elevation throughout is rendered by hachure and political and territorial boundaries are outlined in color.
As this map was issued, Spain was in the midst of the First Carlist War. The death of Ferdinand VII saw his daughter, Isabella II, an infant at the time, proclaimed Queen, with his wife, Maria Cristina, as regent. Infante Carlos, Ferdinand's brother, disputed Isabella's claim to the throne, which eventually resulted in a civil war lasting from 1833 to 1839. In Portugal, meanwhile, this map shortly follows the War of the Two Brothers. In 1826, Peter IV of Portugal abdicated his throne in favor of his seven-year-old daughter, Maria da Gloria, on the condition that she marry her uncle (Peter's brother) Miguel. Miguel instead deposed Maria and proclaimed himself king, leading to the Liberal Wars and eventually to Miguel being forced to abdicate and go into exile. Maria da Gloria was proclaimed queen in 1834 and resumed her reign as Maria II of Portugal.
Sidney Hall's New General Atlas was published from 1830 to 1857, the first edition being the most common, with all subsequent editions appearing only rarely. Most of the maps included in the first edition of this atlas were drawn between 1827 and 1828 and are most likely steel plate engravings, making it among the first cartographic work to employ this technique. Each of the maps in this large and impressive atlas feature elegant engraving and an elaborate keyboard style border. Though this is hardly the first map to employ this type of border, it is possibly the earliest to use it on such a large scale. Both the choice to use steel plate engraving and the addition of the attractive keyboard border are evolutions of anti-forgery efforts. Copper plates, which were commonly used for printing bank notes in the early 19th century, proved largely unsuitable due to their overall fragility and the ease with which they could be duplicated. In 1819 the Bank of England introduced a £20,000 prize for anyone who could devise a means to print unforgeable notes. The American inventors Jacob Perkins and Asa Spencer responded to the call. Perkins discovered a process for economically softening and engraving steel plates while Spencer invented an engraving lathe capable of producing complex patterns repetitively - such as this keyboard border. Though Perkins and Spencer did not win the prize, their steel plate engraving technique was quickly adopted by map publishers in England, who immediately recognized its value. Among early steel plate cartographic productions, this atlas, published in 1830 by Longman Rees, Orme, Brown & Green stands out as perhaps the finest. This map was issued by Sidney Hall and published by Longman Rees, Orme, Brown & Green of Paternoster Row, London, in the 1835 edition of the Sidney Hall New General Atlas.
Sidney Hall (1788 - 1831) was an English engraver and map publisher active in London during the late 18th and early 19th centuries. His earliest imprints, dating to about 1814, suggest a partnership with Michael Thomson, another prominent English map engraver. Hall engraved for most of the prominent London map publishers of his day, including Aaron Arrowsmith, William Faden, William Harwood, and John Thomson, among others. Hall is credited as being one of the earliest adopters of steel plate engraving, a technique that allowed for finer detail and larger print runs due to the exceptional hardness of the medium. Upon his early death - he was only in his 40s - Hall's business was inherited by his wife, Selina Hall, who continued to publish under the imprint, "S. Hall", presumably for continuity. The business eventually passed to Sidney and Selina's nephew Edward Weller, who became extremely prominent in his own right.
Hall, S., A New General Atlas, with the Divisions and Boundaries, 1835.
Very good. Original platemark visible. Minor wear along original centerfold. Some offsetting. Blank on verso.
Rumsey 4224.022 (1830 edition). Philips (Atlases) 758. Ristow, W., American Maps and Mapmakers: Commercial Cartography in the Nineteenth Century, p. 303-09. | <urn:uuid:2b0f1b0b-2e41-437d-aa01-acfaf38d3d48> | CC-MAIN-2016-26 | http://www.geographicus.com/P/AntiqueMap/SpainPortugal-hall-1835 | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397636.15/warc/CC-MAIN-20160624154957-00167-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.96369 | 1,075 | 3.109375 | 3 |
Have you ever heard the rumor that your heart stops or skips a beat when you sneeze? If you’ve ever had a really big “achoo,” then maybe it seemed like this could be true!
The answer is that your heart does not stop or skip a beat when you sneeze – there is no scientific basis for this old belief. (Phew!) Sometimes, the heart’s rate or rhythm will change in relation to a sneeze, but in no way at all does it “stop.”
Most likely, the idea that sneezing affects the heart comes from back before people had the medicine and science to know about what was really happening in the body during a sneeze. When people felt changes in their heart’s rhythm during a sneezing spell, they assumed it meant their heart had stopped! Today, we can sneeze without worry, knowing that our hearts will continue pump-pump-pumping away!! (Gesundheit!) | <urn:uuid:bc95f50c-5271-45fd-9be6-19a9fb73b098> | CC-MAIN-2016-26 | http://whyzz.com/is-it-true-that-your-heart-stops-when-you-sneeze | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.2/warc/CC-MAIN-20160624154951-00113-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.961284 | 213 | 2.515625 | 3 |
Colour naming and classification in a chimpanzee (Pan troglodytes)
Journal of Human Evolution, Volume 14, Issue 3, Pages 283-291, doi: 10.1016/S0047-2484(85)80069-5
A four-year-old female chimpanzee was trained to use symbols to name 11 colours: red, orange, yellow, green, blue, purple, pink, brown, white, grey, and black. The chimpanzee was then required to name various colour chips from the Munsell colour charts. Colour classification by the chimpanzee was similar to that in a human observer tested under the same condition. Both the chimpanzee and the human observer divided the colour space into the clusters of a broad area within which a single colour name was applied consistently. Areas of consistent colour naming were separated by narrower areas in which the names applied to the two adjacent areas were used and the response latencies were long. These results suggest that, not only the perception of colours, but also the use of colour names have characteristics in common between the human and the chimpanzee.
chimpanzee, colour naming, colour classification, cross-cultural and cross-species comparison
How does a chimpanzee see and describe the world? Riesen (1970) summarized evidence that the chimpanzee's visual system is quite similar to that of the normal human. Behavioural work on colour perception in chimpanzees done by Grether (1940) and Essock (1977) among others suggests that the chimpanzee's colour perception is quite similar to the human. Humans have the capability to describe the perceptual world by the conceptual colour "names" based on such a visual system. The present study consists of a set of experiments investigating colour perception and classification in a chimpanzee who had already learned to name 11 colours: red, orange, yellow, green, blue, purple, pink, brown, white, grey, and black.
In human natural languages, colour-naming systems are not arbitrary but are derived from the common physiological basis (Bornstein, 1973). Berlin & Kay (1969) surveyed 98 languages and found striking similarities in the semantic development of colour classification in various human societies. They described seven stages in the evolution of colour classifications from division into two categories, black and white, to the development of fine distinctions among hues. According to Johnson (1977) the sequence matches the order in which children acquire colour names.
The purpose of the study was to determine how a chimpanzee who had learned to name 11 colours would classify various portions of Munsell colour space.

2. General Method
The subject was a four-year-old female chimpanzee, "Ai", born in Africa and received in the laboratory at about one year of age. Prior to this study, she had engaged in the language training program for about two years (Asano et al., 1982). The chimpanzee had extensive experience on "symbolic matching-to-sample" tasks including matching 11 arbitrary symbols with 11 specific training colours (see Figure 1 and Table 1).
A computer-controlled training facility was used in this experiment. For the chimpanzee, colour names were symbols (simple geometric shapes which were white figures on a black background). Figure 1 also shows the corresponding "Kanji" characters used in Japanese. Each symbol was drawn on a key (2 x 2.5 cm) and could appear in various positions in a 5 x 6 key matrix on a console provided for the chimpanzee. This console was interfaced with a PDP11/V03 minicomputer that controlled the experiment and recorded key choice. The console was attached to one wall of the experimental room (190 x 220 x 180 cm).
Figure symbols for the 11 colour names used by the chimpanzee and the corresponding "Kanji" characters used in the comparative study in the human.
The stimuli were all colour chips from the seventh loose-leaf edition (1981) of the standard colour chart conforming to Japanese Industrial Standard (JIS) Z-8721. This colour chart contains 1928 colour chips arranged in the Munsell notation system, in which a colour is specified by its hue designation (a number and one or two letters) and a value/chroma fraction to designate brightness and saturation. For example, the expression 7.5RP6/12 denotes a particular red-purple colour (Munsell hue 7.5RP) with some degree of brightness (Munsell value 6) and saturation (Munsell chroma 12). The experiment was conducted in the room which was illuminated by daylight as well as by a daylight fluorescent bulb (Mitsubishi, FL15-SW/NL) approximating the proper CIE illuminant C for the colour chart.
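Because a chromatic Munsell notation has a fixed shape (hue number, hue letters, then value/chroma), an expression such as 7.5RP6/12 can be pulled apart mechanically. A small illustrative parser follows; the regex and function name are my own sketch, not from the paper, and achromatic N-notation chips such as N5/0 are deliberately out of scope:

```python
import re

# hue step, hue letter code, Munsell value, "/", Munsell chroma
_MUNSELL = re.compile(
    r"^\s*(\d+(?:\.\d+)?)\s*([A-Z]{1,2})\s*(\d+(?:\.\d+)?)\s*/\s*(\d+(?:\.\d+)?)\s*$"
)

def parse_munsell(notation):
    """Split a chromatic Munsell notation such as '7.5RP6/12' into
    (hue_step, hue_letters, value, chroma). Achromatic 'N' notations
    like N5/0 do not match and raise ValueError."""
    m = _MUNSELL.match(notation)
    if m is None:
        raise ValueError(f"not a chromatic Munsell notation: {notation!r}")
    step, letters, value, chroma = m.groups()
    return float(step), letters, float(value), float(chroma)

print(parse_munsell("7.5RP6/12"))  # (7.5, 'RP', 6.0, 12.0)
```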
Colour names which the chimpanzee had acquired and their trained colours designated in Munsell notation system and CIE tristimulus values
The experimenter exposed one colour chip at a time to the chimpanzee sitting about 50 cm away on a bench in front of the console. While exposing a colour chip, the experimenter lighted one of two sets of three rows of keys on the console containing the 11 colour-name keys and four blank keys. Two sets of keys were provided to change the position of the available keys within a session. The position of the keys was also changed from session to session in order to prevent the chimpanzee from using positional cues rather than the symbol in naming the colour. The chimpanzee was required to press a key among 11 alternatives, which produced a feedback facsimile of the figure symbol drawn on the key on a screen of an IEE inline projector located above the console. The chimpanzee then pressed a single blank key to the right of the key matrix to conclude her naming response.
Each session consisted of two kinds of trials. On some trials, the "baseline trials", the 11 colour chips that most nearly matched Ai's training colours were randomly presented. Naming responses to these chips in the baseline trials were differentially reinforced. Correct and incorrect naming responses were followed by different feedback sounds. A piece of apple or a raisin was automatically delivered after two consecutive correct responses. On the other trials, the "probe trials", the to-be-tested colour chips were presented. Naming responses in the probe trials were never differentially reinforced. When Ai made her response, no sounds followed and the next trial began. These probe trials were inserted once every four trials on the average.
The number of trials in a session varied depending on how many test chips were presented within the session. An average session consisted of about 200 trials. The various colour chips including the chips which had never been named previously were exposed successively according to a predetermined random order stored in the computer. The chimpanzee performed the task at the rate of about five to six trials per minute.

3. Experiment 1: Naming Achromatic Colour Chips
Colour classification along the brightness scale of achromatic colours was investigated in the chimpanzee.
Nine achromatic colour chips with Munsell values from 1 to 9 were used. In her previous experience of naming 11 colours, "black" was trained for a specific achromatic colour chip, N1/0 in Munsell notation, "grey" was used for chip N5/0, and "white" for chip N9/0. For the test, six chips with intermediate brightness were introduced. Each of nine chips was presented eight times on probe trials inserted among baseline trials for the 11 colours discrimination. For this experiment, the colour chips were 6.4 x 9.2 cm on all trials.
Figure 2 shows the probability of three kinds of naming responses ("black", "grey", and "white") and the mean response latencies to each of the nine achromatic colour chips. Although all 11 colour keys were available, on all but one probe trial, the nine achromatic colour chips were classified into one of the three achromatic categories. In both extreme ends of the brightness scale, naming was consistent and the mean response latencies were less than one second. The standard deviation (S.D.) of the response latencies was also small, indicating that the naming responses were stable. The chips covering the middle range of brightness (N3/0, N4/0, N5/0, and N6/0) were also named consistently as "grey" with short response latencies and small S.D. Behaviour in the presence of the two chips on the borders between black and grey and between white and grey was markedly different. The chip N2/0 was named as either "black" or "grey". The chip N7/0 was named as either "grey" or "white". The response latencies to these two chips were much longer than those in the other consistently named chips (mean latencies of 1.5 s for N2/0 and 2.5 s for N7/0). The S.D. of the response latencies was also large. The changes in response latencies suggest that the chimpanzee had difficulty in naming these two chips. These results clearly show that the continuum of brightness was divided into three categories.
Colour classification along the brightness scale of achromatic coIours by a chimpanzee. Proportion of the three achromatic colour names, and the mean response latencies with S.D. for the nine achromatic colour chips are shown.
This experiment was designed to investigate the chimpanzee's perception of the various hues forming the so-called colour circle. It was also a first step in the exploration of naming to various parts of the tridimensional colour space.
Forty chromatic colour chips representing 40 hues in the Munsell colour system were used. The 40 chips used were at the maximum level of saturation for a given hue. The brightness level of the chips varied from 4/ to 8/ depending on their hues. All chips were 1.7 cm in width and 1.3 cm in height and were attached to the pages of the colour chart book. The chips were exposed to the subject by using a grey cardboard mask of Munsell notation N7/0, which revealed only one colour chip at a time. Each of the 40 colour chips appeared once in a daily session in a probe trial. The chimpanzee received 10 sessions in total in this phase of the investigation.
Figure 3 shows the probability of each naming response and the mean response latency for each of the 40 colour chips. Among 400 probe trials in total, there was no trial in which the chimpanzee used the three achromatic colour names or "brown". The chimpanzee pressed the key for either red, orange, yellow, green, blue, purple, and pink although all 11 keys were operative. Again, it was found that each name was used categorically. Twenty-eight out of 40 chips (70%) were consistently named, that is, given the same colour name throughout 10 sessions. The other 12 chips (30%) were consistently assigned the names of the two adjacent categories. The response latencies to these border colours were longer than those to the consistently named colour chips.
Colour classification of 40 hues forming the so-called colour circle. The horizontal axis represents the hue designation in Munsell notation system. Proportion of each colour name and the mean response latencies are shown.
Figure 4 shows the probability of consistent naming for 40 hues across 10 sessions. As testing progressed and the same stimuli were presented repeatedly, the probability of consistent naming decreased at first then remained constant. This result indicates that presenting each stimulus three times is necessary and sufficient for testing the consistency of naming for each colour chip.
The probability of consistent naming for 40 colour chips with various hues across 10 test sessions.
The purpose of the experiment was to gather more detailed information about colour naming and classification of both various hues and various brightnesses. A second purpose was to obtain comparative data on colour classification by the chimpanzee and the human under the same condition.
The chimpanzee was required to name 215 colour chips of maximum saturation for a given hue and brightness value. There were 40 hues and seven brightness levels (Munsell values from 3/ to 9/). These chips are contained in the "outer shell" of the Munsell colour solid. In the Munsell colour system, all achromatic colours lie along the central vertical axis of the colour solid and are arranged by increasing brightness. Colours of various hues and brightness but of maximum saturation define the outer shell of the colour solid, that is, these colours are most distant from the achromatic core. The size of the chips was the same as that in Experiment 2. Each of 215 chips appeared once in the predetermined random order in the probe trials during six consecutive sessions. The sessions continued until each of the 215 chips had been shown three times for a total of 645 naming responses. It took 18 sessions, about 40 minutes per session on the average, to complete this experiment.
The human observer, a 26-year-old male graduate student, was tested under the same conditions except that he received a single test session consisting of the probe trials only. For the human subject, Kanji characters representing the colour names were drawn on the keys instead of the figure symbols.
Throughout the experiments, the chimpanzee accurately named the 11 training colour chips during the baseline trials. Accuracy always exceeded 99.5%. Figure 5 shows the chimpanzee's responses to the outer-shell colour chips. Areas where the chips were called by a single colour name all three times are unshaded and are referred to as consistent areas. Areas where a chip was called by more than a single colour name are shaded and are referred to as border areas. In almost all cases, two responses competed on border areas as possible names for a given colour chip, and these were the names for adjacent categories. The chips of Ai's training colours are dotted in Figure 5. It must be noted that these chips do not always lie in the centre of the consistent areas for that hue name. Findings such as this, as well as the latency data, suggest that Ai's colour-naming responses were not the result of simple generalization. The data suggest that, in describing a given colour chip, she used the colour names categorically.
Both Ai and the human observer divided the colour space into eight clusters with a broad area within which a single colour name was applied consistently. The chimpanzee applied a single colour name to 74% of 215 chips; the human subject applied the same name to 79% of the chips. Areas of consistent colour naming were separated by narrower areas in which the names applied to the two adjacent areas were used. There were slight differences between the human and the chimpanzee in the location of these border areas.
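The consistent-area versus border-area distinction reduces to a simple tally over the repeated naming responses per chip. A hedged Python sketch of that tally follows; the chip labels and responses are invented for illustration, not data from the study:

```python
from collections import Counter

def classify_chips(responses):
    """responses maps chip -> list of colour names given across repeated
    presentations. A chip is 'consistent' if a single name was used every
    time; otherwise it falls in a 'border' area between categories."""
    consistent, border = {}, {}
    for chip, names in responses.items():
        counts = Counter(names)
        if len(counts) == 1:
            consistent[chip] = names[0]
        else:
            border[chip] = sorted(counts)  # the competing colour names
    return consistent, border

# Invented responses, three presentations per chip (not data from the study)
responses = {
    "5R5/14":   ["red", "red", "red"],
    "5YR7/12":  ["orange", "orange", "orange"],
    "10RP6/12": ["red", "pink", "red"],  # border between red and pink
}
consistent, border = classify_chips(responses)
print(len(consistent) / len(responses))  # proportion consistently named
```

The same proportion computed over all 215 outer-shell chips would give the 74% (chimpanzee) and 79% (human) figures reported above.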
Colour naming of the chips of maximum saturation for a given hue and brightness value. Hue changes are shown along the horizontal axis and brightness on the vertical axis. Rectangles represent individual colour chips. Solid circles indicate the training colours. (a) Data obtained from the chimpanzee. (b) Data from a human observer under the same conditions.
The chimpanzee named the various portions of the Munsell colour space as consistently as the human observer did under the same conditions. The colour classification data obtained here undoubtedly are a function of the chimpanzee's physiology and her previous experience in the use of colour names. It is difficult to separate the relative importance of these factors in a single subject. Further work is necessary in order to determine how colour naming is affected as the number of colour names is increased, decreased, or restricted. The division of the colour space into areas may also be influenced by the colours used as examples for each name.
Berlin & Kay (1969) found constraints in human colour perception which are reflected in the verbal colour classifications employed in various languages. Native speakers of twenty languages around the world (which included Arabic, Bulgarian, Cantonese, Catalan, Hebrew, Ibibio, Japanese, Thai, Tzeltal, Urdu, and others) were shown arrays of colour chips in the Munsell system. They were asked to point out the chip best representing each of the principal colour names of their language within the hue-brightness array. The results given in Figure 6 show clearly that the colour names employed by twenty languages from all over the world are grouped into mostly discrete clusters.
Cross-cultural data of colour classification in human languages are related to the consistent colour naming areas in the chimpanzee. Each point marks the average position on a Munsell hue-brightness array of a principal colour name in one language, as estimated by native speakers of the language. The colour names of 20 languages, many of which evolved independently of one another, are grouped into mostly discrete clusters. The shaded areas represent the consistent areas in colour naming in the chimpanzee. The other explanations are the same as in Figure 5. (Modified from Berlin & Kay, 1969, by the addition of the data of the chimpanzee.)
The consistent areas in colour naming by the chimpanzee were shaded also in Figure 6. It is obvious that the focal points of the principal colour names used by various languages are almost always included within the consistent naming areas in the chimpanzee. In other words, the colours for which the chimpanzee did not use a single name were not used as the focal points for principal colour names used by humans. These results suggest that there is a common basis of colour classification not only across human cultures but also across primate family lines, Hominidae and Pongidae.
These experiments suggest that chimpanzees have sufficient cognitive abilities to use arbitrary codes as colour names, and that they are capable of describing the perceptual world by using these codes. It is further suggested that the chimpanzee and the human recognize their world in similar ways by categorizing some of the features.
Use of colour names in addition to object and number names is reported elsewhere (Matsuzawa, 1985). The author gratefully acknowledges the helpful discussions and support of Drs Kiyoko Murofushi and Toshio Asano. I also thank Dr Sheila Chase for suggesting improvements to the original manuscript and Mr Junzo Inagaki for taking care of Ai. This research was supported by a grant-in-aid from the Ministry of Education, Science and Culture, Japan (240008, 56710041, 57710047, 58710059).

References
- T. Asano, T. Kojima, T. Matsuzawa, K. Kubota, K. Murofushi (1982) Object and color naming in chimpanzees (Pan troglodytes). Proceedings of the Japan Academy, 58, pp. 118–122.
- B. Berlin, P. Kay (1969) Basic Color Terms: Their Universality and Evolution. University of California Press, Berkeley.
- M.H. Bornstein (1973) Color vision and color naming. Psychological Bulletin, 80, pp. 257–285.
- S.M. Essock (1977) Color perception and color classification. In: D.M. Rumbaugh (Ed.), Language Learning by a Chimpanzee. Academic Press.
- W.F. Grether (1940) Chimpanzee color vision. I. Hue discrimination at three spectral points. Journal of Comparative and Physiological Psychology, 29, pp. 167–177.
- E.G. Johnson (1977) The development of color knowledge in preschool children. Child Development, 48 (1), pp. 308–311.
- T. Matsuzawa (1985) Use of numbers by a chimpanzee. Nature.
- A.H. Riesen (1970) Chimpanzee visual perception. In: G. Bourne (Ed.), The Chimpanzee, Vol. 2. Karger, Basel/New York.
East Leake is a large village and civil parish in the Rushcliffe district of Nottinghamshire, England, although its closest town and postal address is Loughborough across the border in Leicestershire. It has a population of around 7,000. The original village was located on the Sheepwash Brook. Kingston Brook also runs through the village. Near the centre of the village is the historic St. Mary's Church, which dates back to the 11th century. The church has six bells.
One of the earliest mentions of East Leake is in the Domesday Book, where it is recorded as 'Leche'. The name comes from the Anglo-Saxon word for wet land, since the village lies on the Kingston Brook, a tributary of the River Soar.
British Gypsum, a plasterboard manufacturer, has its headquarters in the village. The manufacturing of plasterboard began in this area in about 1880.
Having accomplished little since taking command of the Western Department, with headquarters in St. Louis, Missouri, Major General John C. Fremont formulated a plan to clear Major General Sterling Price's Rebels from the state and then, if possible, carry the war into Arkansas and Louisiana.
Leaving St. Louis on October 7, 1861, Frémont's combined force eventually numbered more than 20,000. His accompanying cavalry and other mounted troops, numbering about 5,000 men, included Major Frank J. White's Prairie Scouts and Frémont's Body Guard under Major Charles Zagonyi.
Major White became ill and turned his command over to Zagonyi. These two units operated in front of Frémont's army to gather intelligence.
As Frémont neared Springfield, the local state guard commander, Colonel Julian Frazier, sent out requests to nearby localities for additional troops.
Frémont camped on the Pomme de Terre River, about 50 miles from Springfield. Zagonyi's column, though, continued on to Springfield, and Frazier's force of 1,000 to 1,500 prepared to meet it.
Frazier set up an ambush along the road that Zagonyi travelled, but the Union force charged the Rebels, sending them fleeing. Zagonyi's men continued into town, hailed Federal sympathizers and released Union prisoners.
Leery of a Confederate counterattack, Zagonyi departed Springfield before night, but Frémont's army returned, in force, a few days later and set up camp in the town.
In mid-November, after Frémont was sacked and replaced by Major General Hunter, the Federals evacuated Springfield and withdrew to Sedalia and Rolla.
Federal troops reoccupied Springfield in early 1862 and it was a Union stronghold from then on.
This engagement at Springfield was the only Union victory in southwestern Missouri in 1861.
Result(s): Union victory
Location: Greene County
Campaign: Operations to Control Missouri (1861)
Date(s): October 25, 1861
Principal Commanders: Major Charles Zagonyi [US]; Colonel Julian Frazier [CS]
Forces Engaged: Prairie Scouts and Frémont's Body Guard [US]; Missouri State Guard troops [CS]
Estimated Casualties: 218 total (US 85; CS 133)
An MRI uses strong magnetic fields, radio waves, and computers to make pictures of the inside of the body. It can make two-dimensional and three-dimensional pictures.
You may have an MRI to diagnose a condition or look for internal injuries. MRIs can look at any body part, from your head to your toes. MRIs can also be used to see if medication or treatment is working for a specific disease like cancer.
MRI of Brain Injury
Copyright © Nucleus Medical Media, Inc.
MRIs can be harmful if you have certain types of metal inside your body.
Make sure your doctor knows of any internal metal before the test.
If you are pregnant or think you may be pregnant, talk to your doctor before the MRI scan about whether an MRI scan is right for you.
A contrast dye may be used to enhance some images. Some people may have a bad reaction to this dye. Talk to your doctor about any allergies you have or if you have liver or kidney problems. Liver and kidney problems may make it difficult for your body to get rid of the contrast.
Follow your doctor's instructions regarding eating and drinking before the test. This will depend on what part of the body is being examined.
At the MRI center:
Your doctor may give you a medication to calm you if you are anxious about the test. If your doctor prescribes a sedative, you will need to arrange for a ride home. Be sure to follow your doctor's instructions on when to take the sedative. It may need to be taken 1-2 hours before the exam.
If a contrast dye is used, a small IV needle is inserted into your hand or arm.
You will lie very still on a sliding table. Depending on your condition, you may have monitors to track your pulse, heart rate, and breathing. The table will slide into a narrow, enclosed cylinder. In some machines, the sides are open, so you can look out into the room.
The technician will leave the room. Through the intercom, the technician will give you directions. You can talk to the technician through this intercom as well. The technician will take the pictures. When the exam is done, you will slide out of the machine. If you have an IV needle, it will be removed.
If you are claustrophobic or unable to lie on a flat table, there are open MRI machines available. They allow you to have the test done without being put in a narrow cylinder. There are also MRI machines that allow a patient to be in a sitting position. This may be important for patients with concerns, like a painful back.
You will be asked to wait at the facility while the images are examined. The technician may need more images.
If you took a sedative, do not drive, operate machinery, or make important decisions until it wears off completely.
If you are breastfeeding and receive a contrast dye, you and your doctor should discuss when you should resume breastfeeding. Information available has not found any ill effects to the baby of a breastfeeding mother who has had contrast dye.
The exam is painless. If you have dye injected, there may be stinging when the IV needle is inserted. You may also feel a slight cooling sensation as the dye is injected.
After the exam, a radiologist will analyze the images and send a report to your doctor. Your doctor will talk to you about the results and any further tests or treatment.
Last reviewed January 2015 by Michael Woods, MD
Please be aware that this information is provided to supplement the care provided by your physician. It is neither intended nor implied to be a substitute for professional medical advice. CALL YOUR HEALTHCARE PROVIDER IMMEDIATELY IF YOU THINK YOU MAY HAVE A MEDICAL EMERGENCY. Always seek the advice of your physician or other qualified health provider prior to starting any new treatment or with any questions you may have regarding a medical condition.
Copyright © 2012 EBSCO Publishing All rights reserved.
Deists believe in Deism, the position that God, who is without beginning or end, created the world and set it in motion, but is not involved in it.
Deists like to say that their religion is natural, not revealed. In other words, they derive their beliefs of morality, God, truth, purpose, etc., not through any direct revelation of God, i.e., the Bible, but through observation of nature and their use of reason. This would negate the idea of the Bible being inspired of God and it would certainly deny the incarnation, death, burial, and resurrection of God in the person of Jesus.
Deists affirm personal responsibility, positive choices, and reject negative attitudes and ideas. Some even claim religions are the reason for the problems in the world. They also say that God is a “universal creative force which is the source of laws and designs found throughout nature.” 1
Across the country, municipalities are engaging citizens in composting as a means of reducing the waste stream. Anywhere from 15 percent to 30 percent of trash being hauled to landfills is organic matter that could be composted and returned to the soil. There's no reason why the piles of scraps we produce in the course of cooking meals can't be recycled to enrich our soil, make more food, and help reduce greenhouse gases. It's a perfect trifecta in which people and the planet win.
So you might think we here in the nation's capital would be on the cutting edge of the composting movement. As head of a local gardening organization in the District of Columbia, I routinely field questions from citizens eager to compost their kitchen scraps. Even non-gardeners are looking for ways they can do the planet a good turn. So where is the city's compost?
We caught up with the District's head of public works, William Howland, at a recent community and garden club meeting where he was speaking on the subject of recycling. We asked the question and learned that the District of Columbia--our nation's capital, now presided over by a young mayor who swears we are going to be a green city--has no municipal composting program and none on the horizon.
What about all those leaves the city collects in the fall--10 tons of leaves? According to Howland, these were routinely trucked off to landfills in years past. Recently, there was a pilot program to compost leaves on city property in the Maryland suburbs. A project to compost leaves collaboratively with the University of the District of Columbia at a facility in Beltsville, MD, is being discussed.
Still, local garden legend tells of a municipal compost pile somewhere near the Capital. No sooner did I report on the local blogs that the District has no compost than a local gardener shouts back that this long-rumored compost pile does in fact exist. It has an address. I am soon in hot pursuit.
And now I can tell the world that the nation's capital does, indeed, possess a pile of what gardeners call "leaf mold," meaning the composted remains of leaves collected in the fall. We're not exactly sure where it comes from. And having finally located it, I can say that there has never been a compost heap more difficult to find or more completely obscured from public view.
This pile is next to a public works vehicle garage and trash dumping site at New Jersey Avenue and K Street SE, a scene of scruffy industrial buildings and dusty lots wedged between an elevated freeway, a busy commuter route and some railway tracks. Since it is not far from an area where development is being spurred by the addition of a new baseball stadium, there are also, oddly, spanking-new apartment buildings rising overhead as well.
I thought I had landed in an outtake from "The French Connection." Before me stood a vast collection of dump trucks, snow plows, salt spreaders and street cleaning vehicles. The lot was jammed with private vehicles as well, yet not a human being in sight. I circled, probed, and circled again looking for this compost. I discovered that to get into the lot, I had to choose one of two ramps leading into and through a rather scary looking brick building lorded over by a tall smoke stack.
Finally I spotted two men working on a water tanker.
"Where's the compost?" I asked.
"There! Over there," they said, pointing to a big, yellow front-end loader off in the distance.
I drove to the spot and, sure enough, there in a far corner of the lot were three different piles of material: sand, mulch and a dark, rich-looking compost. The front-end loader was blocking the path into the area. I had to take my 1997 Toyota Corolla "off road" to get closer.
So here's a picture of what the District of Columbia's compost (or "leaf mold") looks like. Good stuff, if you can get past the bottle caps, pieces of plastic trash bags and other debris that come with it. I returned the following day to fill a trash can and some 5-gallon buckets. It's time to top off the garden containers at my daughter's charter school.
It was a moment of personal triumph: I had finally tracked down our own local, publicly financed compost. And it's free!
But I can't help being nagged by a persistent question: Can't we do better?
BioEd Online's science video library includes content presentations, lesson demonstrations, professional development, and a lecture series featuring experts in biology, genetics, space life sciences and more.
Many videos have related slide sets for use in preparing and conducting your lessons. All are free and can be viewed on computers and mobile devices.
Our streaming video presentations have been produced and peer-reviewed by content experts, are available from multiple sources, and cover many key science topics.
Learn how to conduct individual lessons found on BioEd Online. The videos cover materials needed, ideas for organizing and pacing each lesson, and options for teaching specific concepts.
Watch streaming video presentations on science teaching strategies and professional skills development for teachers.
The engaging presentations linked below cover basic science concepts and cutting-edge research in the fields of genetics, space life science, biology, and more.
Details about Advances in the Sign Language Development of Deaf Children:
The use of sign language has a long history. Indeed, humans' first languages may have been expressed through sign. Sign languages have been found around the world, even in communities without access to formal education. In addition to serving as a primary means of communication for Deaf communities, sign languages have become one of hearing students' most popular choices for second-language study. Sign languages are now accepted as complex and complete languages that are the linguistic equals of spoken languages.

Sign-language research is a relatively young field, having begun fewer than 50 years ago. Since then, interest in the field has blossomed and research has become much more rigorous as demand for empirically verifiable results has increased. In the same way that cross-linguistic research has led to a better understanding of how language affects development, cross-modal research has led to a better understanding of how language is acquired. It has also provided valuable evidence on the cognitive and social development of both deaf and hearing children, excellent theoretical insights into how the human brain acquires and structures sign and spoken languages, and important information on how to promote the development of deaf children.

This volume brings together the leading scholars on the acquisition and development of sign languages to present the latest theory and research on these topics. They address theoretical as well as applied questions and provide cogent summaries of what is known about early gestural development, interactive processes adapted to visual communication, linguistic structures, modality effects, and semantic, syntactic, and pragmatic development in sign. Along with its companion volume, Advances in the Spoken Language Development of Deaf and Hard-of-Hearing Children, this book will provide a deep and broad picture about what is known about deaf children's language development in a variety of situations and contexts.
From this base of information, progress in research and its application will accelerate, and barriers to deaf children's full participation in the world around them will continue to be overcome.
Rent Advances in the Sign Language Development of Deaf Children 1st edition today, or search our site for other textbooks by Brenda Schick. Every textbook comes with a 21-day "Any Reason" guarantee. Published by Oxford University Press.
9-11: Was There an Alternative?
In 9-11, published in November 2001 and arguably the single most influential post-9/11 book, internationally renowned thinker Noam Chomsky bridged the information gap around the World Trade Center attacks, cutting through the tangle of political opportunism, expedient patriotism, and general conformity that choked off American discourse in the months immediately following. Chomsky placed the attacks in context, marshaling his deep and nuanced knowledge of American foreign policy to trace the history of American political aggression—in the Middle East and throughout Latin America as well as in Indonesia, in Afghanistan, in India and Pakistan—at the same time warning against America's increasing reliance on military rhetoric and violence in its response to the attacks, and making the critical point that the mainstream media and public intellectuals were failing to make: any escalation of violence as a response to violence will inevitably lead to further, and bloodier, attacks on innocents in America and around the world.
9-11: Was There an Alternative? includes the entire text of the original book, 9-11, together with a new essay by Chomsky, "Was There an Alternative?" This new edition, published on the tenth anniversary of the attacks, reminds us that today, just as much as ten years ago, information and clarity remain our most valuable resources in the struggle to prevent future violence against the innocent, both at home and abroad.
"9-11 was practically the only counter-narrative out there at a time when questions tended to be drowned out by a chorus, led by the entire United States Congress, of ‘God Bless America.’ . . . it is possible that, if the
"A badly needed corrective to news coverage of the present-day ‘war on terrorism.’" —Norman Solomon, San Francisco Chronicle Review
"Every word of 9-11 is more relevant than ever." —Amnesty International Journal
About Noam Chomsky
NOAM CHOMSKY is known throughout the world for his political and philosophical writings as well as for his groundbreaking linguistics work. He has taught at Massachusetts Institute of Technology since 1955 and remains one of America's most uncompromising voices of dissent.
Genetic analysis of Dolly's DNA has revealed she is an amalgam of two animals and is therefore a "chimera" with two genetic "mothers" rather than a pure clone.
The explanation lies in the way Dolly was created, by transferring the nucleus of an udder cell from one sheep and inserting it into another ewe's egg cell that had its own nucleus removed. Scientists have found that although all of Dolly's genes within the nucleus of her cells comes from the ewe who supplied the udder cell, the second ewe has contributed the genes of the cell cytoplasm, the part of the cell outside the nucleus.
About 0.02 per cent of a mammal's genes - in humans as well as other mammals - exist outside the nucleus in cellular structures called mitochondria, the power-producing parts of the cell which have their own genes.
Analysis of Dolly's mitochondria by Eric Schon, of Columbia University College of Physicians and Surgeons, and Ian Wilmut, of the Roslin Institute near Edinburgh, has confirmed Dolly has two genetic mothers.
The scientists found all of the mitochondria from Dolly and nine other cloned sheep produced by the nuclear-transfer method are derived from the egg cells that received the donated nuclei. In Nature Genetics they say the results are "surprising" because some of the udder cell's cytoplasm and mitochondrial genes must have been transferred into the enucleated egg cell.
U.S. Army Campaigns of the Civil War
CMH Pub 75-87 Paper
2014; 76 pages, maps, illustrations, further readings
GPO S/N: 008-029-00567-1
In The Civil War in the Western Theater, 1862, author Charles R. Bowery Jr. examines the campaigns and battles that occurred during 1862 in the vast region between the Appalachian Mountains in the east and the Mississippi River in the west, and from the Ohio River in the north to the Gulf of Mexico in the south. Notable battles discussed include Mill Springs, Kentucky; Forts Henry and Donelson, Tennessee; Shiloh, Tennessee; Perryville, Kentucky; Corinth and Iuka, Mississippi; and Stones River, Tennessee.
Financial advisers prattle on about the benefits of diversification but not everyone knows exactly what it means, which is a bit of a problem when you’re talking about a fundamental investing principle.
According to Wikipedia, diversification means "reducing non-systemic risk by investing in a variety of assets" or, more colloquially, "don’t put all your eggs in one basket".
The idea is to make sure you’re not overly exposed to the vagaries of luck. If you buy 20 good investments, across a range of different asset classes and geographies, some will have good luck, some bad, but on average, luck will be mostly taken out of the equation.
Textbooks focus on the power of diversification to reduce the risk in a portfolio. But by risk they mean volatility, which is only a risk if you make it one.
Volatility – generally used in the context of the stock market – refers to the amount by which prices move around. A share bought for $10 which, on subsequent days, trades at $12, $9, $13 and $25, is more volatile – and therefore deemed to be more ‘risky’ – than one bought for $10, which slowly drifts down to $9.50 over the same week. Go figure.
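To put a number on that example: volatility is conventionally measured as the standard deviation of period-to-period returns. Here's a minimal sketch - note the column only says the second share slides from $10 to $9.50, so the intermediate prices below are invented for illustration:

```python
# Volatility as the standard deviation of daily returns.
# The 'drifter' price path is an assumed illustration; the column only
# says the share slides from $10 to $9.50 over the week.
from statistics import pstdev

def daily_returns(prices):
    """Percentage change between successive prices."""
    return [(b - a) / a for a, b in zip(prices, prices[1:])]

bumpy = [10, 12, 9, 13, 25]           # the 'risky' share from the column
drifter = [10.0, 9.9, 9.8, 9.6, 9.5]  # assumed slow slide to $9.50

vol_bumpy = pstdev(daily_returns(bumpy))
vol_drifter = pstdev(daily_returns(drifter))

print(f"bumpy share volatility:    {vol_bumpy:.1%}")    # roughly 40%
print(f"drifting share volatility: {vol_drifter:.1%}")  # well under 1%
```

By this measure the share that more than doubled is the 'risky' one, while the share that quietly lost 5% looks 'safe' - which is exactly the "go figure" point.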
Volatility can be risky if you do something racy like take out a $7 margin loan to buy the share because your investment might be sold out from under you when the price drops. But for most of us, permanent loss of capital is the risk we worry about, not what a computer screen shows on a particular day. Diversification won’t magically reduce the risk in your portfolio or improve your returns. If you own shares in Company A and buy shares in Company B, the second purchase doesn’t mean you’ve got less chance of losing your money on Company A, nor will it make it perform any better. But it does mean you’ve no longer got all your eggs in the Company A basket.
A colleague used to say that diversification increases the likelihood you’ll get what’s coming to you. The more good investments you buy, the greater the chance you’ll get a good result. But if you diversify for the sake of it and end up buying poorly, you increase the chance of mediocre results.
Let’s say you have a rush of blood to the head at an auction and pay $1.2m for a house worth $1m. In theory you’ve dusted $200,000 but in practice luck plays a part. The house might have concrete cancer or shoddy foundations, so you sell it for $800,000. Alternatively, the street could become fashionable due to the celebrities up the road and you end up selling for $1.5m.
But if you build a diversified ‘rush of blood’ portfolio of 10 similar properties good luck won’t save you. It’s unlikely every street you buy into is going to become ‘A list’ and some part of your portfolio will probably have problems.
Compared to the single property purchase you’ve narrowed the range of returns you’re likely to experience and your result will start to look more like the (bad) outcome you ‘should’ get.
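That narrowing of outcomes is easy to see in a toy simulation. All the numbers here are invented assumptions (resale luck drawn uniformly from plus or minus $0.5m around a $1m true value), not anything from the article:

```python
# Overpay $1.2m for a $1m house, then let luck move the resale price.
# One purchase can get lucky; the average of ten rarely does.
import random

random.seed(1)  # reproducible illustration

def overpaid_purchase():
    """Profit/loss in $m on a $1.2m purchase of a $1.0m house."""
    resale = 1.0 + random.uniform(-0.5, 0.5)  # luck: +/- $0.5m
    return resale - 1.2

trials = 10_000
single = [overpaid_purchase() for _ in range(trials)]
portfolio = [sum(overpaid_purchase() for _ in range(10)) / 10
             for _ in range(trials)]

print(f"single house : best {max(single):+.2f}m, worst {min(single):+.2f}m")
print(f"10-house avg : best {max(portfolio):+.2f}m, worst {min(portfolio):+.2f}m")
```

The single purchase ranges from roughly a $0.3m win to a $0.7m loss; the ten-house average clusters much more tightly around the $0.2m loss you 'should' get. Diversification narrowed the range - it didn't rescue the return.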
So diversification is about minimising the impact of luck and improving the odds of getting an outcome closer to what you expect. In practice, it means not just sticking to a portfolio focused on Australian bank shares, or a single investment property, but making sensible investments in other areas like small companies and international shares.
In each case though, avoid the temptation to over-pay. Diversification might help you plan and invest with confidence but if you pay too much for the assets into which you’re diversifying, that defeats the purpose of it.
Richard Livingston is the managing director of Intelligent Investor Super Advisor, an online service providing advice on superannuation and investing. This article contains general investment advice only (under AFSL 282288).
Money readers can enjoy a free Super Advisor trial at super.intelligentinvestor.com.au.
HMCS Patriot, around 1922
This photograph emphasizes the lean, predatory lines of HMCS Patriot, one of two destroyers acquired by Canada in 1920.
The destroyers Patriot and Patrician had seen British service in the First World War before being transferred to Canada. Along with the light cruiser HMCS Aurora, they formed the core of the Royal Canadian Navy in the early postwar years. Following Aurora's decommissioning in 1922, Patriot was the only significant Royal Canadian Navy unit on the east coast, and saw extensive use as a training ship for naval reservists.
George Metcalf Archival Collection
Technology is the recipe, the know-how, that producers in different industries use to produce a product or deliver a service. Producers should use the best available technology so that, given the available inputs (resources), a product or service is produced at the greatest possible level and quality. Producing a product or service at the maximum level (given resources) also implies that it is produced at the lowest possible cost (cost per unit).
The technology of production refers to the mixture, or factor proportions, of the inputs used in production, and the ways (or techniques) by which the inputs are combined in order to maximize output. For services (as well as for products), the main inputs in production are: labor, capital, land, and intermediate inputs. In practice, there are various types of these main inputs. For example, the capital input includes various types of equipment and structures. The intermediate inputs include purchased materials, services, and energy inputs such as petroleum and electricity.
At a point in time, a firm, industry, or economy can maximize its output of a service (or product) by meeting two conditions: 1) full utilization of available resources (labor, capital, land, and intermediate inputs); and 2) use of the best technology that is available for the delivery of a service (or the production of a product). In the case of truck transportation, full utilization of resources means that trucks are full with freight at all times, on originating and return trips. It also means that trucks use roads that minimize any loss of time due to road congestion, construction, or accidents. Full utilization of trucks also implies minimizing the out-of-service time of trucks due to maintenance problems. With respect to the second requirement, the use of the best available technology includes the utilization of capital goods (e.g., equipment, machines) that incorporate the latest technological advances. This would lead to the highest possible level of output and, consequently, productivity. Capital goods can include equipment such as computers and software.
When, and if, the level of maximum output is attained, it can only increase further with additional increases in resources (labor, capital, land, and intermediate inputs) and improvements in technology. Either of these two factors requires the passage of time. Over time, labor can increase through population growth, which can lead to a larger labor force in the economy. Moreover, man-made capital, such as machines and structures, requires time to be created. In addition, improvements to the technology used in production can entail improvements in the quality of the inputs or the discovery of new ways of combining the inputs used in the production process. Improvements such as these are typically the result of research and development activity, which requires time as well as expenditures. That activity may take place outside the industry that is eventually affected. For example, improvements in computers and software can take place in the computer industry; subsequently, these improved capital inputs can be used in truck transportation and lead to production increases.
The above effects can be illustrated with a production possibilities frontier, shown in Figure 2. The discussion uses an economy for illustrative purposes; one could substitute an industry or a firm, and the outcomes shown would still apply. Let us assume that an economy uses its resources to produce two outputs: bread and shirts (i.e., food and clothing). The potential levels of these two outputs are shown on the two axes of the diagram.
In that case, the production relationship can be stated as:
Output = f(Labor, Capital, Land, Intermediate Inputs; Technology). The meaning of this relationship is that the level of output depends on the amounts of the inputs used in production (i.e., labor, capital, land, and intermediate inputs) and on the technology used. Technology is set apart from the physical inputs: it is not a physical input like labor and capital, but it influences the productivity of the physical inputs. Its effects are generally incorporated in MFP, the residual.
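The growth-accounting logic behind treating MFP as a residual can be sketched numerically. The figures below are hypothetical, chosen only to illustrate the calculation; they are not taken from this report.

```python
# Growth-accounting sketch: MFP growth is the residual -- output growth not
# explained by the share-weighted growth of the physical inputs.
# All numbers below are hypothetical and purely illustrative.

def mfp_growth(output_growth, input_growths, cost_shares):
    """Return MFP growth = output growth - sum(share_i * input_growth_i)."""
    assert abs(sum(cost_shares) - 1.0) < 1e-9, "cost shares must sum to 1"
    weighted = sum(s * g for s, g in zip(cost_shares, input_growths))
    return output_growth - weighted

# Output grows 3.0%; labor, capital, and intermediate inputs grow 1%, 2%,
# and 3%, with cost shares of 0.3, 0.2, and 0.5 respectively.
residual = mfp_growth(0.03, [0.01, 0.02, 0.03], [0.3, 0.2, 0.5])
print(round(residual, 4))  # 0.008, i.e., MFP growth of 0.8% per year
```

Under this accounting, any technology improvement that raises output without a matching rise in measured inputs shows up in the residual.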
The production possibilities frontier (Figure 2) represents the various combinations of the two outputs that would result in the maximum level of (total) output. Point B on the curve shows the maximum output, which is possible when the economy is using its resources fully and utilizing the best available technology. Point A, which is below B, indicates an output level lower than the maximum. That level would be attained if the economy's resources were not fully utilized, as would be the case if there were unemployed labor or capital due to an economic recession. That point would also be reached if firms in the economy did not use the best available technology and thus did not maximize their output, or if there were monopolies in the economy that restricted output in maximizing their profits.
In order for the economy to move to a higher production possibilities frontier, i.e., to a higher level of output indicated by point C, time would need to pass. During this passage of time, the resources of the economy could increase. This would include population growth and hence growth in labor. Over time, there could also be an increase in capital (buildings and equipment) and land. Technology could also improve over time through the discovery of new ways of producing raw materials, intermediate inputs, or final products/services.
The above discussion can be applied to the trucking industry. In that case, the trucking industry could be thought of as producing two types of output, e.g., the delivery of bread and the delivery of shirts. The analysis would follow the same lines as for the economy. The main point is that for the trucking industry to deliver the greatest level of transportation services, it needs to: 1) employ fully the needed inputs, and 2) use the best available technology. Also, for the level of output of the trucking industry to increase over time, it would need to employ more resources and/or improved technology (including a more efficient industry structure).
A number of factors can affect changes in multifactor productivity at the industry level. In the case of truck transportation, there were increases and decreases in MFP over the period of analysis, and these changes can be divided into three subperiods for assessment: 1) the subperiod of 1987-1995, during which truck MFP increased by an average annual rate of 2.0%; 2) the subperiod of 1995-2001, during which truck MFP declined at an average annual rate of 0.8%; and 3) the most recent subperiod of 2001-2003, during which truck MFP increased at an annual rate of 1.1%. Thus, the analysis has the challenging task of evaluating the factors that resulted in such a changing pattern of truck MFP.
The factors which have affected changes in truck MFP, in a positive or negative manner, include: 1) Improvements in the quality of capital: computers, software, trucks (information technologies); 2) The efficiency of utilizing intermediate inputs; this includes the fuel efficiency/inefficiency of trucks; 3) Average length of haul; 4) Containerization; and 5) Changes in the structure of the industry, particularly following truck deregulation at the interstate and intrastate levels. The text below examines the effect of these factors over the period of analysis.12
There were improvements, over time, in the quality of capital used in truck transportation. Capital includes buildings, equipment (such as trucks and computers), and software. In truck transportation, there were increases in the capital input over time; and newer capital is typically more efficient than older capital, as it incorporates improvements in technology (embodied technical progress).
Table 7 presents data on two measurements of the capital input in truck transportation: capital and capital per worker. These factors are assessed for the entire period of analysis (1987-2003) and for the three subperiods: 1987-1995, 1995-2001, and 2001-2003. According to these calculations, the capital input increased by 43.4% over the entire 1987-2003 period (column 2). This translates into an average annual growth rate of 2.3%. With respect to subperiods, over 1987-1995, capital used in truck transportation increased by 20.6%, or an annual rate of 2.4%. Over the following 1995-2001 subperiod, capital increased by a higher annual rate of 3.7%. Then, during the most recent 2001-2003 subperiod, capital decreased by 2.2% per annum.
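The average annual growth rates cited throughout this section can be recovered from the total percentage changes by compounding. A minimal sketch, using the Table 7 figures quoted above:

```python
# Converting a total percentage change over a multi-year period into the
# compound ("average annual") growth rate used throughout this report.

def annual_growth_rate(total_change_pct, years):
    """Compound annual rate (in %) implied by a total % change over `years`."""
    return ((1 + total_change_pct / 100) ** (1 / years) - 1) * 100

# Capital rose 43.4% over 1987-2003, i.e., 16 years:
print(round(annual_growth_rate(43.4, 16), 1))  # 2.3 -- matches the report
# Capital per worker rose 20.4% over the same 16 years:
print(round(annual_growth_rate(20.4, 16), 1))  # 1.2
```

The same formula applies to the subperiod rates and to the index-number growth rates discussed later in the section.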
With regard to capital per worker, this ratio increased by 20.4% over the period of analysis (column 3), or at an average annual rate of 1.2%. With respect to subperiods, capital per worker increased by 1.7% during the 1987-1995 subperiod; this growth rate declined to 0.9% over the next 1995-2001 period. During the most recent 2001-2003 subperiod, capital per worker did not grow.
The increase in capital per worker during 1987-1995 would have contributed to increases in truck MFP through the technological advances incorporated in new capital goods. As new investment takes place in an industry, capital of more recent vintage incorporates newer and more efficient technology than capital of an older vintage. These technological advances typically contribute positively to multifactor productivity.
During 1995-2001, there was an increase in capital per worker while truck MFP decreased during this period. The decrease can be attributed to the impact of other factors, which are discussed at a later point and are listed in Table 14. In the last subperiod, 2001-2003, capital per worker did not increase while truck MFP increased. In this case, MFP increases were affected by other factors besides technological advances incorporated in capital.
In order to assess more closely the possible sources of technological advances through changes in the capital stock, more detailed types of capital assets are examined. A channel through which technological advances can affect the productivity of truck transportation is information technologies: the use of computers and computer software that results in improved delivery of freight. Later text in this section describes the various types of information technology used in trucking over the period of analysis. Consequently, in carrying out the assessment, data were obtained for these two variables in truck transportation, as well as data on capital stock in the form of trucks. These data are presented in Table 8; they are in the form of quantity indexes.
These data show very rapid increases in the stock of both computers and software used in truck transportation over the period of analysis. Over time, computers grew more than software. For computers and peripheral equipment, the index increased from 100 in 1987 to 76,023 in 2003. For software, the index also increased significantly, from 100 in 1987 to 44,232 in 2003. In terms of growth rates, the stock of computers grew at an annual rate of 51.4% over the period of analysis. It increased at a higher annual rate of 82.5% during the first subperiod of 1987-1995. This rate declined to a still impressive 30.5% per annum during the second subperiod of 1995-2001. During the most recent 2001-2003 subperiod, computers grew at a slower rate of 11.8% per annum.
With regard to software, its stock increased steadily and significantly over time, up to 2000; it subsequently declined, but was still maintained at high levels. The pattern for the software stock over time is similar to that of computers. During the first subperiod of 1987-1995, the software stock increased at an annual rate of 93.6%. This was even higher than the rate of increase for computers. However, during the second subperiod, 1995-2001, software grew at a substantially slower, although still impressive, rate of 15.3%. This rate declined further to 2.2% during 2001-2003.
A very different picture is obtained for the stock of trucks. Light trucks (column 3) increased much more slowly over the period of analysis than computers or software; they increased by 17.8% over the entire 1987-2003 period. In fact, during the first subperiod (1987-1995), they experienced a decrease, from 100.0 to 91.5, or about -8.5%. In terms of growth rates, light trucks increased at an annual rate of 1.0% over 1987-2003. During the first subperiod (1987-1995), they actually declined by 1.1% per annum; during the next subperiod, 1995-2001, they increased at 5.5% per year, while during the most recent 2001-2003 subperiod, they decreased at 3.5% per year.
The capital stock of "Other trucks, buses, and truck trailers" experienced a decline over the entire 1987-2003 period (from 100.0 to 82.7). In terms of growth rates, during the entire period of analysis, this capital stock decreased at an annual rate of 1.2%. During the first subperiod (1987-1995), it increased by 0.4% per annum. However, this changed to an annual decline of 1.5% during the 1995-2001 subperiod; the decline continued at the higher rate of 6.3% during the most recent 2001-2003 subperiod.
In summary, these data indicate the rapid growth, over the period of analysis, of the two IT-related capital assets: computers and software. By contrast, the capital stock of trucks either increased very little or declined over the same period. Consequently, changes in technology in computers and software would have been instrumental in affecting increases in truck MFP during 1987-1995. Increases in computers during the most recent 2001-2003 subperiod are also consistent with an increasing truck MFP during that subperiod. Since computers and software were increasing during 1995-2001 while MFP declined, it would appear that other factors contributed to the decreases in truck MFP during the 1995-2001 subperiod. Such factors are examined in other parts of this study.
Technological advances used in truck transportation include information technologies. These technologies include the use of computers and software as well as various channels of communication such as satellite communications and the internet. These technologies have affected all aspects of truck transportation services, including the operation of the truck, the selection of routes, truck maintenance, and the marketing of truck services. These technologies can be used by themselves or in combination with other IT technologies; the latter framework seems more typical.
The various information technologies that affected motor carrier operations include the following: a) On-board computers (OBC); b) Electronic data interchange (EDI); c) Automatic vehicle location (AVL); d) Satellite communications (SATCOM); e) Computer-aided dispatching (CAD) and computer-aided routing (CAR); f) Truck maintenance; and g) Transactions of truck services (marketing, operations). These technologies and their impact on trucking productivity are discussed below.
On-board computers are truck-based or handheld computers used to obtain information on truck performance. These computers collect and process data received from sensors and other devices located on trucks. They keep records of readings and provide the fleet operator with performance information on the trucks and drivers. OBCs can be used as trip recorders and to monitor drivers' hours of service and vehicle performance measures, such as speed and fuel consumption. OBCs are also used in conjunction with computer-aided routing and dispatching systems and with maintenance-scheduling software. On-board computers are also involved in the Automatic Vehicle Location system, described below.
On-board computers can contribute to increased productivity in the following ways:
Business Transactions. The computer on the truck registers delivery times of freight and customer signatures for proof-of-delivery. This has reduced paperwork and thus labor time to do such paperwork.
Driver Log. With an OBC, drivers can input records of hours of service and fuel consumption. Such data make possible an assessment of fuel utilization, leading to truck speeds that minimize the use of fuel. Increased efficiency in the use of fuel, an intermediate input, would increase MFP. A reduction of total intermediate inputs, in relation to output, is not observed in trucking over the period of analysis, except for the last 2 to 3 years. It will be shown that fuel efficiency in trucking decreased over the period of analysis.
Data Collection on Vehicle Performance. On-board computers provide information on various aspects of truck performance. These include: engine idling, braking, and patterns of shifting and acceleration. The computer also provides data, from diagnostic systems, on ancillary equipment on the truck, such as refrigeration units. Consequently, OBCs allow for remote diagnostics prior to a malfunction of the truck; this can be followed by preventive maintenance. Prompt preventive maintenance and repair improve the performance of trucks and reduce their out-of-service time. This results in higher levels of output and MFP.
Electronic Data Interchange (EDI)
Electronic Data Interchange (EDI) systems include computers and software that are used to send and receive electronic messages and data transmission between computers of two parties. The transmission can occur between trucking companies and shippers (or between any two trading partners). This technology enables the transmission of information, including electronic transactions, between companies in an easier, more accurate, and timely manner. EDI allows for efficient billing and receipt of freight-delivery acknowledgement.
The use of computers for financial transactions reduces paperwork and related labor costs, and thus reduces the costs of business transactions. This increases multifactor productivity.
Automatic Vehicle Location (AVL) and Satellite Communications (SATCOM)
Automatic vehicle location (AVL) refers to a broad category of ground-based or satellite technologies with which it is possible to track the location of trucks. Dispatchers, drivers, shippers, and receivers can track a truck from pickup to delivery of freight; coordinate intermodal shipments; and perform just-in-time deliveries. In addition to vehicle tracking, SATCOM technologies provide communication between the dispatchers and the truck drivers; this allows for real-time coordination of fleet routing and dispatching activities. With an on-board computer, two-way text or voice communications allow for routing and dispatching of trucks in real time, as well as real-time monitoring of vehicle operating parameters such as speed. With this system, the motor carrier can also locate a truck in case of a breakdown. This results in less out-of-service time, and thus higher levels of output (freight delivered) and MFP.
Computer-Aided Routing (CAR) and Dispatching (CAD)
These technologies involve computer hardware and specific software that are used for dispatching, routing, and decision support for route selection of trucks. Good route selection can contribute to minimizing the time and cost of moving freight. These systems are used to schedule drivers and trucks subject to parameters, such as allowable driving hours, size of load, and origin and destination. The basic systems allow for the planning and scheduling of truck activities prior to the dispatch of a truck. In addition, more sophisticated systems allow for routing and dispatch decisions based on real-time truck locations; estimate delivery times and distances; help improve cost estimates; and generate route maps.
The technologies of computer-aided dispatching (CAD) and computer-aided routing (CAR) lead to improved fleet routing and dispatching. This results in an increased utilization of trucks. This includes a reduction in the number, and extent, of empty trucks, particularly on back hauls. This increases trucking output and, consequently, raises truck productivity and MFP.
Computer-aided dispatching and routing provide for improved dispatcher productivity. This technology results in less time needed for truck carriers' staff to complete routing procedures as compared to previous manual systems. These technologies also improve communication efficiency. With a computerized system, information can be relayed to drivers instantaneously. Consequently, information on a pick-up of freight can be transacted by the truck carrier and relayed quickly to an appropriate truck driver who is close to the freight. This results in increased output (load) for that particular truck, and greater output for the trucking firm and for the trucking industry.
Technological advances have positively affected truck maintenance through the increased use of maintenance-tracking software (MTS). This software improves the maintenance of trucks by tracking and reordering parts for the repair department of a truck fleet. It also carries out real-time diagnostics of trucks. As information becomes promptly available on the performance of trucks, maintenance-tracking software is used to schedule preventive and emergency repairs, as needed, in the most cost-effective manner. Preventive maintenance reduces maintenance costs, as potential problems are repaired before they become bigger and more expensive jobs. It also reduces the out-of-service time of trucks.
Marketing of Truck Services
There has been an increase in the use of computerized systems for the buying and selling of truck transportation services. These include hardware, software, the internet, and satellite communications. The result is higher delivered freight output for the quantity of labor and capital used; this increases production efficiency/MFP.
In summary, information technology contributed to productivity in truck transportation in a number of ways:
On the operations side, computers have been used for communications between the truck carrier and the truck drivers. These communications helped carriers increase vehicle utilization through increased monitoring and reducing unnecessary out-of-route miles by drivers. Information availability on road work or the closing of roads (as a result of accidents) enables the driver to avoid the affected roads and choose other routes. These computers have also been used to schedule trips by trucks, including which freight to deliver and which roads to take. Information technologies would also contribute to lower fuel costs through improved routing. Improved routing would entail the choice of the quickest (and lower cost) route between two points.
On the maintenance side, computers have been used to schedule regular maintenance checks for trucks. Computers have also been used to check for problems developing in trucks. This can prevent a breakdown of a truck on the road with the accompanying negative effects of the truck being out-of-service.
On the administrative side, the use of computers would include personnel transactions and records. Personnel information would relate to the keeping of records for full-time truck drivers and those on a contractual basis. Since the trucking industry has had substantial turnover of drivers, the keeping of correct and updated personnel records would be of particular importance. On the sales side, computers have been used to obtain receipts when the freight is delivered. This entailed electronic transactions and the electronic dissemination of such information. Administrative costs fell as new technologies were adopted that involved paperless transactions.
Finally, it is noted that the data on computers and software would not include information technology equipment utilized on the trucks themselves. The latter would be part of the truck and they would have been included in the measurement of the capital stock for trucks.
An industry's MFP can also be affected by its efficiency in utilizing intermediate inputs. In examining this point, the ratio of intermediate inputs to gross output of the trucking industry was calculated, and the results are shown in Table 9. These results indicate that, in terms of current prices, intermediate inputs accounted for about 50% of gross output over the period of analysis (column 3). Moreover, over time, there was an increase in the ratio of intermediate inputs to gross output. Intermediate inputs were 47% of gross industry output in 1987; in subsequent years, the ratio increased and reached a high of 56% in 2000. However, from 2001 to 2003, the ratio declined (from 55% to 51%).
Since the ratio in current prices could be affected by increases in the relative price of intermediate inputs (including fuel), tabulations were also carried out in quantity terms. These tabulations are in terms of growth rates and are shown in Table 9, particularly columns 7 and 8. They support the results of the calculations in current prices.
The growth rates in quantity terms indicate that, over the period of analysis, the quantity of intermediate inputs increased faster than the output of the trucking industry. This is also observed for the two subperiods of 1987-1995 and 1995-2001. However, this trend was reversed in 2001, and over the most recent 2001-2003 period, both the quantity of output and the quantity of intermediate inputs decreased. During that period, intermediate inputs decreased at a substantially faster annual rate (-7.3%) than output (-3.9%).
Thus, these numbers, in current dollars and in quantity terms, indicate that there was a decline in the efficiency with which intermediate inputs were utilized in trucking, over 1987-1995 and 1995-2001. However, there was an increase in the efficiency of utilizing intermediate inputs over the most recent period 2001-2003. The decrease in the efficiency of utilizing intermediate inputs, during 1995-2001, was a contributory factor to the declining truck MFP during that period. Also, the efficiency of utilizing intermediate inputs in truck transportation was increasing over the last three years of the period of analysis. This would have contributed to the increasing MFP during those years.
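The comparison underlying this efficiency assessment can be sketched as follows; it simply contrasts the growth rates of output and intermediate inputs, using the 2001-2003 figures quoted above.

```python
# Intermediate-input efficiency improves when output grows faster (or shrinks
# more slowly) than the intermediate inputs used to produce it.
# Annual growth rates (in %) are those quoted from Table 9.

def efficiency_change(output_growth_pct, inputs_growth_pct):
    """Positive => intermediate-input efficiency improved."""
    return output_growth_pct - inputs_growth_pct

# 2001-2003: output fell 3.9% per year, intermediate inputs fell 7.3% per year.
print(round(efficiency_change(-3.9, -7.3), 1))  # 3.4 -> efficiency improved
```

By the same comparison, the earlier subperiods, in which intermediate inputs grew faster than output, register a negative value, i.e., declining efficiency.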
In attempting to explain the decrease in efficiency of utilizing intermediate inputs over most years of the period of analysis, one notes that a major intermediate input in truck transportation is fuel. Therefore, an examination is carried out of fuel efficiency in trucking.
One would expect that improvements in the capital input of truck transportation would include the use of newer trucks that incorporate in them the results of new technologies. These new technologies would include truck engines that are more fuel-efficient than older engines. Improvements in fuel efficiency are expected to result in reduced use of fuels and consequently of intermediate inputs. This would contribute to increased efficiency of the industry in using intermediate inputs, which would have contributed positively to truck MFP.
In evaluating such a possibility, data on fuel efficiency are presented in Table 10 and Table 11. Data in Table 10 are for heavy single-unit trucks; they indicate that there was a rather steady increase in the fuel efficiency of these trucks over the 1987-2002 period. Their fuel efficiency increased over time from 6.4 miles per gallon (mpg) in 1987 to 7.5 in 2001; it declined slightly to 7.4 mpg in 2002. Calculations with growth rates (in the same table) show a similar development. Fuel efficiency of these trucks increased at an annual rate of 0.8% during 1987-1995; it increased rather substantially, at 1.6% per annum, during the 1995-2001 subperiod.
Table 11 presents data on the fuel efficiency of combination trucks. These trucks pull one or more trailers. Consequently, they carry greater and heavier freight than single-unit trucks. The data presented indicate, for one, that these trucks had lower fuel efficiency than single-unit trucks. In 1987, combination trucks obtained 5.7 miles per gallon, compared to 6.4 miles per gallon for single-unit trucks. Moreover, the fuel efficiency of combination trucks decreased over the period of analysis, from 5.7 mpg in 1987 to 5.2 mpg in 2002. That implies a decline of 0.6% per year. Consequently, in 2002, these trucks were even less fuel-efficient (at 5.2 mpg) than in 1987 (5.7 mpg); they were also considerably less fuel-efficient than the single-unit trucks, which obtained 7.4 mpg in 2002.
With respect to subperiods, the fuel efficiency of combination trucks increased by 0.2% annually, during 1987-1995. However, during the subsequent subperiod of 1995-2001, their fuel efficiency declined significantly at an annual rate of -1.2%. This decline in fuel efficiency would have contributed to the decline in the efficiency of utilizing intermediate inputs during 1995-2001 (shown in Table 9); it would also have been a contributory factor in the declining truck MFP during 1995-2001.
Moreover, the number of miles traveled by the less fuel-efficient combination trucks has been greater than the number traveled by single-unit trucks, by rather substantial magnitudes (column 2 of Table 10 and Table 11). Consequently, the fuel efficiency of the truck transportation industry, in total, declined over the period of analysis, and particularly over the last several years of the period. A declining fuel efficiency is consistent with, and contributes to, the decrease in the industry's efficiency in the utilization of intermediate inputs observed previously. This, in turn, is consistent with the declining MFP observed over the 1995-2001 subperiod.
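The effect of mileage weighting on fleet-wide fuel efficiency can be illustrated as follows. Fleet mpg is total miles divided by total gallons, so classes that travel more miles at lower mpg pull the average down. The mileage figures below are hypothetical weights; only the 2002 mpg values come from the tables discussed above.

```python
# Fleet-wide fuel efficiency is total miles over total gallons -- a
# miles-weighted harmonic mean of the per-class mpg figures.
# The mileage weights below are hypothetical, not from Tables 10-11.

def fleet_mpg(classes):
    """classes: list of (annual miles, miles-per-gallon) per truck class."""
    total_miles = sum(miles for miles, _ in classes)
    total_gallons = sum(miles / mpg for miles, mpg in classes)
    return total_miles / total_gallons

# Single-unit trucks: 70 billion miles at 7.4 mpg (2002 figure);
# combination trucks: 130 billion miles at 5.2 mpg (2002 figure).
print(round(fleet_mpg([(70e9, 7.4), (130e9, 5.2)]), 2))  # about 5.8
```

Because combination trucks account for the larger share of miles, the fleet figure sits much closer to their 5.2 mpg than to the single-unit 7.4 mpg.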
Changes in the average length of haul (ALOH) can affect multifactor productivity in trucking. An increase in the average length of haul, reflecting longer truck trips (distance from origin to destination), can contribute to better fuel efficiency and an improved utilization of other intermediate inputs such as engine oil. This would positively affect the efficiency of utilizing intermediate inputs which, in turn, affects MFP.
It has already been observed that truck transportation experienced a decline in the efficiency of utilizing intermediate inputs, with the exception of the more recent 2001-2003 period. An objective of analyzing the average length of haul will be to assess whether this factor contributed to the decline in fuel efficiency of the industry or whether it served as an offsetting factor to that decline.
Table 12 presents data on the average length of haul (ALOH) of trucks. These numbers indicate that the average length of haul increased over the 1985 to 2001 period. This increase took place steadily over time, so that while in 1985 the ALOH was 589 kilometers, by 1995 it had risen to 669 kilometers. By 2001, it had increased still further, to 781 kilometers. In terms of rates of increase, the average length of haul increased faster during the more recent 1995-2001 period (2.6% per year) as compared to the 1985-1995 period (1.3% per year).
The data indicate that increases in the average length of haul would have contributed positively to the overall efficiency/MFP of the trucking industry. With regard to subperiods, the increase in the ALOH over 1985-1995 would have contributed to the increase in truck MFP during that time. During the second subperiod, 1995-2001, the ALOH of trucks is shown to have increased while truck MFP declined. During this time, the ALOH acted to offset the negative impact of other factors on the declining truck MFP.
Containerization refers to the movement of commodities in (large) containers rather than in smaller units. The use of containers in transportation includes rail-truck and truck-water transport, and has become more widespread over time. Within the continental United States, containers are used to transport cargo by truck from a point of origin to a particular destination. They are also used in the intermodal market, which includes the transportation of freight by truck to, and from, a train or a ship. Intermodal firms link different forms of transportation for ultimate delivery to the customer. Containers have become an integral component of intermodal transportation, which has been expanding over time.
Containers are part of the capital input of the truck transportation industry. They represent a technological improvement over previous ways of transporting freight (use of smaller boxes, etc.) and are thus an improvement in the quality of the capital input. The technological advances are incorporated into the capital input. Thus, the impact of the use of containers would be measured in the MFP of the industry.
The use of containers resulted in an increased use of automation in the loading and unloading of trucks. Because commodities are in containers, cargo is moved by crane or forklift; this procedure requires less manual labor than the handling of smaller packages. Consequently, the utilization of this mode of handling freight reduces the time required to transfer cargo; this increases productivity and reduces handling costs. The use of containers also tends to reduce the cost of damage or theft of freight. The benefits of using containers include: reduced employee injuries; reduced damage to the truck; and improvement in loading efficiencies. Thus, containerization contributes to increased productivity/MFP.
In order to examine the impact of this factor, data on containers were collected and tabulated. It is difficult to find a central source of such data with a comprehensive data base, for the years that cover the period of analysis. Consequently, the basic tabulation uses data on containers from the railroads, and these data are supplemented with data from other sources.
Data on containers are presented in Table 13, for the period 1990-2004. They refer to containers used in truck-rail intermodal transportation. These data indicate that the number of containers used increased by 7.8% per annum over the 1990-2003 period. Moreover, the first subperiod, 1990-1995, has the highest annual growth rate, at 10.0%. This subperiod is similar to the initial subperiod for truck MFP (1987-1995). The following subperiod, 1995-2001, has a substantially lower growth rate for containers, at 6.1% per annum. The most recent subperiod, 2001-2003, has a growth rate that is higher than that of the previous subperiod, at 7.6%.
The rates of increase in the number of containers used correspond well to the changes in truck MFP. Truck MFP was increasing during 1987-1995, while the number of containers used increased at the highest rate (over 1990-2003) during 1990-1995. Truck MFP decreased during the following subperiod, 1995-2001, and the number of containers used increased at the lowest rate during that subperiod. Finally, truck MFP was increasing again during 2001-2003, and the use of containers was also increasing during that subperiod.
Additional data on containers are presented in two appendix tables. These data are consistent with, and reinforce, the findings based on the data in Table 13. First, Appendix J presents data on containers used in the waterborne trade of the U.S. That is another segment of the container market and relates to truck-ship (or ship-truck) transportation. Although these data cover fewer years than the rail data, they show similar trends. They indicate that the increase in shipping containers during the most recent subperiod, 2001-2003, was greater than during the previous subperiod of 1998-2001. One notes that truck MFP increased during 2001-2003, while it decreased during 1995-2001.
Finally, another set of data is presented in Appendix K. These data refer to containers used in trucks that crossed the border of the United States into Canada or Mexico. They indicate a rather steady increase in the use of containers over the 1996-2002 period (with a decline in 2003). They also show a pattern similar to that observed above. During the most recent subperiod, 2001-2003, the use of containers increased substantially more (at 16.4% annually) than during the previous 1996-2001 subperiod (0.6%). Truck MFP, likewise, decreased during 1995-2001 and increased during 2001-2003.
The data on containers indicate that the use of containers was a factor that affected efficiency in truck delivery and truck MFP. The data show high growth in container use during the 1990-1995 subperiod (or parts of that period) and during 2001-2003. By contrast, low increases in container use are observed during the 1995-2001 subperiod. Changes in truck MFP correspond quite well to changes in container use: during the 1990 (1987) to 1995 period and the 2001-2003 period, truck MFP increased, while during 1995-2001, truck MFP declined.
The structure of an industry can change over time as a result of deregulation, mergers/acquisitions, and bankruptcies. Such changes can affect efficiency (productivity) in an industry. With respect to mergers, the acquisition of one firm by another typically implies that the more efficient firm acquires a less efficient firm. The more efficient firm has usually grown faster (in sales), has gained significant amounts of revenues and profits, and is able to secure financial resources. All of these characteristics enable it to acquire another, less efficient firm. Two types of mergers are relevant to the analysis: horizontal and vertical. A horizontal merger combines two firms in the same industry into one firm. Consequently, in the new post-merger firm, certain functions of the two pre-merger firms are expected to be merged; these would include finance, payroll, and advertising. These developments result in the same output being produced with fewer inputs such as labor, equipment, building space, and materials/services. This results in a reduction in inputs, and thus costs, and an increase in multifactor productivity. Vertical mergers involve mergers of transportation firms that provide complementary services. The provision of complementary services within the same trucking company can increase efficiency.
The structure of the trucking industry changed considerably over the period of analysis, following deregulation at the interstate level in 1980 and at the intrastate level in 1995. The latter completed deregulation in the trucking industry and made it comprehensive. The Motor Carrier Act of 1980 did not affect restrictions on intrastate commerce, and as time passed, the gap between the cost of shipping across state borders and the cost of shipping within state borders widened significantly. In 1994, 41 states still maintained some type of economic regulation over intrastate trucking, and intrastate rates were, on average, 40 percent higher than rates for interstate freight delivery of the same distance.13 In 1995 the Interstate Commerce Commission Termination Act was passed, lifting economic regulation from intrastate trucking.
Deregulation—interstate and intrastate—of the trucking industry resulted in significant changes. There was a notable amount of entry in, as well as exit from, trucking. The entry side included the appearance of new truckload (TL) firms, the expansion of less-than-truckload (LTL) firms into new markets, and the emergence of third parties such as brokers. Truckload carriers were no longer restricted to predetermined routes and commodities; some of them merged and consolidated with others to provide national coverage.
The change in truck transportation that followed interstate deregulation, and apparently continued after the intrastate deregulation in 1995, was a decrease in the relative importance of less-than-truckload trucking and a corresponding increase in the relative importance of truckload trucking. Data on shipments, in Appendix L, show that in 1989 and the early 1990s, the LTL segment of the trucking industry accounted for 39% of total shipments (LTL and TL). In 1998 and subsequent years up to 2003, the relative importance of the LTL segment decreased to 29% of total industry shipments.
While few carriers specializing solely in LTL trucking have been formed since 1980, there was significant geographic expansion by existing LTL firms into each other's territories, and entry by other carriers, including carriers from other modes (e.g., rail). These new entrants included newly formed subsidiaries of existing LTL firms, and the expanded operations of truckload, small package, package express, and air cargo carriers.14
A comparison of the status of the 100 largest motor carriers (of property) between 1979 and 1991 shows that:15 1) forty-nine carriers were still operating, 37 of which were still among the 100 largest; and 2) fifty carriers had ceased operations since 1979. At least 35 of these carriers were identified as having filed for bankruptcy.
Structural changes in the trucking industry included trucking companies diversifying out of the traditional LTL market. For example, Roadway Services, Inc. was an LTL firm, and it created a subsidiary (Roberts Transportation Services) that performed almost no standard LTL business. The subsidiary was in the business of handling rush shipments of rather high value. Much of its revenues came from shipments that were smaller than 10,000 lbs. (i.e., technically LTL), but these shipments were not routed through traditional LTL sort terminals. Rather, most of these shipments were picked up within 90 minutes of a customer request and were dispatched directly to their destination.16
During the period of analysis, the LTL segment experienced significant decreases in the number of firms, accompanied by an increase in the size of the average firm. In 1975, this segment consisted of about 528 firms (generating $10.6 billion in revenues). By 1989, the segment had shrunk to 159 firms which had $13.4 billion in revenues; and by the end of 1993, there were only 108 firms, generating $16.7 billion in revenues.17 These data indicate that over the period 1976-1993, there was a substantial change in the structure of the LTL segment of the industry and, thus, in the entire trucking industry.
After interstate deregulation, the LTL firms experienced mergers and bankruptcies. At the same time, a number of LTL carriers, particularly smaller ones, were able to succeed. Of the largest 50 LTL carriers in 1979, twelve companies survived as of 1994 (controlled by 10 corporate parents). Of the top 50 firms, a number merged with others that later closed, while a number shut down operations. Moreover, closures occurred more often among the relatively smaller firms in the group of the top 50. Conversely, more of the relatively larger firms in the top 50 were able to survive in the post-regulatory environment (of interstate deregulation).18
A number of bankruptcies took place in the truck transportation sub-sector. Since efficient companies are expected to survive and grow over time, and inefficient companies are less likely to survive, bankruptcies in truck transportation would tend to result in increased efficiency (productivity) in the industry. It would appear that bankruptcies were related, in part, to increased competition from new industry entrants, typically with lower costs, that followed deregulation in 1980 and 1995.
It takes several years for the impacts of deregulation to show in industry structure and performance. The positive impacts of deregulation would include the expansion of efficient firms in the industry, the entry of new firms that would need to be competitive (i.e., efficient), and the exit of inefficient firms from the industry. It appears that the efficiency of the industry was affected positively by the comprehensive deregulation completed in 1995. It would have taken several years for industry adjustments to take place (through mergers/acquisitions, etc.) that would result in increased efficiency in the trucking industry. It would seem that the impact on efficiency began to show during 2001-2003, during which period truck MFP was increasing again.
There were adjustments in the industry after the interstate and intrastate deregulation of trucking. These two rounds of deregulation were probably a shock to the industry, with existing firms attempting to expand while new firms were attempting to enter the industry. One outcome of the new entrants was more competition, which eventually resulted in a number of (less efficient) firms leaving the industry. In such circumstances, a period of time is typically needed for adjustments to take place before the industry reaches some equilibrium between the supply of truck services and the demand for truck services (the former being affected by the number and type of firms in the industry). It would seem that industry adjustments had taken place to a sufficient degree by 2001, and production efficiency in the industry subsequently began to increase, as shown by an increase in trucking MFP.
In conclusion, it appears that changes in the structure of the (for-hire) trucking industry over 1987-2003, as a result of mergers/acquisitions and bankruptcies, resulted, in general, in increases in industry efficiency. This would have contributed to the truck MFP increases during 1987-1995 (after interstate deregulation) and during 2001-2003 (after intrastate deregulation).
12 Improvements in the labor force could also affect multifactor productivity in the industry. These improvements include the effects of additional training and education of labor. Lack of appropriate data prevents the direct quantification of this factor. Consequently, its impact is included in the multifactor productivity measure.
13 Federal Highway Administration. Regulation: From Economic Deregulation to Safety Reregulation, p. 5.
14 Interstate Commerce Commission, 1992, p. 38.
15 Ibid., p. 52.
16 Ibid., pp. 89-91.
17 Feitler, Corsi, and Grimm, 1998, p. 5.
18 Rakowski, 1994. | <urn:uuid:fdf176fb-5e4d-4fc0-ad71-e51c51fc1b66> | CC-MAIN-2016-26 | http://www.rita.dot.gov/bts/sites/rita.dot.gov.bts/files/publications/research_papers/estimating_multifactor_productivity_in_truck_transportation/html/section_05.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399385.17/warc/CC-MAIN-20160624154959-00199-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.954444 | 9,130 | 3.828125 | 4 |
October 22, 2013
Scientists Use Flickr To Predict Peak Park Seasons And Favorite Attractions
Michael Harper for redOrbit.com - Your Universe Online

Now that the national parks have been reopened following the partial government shutdown, some may begin to use the power of social media to understand where tourists are going and what they’re doing when they get there. Using photos found on photo sharing site Flickr, scientists with the Natural Capital Project at Stanford University have been able to better gauge interest in particular parks and attractions. This data can then be used to understand how many visitors are passing through park gates each year, thereby letting the surrounding communities know precisely the economic value of these attractions.
The same information could be used to understand the popularity of any given tourist destination, including amusement parks, ball parks and museums. A similar study was conducted at Stanford in February to understand the economic value of coastal areas on neighboring communities. This new research was published in the latest edition of the journal Scientific Reports.
The idea behind this research is quite simple. Tourists are often quite shutter happy, snapping pictures of natural landscapes and park markers. The Stanford scientists with the Natural Capital Project simply sought out public pictures of these natural areas, recorded the metadata associated with these pictures (including date, time and exact location) and compiled this information.
Specifically, the group gathered 1.4 million geo-tagged photos on Flickr, then used information found in the photographers’ profiles to determine how far they traveled to arrive at their destination. After applying this information to 836 recreational sites in the US and abroad, the scientists say this method of understanding who visits these attractions and for how long is both reliable and cost-effective. Those in charge of these parks and attractions can also use this data to predict surges in traffic as well as slow seasons.
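The travel-distance step can be approximated from coordinates alone. A minimal sketch of that calculation, assuming a usable home location can be read from a profile (the coordinates below are illustrative, not data from the study):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two lat/lon points."""
    r = 6371.0  # mean Earth radius in km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# e.g. a San Francisco-based photographer geotagged at Old Faithful
print(round(haversine_km(37.77, -122.42, 44.46, -110.83)), "km")
```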
"No one has been able to crack the problem of figuring out visitation rates and values for tourism and recreation without on-site studies until now," explained lead scientist at the Natural Capital Project Anne Guerry.
Previously, researchers had to either go into the campgrounds to perform surveys or station employees at the gates to count how many visitors had entered. With social media, park directors can obtain a wealth of information - including the number of people who visit, the most common areas and attractions, and even how satisfied the tourists were with their visit - without the need for extra staff.
This kind of study can be carried out on the cheap with specially designed software to scour publicly displayed information on the Internet. Armed with this data, land-use planners and government agencies can provide ample staffing for these attractions, repurpose money to improve the lesser-visited areas of their parks, and more effectively run the entire operation.
This isn’t the first time social media has been used to make predictions, of course. Earlier this year, researchers at Boston Children’s Hospital discovered that they could predict geographic areas more prone to obesity by observing what people in these areas “liked” on Facebook. Areas which preferred television shows over outdoor activities, for instance, were more likely to be obese. | <urn:uuid:7dd1e7fc-adf7-4fc5-8c1d-329e2c6c09cc> | CC-MAIN-2016-26 | http://www.redorbit.com/news/science/1112981335/flickr-helps-predict-peak-park-seasons-102213/ | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396147.66/warc/CC-MAIN-20160624154956-00040-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.950891 | 648 | 3.1875 | 3 |
Treaty of Lisbon
The Treaty of Lisbon was signed in 2007 between 27 European states that are members of the European Union (EU). It became effective on December 1, 2009. It is now the document that defines the Union, but it is not a constitution. It gives a common set of rules that the member states have agreed to use on subjects where they have decided to work together. It builds on previous treaties such as the Treaty of Rome and the Treaty of Maastricht. It gives the European Union new things, such as:
- a President of the European Council (who is not the President of the European Union – there is no such job).
- a High Representative for Common Foreign and Security Policy, who is a member of the European Commission and who represents the Union around the world.
- a stronger role for the European Parliament, which is directly elected by citizens of the EU.
- the ability to make more decisions by "qualified majority". Although some decisions require that all member states agree, there are more that can be decided if there is a large majority.
- better police and justice cooperation across the continent.
March 21, 1938. Washington, D.C. "Purchasing on an average of 4 million electric light bulbs annually, Uncle Sam is probably one of the largest users of light in the country. The National Bureau of Standards sees that the government gets value received in purchases by continually testing the incandescent lamps to determine their life and the amount of light they give. Using a special machine designed by the Bureau, Louis Barbrow is shown measuring the amount of light given by a lamp." Harris & Ewing Collection glass negative.
Sunday, September 10, 2000 Amandala Belize
The Battle of St. George’s Caye was not celebrated until one hundred years later, on September 10, 1898. That celebration was organized, it is said, by a “Creole” citizen of British Honduras by the name of Simon Lamb. Into the twentieth century, then, the “Creole” people referred to the Tenth of September as “Centenary.” (Over the years, Amandala has found the “Creole” designation for certain Belizeans of some African extraction to be imprecise, confusing and ambiguous, hence the quotation marks around it.)
This weekend Centenary comes up, and the celebrations have changed a great deal. The commemoration of the Battle of St. George’s Caye is no longer the centerpiece of the September celebrations. Carnival has taken over as the biggest event of the September celebrations, with perhaps an argument from the Independence Day afternoon jump-up on September 21, which ends the celebrations.
This year’s September celebrations will actually go on, after a fashion, until September 29, which marks the 50th anniversary of the ruling People’s United Party, the country’s first mass political party and the party which led Belize to political independence in 1981.
The so-called Battle of St. George’s Caye was not considered as significant at the time as it later turned out to be. There were no known casualties on September 10, 1798, whereas in armed confrontations with the Spanish in 1763 and 1779, British Hondurans had actually been killed and taken away in chains. No one knew in 1798 that the abortive invasion of Arturo O’Neil’s armada from Yucatan would be the last armed attempt to dislodge the so-called Baymen from the “settlement of Belize in the Bay of Honduras.”
Belize is a small country with a large number of divisive fault lines — ethnic, religious, political, etc. There is also an extraordinary number of more powerful countries which have a special interest in Belize — Great Britain, the United States, Mexico, Guatemala, Cuba, even now Salvador. We mention these divisions and these interests to provide some framework within which you can understand why the question of Centenary, or St. George’s Caye Day, has become so controversial, in fact confining, over the last four decades or so.
The settlement of Belize became consolidated in 1798 when the black population here, which had been playing off the Baymen against the Spanish across the border in Mexico, decided to commit to a military alliance with the British Baymen. 64 years later the descendants of the 1798 Baymen decided, for financial reasons, to give up their sovereignty and become a colony of Great Britain.
The situation on the northern border of British Honduras changed drastically in the last half of the nineteenth century because of the Caste War in Yucatan between the ruling Hispanic “ladinos” and the rebellious Maya Indians. The British and the Baymen, who made a great deal of money supplying arms to both the warring parties, later decided to accept refugees from both sides into the northern districts - Corozal and Orange Walk - and it was these refugees who introduced the sugar cane cultivation technology into Belize. There has been a special relationship between business/industrial leaders in British Honduras and Yucatan ever since that time. It went on to include mahogany extraction of an unofficial nature, the same way this type of forestry exploitation created business linkages between powerful people in the Guatemalan Peten and their British Honduran counterparts.
The Centenary episode of 1898 involving Simon Lamb is one of the strangest of events, to our mind at this newspaper. It would be relatively easy to understand if he were only a paid stooge hired by the ruling mercantile and British colonial elite in the colony — the “backra man.” We don’t think so. But you can’t say for sure about anything that happened in the 19th century and the first half of the 20th century in British Honduras, despite Emory King’s breezy attempts to the contrary. The reason is that everything we know of the British Honduras of that time, apart from the opinions of Spain, Mexico, and Guatemala, comes directly from the white minority elite at the very top of the colonial/mercantile socioeconomic pyramid in the colony. There was no Amandala back then to give expression to the feelings of the black/brown working masses of the people. They were faceless, voiceless, and yet powerful in a way which has not been properly appreciated, mostly because it was never documented.
If we grant the possibility that Simon Lamb may have been acting spontaneously, in an inspired, adult and popular manner, then there are all sorts of intriguing speculations in which we can indulge. But the history of the Belizean masses was completely oral, not written, and so there is a problem. The bottom line is that Centenary survived, became an annual event, and in the first half of the twentieth century, inspired British Honduran poets, musicians, dancers, and other artists. This is a historical fact.
The nationalist politicians of the PUP felt that the Centenary celebrations of the late 1950’s had too much of a pro-British flavour, and so they sought to deride the actual Battle of St. George’s Caye as a myth. The Battle became a political football in the 1960’s and 1970’s, and Carnival finally decided the issue in the 1990’s. The Battle became primarily an excuse to “fete”, as the masses of the Belizean people do not really care about the specifics of the historical debates and quarrels.
The transformation of the Opposition party from the NIP into the UDP in 1973 diluted the support for the Battle of St. George’s Caye. Of the UDP leaders since then, it was Esquivel, strangely enough, who cherished the Tenth pomp and circumstance the most. Lindo and Aranda only paid lip service. Barrow was originally doing the same, only paying lip service, but became a little Bayman-ish last year.
Beginning in the 1950’s, and increasing torrentially since Hurricane Hattie in 1961, “Creole” Belizeans have been migrating to the United States, to the point where “Creoles” are now clearly a minority in Belize.
At September time, many nostalgic Belizeans return from the United States in search of the Centenary flavour. It is now apparent, however, that they took much of that flavour along with them when they went to America. Those of us who remained in Belize have had other, more deadly demons to battle, and the 1798 Battle has become lost in the mix.
The heyday of Centenary was in a different time and era. It was in the days when the economy of British Honduras was mahogany and chicle dominated, and the mystique of Britain and “God Save The King” still ruled from the Hondo to the Sarstoon. Today mahogany and chicle are dead: Tourism rules, and the relevant anthem begins, “Oh say, can you see...” Think about it. | <urn:uuid:744a2839-7bba-48a4-abac-71f4fd27143c> | CC-MAIN-2016-26 | http://ambergriscaye.com/forum/ubbthreads.php/topics/13140/The_Battle_of_St_George_s_Caye.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396459.32/warc/CC-MAIN-20160624154956-00023-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.96715 | 1,546 | 3.296875 | 3 |
The DTrace User Guide is a lightweight introduction to the powerful tracing and analysis tool DTrace. In this book, you will find a description of DTrace and its capabilities, as well as directions on how to use DTrace to perform relatively simple and common tasks.
DTrace is a comprehensive dynamic tracing facility that is built into Solaris. The DTrace facility can be used to examine the behavior of user programs or the behavior of the operating system. DTrace can be used by system administrators or application developers on live production systems.
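For a taste of what such examination looks like, the following short D program is a standard introductory example (not drawn from this guide); it counts system calls by process name until tracing is stopped:

```d
/* Save as syscalls.d and run as root: dtrace -s syscalls.d */
syscall:::entry
{
        @counts[execname] = count();
}
```

When the dtrace command is interrupted (Ctrl-C), it prints the aggregation, one line per process name.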
DTrace allows Solaris developers and administrators to:
Implement custom scripts that use the DTrace facility
Implement layered tools that use DTrace to retrieve trace data
This book is not a comprehensive guide to DTrace or the D scripting language. Please refer to the Solaris Dynamic Tracing Guide for in-depth reference information.
Basic familiarity with a programming language such as C or a scripting language such as awk(1) or perl(1) will help you learn DTrace and the D programming language faster, but you need not be an expert in any of these areas. If you have never written a program or script before in any language, Related Books provides references to other documents you might find useful.
For an in-depth reference to DTrace, see the Solaris Dynamic Tracing Guide. These books and papers are recommended and related to the tasks that you need to perform with DTrace:
Kernighan, Brian W. and Ritchie, Dennis M. The C Programming Language. Prentice Hall, 1988. ISBN 0–13–110370–9
Mauro, Jim and McDougall, Richard. Solaris Internals: Core Kernel Components. Sun Microsystems Press, 2001. ISBN 0-13-022496-0
Vahalia, Uresh. UNIX Internals: The New Frontiers. Prentice Hall, 1996. ISBN 0-13-101908-2
The Sun web site provides information about the following additional resources:
The following table describes the typographic conventions that are used in this book.
Table P-1 Typographic Conventions
The following table shows the default UNIX system prompt and superuser prompt for the C shell, Bourne shell, and Korn shell.
Table P-2 Shell Prompts

C shell prompt: machine_name%
C shell superuser prompt: machine_name#
Bourne shell and Korn shell prompt: $
Bourne shell and Korn shell superuser prompt: #
Science is finally confirming what grandma knew all along: infants wake up taller right after they sleep.
Findings from the first study of its kind measuring the link between daily growth and sleep show the two are inextricably linked. Specifically, growth spurts are tied to an increase in total daily hours of sleep as well as an increase in the number of daily sleep bouts, the time from the onset of sleep until awakening.
"Little is known about the biology of growth spurts," says Michelle Lampl, MD, PhD, Samuel C. Dobbs professor of anthropology, Emory University, associate director, Emory/Georgia Tech Predictive Health Institute and lead author of the study. "Our data open the window to further scientific study of the mechanisms and pathways that underlie saltatory growth."
Practically speaking, however, the study helps parents understand that irregular sleep behavior is a normal part of growth and development.
"Sleep irregularities can be distressing to parents," says Lampl. "However, these findings give babies a voice that helps parents understand them and show that seemingly erratic sleep behavior is a normal part of development. Babies really aren't trying to be difficult."
Lampl's study appears in the May 1 issue of Sleep and is co-authored by Michael Johnson, PhD, professor of pharmacology, University of Virginia Health System.
The researchers also found that longer sleep bouts in both girls and boys predicted an increase in weight and body-fat composition tied to an increase in length. In other words, not only does sleep predict a growth spurt in length, but it also predicts an increase in weight and abdominal fat, implying an anabolic process--growth.
What's more, the study showed differences in sleep patterns related to growth depending on the sex of the baby. "Growth spurts were associated with increased sleep bout duration in boys compared with girls and increased number of sleep bouts in girls compared with boys," says Lampl.
In general, boys in the study exhibited more sleep bouts and shorter sleep bouts than girls. But neither the sex of the infant nor breastfeeding had significant effects on total daily sleep time. However, breastfeeding as opposed to formula feeding was associated with more and shorter sleep bouts.
Unlike previous studies, this study did not rely on parental recall of infant sleep patterns and growth. Instead, data on 23 infants were recorded in real time over a four- to 17-month span. Mothers kept daily diaries detailing sleep onset and awakening and noted whether babies were breastfeeding, formula feeding, or both and whether their infant showed signs of illness, such as vomiting, diarrhea, fever or rash.
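As a toy illustration of the comparison at the heart of such diary data (the numbers below are synthetic, not the Emory measurements):

```python
import statistics

# Synthetic diary entries: (total daily sleep in hours, growth-spurt day?)
diary = [
    (14.5, False), (13.8, False), (14.1, False), (13.9, False),
    (16.2, True),  (15.8, True),  (14.0, False), (16.5, True),
]

spurt = [hours for hours, is_spurt in diary if is_spurt]
other = [hours for hours, is_spurt in diary if not is_spurt]

# A positive difference means more total sleep on growth-spurt days.
print(round(statistics.mean(spurt) - statistics.mean(other), 2))  # 2.11
```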
The Robert W. Woodruff Health Sciences Center of Emory University is an academic health science and service center focusing on teaching, research, health care and public service. | <urn:uuid:b0acb48a-1ef0-4145-9936-d2eb82cb8097> | CC-MAIN-2016-26 | http://www.eurekalert.org/pub_releases/2011-05/eu-gwr042911.php | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397865.91/warc/CC-MAIN-20160624154957-00183-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.964068 | 569 | 2.90625 | 3 |
CAPE CANAVERAL (CBSMiami) – NASA jumped back into the discussion of global climate change Tuesday with a new study that showed Earth’s climate will experience roughly 20 percent more warming than estimates originally stated, despite a recent slowdown.
NASA said the new predictions were based on more detailed calculations of the sensitivity of Earth’s climate to factors like greenhouse gas emissions, which help warm the planet.
Global temperatures have risen at a rate of 0.22 degrees Fahrenheit per decade since 1951. But according to NASA, since 1998 the rate has slowed to 0.09 degrees Fahrenheit per decade, despite an increase in some greenhouse gases like carbon dioxide.
Some studies have since suggested greenhouse gases may not impact Earth as much as previously thought. The Intergovernmental Panel on Climate Change agreed and slightly lowered the range of Earth’s potential warming.
The new research focused on what’s called Earth’s “transient climate response.” This measurement looks at how temperatures will change as carbon dioxide increases until the total amount of carbon dioxide has doubled.
Previous estimates put the transient climate response at anywhere from 1.8 degrees to 2.52 degrees Fahrenheit. According to NASA’s new study, the transient climate response is approximately 3.06 degrees and not likely to fall below 2.34 degrees Fahrenheit.
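A quick unit check helps when comparing these numbers to other reports: the figures are temperature differences, so converting Fahrenheit to Celsius uses only the 5/9 scale factor, with no 32-degree offset (the offset applies only to absolute temperatures). The conversion below is plain arithmetic, not a claim from the article:

```python
def delta_f_to_c(delta_f):
    """Convert a temperature *difference* from Fahrenheit to Celsius."""
    return delta_f * 5.0 / 9.0

for df in (1.8, 2.52, 2.34, 3.06):
    print(f"{df:.2f} F -> {delta_f_to_c(df):.2f} C")
```

The quoted range works out to roughly 1.0-1.4 °C for the earlier estimates, 1.3 °C for the new lower bound, and 1.7 °C for the new central estimate, which are the units in which transient climate response is usually reported.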
The study looked at how aerosols from natural sources, like volcanoes and wildfires, interact with those from human activities such as manufacturing, driving and energy production. NASA said that depending on the make-up of the aerosols, some cause warming and some cause cooling.
According to the study, the Northern Hemisphere will likely see more of an impact from aerosols as most man-made aerosols are released from industrial zones north of the equator and most of Earth’s landmasses are in the Northern Hemisphere.
“I kept thinking, we know the Northern Hemisphere has a disproportionate effect, and some pollutants are unevenly distributed,” the study’s author, climatologist Drew Shindell said. “But we don’t take that into account. I wanted to quantify how much the location mattered.”
Shindell said that based on his calculations, industrialized countries must reduce greenhouse gas emissions at the higher end of proposed restrictions to avoid the most damaging consequences of climate change. | <urn:uuid:49a63138-2c04-4d5b-99f0-725bdf871d87> | CC-MAIN-2016-26 | http://miami.cbslocal.com/2014/03/11/new-nasa-study-warns-of-climate-changes-pace/ | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783400572.45/warc/CC-MAIN-20160624155000-00025-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.933797 | 475 | 3.96875 | 4 |
Land consolidation is a complicated process. It can also be a huge source of conflict if, for example, landholders are not satisfied with the soil quality of the fields that are being exchanged. Mathematicians Prof. Peter Gritzmann and Dr. Steffen Borgwardt at Technische Universität München (TUM) and Prof. Andreas Brieden at the Universität der Bundeswehr have developed a mathematical process that improves consolidation of agricultural land. In July of this year, they were awarded the Euro Excellence in Practice Award by the Association of European Operational Research Societies (EURO) for their work in this field.
Prof. Gritzmann and his team have developed a special software solution that makes the voluntary exchange of rights to use and lease agricultural land more efficient and effective than ever before. In the first step of this new process, each field is color-coded according to its respective owner. “The software then uses our algorithm to reallocate each farmer’s fields and effectively consolidate the agricultural land,” explains Gritzmann. With one mouse click, the software distributes the fields in question as effectively as possible, turning a chaotic patchwork of different colors into a structured, coordinated landscape in just a few seconds. The calculation factors in a number of different variables including soil quality levels, EU subventions and farmers’ wishes. “The great benefit of this new land consolidation process is that it saves farmers money, enabling them to work their lands more efficiently and reduce travel costs and CO2 emissions,” outlines Gritzmann.
Land consolidation, or Flurbereinigung as it is known in Germany, is not a new concept. King Ludwig II of Bavaria created the land reform law “Gesetz die Flurbereinigung betreffend” back in the 19th century. Unlike the traditional concept of land distribution, however, the modern-day initiative developed by the Bavarian Ministry of Food, Agriculture and Forestry only involves the exchange of cultivation rights. The actual ownership of the fields does not change. What at first sounds like a simple process is in fact extremely complex. “Even if you are working with just ten farmers who have a total of 300 fields between them, you are still looking at 10 to the power of 300 different possible allocation combinations,” says Gritzmann. By comparison, the number of atoms in the known universe is a humble ten to the power of 78. To effectively manage this huge number of options and keep computing time to a minimum, the new process uses sophisticated mathematical ideas drawn from algorithmic geometry and optimization. In simple terms, it isolates groups of fields so that each farmer’s fields are as close together as possible yet as far as possible from the other farmers’ fields.
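As a rough illustration of the kind of optimization involved, here is a deliberately tiny sketch. The greedy-swap heuristic, the function names, and the squared-distance cost are all illustrative assumptions, not the TUM team's actual algorithm, which rests on far more sophisticated geometry: fields are points with a current user, and rights to two fields farmed by different owners are exchanged whenever the swap reduces the total spread of everyone's holdings.

```python
import random

def total_spread(assignment, fields):
    # Sum of squared distances from each farmer's fields to that farmer's
    # centroid -- a crude stand-in for travel and fragmentation costs.
    cost = 0.0
    for owner in set(assignment.values()):
        pts = [fields[f] for f, o in assignment.items() if o == owner]
        cx = sum(p[0] for p in pts) / len(pts)
        cy = sum(p[1] for p in pts) / len(pts)
        cost += sum((x - cx) ** 2 + (y - cy) ** 2 for x, y in pts)
    return cost

def consolidate(fields, assignment, rounds=200, seed=0):
    # Greedy pairwise exchange of cultivation rights: ownership counts stay
    # fixed, only who farms which field changes, mirroring the voluntary
    # swaps described above.
    rng = random.Random(seed)
    best, best_cost = dict(assignment), total_spread(assignment, fields)
    names = list(fields)
    for _ in range(rounds):
        a, b = rng.sample(names, 2)
        if best[a] == best[b]:
            continue  # same farmer on both fields: nothing to exchange
        trial = dict(best)
        trial[a], trial[b] = trial[b], trial[a]
        trial_cost = total_spread(trial, fields)
        if trial_cost < best_cost:
            best, best_cost = trial, trial_cost
    return best, best_cost
```

Even this toy version hints at why the real problem is hard: a greedy search explores only a vanishing fraction of the astronomically many possible allocations, which is why a mathematically structured approach matters.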
The new system also has a strong sociological component based on group dynamics. At the start of each consolidation process, landholders tend to include only fields of comparatively poor quality. This is an entirely understandable approach, but it also limits the potential for efficiency gains. To counter this, the mathematicians developed a “swap tool” that enables landholders to “play around” with the fields they submit for consolidation and see the economic impact of their decisions in real time. During field tests, this gave farmers the confidence to include more and more of their fields. As a result, they were able to reduce running costs by up to 30 percent.
The process is not restricted to the reorganization of fields, however. In previous projects, Dr. Borgwardt successfully used it to consolidate woodland areas.
Prof. Gritzmann received the Max Planck Research Award for his groundbreaking work on algorithmic convex geometry. At the end of July, the Garching-based professor and his team received a further accolade: the Euro Excellence in Practice Award presented by the Association of European Operational Research Societies. | <urn:uuid:6b18259b-f745-4f61-9cd5-fb89243f872b> | CC-MAIN-2016-26 | https://www.tum.de/en/about-tum/news/press-releases/short/article/30996/ | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396945.81/warc/CC-MAIN-20160624154956-00103-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.946899 | 805 | 3.359375 | 3 |
No word play intended, but William F. Sharpe is a pretty sharp guy. Professor Sharpe, 1990 Nobel prize winner in Economics, writes, “. . . one needs a gestalt from which to make business judgments.” To put it another way, the whole is greater than the sum of the parts, and business people had better darn well know it. The issue is how can colleges and universities instill in students the gestalt they must possess to make sound business judgments.
I'm glad to see that a Nobel prize winner and I agree on the prime importance of economics in creating the gestalt. Critical foundations of the gestalt include microeconomics and macroeconomics. In most syllabi the goal is listed as "the economic way of thinking." Sharpe notes that the micro and macro contexts of business are not very likely to be front and center in a business environment. In short, business leaders must master the economic way of thinking in school or risk forever missing the key elements in the creation of gestalt.
Let me use an analogy. I've never had the patience to put together a jig-saw puzzle, but I can imagine how it's done most effectively. I would pull out all the pieces with straight edges and then begin to build the outside edges of the puzzle. I'd pay attention to colors in order to mate adjacent pieces of the puzzle. I'd use the picture on the box the pieces came in as a guide. For example, if the only thing red in the puzzle was an object in the middle of the picture, I'd find the red pieces and put that part of the puzzle together. Proceeding in a systematic way, utilizing all information, I would put the puzzle pieces together. To me, a systematic process to put all the pieces of the business world together is what economics provides.
Simply put, the job of economics instructors is critical in creating first-class business leaders. But then, you already knew that, didn't you? | <urn:uuid:6dde58c9-153c-4d00-b99e-f6b9e6e27d98> | CC-MAIN-2016-26 | http://economicsacademy.blogspot.com/2005/05/economics-as-foundation-of-gestalt.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396538.42/warc/CC-MAIN-20160624154956-00092-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.959556 | 405 | 2.84375 | 3 |
Color of the universe
JOHNS HOPKINS UNIVERSITY NEWS RELEASE
Posted: January 11, 2002
Astronomers at The Johns Hopkins University have produced a unique new insight into the nature of existence: They've determined the color of the universe.
"The color is quite close to the standard shade of pale turquoise, although it's a few percent greener," says Karl Glazebrook, an assistant professor of astronomy in the Krieger School of Arts and Sciences at Hopkins. For computer buffs, the RGB values are 0.269, 0.388, 0.342.
Glazebrook and Ivan Baldry, a postdoctoral fellow at Hopkins, are the authors of a presentation at this week's meeting of the American Astronomical Society that includes the new discovery.
Although both authors joke about promoting "color of the universe" T-shirts, coffee mugs and other spin-offs, their determination of the color is really a byproduct of a serious attempt to use the light from thousands of galaxies to assess scientists' theories of the history of star formation and stellar population dynamics.
The scientists worked with data from the Australian 2dF Galaxy Redshift Survey, a survey of over 200,000 galaxies at a distance of 2 billion to 3 billion light years from Earth. The Anglo-Australian Observatory is conducting the survey.
Using the visible portion of the spectrum, Glazebrook and Baldry combined the data on the 2dF galaxies to produce a chart they call the "cosmic spectrum." For any given wavelength of visible light, the chart reveals the intensity -- the total amount of that light -- emitted by all the galaxies in what Glazebrook and Baldry call the "local universe."
The cosmic spectrum initially took the standard scientific form of a graph, but the researchers then transformed it into an array of colors, converting each wavelength into the color the human eye sees at that wavelength, and varying the intensity of the color in proportion to that wavelength's intensity in the universe. That graphic is at http://www.pha.jhu.edu/~kgb/cosspec/.
"This would be what we'd get if we took all the light in the universe and passed it through a prism to break the light into its component wavelengths and produce a rainbow," Baldry says. "We believe that the 2dF survey is large enough, reaching out several billion light years, to make this a truly representative sample."
Included in this rainbow is information on the prevalence of various elements in the universe, discernible by the dark and bright bands the elements leave at characteristic wavelengths of light particular to each element.
Glazebrook, Baldry, members of the 2dF team and other astronomers analyzed the results to check four different models of the rates of star formation through the history of the universe. By looking at each model's predictions for star formation during various time periods, astronomers could make some predictions regarding the cosmic spectrum the model should produce, and check the real cosmic spectrum to see how well it matched.
For what Glazebrook calls "a bit of fun," he and Baldry determined how this universal light would be perceived by the human eye if it wasn't broken into its component parts. They used methods established by ophthalmologists to calculate the eye's response to particular wavelengths of light.
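A toy version of that calculation can be sketched as follows: weight the spectrum by approximate eye-response curves, convert to a color, and normalize. The Gaussian curves and function names below are illustrative stand-ins for the real tabulated CIE colour-matching functions; the actual study used ophthalmological response data, not these approximations.

```python
import math

def _gauss(wl, peak, width):
    # Simple Gaussian bump centered at `peak` nm with the given width.
    return math.exp(-0.5 * ((wl - peak) / width) ** 2)

# Crude approximations of the CIE 1931 colour-matching functions
# (illustrative only -- real colourimetry uses tabulated data).
def xbar(wl):
    return 1.06 * _gauss(wl, 599, 38) + 0.36 * _gauss(wl, 446, 19)

def ybar(wl):
    return _gauss(wl, 556, 47)

def zbar(wl):
    return 1.78 * _gauss(wl, 449, 23)

def perceived_rgb(spectrum):
    """spectrum: iterable of (wavelength_nm, intensity) pairs."""
    X = sum(i * xbar(w) for w, i in spectrum)
    Y = sum(i * ybar(w) for w, i in spectrum)
    Z = sum(i * zbar(w) for w, i in spectrum)
    # Standard linear XYZ -> sRGB matrix, then normalise to the brightest
    # channel so the result is a displayable colour.
    r = 3.2406 * X - 1.5372 * Y - 0.4986 * Z
    g = -0.9689 * X + 1.8758 * Y + 0.0415 * Z
    b = 0.0557 * X - 0.2040 * Y + 1.0570 * Z
    m = max(r, g, b, 1e-12)
    return tuple(max(c, 0.0) / m for c in (r, g, b))
```

Feeding in a flat spectrum yields a near-white colour, as expected; a spectrum dominated by both red and blue-green light, like the cosmic spectrum, lands in the pale greenish region the researchers describe.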
"From one perspective, it's surprising that it turns out to be greenish, because there are no green stars," says Glazebrook. "But it's the large numbers of old red stars and young blue stars in the universe that gives us the green."
(That may puzzle non-scientists accustomed to combining blue and yellow to get green rather than blue and red, but light sources combine in a different fashion than pigments.)
Borrowing a term from the arts, the universe probably started with a "blue period" early in its history dominated by young blue stars, has moved into a middle "green period," and will eventually enter a final "red period" where decreased star formation allows older, redder stars to dominate the universe.
This research was funded in part by a grant from the David and Lucile Packard Foundation. The 2dF survey is funded by the Australian and British governments.
© 2014 Spaceflight Now Inc. | <urn:uuid:c5b375af-dd72-4b6f-b274-e022ad6bf23a> | CC-MAIN-2016-26 | http://spaceflightnow.com/news/n0201/11color/release.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393463.1/warc/CC-MAIN-20160624154953-00025-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.922322 | 1,008 | 3.625 | 4 |
The Jews of Pinsk, 1506-1880 is the first part of a major scholarly project about a small city in Eastern Europe where Jews were a majority of the population from the end of the eighteenth century. Pinsk boasted both traditional rabbinic scholars and famous Hasidic figures, and over time became an international trade emporium, a center of the Jewish Enlightenment, a cradle of Zionism and the Jewish Labor movement, and a place where Orthodoxy struggled vigorously with modernity.
The two volumes of Pinsk history were originally part of a literature created by Jews who survived the Holocaust and were determined to keep in memory a vital world that flourished for half a millennium. In this case, the results are extraordinary: no town of Eastern Europe has been described in such fascinating detail, invaluable to Jewish and non-Jewish historians alike.
For the second volume of this two-volume collection, see The Jews of Pinsk, 1881-1941.
Rainforest Park Maybunga, Pasig City
Health Report: Typhoons
Reginald Goodie Tan
Joenny Pavillar Jr.
Erika Marie Salasibar
Janell Lyka Santos
Beatrice Ray Tan
GABBY S. LAMSEN
MAPEH IV- TEACHER
What Are Typhoons?
A typhoon is the region-specific name for a type of tropical cyclone, usually occurring within the northwestern region of the Pacific Ocean, west of the International Date Line. The same systems in other regions are referred to as hurricanes or, more generally, tropical cyclones. The center of a cyclone is referred to as the eye, a circular area of calm, fair weather. On average, a tropical cyclone eye is about 30 miles across. Surrounding the eye are eyewalls, which are regions of dense convective clouds. The winds in the eyewalls are the strongest and generally cause the most damage. Spiraling into the eyewalls are more convective cloud regions referred to as spiral bands. These areas contain heavy winds and extend out from the typhoon eye.
How Typhoons Form
Typhoons, or hurricanes, form in hot, humid conditions over the ocean when winds traveling in opposite directions meet, a phenomenon known as convergence. As the opposing winds collide, hot air is forced upward and cools to form storm clouds. Usually these storms produce nothing more than lightning or a period of heavy rain, but in some cases high pressure and wind in the upper atmosphere can allow the hot air to continue its upward motion for a sustained period, creating a much stronger type of storm. A hurricane's wind is caused by air rushing up from the surface of the ocean to replace air blown away in the upper atmosphere. Due to the Coriolis effect, hurricanes in the northern hemisphere spin anti-clockwise, while in the southern hemisphere they spin clockwise.
When Typhoons Occur
According to the National Oceanic and Atmospheric Administration (NOAA), high winds push the surface of the waters ahead of the system on the right side of its path and cause over 85 percent of the cyclone's surge.
In order to form, tropical cyclones generally require ocean temperatures of at least 80 F. The systems begin with heat generated from spiraling water vapor in the atmosphere. This spiraling vapor forms into the convective clouds discussed earlier. Typhoon incidence is correlated with sea-surface temperature. Because of this, there may be a connection between global warming and tropical cyclones; as the temperature of the waters increases, so does the incidence of tropical cyclones. Typhoon season typically runs from late June through December.
Effects of Typhoons on:
Buildings and Infrastructure
The two most destructive forces associated with typhoons are wind and rain. According to the Green Fun website, typhoon winds can affect buildings and other structures in two ways: through direct force and through projectiles. Direct force is when a wind gust slams directly into a building or structure and causes physical damage, such as when wind blows the roof off a home. Wind can also inflict damage by picking up and launching debris and other items, such as tree branches and building materials, into buildings and other structures. The heavy and persistent rainfall that typhoons bring can also have devastating effects. In addition to making homes uninhabitable, the flooding associated with typhoons can make roads impassable, which can cripple rescue and aid efforts.
Trees and Other Vegetation
Typhoons can also affect the natural environment, and cause harm to trees and other vegetation, including crops that communities may rely on for sustenance or trade, or both. Strong winds can snap branches; detach and injure leaves, flowers, fruits and... | <urn:uuid:3d012cf3-1128-4fd0-845d-9621e31c0f3f> | CC-MAIN-2016-26 | http://www.studymode.com/course-notes/Typhoons-Philippines-1782490.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398516.82/warc/CC-MAIN-20160624154958-00087-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.928116 | 783 | 3.6875 | 4 |
What is the Packets Project?
This project is being done to provide off-the-shelf
assistance to college physics instructors in several areas:
- Using symbolic algebra programs
and / or spreadsheets throughout the physics
curriculum. Building or maintaining student
skills with these tools, to provide more insight
about physics problems.
- Having the classroom become
a more 'active' learning environment
What does a packet contain?
- A number of worked-out example problems or calculations
in Maple, Mathematica or spreadsheet.
Some are very simple, meant to build confidence
with the package. Some are advanced, showing the power and versatility
of the package, capable of providing insight to the student.
- Executable Maple and Mathematica and spreadsheet files - download all
files in a given packet
- Maple file text and input lines - easy to read , search. Can download
individual Maple files on the spot.
- Comments and elaboration on problems.
- Suggestions for classroom activities which will actively engage students.
- For advanced courses, some suggestions are presented
utilizing materials from the CUPS project
(Consortium for Upper-Level Physics Software).
How can you benefit from this project?
- You can find resources of interest and download
them. Downloading may consist of
- clicking to get auto-unzipping files for Maple, Mathematica,
- clicking to get zipped files for the Mac
- downloading individual Maple files via 'Save File'.
(Later, launch Maple to run worksheet.)
- clicking to get DOS applications which are in some packets
- printing relevant packet
web pages at your home site
- copying the web source code
and then editing it at your home site
- You can send suggestions and
comments and ask questions
by clicking on the email site. This will automatically bring up and address
an email window. You add the subject and your comments and send it off.
We are likely to be able to help on questions about Maple, and spreadsheets,
but not so likely to help with Mathematica questions.
- Perry Peters
- Greg Williby
- Paul Mason
- Art Western
- Charles Joenathan
- Dan Hatten
- Michael McInerney
- Sudipa Kirtley
- Granvil C. Kyker, Jr.
- Bill Bassichis
Software used in the construction of these web pages
- Netscape Gold 3.0
- GIF Construction 95
- Snap 32
National Science Foundation
Grant - DUE 9455442
2,000 Nigerian Children Face the Effects of Lead Poisoning
Illegal gold mining has altered the lives of nearly 2,000 children who are facing the effects of lead poisoning in several northern Nigerian villages.
The 2,000 children under five have blood lead levels far exceeding international standards due to exposure to lead-rich gold ore, and some areas have yet to be cleaned up despite repeated warnings.
They live in villages in Zamfara state where lethal levels of lead poisoning were reported in 2010 due to illegal gold mining.
"There are 2,000 children suffering from lead poisoning in eight lead-contaminated villages yet to be remediated," said Nasiru Tsafe, deputy coordinator of Zamfara state's rapid response team.
"These children are exposed to more danger by their constant exposure to lead and delay in treatment."
Jane Cohen, a researcher with New York-based Human Rights Watch who visited the area recently, said the situation was worse than anticipated with "a large number of children exposed to high lead contamination well above the WHO (World Health Organisation) accepted limit."
Most of the victims are from Bagega village, a 9,000-strong farming and herding community where all 1,500 children suffer from lead poisoning.
"Bagega provides the worst challenge because it is more than the size of all the other seven villages combined and all the over 1,500 children in the village suffer from lead poisoning," Cohen said.
The short-term effects of lead poisoning include acute fever, convulsions, loss of consciousness and blindness, with anaemia, renal failure and brain damage among the long-term effects.
"The immediate thing to be done is remediation because there is no point treating a lead-poisoned child who goes back to a contaminated environment where he is exposed to the same contamination," Tsafe said.
The Qutub Minar is a mammoth tower that was built between 1193
and 1369 to symbolise Islamic rule over Delhi, and to commemorate
the victory by Qutab-ud-din over the city's last Hindu king.
Standing 238ft (72m) tall, the tower is decorated with calligraphy
representing verses from the Koran, and tapers from a 50ft (15m)
diameter at the base to just 8ft (2.5m) at the top. There are five
distinct storeys, each encircled with a balcony: the first three
are built of red sandstone, and the upper two are faced with white
marble. At the foot of the minar stands Quwwat-ul-Islam - India's
oldest mosque, largely built from the remains of 27 Hindu and Jain
temples destroyed by the Muslim victors. The cloisters that flank
the nearby courtyard are supported by pillars that were
unmistakably pilfered from Hindu temples - but fascinatingly, the
faces that would have adorned these pillars have been removed to
conform to Islamic law, which strictly forbids iconic worship.
Somewhat incongruously, i
Address: Qutab Minar Complex, Mehrauli, 16 km from Connaught
Transportation: There are many local buses from around the city that
stop here, otherwise take an auto-rickshaw, taxi or metro rail.
Qutub Minar is on Delhi's hop-on-hop-off bus route.
Opening Time: Open daily, from dawn to dusk
Admission: Rs. 250 | <urn:uuid:a444963c-5d77-4bc1-910f-d59429518c4a> | CC-MAIN-2016-26 | http://www.justluxe.com/travel/attractions/436__Qutub-Minar.php | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393442.26/warc/CC-MAIN-20160624154953-00176-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.944421 | 346 | 2.53125 | 3 |
1851 (undated) 10 x 14 in (25.4 x 35.56 cm)
This is a fascinating 1851 map of the United States by the English map and atlas publisher John Tallis and his engraver John Rapkin. It covers the United States from Santa Fe north through the Missouri Territory to Canada and east to the Atlantic. The Trans-Mississippi region is exceptionally interesting, with a fascinating (if somewhat inaccurate even at the time) depiction of the political geography. A long, narrow Nebraska territory extends to Canada. There is a large 'Western Territory' roughly where Oklahoma is today. Probably the most interesting element of this map is its curious treatment of the New Mexico territory. In the previous edition of this map, 1850, the area that is here New Mexico was part of the original Texas annexation. New Mexico Territory was created in 1850 following U.S. acquisition of Upper California (Alta California) in the 1848 Treaty of Guadalupe Hidalgo - which ended the Mexican-American War. Curiously, though Tallis was fully apprised of these events, as exhibited by his inclusion of the newly created New Mexico Territory, he does not extend it westward beyond the Rio Grande to include its charter claims in Upper California. Why Tallis made this decision is unclear, though it may be a case of carto-advocacy in support of Mexico. Washington and Franklin medallions decorate the right and left borders. Decorative vignettes depict a Buffalo Hunt, Penn's Treaty with the Indians, and the Washington Monument. The whole has the highly decorative presentation and elaborate border distinctive of Tallis maps. Undated, but published by the John Tallis & Company, London & New York, in 1851.
John Tallis and Company published views, maps and atlases in London from roughly 1838 to 1851. The principal works, expanding upon the earlier works of Cary and Arrowsmith, include an 1838 collection of London Street Views and the 1849 Illustrated Atlas of the World. His principal engraver was John Rapkin, whose name and decorative vignettes appear on most Tallis & Co. maps. Due to the decorative style of Rapkin's work, many regard Tallis maps as the last bastion of English decorative cartography in the 19th century. Though most Tallis maps were originally issued uncolored, it was not uncommon for 19th-century libraries to commission colorists to "complete" the atlas. The London Printing and Publishing Company of London and New York bought the rights to many Tallis maps in 1850 and continued publishing his Illustrated Atlas of the World until the mid-1850s. Specific Tallis maps later appeared in innumerable mid- to late-19th-century publications as illustrations and appendices.
Tallis, J., The Illustrated Atlas, And Modern History Of The World Geographical, Political, Commercial & Statistical, 1851.
Very good condition. Original centerfold. Blank on verso.
Rumsey 0466.071. Phillips (Atlases) 804-70. | <urn:uuid:a7c30323-626b-4b25-bfb9-a8ecb9f7c369> | CC-MAIN-2016-26 | http://www.geographicus.com/P/AntiqueMap/USA-tlls-1850 | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397565.80/warc/CC-MAIN-20160624154957-00042-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.938322 | 622 | 2.96875 | 3 |
Monitoring and Reporting Emissions
As we’ve learned in some recent blogs, one of the key components of successful cap and trade programs is regular monitoring and reporting of emissions by sources. I’ll use our Acid Rain Program as an example to explain why.
In the Acid Rain Program (called ARP for short), all power plants must adhere to the monitoring requirements in Part 75 of Title 40 of the Code of Federal Regulations (40 CFR Part 75). We have a link on our website to a plain English guide to Part 75. Basically, these regulations say that all power plants must monitor and report accurate emissions data to the EPA. Most power plants are required to use CEMS, which stands for Continuous Emission Monitoring Systems. These CEMS monitor important information such as the amount of pollution coming out of a smokestack (pollution concentration) and how fast the emissions are coming out of the stack (stack gas volumetric flow rate).
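Those two quantities (pollutant concentration and stack gas volumetric flow rate) are what turn a CEMS reading into a mass emission rate: concentration times flow times a unit-conversion constant. A minimal sketch in Python; the function names are illustrative, and 1.660e-7 is the SO2 conversion constant commonly cited from Part 75, Appendix F, Equation F-1, quoted here purely for illustration:

```python
K_SO2 = 1.660e-7  # (lb/scf) per ppm SO2 -- the Appendix F, Eq. F-1 constant

def so2_lb_per_hr(concentration_ppm, flow_scfh):
    # Hourly SO2 mass rate: concentration (ppm) x stack flow (scfh) x K.
    return K_SO2 * concentration_ppm * flow_scfh

def reported_total(hourly_readings):
    # hourly_readings: one (ppm, scfh) pair per operating hour, matching
    # the hour-by-hour granularity at which units report to CAMD.
    return sum(so2_lb_per_hr(c, q) for c, q in hourly_readings)
```

For example, a stack reading 100 ppm SO2 at a flow of one million scfh works out to 16.6 lb of SO2 per hour under this formula.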
While some sources are not required to use CEMS, they still must report their emissions; however, they are allowed to do it in a different way depending on the type of fuel that the power plant burns. Regardless of the method used, the highest emitting power plants have to use the most accurate monitoring methods for their fuel type. The power plants that emit a lower level of emissions are allowed to use less rigorous methods as long as that method does not underestimate the amount of emissions. I thought it was interesting that in 2008, 32 percent of power plants in the program used CEMS to monitor their emissions but this meant that over 99 percent of emissions were monitored by CEMS.
No matter what kind of monitoring a power plant uses, the data they collect and report still go through very strict testing to make sure that the data collected are accurate. This process is called Quality Assurance, or QA. In the ARP, these QA procedures have resulted in very accurate and reliable emissions data.
In the ARP, the data that the power plants collect are reported to EPA’s Clean Air Markets Division (CAMD). CAMD uses these emissions data to make sure that the power plants are in compliance with the ARP’s goal of reducing emissions. Every unit in the ARP must report emissions to CAMD for every hour that the unit is operating. That’s a lot of data! CAMD then makes these data available on the EPA website so anyone can analyze them. Brokerage firms, environmental organizations, university professors, and the general public can analyze these data, providing an incredible amount of transparency and credibility, because anyone can look at any time to see what a regulated power plant is emitting. Making this information available is very important for the allowance trading market to work efficiently and is essential to getting emission reductions at the lowest possible cost.
EPA’s Acid Rain Program would not be successful in reducing pollution if we didn’t have a strong system in place for power plants to collect and report their data.
Before reading our blogs on EPA’s Acid Rain Program, were you aware of all of the components that make a cap and trade program successful? Share your thoughts!
Cindy Walke has a Master’s Degree in Environmental Science and Policy from the Johns Hopkins University. It’s this degree that inspired her to seek employment with EPA’s Clean Air Markets Division, where she currently manages website communications.
Source: “Fundamentals of Successful Monitoring, Reporting, and Verification under a Cap-and-Trade Program” written by John Schakenbach, Robert Vollaro, and Reynaldo Forte, of CAMD’s Emissions Monitoring Branch.
Editor's Note: The opinions expressed here are those of the author. They do not reflect EPA policy, endorsement, or action, and EPA does not verify the accuracy or science of the contents of the blog.
Please share this post. However, please don't change the title or the content. If you do make changes, don't attribute the edited title or content to EPA or the author.
Leave a Reply
You must be logged in to post a comment. | <urn:uuid:01098e07-ed6c-4764-ab31-4a74f857a09a> | CC-MAIN-2016-26 | http://blog.epa.gov/acidrain/2010/04/monitoring-and-reporting-emissions/ | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395679.92/warc/CC-MAIN-20160624154955-00036-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.943625 | 846 | 2.640625 | 3 |
Distillation is the main method by which essential oils are extracted from plants. Indeed, according to some authorities, it is the only method that produces essential oils as correctly defined - those obtained by other methods being known as essences or absolutes.
Distillation involves heating the plant material, either by placing it in water, which is then brought to the boil, or placing the plant material on a rack or grid and heating the water beneath it, so that steam passes up through it. Leaves, twigs, berries, petals and other parts of the plant may be used. If the plant material is placed in the water, the process is known as direct distillation, and if it is put on a grid and the steam passed through it, the system is known as steam distillation.
In either method, the heat and steam cause the walls of the specialised plant cells in which the plant essence is stored, to break down and release the essence in the form of a vapour. This vapour, together with the steam involved in the distilling process, is gathered into a pipe which passes through cooling tanks, and this causes the mixed vapours to return to liquid form so that they can be collected in vats at the end of the process. The steam condenses into a watery distillate, while the essence from the plant becomes an essential oil. This, being lighter than water, collects in the upper part of the vats and can easily be separated from the watery part. In some cases, the watery distillate is also a valuable product, and is sold as a flower-water or herbal water. In France these distillates are usually described as a hydrolat.
With one or two plants, the amount of essential oil that can be obtained by distilling is insignificant, and is regarded as a byproduct of the production of rosewater or orange-flower water, for example. Other methods, such as enfleurage or solvent extraction are used to obtain the essences from these and other delicate flower petals.
The process of distillation has been known and used for obtaining essential oils since at least the 10th Century A.D. and is thought to have originated in Persia, where the oils were highly prized as perfumes (Shakespeare's 'perfumes of Arabia'). However, recent archaeological digs in Italy have uncovered simple stills which suggest the Romans already knew this technique.
Some of the stills in use today, especially in less developed countries, and at small-scale rural distilleries in Europe, differ very little from the earliest stills known, but in areas where essential oil production is an important industry, they may be very large and complex, though the basic principles of production involved are identical. Stainless steel is often used in the construction of modern stills to avoid any contamination of the distillate, and this may produce better quality oils, though it is non-proven. | <urn:uuid:8c6df377-08d4-44d3-ba69-4a33ed2ce6aa> | CC-MAIN-2016-26 | http://www.oilsandplants.com/distillation.htm | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397562.76/warc/CC-MAIN-20160624154957-00155-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.970766 | 602 | 3.796875 | 4 |
Dementia (a decline in memory and other mental abilities) is a serious condition, and its prognosis (the likely course of the disease) is marked by progressive loss of cognitive function and complications such as infections and falls. Dementia has no cure, and is increasingly a cause of death in the United States. Heart disease and diabetes, which affect blood vessels and circulation, have similar risk factors to dementia, so it's important for healthcare professionals to understand links between these conditions.
In new research published in the Journal of the American Geriatrics Society, researchers reviewed 12 studies that included more than 235,000 people with dementia. They learned that older adults with dementia and diabetes have a significantly higher risk for death (called a "mortality risk") than do people with just dementia. People with dementia who smoked tobacco were also at a much higher risk for death, and those with dementia who had coronary heart disease had a somewhat higher risk for death. In addition, the researchers learned that men who had dementia had a worse forecast for the likely course of their disease than did women.
The researchers said that their findings raise questions about how to treat high cholesterol and high blood pressure in older people with dementia, since those conditions don't seem to be linked to a higher risk for death. Decisions about treating people with dementia for those conditions should be based on an older person's preferences and whether the treatment will improve quality of life, while also weighing the risks and benefits of treatment, noted the researchers. | <urn:uuid:95aa5243-87b5-4dae-ba19-3c03f6ff9cc0> | CC-MAIN-2016-26 | http://www.medicalnewstoday.com/releases/305732.php | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397565.80/warc/CC-MAIN-20160624154957-00116-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.98023 | 302 | 3.453125 | 3 |
Money is important to one and all. It really doesn’t matter whether you belong to an elite class or the labor class, it is essential for every person on the planet so that he can lead a normal and comfortable life. Everyone needs to deal with financial transactions at some point or the other. You have to hang on to one bank or the other. Because you do not have a choice! Once you are a part of this world you have to make yourselves aware of the common phrases and acronyms like SWIFT, BIC and IBAN. You will be encountering with them almost each day. You probably must be wondering what these terms stand for and how are they useful!
Understanding the SWIFT Code / BIC Code
There are thousands of banks with lacks of branches. Hence, it is important to know its unique identification code. Every financial transactions deals with a financial institution. Thus, the unique address which helps in identifying the exact bank for the said transaction is called the Bank Identifier Code. It is the exclusive code that helps in fast identification without any error.
A SWIFT code usually consists of 8 or 11 characters. If you come across 8-digit SWIFT code, note that it is the main branch/office of the bank. For further clarification a simple skeleton of its feature is shown below. Every character has a significance which has been explained below for a quick understanding:
This makes it so logical and synchronized to easily spot the bank and its branch with the help of the code.
What is BankSwiftCode.ORG?
BankSwiftCode is a website that has been designed exclusively for offering free BIC Codes / Swift Codes directory for searching any financial institution's Bank Identification Code. You have all the convenience of choosing from two kinds of searches- Alphabetical search and Keyword search, to have quick and convenient access. It is a simple process. You just need to add the bank's name and you will be provided with all the information about that bank that is available in our database.
SWIFT Code, BIC Code, ISO 9362
SWIFT Code or BIC code are part of ISO 9362 standards. It is a standard format of Business Identifier Codes (BIC). BIC sometimes also refers to Bank Identifier Code.
SWIFT Code or BIC Code is a unique code to identify financial and non-financial institutions. These codes are mostly used when transferring money between banks, especially for international wire transfers or telegraphic transfer (TT). The codes are also used in exchanging messages between banks.
SWIFT Code for Worlds Largest Economies
For individual users, SWIFT Code normally used to transmit money across the international border.
The following countries are the 24th largest economies based on Gross Domestic Product (GDP) as listed by the International Monetary Fund (IMF).
What are Domestic Bank Codes?
Some countries also implement domestic bank code or clearing system to transfer money within their own border. Examples are, Routing Number in United States (USA), Routing Number or Transit Number in Canada, Sort Codes in United Kingdom (UK), National Sort Codes (NSC) in Ireland, Bankleitzahl (BLZ Codes) in Germany, Bankenclearing-Nummer (BC) & SIX Interbank Clearing Codes (SIC) in Switzerland, Code Banque & Code Guichet In France, Codice ABI (ABI) & Codice di Avviamento Bancario (CAB Code) in Italy, Registreringsnummer (Reg. nr.) in Denmark, Bank State Branch (BSB number) in Australia, Bank State Branch (BSB number) in New Zealand and Indian Financial System Code (IFSC) in India.
These are very helpful in tracing the information of the financial transaction that has taken place within your country by decoding the code.
How BankSwiftCode.ORG Helps?
BankSwiftCode.ORG is the largest and the most reliable SWIFT Code / BIC Code database on the internet. We allow quick access of the Bank Identification Code of any bank around the globe. It does not matter if you are planning on money transaction through bank wire or online, we provide a free service with a guarantee in quality.
We make sure that our SWIFT Codes and Bank Identification Codes are frequently updated so that you can have the right information on one click. We thank our customers for helping us construct an improved Bank Swift Code reference for the internet society.
Benefits of bank swift codes
Bank Swift Codes were introduced to make financial transactions easily traceable and manageable. Here are some of its benefits which have really helped a lot….
Easy Location and Retrieval of Bank
Convenience for Expanding Businesses
Easy Verification of Validity of Bank
Secure Financial Transactions
Use of Bank Swift Codes
There are thousands of banks all over the world. Hence, there is quite a fair chance that there are two banks with the same name in two completely different corners of the world. This can surely lead to lots of confusion amongst the investors. They will surely have a hard time dealing with these banks with same names. To make sure that such a thing doesn’t happen, swift code bank were invented. Now every bank can have its unique swift code which every customer can have it from the bank customer support department. You may also find it online from your bank site. This will never cause any kind of confusion or overlapping.
The swift code bic organization never takes part in any financial transactions. Its objective is to help banks maintain a unique code that does not lapse with one another. It also gives you access to swift code lookup. Again, the bic code iban gives a unique identification to every single bank worldwide, irrespective of the bank, branch or country to avoid confusion. | <urn:uuid:4b8903a4-9fbd-4311-82c2-8b9f1283d716> | CC-MAIN-2016-26 | http://www.bankswiftcode.org/ | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395039.24/warc/CC-MAIN-20160624154955-00027-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.920272 | 1,197 | 2.640625 | 3 |
Participatory Integrated Watershed Management in Upland Areas of DPR Korea
Project symbol: TCP/DRK/3002
Participating countries: DPR Korea
Project duration: 01 Jan 2004 - 31 Aug 2004
Photo © Thomas Hofer/FAO
DPR Korea covers an area of 122 762 km2, 80 percent of which is classified as hills and mountains. Most upper hills and mountains are wooded, but up to a quarter of non-agricultural uplands are bare. The mountainous topography leaves few areas suitable for farming, and only 1.85 million ha (15 percent of the total land area) is currently cultivated. Agriculture remains a major contributor to gross domestic product (GDP) and receives high priority in DPR Korea policy-making.
The main climatic constraint in DPR Korea is the short growing season caused by rigid winters and rainfall that is concentrated in the May to September period. Uncertain rainfall in spring can make main crop establishment difficult, and heavy rains in July and August can damage crops and generate upland erosion. In recent years, agricultural production has declined because of natural disasters (hailstorms, droughts and floods), environmental degradation and economic difficulties.
To address the hydrological imbalance, DPR Korea requested assistance from FAO’s TCP for the protection and sustainable development of upland catchments. Project TCP/DRK/0169 (extended as TCP/DRK/3002(A)) was conceived to strengthen forest, soil and water conservation activities in order to reverse upland degradation. The project also aimed to introduce sustainable hillside farming and gradually to phase out agriculture from steeper slopes. It followed a participatory and integrated watershed management approach, so as to make sustainable natural resources management compatible with rural livelihoods.
The project achieved the objectives by implementing following main activities:
- assessment of the current situation of upland natural resources and the required management measures, through collection of data from the government units concerned;
- assessment of the situation of 23 county-level forest nurseries, with rehabilitation of damaged nurseries and/or establishment of new ones;
- development of appropriate integrated watershed management approaches and technologies.
- selection of two small catchments - Sinrak-Ri in Yonsan county and Janghang-Ri in Sangwon county - as pilot demonstration and training sites for watershed management approaches;
- development of a comprehensive watershed management plans for these two sites. Afforestation, agroforestry and intercropping trials and sediment monitoring on sloping fields and in rivers have been initiated.
- development of a training programme on participatory integrated watershed management, for technicians from the national units concerned;
- capacity building at different national levels, which resulted in trained officials and technicians, who are able to implement integrated watershed management. Capacity created by the project is expanding beyond the project context; for instance, the Academy of Forest Sciences is developing a watershed management plan for the Taedong River, which flows through Pyongyang City.
- Implementation of the project has promoted closer collaboration among government agencies at different administrative levels. A particularly interesting experience was a national workshop in 2003, which focused on preparation of a medium- and long-term participatory integrated watershed management investment programme for DPR Korea. The event was attended by approximately 50 government officers, scientists, county field staff and representatives from international organizations.
- Following this workshop, the government endorsed the medium- to long-term investment programme for participatory and integrated watershed management in DPR Korea. The investment programme has a modular structure and includes eight project profiles covering capacity building in the Ministry of Land and Environment, participatory management of critical watersheds, development of a watershed information system, development of watershed research capacity, land protection and stream treatment in selected locations, sustainable forest management, and control of forest insect pests. | <urn:uuid:90b0764a-d435-4d3b-af8e-361a89d113ba> | CC-MAIN-2016-26 | http://www.fao.org/forestry/watershedmanagementandmountains/74910/en/ | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393533.44/warc/CC-MAIN-20160624154953-00036-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.932488 | 780 | 2.59375 | 3 |
by Sunny McClellan Morton | Apr 2, 2013
We think of college attendance as a relatively modern phenomenon, and for many Americans, it is. A lot of families celebrate stories of immigrant or indigent ancestors who worked hard (and against the odds) to send the first family member to college. It was a gesture of hope for future generations: an investment meant to lift a family out of working-class poverty and into gentler lives.
Other families have deep traditions of higher learning, with Ivy League-level credentials plastered all over their family trees. Whether your clan boasts pedigreed professors or just a random seminary student, their attendance at an institution of higher learning may mean grade-A genealogical fodder for you. You just need to school yourself about U.S. universities and their records.
In colonial days, higher education was mostly the privilege of the privileged. Their list of options was much shorter than those of today's students, though. Colonial-era colleges included:
Other schools, like Transylvania University, existed before 1776 but without collegiate degree-granting programs.
As the young country expanded its western horizons, colleges began popping up in frontier towns, sometimes before the towns themselves existed. Many were sponsored by churches and included seminary learning in the curriculum. Some of these schools were short-lived. Those that survived often grew slowly, with setbacks and sometimes even temporary closures during times of war or economic stress.
Most American colleges initially catered to white men, with the notable exception of Oberlin College in Ohio, which admitted women and African Americans from its founding in 1833. Colleges and seminaries for women sprang up in the mid-1800s. Beginning in the 1860s, many male-only institutions began admitting women. Nearly 80 all-black colleges opened in the second half of the 1800s; racial integration at the university level wasn't the norm until well into the 1900s.
What sources can teach you more about your ancestor's college experience? There are several types to look for.
Yearbooks as we know them today began appearing in the 1880s and were common by the early 1900s. (Loose class or team photos may have been taken in pre-yearbook years, which you might find in school or community archives or in published histories.) Yearbooks can confirm a relative's attendance, contain important photographs and reveal extracurricular interests. If you leaf through the book, you can usually get an overall sense of student body demographics, fashions, values, school traditions and current events.
You can find many yearbooks online, but it's smart to contact the university archives first. The archivist will know which years yearbooks were printed. Usually the archives has a full or nearly-complete set of yearbooks, which they may post online. For example, Kent State University lets you download yearbooks from 1914 through 1985 in PDF, EPUB or Kindle format. Case Western Reserve University in Cleveland, Ohio has a collection of yearbooks dating to 1867 in its digital library.
Otherwise, the Internet is a great place to search for (and inside) old yearbooks. Archives.com members can peruse digitized college and university yearbooks. "U.S. School Yearbooks" is an Ancestry.com database with over 200 million records indexed in junior high, high school and college yearbooks. You can purchase old yearbooks through websites like ThisOldYearbook and eBay, or through used booksellers like AbeBooks.
Campus publications and memorabilia can help you learn more about a relative's participation in student leadership, Greek organizations, performing arts, sports and more. Let's say you know your relative played on a campus basketball team. Tell the university archivist the student's name and year of participation. The archivist may help you determine whether the relative played on a competitive university team or simply participated in intramurals. You may be able to request copies of game programs, ticket stubs, team photos, news clippings of games, the season record, and more.
Student and campus newspapers are a hit-and-miss, mostly 20th-century source. You may luck into an article that mentions your relative; more likely you'll learn about daily or weekly doings on campus. Again, check with a university archivist to see if any campus newsletters or newspapers ran during the year(s) your relative attended, and where surviving issues are now. If microfilmed copies are available, request them through interlibrary loan. Otherwise, find out if they are accessible to the public and then travel to the college to read them
Student records are potentially the richest college source available on your ancestor, but access can be a problem. Applications, registrations, course schedules, letters of reference, grade reports, disciplinary records, transcripts from previous schools, student work and more may appear in student files.
The release of information in student records is governed by the Family Educational Rights and Privacy Act of 1974 (FERPA). FERPA protects information (both academic and nonacademic) relating to living students. But that doesn't mean your deceased relative's records are available for the asking. Rather, it means the college has the right to set its own rules for release of deceased students' information. Some schools don't release individual records, no matter how old. Others require a waiting period (say, 25 years after the student's death) or proof of death, like a death certificate. Make friends with that university archivist and politely request their policy on releasing an old student record, if it exists. Provide as much information as possible about the person so they are confident they're sending the right records.
Alumni records may exist separately from student records and may not be protected by FERPA. Here you'll find information submitted by the alumnus after graduation: biographical updates, photographs and more. You may also find news clippings or other memorabilia added to the file by the alumni office. Students who had illustrious careers or remained active in the alumni community are most likely to have interesting alumni files. Contact the school's alumni office for these records.
You may find yourself eventually digging into published histories (of the college, its sports teams or even women's or minorities' education) or contacting the offices of Greek or other collegiate organizations. Or you may find yourself turning to standard genealogical resources--maps, city directories, county histories and the like--to learn more about the school and its neighborhood. Whichever direction your research takes you, you'll emerge well-schooled in an important part of your ancestor's life.
Start your free trial today to learn more about your ancestors using our powerful and intuitive search. Cancel any time, no strings attached. | <urn:uuid:caa7f19e-62fd-478d-9e96-c3744becbc1a> | CC-MAIN-2016-26 | http://www.archives.com/experts/morton-sunny-mcclellan/research-a-relatives-college-days.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783394937.4/warc/CC-MAIN-20160624154954-00095-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.962935 | 1,382 | 3 | 3 |
Caricature first emerged as a distinctive visual language in sixteenth-century Italy, where it was developed by artists such as Annibale Carracci and the sculptor Gian Lorenzo Bernini. The distortion of facial features and physical form, which are its stock-in-trade, grew out of a fascination with the grotesque and with a long-held belief that individuals’ external features revealed—and were shaped by—their interior moral qualities. Exploited initially as a fashionable diversion, caricature was cultivated by polite society in eighteenth-century England and commercialized in Rome where artists such as Pier Leone Ghezzi made a living by offering humorous portrait sketches to aristocrats visiting the city on the Grand Tour.
By the 1760s, caricature began to dominate graphic satire. Though a vigorous culture of political print had grown up during the seventeenth century in the Netherlands and England, it relied for polemical effect on emblems, visual puns and rebuses, rather than comic distortion. By the 1760s, this frequently cryptic visual language gave way to more direct forms of caricature. In England, artists such as George Townshend, James Gillray, and Thomas Rowlandson mercilessly lampooned contemporary fashion and offered searing commentary on political events, treating statesmen and members of the royal family with equal disrespect.
The market for cartoons—most commonly produced as individual, hand-colored engravings—flourished in Georgian England. In other countries, such as France, censorship inhibited the growth of political caricature, which was often produced clandestinely before the 1789 revolution. In America, the first recorded cartoon was produced in 1747 by Benjamin Franklin, who quickly recognized the medium’s potential for galvanizing popular opinion. Yet it was England, with its more developed urban centers, its growing national and regional press, and its relatively permissive laws, that provided the lead for the international explosion of graphic satire in the nineteenth century.
Unless otherwise specified on this page, this work is licensed under a
Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License. | <urn:uuid:dedecfc6-72cd-4f5c-b825-4968a92a5bce> | CC-MAIN-2016-26 | http://library.duke.edu/exhibits/abusingpower/birth.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397748.48/warc/CC-MAIN-20160624154957-00182-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.96263 | 430 | 3.796875 | 4 |
This statement, written in June 1983 by the advisory committee of the People with AIDS Coalition, launched the PWA self-empowerment movement. The document is a valuable reminder of AIDS history in this 20th year of the epidemic.
We condemn attempts to label us as "victims," a term which implies defeat, and we are only occasionally "patients," a term which implies passivity, helplessness, and dependence upon the care of others. We are "People With AIDS."
Recommendations for All People
- Support us in our struggle against those who would fire us from our jobs, evict us from our homes, refuse to touch us or separate us from our loved ones, our community or our peers, since available evidence does not support the view that AIDS can be spread by casual, social contact.
- Not scapegoat people with AIDS, blame us for the epidemic or generalize about our lifestyles.
Recommendations for People with AIDS
- Form caucuses to choose their own representatives, to deal with the media, to choose their own agenda and to plan their own strategies.
- Be involved at every level of decision-making and specifically serve on the boards of directors of provider organizations.
- Be included in all AIDS forums with equal credibility as other participants, to share their own experiences and knowledge.
- Substitute low-risk sexual behaviors for those which could endanger themselves or their partners; we feel people with AIDS have an ethical responsibility to inform their potential sexual partners of their health status.
Rights of People with AIDS
- To as full and satisfying sexual and emotional lives as anyone else.
- To quality medical treatment and quality social service provision without discrimination of any form including sexual orientation, gender, diagnosis, economic status or race.
- To full explanations of all medical procedures and risks, to choose or refuse their treatment modalities, to refuse to participate in research without jeopardizing their treatment and to make informed decisions about their lives.
- To privacy, to confidentiality of medical records, to human respect and to choose who their significant others are.
- To die -- and to LIVE -- in dignity.
Back to the July 2001
Issue of Body Positive | <urn:uuid:87957f58-74e0-47ea-a523-96a25e0eb39e> | CC-MAIN-2016-26 | http://www.thebody.com/content/art30903.html?nxtprv | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397873.63/warc/CC-MAIN-20160624154957-00153-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.961901 | 444 | 2.609375 | 3 |
Getting an Accurate Count
Counting every person residing in the United States is a difficult endeavor and despite the Census Bureau's best efforts, some households are missed by the count; some households are counted more than once; and still others respond with incorrect information.
However, because the accuracy of the census directly affects our nation's ability to ensure equal representation and equal access to important governmental resources for all Americans, ensuring a fair and accurate census must be regarded as one of the most significant civil rights issues facing the country today.
The 2010 census will be faced with new challenges to stakeholders, including a larger, more diverse, and more mobile population; the displacement of thousands by natural (Hurricanes Katrina and Rita) and human-made (foreclosures) disasters; increased concerns about privacy and confidentiality in a post 9/11 environment; the potential chilling effect of anti-immigrant policies; and, most recently, a severe economic recession.
In addition, the Census Bureau has a number of significant internal challenges, from funding shortfalls, to unfilled leadership positions, to the failure of major information technology systems.
- Identifying Areas at Risk for Undercounting
- Reasons Behind Inaccuracies in the Census
- The Accuracy of the 2000 Census | <urn:uuid:4ac6cd14-9bc1-4cd6-8495-00da453e6267> | CC-MAIN-2016-26 | http://www.civilrights.org/census/accurate-count/ | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783403826.29/warc/CC-MAIN-20160624155003-00022-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.935687 | 253 | 3.28125 | 3 |
Brief Summary
The group of small African parrots known as lovebirds consists of the nine species in the genus Agapornis: the Grey-headed Lovebird (A. canus), Red-faced Lovebird (A. pullarius), Black-winged Lovebird (A. taranta), Black-collared Lovebird (A. swindernianus), Peach-faced Lovebird (A. roseicollis), Fischer's Lovebird (A. fischeri), Yellow-collared Lovebird (A. personatus), Nyasa Lovebird (A. lilianae), and Black-cheeked Lovebird (A. nigrigenis).
The Grey-headed Lovebird (A. canus) is native to Madagascar, but has been introduced to the Comoro Islands, Réunion, Rodrigues, and the Seychelles. These birds are generally encountered in flocks of 5 to 30 or more individuals, in flight or feeding (mainly on grass seeds) on the ground. They are generally common in Madagascar (at least in more open country in coastal regions) and in the Comoros, but are present in only small numbers in Réunion and Rodrigues and have a limited distribution in the Seychelles. There are many Grey-headed Lovebirds in captivity as well.
The Red-faced (or Red-headed) Lovebird (A. pullarius) has a geographic distribution that overlaps with that of the Black-collared Lovebird over much of central Africa, with Fischer's Lovebird in the area around southern Lake Victoria, and with the Black-winged Lovebird in southwestern Ethiopia; its range also approaches that of the Peach-faced Lovebird in the Cuanza River region of Angola. It is distinguished from these and other lovebirds by the combination of a green upper breast with a red (or orange) crown, face, and throat. This species has a broad but patchy distribution across West and Central Africa, inhabiting moist lowland savanna, riverine woodland and scrub, and also more open habitats, including abandoned plantations, cultivated land, and pasture. It is generally found below 1500 m (but up to 2000 m in Uganda). Flocks contain up to 30 birds (usually fewer), but these break into pairs for breeding. Flocks roam widely to find food (mainly grass seeds) but return to a communal roost. In captivity, these lovebirds often sleep hanging upside down. Red-faced Lovebirds nest in tree cavities (usually ones excavated by a woodpecker), in holes dug in the side of an arboreal ant or termite nest, or occasionally in terrestrial termite mounds. Significant numbers of Red-faced Lovebirds are trapped for sale as cagebirds.
The Black-winged Lovebird (A. taranta) is endemic to the highlands of Ethiopia, where it may be common in montane forests (it is relatively uncommon in lower altitude savanna). These birds are usually observed in small flocks of 8 to 20 at the tops of taller trees. They roost communally in tree cavities (often an old woodpecker or barbet nest). They feed largely on tree fruits, including Ficus figs and juniper berries. Large numbers are captured for the cagebird trade and many are in captivity outside their range. In captivity, these lovebirds occasionally rest upside down. Captive females have been observed carrying nesting material tucked into almost any part of their plumage. This is the only lovebird known to use its own feathers in nest construction.
The Black-collared Lovebird (A. swindernianus) occurs in two to four disjunct populations in West and Central Africa, where it inhabits lowland evergreen rainforest, both primary and secondary, usually below 700 m but reported up to 1800 m in Uganda. Other than female Grey-headed Lovebirds, Black-collared Lovebirds are the only lovebirds with green heads. The Red-faced Lovebird, which has a partially overlapping range, has a red bill (not blackish as in the Black-collared Lovebird) and no collar. Black-collared Lovebirds are generally encountered in small flocks flying swiftly over the forest canopy. They are quite shy and rarely encountered near ground level. They appear to feed largely on Ficus fig seeds, but also take other seeds and small fruits, as well as adult and larval insects.
The Peach-faced (or Rosy-faced) Lovebird (A. roseicollis) is found in southwestern Africa in dry, wooded country up to 1500 m. Like many lovebirds, Peach-faced Lovebirds are typically seen in small, fast-flying flocks. The diet consists mainly of seeds, sometimes taken from the ground. Peach-faced Lovebirds are very dependent on water. Capture for the cagebird trade has seriously impacted populations in southern Angola.
The Fischer's Lovebird (A. fischeri) is virtually restricted to Tanzania south and east of Lake Victoria, with its range centered on the Serengeti. This species is found in wooded grasslands as well as (especially in the western part of its range) more open grasslands and cultivated areas. Fischer's Lovebirds feed largely on seeds. They drink every day and are often found near water. This species breeds colonially. Feral populations are present in Mombasa, Kenya, and elsewhere where they apparently hybridize with Yellow-collared Lovebirds. The Fischer's Lovebird is distinguished from the Red-faced Lovebird (with which it co-occurs on islands in the south of Lake Victoria) by its golden brown collar, golden breast, and white eyering; it is distinguished from the Yellow-collared Lovebird (with which it overlaps narrowly at the southeastern margins of its range) by having an orange rather than yellow breast. Although Fischer's X Yellow-collared Lovebirds can be found in feral populations, these are not known from areas where the two species naturally occur together. In captivity (where any lovebirds may be seen!), the Fischer's combination of brown crown and nape, orange-red face, and blue rump distinguishes it. The Fischer's Lovebird is sometimes considered conspecific with (i.e., belonging to the same species as) the Yellow-collared Lovebird (and sometimes with the Black-cheeked and Nyasa Lovebirds as well). Fischer's Lovebirds are generally encountered in small flocks, often near water, and are usually quite tame and approachable. Although still quite common in some areas, and with large numbers in captivity outside its range, native populations may be endangered by the cagebird trade.
The Yellow-collared Lovebird (A. personatus) is native to the plateau in eastern and southern Tanzania. This species is easily distinguished by its blackish brown mask and bold lemon-yellow breast, which extends around the sides of the neck and nape to form a striking yellow collar. The Yellow-collared Lovebird is sometimes considered conspecific with (i.e., belonging to the same species as) Fischer's Lovebird (and sometimes with the Black-cheeked and Nyasa Lovebirds as well). This species is typically found in small flocks in well-wooded bushland and acacia thorn scrub, especially with scattered baobab trees, at 1100 to 1800 m.
The Nyasa Lovebird (A. lilianae) is found in several disjunct populations in southeastern Africa. Nyasa Lovebirds can be distinguished from all other lovebirds by the combination of orange-red face and throat and green rump and uppertail coverts. The similar Black-cheeked Lovebird has a dark hood. Nyasa Lovebirds are highly gregarious and generally encountered in noisy flocks of 20 to 100 or more birds. Non-breeders form communal roosts in tree hollows where 4 to 20 birds sleep clinging to the walls. Food consists mainly of grass seeds collected both directly from the plants and from the ground. Nyasa Lovebirds visit water often. Breeding is colonial. The Nyasa Lovebird is sometimes treated as conspecific with the Black-cheeked Lovebird and occasionally even with the Fischer's and Yellow-collared Lovebirds.
The Black-cheeked Lovebird (A. nigrigenis) is found in southern Zambia and, formerly, extreme northern Zimbabwe at Victoria Falls. The Black-cheeked Lovebird is sometimes treated as conspecific with the Nyasa Lovebird (from which it is separated by 100 to 150 km of unsuitable habitat) and occasionally even with the Fischer's and Yellow-collared Lovebirds. This species is found in specific types of medium-altitude deciduous woodlands, usually close to a reliable water source for daily drinking. Due in part to its extremely restricted range (perhaps only 6000 square km), this species is considered to be endangered.
(Collar 1997 and references therein; Juniper and Parr 1998 and references therein)
Fry, S N; Wehner, R (2002). Honey bees store landmarks in an egocentric frame of reference. Journal of Comparative Physiology A, 187(12):1009-1016.
Full text not available from this repository.
Honey bees are well known to rely on stored landmark information to locate a previously visited site. While various mechanisms underlying insect navigation have been thoroughly explored, little is yet known about the degree of integration of spatial parameters to form higher-level spatial representations. In this paper we explore the basic interactions between landmark cues and directional cues, which stand at the basis of our understanding of piloting mechanisms. A novel experimental paradigm allowed us independent manipulation of each parameter in a highly controlled environment. The approach taken was twofold: cue-conflict experiments were first conducted to examine the interactions between positional cues and directional cues. The bees were then successively deprived of sensory cues to question the dependence of landmark navigation on context cues. Our results confirm previous findings that landmark cues are used in concert with external directional cues if present. Conversely, the bees' ability to locate a food site was not disrupted in the absence of an external directional reference. Thus, bees store landmark memories in an egocentric frame of reference and only loose and facultative associations between visual memories and compass cues are formed.
Item Type: Journal Article, refereed
Communities & Collections: 07 Faculty of Science > Institute of Zoology (former)
Dewey Decimal Classification: 570 Life sciences; biology; 590 Animals (Zoology)
Date: 1 January 2002
Deposited On: 11 Feb 2008 12:16
Last Modified: 05 Apr 2016 12:15
On Tuesday, a Supreme Court decision temporarily halted implementation of the Environmental Protection Agency’s Clean Power Plan. The decision was prompted by a lawsuit from 29 states and state agencies challenging the EPA’s authority to impose the Clean Power Plan under the Clean Air Act. Implementation of the Clean Power Plan will remain suspended until June 2, 2016 at least, when a federal appeals court will consider the states’ challenge.
The Clean Power Plan is the first regulation to limit carbon emissions from existing power plants in the U.S. and it does so ambitiously, aiming to reduce electricity sector emissions to 32 percent below 2005 levels by 2030. Each state was assigned an emissions reduction target based on past emissions and capacity for future emissions reductions. Originally, states had until 2018 to create State Implementation Plans outlining how they would meet the targets, but this timeline could be altered depending on how the legal challenges play out.
When the Clean Power Plan was announced six months ago, states and industry groups that depend economically on coal were quick to attack the law, characterizing it as federal overreach. Tuesday’s Supreme Court ruling is a temporary win for the fossil fuel industry, but supporters believe the Clean Power Plan will ultimately be upheld by the federal appeals court. California, Colorado, Virginia and Washington have reported that they will continue implementing the Clean Power Plan despite the Supreme Court’s ruling, and 14 states have vocalized continuing support for the Clean Power Plan.
This blog first appeared as a contribution to #Livestockdebate hosted by the European coalition ARC2020 (Agriculture and Rural Convention 2020). You can read contributions to the #Livestockdebate from other experts at the ARC2020 website.
Last month, workers entered ten massive, confined turkey and chicken operations in Indiana and sprayed foam designed to suffocate the birds. When the cold temperatures froze the hoses, local prisoners were brought in to help kill the birds manually. Other operations shut down their ventilation systems, killing the birds as temperatures rose. More than 400,000 birds have been euthanized so far in an effort to contain a new strain of avian flu in the U.S. Last year, approximately 45 million birds were killed to contain the spread of a different avian flu strain in the U.S. These epidemics are not limited to poultry: two years ago, a massive piglet virus outbreak killed millions of pigs (an estimated 10 percent of the U.S. hog population).
One reason the TPP is in such trouble, especially in the United States, is that we’ve heard this story before. Passing NAFTA, CAFTA or other free trade agreements was supposed to mean more and better jobs, improved farm incomes and increasing prosperity all around. But that’s not what happened. In the wake of NAFTA, manufacturing jobs have evaporated, family farms have been decimated and income inequality has increased. Projections that this time around the TPP would generate increasing prosperity are met with a healthy dose of skepticism or outright disbelief.
Another part of the story is the strong opposition across borders. A big outcome of the NAFTA debate was the formation of strong ties among citizens’ groups in Mexico, the U.S. and Canada that refocused the discussion away from one country “stealing” jobs from another to a central emphasis on the role of transnational corporations in driving standards down to the lowest common denominator. An important element of the eventual defeat of the Free Trade Area of the Americas was the creation of the Hemispheric Social Alliance, allowing national and sectoral coalitions to coordinate analysis and actions across borders.
Last week Mexican civil society groups convened organizations from the NAFTA countries plus Peru and Chile to reenergize that collaboration in the context of TPP and build an action plan moving forward. It was great to see allies from Mexico and Canada, especially the coalitions that began during the NAFTA debate. It was inspiring to meet leaders from vibrant coalitions in Chile and Peru, as well as people working on digital rights and other issues that are relatively new in the trade debate.
It didn’t take long to test USDA Secretary Vilsack’s prediction that the poultry industry is prepared for a new outbreak of Highly Pathogenic Avian Influenza (HPAI). On January 8, Vilsack gave an interview to USDA Radio News in which he said, “I think we’re in a much better position to detect it more quickly, to respond more effectively within 24 hours to depopulate flocks if we see a reemergence of this.”
Earlier today, the USDA released a statement confirming an outbreak of H7N8 virus, a different strain of HPAI than the one that killed 48 million birds in the US this spring. This latest outbreak occurred in a commercial turkey flock in Dubois County, Indiana. Plans are under way to kill the flock and dispose of the carcasses. News of the outbreak sent stocks tumbling for all the major poultry processors.
It is too early to tell if this appearance of H7N8 virus will be as devastating as the last round of HPAI, which subsided in June of last year. While the Indiana outbreak was the first reported in the U.S. in almost seven months, avian flu has continued to wipe out millions of birds in other countries. France in particular has been hard hit, with over 30 flocks destroyed to prevent the spread of the disease.
The ability of the United States to make its own decisions regarding how, where and why to build transcontinental oil pipelines has been challenged by TransCanada Corporation, which sued the U.S. yesterday for the loss of potential future profits associated with the cancellation of the Keystone XL pipeline. The move represents a threat to both U.S. national sovereignty and national security, given the role of energy policy in protecting the homeland. The suit could also establish a precedent for challenging sovereign rights to address climate change through energy policy, not just in the U.S., but in any country that is party to the North American Free Trade Agreement (NAFTA).
The standing of TransCanada to sue the American government is provided not in any formal U.S. legal judiciary setting, but through rules laid down in a trade regime, NAFTA. The terms of this agreement, and other similar trade agreements, are designed to protect the rights of foreign investors over the rights of the states in which they are investing.
If successful, the suit will impose losses on U.S. citizens beyond those associated with sovereign rights and national security. TransCanada is asking for $15 billion in lost potential future profits. Furthermore, in an additional suit filed in Houston, Texas, TransCanada is seeking to limit the power of the President of the United States in setting U.S. energy policy by claiming that the Keystone decision was unconstitutional.
The World Trade Organization’s 10th Ministerial Conference, held in Nairobi, Kenya from 15-18 December came right on the heels of the final outcome of the 21st Conference of the Parties to the UN Framework Convention on Climate Change (UNFCCC). The contrasts were striking, and not just because of the shift from Europe to Africa, from northern winter to equatorial rains, and from environment to trade. There was also the level of interest: everyone who could not be in Paris was watching what went on there from afar, while few came to sit in the make-shift tents put up by the Kenyan Government as an NGO centre. The protest marches, organized by farmers’ organizations, gathered dozens of people rather than the several thousands who had come to WTO Ministerials past. The multinational lobbyists were few, many having turned their attention instead to plurilateral agreements such as the Trans Pacific Partnership, or TPP. Despite its long-standing support for the WTO and its agenda, The Financial Times newspaper did not even send its world trade editor. It seemed that the world could hardly have cared less.
With the recent conclusion of climate talks in Paris (see Ben Lilliston’s coverage here, here, here, and here), which included strong pushes for “Climate-Smart Agriculture” (CSA) by a variety of government, NGO and corporate actors, it’s worth returning to the recent conversations about agriculture at the FAO’s second Regional Agroecology Meeting. This meeting, which I attended in Dakar, Senegal from November 4-6 of this year, once again united scientists, civil society and members of government to discuss agroecology and its potential to improve small-scale food producers’ lives, support their extensive existing knowledge and improve environmental impacts from the agrifood system, from climate change to biodiversity.
When the text of a new global climate agreement reached by 195 governments was released this weekend, one word was conspicuously absent: agriculture. That doesn’t mean issues around how farmers produce food were entirely ignored; in fact, you can see agriculture’s shadow in nearly all parts of the Paris agreement—from national-level climate plans to climate finance to new initiatives on soil. But a clear path forward on how to limit agricultural greenhouse gas emissions and support more climate resilient agricultural systems is still too politically hot for governments to take on.
The decision to sidestep agriculture, at least temporarily, within the climate agreement was not surprising. Finding common ground on agriculture and food security is notoriously difficult in international settings (see long-stalled World Trade Organization negotiations). Much of the intransigence around agriculture lies in the enormous political and economic power held by an increasingly small number of global agribusiness corporations, who have little interest in new rules that don't fit with their current business model. There is strong resistance to new regulations for agribusiness sectors that are high greenhouse gas (GHG) emitters (particularly the big fertilizer and meat companies). After the Paris agreement was reached, the meat industry immediately put out a call to start aggressively lobbying governments to protect their interests.
On the eve of their Nairobi ministerial, WTO members should remember it is not food procurement policies in developing countries like India but unfair US agricultural subsidies which threaten free trade and farmer livelihoods across the world
On December 15, the world's trade ministers will gather in Nairobi, Kenya, for the tenth attempt to craft a new set of trade rules under the World Trade Organisation (WTO). The so-called Doha Development Round (DDR), launched in Doha, Qatar, in 2001, promised to right the imbalance in previous trade negotiations that had favoured the United States, European countries, and other developed nations. Reforming unfair agricultural practices was at the centre of the Doha agenda.
On the eve of the Nairobi ministerial, that agenda itself is under threat. The US, EU, and Japan have proposed jettisoning the Doha agenda and the progress made before negotiations broke down in 2008. They have dismissed commitments made two years ago in Bali, Indonesia, to resolve objections to India’s ambitious National Food Security Act as an unfair subsidy to farmers. Agriculture, it seems, is barely on the Nairobi agenda.
Going along with the West would be a costly mistake for developing countries. They may well be facing a new era of low crop prices in which highly subsidised crop production in the US and other rich countries creates overproduction and dumping of cheap goods on global markets. If ever there were a need for new agricultural trade rules, now would be the time.
Changing economic landscape
Earlier this week, a leaked internal European Union document on climate negotiation priorities (posted by Corporate Europe Observatory) made clear that any global climate deal would not mention trade. Also this week, a group of concerned business associations (including the biotech industry) hurriedly wrote (subscription required) U.S. Secretary of State John Kerry warning him not to agree to anything that could impact trade rules established to protect intellectual property rights. Both documents show why powerful interests want to keep trade and climate agreements separate despite the numerous ways trade rules have not only facilitated climate change but also limit our ability to set strong climate policy in the future.
The trade-climate disconnect exists not only within the global climate treaty being negotiated here in Paris. The Trans Pacific Partnership (TPP) does not include anywhere in its 5,000 plus pages the words "climate change." The latest version of a U.S. Customs bill (subscription required) coming out of the House of Representatives forbids the President from considering climate impacts in future trade agreements.
The Psychology major provides an opportunity to study the development of the individual in relation to his/her mental processes, emotions, and cognition.
The emphasis of this psychological development will be located within the context of the individual’s larger social environment to include the family, the neighborhood, and larger cultural influences.
This focus of psychological processes and human behavior within the social environment will be addressed across the person’s lifespan.
The possible impact of these systems on the development of the individual’s personality and identity will be addressed.
Part 1: General Education Core
(See Core Studies Requirements)
Part 2: Psychology Major
Prerequisites: PSY 140, PSY 141
39-40 credits: 21 credits in residence, 24 credits upper division. No more than 6 of the credits may be used to satisfy core requirements, major requirements, or other minor requirements. Required:
EDPSY 420     Learning Theory                           3
HD 311        Prenatal through Early Childhood          3
HD 312        Mid-Child through Adolescent              3
HD 313        Adult, Aging, and Dying                   3
HD/HHK 320    Human Sexuality                           3
PSY 314       Abnormal Psychology                       3
PSY 401       History and Systems of Psychology         3
PSY 402       Personality Theory                        3
PSY/REL 411   Psychology of Religion                    3
PSY 416       Psychological Testing/Assessment          3
PSY 430       Counseling Theory                         3
SS/BUS 393    Research Methods and Applied Statistics   4

Select one of the following courses:

PSY/BUS 321   Organizational Behavior                   3
SW 479        Selected Topics                           2
SW 481        Family Violence Across the Lifespan       2
SW 482        Child Welfare                             2
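As a sanity check on the stated 39-40 credit range, the twelve required courses total 37 credits, and the single elective adds 2 or 3 more. A small sketch (course data transcribed from the table above) confirms the arithmetic:

```python
# Credit totals for the Psychology major, transcribed from the course table above.
required = {
    "EDPSY 420": 3, "HD 311": 3, "HD 312": 3, "HD 313": 3,
    "HD/HHK 320": 3, "PSY 314": 3, "PSY 401": 3, "PSY 402": 3,
    "PSY/REL 411": 3, "PSY 416": 3, "PSY 430": 3, "SS/BUS 393": 4,
}
# One elective is chosen from this group.
electives = {"PSY/BUS 321": 3, "SW 479": 2, "SW 481": 2, "SW 482": 2}

base = sum(required.values())                       # 37 required credits
totals = sorted({base + c for c in electives.values()})
print(base, totals)                                 # 37 [39, 40]
```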
- Articulate the major concepts, theoretical perspectives, research findings and historical trends in psychology.
- Apply basic research methods in psychology.
- Implement critical thinking skills to identify and solve problems related to mental processes and behavior.
- Weigh evidence, tolerate ambiguity, and act ethically as they implement their knowledge and skills in the field of psychology.
- Demonstrate oral communication skills effectively in various formats such as group discussion, debate, and lecture for various purposes such as informing, defending, explaining, and persuading.
- Exhibit professional writing conventions.
- Locate and use relevant databases, research, and theory to plan, conduct, and interpret results of research studies.
(Updated Sept. 2015)
Map Of Ireland
This heraldic map of Ireland shows the locations of over thirteen hundred clans and families. The detail and artwork are exceptional.
Approximate US dollar price: $31.73
Hurry up and buy it while stocks last.
Delivery: 2/3 weeks to USA
The map measures 14 ½ x 23 ½ inches and shows the locations of over thirteen hundred clans and families, some of which go back over a thousand years. Gaelic, Anglo Norman and Norse names predominate with later additions of Welsh, Scots and English elements. Also featured are miniature illustrations of St. Patrick, and the castles of Bunratty and Dunluce.
Also featured are County boundaries, main rivers, mountains and glens and principal cities and towns and the Crests of Ulster, Munster, Leinster and Connaught.
This is an exquisite work of art, printed on high-quality paper and ready for framing. It will appeal to historians and genealogists alike. It is posted in a strong protective tube and we offer a lifetime guarantee. It will enhance any home where the family is proud of its Irish heritage.
Irish surnames derive from a medley of origins, representing Gaelic, Nordic, Anglo-Norman, Welsh, Scots and English strains which have intermingled for over fifteen hundred years. Historically, Ireland was one of the first countries in Europe to re-establish surnames after the disintegration of the Roman Empire.
The oldest names are Gaelic, usually preceded by the famous 'O' meaning 'grandson of' and 'Mac' meaning 'son of'. Thus we find, amongst others:
O'Neill, O'Brien, O'Connor, O'Donnell, O'Grady, and McCarthy, McGuiness, and Macmurrough. Many of these names are descriptive, either of physical appearance or character, e.g. Reilly (brave), Quinn (intelligent), Kennedy (helmeted), Coneely (courageous), Dempsey (proud), Sullivan (black-eyed) and O'Toole (mighty-people).
The Anglo-Norman invasions of the twelfth century injected a new strain which, added to the earlier Viking incursions, complicated the picture still further. Interestingly, the Irish Department of Foreign Affairs states that over 50% of the Irish population belongs to blood group O, directly linking that population to Nordic origins.
Saint Patrick, featured on the map of Ireland.
Dunluce Castle, featured on the map of Ireland.
Many of these Norman names actually referred to place names in Northern France or in Wales (Walsh) that the invaders came from, for example Cusack (Cussac), Lyons (Lyons), De Lacey (Lacey), Joyce (Jose), and French.
Other names refer to original occupations: Falconer, Smith, Cooke, Taylor, Mason, Archer and Harper. Further complications arose when the old Gaelic names were transposed into English; thus Carey, derived from the Gaelic O'Ciardha, became Carew or even Carr.
Global education promotes understanding of social justice and human rights and the contribution they make to peace building and conflict resolution.
Global education promotes understanding of sustainable futures and the importance of developing skills of critical and creative thinking and ethical understanding.
Global education promotes understanding of identity and cultural diversity and its importance in developing intercultural understanding and personal and social capability.
Global education promotes understanding of our interdependence and the importance of working for a just future in which all people can meet their basic needs sustainably.
Poverty and food security
Poverty rates were halved between 1990 and 2010, but 1.2 billion people still live in extreme poverty.
Our changing world
The world has changed greatly in many ways over the last 20 years – some good and some bad.
Islands, celebrations and threats
The 2014 International Year of Small Island Developing States celebrates the unique contributions of, and threats to, tiny low-lying islands around the world.
Its First Hundred Years
The crossroads happened also to be in the center of an area of rich farmland, which appealed to the more provident of the later comers to a new area. First comers in the region, during the 1830's and 1840's, preempted the more desirable homesites along wooded streams where fuel, water, game and building material were more abundant. Those who settled the immediate Nixa area came along in the 1850's, and while they appreciated the level land and the richer soil, they had some inconveniences due to the necessity of digging wells or hauling water from a distant spring, and in many cases the need of hauling in logs for their buildings, since much of the area was covered with a tangled growth of brush and vines with only occasional clumps of trees.
First community services were set up to cater to the travelers on the old Wilderness Road from Springfield south, and on the less traveled road westward from Ozark. Among the first of these were a store, although who established and ran it is not known, and a blacksmith shop set up in his barn by Nichols Alexander Inman.
Born December 17, 1831, in the mountains of eastern Tennessee, "Nick" Inman migrated to Missouri before the Civil War and established a blacksmith shop in partnership with Joe
This painting captures the infamous crackdown against striking workers during the Homestead Steel strike of 1892. The incident has become a hallmark event in the history of labor and industry.
Raymond Simboli painted the subject of the strike in two versions. The artist frequently repeated themes, often experimenting with different aesthetic approaches. Both compositions include some of the same figures, such as the horse to the left; the central figure with upraised, bent right arm; the man with the hat at the bottom central edge; and the two wrestling figures toward the bottom right corner. Simboli's earlier version of the painting, completed c. 1935-1940, was more traditionally representational, with greater detail applied to the various figures and forms (2008.74.6). This painting, the second version, is Cubist-inspired and abstracted. It is a dynamic depiction of the chaotic scene filled with crowds of figures and complex space. Simboli rendered the different figures with uniformity and eliminated significant individual details; this, combined with the flattened, compressed space, enhances the sense of abstract design.
Beta cells are responsible for creating and releasing the hormones insulin and amylin, which serve to regulate glucose levels in the blood. They make up 65 to 80% of the cells in the islets of Langerhans, the endocrine structures in the pancreas. In addition to the hormones they produce, these cells also release a byproduct of insulin production called C-peptide, which aids in the repair of the muscular layers of the arteries, thereby preventing neuropathy and similar complications of vascular deterioration.
A base level of insulin is maintained in a healthy person's pancreas at all times, but more is released and created in response to a spike in blood glucose, such as that accompanying the digestion of carbohydrates. Beta cells respond to the body's glucose levels by releasing that extra insulin when it is needed. They are able to respond rather quickly to a spike in blood glucose, usually in about ten minutes. Amylin, also called islet amyloid polypeptide (IAPP), works in conjunction with insulin by regulating glucose levels in the blood in a more short-term manner.
People who suffer from diabetes have malfunctioning beta cells. In diabetes type I, the body's immune cells destroy these cells, while in diabetes type II, they gradually stop functioning over time. In both types, the lack or reduction of insulin leads to hyperglycemia, or abnormally high blood sugar. Insulin replacement therapy is mandatory for treating diabetes type I and may be required for advanced cases of type II.
Another condition affecting the beta cells is insulinoma, a rare pancreatic tumor derived from these cells that results in the unregulated release of insulin, leading to hypoglycemia, or low blood sugar. Medication may be used to regulate this condition, but the only definitive treatment is surgical removal of the tumor. About 2% of people who undergo this surgery develop diabetes type II as a result. In rare cases, a pancreatic tumor releasing excess insulin is cancerous, in which case it is treated with chemotherapy.
GLOWING Volcano & Rainbow Eruptions
What could be more fun than a glow volcano & erupting rainbows?
Rosie absolutely LOVED this activity! We made SO MANY ERUPTIONS, each one more beautiful than the last. Not only was this activity super fun and beautiful but also super simple, making it one of my all time favorites so far.
GLOWING Volcano Eruptions Materials
We started by making a glow in the dark volcano. In a tall glass we added roughly 1/2 cup of baking soda along with 1 teaspoon of orange glow in the dark paint. Then we poured in the vinegar (roughly 1/2 cup).
Simple & amazing Science! We made our volcano erupt over and over. We did find that stirring the paint into the vinegar helped the entire volcano to glow intensely. I recommend mixing the paint in to make the vinegar glow before adding it to your volcano.
After a bit of play with our volcano we moved on to make GLOWING rainbow eruptions. These were so cool! All you have to do to make the rainbow eruptions is add varying colored glow paints to your baking soda before pouring in the vinegar. You can add one color paint at a time or a bit of each color of the rainbow to make ERUPTING RAINBOWS! We made many eruptions, some with one color and some with all the colors at once. Every single eruption was stunning- even when we turned the lights on!
Add a squirt of dish soap to make the eruptions move slower and last longer
Photographing in the dark is tricky. The eruptions were so vibrant that all the photos of Rosie were very dark. This is the best shot I got, but her wide-eyed expressions while we played will be remembered for a long time.
A Few Tips: You can find glow in the dark paint in a variety of colors at Michaels, Walmart, or online here. The fluorescent paint we used can be found here. If using fluorescent paint, you will also need a blacklight. We got ours at Walmart for $10. If using glow in the dark paint, don't forget to charge it by a light source before play.
Responding to the publication of Ofsted's report ‘No Place for Bullying’, Chris Keates, General Secretary of the NASUWT, the largest teachers’ union, said:
"The NASUWT welcomes this report, in particular its recognition that bullying takes a variety of forms and some children and young people are more vulnerable than others.
"This prejudice-related bullying is often characterised by abusive behaviour, intolerance or ostracism on the grounds of an individual's gender, ethnicity, body image/size, sexuality, disability, age, religion or belief.
"To be tackled effectively bullying has to be recognised, understood and taken seriously. Regrettably, it is still too often dismissed as teasing or joking or just part of growing up.
“Bullying blights the lives of children and young people. It affects their health, self-esteem, confidence and educational progress. In some cases it has led to suicide. Its effects can last a lifetime.
"Schools need help and support, as the report highlights, to tackle bullying and to create a climate in which difference and diversity are recognised, respected and celebrated.
"Regrettably, all the excellent support, guidance and information that was previously available to schools was discarded by the Coalition Government when it came to office and replaced with a few inadequate paragraphs of advice which amount to little more than telling schools that 'bullying is bad and you shouldn't tolerate it’.
"This report demonstrates that the obsessive obliteration of vital information and guidance by the Coalition Government is leaving children and young people highly vulnerable and is placing additional burdens on schools as they have to use their own time and resources to plug the gap.”
Notes to editors
The NASUWT led the campaign to secure recognition of the problem of prejudice-related bullying and was instrumental in securing a detailed suite of support and guidance for schools provided by the previous Labour Government.
The NASUWT has recently published a report highlighting the increase in the workplace bullying of staff in schools. A copy of the report is available on request.
Journalist and acting press officer
Campaigns and Communications Team
0121 457 6250 / 07867 392 746 | <urn:uuid:73a1fa39-e378-440c-8318-d18547cea936> | CC-MAIN-2016-26 | http://www.politics.co.uk/opinion-formers/nasuwt-the-teachers-union/article/nasuwt-bullying-blights-lives-and-can-lead-to-suicide | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397428.37/warc/CC-MAIN-20160624154957-00164-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.963899 | 453 | 2.59375 | 3 |
On-line version ISSN 1996-7489
S. Afr. j. sci. vol.110 no.3-4 Pretoria Feb. 2014
Centre for Invasion Biology, Department of Botany and Zoology, Stellenbosch University, Stellenbosch, South Africa
South Africa is home to 6% of the world's approximately 370 000 plant species, making it the country with the richest temperate flora in the world. This dazzling diversity includes many large genera, and it is not often that a monograph appears that describes an entire, large genus. Lachenalia (also known as Cape hyacinths or viooltjies) is one such large genus. It has 133 known species that are confined to South Africa and (marginally) southern Namibia. These endemic plants have been popular with specialist bulb growers worldwide for over 100 years. The publication in 2012 of a comprehensive account of the genus marks the culmination of the life's work of two 20th-century South African plant taxonomists whose work between 1929 and 2012 has spanned more than eight decades.
Early records of Lachenalia date back to the late 17th century. In 1880, the Kew botanist John Baker published an account that described 27 species, divided among six genera. Baker later described more species, which culminated in 1897 in a monograph (published in the 6th volume of Flora Capensis) that recognised 42 species in five sub-genera. Most of the subsequent taxonomic work was done by Ms Winsome Barker, first curator of the Compton Herbarium at Kirstenbosch. Her first publication on the genus appeared in 1930, and over the next 59 years she described 47 new species and 11 new varieties. It was always her intention to publish a monograph on the genus, but the goal ultimately eluded her. Her last taxonomic description appeared in 1989, and she passed away in 1994 at the age of 87. In 1978, Graham Duncan took up a position at the Kirstenbosch Botanical Gardens, where he met and was influenced by Barker. Duncan is now Curator of the Bulbous Plants Living Collection at the Kirstenbosch Gardens, and over 35 years he has described a further 38 species of Lachenalia. In 1988, he published The Lachenalia Handbook, which illustrated and described 88 species. The handbook did not provide a comprehensive coverage of the genus, and was intended 'to collate available information so as to provide horticulturists and informed gardeners with a list of valid species names....and notes on identification and cultivation'. Almost another quarter of a century was to pass before the final goal of complete treatment was to be realised.
The information in this book arises from a combination of a great deal of searching in the field, horticultural efforts to grow and propagate specimens, and scientific endeavour. As a horticulturalist, Duncan has gained enormous insights from working with this genus for over three decades. He has combined this experience with scientific study, having recently completed an MSc degree that dealt with the cladistics of the genus, and which provided a sound basis for this book. Attention is also drawn to the work of others who have shown that differences in basic chromosome numbers result in breeding barriers between sympatric Lachenalia species whose flowering periods overlap. Thus, although many hybrids have been produced by horticulturalists, they are very rare in the wild. The book even contains a portrait of the 6-year-old Charles Darwin holding a potted hybrid Lachenalia, dated ca. 1816. The fact that certain species of Lachenalia exhibit a high degree of morphological variation has in the past led to confusion regarding their taxonomy, but the book points out that most species are in fact distinct and easily identified. Species with a similar (morphological) appearance are also not necessarily more closely related. In the 1988 Lachenalia Handbook, species were arranged by similar appearance (for ease of identification), and this has been interpreted by other scientists as indicating genetic relatedness. These questions are discussed in some detail, and provide a sound basis both for identifying species and understanding how they are related.
As is the case with so many Cape plants (the genus is concentrated in the southwestern Cape), many (over 50) are critically endangered, endangered or vulnerable, most of which is a result of habitat destruction for agriculture or urban development. Duncan recalls some of the 'highs' of his field career, including finding thousands of flowering specimens of Lachenalia matthewsii that had been considered extinct for over 40 years and finding an elusive specimen of the inconspicuous L. maximilliani under their parked car after a long and unsuccessful search of the surrounding area.
Although it has had a long gestation period, the resultant monograph has been worth the wait - it is a beautifully produced book. There are separate chapters on the history of the genus, cultivation and propagation, ecology and conservation, and biology. The largest section (over two-thirds of the book) is devoted to the taxonomic treatment; 11 new species are described for the first time. Each species is illustrated by means of full-colour photographs, and distribution maps are also provided. There is also a list of insufficiently known names (e.g. Lachenalia cooperi - 'Type not found, described from a cultivated plant'), and excluded taxa (previously described species that subsequently merged with other species). Other useful features include a key to the species, a table showing month-by-month flowering times for all species, and a glossary of terms for the uninitiated. The book is a blend of art and science, enhanced by 39 colour paintings of species by nine artists (17 of them by Barker). No taxonomic monograph is ever the final word - new species will be discovered, and changes to nomenclature will occur. However, The Genus Lachenalia will no doubt stand as one of the significant milestones of South African botanical publishing for decades to come. It will be a very welcome addition to the libraries of botanists, horticulturalists, conservationists and collectors of Africana.
Department of Botany and Zoology,
Stellenbosch University, Matieland 7602, South Africa | <urn:uuid:9f9d22ed-e727-4aac-9f9c-0b015186b404> | CC-MAIN-2016-26 | http://www.scielo.org.za/scielo.php?script=sci_arttext&pid=S0038-23532014000200003&lng=es&nrm=iso&tlng=en | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397749.89/warc/CC-MAIN-20160624154957-00127-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.955933 | 1,291 | 3.046875 | 3 |
Quote from the Deep Sky Field Guide to Uranometria 2000:
Very bright nucleus with narrow sharp dark lane.
NGC 5866 is a bright Lenticular Galaxy in the Constellation of Draco and the largest member of a small galaxy group. Based on the published red shift (Hubble constant of 62 km/s per Mpc), a rough distance estimate is 35 million light years, with a diameter of about 50,000 light years.
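The distance figure follows from Hubble's law, d = v / H0. A minimal sketch of the arithmetic — note that the ~670 km/s recession velocity is an assumed value chosen to be consistent with the figures quoted above, not a number taken from this page:

```python
# Hubble's law distance sketch: d = v / H0.
H0 = 62.0            # km/s per Mpc (the value used in the text)
v = 670.0            # km/s: assumed recession velocity, not from the source
MPC_TO_MLY = 3.2616  # 1 Mpc is about 3.26 million light years

d_mpc = v / H0                 # distance in megaparsecs
d_mly = d_mpc * MPC_TO_MLY     # distance in millions of light years
print(f"{d_mpc:.1f} Mpc = roughly {d_mly:.0f} million light years")
# -> 10.8 Mpc = roughly 35 million light years
```

With these inputs the estimate lands on the ~35 million light years quoted in the text; a larger Hubble constant (modern values are near 70 km/s per Mpc) would give a proportionally smaller distance.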
Lenticular galaxies are disk shaped like spiral galaxies, but mostly consist of old or middle-aged stars, like elliptical galaxies. Some have prominent dust lanes. NGC 5866’s dust lane is “buried” inside of a large outer envelope that makes the galaxy look something like an elliptical on long photographic exposures. Strangely, the dust lane is tilted slightly from the plane of the rest of the galaxy. Also, there is some new star formation in this galaxy near the outer edge of the disk.
Some listings show NGC 5866 to be Messier-102. However, most consider M-102 to be just a mistaken re-observation of M-101.
George Normandin, KAS
July 28th, 1998
Revised: May 4th, 2002 | <urn:uuid:1b1f7c48-1da5-43e9-b10f-90b222a6e534> | CC-MAIN-2016-26 | http://www.kopernik.org/images/archive/n5866.htm | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395160.19/warc/CC-MAIN-20160624154955-00152-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.894533 | 265 | 3.1875 | 3 |
Definitions for Hudson River School
Hudson River School, romantic realism (noun)
the first coherent school of American art; active from 1825 to 1870; painted wilderness landscapes of the Hudson River valley and surrounding New England
Hudson River School
The Hudson River School was a mid-19th century American art movement embodied by a group of landscape painters whose aesthetic vision was influenced by romanticism. The paintings for which the movement is named depict the Hudson River Valley and the surrounding area, including the Catskill, Adirondack, and White Mountains; eventually works by the second generation of artists associated with the school expanded to include other locales.
In running, this animal confines himself entirely to his hinder legs, which are possessed with an extraordinary muscular power. Their speed is very great, though not in general quite equal to that of a greyhound; but when the greyhounds are so fortunate as to seize them, they are incapable of retaining their hold, from the amazing struggles of the animal. The bound of the kangaroo, when not hard pressed, has been measured, and found to exceed twenty feet.
At what time of the year they copulate, and in what manner, we know not: the testicles of the male are placed contrary to the usual order of nature.
When young the kangaroo eats tender and well flavoured, tasting like veal, but the old ones are more tough and stringy than bull beef. They are not carnivorous, and subsist altogether on particular flowers and grass. Their bleat is mournful, and very different from that of any other animal: it is, however, seldom heard but in the young ones.
Fish, which our sanguine hopes led us to expect in great quantities, do not abound. In summer they are tolerably plentiful, but for some months past very few have been taken. Botany Bay in this respect exceeds Port Jackson. The French once caught near two thousand fish in one day, of a species of grouper, to which, from the form of a bone in the head resembling a helmet, we have given the name of light horseman. To this may be added bass, mullets, skait, soles, leather-jackets, and many other species, all so good in their kind, as to double our regret at their not being more numerous. Sharks of an enormous size are found here. One of these was caught by the people on board the Sirius, which measured at the shoulders six feet and a half in circumference. His liver yielded twenty-four gallons of oil; and in his stomach was found the head of a shark, which had been thrown overboard from the same ship. The Indians, probably from having felt the effects of their voracious fury, testify the utmost horror on seeing these terrible fish.
Venomous animals and reptiles are rarely seen. Large snakes beautifully variegated have been killed, but of the effect of their bites we are happily ignorant. Insects, though numerous, are by no means, even in summer, so troublesome as I have found them in America, the West Indies, and other countries.
The climate is undoubtedly very desirable to live in. In summer the heats are usually moderated by the sea breeze, which sets in early; and in winter the degree of cold is so slight as to occasion no inconvenience; once or twice we have had hoar frosts and hail, but no appearance of snow. The thermometer has never risen beyond 84, nor fallen lower than 35, in general it stood in the beginning of February at between 78 and 74 at noon. Nor is the temperature of the air less healthy than pleasant. Those dreadful putrid fevers by which new countries are so often ravaged, are unknown to us: and excepting a slight diarrhoea, which prevailed soon after we had landed, and was fatal in very few instances, we are strangers to epidemic diseases. | <urn:uuid:8300639d-383e-4b1d-9158-fa252a11b257> | CC-MAIN-2016-26 | http://www.bookrags.com/ebooks/3535/42.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783400572.45/warc/CC-MAIN-20160624155000-00124-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.978623 | 671 | 2.625 | 3 |
Editor's Note: This is the latest in a series of articles describing the people, deeds and events playing a significant role in the settling of Utah in the years leading to statehood, January 4, 1896.
If any particular period could be considered critical in Utah's history--it likely would be 1857-58, when the U.S. Army marched on the Mormons and forever cracked their shell of isolationism. In the brief span of two years, Brigham Young would be deposed as Governor of the Territory, troops would be sent to protect his successor and a horrific episode known as the Mountain Meadow massacre would cloak the Mormon Church in a black shroud of shame and disgrace.
The Army eventually withdrew, leaving behind millions in materiel, which the Mormons bought for pennies on the dollar. On the surface it seemed an unalloyed Mormon triumph, but in truth Brigham Young lost the total domination he once enjoyed over the citizens of Utah. It was far from a fair exchange. Who gained the advantage? It depended on which side was asked.
The confrontation had been long simmering, starting in the early 1850s with the runaway officials--those federal appointees who vacated Utah after clashing with Governor Brigham Young on how things should be run in the territory. In the next few years, the situation became complicated when Jim Bridger was forced out of the trading post he built on Blacks Fork in present Wyoming. The Mormons accused him of intriguing with the Indians. Brigham Young offered to buy the property, but dealt with Louis Vasquez, Bridger's partner. Young paid Vasquez $4,000 in 1855 with a promise of another $4,000 to close the deal.
At the same time, Albert Carrington, editor of The Deseret News, railed against William M.F. Magraw, the U.S. mail contractor, accusing Magraw of dereliction in delivering eastern mail to Utah. So vociferous were the complaints, that Magraw, sullen and vindictive, finally threw up his hands and abandoned the contract to a much lower Mormon bid.
Then, of course, there was Associate Justice W.W. Drummond, in 1855, the latest federal appointee to the Supreme Court of the Territory of Utah. Drummond was related by marriage to a Mormon; and he rated them somewhat lower than horse thieves on the social ladder. If anything, the feeling was mutual. Brigham Young, in one of his more solicitous moments, referred to Drummond as "a rotten-hearted loathesome reptile."
Drummond became a stench within his jurisdiction once he introduced "Mrs. Drummond" to Mormon social circles and it was discovered that she was, in fact, Ada Carroll, a prostitute he had picked up in Washington. It also became public then that Drummond had deserted his wife and children at Oquawka, Illinois. A letter from the real Mrs. Drummond was published in The Deseret News exposing his scandalous behavior with the Carroll woman and "his general perfidy." During Drummond's tenure among the Saints, he was hard-pressed to keep Ada interested; there just wasn't enough action in Zion to entertain a city woman. So on the days Drummond held court, she joined him on the bench and, according to several diarists, offered her counsel on handing out sentences.
Historian Dale L. Morgan summarized the problem succinctly in stating, "Drummond launched a wholesale assault upon the Mormon courts as being founded in ignorance, and he discovered an ally in Judge George P. Stiles, who had at one time been a Saint in good standing but who had, as the Mormons saw it, gone lusting after strange gods." Stiles, the wavering Mormon, was excommunicated for immoral conduct--adultery.
Then Bill Hickman, who divided his time between being a desperado and a lawyer, let it be known if Drummond pulled any such shenanigans on him, he would inflict on the judge painful bodily injury. The message reached Drummond, who decided it was time to hold court in a more distant corner of Utah Territory--Carson City, for instance. From there he and his lady Ada went to San Francisco and booked ship passage to the States.
Stiles' abrasion with Utah lawyers came in February 1857 when James Ferguson and Hosea Stout, a couple of true hard cases, raised Cain in the court and intimidated Stiles into adjourning. The judge's law office was ransacked and certain papers in his office burned in a nearby outhouse, giving rise to subsequent charges that Stiles' law library and court records had been destroyed. (They had not been.) The judge appealed to Brigham Young as governor to protect him in the discharge of his office, but was told that if he could not sustain and enforce the laws the sooner he adjourned the court the better. Stiles closed shop and also packed to leave the territory.
The national election in November 1856 had seen James Buchanan defeat John C. Fremont for the presidency. Bitter reports brought east by those who departed Great Salt Lake City proved to be a last straw, writes Dale Morgan in The Great Salt Lake. Characterizing the Mormons as being in open rebellion, Buchanan ordered a sizable military force to Utah as an escort for new federal appointees (including a governor to replace Brigham Young) and to re-establish the supremacy of government.
This Utah Expedition was to be commanded by Brig. General William S. Harney, but he was temporarily reassigned to Kansas and Colonel Albert Sidney Johnston named to take his place. Because of this change in orders, the Expedition--consisting of the 10th Infantry Regiment, the 5th Infantry Regiment, the 4th Artillery and elements of the 2nd Dragoons--got off to a late and erratic start from Fort Leavenworth, Kansas. Civilian contractors Russell & Majors were hurriedly called upon to supply the expedition as well as the western army posts. The short notice resulted in mass confusion along the frontier as wagons, mules, oxen and men were recruited and assembled for the massive campaign.
The Department of the Army dispatched Captain Stewart Van Vliet, an assistant quartermaster, to Utah to contact Governor Young, and inform him of the expedition's mission: to escort the new appointees, to act as a posse comitatus and to establish at least two and perhaps three new U.S. Army camps in Utah. Van Vliet reached Great Salt Lake City September 8 and sought out Young. In the maelstrom Buchanan had made a critical slip; he had failed to notify Brigham Young officially that he had been superseded. Young--who had once declared: "We have got a territorial government, and I am and will be the governor, and no power can hinder it until the Lord Almighty says, 'Brigham, you need not be governor any longer,' and then I am willing to yield to another"--made the most of Buchanan's blunder. He chose to regard the troops as a mob and on September 15, 1857, declared martial law in the territory. His now famous proclamation began: "Citizens of Utah. We are invaded by a hostile force."
Two weeks later, Young learned that an entire wagon company--men, women and all children over the age of six--had been slaughtered in southern Utah. Only seventeen youngsters had been spared! Not until September 29 did John Doyle Lee arrive in Great Salt Lake City from Cedar City with his "awful tale of blood." According to Lee, Indians had massacred a wagon train. Brigham Young grieved for the victims. But by early October details began trickling out of California. It was Indians, yes. But whites, too--Mormons, who had betrayed the immigrants.
The wagon train was made up of Arkansans from Carroll County who had pulled up stakes for a fresh start in California. The company included nearly ninety immigrants treacherously slain at a place called Mountain Meadow on the old Spanish Trail southwest of Cedar City. Full identities of the victims has never been totally resolved and probably never will be, but it is generally accepted that some of the eighty-two or so slain had traveled in companies led by John T. ("Uncle Jack") Baker and Alexander Fancher. Seventeen youngsters under the age of six were spared and parceled out to Mormon families. They later were recovered and returned to Arkansas. It's believed at least one additional unidentified surviving child remained in Utah to be reared by a Mormon family.
The history of the massacre is complex and in great measure hopelessly contradictory. Books have been written and will continue to be written on this black, bloody chapter in Utah's past. But for now, the most balanced account is The Mountain Meadows Massacre by the late Juanita Brooks. New facts continue to surface with the passage of time because of dogged research on the part of historians and scholars. But in essence, the Arkansas train was composed of well-to-do families from the Carroll County area. It has been said theirs was the richest outfit to have crossed the Plains.
They left in April of '57 and followed the Arkansas River, crossed the Santa Fe trail, then coursed north until they reached the Platte River in mid-June, moving slowly with a trail herd of some 900 cattle. Two other wagon parties--the Turner-Duke outfits, primarily Missourians--also were on the trail and traveled with the Baker-Fancher companies from time to time for mutual protection against Indians; the Turner-Duke bunch also trailed a sizable cattle herd.
Nearing Great Salt Lake City in late July, they encountered groups of Mormons making preparations to fend off the U.S. Army they now knew was on the march for Utah. There were still other scattered immigrant companies on the road, at least one from Texas, and these wagon trains, too, sought pasturage in the Salt Lake Valley for their herds. But the Mormon population, girding for an expected "invasion by a hostile force," was in no mood to banter or barter with Gentiles--especially Missourians and Arkansans (whom they now held responsible for the ambush murder of Parley P. Pratt by an angry husband west of Fort Smith the previous May.) The immigrants were ordered to move their livestock off Mormon pastures and warned to keep them off!
Trouble Brewing: During the next few weeks, the Baker-Fancher company first drifted north, thinking to take the upper route around the Great Salt Lake; then in late August turned about for the southern corridor to California (along today's I-15). And so, in the eye of a hurricane of Mormon war-planning, the Arkansas company proceeded south on a road that would take them through outlying settlements. Trouble began almost immediately beyond Provo when they were rebuffed in efforts to buy vegetables and other hard-to-get trail provisions.
Mormon apostle George A. Smith was on a circuit of those southern settlements preparing them for the approaching Utah Expedition. In his "war talk," he warned the Army might try to drive the Mormons from their homes and, he emphasized, they should husband food and provisions, and not trade with Gentiles. "Store your harvest for the hard times ahead," he counseled. The stubborn refusal to part with even the smallest amount of greens and garden vegetables infuriated the wagon companies, who retaliated with threats to return when the army reached Utah and help in "teaching you damned Mormons a lesson." It was also said the travelers turned their herds into Mormon fields and trampled fences in the towns beyond Salt Creek (Nephi) and Fillmore. Later it was reported they insulted and cursed the settlers and, some claimed, the immigrants dumped strychnine into a spring at Corn Creek (Kanosh) and poisoned an ox carcass, which subsequently sickened Indians who ate the meat.
The Fancher-Baker outfits camped in Mountain Meadow to rest their animals. They left in their wake a seething string of settlements including Beaver, Paragonah and Parowan. Isaac C. Haight, a Mormon stake president, angrily told a church meeting on September 6 that "...the Gentiles will not leave us alone. They have followed us and hounded us...and now they are sending an army to exterminate us. So far as I am concerned I have been driven from my home for the last time."
Years later, John D. Lee confessed that Piede Indians in the region had been encouraged to attack the wagon train for its plunder. The Mormons, he said, became involved out of revenge for past grievances and to lash out at the belligerent attitude of the emigrant companies as they traveled through the settlements. Indians opened fire on the wagon camp at the south end of the meadow the morning of September 7. Those first shots caught the Arkansans completely off-guard, killing and wounding a dozen or more before the immigrants could circle their wagons and throw up a dirt barricade.
The cattle herd pastured a mile or so north of the camp was run off the first day. Shooting was sporadic thereafter, with the immigrants returning fire and the Indians forced to snipe from a distance. By midweek the Piedes had lost patience with the siege; two of their chiefs had been seriously wounded by the sharp-shooting whites, and the Indians demanded the Mormons come to help finish the job--or face Indian wrath later. Lee, who was a "farmer to the Indians" was summoned to deal with the situation, while Mormon authorities--Haight in Cedar City and William H. Dame in Parowan--organized the Iron County militia to put the immigrants "out of the way," according to historian Brooks.
Accompanied by William Bateman, who carried a white flag, Lee walked to the open country near the immigrant redoubt, where white had already been hoisted (a child wearing a white dress had been lifted to view). Two men from the camp strode out to meet the Mormons. After a brief conversation the four went to the wagon camp, where Lee persuaded the immigrants to surrender their weapons "to placate the Indians." In return he would provide safe conduct out of the meadow.
Three Mormon wagons were ordered up. The youngest children placed in one, all the guns in another, and three or so wounded immigrants in the third. The women and older children of the camp walked out and followed the first two wagons in a disorganized march. After a quarter mile, the men started out in single file, each with an armed militia "guard" at his side. Major John Higbee of the Iron Militia, on horseback, was in charge. After approximately a mile, the women and children were way out ahead, and the men had reached a point east of what is now known as Massacre Hill. Here Higbee shouted, "Halt! Do your duty." Each Mormon turned to shoot the immigrant at his side. Up the trail, the Piedes leaped whooping and screaming from hiding places in the brush to begin butchering the women and children.
Mormons who protested the killing, were to fire in the air and kneel down, remaining quiet while the Indians finished off their men. It was Friday, September 11. The bloody business, by all accounts, was over quickly. The Indians stripped and plundered the corpses. The whites left the scene until the next morning when they made a half-hearted effort to bury bodies, chucking them in shallow trenches and covering them with dirt and brush.
Arguments about who should accept responsibility erupted at once. A rider, James Haslam, had been dispatched earlier in the week to notify Brigham Young that the Indians planned to attack the train. He arrived in Great Salt Lake City on Thursday, September 10. Young, who had been in meetings with Captain Van Vliet, sent the exhausted Haslam on his return south the same afternoon with instructions that the Indians "must be restrained." Haslam reached Cedar City on the 13th--two days after the massacre.
Meanwhile, the Missouri wagon companies--Turner, Duke and others--had been detained on the trail, and after paying Mormons to lead them, were guided on a route skirting the meadow. They had been preceded by a mail train driven by two Mormons, Sidney Tanner and William Matthews, in company with three immigrant wagons. They traveled the meadow at night a week after the massacre and arrived in San Bernadino October 1. It was from these various immigrant wagon parties that newspapers in Los Angeles and San Francisco pieced together a story of the attacks that so outraged the nation.
(Twenty years later, John D. Lee alone would pay the supreme penalty for his role in the massacre. After two trials, he was condemned to die by firing squad on March 23, 1877, at, of all places, Mountain Meadow. In 1859, elements of the U.S. Army from Camp Floyd, visited the massacre site and erected a cairn and monument over the collected skeletal remains. Vengeance is mine, saith the Lord, I shall repay! was inscribed on a cross at the cairn.)
Burn the City: Back in Great Salt Lake City, Brigham Young was telling Captain Van Vliet he would not assist the army in any way to occupy the city; and if "Squaw-killer" Harney--commanded the Expedition, Young and his followers would reduce their homes to ashes and fight a relentless guerilla war against the troops. "Five times we have been driven--no more!" was the rallying cry by George A. Smith, and echoed by church members. Van Vliet promised on his return to present the Mormon position in his report and to halt further advance of Utah Expedition supply trains on his own authority. In the Mormon view, the problem was savagely elementary: If Harney crosses South Pass, "the buzzards will pick his bones."
Brigham's first action after proclaiming martial law was to order Nauvoo Legion scouting parties into the field. Colonel Robert T. Burton was to take a detachment as far east as South Pass on the Continental Divide, Colonel Lot Smith was to command a company of guerillas to harrass and delay any government advance near Green River crossing above Fort Bridger, while Porter Rockwell and Bill Hickman and their companies did the same.
When in late September, the first detachments of the 10th U.S. Infantry Regiment commanded by Colonel Edmund Alexander made their way over South Pass and camped at Pacific Springs, four miles below the summit, Rockwell and a half-dozen of his men took the initiative. As the soldiers slept, Rockwell's raiders, whooping and yelping like wild Indians, firing their guns in the air and clanging large cowbells, came galloping among the tents like buffalo with their tails on fire. Their objective was at once psychological: to rattle the troops (which they surely did), and tactical: to run off the huge mule herd packing troop supplies. (Colonel Alexander's quick reaction in sounding stable call, halted the stampeding mules.)
Ten days later Lot Smith compounded the Expedition's woes by surprising two civilian supply trains camped for the night at Green River crossing and setting the fifty-two wagons ablaze; at noon of the next day he encountered another train near present Farson, Wyoming, and torched all but two wagons, which he allowed the teamsters to keep. In one stroke, the Mormons had dealt a body blow to the federal force. The third train alone contained enough ham, bacon, flour, beans, coffee, sugar, canned vegetables, tea and bread for more than 100,000 individual meals--provisions for an army for a winter.
Stalled in Mountains:Within a month Albert Sidney Johnston would overtake the advance elements of the Expedition and with the new federal appointees--including Governor Alfred Cumming--in tow, pitch a winter quarters encampment he named Camp Scott near Fort Bridger. The trading post itself had been put to the torch by its Mormon occupants in abandoning it to U.S. troops. Johnston was effectively stalled in the mountains. And the worst winter in decades was whistling up the Uintas. Without adequate clothing and virtually no rations, the soldiers began butchering oxen and mules for food.
The army bivouac turned into a frozen hell on the night of November 6; when Camp Scott became known as the Camp of Death. Temperatures plunged to minus thirty. Horses, mules and cattle died in their tracks. Some wandered into campfires and refused to move though they were literally roasting. Death was everywhere. The Governor's lady, Elizabeth Cumming, could not finish a letter because the ink froze; but, she noted, 2,000 government animals perished in that storm, and her own frostbitten foot pained excruciatingly until the skin burst. Colonel Cooke's report compared the final miles of the march to Fort Bridger to "horrors of a disastrous retreat." It has been a march of starvation, he wrote, "the earth has a no more lifeless, treeless, grassless desert; it contains scarcely a wolf to glut itself on the hundreds of dead and frozen animals which for thirty miles block the road."
Johnston called on Captain Randolph Marcy to lead a volunteer party from Fort Bridger to Fort Massachusetts in New Mexico for relief supplies and livestock. All the while, Mormon scout parties continued to harass the Expedition, running off what few cattle remained and infiltrating the camp itself with men disguised as teamsters. Among the thousands of soldiers and camp followers, it was impossible to tell friend from foe, and the Mormon outriders were able to keep close tabs on the federal troops and the latest rumors, while the rest of the Nauvoo Legion spent the winter at home, except for a token force in Echo Canyon. The canyon had been fortified to some extent by the Mormons who used it primarily as an observation point in case the soldiers moved toward Great Salt Lake City.
In February, a "Dr. Osborne," arrived in Utah from San Bernardino. He was an old friend of Brigham Young, traveling incognito: Dr. Thomas L. Kane, acting as an unofficial emissary from President Buchanan to arrange a peaceful settlement between the Mormons and the U.S. government. Kane had journeyed from Washington to San Francisco, by way of the Isthmus of Panama, with letters of introduction from Buchanan; he also was armed with a letter from Brigham Young to Cumming. In March, he rode to Fort Bridger to meet Governor Cumming and offer to serve as an intermediary. Cumming agreed to accompany Kane into Great Salt Lake City, despite Johnston's repeated warnings that the Mormons should be considered hostile.
Kane and Cumming with a Mormon escort traveled through Echo Canyon at night, but the wily Nauvoo Legion commander, Daniel H. Wells, arranged to have sentries conspicuously near campfires atop the canyon walls, giving the party the impression that "hundreds" of Legionnaires rather than a handful, were entrenched along the 15-mile corridor. In his approach to the city, Cumming saw multitudes of Mormons on the road with wagons and baggage. They were moving south toward Provo. Brigham Young had announced the "Move South"--to abandon Great Salt Lake City and prepare to burn the homes. It was Young's threat to leave the city in ashes if General Harney led the Army into the valley. Young had received word that Harney had been relieved of his Kansas duties and was enroute to take over command of the Utah Expedition.
Once in the city, Cumming was greeted by Brigham Young and recognized by all as the new governor. He was given the official Territorial Seal of office and shown the law library that the Mormons had been accused of destroying. In the discussions that followed, Brigham Young assured Cumming that he would allow any dissidents or apostates who wanted to leave Utah that opportunity. It had been a galling point with officers of the Expedition the stories that hundreds of disenchanted Mormons were being held against their will in Great Salt Lake City by "the despot Brigham Young." It was in part the reason so many of the "young turks" of the 10th Infantry spoiled for a fight with the Nauvoo Legion --to march in the city and put Young and his Twelve Apostles in chains for treason and anything else the government could think of.
Peace Brokers: In Washington, President Buchanan had been prevailed upon by the Congressional delegate John Bernhisel to send a "Peace Commission" to Utah to investigate the facts. Bernhisel's persistence and warnings from respected senators such as General Sam Houston, who cautioned Congress that "if the Mormons fight, [the Utah Expedition] will get miserably whipped." So it was that Lazarus W. Powell, former governor of Kentucky, and Major Ben McCulloch of Texas were appointed Peace Commissioners by Buchanan, who entrusted them with a Proclamation of Pardon dated April 6, 1858, ironically the 28th anniversary of the founding of the Mormon Church. The document offered amnesty to all "who would submit to the authority of the federal government."
The Peace Commissioners arrived at Camp Scott within days of the return of Captain Marcy from his relief expedition to New Mexico. The captain returned with hundreds of horses for the cavalry and mules for the wagons. Shortly after the commissioners journeyed to the city to meet with Governor Cumming and Brigham Young in the Council House, Porter Rockwell arrived from Echo Canyon with a message that Johnston (now a brigadier general by virtue of a brevet promotion during the winter) planned to march his troops to the valley on June 14. The news was disquieting, but was resolved by Governor Cumming in a dispatch to Johnston urging discretion.
The Presidential Pardon was accepted after some discussion, Johnston announced he would move his troops through the city on or about June 26 and encamp "beyond the Jordan on the day of arrival in the valley," which accommodated Brigham Young's insistence that the Army move some distance from his city; Governor Cumming wrote a proclamation declaring "peace is restored to our territory," and Young counseled his church members to return to their homes, which had been abandoned. The U.S. troops ultimately marched forty miles south to establish Camp Floyd--named for Secretary of War John B. Floyd--and the Utah War was history. | <urn:uuid:720405ac-c6bd-4313-a5fc-d5516af1e960> | CC-MAIN-2016-26 | http://historytogo.utah.gov/salt_lake_tribune/centennial_celebration/072395.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783403508.34/warc/CC-MAIN-20160624155003-00178-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.976494 | 5,448 | 2.703125 | 3 |
To the editor:
Being choosy when it comers to TV shows,what pre-schoolers watch affects their behavior even if the amount of time they spend watching isn't reduced.
In one of the largest studies to examine how modifying television contest affects the development of children ages 3-, researchers report that six months after families reduced their kids' exposure to aggressive and violence filled programming and increased exposure to enriching and educational programming, even without changing the number of viewing hours, children showed significantly improved behavior compared to children whose media diet went unchanged.
Many families who changed their kids' exposure to aggressive and violence filled programming noticed a big improvement such as a decline in aggression and being difficult.
Parents must remember that young children learn by imitating what they see.
Television content is important and does not get much attention.
However how much TV you child watches, it's worth the parents' efforts to be more selective. It is hard for a young child to distinguish realty vs. fantasy in TV shows.
Better TV content increases healthy social behaviors such as empathy, helpfulness and concern for others
Again, young children imitate what they see and for pre-schoolers a lot of what they see is on television and has an influence on their behavior at home and when they start school. | <urn:uuid:66d4152c-ddd3-48de-a3a1-5aedf96f8ef4> | CC-MAIN-2016-26 | http://www.lehighacrescitizen.com/page/content.detail/id/528998.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397749.89/warc/CC-MAIN-20160624154957-00196-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.968703 | 264 | 2.859375 | 3 |
Researchers found a way to direct stem cells by magnets
Collaboration between researchers at Emory University and Georgia Institute of Technology (Georgia Tech) resulted in the development of stem cells that can be directed by magnetic fields. Stem cells could soon be intravenous injected into the patient to treat heart diseases and vascular problems. By loading stem cells with superparamagnetic iron oxide nanoparticles (SPIOs), scientists can then use magnets to deliver them to areas of the injury or disease.
The research team used human mesenchymal stem cells (hMSCs) that can be easily derived from adult tissues such as bone marrow or fat. This type of stem cells can differentiate into a variety of cell types including: bone, fat and cartilage cells, but not other types of cell such as muscle or brain. HMSCs secrete factors that inhibit inflammatory processes as well as nourishing factors, which make them promising cell population for treating conditions such as cardiovascular disease or autoimmune disorders.
Nanoparticles of magnetized iron oxide are already FDA-approved for diagnostic purposes with magnetic resonance imaging. Similar particles were used for loading stem cells in the previous studies, but coating on the particles was either toxic or changed the cells’ properties.
The research team used SPIOs coated with the non-toxic polymer polyethylene glycol that protects the cell from damage. The particles have an iron oxide core that is about 15 nanometers across. Furthermore, researchers used a magnetic field to push polyethylene glycol-coated SPIOs into the cells, rather than previously used chemical agents.
“We were able to load the cells with a lot of these nanoparticles and we showed clearly that the cells were not harmed. The coating is unique and thus there was no change in viability and perhaps even more importantly, we didn’t see any change in the characteristics of the stem cells, such as their capacity to differentiate”, said Robert Taylor, professor of medicine and biomedical engineering and director of the Division of Cardiology at Emory University School of Medicine.
SPIOs appear to be localized primarily in secondary lysosomes of hMSCs, which are parts of the cell that break down waste. The researchers claim that there is no degradation of the material withing the first week, because they haven’t detected any leakage of iron particles. The scientists measured the iron content in the cells once they were loaded up and determined that each cell absorbed roughly 1.5 million particles.
The Emory/Georgia Tech team tested the ability of magnets to steer SPIO-containing cells both in cell culture and in mice. They labeled the cells with a fluorescent dye in order to track where the cells went inside the mice. During injection of these cells into the mice’s body, a bar magnet was applied to the part of the tail close to the body and it attracted injected stem cells to the tail.
Normally most of the mesenchymal stem cells would become deposited in the lungs or the liver. The bar magnet made hMSCs loaded with SPIOs six times more abundant in the tail. Additionally, the iron oxide particles themselves could potentially be used to follow cells’ progress through the body.
The use of magnetic stem cells could have a broad spectrum of applications in medicine. Eventually, scientists could target these cells to a particular limb, an abnormal blood vessel or even the heart.
“Next, we plan to focus on therapeutic applications in animal models where we will use magnets to direct these cells to the precise site need to affect repair and regeneration of new blood vessels”, said Taylor.
For more information, read the article published in Small: “Magnetic Targeting of Human Mesenchymal Stem Cells with Internalized Superparamagnetic Iron Oxide Nanoparticles“. | <urn:uuid:af2fa366-8a1c-415f-b36f-f99adf609605> | CC-MAIN-2016-26 | http://www.robaid.com/bionics/researchers-found-a-way-to-direct-stem-cells-by-magnets.htm | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399522.99/warc/CC-MAIN-20160624154959-00042-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.945792 | 780 | 3.390625 | 3 |
Local ceasefires could be a path to peace in Syria
Syrian civil society leader and astrophysicist Rim Turkmani may live in the UK, where she is raising her young children while juggling her academic duties with her peacebuilding activities, but a good chunk of her heart is with her family and friends struggling to survive in her native Syria. They are living day to day, uncertain about what the next one will bring.
In the part of Syria in which they are, “they often have to walk everywhere. There’s no transport or it’s extremely slow because of the checkpoints,” says Turkmani. “There’s hardly any electricity, water. It’s freezing, difficult to get food. They don’t have any prospects.”
Nearly four years of bloody civil war have torn the country apart, driving more than 11 million people from their homes and leaving about half of Syria’s 22 million citizens in need of humanitarian assistance. So it’s no wonder that Turkmani has been pouring every spare moment into a campaign to promote greater support to civil society actors engaging in local negotiations for ceasefires that could help reduce the violence and ease the suffering of families.
Such ceasefires and truces have caught international attention in recent months, and the UN Special Envoy for Syria, Staffan de Mistura, has presented a plan to establish local freezes, starting in Aleppo. Many of the past deals that have been negotiated between the Syrian government and opposition groups have been problematic—in some cases driven by massive violations of human rights. The absence of independent monitors and a limited role for civilians and civil society groups has also contributed to the fragility of truces between conflict parties. With de Mistura’s initiative, there is a chance to draw lessons from past cases to make to make local ceasefires more sustainable and link to renewed efforts to find a political solution to the crisis.
In a report she co-authored last fall, “Hungry for Peace,” Turkmani and her fellow researchers explored how local ceasefires, even in one area at a time, can offer a glimmer of hope in a dire situation—especially if the ceasefires can be linked to a broader peace process.
For Syrians enduring the endless conflict, what does that glimmer of hope look like?
“A better day than the day before because now every day that comes is worse than the day before,” said Turkmani recently. “This dream [is] to be more secure. The people will be able to go back to their houses. This bleeding of people having to leave the country will stop. Having water, electricity, or food—having hope that this thing is going to end someday.”
For Samah, a 37-year-old mother of six children whose story is recounted in Failing Syria: assessing the impact of UN Security Council Resolutions in protecting and assisting civilians in Syria, that day can’t come soon enough. The threat of bombing forced her family to flee from their home for the safety of some caves in the mountains.
“When we were in the caves, we used to go to nearby farms to collect anything we find, sometimes grass or bark, to feed to our six children,” she said. Occasionally, she would meet a farmer too afraid to go into the field to harvest because of aerial bombardment, so she would harvest in exchange for a bit of money and some food.
A few months ago, Samah’s family got a tent so they can now live on their own.
“Can you imagine that our dream had become just to have our own tent?” she asked. “I still think that living under shelling and airstrikes is more dignified than this kind of life. If a shell hits you, then you will die instantly, but here we are dying every day a thousand times over. We are dying from cold, illness, and hunger."
Pressure from basic needs
Rim Turkmani and her co-authors of Hungry for Peace found that local ceasefires can help communities regain a degree of security, allowing the people who live there to return to at least a semblance of their former lives. Often what makes these local deals possible is when citizens, desperate for food and water or for services such as electricity, put pressure on the armed factions that control the area.
One good example, says Turkmani, is in Barzeh, a rebel-held neighborhood in northeast Damascus, where the front-line divided it from Esh Alwarwar, an area of government loyalists. Barzeh, from which many civilians fled, sits on a main road that leads to central Damascus and winds past loyalist neighborhoods and a military hospital. Barzeh rebels were blocking that road, which meant that everyone else had to take a much longer route around, including government military officials. So, they had some incentive to engage in negotiations, Turkmani notes.
“The [Barzeh] civilians who left, most of them were still living in Damascus with friends or neighbors or renting houses. [They were] also adding pressure; they wanted to go back. So they initiated negotiations—cutting a very good deal where the regime and the opposition agreed to stop fighting,” Turkmani says. The opposition, while still keeping armed checkpoints, opened the road so the regime could use it.
“So thousands—the figure has been given as 30,000—many people went back to their areas after that. They settled back in their houses. They’re not IDPs anymore. There was a revival of modest economic activities. There was some progress,” Turkmani says. “The ceasefire has been holding out for more than a year now.”
Deals like that in Barzeh can be hard to achieve, acknowledges Turkmani. And maintaining them is equally challenging, especially as regional and international powers—with a stake in the fate of Syria—continue to arm the fighters, she adds. In fact, in Turkmani’s opinion, in addition to the lack of political will, regional interference is the biggest obstacle to local ceasefires.
“The regional actors are supporting different conflict parties,” she says, naming Iran and Turkey among those feeding the fight. “These conflict parties are fighting each other. So if you don’t resolve the regional conflict, it is hard to resolve the local ones.”
For peace-minded citizens trapped inside Syria, this regional power play has left many feeling hopeless and abandoned by the world at large—by the Russians, the Americans, Iranians, and those in the Gulf countries, Turkmani says.
“They [Syrians] feel they [those countries] are also behind their misery, that they just helped keep the conflict going. They had the power. They had the leverage. And they were not interested in presenting a solution for them and as a result they are paying the price while these people are safe in their countries,” Turkmani says.
Still, Turkmani has a deep belief in the power of talk to end the strife and bring peace to a place she loves so much. And she takes inspiration from her 9-year-old son.
“He was asked [along with others] in school to write a message to someone else in the world,” Turkmani says. His message was for the people suffering in Syria: you have to believe war will end, and talks are the way to get there.
“I thought this is absolutely right,” Turkmani says. “This is going to end. We have to keep working on this.”
Building hope out of shattered lives:
Approximately 12.2 million people are in need in Syria. Oxfam has reached over 1.5 million people affected by the Syria crisis, across Syria, Lebanon and Jordan. Women and children have been particularly affected by the violence. | <urn:uuid:f7bc332c-d34c-406a-aff5-d15ba9bd9bf2> | CC-MAIN-2016-26 | http://www.oxfam.ca/blogs/local-ceasefires-could-be-a-path-to-peace-in-syria | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783403502.46/warc/CC-MAIN-20160624155003-00024-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.975482 | 1,660 | 2.625 | 3 |
10 technologies that could revolutionize film
I have a suspicion that my friend Stéphane Jolicoeur (Senior Internet Applications Analyst at the NFB) lives in the future. Not that he time travels like Marty McFly (although I wouldn’t be surprised if he had a DeLorean DMC-12 hidden away somewhere…), but he spends most of his free time surfing computing, design and future tech websites like NOTCOT, Fastcodesign and TechCrunch, and sending me links about 3D printing and HTML 5.
A few days ago, I asked him what new technology and software he thought could revolutionize the world of film. Since then, he's been sending me a few links every day.
10 technologies to watch
1. 3D Printing
3D printing is an invention that allows designers, engineers and artists of all stripes to create the impossible. The technology itself is not new, but its current incarnation is a major step forward. High-performance office printers, like the MakerBot, can now print colour objects out of plastic in high resolution (with fewer visible filaments). They start with 3D digital models created on a computer or scanner and print them layer by layer in three dimensions. Visual concepts and objects that were previously impossible to create are now within reach. For animators, applications include making marionettes, set decor and miniature models. For a glimpse at the technology's potential, have a look at this Japanese 3D printing photo booth.
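At the heart of every one of these printers is a simple idea: slice the digital model into thin horizontal layers, then print the layers one on top of the other. The toy slicer below illustrates that first step under simplifying assumptions (real slicing software, MakerBot's included, handles far more, from shells to support structures):

```python
# Toy illustration of "slicing": a 3D printer builds an object layer by
# layer, so slicing software first cuts the model's triangle mesh with a
# horizontal plane at every layer height.

def slice_triangle(tri, z):
    """Return the 2D segment where the plane at height z cuts a triangle,
    or None if it misses. tri is three (x, y, z) vertices. Degenerate
    cases (a vertex exactly on the plane) are ignored for brevity."""
    points = []
    for i in range(3):
        (x1, y1, z1), (x2, y2, z2) = tri[i], tri[(i + 1) % 3]
        if (z1 - z) * (z2 - z) < 0:           # this edge crosses the plane
            t = (z - z1) / (z2 - z1)          # interpolation factor along it
            points.append((x1 + t * (x2 - x1), y1 + t * (y2 - y1)))
    return tuple(points) if len(points) == 2 else None

# One upright triangle: base at z = 0, apex at z = 2.
tri = ((0.0, 0.0, 0.0), (2.0, 0.0, 0.0), (1.0, 0.0, 2.0))
for layer in range(1, 4):
    z = layer * 0.5                            # 0.5 mm layer height
    print(z, slice_triangle(tri, z))
```

Each printed segment is one slice of the model's outline; stack enough of them and you have the toolpaths a printer follows.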
In this video, Lee Unkrich, the director of Toy Story 3, demonstrates how his team at Pixar used 3D printing technology (and a printer made by Z Corporation) to create models of the film’s characters.
2. Arduino Microcontroller (and Makey Makey)
Arduino is an open source electronics prototyping platform that consists of development software and computer hardware. The easy-to-program Arduino microcontroller board makes it possible to build small devices that sense and act on the physical world. With plans and software available online under a non-restrictive licence, Arduino is a small revolution for creative people. Designer Lara Grant quickly learned how to harness the technology to create a clothing collection that features music interfaces and sensors that produce music based on touch and movement. Very, very cool.
Filmmakers could use this type of microcontroller to develop equipment such as a remote-controlled camera dolly. Or they could use it to create electronic gadgets to control objects from a distance, à la James Bond.
Budding engineers might also be interested in the Makey Makey invention kit, which allows anyone to turn everyday conductive objects into touch controllers. It's easy to use and oh-so-practical!
3. Augmented Reality
Your view of the world can now be enhanced. Augmented reality is a technology that opens an impressive new world of possibilities. It makes it possible to overlay computer-generated data – such as useful information, animation, another dimension (3D), audio, video or interactive content – on images of the real world. For example, the ARART iPhone application can breathe life into objects – animating images that would otherwise be static. With it, you can look at well-known works of art like Girl with a Pearl Earring through your smartphone camera and watch as the painting is animated before your eyes.
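At the pixel level, the "overlay" described above comes down to compositing: blending a computer-generated layer onto the camera frame according to its opacity. A minimal sketch of that blending step, not taken from any actual AR toolkit:

```python
# The core of any AR overlay is compositing: blend a computer-generated
# pixel onto the camera pixel, weighted by the overlay's alpha (opacity).

def blend_pixel(camera, overlay, alpha):
    """alpha = 1.0 gives the pure overlay; alpha = 0.0 leaves the
    camera pixel untouched. Pixels are (R, G, B) tuples."""
    return tuple(round(alpha * o + (1 - alpha) * c)
                 for c, o in zip(camera, overlay))

frame_pixel = (40, 40, 40)        # dark pixel from the camera frame
graphic_pixel = (240, 200, 0)     # bright computer-generated pixel
print(blend_pixel(frame_pixel, graphic_pixel, 0.5))
```

A real AR app does this per pixel, per frame, after figuring out where in the image the overlay should sit.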
4. D3 Data Visualization

D3 (Data-Driven Documents) is an open source JavaScript library for binding data to web documents and producing dynamic, interactive visualizations in the browser.
5. Near Field Communication (NFC)
NFC technology allows two devices to communicate when they are in close proximity. An NFC reader can also dynamically reprogram a smart tag, writing new data to it as well as reading from it. That means it is now possible to transfer data between two devices (smartphones, credit cards, advertising posters, etc.) without a physical cable connection. The transfer starts when the two devices are touched together or brought into close proximity.
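Many NFC smart tags carry their data as NDEF records, a compact length-prefixed binary format. As an illustration, here is a sketch of encoding the standard NDEF "Text" record (the helper function is hypothetical; a real project would lean on an NFC library):

```python
# Sketch of an NDEF "Text" record -- the payload format many NFC smart
# tags (posters, business cards) use to carry a short message.

def ndef_text_record(text, lang="en"):
    # Payload: status byte (language-code length, UTF-8), language, text.
    payload = bytes([len(lang)]) + lang.encode("ascii") + text.encode("utf-8")
    header = 0xD1    # MB=1, ME=1, SR=1 (short record), TNF=0x01 (well-known)
    # Header byte, type length (1), payload length, type "T", then payload.
    return bytes([header, 0x01, len(payload)]) + b"T" + payload

record = ndef_text_record("Hi")
print(record.hex())   # the bytes a reader would pull off the tag
```

Tapping a phone to a tag essentially reads a handful of bytes like these and hands them to whatever app registered for that record type.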
Apple has made major investments in NFC technology in recent years, with a particular focus on mobile payments using smartphones.
Meanwhile, a company called MOO has adopted NFC for its business cards of the future. The business cards have a tiny embedded microchip that will transmit data to an NFC-equipped smartphone when they are tapped together.
We can also expect film distributors to drop QR codes on movie posters in favour of NFC’s potential for transmitting data – like trailers – to moviegoers.
6. The Make Movement
The MAKE movement is a community of researchers and inventors of all kinds who use their website and magazine to share information and collaborate on creating new technology. It brings together biologists with computer scientists and designers with mechanics. Together, they are having a great time reimagining the world we live in. After the revenge of the nerds (do the names Steve Jobs, Bill Gates and Mark Zuckerberg ring a bell?), this is the revenge of the garage inventors. There is no limit to the MAKE community’s creativity. Its members have already invented a way to make biodiesel with E. coli bacteria and discovered cures for some forms of cancer. By providing a space for creation and experimentation, MAKE celebrates the power of the imagination.
In this video, MAKE community member Matt Richardson talks about his Arduino microcontroller.
7. Computer-rendered “Hand-drawn” Graphics
For anyone who loves the hand-drawn look in digital art, whether they are creating film illustrations or designing a website, this is sure to be of interest. For examples of the realism that these rendering algorithms can achieve, have a look at this image gallery. You would swear that the images were hand-drawn.
8. LEAP
LEAP is a touch-free sensor interface that allows users to interact with what is on their computer screens. This new human-computer interface has lots of potential in digital drawing and animation. LEAP is said to be more accurate than a computer mouse, more efficient than a keyboard and more sensitive than a touchscreen. Watch a demonstration here:
9. Smartpens
Stéphane has been telling me about smartpens for a long time, and he finally bought one recently. Pens like the Sky wifi Smartpen from Livescribe are fairly affordable. Built around an embedded computer and audio recorder, the Sky pen records everything you hear and write, then sends it all over a wifi connection to your mobile device. It is a great tool.
For artists, there is also the unique Wacom Inkling, a spatially-aware and pressure-sensitive tool for graphic artists. It works on any surface and quickly saves the digital image in vector or bitmap format. Watch a demonstration here:
10. Pico Projectors
I’ll wrap up this trip through the future with Pico projectors. Pico, as in small. These pocket projectors are probably the most affordable on the market. With them, anyone can project a presentation, video or film on any surface. It’s a great way to take the film-going experience out of traditional movie theatres for projections anywhere you can imagine.
As we were preparing this article, The Creators Project website published its own list of technology that could change the film industry. You can read it here. One of their last points discusses interactive movie screens, citing as an example the one we used to present Vincent Morisset's interactive film BLA BLA at La Gaîté Lyrique in Paris and in the streets of Montreal last summer.
Research: Stéphane Jolicoeur
Written by: Catherine Perreault
Header photo credit: BLA BLA installation at La Gaîté Lyrique in Paris. Caroline Robert. All rights reserved.
Questions: Which Method? Class or Object?
How to set up the Guess-who game
What to do if you can't save changes to Picture.java
What to do if DrJava can't find a class like Picture or Pixel
What to do if DrJava can't save .drjava
How do you set the media path to mediasources?
How about a class for simple input?
How about a class for simple output?
How do you set system variables for leJOS?
How do you set up the Marine Biology Case Study in DrJava?
How do you set up Karel J Robot in DrJava
Diagonal Mirror Questions
Questions from Introduction to Programming Class:
Retrieving website information
Resources for Teachers
last edited on 23 July 2013 at 3:57 pm by lawn-128-61-45-87.lawn.gatech.edu
Adjectives are classified on the basis of various parameters, and these parameters define the different types of adjectives. Read on to learn in detail about the main kinds of adjectives.
Types Of Adjectives
Beautiful, pretty, bold, fierce, majestic, many, few, small, blue, much, green, tall, cute, red, smart, two, any: the list just goes on. These words might look random, but they are all describing words, aka adjectives. Adjectives cannot stand on their own, as their job is to describe and modify a noun or pronoun. Adjectives bring color to your sentences by making the noun sound special and the sentence more complete. Each type of adjective is governed by its own rules, and knowing the types makes it much easier to decide which one to use where. It is these rules that need to be understood in order to use these parts of speech to your advantage. For better understanding, learn all the kinds of adjectives and the ways in which each type can be used to describe words and phrases.
Kinds of Adjectives
- Descriptive Adjectives or Adjectives of Quality
- Adjectives of Quantity
- Predicative Adjectives
- Personal Titles
- Possessive Adjectives
- Demonstrative Adjectives
- Indefinite Adjectives
- Interrogative Adjectives
- Comparative Adjectives
- Superlative Adjectives
Descriptive Adjectives Or Adjective Of Quality
Descriptive adjectives are those adjectives which describe nouns or the noun phrases. For example: 'A beautiful day'. In this case, 'beautiful' is the adjective which qualifies or describes the noun 'day'. Descriptive adjectives have several forms as discussed below.
- Colors as adjectives: Black, Blue, White, Green, etc.
- Touch as adjective: Slippery, Sticky, etc.
- Feelings as adjectives: Happy, Sad, Angry, etc.
- Sizes as adjectives: Big, Small, Thin, Thick, etc.
- Origin as adjectives: European, Latin, Greek, etc.
- Shapes as adjectives: Triangular, Rectangular, Square, Circular, etc.
- Qualities as adjectives: Good, Bad, Average, etc.
- Time as adjective: Yearly, Monthly, etc.
- Age as adjectives: Young, Ancient, Old, etc.
- Material as adjectives: Wood, Cotton, Gold, etc.
- Opinions as adjectives: Pretty, hot, expensive, etc.
Adjective Of Quantity Or Numeric Adjective
An adjective of quantity tells the amount or number of the noun being talked about and answers the question 'how much' or 'how many'. For example: 'There were three boys playing in the ground.' Here the word 'three' signifies the number of boys playing. Other examples are:
- He has little intelligence.
- Sunday is the first day of the week.
Predicative Adjectives
Predicative adjectives are those which follow a linking verb rather than being placed before a noun. A predicative adjective does not act as part of the noun phrase it modifies but serves as the complement of a linking verb, which connects it to the noun of the sentence. Take for instance 'The bag is heavy'. Here the predicative adjective 'heavy' follows the linking verb 'is' and describes the noun 'bag'. Other examples are:
- The weather will be cool and dry.
- That child is young.
Personal Titles
Personal titles are adjectives in which titles such as Mr., Master, Miss, Mrs., Uncle, Auntie, Lord, Dr., Prof. and so on are used to describe the position of the noun. These titles can be placed before the name or even after it. For example:
- The day after tomorrow, you can visit Auntie Pauline and Uncle John.
- The classes on Monday will be presented by Dr. Mary and Prof. Kate.
Possessive Adjectives
A possessive adjective is used where the sentence shows possession or belonging. Possessive adjectives are similar to possessive pronouns and, in this case, are used as adjectives which modify a noun or a noun phrase. Words such as our, my, your, his, her, its and their are used. For example:
- Have you seen their house?
- This is his room.
Demonstrative Adjectives
Demonstrative adjectives are used when there is a need to point out specific things. These adjectives demonstrate or point to something and are similar to demonstrative pronouns. Words such as this, that, these and those are used. Take, for instance, the sentence: 'If I hear that sound again, I will call the Police'. Here 'that' refers to a specific sound. Other examples are as follows:
- Whose is this bag?
- These mangoes are sour.
Indefinite Adjectives
Indefinite adjectives are used when the sentence has nothing specific to point out. These adjectives are formed from indefinite pronouns and do not indicate anything in particular. They use words such as any, many, few and several. Here is an example explained in detail: 'The chief has heard many people make the same promise'. The word 'many' is an indefinite adjective which modifies the noun 'people' without pointing out exactly who, or how many, have made the said promise. Other examples:
- Many children like dinosaurs.
- Is there any water in the bottle?
Interrogative Adjectives
An interrogative adjective modifies a noun or a noun phrase and is similar to the interrogative pronoun. It does not stand on its own and includes words such as which, what and whose. For example: 'What dress are you wearing?' Here, 'what' modifies the noun 'dress', and together they form the object of the compound verb 'are wearing'. Other examples:
- Which leaves turn color first?
- Whose son is he?
Comparative Adjectives
Comparative adjectives imply an increase or decrease in the quality or quantity of the nouns and are used to compare two things in a clause. Adjectives are generally made comparative by adding 'er' to the original word, as in nicer, taller and smarter, though there are some exceptions. Other examples are:
- The detective is younger than the thief
- Science is more important than math these days.
- This school is better than the last one I attended.
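The 'add -er' rule and its exceptions described above can even be sketched in a few lines of code. This is only an illustrative toy (real English morphology has more cases, such as consonant doubling in 'bigger' and 'more' before long adjectives):

```python
# A few of the irregular comparatives the rule does not cover.
IRREGULAR = {"good": "better", "bad": "worse", "far": "farther", "little": "less"}

def comparative(adjective):
    """Form a comparative following the simple rules described above."""
    if adjective in IRREGULAR:       # exceptions to the 'er' rule
        return IRREGULAR[adjective]
    if adjective.endswith("e"):      # nice -> nicer
        return adjective + "r"
    return adjective + "er"          # tall -> taller, smart -> smarter

print(comparative("tall"), comparative("nice"), comparative("good"))
# -> taller nicer better
```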
Superlative Adjectives
Superlative adjectives express the greatest increase or decrease of the quality; they convey the supreme degree of the noun in question. For instance, 'He is the richest man in this town'. Here, the word 'richest' is the superlative adjective, which compares one individual against all the others in a group.
- Mary is the tallest of all the students.
- I am in the smallest class in the school.
- This is the most interesting subject for me.
It is not difficult to describe anything in this world. Even a lizard can be called pretty by someone and ugly by someone else; this is exactly where adjectives come into play. Pay careful attention to the type of description a sentence requires and duly select the right kind of adjective. These are the simplest parts of speech ever!
Using wind power to help mitigate climate change
From Germany to China to South Korea, the complex and potentially catastrophic global warming conundrum led the news last week while negotiations prior to the United Nations climate change conference in Copenhagen continued to intensify.
As negotiators met in Bonn to try and reach a new and strengthened post-Kyoto treaty, Ends Europe reported on the UN Framework Convention on Climate Change, which revealed that greenhouse gas emission reductions pledged by rich countries for 2020 amount to between 15% and 21% below 1990 levels.
While there are still about 100 days left until the Copenhagen conference begins, those figures are much lower than targets set by the Intergovernmental Panel on Climate Change, which notes that wealthy nations must reduce their greenhouse gas emissions by 25-40% below 1990 levels in 12 years in order to avoid the worst ravages associated with global warming.
At the beginning of the Bonn meeting, the UN’s senior climate change representative said he was worried that the negotiating process was not moving fast enough.
Yvo de Boer reportedly warned that many of the key issues such as emissions targets and financing for clean technologies in the developing world remain deadlocked. “You’re looking at hugely divergent interests, very little time remaining, a complicated document on the table and still a lot of progress to be made on some very important issues,” he told the BBC.
In a similar vein, Business Green reported that former UK deputy Prime Minister John Prescott, who was involved in the negotiation of the original Kyoto deal, warned that the Copenhagen talks would collapse unless rich nations show greater support for targets based on per capita emissions.
“The West is going to come up with big money on how to finance alternative energy in the developing countries, including clean coal,” Prescott told the Guardian.
“China and India are going to want to know how many billions the rich countries are going to put aside to help them make their carbon contributions. That will be one of the big tests at Copenhagen. The fact is that the West has poisoned the world and left continents such as Africa in poverty. The West will have to stump up the cash for clean technology.”
Meanwhile, at an environmental forum in South Korea, UN Secretary-General Ban Ki-moon said climate change is the fundamental threat to humankind.
“If we fail to act, climate change will intensify droughts, floods and other natural disasters,” Ki-moon said. “Water shortages will affect hundreds of millions of people. Malnutrition will engulf large parts of the developing world. Tensions will worsen. Social unrest – even violence – could follow.
“The damage to national economies will be enormous. The human suffering will be incalculable. We have the power to change course. But we must do it now.”
As if on cue, the Chinese government announced it would make “controlling greenhouse gas emissions” an important part of its development plans. Reuters noted that a Beijing cabinet meeting chaired by Premier Wen Jiabao said global warming threatened China’s environmental and economic health.
Warning of worsening droughts, floods and melting glaciers, Reuters said the meeting stressed the urgency of tackling climate change and called for domestic objectives to control greenhouse emissions, though it made no mention of emissions cuts.
Considered to be the world’s top annual emitter of greenhouse gases, Beijing has long argued that development comes first when there are still tens of millions living in poverty.
But China’s leaders are increasingly worried about the risks rising temperatures pose to a densely-populated country with limited natural resources, according to Reuters. They also want to exploit a boom in clean technology.
In addition to this whirlwind of climate change news, the BBC reported a study of satellite measurements of the massive Pine Island glacier in Antarctica revealed it is melting four times faster than it was a decade ago.
The European Wind Energy Association (EWEA) believes that the number of climate change stories will continue to dramatically increase as the Copenhagen conference nears. After all, science has shown the fate of humankind is at stake.
As a minimal first step, EWEA encourages nations to reach an agreement on a new pact to limit and reduce greenhouse gases. Once a global treaty is signed, countries can further exploit wind power – a sustainable, affordable, local and non-polluting energy – to help mitigate the horrifying calamity caused by 150 years of burning fossil fuels.
For more on glaciers melting at an alarming rate, please see: http://news.bbc.co.uk/2/hi/science/nature/8200680.stm
Wednesday's Google Doodle celebrates the life of the nineteenth-century French physicist Léon Foucault by featuring one of his most prominent inventions: the Foucault pendulum.
Born on September 18, Foucault created the pendulum in his attempt to demonstrate the Earth's rotation. According to Time magazine, Google's Doodle explains two phenomena discovered by the Frenchman.
"First, it shows how the Earth moves under the pendulum. Second, it shows how the speed of the pendulum’s apparent movement depends on where the experiment is held."
You can check out a real life version of the pendulum below.
Biomass is a very versatile resource that can be used to produce heat, electricity, transport fuels and a range of chemicals and materials. It is used in all these applications today, and its future demand is estimated to grow substantially.
In particular, bioenergy is expected to play an increasing role in the future energy system, with benefits in terms of greenhouse gas emissions, energy security and rural development. However, a significant growth in bioenergy will present sustainability challenges, in particular as it may rely on energy crops. The production of these crops could lead to competition for land with food crops and land use change resulting in environmental and social impacts.
Furthermore, benefits and negative impacts will depend very much on the type of biomass resource and its management. Avoiding or mitigating such risks is crucial to the sustainable future of the industry. Dealing with these risks requires an understanding of them, innovation in bioenergy systems, and regulatory and industry measures.
FALSE PREGNANCY IN THE DOG occurs when the bitch produces both physical and psychological changes that are a nuisance to the bitch and the owner. The psychological changes in the owner arise most often when informed that their prize bitch is NOT pregnant after all! The bitch will often produce milk, engage in nesting activity and look like she's pregnant. It's amusingly sad to see the affected dog try to persuade a tennis shoe to nurse! These visible changes take place beginning about 4 weeks after the heat cycle begins (estrus) and can continue for a number of weeks. False pregnancies are always unpredictable and can show up whether or not a mating has occurred. Often so much milk is produced the bitch becomes uncomfortable. Once a dog has had a false pregnancy she's likely to be afflicted again.
Most dogs experiencing a false pregnancy will begin to show some swelling in the mammary glands about five weeks after their heat cycle has ended. If you have bred your bitch, you will be elated that she "is getting ready to have pups". You might also be surprised that she "isn't filling out much". You will wonder why she isn't starting to show a big belly. Many dogs whether they are bred or not, will develop a false pregnancy, and look, act, and even think as if they are pregnant. Some will carry small toys or pillows around and even start digging a nesting site wherever they please. When the time draws near to when they would be delivering the pups, usually 63 days after a mating, milk will drain on its own from the mammary glands. Some dogs are really troubled that they cannot find the pups they psychologically feel they should be nursing.
CAUSE OF FALSE PREGNANCY IN THE DOG
The exact hormonal mechanisms that must occur to trigger false pregnancy are as yet unknown. We do know that a combination of interacting hormones including estrogen, adrenal hormones, and prolactin from the pituitary gland influence milk production in the mammary glands. Prolactin levels seem to be the main culprit, but why this hormone does what it does when it shouldn't is a subject for future research.
Fortunately 90% of false pregnancies resolve over a period of three weeks with no treatment. Since no real harm is done, there's no reason to speed up what nature will take care of in time. For about 10% of bitches, though, the psychological effects directing mothering behavior are so intense that the bitch is miserable. She's continually searching for pups that aren't there and seeking relief from the mammary gland engorgement that's making her uncomfortable. Mastitis, an infection of the mammary glands, would be particularly dangerous if it were to occur at this time.
On occasion, in about 10% of false pregnancy cases, treatment is warranted. Various hormonal substances have been used to hasten the reabsorption of milk and to halt the milk production. None of these medications is entirely safe so close veterinary supervision is necessary. Most often the veterinarian will administer a hormone to interrupt the dog's secretions of internal hormones that may be promoting the production of more milk.
Any bitch showing false pregnancy is apt to have a recurrence in the future. There is NO reason NOT to breed this bitch but she may be a poor producer. There seems to be a greater risk of pyometra, a severe infection of the uterus, in any female dog that has had false pregnancies. Learn more about Pyometra in the Surgery Room. There's no way of predicting the outcome of any breeding but many bitches that have had a false pregnancy have gone on to whelp normal, healthy litters. Evidence does not indicate that false pregnancies are an inherited disorder.
by Staff Writers
Singapore (SPX) Oct 02, 2013
Species living in rainforest fragments could be far more likely to disappear than was previously thought, says an international team of scientists. In a study spanning two decades, the researchers witnessed the near-complete extinction of native small mammals on forest islands created by a large hydroelectric reservoir in Thailand.
"It was like ecological Armageddon," said Luke Gibson from the National University of Singapore, who led the study. "Nobody imagined we'd see such catastrophic local extinctions."
The study, just published in the leading journal Science today, is considered important because forests around the world are being rapidly felled and chopped up into small island-like fragments.
"It's vital that we understand what happens to species in forest fragments," said Antony Lynam of the Wildlife Conservation Society.
"The fate of much of the world's biodiversity is going to depend on it."
The study was motivated by a desire to understand how long species can live in forest fragments. If they persist for many decades, this gives conservationists a window of time to create wildlife corridors or restore surrounding forests to reduce the harmful effects of forest isolation.
However, the researchers saw native small mammals vanish with alarming speed, with just a handful remaining - on average, less than one individual per island - after 25 years. "There seemed to be two culprits," said William Laurance of James Cook University in Australia.
"Native mammals suffered the harmful effects of population isolation, and they also had to deal with a devastating invader - the Malayan field rat."
In just a few years, the invading rat grew so abundant on the islands that it virtually displaced all native small mammals. The field rat normally favors villages and agricultural lands, but will also invade disturbed forests.
"This tells us that the double whammy of habitat fragmentation and invading species can be fatal for native wildlife," said Lynam.
"And that's frightening because invaders are increasing in disturbed and fragmented habitats around the world."
"The bottom line is that we must conserve large, intact habitats for nature," said Gibson. "That's the only way we can ensure biodiversity will survive."
'Near-complete extinction of native small mammal fauna 25 years after forest fragmentation' by Luke Gibson, Antony J. Lynam, Corey J. A. Bradshaw, Fangliang He, David P. Bickford, David S. Woodruff, Sara Bumrungsri and William F. Laurance was published on 27 September 2013 in Science and is available at http://www.sciencemag.org (doi: 10.1126/science.1240495).
National University of Singapore
Progress in Preventing Childhood Obesity: How Do We Measure Up?
INSTITUTE OF MEDICINE
Food and Nutrition Board
Sept. 13, 2006
Jeffrey P. Koplan, M.D., M.P.H.
Vice President for Academic Health Affairs, Emory University, Atlanta
Chair, Committee on Progress in Preventing Childhood Obesity
Good afternoon. I too would like to extend my thanks to all the committee members and IOM staff who worked on this report. In addition to the committee members with me on the panel, we also have another committee member who has been able to join us. Susan Foerster is chief of the cancer prevention and nutrition section of the California Department of Health Services.
As Clyde noted, this study was undertaken in 2005 at the request of The Robert Wood Johnson Foundation as a follow up to the IOM report Preventing Childhood Obesity: Health in the Balance, which was a congressionally mandated study that recommended ways for government, industry, schools, families, and other stakeholders to work together to prevent childhood obesity. The report being released today, Progress in Preventing Childhood Obesity: How Do We Measure Up?, assesses what is being done to address childhood obesity and makes recommendations for important actions across many sectors.
The remarkable and unexpected rise in obesity among our children and youth in a relatively short time span is one of the 21st century's most critical public health challenges. Currently, one-third of American children and youth are either obese or at risk of becoming obese. That number is projected to grow if our response is not effective in halting the epidemic.
This report offers four distinct contributions to developing an effective and comprehensive response to the childhood obesity epidemic. First, it summarizes the findings of three regional symposia. Second, it provides a framework that stakeholders can use to evaluate progress on a range of outcomes. Third, it measures progress for specific recommendations in the Health in the Balance report. And fourth, it calls for greater leadership and commitment among all stakeholders in preventing childhood obesity.
The committee held three regional symposia to learn about policies and programs that are currently being implemented throughout the nation. At each meeting, the committee heard about the challenges that communities, schools, and industry face when implementing and evaluating childhood obesity-prevention efforts. Additionally, the committee heard from federal representatives and conducted an in-depth literature review.
The committee found that innovative actions and interventions to reduce childhood obesity are emerging across the United States. The number and scope of these programs indicate that the nation is beginning to grasp the severity of the epidemic. However, the committee concluded that despite these encouraging efforts, many of them remain fragmented and small in scale. We still are not doing enough to prevent childhood obesity, and the problem is getting worse. The committee also found that a lack of systematic monitoring and evaluation has hindered the ability to identify promising practices that can be replicated or adapted to different settings. And the number of potentially useful interventions that are not being properly evaluated is large. For example, the first national registry of programs, Shaping America’s Youth, revealed that only half of the 1,090 programs registered had quantifiable outcome measures.
We also observed that many environments do not support healthy behaviors for our children and youth. In some communities, fruits and vegetables are not readily available or affordable, especially for families on limited household budgets. Certain neighborhoods do not offer safe places for children to play.
While there is growing awareness that childhood obesity is a serious public health problem that has substantial costs, the current level of public- and private-sector investment does not match the extent of the problem. Given the many changes being implemented throughout the nation to improve the diets and physical activity levels of children and youth, a comprehensive assessment of progress requires both the tracking of trends and a detailed evaluation of relevant interventions. Stakeholders should commit adequate resources to conduct evaluations and engage in surveillance, monitoring, and research.
The recommendations in this report emphasize the need for a collective responsibility and collaborative actions among all who have a stake in reversing this problem. No single sector of society should bear the responsibility for the problem, and no single sector acting alone can effectively halt and reverse it.
The committee recommends that government, industry, communities, schools, and families demonstrate leadership and mobilize the resources required to identify, implement, evaluate, and disseminate effective policies and interventions to prevent childhood obesity. In particular, each level of government should establish a task force to identify priorities for action, coordinate public-sector efforts, and establish effective interdepartmental collaborations. The federal government also should provide sustained investment in initiatives found to be effective -- for example, the five-year VERB campaign that was not funded in FY 2006. We also recommend that the federal government support surveillance systems that are vital to tracking trends in the obesity epidemic -- such as the National Health and Nutrition Examination Survey and the School Health Policies and Programs Study -- and expand them to include obesity-related outcomes.
We recognize that certain segments of the food, beverage, restaurant, fitness, and entertainment industries have shown constructive responses. Nevertheless, independent and periodic evaluations are needed to determine which industry initiatives are effective. Evaluations of industry efforts should track the proportion of a company’s product portfolio and marketing resources devoted to developing and promoting healthful products; monitor changes in product portion sizes; and show that industry is conveying consistent information to consumers that supports a healthy lifestyle. Industry also should partner more with public institutions to support childhood obesity prevention efforts. This includes creating a mechanism for sharing proprietary data that will enhance our understanding of how marketing influences children’s attitudes and behaviors, and will inform effective interventions. The media are encouraged to develop programs that promote healthy lifestyles and evaluate their effectiveness.
We recommend that communities work with government and others to develop a community health index toolkit examining factors that help create healthy communities. Communities are also encouraged to compile and widely share findings and community action plans.
The report recommends that schools bolster their physical-education and activity requirements and standards, and recommends further actions by preschool, child-care, and after-school programs. Schools should be provided with adequate federal and state funding to implement changes in the school environment that will increase physical activity and the consumption of healthful foods and beverages.
Finally, the report encourages families to ensure that meals, snacks, and beverages provided at home support a healthful diet and are served and consumed in reasonable portion sizes. Families should also make physical activity a family priority and establish rules or guidelines that both encourage activity and limit leisure time in front of the TV or computer.
In many racial and ethnic groups, in low-income populations, and among recent immigrants to the United States, obesity rates in children and youth are alarmingly high or are increasing faster than the average rate. Specific attention must be given to children and youth from these populations to lower risk for becoming obese.
That concludes my statement. My colleagues and I welcome your questions. Please come ask your question at one of the microphones or use the e-mail link on the National Academies Web site, and be sure to first identify yourself by name and affiliation. Thank you. | <urn:uuid:7e5d4a38-c2f0-468e-be1a-7c943691cc72> | CC-MAIN-2016-26 | http://www8.nationalacademies.org/onpinews/newsitem.aspx?RecordID=s09132006 | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397873.63/warc/CC-MAIN-20160624154957-00141-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.945988 | 1,448 | 2.8125 | 3 |
Spinal cord injuries, torn ligaments, and bone fractures are not exclusively human maladies. When dogs are subjected to these injuries, they must be treated with rehabilitative efforts similarly employed for human patients, to be restored to good health. It's one thing, however, to motivate a human to lift weights to regain muscle tone. How does one convince a dog that walking a treadmill and similar therapies will be in its best interest?
"You can't communicate verbally and ask a dog to exercise," said Dr. Darryl Millis, associate professor at the University of Tennessee College of Veterinary Medicine and an orthopedic surgeon. "You have to trick them into doing what you want them to do."
By "tricking," Dr. Millis means training. Dr. Millis is one of the researchers at UT who have been working toward adapting human physical therapy techniques to dogs since the mid-1990s.
K-9 Garth has resumed his law enforcement duties, thanks to the aquatic treadmill for canids at the University of Tennessee College of Veterinary Medicine.
He collaborated with David Levine, PhD, associate professor of physical therapy at UT-Chattanooga, on a rehabilitation program for dogs. They're co-authoring a book on canine physical therapy with Dr. Robert Taylor, who appears on cable television's Animal Planet Emergency Vet series. The UT veterinary college has established a comprehensive orthopedic rehabilitation program for dogs, using custom-designed equipment, veterinary specialists, and a licensed physical therapist.
The program consists of several postsurgical, progressive activities and exercises, including neuromuscular electrical stimulation, passive range-of-motion exercises, and ultrasound therapy. During early rehabilitative efforts, the first hurdle is getting the animal to stand on all fours. That involves getting the animal past what Dr. Millis called their "reflex inhibitions."
"A lot of dogs with acute injuries right after surgery will keep the limb flexed up against the body because it's more comfortable. It's probably not a true reflex. It's a pain-associated tendency."
Similarly, Dr. Millis has seen dogs with chronic injuries who haven't used their limbs for up to several months.
"They get used to carrying the limb. So it's a greater challenge to get them to use it after surgery, because they've become accustomed to not using it anyway."
Getting dogs to have faith in their limbs, via positive reinforcement and increasing their trust in the process, is the key, Dr. Millis said.
"It's a confidence thing. They have to know it's okay to touch down on that limb."
When the animal begins to carry the injured limb near the ground, treadmills are employed. Dr. Millis said that there's something about being on the treadmill that often prompts dogs to use all four limbs again, and speculated it may be a reaction to the ground moving under them. Once it happens, Dr. Millis said the behavior frequently continues, and dogs begin to walk on all fours while off the treadmill. A "force plate" measures weight bearing and training responses, and the information is fed to a computer programmed to analyze the forces placed on the limbs.
With some basic training under their collars, the dogs are ready for work on a special underwater treadmill. Acquired by the veterinary college last June, the treadmill is the first of its type in the country, according to Dr. Millis. He worked with a company that builds aquatic therapy units for humans to design a machine for use with dogs.
Inside the Plexiglas box, the dog is immersed in water up to its chest. The water provides both support and resistance as the dog walks the treadmill on the bottom of the tank.
One might imagine that placing an injured dog in a Plexiglas tank of water and expecting it to walk on a treadmill is a feat more suited to Houdini. But this is no magic trick. Dr. Millis said the dogs are game for the challenge.
"Their heads are never submerged in water. The water is warm and it offers support. So they're very comfortable in the tank," Dr. Millis said. Trained dogs — like service dogs, already used to walking on command — adapt to the treadmill more rapidly.
Even lacking human qualities of self-motivation and will, the dogs have impressed physical therapists at UT with how quickly they begin to bear weight, compared with a lot of human patients. "We've noted the marked ability for these animals to want to return to function," Dr. Millis said. "They seem to have the ability to bounce back a little quicker than humans."
As technology allows dogs to be treated in ways once only available to their owners, more progress in physical therapy is expected. One idea being explored at the University of Tennessee is the creation of a therapeutic device similar to a balance board to help rehabilitate dogs with proprioceptive problems.
"Part of the process of physical therapy is bridging the gap and learning what can be done. These are the next steps in getting animals back to full function [or as fully functioning as they can], and speed that recovery."
The innovative water treadmill has already been used to help put some high-profile dogs back into service. These include Millie, Tennessee Governor Don Sundquist's family dog, and K-9 Garth, who has resumed drug-sniffing duties as a member of the Tennessee Highway Patrol's criminal interdiction team.
"It's rewarding when we can take a service animal that has time, training and money invested in it and turn that animal back into a functional working animal," Dr. Millis said, "but it's equally rewarding to get the family pet to be able to run, jump, and play with the family again." | <urn:uuid:fd7f953f-6483-42ce-abda-686cc31f373c> | CC-MAIN-2016-26 | https://www.avma.org/News/JAVMANews/Pages/s030100d.aspx | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395546.12/warc/CC-MAIN-20160624154955-00118-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.968626 | 1,187 | 2.609375 | 3 |
Before the Production: Dawn of a Rivalry
In 1975, Austrian racer Niki Lauda drove to the
Formula 1 (commonly known as F1) world title in a
Ferrari-powered car, ending a seven-year reign by Ford.
Lauda's run to the top set the stage for the dramatic
1976 season in which our story is told.
The early stages of the 1976 racing season gave no
indication as to the incredible drama that would unfold
between two of racing's fiercest competitors. Defending
champion Lauda of Ferrari drove to five victories in
the season's first nine races, capturing the top prize
at Brazil, South Africa, Belgium, Monaco and Great
Britain. Lauda also earned a spot on the podium as
runner-up in the Spanish and the United States Grands
Prix and made it to a third-place finish in Sweden.
By the midway point of the season (eight
races), Lauda and Ferrari had built up a seemingly
insurmountable lead in the point standings, more than
doubling the total of their nearest competitor. While
Lauda dominated, James Hunt -- the driver who would
ultimately emerge as his greatest rival -- struggled for
the most part. In his first year with Team McLaren, he
failed to finish four of the season's first six races.
Controversy even haunted Hunt in victory. Although
he beat Lauda to the finish line in the season's fourth
race, the Spanish Grand Prix, officials disqualified
Hunt after the race -- ruling that his Marlboro McLaren-Ford M23 was too wide. McLaren protested on the
grounds that the discrepancy was due to the expansion
of the tires during the race. McLaren eventually won
its appeal, but only after two months of haggling were
Hunt's points reinstated.
Hunt claimed victory at the French Grand Prix
(Race No. 8), when Lauda was forced to retire due to
engine trouble. At that point, it was the only race that
the Austrian had failed to finish.
Following his triumph in France, Hunt returned
home a hero to compete in the British Grand Prix at
Brands Hatch. However, Lauda disappointed the British
faithful as he won the pole and led throughout the first
half of the race. When Lauda experienced gearbox
troubles with only 15 minutes left, Hunt took the lead
and sent the home crowd into a frenzy. Hunt went on to
victory, and Lauda held on for second.
But controversy would again slap Hunt in the face.
The British Grand Prix was finished after a restart on
the first lap. Clay Regazzoni, Lauda's Ferrari teammate,
immediately challenged Lauda. Their cars touched.
Regazzoni spun and was hit by Hunt and
Jacques Laffite. Although the remainder
of the field passed by safely, the debris
on the track necessitated a restart.
Hunt had jumped into his team's
spare car for the restart, as did Laffite
and Regazzoni, although they were
forced to retire. After the race, Ferrari
and two other teams protested Hunt's
win in a backup machine. McLaren
maintained that, since no lap had been
completed, the restart rules did not
apply. F1's governing body upheld the
protest, stripped Hunt of the victory
and promoted Lauda to first place.
Heading into the 10th race of the season, the
German Grand Prix, Hunt had inched slightly
closer to Lauda in the point standings but remained
a whopping 23 points behind, with seven races remaining. Lauda still seemed a sure-fire bet to win his
second straight title.
All that changed in Germany.
Although F1 began introducing greater safety
innovations in the 1960s, the measures were often
outpaced by technological advancements that allowed
the cars to go faster. In the first 56 years of the sport,
driver fatalities had averaged nearly three per year.
From 1967 to 1975, a total of 13 F1 drivers lost their
lives in racing accidents.
No turn at any track was more infamous than the
Nordschleife (northern loop) at Nurburgring, Germany,
a racing circuit nicknamed "The Green Hell" by F1
driving legend Jackie Stewart. Nestled in the Eifel
mountains about 70 miles south of Cologne, "The
Ring" was often damp, misty or foggy. Varying weather
conditions at different ends of the track were not
unusual, and the 14.2-mile, tree-lined course featured
an incredible 177 turns.
Lauda, one of the sport's most outspoken advocates
for driver safety, opposed racing at Nurburgring
at all. At a drivers' meeting in spring
1976, Lauda proposed a driver boycott of Nurburgring
but was voted down. Prodded by driver Stewart, the
track had spent substantial sums in 1974-76 to improve
safety with catch fencing and guardrails. But "The
Ring" still loomed as an ominous racing venue.
"The problems posed by Nurburgring were obvious
at a glance," Lauda wrote in his autobiography, "Meine
Story." "Its layout made it the most difficult circuit
imaginable. It was well-nigh impossible to render safe
14.2 miles of tree-lined track."
Despite his concerns, Lauda qualified second, to
Hunt, for the 1976 German Grand Prix. On the morning
of the race (August 1, 1976), the weather forecast for
Nurburgring was typically unpredictable. Near race
time, rain began to fall, and most teams switched to
their wet-weather tires -- in retrospect, a strategic error
as the rain subsided and stiff winds dried the track.
Lauda started poorly, dropping quickly in the field.
He remembers pulling into the pits, changing from
wet to dry tires: his last memory of the race. As he
approached a corner, a tie-rod broke on his Ferrari.
The car went sideways, slammed into an embankment,
became airborne and then smashed onto the track.
The first racecar through was able to avoid Lauda
and the wreckage. A second car, driven by Brett Lunger,
crashed into Lauda, whose Ferarri burst into flames.
The next car, driven by Harald Ertl, plowed into both
wrecked cars. Lunger and Ertl were unhurt, but Lauda's
car was engulfed in flames. Several drivers, including
Lunger and Ertl, worked frantically to remove Lauda
from his burning vehicle. They eventually succeeded
in pulling Lauda to safety, but not before he had been badly burned.
Lauda was airlifted to an intensive care unit in
Mannheim where a team of six doctors and 34 nurses
worked to save his life. He had suffered third-degree
burns on his head and wrists, several broken ribs, a
broken collarbone and cheekbone. Of even greater
immediate concern was the damage to his lungs that
resulted from breathing toxic fumes delivered by the
fire extinguishers at the crash scene.
Although Hunt ended up winning the German
Grand Prix, the headlines the following day were
rightfully dominated by Lauda's crash and how the
defending F1 champion was clinging to life. For four
days, Lauda hovered near death.
But Lauda wouldn't let go. Nearly blinded, he
focused on voices to maintain consciousness. After his
recovery, he immediately began to form plans for his return to racing -- that season. With a therapist as his
constant companion, he exercised 12 hours each day.
"I made a quick recovery as far as damage to the vital
organs was concerned," Lauda wrote, "but my superficial
injuries turned out to be a bit more complicated."
In addition to the severe burns on Lauda's face, both
eyelids had been burnt away. Plastic surgeons offered
different opinions on his therapy, but Lauda settled
upon a Swiss surgeon who grafted skin from behind
his ears to form new eyelids.
With Lauda out, Hunt narrowed in on the points
lead. He won the pole for the Austrian Grand Prix and
placed fourth in the race. He followed Austria with a
win at the Dutch Grand Prix, cutting Lauda's points
lead to two, 58-56. Only four races remained and, with
Lauda presumably done for the year, it appeared the
World Championship was Hunt's for the taking.
Then came the unbelievable news from Lauda's camp: The reigning world champion would return to the track for the Italian Grand Prix on September 12, 1976, only six weeks after his near-fatal accident.

Lauda qualified fifth and scored an amazing fourth-place finish in Italy. He extended his points lead over Hunt, who struggled in qualifying and failed to finish the race.

Hunt came back to win both the Canadian and U.S. Grands Prix, while Lauda placed eighth and third, respectively, in those events. In between, the Federation Internationale de l'Automobile (FIA) took away Hunt's July 18 victory at the British Grand Prix. Now, Lauda held a three-point advantage, 68-65, with one race to go in the season, the Japanese
Although Hunt still trailed Lauda, the dashing
young Brit was now racing's hottest property. While
Lauda had won four of the year's first six races, Hunt
had won four of the most recent six.
In Japan, Hunt and Lauda qualified second and
third, respectively, behind Mario Andretti. Lauda
was perhaps the more concerned about the weather
forecast: he knew Hunt's car would handle better
on a wet track, and he was also worried about
his eyes and reduced visibility in the rain.
Lauda's worst fears were realized when rain poured
all night on the Fuji International Speedway, followed
by fog and more rain on race day. Hunt and Lauda,
both members of the drivers' safety committee, urged
organizers to postpone the race. Their plea fell on
deaf ears. Although the start was delayed by an hour
and 40 minutes, it otherwise went off as scheduled.
Hunt got off to a fast start while Lauda quickly fell
back. After two laps, Lauda pulled into the pits and shut
off the car. "It's too dangerous," the Austrian said.
The Brit led 61 of the 73 laps, then went on to
place third behind Andretti and Patrick Depailler. Hunt
earned four points for his performance, enough to wrest
the season championship from Lauda by a single point.
The championship came as a surprise to Hunt, who had
been unsure of his position following a late pit stop.
"I think it was really a brave decision for Niki to
stop. I really feel for him," Hunt told Sports Illustrated.
"Under the circumstances, he was incredibly
courageous. To tell you the truth, I feel that the race
should not have been started in those conditions. Niki's
decision not to carry on was perfectly reasonable. In
his situation, with the accident at Nurburgring and
everything, who wouldn't have made the same choice?"
Lauda left the track immediately, too emotional to
wait for the inevitable post-race media blitz. Years later,
he expressed few regrets for his decision: "I see the loss
of the 1976 World Championship differently from how
I did then, although I do not reproach myself. If I had
been a little less tense at the decisive moment, if I had
taken it easy and coasted to the couple of points I needed
for the title, then I would have four titles to my credit
instead of three. But, to be candid, I couldn't care less."
Lauda would return to win the World Drivers
Championship again in 1977 for Ferrari, but 1976
would be etched into fans' memories for decades to
come. He later switched to McLaren and won his
third title in 1984 by one-half point over teammate
Alain Prost. Following the 1985 season, Lauda retired for good.
From the severe burns to his head following the
1976 crash in Germany, Lauda suffered extensive
scarring. He lost most of his right ear, as well as the
hair on the right side of his head, eyebrows and eyelids.
He had reconstructive surgery to replace the lids and
get them to work properly, but never felt the need to do
more. Since the accident, he has worn a cap to cover the
scars on his head. The author of five books, Lauda ran
his own airline, Lauda Air, before selling it to Austrian
Airlines in December 2000.
Hunt's dramatic battle with Lauda would result in
Hunt's sole World Championship. Following the 1979
season, Hunt retired from racing and worked for years
as a racing commentator for BBC Sports. He also
served as an adviser and consultant to young drivers.
Hunt died of a heart attack in 1993 at age 45.
The digital certificate is one of the foundations of a public key infrastructure (PKI). A digital certificate is in many ways the electronic equivalent of a passport or driver's license, and may be used to identify and authenticate someone making online transactions.
Details on a digital certificate include the certificate holder's name, their public key, the name of the certification authority and an indication of the certificate policy under which it was issued. Most digital certificates are in the format specified in the X.509 standard.
The public key and private key pair can be generated on a secure device. A certification authority creates the digital certificate, incorporating the public key and signs it, protecting the integrity of the information.
The public key in a digital certificate is mathematically linked to the certificate holder's private key, which must be kept secure; the security of the private key is extremely important. In many applications the private key is protected by generating and storing it on a physical token such as a smart card.
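The relationship between the public and private key can be illustrated with a toy RSA signature in pure Python. The tiny primes, exponents and "message" below are illustrative assumptions only and offer no security; real certificate keys are 2048 bits or more and are generated on secure hardware.

```python
# Toy RSA signature: illustrative only, NOT secure.
p, q = 61, 53
n = p * q                        # modulus; published with the certificate
phi = (p - 1) * (q - 1)
e = 17                           # public exponent
d = pow(e, -1, phi)              # private exponent; must be kept secret

message = 42                     # stands in for a hash of the certificate data
signature = pow(message, d, n)   # only the private-key holder can produce this
verified = pow(signature, e, n)  # anyone with the public key (n, e) can check
assert verified == message
```

This mirrors, in miniature, what a certification authority does: it signs the certificate contents with its private key, and any relying party can verify the signature with the authority's public key.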
By mimicking cells, MIT researcher designs electronic circuits for ultra-low-power and biomedical applications.
A single cell in the human body is approximately 10,000 times more energy-efficient than any nanoscale digital transistor, the fundamental building block of electronic chips. In one second, a cell performs about 10 million energy-consuming chemical reactions, which altogether require about one picowatt (one trillionth of a watt) of power.
MIT's Rahul Sarpeshkar is now applying architectural principles from these ultra-energy-efficient cells to the design of low-power, highly parallel, hybrid analog-digital electronic circuits. Such circuits could one day be used to create ultra-fast supercomputers that predict complex cell responses to drugs. They may also help researchers to design synthetic genetic circuits in cells.
In his new book, Ultra Low Power Bioelectronics (Cambridge University Press, 2010), Sarpeshkar outlines the deep underlying similarities between chemical reactions that occur in a cell and the flow of current through an analog electronic circuit. He discusses how biological cells perform reliable computation with unreliable components and noise (which refers to random variations in signals — whether electronic or genetic). Circuits built with similar design principles in the future can be made robust to electronic noise and unreliable electronic components while remaining highly energy efficient. Promising applications include image processors in cell phones or brain implants for the blind.
Sarpeshkar, an electrical engineer with many years of experience in designing low-power and biomedical circuits, has frequently turned his attention to finding and exploiting links between electronics and biology. In 2009, he designed a low-power radio chip that mimics the structure of the human cochlea to separate and process cell phone, Internet, radio and television signals more rapidly and with more energy efficiency than had been believed possible.
That chip, known as the RF (radio frequency) cochlea, is an example of "neuromorphic electronics," a 20-year-old field founded by Carver Mead, Sarpeshkar's thesis advisor at Caltech. Neuromorphic circuits mimic biological structures found in the nervous system, such as the cochlea, retina and brain cells.
Sarpeshkar's expansion from neuromorphic to cytomorphic electronics is based on his analysis of the equations that govern the dynamics of chemical reactions and the flow of electrons through analog circuits. He has found that those equations, which predict the reaction's (or circuit's) behavior, are astonishingly similar, even in their noise properties.
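One simple instance of this correspondence (a generic textbook example, not taken from Sarpeshkar's book) is that a first-order chemical reaction and a discharging RC circuit obey the same differential equation, so simulating one simulates the other:

```python
import math

# dA/dt = -k*A (first-order reaction) and dV/dt = -V/(R*C) (RC discharge)
# are the same equation; choosing k = 1/(R*C) makes the trajectories coincide.
# All parameter values here are illustrative.
k = 2.0              # reaction rate constant, 1/s
R, C = 1.0, 0.5      # ohms and farads, so 1/(R*C) = 2.0 = k
A, V = 1.0, 1.0      # initial concentration and initial voltage
dt = 1e-4            # forward-Euler step, s

for _ in range(10_000):          # integrate one second
    A += -k * A * dt
    V += -(V / (R * C)) * dt

assert abs(A - V) < 1e-12                 # identical dynamics
assert abs(A - math.exp(-k)) < 1e-3       # both decay as e^(-t/tau)
```

The same mapping underlies the "cytomorphic" idea: a circuit variable can stand in for a molecular concentration because both evolve under equations of the same form.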
Cells may be viewed as circuits that use molecules, ions, proteins and DNA instead of electrons and transistors. That analogy suggests that it should be possible to build electronic chips — what Sarpeshkar calls "cellular chemical computers" — that mimic chemical reactions very efficiently and on a very fast timescale.
Food security is not simply a function of production or supply, but of availability, accessibility, stability of supply, affordability and the quality and safety of food. These factors encompass a broad spectrum of socio-economic issues that bear most heavily on farmers and the impoverished.
Large shares of the world’s small-scale farmers, particularly in central Asia and in Africa, are constrained by poor access to markets, while inputs such as fertilizers and seed are expensive. With a lack of irrigation water, infrastructure and investment, low availability of micro-finance and dependency on a few multinational suppliers, crop production is unlikely to increase in the regions where it is needed most, unless major policy changes and investments take place. These constraints are further compounded by conflicts and corruption.
Agricultural prices are forecast to decline over the next two years but to remain well above the levels of the first half of this decade. However, current scenarios of losses and constraints due to climate change and environmental degradation – with no policy change – suggest that annual production increases could fall to 0.87% towards 2030 and to 0.5% by 2030–2050. Should global agricultural productivity rise by less than 1.2% per year on average, prices, rather than declining, can be expected to rise by as much as 0.3% per year. In addition, production falling short of demand and greater geographical inequity between production and demand, combined with possibly more extreme weather and subsequent speculation in food markets, could generate much greater price volatility than before. In turn, this could reduce food security substantially more than the current crisis has, if appropriate options for increasing supply and security are not considered and implemented.
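Because these annual rates are small, their long-run effect is easy to misjudge. A quick compounding check makes the scenario differences concrete (the 45-year horizon to 2050 is an assumption chosen for illustration; the rates are those quoted above):

```python
# Compound the scenario rates over an assumed 45-year horizon to 2050.
def compound(rate_pct, years):
    """Growth multiplier from an annual percentage rate."""
    return (1 + rate_pct / 100) ** years

growth_low = compound(0.5, 45)    # production rising 0.5%/yr
growth_high = compound(1.2, 45)   # productivity rising 1.2%/yr
price_drift = compound(0.3, 45)   # prices rising 0.3%/yr

assert round(growth_low, 2) == 1.25   # only ~25% more output
assert round(growth_high, 2) == 1.71  # ~71% more output
assert round(price_drift, 2) == 1.14  # ~14% higher prices
```

A gap of 0.7 percentage points per year between the two production scenarios thus translates into roughly a 45-percentage-point difference in output by mid-century.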
The previous chapters clearly outlined the potential impact of environmental considerations on projected food demand and supply. These environmental considerations are not well addressed in global food assessments to date. Whether the Millennium Development Goals (MDGs), such as hunger eradication, will be met in the (near) future, and whether the food crisis as it evolved up to 2008 will affect them in the longer term, depends on how markets respond, how price impacts cascade through the food production system and how governments react to these new circumstances. In short, the impact on food availability and food security can only be assessed through the different dimensions that play a role in the state of food security. The FAO defines food security as follows: “Food security exists when all people, at all times, have physical and economic access to sufficient, safe and nutritious food for a healthy and active life” (FAO, 2003). This involves four dimensions:
- Adequacy of food supply or availability;
- Stability of supply, without seasonal fluctuations or shortages;
- Accessibility to food or affordability; and
- Utilization: quality and safety of food.
Before conclusions can be drawn on food security, these dimensions need to be examined. The first three are elaborated upon in this chapter. The fourth dimension, food utilization, is beyond the scope of this report, whose focus is the environmental aspects of food security.
AVAILABILITY OF FOOD
The availability of food within a specific country can be ensured in two ways: either by food production in the country itself or by trade. The second option has become increasingly important (Figure 29), with growing transport possibilities and storage capacities, and with the mounting challenges some countries face in their domestic production, including limitations in available cropland. International trade in agricultural products has expanded more rapidly than global agricultural GDP (FAO, 2005).
Figure 29: World trade in cereals has increased steadily over the past decades. OECD has always been the major net exporter, and Asia has become a major net importer. (Source: FAOSTAT, 2009).
The past several decades have witnessed a major increase in the integration of the world economy through trade. Many parts of the world have experienced high economic growth in recent years. For example, Asia’s GDP has increased by 9% annually between 2004 and 2006, and growth is especially high in China and India. Sub-Saharan Africa experienced 6% annual growth in the same period, after a long period of recession in many countries. Even countries with a prevalence of hunger reported some economic growth, although this is not always reflected in social conditions. However, global economic growth is projected to slow to around 4% and be in the 6% range for developing countries beyond 2008 (IFPRI, 2008).
An increasing share of global agricultural exports originates from developed countries. It increased from 32% in 2000 to 37% in 2006, but there are large regional variations. For instance, Africa’s share in global exports only increased from 2.3 to 2.8% in this period (UNCTAD, 2007). The EU countries account for most of the global growth; their share of total agricultural exports has increased from slightly more than 20% in the early 1960s to more than 40% today.
A large portion of this increase is accounted for by intra-EU trade, which represents around 30% of world agricultural trade. Conversely, during the past four decades, developing countries have seen their share of world agricultural exports decline from almost 40% to around 25% in the early 1990s before rebounding to about 30% today. This contrasts with the steadily increasing share of developing countries in total merchandise exports. Over this same period, the share of global agricultural imports purchased by developing countries increased from less than 20% to about 30% (FAO, 2005).
Another perspective of this trade is the purchase of land abroad for food production. Responding to recent food crises, a number of countries have started to purchase land abroad for cultivation of crops needed to support domestic demand (Figure 30). This is seen as a long-term solution to the high prices of agriculture commodities and increasing demand for agroforestry products such as palm oil. Among the most active countries owning, leasing or concessioning farmland overseas are China, India, Japan, Saudi Arabia, South Korea and United Arab Emirates; a number of other countries are only starting negotiations for the coming years. The total area of overseas farmland in different countries was estimated at 5.7 million ha at the end of 2008 or 0.4% of the global cropland area.
INCREASING FOOD PRODUCTION
Another option for meeting food demand is to ensure production in the country or region itself, aiming at self-sufficiency and lowering dependency on other regions. Current estimates of demand-side developments require increased production in the regions with the highest economic growth or population increase (see Chapter 2). The majority of these regions will be in emerging economies in Africa and Asia. Africa is currently especially dependent on food imports. Food production in this region is lagging due to limited research investment and the difficulty farmers face in obtaining appropriate inputs for their production process.
The world regions are sharply divided in terms of their capacity to use science to promote agricultural productivity, achieve food security and reduce poverty and hunger. For every US$100 of agricultural output, developed countries spend US$2.16 on public agricultural research and development (R&D), whereas developing countries spend only US$0.55 (IFPRI, 2008). Total agricultural R&D spending in developing countries increased from US$3.7 billion (1991) to US$4.4 billion (2000), or by 1.6% annually (IFPRI, 2008). This spending was largely driven by Asia, where annual spending increased by 3.3%. Today, Asia accounts for 42% of total agricultural R&D spending in developing countries (with China and India accounting for 18% and 10%, respectively). In Africa, agricultural R&D expenditure declined slightly, by 0.4%/year. Although Africa is geographically large, its share of R&D spending is only 13%. Latin America accounts for 33% (with Brazil responsible for 48% of the region’s spending).
Figure 30: An increasing number of countries are leasing land abroad to sustain and secure their food production. Data are preliminary only. (Source: GRAIN, 2008; Mongabay, 2008)
Productivity has risen in many developing countries, mainly as a result of investment in agricultural R&D combined with improved human capital and rural infrastructure. In East Asia, land productivity increased from US$1,485/ha in 1992 to US$2,129/ha in 2006, while labour productivity rose from US$510 to US$822/worker. In Africa, productivity levels are much lower and their growth has also been slower. In 1992, land productivity in Sub-Saharan Africa was 21% below that in East Asia; by 2006 the gap had widened to 59% (IFPRI, 2008).
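The percentage gaps quoted above can be turned into absolute figures with a small calculation. The East Asian productivity values and the gap percentages come from the text; the derived Sub-Saharan values are illustrative:

```python
def implied_value(reference, gap_pct):
    """Value implied by a percentage gap below a reference level."""
    return reference * (1 - gap_pct / 100)

east_asia = {1992: 1485, 2006: 2129}   # land productivity, US$/ha (from the text)
gap = {1992: 21, 2006: 59}             # Sub-Saharan Africa's gap below East Asia, %

for year in (1992, 2006):
    ssa = implied_value(east_asia[year], gap[year])
    print(f"{year}: implied Sub-Saharan land productivity ~US${round(ssa)}/ha")
```

The implied values (roughly US$1,173/ha in 1992 and US$873/ha in 2006) show that the widening relative gap reflects East Asia's rapid growth rather than growth in Africa.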
RESOURCES FOR FERTILIZER USE
One of the major options for significantly raising crop production is increasing the use of mineral fertilizers. The Africa Fertilizer Summit 2006 concluded that the use of fertilizers should be increased to a level of at least 50 kg/ha by 2015. The present use of fertilizers in Sub-Saharan Africa is only about 9 kg/ha of arable land, compared to a world average of 101 kg/ha (Camara and Heinemann, 2006; FAOSTAT, 2009). Within Africa, there are strong differences in fertilizer use between regions, with relatively high use in Northern and Southern Africa, and very low use (around 1 to 2 kg/ha) in Western and Central Africa. Taking the increase proposed by the Africa Fertilizer Summit as a starting point, yearly fertilizer use would grow from 1 to 6 million tonnes. At a fertilizer (DAP) price of approximately US$600/tonne (beginning of 2008), this would mean US$3 billion/year for the purchase of DAP alone; a more moderate price of US$200/tonne would still mean US$1 billion/year. Added to this are significant costs of and investments in transport and distribution, agricultural research, extension programs, capacity building, etc. There are many reasons for the current low use; one is the high retail price of fertilizers, especially in areas with poor infrastructure. A metric tonne of urea costs US$90 in Europe, US$120 in the harbor of Mombasa, US$400 in western Kenya and US$770 in Malawi (Sanchez, 2002).
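The cost estimate above is easy to reproduce as a back-of-envelope calculation. Tonnages and prices are taken from the text; the helper function is illustrative only:

```python
def added_fertilizer_cost(current_mt, target_mt, price_per_tonne):
    """Annual cost (US$) of the extra fertilizer needed to move from
    current_mt to target_mt million tonnes per year."""
    extra_tonnes = (target_mt - current_mt) * 1_000_000
    return extra_tonnes * price_per_tonne

# Growth from 1 to 6 million tonnes/year, as proposed by the Africa Fertilizer Summit
high = added_fertilizer_cost(1, 6, 600)   # DAP at ~US$600/tonne (early 2008)
low = added_fertilizer_cost(1, 6, 200)    # a more moderate US$200/tonne

print(f"High price: US${high / 1e9:.0f} billion/year")  # matches the US$3 billion in the text
print(f"Low price:  US${low / 1e9:.0f} billion/year")   # matches the US$1 billion in the text
```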
A major challenge is to find ways of making fertilizer available to smallholders at affordable prices. There is also a need for holistic approaches to soil fertility management that embrace the full range of driving factors and consequences of soil degradation (TSBF-CIAT, 2006). This would include the integration of mineral and organic sources of nutrients, thereby using locally available inputs and maximizing their use efficiency, while reducing dependency on the prices of commercial fertilizers and pesticides. The use of perennials, intercropping and agroforestry systems, such as nitrogen-fixing leguminous trees, are ways to increase nutrient availability while also enhancing water availability and pest control in a more sustainable manner (Sanchez, 2002).
RESOURCES FOR IRRIGATION
Irrigated land area increased rapidly until 1980, with expansion rates of more than 2% a year. In Asia in particular, it led to a steady increase in staple food production, together with other elements of the green revolution package (Faures et al., 2007). After 1980, expansion of the irrigated area slowed, and this trend is expected to continue in the near future. One reason is that the areas most suitable for irrigation are already in use, leading to higher construction costs in new areas (Faures et al., 2007). Another is the strong decline in relative food prices over recent decades, which makes it less profitable to invest in irrigation. Current irrigation systems could be improved by investing in water control and delivery, automation, monitoring and staff training.
The irrigated area has remained very low in Sub-Saharan Africa, and of the land under irrigation, 18% is not used (FAO, 2005b). In most African regions the major challenge is not a lack of water, but unpredictable and highly variable rainfall patterns, with dry spells every two years causing crop failure. This high uncertainty and variability drive the risk-averse behaviour of smallholder farmers: to avoid losses in case of total crop failure, they rarely invest in soil management and fertility, crop varieties, tillage practices or even labour (Rockstrom et al., 2007a,b). Managing the extreme rainfall variability over time and space can provide supplemental irrigation water to overcome dry periods and prevent crop failure. In combination with improved soil management (in regions with severe land degradation, only 5% of the rainwater is used by crops), this should reduce the risk of total crop failure and enhance the profitability of investments in crop management, for example fertilizers, labour and crop varieties. Increasing crop canopy coverage reduces evapotranspiration from the soil, improving soil moisture and the provision of water to the crop.
During each workshop, students rotated through three activities that reflected engineering courses offered through the Project Lead the Way engineering program at the high school.
The activities included:
• fischertechnik, in which students built a model and then wrote a computer program to make the model move;
• soldering, in which students assembled electronic components to make a reaction tester;
• 3-D design, in which students used a three-dimensional modeling program to design a game, and then created it with equipment in the technology department.
Baker has held the workshop every year since 2003. The high school’s technology department designed the workshop to encourage females to think about pursuing engineering courses when they enter high school, as well as to consider a career in engineering or another field in math or science. Currently, about 20 percent of the students enrolled in Project Lead the Way courses at the high school are females. | <urn:uuid:cb9005d0-cd9b-45d7-bda5-033656c87920> | CC-MAIN-2016-26 | http://blog.syracuse.com/neighbors/2010/01/baldwinsville_girls_study_career_of_engineering.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393463.1/warc/CC-MAIN-20160624154953-00144-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.977099 | 178 | 3.796875 | 4 |
The Boulder Valley School District has three people assigned to help teachers at 55 schools figure out how to integrate technology into their classrooms.
Looking for a better way to use limited resources, a committee of teachers, principals and community members spent a year developing a vision and researching programs in school districts nationwide. The group settled on a model in which small groups of teachers will receive extensive training and then serve as mentors to other teachers in their schools.
"It's a good new direction," said Boulder Valley educational technology manager Kelly Sain, who worked with a similar model in two other school districts. "The enthusiasm from our teachers has been huge."
In this first year, the district is asking middle and high school teachers to apply for 30 spots. The next year, 30 elementary teachers will be chosen, and the program will be open to any Boulder Valley teacher for 30 spots in the final year. Teachers can apply until Oct. 29, with the training expected to start in January.
Teachers in the program will receive nine days of professional development, salary credit and about $2,600 worth of digital tools to use in their classrooms. The professional development will include time for the teachers to collaborate, Sain said.
"We want to build a community of learners," she said.
Librarians at the schools also will serve as resources for the teachers as they try new projects in the classrooms. The digital toolkit is expected to include devices like video cameras, iPods and smartboards.
Once the first group is trained, they will be expected to become resources at their schools, helping and encouraging colleagues.
"They know the staff members and the culture," Sain said. "They can do a fantastic job connecting with others. That's really important as we go forward to try to figure out how to use digital tools with our students."
Andrew Moore, Boulder Valley's chief information officer, said the three people in the educational technology office have been spread too thin. This model, he said, should be a more effective way to help teachers incorporate technology in their lessons.
He said another reason for trying the model now is that it's becoming increasingly difficult for teachers to keep up with all the innovations, from iPods to netbooks to the paperless possibilities of Google collaborative environments.
"Technology is changing rapidly," he said. "We're in a different world than we were even just three years ago. Teachers need to know how technology can keep kids engaged in the learning process."
For more information, go to https://sites.google.com/a/bvsd.org/21st-century-cohort. | <urn:uuid:f9630211-95dd-414c-ab39-9cbc3fab0346> | CC-MAIN-2016-26 | http://www.dailycamera.com/boulder-county-schools/ci_21818468/boulder-valley-use-new-model-teach-teachers-about?source=most_emailed | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396887.54/warc/CC-MAIN-20160624154956-00197-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.976747 | 536 | 2.71875 | 3 |
Northern Gastric Brooding Frog
Distribution and Habitat
Rheobatrachus vitellinus was discovered in January 1984 (Mahony et al. 1984) and was found exclusively in undisturbed rainforest in Eungella NP, mid-eastern Queensland, at altitudes of 400-1000 m (Covacevich & McDonald 1993). The area of occurrence of R. vitellinus was less than 500 km2 (map in McDonald 1990). The species was considered common across its range until January 1985, when the first signs of decline (reported by Winter and McDonald 1986) were observed at lower altitudes (i.e., about 400 m) (McDonald 1990). At higher altitudes the frogs remained common until March 1985 but were absent in June of that year (McDonald 1990). Despite continued efforts to locate the species, Rheobatrachus vitellinus has not been recorded within Eungella NP or at any other location since March 1985 (Ingram and McDonald 1993; McDonald and Alford 1999).
Rheobatrachus vitellinus was formerly known from Eungella NP and Mt Pelion SF (Tyler 1997) and was not recorded on private lands.
Covacevich, J.A. and McDonald, K.R. (1993). ''Distribution and conservation of frogs and reptiles of Queensland rainforests.'' Memoirs of the Queensland Museum, 34(1), 189-199.
Hero, J-M., Hines, H.B., Meyer, E., Morrison, C., and Streatfeild, C. (1999). ''New records of 'declining' frogs in Queensland (April 1999).'' Frogs in the Community – Proceedings of the Brisbane Conference 13–14 February 1999. R. Natrass, eds., Queensland Museum, Brisbane.
Hero, J.-M., Hines, H.B., Meyer, E., Morrison, C., Streatfeild, C., and Roberts, L. (1998). ''New records of 'declining' frogs in Queensland, Australia.'' Froglog, 29, 1-4.
Ingram, G. J., and McDonald, K. R. (1993). ''An update on the decline of Queensland's frogs.'' Herpetology in Australia: A diverse discipline. D. Lunney and D. Ayers, eds., Transactions of the Royal Zoological Society of New South Wales, 297-303.
Mahony, M., Tyler, M.J., and Davies, M. (1984). ''A new species of the genus Rheobatrachus (Anura: Leptodactylidae) from Queensland.'' Transactions of the Royal Society of South Australia, 108(3), 155-162.
McDonald, K. and Alford, R. (1999). ''A review of declining frogs in northern Queensland.'' Declines and Disappearances of Australian Frogs. A. Campbell, eds., Environment Australia, Canberra. Available in .pdf format online.
McDonald, K.R. (1990). ''Rheobatrachus Liem and Taudactylus Straughan and Lee (Anura: Leptodactylidae) in Eungella National Park, Queensland: distribution and decline.'' Transactions of the Royal Society of South Australia, 114(4), 187-194.
McDonald, K.R. and Tyler, M.J. (1984). ''Evidence of gastric brooding in the Australian leptodactylid frog Rheobatrachus vitellinus.'' Transactions of the Royal Society of South Australia, 108, 226.
Richards, S. J., McDonald, K. R., and Alford, R. A. (1993). ''Declines in populations of Australia's endemic rainforest frogs.'' Pacific Conservation Biology, 1, 66-77.
Tyler, M.J. (1989). Australian Frogs. Penguin Books Australia Ltd., Victoria.
Tyler, M.J. (1997). The Action Plan for Australian Frogs. Wildlife Australia, Canberra, ACT.
Winter, J. and McDonald, K. (1986). ''Eungella, the land of cloud.'' Australian Natural History, 22(1), 39-43.
Written by J.-M. Hero; L. Shoo; C. Morrison; M. Stoneham; H. Hines; M. (m.hero AT mailbox.gu.edu.au), Griffith University
First submitted 2002-04-05
Edited by Kellie Whittaker (2010-10-14)
IRENA Urges Faster Use of Renewable Energy to Meet UN Target
Countries must “substantially” expand their capacity to produce energy from solar, wind and other renewable resources to meet a global United Nations target, the International Renewable Energy Agency said.
Investment in power generation and grids and other technologies for producing heat and energy in an environmentally sustainable way should accelerate if states are to achieve the UN’s goal of doubling renewables use by 2030, the agency known as IRENA said. Nations must add an average renewables capacity of 150 gigawatts a year through 2030, in contrast to the increase in 2011 of 110 gigawatts, IRENA said today in an e- mailed statement.
Governments want to diversify their energy mix and reduce fuel imports and costs without adding to global carbon emissions, IRENA Director General Adnan Amin said in a Jan. 9 interview at the agency’s Abu Dhabi headquarters. Declining costs have made many renewables more competitive with fossil fuels, he said at his office in the United Arab Emirates capital.
A boom in the output of natural gas trapped in shale would complement growth in renewables use because both are substitutes for dirtier fuels such as coal, Amin said. Gas can supply constant power, unlike intermittent wind or solar.
Costs for some types of solar panels have fallen by as much as 60 percent in two years, Amin said. Small-scale hydropower often provides the cheapest source of energy for developing countries, according to an IRENA report this week.
To contact the reporter on this story: Anthony DiPaola in Dubai.
To contact the editor responsible for this story: Steve Voss.
You can add notes to your slides that don't show when the presentation is played.
What Good Are Notes?
- Help for you (the speaker): Print and use as cue cards for your speech.
- Help for the audience: Print and give to the audience. They can add their own notes during your speech. Include complex charts or graphs for the audience to study up close, or detailed info that won't fit on a slide, like bibliography references.
In front of an audience: If the computer that you are using to show your presentation can support two monitors at once, you can project a slide onto the screen while you are looking at the slide with its notes on the computer's own monitor. No more fumbling with index cards or sheets of notes for your speech!
Point of Confusion: Notes Pane or Notes Page?
Two ways to enter and view your notes!
- The Notes Pane is part of the Normal view. You can type in notes in a short pane.
- Notes Page View allows you to type in notes while seeing how those notes will print.
Opening Notes Page View
You can switch to Notes Page View with the menu: View | Notes Page.
What can you do in Notes Page View?
- Type text in the Notes placeholder below the slide.
- Resize or move the slide on the page.
- Resize or move the Notes placeholder.
- Paste a picture or table or chart onto the Notes page; it does not show on the slide.
- Switch back to Normal view to edit your slide by double-clicking the slide in the Notes Page view.
What can't you do in Notes Page View?

- You cannot edit the slide contents in this view. (Use Normal view for that.)
The Notes Page is a great way to include complex data in a printed version of the presentation. The audience can see the details that won't show well on a slide and can study them afterwards. You will learn how to insert this kind of material later.
What you will learn:
- to view notes in the Notes Pane
- to resize the Notes Pane
- to edit notes in the Notes Pane
- to open Notes Page view
- to switch back to Normal view
- to change the Zoom size to Fit
- to change slides in Notes Page view with the scrollbar
- to add notes in Notes Page view
Start with: issues.ppt from your Class disk
- If necessary, open issues.ppt from
your Class disk with the Slides thumbnails showing.
- Select the slide Unethical: Spam in the Slide Pane.
The Notes Pane at the bottom has some text, but the pane is so small it is hard to tell how much is there.
- Scroll through the text in the Notes Pane, using the scroll arrows at the right.

Hmm. Not very satisfactory! This pane is just too tiny for more than one line.
- Move your mouse pointer over the top edge of the Notes Pane until it turns into the resize shape.
- Drag upwards. A gray bar shows on the slide and on the vertical ruler at the left, if the ruler is showing. This marks where the top of the pane will be if you drop.
Command to show/hide the ruler: View | Ruler
- When the gray bar is about at the 1 on the vertical ruler, drop. Now all of the text in the Notes Pane shows.

The image of the slide is made much smaller since there is less room for the Slide Pane now.
You cannot use Undo for changes to the interface, like dragging the edge of a pane.
PowerPoint normally remembers your arrangement of panes for the next time you open this presentation. You can change that in the Options dialog on the View tab: Tools | Options.
Switch to Notes Page View
The Notes Page view gives you a lot more room to work with than the Notes Pane. However, you cannot edit the slide itself in Notes Page view.
- Select View | Notes Page on the menu. Your view changes from Normal to Notes Page view.
- Adjust the Zoom by selecting Fit, using the Zoom control on the toolbar. Now you can see the whole page, showing both the slide image and the notes.
Notes Page: Edit
You can use the usual word processing methods for entering, deleting, and editing text in the Notes Page view as well as in the Notes Pane. The new text is highlighted in the illustration but will NOT be highlighted in your pane.
- If necessary, change the Zoom percentage so that you can read the notes.
- Click in front of the first line and type Discussion questions: and then press Enter. You have a new first line.
- Click after the word spam and type the word email. Be sure there is a space between the words.
- Click at the end of the line and Backspace to delete the period. Type a question mark.
- At the end of the next line of notes, add a question mark.
- At the end of the line "Do you keep chain letters...", add the sentence How do you check their truthfulness?
Notes Page View: Add Notes
With so much space to work with, it is easy to enter notes in Notes Page view.
- While still in Notes Page View, scroll up to the slide Security. There are no notes for this slide yet. An opportunity for you!
- Change the Zoom to 66% or whatever size makes it easy for you to read what you are going to type.
- Click in the Notes placeholder for the slide Security, and type the following text:
Have you ever used pirated software?
Have you ever downloaded music without permission of the copyright owner?
Have you ever used your work computer for personal tasks?
Have you ever had damage from a computer virus or trojan?
Have you ever experienced identity theft?
Switch Back to Normal View
- Double-click the image of the slide on the Notes Page. The view switches back to Normal view, with that slide showing.

Alternate methods: View | Normal on the menu, or the Normal view button on the Views bar.
The Notes Pane is still enlarged. You need to restore the original size.

- Drag the top edge of the Notes Pane back down to where it was before, tall enough to show one line.
- Click the Save button on the toolbar to save the presentation with the same name and the same location. [issues.ppt] (Be sure your Class disk is still in the drive.)
Here's another nail in the near-death experience (NDE) coffin for those who believe that NDE's point to some sort of supernatural, non-physical, soulful, heaven'ish aspect of reality.
The brains of dying rats show signs not of a lack of brain activity, but of hyperactivity. A last neurological gasp, so to speak.
A burst of brain activity just after the heart stops may be the cause of so-called near-death experiences, scientists say.
The insight comes from research involving nine lab rats whose brains were analyzed as they were being euthanized. Researchers discovered what appears to be a momentary increase in electrical activity in the brain associated with consciousness.
This goes a long way toward disproving the notion that when people have a near-death experience, the brain isn't functioning. Or at least, barely is.
Quite the contrary, according to the rat experiment (rats are notoriously physiologically similar to humans in many ways).
Borjigin wanted to find out if there was something happening in the brains of these people who had close calls with death that could help explain these experiences.
"If the near-death experience comes from the brain, there's got to be signs — some measurable activities of the brain — at the moment of cardiac arrest," she says.
But it's really hard to study this in people. So Borjigin and her colleagues decided to study rats. They implanted six electrodes into the brains of nine rats, gave the animals lethal injections and collected detailed measurements of brain activity as they died.
"We were just so astonished," Borjigan tells Shots.
Just after the rats' hearts stopped, there was a burst of brain activity. Their brain suddenly seemed to go into overdrive, showing all the hallmarks not only of consciousness but a kind of hyperconsciousness.
"We found continued and heightened activity," Borjigan says. "Measurable conscious activity is much, much higher after the heart stops — within the first 30 seconds."
Borjigin and her colleagues think they essentially discovered the neurological basis for near-death experiences. "That really just, just really blew our mind. ... That really is consistent with what patients report," she says.
Patients report that what they experienced felt more real than reality — so intense that it's often described as life-altering.
But Borjigin thinks the phenomenon is really just the brain going on hyperalert to survive while at the same time trying to make sense of all those neurons firing. It's sort of like a more intense version of dreaming.
"The near-death experience is perhaps is really the byproduct of the brain's attempt to save itself," she says.
This is a great example of how the scientific method can cast light on quasi-religious beliefs. Eben Alexander claimed that he almost died and went to heaven. That claim deserves extreme skepticism, for reasons described here.
Rats aren't people, obviously.
But this study will stimulate research on what happens to human brains when death approaches. It is likely that rats and people are much alike. Which means that glimpses of "heaven" really are akin to dreams: creations of the brain, not a manifestation of supernatural reality.
It’s called a near-death experience, but the emphasis is on “near.” The heart stops, you feel yourself float up and out of your body. You glide toward the entrance of a tunnel, and a searing bright light envelops your field of vision.
It could be the afterlife, as many people who have come close to dying have asserted. But a new study says it might well be a show created by the brain, which is still very much alive. When the heart stops, neurons in the brain appeared to communicate at an even higher level than normal, perhaps setting off the last picture show, packed with special effects.
“A lot of people believed that what they saw was heaven,” said lead researcher and neurologist Jimo Borjigin. “Science hadn’t given them a convincing alternative.” | <urn:uuid:7e6a23e7-b88f-499c-a3c3-95caa1ee6d75> | CC-MAIN-2016-26 | http://hinessight.blogs.com/church_of_the_churchless/2013/08/near-death-experiences-could-be-hyperactivity-of-dying-brain.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783394987.40/warc/CC-MAIN-20160624154954-00170-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.975728 | 851 | 2.578125 | 3 |