RedR - United Kingdom
Language of event:
Against a backdrop of global climate change and population growth, the number of natural and human-made emergencies each year has increased drastically in the past decade. Although floods, earthquakes, droughts, and other natural hazards cannot be prevented, their impact on communities can be limited through disaster risk reduction (DRR) practices.
You will learn the concept and practice of reducing disaster risks by systematically analyzing and managing the causal factors of disasters. For example, the course examines reducing exposure to hazards, lessening the vulnerability of people and property, wisely managing land and the environment, and improving preparedness for adverse events.
Blending theory with practice, this workshop takes you through good practices in the stages of the disaster risk management cycle and the commonly used terminology, frameworks, tools and approaches to effective DRR.
The curriculum also covers a selection of global DRR documents and frameworks, including the United Nations-endorsed Hyogo Framework for Action and the International Strategy for Disaster Reduction (ISDR) system.
The course covers:
- Basic disaster management terms and concepts
- Hyogo Framework
- Community Based Disaster Risk Management (CBDRM) Framework for DRR
- Introduction to hazard, vulnerability and capacity assessment (HVCA)
- Introduction to community participatory tools and techniques
- Organising and conducting a participatory risk assessment
- Planning DRR strategies appropriate to the context
- Monitoring and evaluating the impact of DRR strategies
- How to incorporate DRR into various humanitarian sectors
Tool: Hands-On Math: Interactive Color Tiles
Go to: http://itunes.apple.com/.../id472739277
Hands-On Math: Interactive Color Tiles provides students with an open-ended tool for exploring fundamental math concepts. Color Tiles are a way to give students insight into a variety of important mathematical topics. Students place Color Tiles on an interactive Playground to represent mathematical ideas. Very young children can learn positional terms such as above, below, right, and left while creating colorful designs.
Equivalency and related concepts (for example, equal to, not equal, more, less, greater than, and less than) are easily represented using Color Tiles. The basic operations of addition, subtraction, multiplication and division can all be represented using Color Tiles. Visualizing the meaning of these operations is a fundamental skill.
More advanced ideas such as working with integers, fractions, decimals and percents are topics included in the lesson strategies. Color Tiles are especially well suited for introducing symmetry and asymmetry. Fact families can be studied using two different Color Tiles to represent sets. View the Instructor's Guide for sample lessons and additional ideas.
Guide: User's Guide [PDF]
Author: Ventura Educational Systems
Cost: Requires payment for use
Math 4: Operations with numbers, Percentages, Integers, Symmetry
Math 5: Operations with numbers, Fractions, Percentages, Integers, Symmetry
Math 6: Operations with numbers, Fractions, Percentages, Integers, Symmetry
Math 7: Operations with numbers, Fractions, Percentages, Integers, Symmetry
An unusual portrait of Genghis Khan, made from arranged white stones, on a hillside outside Ulan Bator. Mongolians pronounce it as 'Chinggis Khaγan'
"Genghis Khan (Чингис Хаан) August 18, 1227) born Temujin, was the founder, Khan (ruler) and posthumously-declared Khagan (emperor) of the Mongol Empire, the largest contiguous empire in history.
Temujin came to power by uniting many of the nomadic tribes of north-east Asia and Central Asia. After founding the Mongol Nation and being proclaimed "Genghis Khan", he pursued an aggressive foreign policy by starting the Mongol invasion of China and Central Asia. During his life, the Mongol Empire eventually occupied most of Asia.
Temujin died of unknown causes in 1227 after a campaign to subjugate the Xi Xia and Jin dynasties in China. He was buried in an unmarked grave somewhere in his native Mongolia. His descendants went on to stretch the Mongol Empire across most of Eurasia, conquering all of modern-day China and Mongolia, as well as substantial portions of modern Russia, southern Asia, Eastern Europe and the Middle East."
More info here:
This is a link to Wikipedia
Critiques
bracasha75 (24589) 2007-11-14 15:43
The legend of Mongolia is well presented on this hill... everything can die, but the legends of Genghis Khan, Attila and Alexander of Macedon never will.
gervaso (13243) 2007-11-14 16:36
Interesting how it seems to be so dry and yet has such green trees all around! The face drawn on the mountain is wonderful, and so are the colors of the sky! Well done!
cak (4923) 2007-11-14 17:03
Wow, Cengiz (Genghis) written on the mountain. Excellent.
ribeiroantonio (22637) 2007-11-14 17:18
Being made with stones placed on the ground and subject to the weather, we can say that it is a very original piece of art.
The photo is very good with great colours and very sharp. And the note is excellent with plenty of good info about the Mongolian hero.
singuanti (15250) 2007-11-14 18:56
Hi Chris. Your composition is well thought out and you kept your focal point just enough off-center. The light gives you a pleasing sky and colors. This looks to be a barren and empty area. tfs Chris.
Floydian (30970) 2007-11-14 19:56
A very bright picture with strong colours and a very interesting focal point to the hill with Genghis on it.
The tree on the right is an important detail in the composition... I like it.
UnTrained (0) 2007-11-14 22:06
Mount Rushmore in the Mongolian way, that is what I thought of. Excellent sharpness to present his portrait in that beautiful landscape. I like your POV opening the shot with the bushes and the tree.
Kind regards, Ulf
gildasjan (43538) 2007-11-14 23:42
A landscape nicely set off by this tree in the foreground. Natural colours and very good handling of the image parameters.
Buin (42466) 2007-11-15 0:26
You show us an impressive landscape here and this portrait truly adds a lot to the scenery. Your point of view creates a very good depth here. It's interesting how one hill is wooded and the others are naked. The colour's contrast between the trees and the soil is incredible. A great view!
Greetings from a somewhat wintry Germany!
carper (96) 2007-11-15 0:29
fine shot here Chris
The title of the shot is good, fine POV and good settings. I like the place. Good photo job, have a good day.
barrufeto_77 (28886) 2007-11-15 0:44
Well seen! It's nice how the portrait breaks the compo in the middle of nowhere!
snunney (98241) 2007-11-15 0:45
Interesting to see this apparently unforgiving landscape does have vegetation. Good point of view on the great Mongol warrior. Very good natural tones and definition.
toto (0) 2007-11-15 1:04
A beautiful photo of this place, where we see this 'Genghis Khayan' well executed, well captured, well seen. These lands look dry. Pretty colours, good sharpness.
pasternak (15185) 2007-11-15 2:00
Nice composition here, Chris, a desert view with this foreground tree and the rock with this "portrait" balancing the scene very well...
PixelTerror (0) 2007-11-15 2:27
This kind of mountain painting is similar to what's done in northern Chile, a good way to celebrate the national hero; the foreground tree and bushes add good depth to your image.
Have a nice day kY
Budapestman (82620) 2007-11-15 2:32
Splendid shot, this is a really interesting and spectacular place. I like the particular portrait, the POV is superb. Well done! Thank you for sharing. Have a nice day
gneufeld (15890) 2007-11-15 14:10
Great shot of this unusual place including the stone marked portrait. Nice colors, good contrasts and details in perfect light. Great shot and TFS. Gerald
Gerrit (51935) 2007-11-15 16:08
interesting phenomenon and fine landscape. Good clear tones, well handled light.
vincz (19113) 2007-11-16 1:28
Very nicely done, your composition with the little tree in the foreground and these hills with the portrait in the back is excellent.
batalay (38773) 2007-11-16 6:19
A compelling shot, with a superb note. All the Turkic peoples across Central Asia seem to claim him, and I guess they are in large part descended from the Mongols. This is an educational experience, seeing your photo and reading your explanation.
Emile (20352) 2007-11-16 20:56
Good shot with the portrait of a famous man made with white stones, they did a great work. Very good light and the tree on the left produces a good balance.
Well done. TFS.
jrj (34843) 2007-11-19 7:47
A most unusual portrait Chris - must have taken you most of the morning placing all the white stones hehe...
Indeed a most unusual piece of art and for sure a fun idea - great landscape there in faraway Ulan Bator
dareco (17134) 2007-11-20 1:09
An amazing portrait made of rocks!! So very creative! The scenery around is beautiful and I love the colors in this picture. TFS
Angshu (56750) 2007-11-21 0:29
Unusual and original work of art. The landscape looks mostly arid, with the brown barren earth providing good contrast with the green of the vegetation. It appears that you got very good blue skies the entire trip in Mongolia.
danyy (0) 2009-04-24 23:53
A very simple view of a region that has remained natural and quite austere; our planet needs such places of escape even if they are arid.
Murukan Worship in Melbourne
Globetrotting has long been a phenomenon among Tamils; they have been a seafaring people since time immemorial. Yet the landscape of the Tamil people underwent a drastic change with the advent of European colonization and/or as an after-effect of such colonization, and so began an era of migrations. Tamils began to make homes outside their traditional homeland, not necessarily by choice but through a combination of circumstantial factors: political, economic and social.
"Do not live in a place where there is no temple" is a Tamil maxim and true to that spirit they built temples wherever they settled.Today there are Hindu temples across the globe. According to Encyclopedia Britannica, "Tamil race is perhaps the greatest temple builders in the world". Temple is perceived as a way of engaging the energies and talents of the community to allow its members to evolve to a higher state of perfection.
Murukan characterizes youth, splendor, beauty and many other things normally associated with goodness. Like all the different sects of Hinduism, the Murukan faith is also undergoing a process of relocation and adaptation across the globe. The fact that the Murukan faith accommodates a range of beliefs and practices makes for a somewhat tenuous concept of identity. Nevertheless, the same phenomenon serves to facilitate participation by a wide spectrum of society. Murukan symbolises the time cycle and represents the cosmic process of creation and dissolution, day and night, life and death.
Murukan worship is vast and this article is by no means exhaustive but persuasive enough to leave a lasting impression on one's mind and to encourage further researches.
Australia's multicultural society recognizes the diversity of its people and nurtures the aspirations, beliefs, traditions, and practices of individuals. Melbourne is the capital of the state of Victoria and according to the 1996 census 6,251 Tamils live there, second only to Sydney's 9,072. Australia's Tamil population is 18,690. There are Saiva temples in all states and territories except in the state of Tasmania, which has a small population of Tamils (76 according to 1996 census). According to recent estimates Tamil population in Australia has reached 30,000 and continue to grow rather rapidly.
Murukan worshipers in Australia are predominantly Tamil-speaking people, most of them of Ceylonese, Indian, Malaysian, Singaporean, Mauritian or Fijian origin. Tamils have a short history of about a quarter of a century in a land where the white settlers have a two-hundred-year history. Murukan is considered a Dravidian and/or Tamil god, and though some of the beliefs, traditions and practices of the Aborigines could remotely be linked to the Dravidian group of peoples, there is no definitive research to confirm it. Therefore, the origins of Murukan worship can only be linked with the immigration of Tamil-speaking people to Australia in the mid 1970s.
The estimated Tamil population in Australia prior to the mid 1980s was in the region of 3,000 to 3,500 and organized form of religious activities began only after the mid 1980s when the influx of Tamil immigrants began in large numbers. Religious activities began in the form of congregational prayers and temples were built subsequently. Murukan temples are located in Melbourne (in the state of Victoria), Sydney (in the state of New South Wales), Perth (in the state of Western Australia) and Canberra (Australian Capital Territory).
Temples are the sole forum for the articulation and continued negotiation of Hindu identity in Australian society. The temple serves as an epicenter of cultural activities. Temples are the cornerstones not just of Murukan worship but of greater Saivism. There are three temples in Melbourne; the permanent structures are built observing agamic principles while factoring in local social, environmental and statutory circumstances. Lord Murukan with his consorts Teyvanai and Valli is the presiding deity at the Melbourne Murukan temple, the only one of its kind in Melbourne. The other two are the Siva-Vishnu temple at Carrum Downs and the Śrī Vakratunda Vinayakar temple at The Basin.
Śrī Vakratunda Vinayakar Temple is located at The Basin, about 35 km east of the city, a place in the midst of areas of natural scenic beauty. This temple was consecrated in October 1992. The presiding deity is Śrī Vakratunda Vinayakar. Idols of Murukan with his consorts Teyvanai and Valli, Durga, Abhirami and Navagraham are enshrined in the inner courtyard along with statues of Nataraja with his consort Parvati, Murukan with his consorts Teyvanai and Valli, and Visnu. Melbourne Vinayakar Hindu Sangam manages this temple and has plans for further expansion. This temple publishes a newsletter and runs a weekly two-hour Hindu religious radio program, both in English and Tamil, to develop a better understanding of Hinduism amongst non-believers and to serve as an informative as well as educative opportunity for believers.
Siva Vishnu Temple
Siva Vishnu Temple is situated at Carrum Downs, in the south-east of the city in close proximity to the sea. This temple was consecrated in May 1994. It reflects the growing convergence of two of the main streams in Hinduism: Saivism and Vaisnavism. There are two temples under one roof: one half of the temple is dedicated to the deities of Saivism and the other half to the deities of Vaisnavism, with the two areas symbolically connected by a shrine of Hariharan (Aiyyapan) located along the notional dividing line.
This temple is built on an enormous extent of land and has two processional paths. There are two temple towers, both filled with sculptures depicting stories from the Hindu epics and scriptures, inspiring the beholder in divine contemplation. These representations are the infinite expressions of the infinite beauty of god through the medium of infinite materials such as stone, metal and so on, which is one of the basic characteristics of Hinduism. This temple is also claimed to be one of the biggest in the southern hemisphere, with 1500 square meters of roofed area and 39 deities enshrined there. The Hindu Society of Victoria manages this temple and it too has plans to expand. The temple possesses a vast stretch of land in a secluded area and could become a religio-cultural center in the future. This temple publishes a newsletter too.
Melbourne Murukan Temple
Melbourne Murukan Temple at Sunshine is located in the western suburbs of Melbourne. This center is the fruit of ardent devotees of Murukan who so desired to build a dedicated temple for their beloved god. The center was formed in 1995, the temple project was launched and the first stage of the temple was completed in January 1999. Currently the statues of Lord Murukan with Teyvanai and Valli and Vinayakar are held in a hall where a resident priest conducts puja and abishekam. Idols of Navagraham are enshrined too. Melbourne Murukan Cultural Center manages this temple and endeavors to build a fully-fledged traditional style temple in the near future. This center attracts devotees from far and wide.
Introduction: Murukan is correlated with the sun, moon and rain; accordingly he presided over the coming of the rains, the blossoming of trees and the like. The Paripatal refers to the coming of the monsoon, the concomitant flooding and fecundation of the earth, as the arrival of Murukan with his entourage of elephants and army. The lunar cycle plays a vital role in the eastern religions; in the Murukan faith three stages in the fortnight between the waxing and waning moons have significances attached to them. The first tithi of the new moon is known as the day of Murukan's birth, the second tithi is Sasti, and the third tithi is the full moon. The full moon stands for completion, fulfilment and total maturity and is the pinnacle of the cosmic cycle and the maturity of the deity.
Most of the important dates in the religious calendar are observed in these temples, and it goes without saying that the monthly Karttikai and Sasti and the annual Kanta Sasti, Vaikaci Vicakam, Taippucam and Pankuni Uttiram are observed with special rituals. The proceedings vary from one temple to another; in one temple it may just be the performance of abhishekam (holy bath) and puja, while in another the deities are also taken in procession in the inner and outer courtyards. The festivals in the Melbourne temples are held in the traditional style, with minor variations to suit local conditions.
Karttikai is celebrated each month and is of particular significance as it commemorates the coming into being of Kantan and his being nurtured into maturity and his lordship over time. Karttikai is equivalent of 'birthday'. After the abhishekam and special puja, utsava murti or bronze icon of Murukan is taken in a procession in the inner courtyard of the temple. Karttikai is observed in all the three temples as a monthly special puja. During the Tamil month of Karttikai, the Karttikai natchatiram is celebrated as Tiru Karttikai and the custom of lighting lamps outside homes and temples are followed here.
The sixth day of the waxing moon in the month of October-November is celebrated as Kanta Sasti. Sasti is associated with the destruction of evil forces, the asuras (demons). Murukan engaged the armies of Singhamukhan, Surapadman and Tarakasuran in a six-day battle and vanquished all of them on the sixth day. Events leading to the conquest of the asuras are dramatized and enacted. Murukan's triumph over the cosmic forces of evil is celebrated: the asuras were annihilated and the gods were liberated. This battle is known as Surasamharam. This is one of the important Murukan festivals in all three temples in Melbourne.
During the Kanta Sasti period, abhishekam and special puja are conducted each day. Surasamharam is held as a vibrant ceremony on the sixth day. The Siva-Vishnu temple and the Śrī Vakratunda Vinayakar temple have huge structures depicting the evil forces, and the ceremony is conducted in the traditional style with the puppet demon changing many masks. Corresponding to the six days of the war over the evil forces, devotees undertake fasts, prayers and devotional singing to Lord Murukan. An increasing number of devotees fast here; they consume fruits and/or milk once a day for the six days and complete their fast on the seventh day before sunrise.
Vaikaci Vicakam: Vicakam that occurs during the Tamil month of Vaikaci is the commemoration of Murukan's birth and ascendancy to a place of supremacy amongst the gods. This is observed with abhishekam and special puja here.
Taippucam: Poosa natchatiram during the month of Thai is observed as Taippucam. It was on this day that Murukan was given his Vel by his mother Parvati, at the outset of his campaign to defeat Surapadman, head of the asuras. Devotees undertake kavadi and patkudam (milk pot) on the Taippucam day.
Kavadi is increasingly becoming an important feature of the Taippucam festival. Kavadi refers to a horizontal pole held on the shoulder, on the two ends of which a load is carried, and it assumes the form of a human body: the wooden structure represents the bones; the cloth cover represents the skin; the string woven around it represents the veins; and the milk contained in the two pots hung from the two ends is the blood. Therefore the act of lifting a Kavadi is professed as submitting oneself at the feet of Murukan himself.
At the end of the Kavadi procession, abhishekam is held to Lord Murukan at the sanctum sanctorum. A devotee who carries kavadi or patkudam aspires to view the milk being poured on Lord Murukan inside the shrine where he or she reaches after an arduous journey. In Melbourne, the Kavadi festival has not reached the magnanimity as in Singapore, Malaysia, Mauritius, Seychelles etc., but still remains an important festival in the Murukan calendar.
Pankuni Uttiram: Pankuni Uttiram in March-April is celebrated as the wedding of Murukan. His wedding and the concept of marriage are subjects of intense research. Teyvanai and Valli, consorts of Murukan, represent two types of love: Teyvanai symbolises karpu, or chastity (an orthodox form of marriage), while Valli characterises kalavu, or love outside of marriage. In the northern hemisphere, the climactic day of Pankuni Uttiram is said to mark winter's becoming summer and cold's turning hot; the opposite happens in the southern hemisphere. Notwithstanding this fact, Pankuni Uttiram remains a special day amongst Murukan worshipers in Melbourne.
This study leads to the conclusion that Hinduism in general and Murukan worship in particular are popular in Melbourne. Congregational centres were formed; devotees assembled there for weekly prayers and special occasions; the idea of constructing temples took root. Temples were built within a short span of two decades. Lord Murukan is enshrined in all the temples, a clear demonstration of the popularity of Murukan worship. The observance of the Murukan calendar and the congregations at temples on such special occasions further characterise the popularity of Murukan worship in Melbourne.
the capital of Poland. About 2 million citizens. 80% of the buildings were destroyed in the Warsaw Uprising in '44 and after that blown up by the Germans. After WW2 the city was rebuilt. The situation in the 40s and 50s was really hard in Poland; Warsaw was rebuilt without any help from the outside, mostly by its citizens. The most significant point in Warsaw is the Palace of Culture and Science, a gift from the USSR, built in the 50s by the Russians. It's very similar to the Moscow palaces - a huge socialist-realist slab of concrete. Because of the destruction during WW2 there is plenty of free space even in the centre of the city, where many parks are located.
by wolan October 10, 2004
Mastering focal length is the first step to understanding how a lens works.
The most important information to know when looking for a camera lens is the focal length. Focal length tells a photographer or videographer a lot about how the image is going to look. The shorter the focal length, the wider the angle of view and vice-versa. In the following article we will dissect (hopefully) everything you could ever want to know about focal length. If you have any questions or comments please share in the comments at the bottom of the page.
Lenses are often broken down into three categories: wide, telephoto, and standard. A wide angle lens is any lens that is 35mm or smaller. Lenses wider than 24mm can be called ultra-wide angle lenses, but most photographers just call them fisheye lenses. Due to size exaggeration, wide angle lenses are great for shooting landscapes, real estate, and architecture.
Telephoto lenses are any lens with a focal length of 85mm or higher. They are usually very long in length making them easy to identify. Telephoto lenses are generally used to shoot objects that are far away making them ideal for capturing weddings, wildlife, and events.
Telephoto lenses usually have more glass elements inside than wider lenses making them generally more expensive. In fact, you’ve probably seen an expensive telephoto lens at a sporting event. Telephoto lenses can be broken down into two further subtypes: medium telephoto (85-300mm) and super telephoto (300mm+). They usually create a very blurry background making them ideal for isolating your subject…we’ll dive into ‘depth of field’ in a future post.
Standard lenses are any lens between 35mm and 85mm. The most commonly used standard lens is the 50mm prime or “nifty-fifty”, as it’s affectionately referred to by many photo pros. Standard lenses usually have a much cheaper base price than their wide and telephoto counterparts.
These lenses are the Goldilocks of lenses, not too wide, not too telephoto, making them perfect for shooting portraits, medium shots, and general photography. Technically speaking, a lens is considered standard or "normal" if its focal length is close to the diagonal length of the camera sensor in millimeters.
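To put a rough number on that rule of thumb (assuming the nominal 36mm x 24mm full-frame format), the sensor diagonal works out to:

sqrt(36^2 + 24^2) = sqrt(1296 + 576) ≈ 43.3mm

which is why lenses in the roughly 40-58mm range, and the 50mm prime in particular, get treated as "normal" on full-frame cameras.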
Zoom vs. Primes
Lenses with focal lengths that can change are called zoom lenses and those that remain fixed are called prime lenses. When comparing equally priced prime and zoom lenses, prime lenses usually will produce a better image. This is because zoom lenses require many moving parts that hinder light’s ability to move through the lens. Professional photographers do use zoom lenses for their work (like the Canon 70-200mm), but it’s more typical for high-end productions to use prime lenses, as they let in more light. Lenses that come with a camera (kit lenses) are usually zoom lenses.
What is Focal Length?
Focal Length is not…
- The length of the lens.
- Half the length of the lens.
- The diameter of the lens.
Focal length is the measurement (in millimeters) from the optical center of a camera lens to the camera's sensor. The optical center is also known as the focal point. For all lenses (including primes) this distance changes depending on what the lens is focusing on. For example, a 50mm lens focused at infinity sits 50mm from the sensor, but when focusing on an object 1 meter away the lens needs to move about 2.6mm further away from the camera sensor to stay in focus. Thus what you thought was a 50mm image is actually more like a 52mm image.
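That 2.6mm figure comes from the thin-lens approximation (treating the whole lens as one ideal thin lens, which real multi-element lenses only roughly obey): 1/f = 1/d_o + 1/d_i, where f is the focal length, d_o the subject distance and d_i the lens-to-sensor distance. With f = 50mm and d_o = 1000mm, d_i = 1 / (1/50 - 1/1000) ≈ 52.6mm, or about 2.6mm farther from the sensor than at infinity focus.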
Not to be confused with a 35mm lens, most high-end cameras have a so-called 35mm sensor, roughly 36mm x 24mm, named after the 35mm film format. A 35mm sensor is "full-frame", meaning it uses the entire image circle of the lens when capturing an image. This 35mm standard was designed to be identical to film cameras which used 35mm film to capture images. So a 50mm lens on a 35mm film camera will act very similarly to a 50mm on a 35mm "full frame" sensor.
However, if you are using a camera that has a smaller sensor than 35mm you are going to experience crop factor. If you’re in the photography or video world than you are probably well aware of crop factor, but for those who aren’t already acquainted, crop factor is a phenomenon in which a lens will act more telephoto than it actually is. So for example, a 100mm lens on a camera with a crop factor of 1.6x will have a similar field of view as a 160mm lens on a full frame camera.
When you read online about cropped sensors you will run across pages and pages talking about how cropped sensors have a focal length multiplication factor. This means that if a camera sensor has a multiplication factor of 1.5x then a 50mm lens will actually have a focal length of 75mm. This is actually somewhat false. As we've found out above, the only way for an image to be in focus is for the camera sensor to be a very specific distance away from the lens. If the focal length actually changed from 50mm to 75mm you would have an image that was completely out of focus. Instead, a crop factor is actually decreasing the angle of view.
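To see the effect in numbers, the horizontal angle of view of a rectilinear lens is roughly 2 x arctan(sensor width / (2 x focal length)). Assuming a nominal 36mm-wide full-frame sensor and a 22.5mm-wide 1.6x crop sensor (both assumptions, since exact sensor sizes vary by manufacturer): a 50mm lens covers about 2 x arctan(36/100) ≈ 40 degrees on full frame, but only about 2 x arctan(22.5/100) ≈ 25 degrees on the cropped sensor - roughly what an 80mm lens would cover on full frame - even though the focal length itself never changed.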
The market offers camera adapters that increase the angle of view of 35mm lenses to reduce crop factor. These adapters are called focal length reducers (but we know they actually mean ‘angle of view increasers’). Cameras with a cropped sensor can make shooting wide shots very difficult so be sure to take that into consideration before purchasing a lens.
Remember how we talked about the focal point inside of the lens? This point is the place in which light is directed, but unfortunately light doesn't always bend perfectly. If you've ever shined a flashlight through a glass prism, or seen a Pink Floyd shirt for that matter, then you know that when bent, light will separate into different colors because different colors of light travel at different speeds through glass. This happens in camera lenses too, and most photographers consider it a bad thing. It's called chromatic aberration. For a digital camera chromatic aberration occurs when blue, green, and red light separate across 3 separate focal points. The result is the skewing of colors around the edges of objects within your picture.
Newer camera lenses have a lens element known as a ‘flint’ specifically designed to focus red, green, and blue light rays onto a single point…but older lenses typically do not. Good quality lenses are those which have minimal chromatic aberration.
Chromatic aberration is worse around the edges of an image frame, so when you are buying a new lens look around the edges in your image for color shifting or “purple fringing”. However, if you are intentionally trying to get a vintage look, try using an older lens with an adapter, you will find plenty of chromatic aberration!
The most important takeaway is that focal length directly determines the angle of view. There are many more technical things to learn about focal length but the topics discussed in this article are the most important for understanding how it plays into both photography and videography. If you've enjoyed this post and are interested in learning a little more about the science of focal length in photography check out the articles below.
Have any questions about focal length? Anything you would like to add? Share in the comments below.
Wingspan 12-16 mm.
Scattered locally throughout much of Britain, the adults of this species can be beaten from its foodplant, Scots pine (Pinus sylvestris) during the day.
The normal flight time for the adults is from dusk onwards, when it is attracted to light. They are on the wing in June and July.
The larvae feed in a silken gallery among the young shoots and male flowers. Maritime pine (P. pinaster) is also a foodplant.
6.111 Lab #2
Goal: implement simple circuits in Verilog; download and run a
sample circuit on the labkit.
Exercise 1: Writing Verilog code
In this exercise you'll design a Verilog module that implements a
74LS163. Here are the steps:
- Log into one of the workstations in the Digital Lab. Your
username is your Athena login name and your initial password is
- Download lab2_1.v by right-clicking on the link and selecting "Save As" (or "Save Link As"), and specify your home directory (U:) as the destination.
The steps below describe how to use our Verilog simulator Modelsim
as a standalone application. One can also run the simulator from the
Xilinx ISE toolkit -- see the labkit documentation Simulating
with Modelsim for details on how to do this. Feel free to use
either approach for this part of the lab.
- Start Modelsim (look in the Programs listing under the Start
menu). If it complains about a missing license file, go to Start
-> Programs -> Modelsim -> Licensing Wizard. Click the
Continue button and specify email@example.com as the location of
the licensing file. If it offers to add the appropriate environment
variable, let it! The wizard will complete, click OK when it's done.
Now restart Modelsim and everything should be happy.
- At the bottom of the Modelsim window there's a frame labeled
"Transcript" where you can type in commands and see various
messages from the simulator. Type in "cd u:" to change to
your home directory.
- Type "vlib work" to set up the simulator's working
- Type "vlog lab2_1.v" to compile the verilog file you
downloaded above. Output from the compiler is displayed in the Transcript window.
- Type "vsim test" to start simulating the test module
found in lab2_1.v.
- Type "run 2000ns" to run the simulation for 2000ns.
You should see the following printout in the Transcript window
# Starting test of LS163...
# clear was asserted low, but counter didn't clear
# out = xxxx, expected 0000
# Break at lab2_1.v line 48
These messages were generated by code inside the test module as it
runs through various tests of the LS163 module. The LS163 module
supplied in lab2_1.v is empty which is why the test failed. Your job is
to fill in the body for the LS163 module, implementing the correct
functionality. Refer to the 74LS163 datasheet to see what
functionality your code needs to implement.
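For orientation, here is a minimal sketch of the behavior the module needs. The port names, and the assumption that clear and load are synchronous and active low, come from the standard 74LS163 datasheet rather than from lab2_1.v, so keep the port list that lab2_1.v already declares instead of copying this verbatim:

module LS163(input clk, clr_b, ld_b, enp, ent, input [3:0] d,
             output reg [3:0] q, output rco);
  // Synchronous clear has priority over load, which has priority over counting
  always @(posedge clk) begin
    if (!clr_b)          q <= 4'b0000;   // synchronous, active-low clear
    else if (!ld_b)      q <= d;         // synchronous, active-low parallel load
    else if (enp && ent) q <= q + 1;     // count only when both enables are high
  end
  assign rco = ent && (q == 4'b1111);    // ripple carry out, for cascading counters
endmodule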
You can use the editor of your choice to edit lab2_1.v
appropriately; Modelsim has a simple built-in editor which should be
displaying lab2_1.v after you completed step 8. As you edit lab2_1.v,
repeat steps 6 through 8 above to test your code. When you're
successful you'll see
# Starting test of LS163...
# Finished test of LS163...
- When your code passes the tests, have a staff member check
you off. They may ask a simple question or two, but mostly they
just want to see it working under Modelsim.
- After checkoff, please upload your Verilog
file using the "Submit Verilog" page on the course website. We'll
review your code and post some comments to help you improve your
Verilog style. We'll be looking for proper use of comments and
formatting to make your code easy to understand.
Exercise 2: Compiling and running Verilog on the labkit
In this exercise you'll design a Verilog module that reads a 4-bit
value from labkit's switches and displays the appropriate hex digit
on a 7-segment display.
- To learn more about the Xilinx FPGA tools, please read the Getting Started section of the Labkit documentation. You'll follow the steps
outlined there whenever creating a new project for the labkit.
- Download labkit.v and
labkit.ucf by right-clicking on the links and
selecting "Save As" (or "Save Link As"), specify your home directory
(U:) as the destination. Note that the browser may save your downloads
with a ".txt" extension -- you'll have to rename the files to have
".v" and ".ucf" extensions in order for the Xilinx tools to recognize
The labkit module (defined in labkit.v)
has port declarations for all the labkit peripherals as well as
supplying default values for all the output ports. This is the top-level
module for all labkit projects -- you should make a copy of it using
a meaningful file name (eg, lab2_2.v) and modify the copy to implement
the circuitry for your project. labkit.ucf (which you'll never need to
modify) specifies which FPGA pin is connected to which named port in labkit.v.
- Start the Xilinx ISE tool and create a new project following
the steps outlined in the Getting Started document. The Xilinx tools
create a very large number of files, so to keep things neat and tidy I
recommend keeping your .v files separate from the project directory.
For example, if you keep your .v files in U:, specify U: as the
location for your project directories, and when you supply a name for
the project (eg, lab2_2) a directory of that name will be created in
U: and used to store all the Xilinx-created files. When you add
verilog source files to the project, just go up one directory level
to locate your source files.
- We'll be using the 7-segment display from your kit of parts
(this is the display you used in Exercise 6 of Lab 1), this time wired
to the FPGA via the labkit's breadboard (see photo below). Wire up
the display connecting its ground pins to the appropriate columns of
the breadboard, and the signal pins to the User 1 connector at the
top of the labkit (I used pins 0 through 7).
- Add Verilog code to the labkit module using four of the
labkit's slide switches to specify which hex digit to display on a
7-segment display. The switch port of the labkit module is
an 8-bit value reflecting the current settings of the labkit's slide
switches. Use switch[3:0] as the 4-bit hex digit to be displayed. A figure in the original handout shows the appropriate pattern of segments for each hex digit.
Compute the appropriate value for each of the segment control signals
and drive them onto the appropriate FPGA output pins (I used
user1[7:0]). Note that you'll have to modify or comment out the
existing line in the code that sets a default value for the output
pins you're using.
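One straightforward way to do the computation is a combinational case statement inside the labkit module, something like the sketch below. The segment bit ordering {g,f,e,d,c,b,a} and the assumption of an active-high (common-cathode) display are guesses -- check them against the segment figure in the handout and against how you actually wired the display before using these values:

reg [6:0] seg;                // segment drives, ordered {g,f,e,d,c,b,a}
always @(*) begin
  case (switch[3:0])
    4'h0: seg = 7'b0111111;   4'h1: seg = 7'b0000110;
    4'h2: seg = 7'b1011011;   4'h3: seg = 7'b1001111;
    4'h4: seg = 7'b1100110;   4'h5: seg = 7'b1101101;
    4'h6: seg = 7'b1111101;   4'h7: seg = 7'b0000111;
    4'h8: seg = 7'b1111111;   4'h9: seg = 7'b1101111;
    4'hA: seg = 7'b1110111;   4'hB: seg = 7'b1111100;
    4'hC: seg = 7'b0111001;   4'hD: seg = 7'b1011110;
    4'hE: seg = 7'b1111001;   4'hF: seg = 7'b1110001;
  endcase
end
assign user1[6:0] = seg;      // drives the breadboard pins wired to the display

All sixteen input values are covered, so the always block synthesizes to purely combinational logic with no latches.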
Synthesize and implement your design. Generate a programming
file and configure the FPGA. When your circuit is working, ask
a staff member to check you off. For checkoff be prepared to show
your circuit in operation, displaying different digits as the switches
are turned on and off.
- After checkoff, please upload your Verilog file using the
"Submit Verilog" page on the course website. We'll review your code
and post some comments to help you improve your Verilog style.
Oral or parenteral antibiotics are typically given to the hamster to control the bacterial infection. Your veterinarian may also provide fluids and electrolytes if the hamster is dehydrated.
Proliferative enteritis can be prevented to a great extent by maintaining good sanitary cage conditions. Dispose of used bedding material and routinely clean the cage using recommended disinfecting solutions. In addition, because of the contagious nature of the bacteria, separate hamsters that appear infected from those that are healthy.
Parenteral: the administration of something through a route other than the normal route, which is through the gastrointestinal tract.
Enteritis: a medical condition in which the small intestines are inflamed.
Long before grapes grew on trellises in Napa and Sonoma, long before vineyards flourished in Bordeaux and Bourgogne, a sophisticated wine industry arose along the banks of the Nile. From tombs, temples and palaces that date as far back as 5,000 years ago, archaeologists have uncovered clay amphorae stamped with seals that name not only the contents (irp, or wine) but also the region in which the grapes were grown, the year in which the wine was produced, the owner of the estate and often some indication of quality, such as "good" and "very, very good." And who is to say that wines like these cannot be made again someday, asks Patrick McGovern, a molecular archaeologist at the University of Pennsylvania Museum, including perhaps the mysterious elixir that supposedly drove Cleopatra mad.
Yet the ancient Egyptians were relative newcomers to the wine industry, says McGovern, whose new book, Ancient Wine: The Search for the Origins of Viniculture (Princeton University Press; 365 pages), traces the long prehistory of our most celebrated beverage. The earliest pharaohs imported wine from the southern Levant, and before the occupants of that region became winemakers, about 6,000 years ago, they no doubt imported wine from their neighbors. In such stepwise fashion, McGovern suggests, viniculture (a term he uses to encompass both the growing and the processing of grapes for wine) spread from its point of origin in the uplands of eastern Turkey or northwestern Iran, eventually crossing the Mediterranean to fill the goblets of the ancient Greeks.
Just how and when this happened is still a mystery, but no one is better qualified to sift through the widely scattered clues than McGovern, a skilled scientific sleuth who wields the most powerful tools of modern chemistry in his search for the roots of ancient wines. In 1996, for example, his lab created a stir by finding dried traces of wine in 7,500-year-old jugs that hailed from the Zagros Mountains of present-day Iran. A few years later his lab identified some of the key constituents in a funerary feast held in about 700 B.C. in honor, some think, of King Midas. The feast, as re-enacted at a gala hosted by the University of Pennsylvania Museum, included a modern re-creation of Phrygian grog, a concoction McGovern's lab determined was part wine, part beer and part mead.
Now McGovern is hoping to solve the biggest mystery of all, which is where and when the Eurasian grapevine--the species from which 99% of the world's wine is derived--was first taken under cultivation. For unlike the ancient ancestor of modern corn, which has been traced to a valley in southern Mexico, the wild Eurasian grapevine grows across a broad geographic range. It is therefore possible, though McGovern thinks unlikely, that it was domesticated by several cultures independently. What will eventually help resolve the question, McGovern says, are ancient snippets of DNA from wine residues and shriveled raisins that have been excavated from archaeological sites throughout the Middle East.
Prevention & Screening for Vascular Disease and Stroke
Early detection is key in preventing stroke & vascular disease.
At Rochester General, proper vascular screening is an essential part of preventive care, and especially important when it comes to vascular disease and stroke, which can often become serious before any symptoms are noticed. People over the age of 60 with one or more of the following risk factors, and everyone over the age of 50 who has a family history of abdominal aortic aneurysm, should be screened for vascular disease and stroke prevention purposes. Risk factors include:
- Coronary (heart) artery disease
- Family history of vascular disease
- High blood pressure
- High cholesterol
- Peripheral (non-cardiac) vascular disease
- Prior stroke or “mini” stroke
What is Vascular Disease?
Vascular disease is the general term for a variety of problems affecting the blood flow throughout the body's blood vessels. The causes of arterial diseases or problems are wide-ranging, but most vascular diseases result from atherosclerosis (hardening of the arteries). Other causes of arterial problems include congenital malformation, trauma and diseases of the arteries.
People with vascular disease have an increased risk of potentially disabling or fatal conditions, including heart attack; stroke due to blocked carotid arteries, which carry blood to the brain; aneurysm of the aorta, the body's main artery; and impaired circulation in the arms and legs, which can lead to severe disability.
What is a Stroke?
A stroke, sometimes called a “brain attack,” is caused by an interruption in the blood supply to any part of the brain. There are two major types of stroke: ischemic and hemorrhagic.
An ischemic stroke occurs when a blood vessel supplying blood to the brain is blocked by a blood clot (either a thrombus or an embolism). A hemorrhagic stroke happens when small blood vessels in the brain become weak and burst. The resulting blood flow from the ruptured blood vessel damages the brain cells. Some people have defects in the blood vessels of the brain, making a hemorrhagic stroke more likely.
Learn More About Stroke Risk Factors
At Rochester General, stroke prevention is extremely important to us. Adopting the following lifestyle changes may help prevent a stroke:
- Eat a healthy, low-fat diet and avoid fatty foods
- Do not drink more than 1 to 2 alcoholic drinks per day
- Exercise regularly - 30 minutes a day if you are not overweight and 60–90 minutes a day if you are overweight
- Quit smoking
- Get your blood pressure checked every 1–2 years—especially if high blood pressure runs in your family—or more often if you have high blood pressure, heart disease or have had a stroke
- Have your cholesterol checked every 5 years (for adults) or more often if you are being treated for high cholesterol
Follow your doctor's treatment and stroke prevention recommendations if you have high blood pressure, diabetes, high cholesterol or heart disease
Vascular Surgery Associates offers preventative screenings for the early detection of vascular disease, stroke and other vascular diseases, which can significantly reduce the risk of related long-term health problems, death or disability.
If you would like to arrange a non-urgent screening appointment with one of our vascular surgeons, please contact us at any time.
To learn more about vascular disease and the treatments we provide, visit our Diagnosis & Treatment page.
Kids at Risk
Contaminants dangerous to children may be falling through the cracks of pollution-control and public health regulations.
Mothers and fathers worry about their children's health and safety. That's just human nature. But public officials around the country are learning they had better pay attention to parents' gnawing fears that contaminants that pollute the environment are poisoning the youngest, most fragile among us.
That's what happened earlier this year in Washington, D.C. Last winter, it was disclosed that testing conducted four years ago found unacceptably high levels of lead in the drinking water that is piped into thousands of homes. The U.S. Environmental Protection Agency responded by declaring the city's water utility in violation of federal Safe Drinking Water Act rules that call for municipal systems to replace old pipes that let toxic metal seep into water flowing from drinking fountains and kitchen taps. This summer, the D.C. system began pumping phosphoric acid through lines to coat the inside of the pipes with a chemical that prevents lead from leaching into household water.
That's just a stopgap solution for a problem that the D.C. utility as well as other water supply agencies should have taken care of years ago. It's been clear for decades that lead poisoning poses a severe threat to children, whether it comes from drinking water or from lead- based paint flaking off the walls. What's just as worrisome is that evidence of equally damaging forms of contamination may be falling through the cracks in how the nation's pollution-control and public health programs are structured, leaving perils to children's health undetected and festering.
It's a crucial issue since children are particularly vulnerable to pollutants. Their growing bodies require them to take in proportionately more food, air and water than adults, but kids are less able to detoxify and excrete contaminated matter. The likely, if still unproven, consequence is that children who live in industrial communities may be more at risk of birth defects, cancer, asthma, learning disabilities, and quite possibly other disorders as well.
Starting in the mid-1990s, the U.S. Environmental Protection Agency began taking the threats to children's health into account in setting pollution-control targets. In addition, four years ago, Congress ordered EPA, the Centers for Disease Control and Prevention and other federal health agencies to conduct a $3 billion study to follow 100,000 children from birth to age 21 and examine what influence environmental contaminants have on their health. Eventually, the findings could profoundly change national environmental protection strategies. But state and local governments shouldn't wait until 2027 to refocus pollution control programs on public health problems that threaten the present generation of youngsters.
The way governments are now organized, tracking down causes has been complicated by narrowly focused government agency missions. In most states, public health agencies originally were charged with pollution control; but legislators in the 1970s created separate agencies that assumed authority to regulate air, water and waste releases to the environment. Maybe that arrangement makes managing the agencies easier, but nobody's been in charge of checking emission reports against emerging public health trends. Just as with national intelligence failures, the stovepipe structure may keep governments from spotting some telling clues to what's causing some increasingly common childhood afflictions.
We already know that nearly one out of every 13 American children suffers from asthma. Polluted air is likely one culprit, particularly in poor urban neighborhoods. Two years ago, national organizations that represent state health directors and their environmental policy counterparts targeted childhood asthma in recommending strategies to break down the stovepipes so their agencies can work together on limiting health-threatening exposures to contaminants. The Environmental Council of the States, which represents state pollution- control commissioners, and the Association of State and Territorial Health Officials followed up by commissioning pilot collaborative studies by health and environmental agencies in five states. California is working on providing air-quality alerts to school principals, and Oregon encourages drivers to turn off engines when waiting to pick up students after school. Wyoming meanwhile has collected data from air-quality monitors mounted on schoolhouse roofs in four communities to compare how many students school nurses have been seeing for asthma attacks.
Those are encouraging steps toward the kind of imaginative, results- driven governing that will be needed to find and then solve lingering environmental problems. Parents can't reasonably ask governments to allay all their fears about what threatens their children's health and safety. At the least, however, they should expect their governments to take vigilant steps to make the connection between pollutants and childhood afflictions.
Dr. Seuss’s Naked Dames and Horses,Too!
March 28, 2012 // Filed under Uncategorized
During the final week of the library’s Dr. Seuss display, I thought it would be interesting to “flesh out” our appreciation of this author a bit. We’ve learned that Dr. Seuss wrote and illustrated many books for children, but he also produced cartoons for adults which have been collected in Dr. Seuss Goes to War and The Tough Coughs as He Ploughs the Dough. Additionally, Theodor Seuss Geisel authored the humorous storybook You’re Only Old Once! which was aimed at the aging adult audience.
Perhaps his most unexpected adult book is the strange story of The Seven Lady Godivas, one of his earliest publications. In this book Dr. Seuss concocted a highly imaginative legend to explain the origins of seven well-known proverbs. These are the proverbs having to do with horses and what he called horse truths, an example of which would be “Never change horses in the middle of the stream.”
So here’s where it gets strange: Dr. Seuss utilized the legend of Lady Godiva and her horse, but converted this individual into a family of seven Godiva sisters who must uncover these horse truths/proverbs. As he stated tongue in cheek in his foreword, “History has treated no name so shabbily as it has the name Godiva. Today Lady Godiva brings to mind a shameful picture-a big blond nude trotting around town on a horse…..There was not one; there were Seven Lady Godivas, and their nakedness actually was not a thing of shame.” And so the seven stalwart sisters fulfill their Seussian destiny to discover horse sense (and true love, too!) all while unflinchingly wearing their birthday suits. Dr. Seuss must have had fun coming up with this one. ~ Evelyn Fischel ~
Lewy Body Disease
Lewy body disease is a type of dementia. Dementia is the progressive loss of memory and various other mental functions, including the ability to learn, reason, and judge.
Illustration copyright © Nucleus Medical Media, Inc.
Lewy body disease is associated with the buildup of Lewy bodies in regions of the brain. These are abnormal protein deposits inside cells that play a role in certain aspects of memory, visual processing, and motor control. It is not clear exactly what causes the buildup of Lewy bodies in the brain.
Lewy body disease is more common in men, and in people over 50 years old. It is also more common in people with a family history of Lewy body disease, Parkinson's disease, or other dementias.
The disease is linked to:
Lewy body disease is characterized by:
- Fluctuations in alertness and attention—frequent drowsiness, lethargy, staring into space, disorganized speech, and insomnia
- Recurrent visual hallucinations
- Poor regulation of body temperature and blood pressure
- Obsessive compulsive behaviors
- Parkinsonian motor symptoms, such as rigidity or loss of spontaneous movement
- REM sleep behavior disorder
You will be asked about your symptoms and medical history. A physical exam will be done. A doctor can do tests to narrow the cause of dementia. Other tests may include:
- Memory, language, and other cognitive tests
- Neuropsychological tests
- Patient and family interviews
- Imaging tests take pictures of internal bodily structures. This can be done with:
- Blood tests
The only way to confirm Lewy body disease is through an autopsy after death.
While there is no cure for Lewy body disease, there are treatments that can control the symptoms. Talk with your doctor about the best treatment plan for you. Treatment options include:
These medications may be used to help with the symptoms:
- Cholinesterase inhibitors
- Glutamate blockers
If you have Lewy body disease, you may be sensitive to medications called neuroleptics. You may have adverse events with these medications.
You may benefit from:
There are no current guidelines to prevent Lewy body disease.
- Rimas Lukas, MD
- Reviewed: 08/2015
- Updated: 09/03/2014
The boundary (pink line) between the Australian and Pacific plates is marked by the Kermadec Trench and the Alpine Fault. All of the North Island, the northern South Island, and a thin strip along the west coast of the South Island are on the Australian Plate, west of the boundary. The rest of the South Island, to the east of the boundary, is on the Pacific Plate. The yellow and pale brown areas are continental crust.
About this item
Source: GNS Science | <urn:uuid:deaac4d3-545e-42cb-aef8-72b4f9365969> | CC-MAIN-2016-26 | http://www.teara.govt.nz/en/map/4398/plate-boundary-through-new-zealand | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783408840.13/warc/CC-MAIN-20160624155008-00126-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.936324 | 157 | 3.59375 | 4 |
Welcome to the hairy world of programming for complex scripts.
>But, once the ligature is formed, it becomes a problem for
screen editing. What the users think they do when they edit an
electronic document is to insert, delete, substitute, move or
mark *characters*. What they actually do symbolic actions on
*glyphs*, that are the visual representation of characters, and
this causes the software to actually change the characters in
No, creating a ligature does not (or should not) cause any
problems for screen editing. Software *renders* text as glyphs,
but it should always allow the user to interact with the text
in terms of characters. What this means is that any decent app
that displays a fi ligature should allow the caret to be drawn
in the middle of that glyph if, say, the caret is immediately
to the left of that glyph and the user hits the right arrow
key. When the caret is shown to the left, right or within the
ligature glyph, the cursor into the text in memory should be
pointing to the positions before the character f, between the
characters f and i, and after the character i, respectively.
Any editing operations the user performs will affect the
characters in memory, and the screen will be redrawn to
indicate the new state of affairs.
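To make the caret behaviour described here concrete, below is a minimal sketch in Python of one way a renderer can place the caret inside a ligature glyph. Real layout engines prefer caret positions supplied by the font when available and fall back to interpolation; everything here (the Glyph record, the field names, and the numbers) is an illustrative assumption, not any particular toolkit's API.

```python
# Minimal sketch: placing the text caret inside a ligature glyph.
# A glyph produced by shaping remembers which characters it covers.

from dataclasses import dataclass

@dataclass
class Glyph:
    name: str          # e.g. "fi_ligature"
    char_start: int    # index of the first character this glyph covers
    char_count: int    # how many characters it covers (2 for "fi")
    x: float           # left edge of the glyph, in arbitrary units
    advance: float     # advance width of the glyph

def caret_x(glyphs, char_index):
    """Return the x position of a caret placed before `char_index`."""
    for g in glyphs:
        if g.char_start <= char_index <= g.char_start + g.char_count:
            # Fallback: interpolate evenly across the glyph.  A real
            # renderer would prefer caret positions supplied by the font.
            fraction = (char_index - g.char_start) / g.char_count
            return g.x + fraction * g.advance
    raise ValueError("character index outside the laid-out text")

# "fil": shaping turned "f" + "i" into one ligature glyph covering chars 0-1.
line = [
    Glyph("fi_ligature", char_start=0, char_count=2, x=0.0, advance=12.0),
    Glyph("l",           char_start=2, char_count=1, x=12.0, advance=5.0),
]

print(caret_x(line, 0))   # 0.0  -> before "f"
print(caret_x(line, 1))   # 6.0  -> between "f" and "i", inside the ligature
print(caret_x(line, 2))   # 12.0 -> after "i", before "l"
```

Hit-testing a mouse click works the same way in reverse: find the glyph under the click, then invert the interpolation to recover a character index.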
>Everybody already noticed that, if the "m" in the above
examples is substituted with an "f", we are going to have
troubles. In a system that displays "f" + "i" as a ligature, I
cannot move the caret in the proper place in "fil" to add my
"a". I can certainly delete the "a" in "faile" but, after I do
this, my caret remains in an embarrassing location: "somewhere
*inside* a ligature".
If software is implemented properly, there is no reason why we
shouldn't be able to deal with these issues. It should be
possible to position the caret within the ligature so that you
can add an "a" between "f" and "i"; deleting the "a" in "fai"
will leave the caret within the ligature, and that is exactly
the desired behaviour. Any software that doesn't do this is broken.
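The editing side of the same idea can be sketched just as briefly: the character buffer is the single source of truth, every edit touches characters, and the glyph list is recomputed afterwards. The toy shaper below knows only the "fi" ligature; the tuple layout and the function names are invented for illustration and do not correspond to any real shaping engine.

```python
# Toy model: edits change the character buffer; glyphs are re-derived from it.

def shape(text):
    """Very small 'shaping' pass that forms only the "fi" ligature."""
    glyphs = []
    i = 0
    while i < len(text):
        if text[i:i + 2] == "fi":
            glyphs.append(("fi_ligature", i, 2))   # (glyph name, first char, char count)
            i += 2
        else:
            glyphs.append((text[i], i, 1))
            i += 1
    return glyphs

buffer = list("fail")
caret = 2                       # logically between "a" and "i"
print(shape("".join(buffer)))   # no ligature yet: f, a, i, l as separate glyphs

# User presses Backspace: delete the character *before* the caret.
del buffer[caret - 1]
caret -= 1                      # caret is now between "f" and "i"
print("".join(buffer))          # "fil"
print(shape("".join(buffer)))   # [('fi_ligature', 0, 2), ('l', 2, 1)] -- the caret
                                # now sits inside the ligature, which is fine
```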
>What can programmers do about this? Some approaches:
>#1 Avoid ligatures. - This is not acceptable in a WYSIWIG
environment and, for certain scripts, this is not acceptable
even in the humblest text-only interface.
>#2 Split ligatures when the caret passes over them. - This is
the same as #1 above, only less frequent.
>#3 Once a ligature is formed, treat it as if it was a single
unit. - Most people, although perfectly literate, never noticed
that "fl" looks slightly different from, say, "fb" or "fh". Do
you want them to notice it just to decide they don't like it?
>#4 Pretend that the "ffl" glyph represents the first "f" only;
the second "f" and the "l" would then be zero-width things
following the visible glyph. - This is the same as #3 above,
but even more puzzling.
None of these are acceptable, though #2 might be tolerable.
>But if our font represents an "fi" ligature as two ad-hoc
artificial glyphs (plus an ad hoc kerning pair, plus an ad hoc
contextual shaping rule), we
obtain a double score:
- The display looks pretty, just like a printed book;
- The user's perception that characters = visible glyphs =
keyboard strokes may be supported, for the sake of usability.
If our software is done right, we just shouldn't need to resort
to these kinds of kludges.
>Finally, the virama in the consonant clusters of many Indic
scripts is *really* invisible and there is no way we can
visualize it *and* claim we are WYSIWYG.
Now you're getting into some interesting UI challenges for
which I don't think standard solutions have been developed. The
invisible character is really there in the text, and so a user
ought to be able to manipulate it. But how can they do so if
they can't even see it? This is also true for things like ZWSP,
>For the "impossible" cases like the invisible viramas, I would
step back to #3 above, trying to enforce the user's perception
that virama is *not* a character by itself.
There's another possibility. Consider this: the distinction
between SPACE and NBSP isn't visible, but most word processors
provide a "display non-printing characters" option where some
visual cue is provided. This could be utilised for things
like virama, ZWSP, ZWJ, etc: when the option is enabled, then
these things appear, preferably using some representation that
identifies them unmistakeably, such as "ZWJ" inside a dotted box.
There are other possibilities:
- change the shape of the caret (think of split cursors)
- draw some kind of symbol, possibly coloured, above, below or
on top of the glyphs surrounding the position of the invisible
character; e.g. draw two small arrows pointed toward each other
in red below the line to indicate ZWJ, and draw the arrows
pointing away from each other to indicate ZWNJ
The field is open for anybody and everybody to think of the
best mechanisms to provide visual feedback to deal with these
things. There will be lots of crazy ideas that will get
rejected, but someone will come up with something clever, and
if marketers and lawyers don't get in the way, we'll all be
able to use the best ideas until, eventually, these things
become as standard as double clicking.
4) Improvements in Printing and the Emergence of Popular Culture
The proliferation of the mechanized printing press and nineteenth-century improvements in lithography, photoengraving, and other printing processes coincided with the period depicted in Panoramic Maps, 1847-1929. Such strides made it possible to produce multiple inexpensive copies of these maps. These processes also made possible the inexpensive production of song sheets, advertising flyers, magazines, and colorful baseball cards. These materials became far more accessible to the average American during the second half of the nineteenth century, and a "popular culture" began to emerge from coast to coast. That is, aspects of culture came to be shared, to greater or lesser degrees, across lines of region, race, religion, politics, and class.
See the collections, Music for the Nation, 1870-1885, and Nineteenth-Century Song Sheets to learn more about popular music of the era. See Baseball Cards to learn more about another item of popular culture that owed a debt to improved printing techniques.
- Through print, the United States was beginning to refine its self-definition locally, regionally, and nationally. How did the print medium contribute to America's definition of itself as a nation?
- How did the print medium contribute to defining regional and local identities?
- How did the panoramic maps, specifically, contribute to these definitions?
- What are the similarities and differences between the panoramic maps and the other print materials of the time? Consider the audience, subject matter, funding, distribution, and use of these materials. | <urn:uuid:6b0fae1e-80ea-4119-b5c9-616a993d956e> | CC-MAIN-2016-26 | http://www.loc.gov/teachers/classroommaterials/connections/panoramic-maps/history4.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399385.17/warc/CC-MAIN-20160624154959-00153-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.932221 | 323 | 3.5625 | 4 |
Definition of Laughing gas
Laughing gas: Nitrous oxide, a gas that can cause general anesthesia. Nitrous oxide is sometimes given in combination with other anesthetic agents, but it is never used today as the only anesthetic agent because the concentration of nitrous oxide needed to produce anesthesia is close to the concentration that seriously lowers the blood oxygen level and creates a hazardous state of hypoxia.
Nitrous oxide figured in the history of anesthesiology. In 1840 a dentist named Horace Wells had the idea that, with the recently discovered "exhilarating or laughing gas," teeth might be extracted without pain. Under its influence he had one of his own teeth pulled in 1844 and afterwards frequently used nitrous oxide in his practice. At the Massachusetts General Hospital, Wells gave a demonstration with a patient. Things did not go too well. The patient suffered great pain. Wells became depressed, addicted (to chloroform, another anesthetic agent), and in 1848 committed suicide.

Source: MedTerms™ Medical Dictionary
Last Editorial Review: 5/13/2016
Is there anything that will make people stop speeding for good? Transport for London has come up with a new technology that they hope will do the trick—both to stop speeding and reduce accidents. It’s called Intelligent Speed Adaptation, and it’s a program installed in cars as a kind of auto big brother: If the car is going too fast, the computer will automatically reduce the speed.
This summer, a total of twenty vehicles, including cars, buses, and cabs, will be testing the new software. Each vehicle will receive a monitor that displays a digital map of the city, corresponding speed limits, and a GPS system, so the program can calculate how fast the car should be going based on its real-time location.
Don’t freak out about a loss of driving freedom just yet — the program has several settings that let the driver control how much leeway it has. In “advisory mode,” the driver, once alerted that he or she is speeding, can voluntarily slow down. The monitor will display an emoticon to express how the driver is doing: A happy face shows up when the car is below the speed limit, and a frown appears if the driver is still going too fast.
Meanwhile in “voluntary mode,” the car automatically slows down when the program spots above-speed-limit speeds. And in case of an emergency, the driver can hit a button to override the program.
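Based only on the behaviour described in this article, the decision logic might be sketched as follows. The segment names, the speed-limit table, and the function names are invented for illustration; the real Transport for London trial presumably couples a GPS fix and a licensed digital speed-limit map to the vehicle's engine controls.

```python
# Illustrative sketch of the two Intelligent Speed Adaptation modes.

SPEED_LIMITS_MPH = {          # stand-in for the digital speed-limit map
    "A40-segment-12": 30,
    "A40-segment-13": 40,
}

def advisory_mode(current_speed, segment):
    """Driver keeps full control; only the on-screen face changes."""
    limit = SPEED_LIMITS_MPH[segment]
    return ":)" if current_speed <= limit else ":("

def voluntary_mode(current_speed, segment, override=False):
    """Car holds itself to the mapped limit unless the driver overrides."""
    limit = SPEED_LIMITS_MPH[segment]
    if override or current_speed <= limit:
        return current_speed
    return limit                # ask the engine controller to cap the speed

print(advisory_mode(36, "A40-segment-12"))          # ":(" - over the limit
print(voluntary_mode(36, "A40-segment-12"))         # 30   - capped at the limit
print(voluntary_mode(36, "A40-segment-12", True))   # 36   - emergency override
```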
In theory, the concept seems to make sense, but can it really ever replace the reliable radar system already in place? And—understandably—irate drivers are already complaining that it will take away their freedom on the road.
Image: flickr/ pfn.photo | <urn:uuid:c8b5a8d1-3776-4b36-b81e-0c845ee9581e> | CC-MAIN-2016-26 | http://blogs.discovermagazine.com/discoblog/2009/05/18/cant-stop-speeding-a-computer-in-your-car-may-do-it-for-you/ | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783400572.45/warc/CC-MAIN-20160624155000-00099-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.948042 | 356 | 2.984375 | 3 |
The mud has a strong, pungent, sulphurous smell, resembling that of mineral oil, and is hotter than the surrounding atmosphere. During the rainy season the explosions increase in violence.
There are submarine mud volcanoes as well as those of igneous kind. In 1814 one of this character broke out in the Sea of Azof, beginning with flame and black smoke, accompanied by earth and stones, which were flung to a great height. Ten of these explosions occurred, and, after a period of rest, others were heard during the night. The next morning there was visible above the water an island of mud some ten feet high. A very similar occurrence took place in 1827, near Baku, in the Caspian sea. This began with a flaming display and the ejection of great fragments of rock. An eruption of mud succeeded. A set of small volcanoes discovered by Humboldt in Turbaco, in South America, confined their emissions almost wholly to gases, chiefly nitrogen.
There is a close connection in character between mud volcanoes and those intermittent boiling springs named geysers. A good many of the mud volcanoes throw out jets of boiling water along with the mud; but in the case of the geysers, the boiling water is ejected alone, without any visible impregnation, though some mineral in solution, as silica, carbonate of lime, or sulphur, is usually present.
THE GEYSER IS A WATER VOLCANO
The phenomenon of the geyser serves in a measure to support the theory that steam is an important agent in volcanic action. A geyser, in fact, may be designated as a water volcano, since it throws up water only. It comprises a cone or mound, usually only a few feet high. In the middle of this is a crater-like opening with a passage leading down into the earth. As in the case of the volcano, the geyser cone is built up by its own action. In the boiling water which is ejected there is dissolved a certain amount of silica. As the water falls and cools this mineral is deposited, gradually building up a cup-like elevation. The basin of the geyser is generally full of clear water, with a little steam rising from its surface; but at intervals an eruption takes place, sometimes at regular periods, but more often at irregular intervals. | <urn:uuid:f878fd26-94b9-4f84-a078-cdcb4c60fd16> | CC-MAIN-2016-26 | http://www.bookrags.com/ebooks/1560/233.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399425.79/warc/CC-MAIN-20160624154959-00048-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.971767 | 490 | 3.453125 | 3 |
In 1969, less than six months before the first human beings landed on the Moon as a part of the Apollo Program, officials at the National Aeronautics and Space Administration (NASA) began to plan the symbolic activities that would mark the historic event. In addition to coordinating the planting of a United States flag and attaching a commemorative plaque to the leg of the lunar lander, NASA officials also solicited "messages of good will" from nations around the world. One hundred sixteen requests went out to world leaders for "a document suitable for microfilming." Eighty-one nations replied. With the results engraved in miniature on a silicon disc and packed inside of an aluminum compact, those good wishes were deposited on the lunar surface without much ceremony--almost as an afterthought on a mission packed with scheduled objectives and symbolic acts--just before astronauts Neil Armstrong and Edwin "Buzz" Aldrin finished their moonwalk. Back on Earth, 16 replica discs were distributed to dignitaries, with one being given to the Smithsonian Institution.
Although the typewritten English translations of the messages included on the Apollo 11 silicon disc have been public in NASA records for almost 40 years, Tahir Rahman's book tells for the first time the full story of this unique object--and in doing so, offers a fresh look at the symbolic importance of the first Moon landing. Rahman, a board-certified psychiatrist by training and a space buff by avocation, uncovered and collected the documentary record of this commemorative act, including locating the original messages in the Library of Congress. Illustrated with gorgeous color photographs, We Came in Peace retells the story of Apollo 11 and reveals the process behind the creation of the silicon disc (including the patent filed for the specific procedure used to emboss the miniaturized messages on the disc). What this book does best, however, is display the original messages in context.
More than half of the volume is dedicated to showcasing the goodwill messages. The glossy, elegant layout allows the reader to consider fully each note and its source. On each two-page spread, a map of the world showing the country in question forms the backdrop for each message presented in its original language--and in some cases, its original calligraphy. On the facing page, translations reveal the sentiments composed by world leaders to be left on the Moon. (The only error in the maps is the confusion of Chiang Kai-shek's Taiwan-based Republic of China with Mao's mainland People's Republic of China.)
Intended as a time capsule that would last for thousands of years on the Moon, the contents of the silicon disc have become a time capsule of another sort. The geopolitical world recorded in these messages is a relic of 40 years ago. In 1969, Indira Gandhi led India and Haile Selassie was Emperor of Ethiopia. Although the Soviet Union did not respond to the United States' invitation to participate, Josip Tito of Yugoslavia did. Forty years later, the Cold War is over, regimes have changed, and some Eastern European boundaries have been redrawn. Reading the good will messages and noticing who wrote them permits a unique glimpse of a historical moment.
It also reminds one that, in 1969, the events of the early 20th Century remained in living memory for many. For instance, the message from Ireland came from Eamon de Valera, a man who helped to lead the Easter Uprising in Ireland in 1916 and who during Apollo 11 was serving his second term as President of Ireland.
The messages themselves also make interesting reading.
Although some are short declarations, many are poetic. In retrospect, the wishes for good will, brotherhood, and peace delivered by despots and dictators carry an extra layer of irony. Despite the earnest tone of most of the blessings, some messages include lighter notes, such as the wish of President Felix Houphouet-Boigny, that "I hope also that he [the moonwalker] would tell the Moon how beautiful it is when it illuminates the nights of the Ivory Coast" (157).
Beautifully illustrated and elegantly laid out, this small coffee-table book would be a welcome addition to the collection of any space enthusiast. As the 40th anniversary of Apollo 11 approaches in 2009, there will be many volumes on the first lunar landing vying for one's attention. Consider this one. By focusing on the Apollo 11 silicon disc, Tahir Rahman offers a new perspective on the event's resonance, both as it was planned by those creating the commemorations and as viewed 40 years later.
Margaret A. Weitekamp, Curator
National Air and Space Museum
Mass is "the amount of stuff" in an object. That is, the amount of atoms and molecules present. Mass in science, is measured in the metric system using units such as kilogram (about 2.2 pounds)nwhich are a thousand grams, grams, centigrams (hundredths of grams), and milligrams (thousandths of grams). Mass is different from weight, Weight is a force. It depends on gravity pulling us toward the center of our planet (or another planet if we ever get there or the moon). If we weighed ourselves on our bathroom scale and noted our weight and then took our scale with us to the moon, we would find that our weight was only one sixth as much there because the force of gravity is much smaller. However, because when we find mass, we use a procedure that compares our object with known mass, our mass would not change regardless of the amount of pull of gravity. This explains why astronauts did not change their size when they were on the moon even though their weight decreased considerably. Well, OK, that's not exactly true. Their size changed just a little. They were temporarily a tiny bit taller because gravity was not pulling them down, compressing their spines a little bit. But that has nothing to do with the definition of mass.
Mass is a property of matter, and its units are usually measured in grams (g) or kilograms (kg) worldwide, with ounces (oz) and pounds (lbs) additionally used in the US. On the small scale, atomic mass, or the mass of subatomic particles (protons, neutrons, electrons) and whole atoms, is measured relative to an atom of carbon-12, which is set to have a mass of 12.0000. Mass should not be confused with weight, which is a measurement of how heavy matter is when subject to the force of gravity, so:

weight = (mass) * (acceleration due to gravity), or w = mg.
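A quick worked example of that formula, taking 9.81 m/s² for Earth's surface gravity, roughly 1.62 m/s² for the Moon's, and an arbitrary 70 kg mass:

```python
# Same mass, different weight: w = m * g
mass_kg = 70.0            # the object's mass does not change

g_earth = 9.81            # m/s^2 at the Earth's surface
g_moon = 1.62             # m/s^2, roughly one sixth of Earth's

print(mass_kg * g_earth)  # about 687 N on Earth
print(mass_kg * g_moon)   # about 113 N on the Moon
```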
Units of mass and conversions are as follows:
1 milligram = 0.001 gram
1 centigram = 0.01 gram
1 decigram = 0.1 gram
1 kilogram = 1000 grams
1 slug = 14.5939 kilogram
1 kilogram = 0.068521779 slugs
Pounds and ounces are not units of mass but units of weight. Weight is different from mass because a mass only has weight when it is subjected to a gravitational force or acceleration. The weight of an object at sea level is different when the object is placed at a higher elevation, or when the object is placed on the Moon, where gravity is only about one sixth of Earth's.
A unit of mass is a unit in which the amount of matter contained in a substance is measured.
The SI unit of mass is Kilogram(kg).
Other common units are tonne, pound, quintal, etc...
The property a body has of resisting any change in its state of rest or uniform motion is called inertia. The inertia of a body is related to what we think of as the amount of matter it contains. A quantitative measure of inertia is mass: The more mass a body has, the less its acceleration when a given net force acts on it. The SI unit of mass is the kilogram (kg).
source: Applied Physics, 2012.
Mass = Mass is the amount of material in an object.
Weight = Weight is the gravitational force acting on a body's mass.
The SI unit of mass is ( kg )
See you next time
A unit of mass is a measurement of mass. It is actually a measurement of how much matter there is. The closest concept is weight. Let's understand the difference between weight and mass. A 50 lb. dumbbell is twice the mass of a 25 lb. dumbbell. If these dumbbells are on the surface of the moon, they will weigh less because of the smaller gravitational pull. The change of weight might be seen by being able to lift the weights with two fingers as though they were styrofoam. So weight is variable in relation to gravitational pull. However, the "amount of matter" or "mass" did not change when it was on the moon.
One way of measuring mass is used in particle physics. The "atomic weight" (despite its name) is one scale used to measure the mass of different elements. Hydrogen has an atomic weight of 1. It contains one proton. Helium has an atomic weight of 4, and its nucleus has 2 protons + 2 neutrons.
Mass is the amount of matter in an object and the unit of mass is kilogram (kg).
An example of a unit of mass would be gram, but it can also be kilogram. There can be many examples this is just one of them.
The SI unit of mass is the kilogram. Mass is measured using balances or scales. Pounds and ounces are units of weight, not units of mass.
Many may find it difficult or confusing to differentiate mass from weight. Mass is how much matter an object contains while weight is the force of gravity acting upon that object. Therefore, the SI unit of weight is N (Newton) while that of mass is kg (kilogram).
Mass is the measure of matter. It's often confused with weight, which is just a gravitational force. The SI unit for mass is the kilogram. The SI unit for force is the newton.
One way to think of it as that a person's mass will stay the same regardless of the gravity of a planet. A person on Earth will have the same mass as a person on the Moon.
But the weight, the gravitational force, will be different.
A Unit of mass can be a gram or kilogram.
The SI unit of mass is the kilogram (kg). As mass is difficult to measure directly, usually balances or scales are used to measure the weight of an object, and the weight is used to calculate the object's mass.
The actual SI unit is kilogram.
However, the conversion units are as follows (for example, if the problem wants you to convert to grams, milligrams, etc.):
1 milligram = 0.001 gram
1 centigram = 0.01 gram
1 decigram = 0.1 gram
1 decagram = 10 grams
1 kilogram = 1000 grams
There are two types of units SI and Imperial.
The SI unit for mass is the kilogram (kg).
I see many answers that include grams and milligrams but these are just units of measure, while the kilogram (kg) is the actual SI unit.
gram, kilogram, decigram, etc
Mass is a property of matter and is usually measured in grams, and the SI unit of mass is the kilogram (kg).
Mass is the amount of matter in an object. The SI base unit is the kilogram (kg). Mass is NOT weight. Weight is correctly measured in newtons, and it is a measure of gravitational force.
The SI unit of mass is the kilogram, but there are other derived units like the tonne (metric ton), etc.
The metric unit of mass is 1 kilogram, as represented by the International Prototype Kilogram stored in Paris.
The gravitational pull that the Earth exerts on a 1-kilogram mass is 1 kg * 9.81 m/s² = 9.81 newtons (roughly). Note that the Earth's gravitational pull varies slightly depending on where you are on the Earth.

An earlier answer suggested that the Moon has no gravity. Not true. The gravitational pull of the Moon is roughly 1/6 of Earth's gravity. The standard kilogram taken to the Moon would therefore weigh about as much as 0.166 kilograms weighs on Earth.
grams, milligrams, micrograms, all in the metric system.
There are many units of mass. In SI, the unit of mass is the kilogram (kg); in CGS, the gram (g); in FPS, the pound (lb).
The unit of mass is the kg (kilogram) in the SI system and the g (gram) in the CGS system.
The unit of mass is the kilogram (kg), but commonly, when we measure something in kilograms, we are measuring the force of gravity on that mass, which should really be measured in newtons (N), a measure of force. However, if we used a balance, measuring an object against a standard mass, then our result would be independent of the strength of gravity in that place, and so it would not matter if we were on Earth or on the moon: our measurement would be the same (i.e., a measurement of the mass, which never changes, wherever you may be).
What is a unit of mass?
Mass is the total amount of matter in an object. Basically, it is measured in kilograms for large objects, grams for small objects, and milligrams for very small objects.
1 kilogram = 1000 grams
Further it can be divided into gram, centigram, milligram etc.
The SI unit is the kilogram
The SI unit for mass is kg
A unit of mass are gram, kilogram, decigram etc..
gram, kilogram, milligram
The definition of unit of mass is very simple, here it is: A unified body of matter with no specific shape: a mass of clay
Mass is the amount of matter contained in a substance.
The unit of mass is Kilogram(Kg).
the unit for mass is kilograms and the unit for weight is kg m/s^2 or Newton (N).
The amount of constituent particles that a piece of matter contains is said to be its mass. It is very different from weight. It is a common misconception that mass and weight are almost the same, but they are in fact very different. Coming to the point, the SI unit of mass is the kilogram (kg).
The Gram is the standard metric (and SI) unit of mass.
I believe that the kilogram is now the standard metric and SI unit and the gram is defined as 1/1000 of a kilogram
The SI unit of mass is Kilograms (Kg).
1 Kg = 1000gm
and the gram molecular mass of any substance corresponds to 1 mole of that substance.
The amount of matter contained in a body can be termed its mass.

Different systems of units are used in different parts of the world. These systems are mainly different methods of defining the three fundamental units. The most widely used system of units is the CGS, or French, or metric system.

In 1960, at the Eleventh General Conference on Weights and Measures held in Paris, an international system of units was adopted to provide a consistent system of units and also to simplify communication among scientists. This system of units is known as Le Système International d'Unités, abbreviated as SI units.

- C.G.S. or French or Metric System: This system originated in France but is now widely used in scientific measurements throughout the world. In this system, the unit of length is the centimetre, the unit of mass is the gram, and the unit of time is the second. From the initial letters of the words centimetre, gram and second, this system is called the C.G.S. system.
- S.I. units: In this system, in addition to the units of length, mass and time, units of electric current, temperature, luminous intensity and quantity of substance are adopted, i.e., seven basic units in all. These are:

PHYSICAL QUANTITY - UNIT - SYMBOL
1. length - metre - m
2. mass - kilogram - kg
3. time - second - s
4. electric current - ampere - A
5. temperature - kelvin - K
6. luminous intensity - candela - cd
7. quantity of a substance - mole - mol
the unit of mass is KG(s)
SI unit of Mass
- g, kg, lbs
Yojana, lbs (pounds) are a unit of weight, not of mass.
- Please do not ask such stupid questions
Underneath the thermal insulation cover there is a complex set of telescopes and prisms through which incoming light is initially separated into four main bandwidths. Different gases in the atmosphere absorb different wavelengths of light. The GOME-2 spectrometer is used to split the light into different wavelengths to reveal absorption lines, which correspond to certain gases present in the observed sample. GOME-2 covers the 240-790 nm wavelength region, i.e. wavelengths covering ultraviolet and visible light.
A Scan Mirror directs the light emitted from the Sun-illuminated atmosphere into a telescope. The telescope is designed to match the two directions of the field of view (0.286° across-track and 2.75° along-track) to the two directions of the entrance slit of 0.2 x 9.6 mm2. In addition, the Scan Mirror can point to two internal calibration light sources and the Sun diffuser.
Key to the numbered components in the optical layout:
1 - Disperser
2 - Calibration slit
3 - Detector
4 - Double Brewster Prism
5 - Telescope mirror
6 - Predisperser Prism
7 - Channel separator
8 - Grating # 3
9 - Grating # 4
10 - Beam splitter
11 - Channel # 3
12 - Channel # 4
13 - 590 - 790 nm
14 - 401 - 600 nm
15 - 240 - 315 nm
16 - 311 - 403 nm
17 - Electronics box
18 - Channel # 1
19 - Channel # 2
20 - Grating # 1
21 - Grating # 2
22 - Calibration lamp
23 - Calibration Unit (CU)
24 - Sun diffuser
25 - Telescope mirrors
26 - Scan mirror
Behind the entrance slit, the light is collimated by an off-axis parabolic mirror (f = 200 mm) onto the double Brewster / pre-disperser prism configuration. These prisms generate the s- and p-polarised light beams which are directed onto dispersers and through to detectors. This additional spectrometer is part of the Polarisation Unit (PU).
Light passing through the pre-disperser prism is also directed into the main spectrometer. A second off-axis parabolic mirror (f = 25 mm) focuses the dispersed beam onto the channel separator prism. The pair of parabolas forms a relay system with a magnification of 0.625. The band separator is a quartz prism, the first surface of which is partially coated with a reflective coating (for channel 2) and a transmission coating (for channel 1). The light for channels 3 and 4 passes the prism edge, and a dichroic filter separates it into the two channels. To avoid the slow but steady outgassing of this coating, experienced with GOME-1, it was manufactured using plasma ion-assisted deposition technique to provide high-temperature stability.
All refractive optics are made of quartz (Suprasil 1) and are multilayer-coated for maximum efficiency and low stray-light. The off-axis parabolas are made of aluminum, nickel-coated and machined with a single-point diamond turning technique. Polishing then achieves a surface quality compliant with the low-stray-light application in the ultraviolet.
The four holographic gratings (7, 8, 19, 20) have demanding requirements in terms of stray-light reduction and diffraction efficiency, and so only master gratings can be used. The stray-light performance of the UV channel requires that the grating blanks have a micro-roughness of better than 0.5 nm root mean squared. The groove density is determined by the angles of incidence, which are adjusted to the required densities of 3600 l/mm (channel 1), 2400 l/mm (channel 2) and 1200 l/mm (channels 3 & 4). Particular care is taken to avoid and shield against false light generated with the recording set-up. The symmetrical photoresist groove profile is transformed to a sawtooth-like shape by ion-beam etching. Due to the high spectrometer angles of 45 to 50°, the efficiency very much depends on the shape of the groove profile. The polarisation sensitivity of the GOME-2 gratings is considerably lower than for GOME-1. The dispersed light is focused by a four-lens objective onto a silicon linear detector array in the Focal-Plane Assembly.
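For a rough feel for these numbers, the standard grating equation m*λ = d*(sin θi + sin θd) relates groove spacing, wavelength, and diffraction angle. The snippet below applies it to the 3600 lines/mm quoted for channel 1; the 280 nm wavelength, the 10° incidence angle, and the first diffraction order are illustrative assumptions, not actual GOME-2 design values.

```python
import math

def diffraction_angle_deg(grooves_per_mm, wavelength_nm, incidence_deg, order=1):
    """Solve m * wavelength = d * (sin(theta_i) + sin(theta_d)) for theta_d
    (one common sign convention for a reflection grating)."""
    d_nm = 1e6 / grooves_per_mm                    # groove spacing in nm
    s = order * wavelength_nm / d_nm - math.sin(math.radians(incidence_deg))
    return math.degrees(math.asin(s))

# Channel 1 uses 3600 lines/mm; take 280 nm (mid-UV) and an assumed 10 deg incidence.
print(diffraction_angle_deg(3600, 280, 10))        # roughly 57 degrees
```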
Last update: 28 June 2006 | <urn:uuid:66f80915-f467-450b-96fe-6e79cd4256bb> | CC-MAIN-2016-26 | http://www.esa.int/Our_Activities/Observing_the_Earth/The_Living_Planet_Programme/Meteorological_missions/MetOp/Spectrometer/(print) | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398209.20/warc/CC-MAIN-20160624154958-00042-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.882626 | 954 | 3.265625 | 3 |
primary cultures: age of rat?
kps2 at kimbark.uchicago.edu
Thu Aug 17 09:26:20 EST 1995
The age at which cultures should be prepared also depends on the types of
neurons you wish to obtain. For example, granule cells (hippocampal, just
for the sake of argument) develop much later than the pyramidal cells.
To obtain a culture enriched in granule cells, most workers will prepare
cultures after birth, sometimes several days. There are also likely
to be differences in the development of pyramidal neurons verses local
circuit neurons. The same may apply to the cerebellum. I don't have
first-hand experience with other regions but it is likely that preparing
cultures at different ages will yield different cell types.
Ken Scholz Department of Pharm. and Physiology
kps2 at midway.uchicago.edu Univ. of Chicago
When drawing the sphere, perspective has no meaning, since the sphere looks like a circle from any angle. The most important part of drawing a sphere is showing its volume. You can do this by carefully studying the light, the shading, the reflected light, and the highlights. To draw a good circle, you can first draw a square and then draw the circle inside it. Notice that the sphere's shadow on the table has the shape of an ellipse.
While drawing, try to see the object as an image. Be bold with your lines and don't worry about making mistakes. You can always fix them later. Don't worry about the construction lines or smudges. | <urn:uuid:4be531e2-cca3-45ab-b16f-75d6cee1de36> | CC-MAIN-2016-26 | http://www.draw23.com/drawing-sphere | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393442.26/warc/CC-MAIN-20160624154953-00103-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.948765 | 134 | 4.28125 | 4 |
Improving Web accessibility through an enhanced open-source browser
Item Type: Journal Article
The accessibility Works project provides software enhancements to the Mozilla™ Web browser and allows users to control their browsing environment. Although Web accessibility standards specify markup that must be incorporated for Web pages to be accessible, these standards do not ensure a good experience for all Web users. This paper discusses user controls that facilitate a number of adaptations that can greatly increase the usability of Web pages for a diverse population of users. In addition to transformations that change page presentation, innovations are discussed that enable mouse and keyboard input correction as well as vision-based control for users unable to use their hands for computer input.
Hanson, V.L., et al. 2005. Improving Web accessibility through an enhanced open-source browser. IBM Systems Journal. 44(3): pp.573-588. Available from http://dx.doi.org/10.1147/sj.443.0573 | <urn:uuid:b5714368-66de-4916-8c9f-aa869a605ac4> | CC-MAIN-2016-26 | https://repository.abertay.ac.uk/jspui/handle/10373/1177 | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393533.44/warc/CC-MAIN-20160624154953-00123-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.773173 | 201 | 2.671875 | 3 |
Mass Atrocities and Armed Conflict: Links, Distinctions, and Implications for the Responsibility to Prevent
Alex J. Bellamy
Policy Analysis Brief
A distinct and practical agenda for atrocity prevention has proven difficult to articulate. Concrete policy development has been frustrated in part by the complex relationship between mass atrocities and armed conflict. A strong empirical correlation leads some to assume a direct causal link and conclude that reinforcing existing efforts to prevent armed conflict remains the most effective approach to genocide and mass atrocity prevention.
Yet, not all conflicts give rise to mass atrocities, and many atrocities occur in the absence of armed struggle. In cases such as Bosnia, Rwanda, and Sudan, international efforts to secure peace settlements distracted attention from, and ultimately enabled, ongoing and imminent atrocities.
In a new policy analysis brief from the Stanley Foundation, Alex Bellamy considers the dynamics of the relationship between conflict and atrocity prevention. He stresses that, while conflict prevention is central to preventing mass atrocities, effective atrocity prevention demands something more—tailored engagement targeting both peacetime atrocities and those committed within a context of armed conflict.
What is required, he argues, is an “atrocity prevention lens” to inform and, where appropriate, direct policy development and decision making across the full spectrum of prevention-related activities. With the focus this lens provides, governments and international organizations can implement effective operational approaches to address the complex challenges of atrocity prevention.
This statistical chart catalogues mass atrocity campaigns between 1945 and 2010, indicating whether they occurred in contexts of war or minor armed conflict. All included campaigns resulted in an excess of 5,000 civilian deaths and demonstrated evidence of deliberate civilian-targeting.
This chart cross-compares the policy instruments associated with systemic, structural, and direct prevention with existing prevention agendas articulated for conflict, the Responsibility to Protect (R2P), and genocide.
This chart relates elements of the common prevention agenda to the specific indicators of genocide risk identified by the United Nations Office of the Special Adviser for the Prevention of Genocide.
OpenOffice Tutorial: Delete a database object using the toolbar
- To delete database objects such as Tables, Queries, Forms and Reports using the toolbar, first you must look for the particular object you want to delete.
- In this example we want to delete a query. So we are clicking on the "Queries" icon here.
- Next, look for the name of the object you want to delete and click on it to select it.
- For this example we are clicking on "Query_person_by_name"
- Next, look for the delete button here, it has the document and X icon. Just click on it once.
- You will then be asked to confirm the delete action.
- Just click on the "Delete" button to continue.
- Great! We have deleted the query. Remember that this procedure also applies when deleting other database objects such as Tables, Forms and Reports.
- Congratulations! You have learned how to delete database objects.
by Kaj Kandler
Delete a database object using the toolbar
- How to look for and select a database object.
- How to find the delete button in the toolbar.
- How to delete a database object.
For advanced functionality with similar results see:
- Edit a database object using the toolbar
- Open a database object using the toolbar
- Rename a database object using the toolbar
If Lucy is limping around and having difficulty getting up in the morning, you might suspect arthritis. But with your dog's lameness switching from one leg to another, and the presence of scaly, red sores, it's likely something else is wrong. These symptoms, along with testing, may lead your vet to diagnose canine systemic lupus. It's an autoimmune disease that's as serious as it sounds, and it could possibly shorten Lucy's life.
Lupus is a chronic disease meaning once Lucy has it, lifelong treatment will be necessary. She'll have her good days and bad days as the disease goes into and comes out of remission. Becky Lundgren, D.V.M. wrote in her article "Systemic Lupus Erythematosus" for VeterinaryPartner.com that the disease is potentially fatal. Canine lupus is capable of shortening a dog's life because it causes her immune system to attack her own tissues and cells. Occasionally the resulting cell damage can lead to death.
Early diagnosis and treatment is the key to keeping lupus from affecting Lucy's life span, but diagnosing it can be difficult. The symptoms are difficult to pin down because not all dogs show the same signs. Additionally, the ones you'll notice, such as fever, lameness and skin and mouth sores will come and go. This can keep you from recognizing the condition as serious. Blood tests have to be done to confirm other symptoms such as anaemia, thyroiditis and antinuclear antibodies.
Prevention is Problematic
The cause of canine lupus isn't known, although some factors have been suspected as having an effect on which dogs develop the disease. Lucy could be genetically predisposed to having canine lupus, or a viral infection or a drug reaction could bring it on. Because of the uncertainty surrounding the cause of lupus in dogs, there's no sure-fire way to prevent any dog from getting it, other than keeping a dog who has it from breeding to avoid perpetuating the disease.
Caring for a Dog With Lupus
Treating Lucy's lupus with immunosuppressive drugs and corticosteroids can reduce the chances of the disease damaging her tissues and cells. That can go a long way toward ensuring the illness won't cut her life short. At home, you can do your part by encouraging rest during her flare-ups, even crating her if necessary to keep her from overexerting. Bright sunlight can increase the frequency of those flare-ups, so helping her to avoid intense sunlight is beneficial. Pet MD notes that if Lucy's kidneys have been affected by the disease your vet likely will put her on a low protein diet, too. | <urn:uuid:447eaa80-b9c3-4e8c-876f-74ce27ab9852> | CC-MAIN-2016-26 | http://dogcare.dailypuppy.com/canine-lupus-shorten-dogs-life-6304.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783402746.23/warc/CC-MAIN-20160624155002-00071-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.966814 | 562 | 2.984375 | 3 |
Most of the hundreds of thousands of ash trees in Onondaga County are headed for a slow death.
Earlier this week, the U.S. Department of Agriculture confirmed that a little green beetle found on a purple sticky trap in DeWitt was an Emerald ash borer.
The invasive insect, thought to have hitch-hiked to the United States from China in the mid-1990s, devastates ash trees. It decimated all of the ash trees in Detroit. With the first-ever Onondaga County finding earlier this week, the beetle is now in 16 New York counties.
"We all knew it was just a matter of time," said David Coburn, director of the Onondaga County Office for the Environment and a member of the task force.
The Emerald ash borer was first found in the United States in Michigan in 2002. It was first found in New York in 2009 in Cattaraugus County.
Coburn, along with Stephen Harris, the city/county arborist, and Jessi Lyons, of Cornell Cooperative Extension of Onondaga County, will be working on how to deal with the infestation once they find out exactly how bad it is. That could take months, Lyons said. The three have been working together for a year as part of a task force set up to deal with the ash borer's anticipated arrival.
The only thing that's proven effective in staving off death from an ash borer is to have trees injected with powerful insecticide treatments, Lyons said. The injections are costly and have to be done every two years.
The city has already started taking down ash trees, Harris said. Over the past year, 400 have been cut down and chipped up. They've been replaced with other trees, he said.
Some big, healthy trees owned by the city and the county will likely be treated with the insecticide injections. But hundreds of others will be cut down before they fall prey to the beetle. Others - many of them in forests - will likely die of damage from ash borers.
There's no known way to eradicate them, Lyons said. The ash borers only leave an area when all of the ash trees are gone.
The one identified earlier this week was found near Carrier Circle. Finding the ash borer was part of an early detection program. State, local and federal officials have been putting out purple sticky traps -- shaped like prisms -- since 2011.
The county and city are both still in the process of counting their ash trees. The county is about half done counting the trees on its property. There are 24,000 just in the half that they've counted - 4,400 of them are in Onondaga Lake Park. The city has about 18,900. Once they know better how far the borer has spread, they'll decide what trees to take down and what trees to treat.
Ash trees make up 13 percent of the tree population in Onondaga County.
Over the next two days, workers and volunteers will be checking more than 100 purple sticky traps around the county, looking to see if any of the bugs stuck there are ash borers.
Cornell Cooperative Extension will answer questions about what you can do with your ash trees: 424-9485.
Contact Marnie Eisenstadt at 315-470-2246.
AAAS Science & Technology Policy Fellow, National Science Foundation
A little over a year has passed since US President Barack Obama launched the Materials Genome Initiative (MGI) to enhance the United States’ global competitiveness by cutting in half the current time and cost of bringing new materials from the laboratory to the marketplace. MGI is part of the president’s larger innovation and competitiveness agenda addressing advanced domestic manufacturing.
Traditional materials development can be described as a continuum that spans discovery to deployment and commonly takes 10 to 20 years to traverse. Thus, materials crucial to solving some of society’s most pressing problems may have already been invented and are awaiting implementation in a manufactured product. MGI aims to accelerate this timeline by creating a materials innovation infrastructure (MII) that will more closely integrate experimental tools, computational tools, and digital data. Through the MII, the discovery-to-development continuum would become more iterative and collaborative, which the administration hopes will result in more new materials brought to market in a shorter amount of time.
In many ways, MGI is the next step in the natural evolution of a movement that began in the 1980s with accelerated advances in computation and the concept of “materials by design,” and has continued through more recent activities in integrated computational materials engineering. According to Linda Horton, director of the Materials Sciences and Engineering Division at the Department of Energy (DOE), “The community is ready for this initiative. The confluence of computational and experimental advances has made the research community very well prepared for launching this activity.”
Although the community may have the technological preparation to take on the challenges of MGI, fully embracing the initiative requires a cultural shift. “We’re asking people to change the way they think about their work,” said Jim Warren, special advisor to the director on MGI at the National Institute of Standards and Technology (NIST). “‘Make it, test it, make it, test it’ is not an MGI approach. Experiments need to be carefully selected against models so one can enhance the value of the experiment and the model.” This means bringing together the separate and often disparate communities of materials experimentation, simulation, modeling, and theory to form collaborative research networks.
As MGI develops, additional issues will have to be addressed, such as how to recognize and assign value to contributed data sets, and how to navigate and police the emerging world of open access.
Several government agencies are working to advance the initiative, coordinated by an interagency subcommittee chaired by Cyrus Wadia, assistant director for Clean Energy and Materials Research and Development at the White House Office of Science and Technology Policy. In FY 2012, DOE, NIST, the National Science Foundation (NSF), and the Department of Defense (DoD) have together invested over $60 million in direct MGI activity and programs influenced by MGI, and the president has requested significant budget increases for FY 2013 to continue building on this progress.
NSF’s Designing Materials to Revolutionize and Engineer our Future (DMREF) program supports research that would accelerate materials discovery and development by enabling a materials-by-design approach, whereby materials functions and properties could be predicted from first principles. Fundamental advances in materials understanding across length and time scales will allow the interrelationships between constitution, processing, structure, properties, performance, and process control to be established. DOE’s Office of Science’s Predictive Materials Science and Chemistry program has a similar approach, seeking to advance materials and chemical processes that will specifically address energy-related basic research challenges through computation, experiment, and data. NSF plans to award $11 million in grants this fiscal year, while DOE plans to award up to $18 million. Both agencies intend to continue and expand these programs, with NSF requesting more than $30 million for FY 2013 and DOE’s Office of Science requesting $20 million.
DOE’s Office of Energy Efficiency and Renewable Energy has its own set of activities to promote MGI, including a $14 million “lightweighting” program (spread over FY 2012 and FY 2013) that aims to improve vehicle fuel efficiency by incorporating advanced materials that will be developed with the aid of predictive modeling.
DoD’s investment in MGI covers the entire materials development continuum and aims to fund research that takes advantage of new approaches to modeling materials characteristics that will improve the prediction and optimization of materials properties for applications including body and vehicle armor, jet engines, and maritime and aerospace structures. These efforts are integrated into the activities of the Office of Naval Research, Army Research Laboratory, and Air Force Research Laboratory, including through the Multidisciplinary University Research Initiative, University Centers of Excellence, research contracts with industry, and Collaborative Research Alliance programs. DoD’s FY 2012 investment in MGI is estimated at $17.3 million, with a significant increase in funding planned in FY 2013.
NIST’s MGI-related effort, Advanced Materials for Industry, aims to develop techniques and tools that will enable materials data, from both simulation and experiment, and modeling systems, operating over a range of length and time scales, to be more interoperable. The agency will also develop the means and standards for quality assessment of these models and the data generated from them. For FY 2013, NIST has requested an additional $10 million for this program, which would bring its total annual investment to $14 million.
It is important to keep in mind that MGI is more than a program; it is a scientific and cultural movement. As such, each agency is aligning and coordinating its investments with other initiatives. For example, both NSF and DOE’s Office of Science are leveraging related initiatives at their agencies, such as Cyberinfrastructure for the 21st Century (CIF21) at NSF, and Scientific Discovery through Advanced Computing (SciDAC) and Computational Materials and Chemical Sciences Network at DOE. There are also strong ties to two other interagency administration priorities in advanced manufacturing and big data (see the July 2012 issue of MRS Bulletin, p. 628, for more on the Big Data R&D Initiative).
Meeting the scientific, technological, and cultural challenges MGI presents will require the active participation of stakeholders in government, industry, and academia—an “all hands on deck” approach. Luckily, the initiative continues to gain support. In May, more than 60 companies and universities announced new commitments to MGI, joining a list of existing support from industry, academia, national laboratories, and professional societies.
Any opinion, finding, and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of the National Science Foundation. | <urn:uuid:519d3767-0aa4-46ea-8d47-bb2ecc55f9cc> | CC-MAIN-2016-26 | http://journals.cambridge.org/action/displayFulltext?type=6&fid=8669523&jid=MRS&volumeId=37&issueId=08&aid=8669522&fulltextType=XX&fileId=S0883769412001947 | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783403825.35/warc/CC-MAIN-20160624155003-00085-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.930731 | 1,445 | 2.75 | 3 |
Before Ted Bateman became a space geek, he was a science nerd and proud of it. As a ten-year-old he delighted in dissecting the frogs and sheep lungs that his mom the biology teacher brought home from school. He once got a cat—already dead—from the Humane Society so he could bleach the bones and piece together the skeleton for an end-of-year project. And when he was twelve he designed his first mouse experiment at a science fair.
“I had three mice,” he remembers. “I fed one normal lab chow, one soybeans, and one Frosted Flakes.” Then for two weeks he ran them through a maze that he and his grandfather had built. The soybean mouse was always the clear winner. “It wasn’t very scientific,” Bateman admits. “But it was really cool.”
Thirty years later, as a biomedical engineer, Bateman is still running experiments with mice. Except now he works with NASA, and his experiments have much more relevance to human health. He and colleagues around the country want to know what near-zero gravity does to bone density and whether new drugs can promote bone growth. Their research isn’t just for the next generation of astronauts, but for the rest of us, too.
“Space flight is really a model for accelerated aging,” Bateman says. “Many things astronauts experience in microgravity happen to us when we get older—muscle atrophy, bone loss, cardiovascular deconditioning, immune dysfunction.”
And it turns out space flight is also a good model for radiation exposure. When cancer patients undergo radiation treatment, they lose bone mass. Bateman is trying to understand how that happens and what can be done about it.
In 1990 Bateman met his future. He was one of six physics majors at a small liberal arts college, but one of two from Fort Collins, Colorado. He befriended this fellow Coloradan, whose dad happened to work for NASA’s Space Life Sciences Training Program, a six-week learn-a-thon for aspiring engineers. The father encouraged Bateman to apply; the next year, Bateman was accepted.
He listened to lectures from prominent scientists, learned how NASA operated, and saw how researchers designed animal experiments using sea urchins and rats. Bateman was sold. He enrolled in graduate school, where he helped design structures that astronauts could live in. Later he earned a doctorate in bioengineering and collaborated with BioServe Space Technologies at the University of Colorado, which conducted microgravity research and designed space-flight hardware.
“From then on my life has been kind of outlined by space shuttle flights,” Bateman says. In 1996 he flew rats on STS-77 (that’s the seventy-seventh mission of the space transportation system, a.k.a. the space shuttle.) Bateman helped design an experiment with NASA and the drug company Chiron to see if a hormone called insulin-like growth factor 1 increased bone growth. It did, but was never approved as a drug to grow bone tissue because it was too nonspecific. “Anything anabolic—anything that grows bone tissue—can potentially promote the growth of other things in the body,” Bateman says. “Cancer cells, for instance.”
That experiment, though, did yield fruit for Bateman. He wondered why NASA used rats for experiments instead of mice; most scientists preferred mice for their genetic variability. “Essentially NASA was wed to rats because rats don’t smell as bad as mice,” Bateman says. But he and colleagues proved that astronauts would have to put their noses right up against the glass mouse enclosures just to get a whiff of the furry little space-goers. And there was no reason for astronauts to do that; they barely even needed to check on the mice.
NASA relented. In December 2001, mice went on their first space odyssey. Bateman was in charge of every aspect of an experiment that took months to piece together with NASA, the biotechnology company Amgen, and other researchers. Its goal was to see if a protein called osteoprotegerin could keep mice from losing bone mass in space.
The mice were in orbit twelve days. A day before liftoff, Bateman’s team injected some of them with the drug and some with a placebo. The mice were put on board twenty-two hours before the launch, and there they stayed. The astronauts hardly paid them any mind, except to visually check to make sure they were okay. They were. In microgravity, the mice could sprint up the wall, across the ceiling, and down the other side of their glass enclosure over and over. They nibbled on a big chunk of food affixed to the wall of their enclosure. To drink, they sucked on a nozzle attached to a spring-loaded water bag. The mice floated much like astronauts, using their forelimbs to maneuver around their habitats.
When the rodents returned, Bateman’s team found that osteoprotegerin worked exceedingly well. Nine years later Amgen gained FDA approval for a drug astronauts have since used to prevent bone loss.
For the earthbound among us, though, preventing bone loss isn’t always enough. Osteoporosis, for instance, often isn’t diagnosed until bone density has already been depleted. One solution could be a drug that actually grows bone tissue. That fact was on Bateman’s mind when STS-118 rolled around in 2007. That mission featured mice treated with a drug aimed at increasing muscle mass. It worked, and Bateman’s team found that the drug also promoted bone growth. It’s now in clinical trials.
Bateman, by then a professor at Clemson, was making another connection between space travel and life on Earth. Astronauts are exposed to radiation, which can deplete bone tissue. “But that dose of radiation,” he says, “is much lower than what cancer patients get during radiation therapy.” Bateman wanted to find out what happened to these patients and, more specifically, what happened to their bone cells.
Bone tissue is composed of two major kinds of cells: osteoclasts, which limit bone growth, and osteoblasts, which promote bone growth. Together, they maintain proper bone density.
In 2007 Bateman teamed up with researchers at UC Irvine to measure how much bone mass cervical cancer patients lost after six weeks of radiation and what exactly the radiation did to osteoclasts and osteoblasts. The conventional theory was that radiation damages osteoblasts, resulting in a gradual decline in bone mass over time. But that’s not what Bateman and his collaborators saw. They found that radiation turns on osteoclasts, which spurs the initial bone loss. Then, as radiation treatment continues, osteoblasts are suppressed. “We found that bone loss is incredibly rapid and just massive,” he says.
Bateman thinks existing therapies that diminish osteoclast activity could help patients. As for drugs that promote bone growth, there’s only one on the market. “It’s very expensive and you have to get a daily injection,” Bateman says. We should have better options, he says, and we should know what happens to patients with all kinds of cancers who undergo radiation treatment.
For that, Bateman, who came to Carolina in 2010, has teamed up with radiation oncologist Larry Marks to start clinical trials for lung cancer and prostate cancer patients. They’ve designed a study and have collaborators lined up, but they need funding. Bateman was working on that when NASA and Amgen approached him about participating in the final space shuttle flight.
“I was really too busy moving the lab to UNC to be involved with the mission,” he says. “But my wife reminded me that I’d seriously regret not doing this. And she was right.” So Bateman spent several months helping BioServe gather scientists from five universities and coordinate their work with NASA and Amgen to test a bone-promoting antibody on thirty mice.
In July 2011, BioServe and Bateman’s team traveled to Cape Canaveral to participate in the launch festivities. They did what they always had done—prepped the mice, secured them in the shuttle’s mid-deck locker, and made sure their enclosures were functioning properly. (Air must flow from the ceiling to the floor to make sure the mouse waste floats downward.) Thirteen days later the mice, men, and woman of Atlantis returned home safely.
The other labs have yet to report their findings, but Bateman’s team found that the placebo mice lost bone mass as expected, and the drug-treated mice gained bone. The next step is to conduct clinical trials to see if the antibody can help prevent osteoporosis and bone fractures in humans. Or if it can help cancer patients maintain bone density.
As for Bateman’s career as NASA’s mice man, it might be over. There won’t be any more space shuttle flights, and NASA won’t be flying mice anytime soon.
Private rocket builders, though, could pick up the slack. One company will be ready to fly to the International Space Station next year. BioServe, Bateman’s old stomping grounds, is slated to help design experiments with yeasts and bacteria on subsequent flights. Bateman thinks that animal experiments won’t be far behind. Will he be the man in charge, wrangling researchers, rocket entrepreneurs, and drug companies?
“Organizing these flights is sort of like having a kid,” Bateman says. “You forget how much work it is. You need enough time to pass. But let’s just say that if I’m asked I’ll have a tough time saying no.” | <urn:uuid:2d1a23be-6178-42c2-9625-e38fae599761> | CC-MAIN-2016-26 | http://endeavors.unc.edu/mice_in_space | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393533.44/warc/CC-MAIN-20160624154953-00160-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.967958 | 2,085 | 3.0625 | 3 |
We’ve often been told stress is the enemy.
Stress has been linked to illnesses including cardiovascular disease and even cancer. While stress has been made into public health enemy number one, Dr. Michael Gervais, high-performance psychologist for the Seattle Seahawks, says stress can be beneficial when channelled in positive ways. While stress can work to the advantage of high-performance athletes, it can also do wonders for us regular people.
Gervais says there is a correlation between stress and performance. In his work with athletes, he strives to find the optimal zone where high-performance and moderate levels of stress intersect, allowing for growth and adaptation. "Physiologically, [stress] primes our body to be in an optimal state where we’re fast and powerful and quick and responsive. Cognitive processing is optimal and vision is more acute," says Gervais. Too much stress, though, causes the body to over-tighten and hinders growth.
"What we’ve learned from those that excel is that they spend equal amounts of time working on recovery," says Gervais. While too much recovery can cause you to become static, not enough recovery causes burnout. Using stress to its advantage means building in appropriate amounts of recovery whether in the boardroom or on the sports field.
Breathing is one of the first mechanisms that allow us to find a calm state and increase focus and attention. The Seattle Seahawks head coach Pete Carroll introduced meditation techniques to his players to help them stay calm and focused.
How we think about and interpret stressful events will impact the potential benefits that can be reaped from stress. Stress, simply put, is change. When we have events of change, we have the opportunity to categorize them as positive or negative. Losing a client, for example, can cause you to react by saying "this is the end of my business as I know it" or "I’m going to devote my energy that I would have spent on that client on landing another client that I’ve wanted to work with for a long time and haven’t had the opportunity." Athletes are taught to use these same techniques to interpret stressful events when they fumble a ball or lose a game.
Getting proper nutrition and sufficient sleep is important to promoting recovery. "Sleeping is where some of the most important recovery and regeneration takes place," says Gervais. Fueling your mind and body with proper nutrition and hydration allows you to stretch your mental muscle. "Hydration has been said to impact brain volume, which impacts thought processing, learning, and memory," says Gervais.
"Mood is a great indicator of overtraining and improper recovery," says Gervais. The best way to tell if you’ve gone too far over your stress threshold is to measure your zest for life and business. Feeling easily agitated, irritable, or short-tempered is an indicator that you may have overdone it on your stress levels or you haven’t built in enough recovery. | <urn:uuid:3a661010-b705-4139-873d-d25440aab7b8> | CC-MAIN-2016-26 | http://www.fastcompany.com/3026592/dialed/turn-stress-into-your-best-ally | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783408828.55/warc/CC-MAIN-20160624155008-00024-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.964367 | 624 | 2.53125 | 3 |
Basic Electric Bass, Volume II: Basic Repertoire
By Lucas Drew
University of Miami Press
Bass Guitar Method or Supplement
In the early 1970s, the five volumes of the Basic Electric Bass series were among the first publications for the instrument. At that time, many double bass (or upright players) employed similar fingering on the electric bass (bass guitar). In this second edition of Volume II, most fingerings have been deleted so that more contemporary techniques may be applied. This book of traditional and contemporary repertoire will help intermediate students improve their reading skills, and includes 14 solos and 7 duets. | <urn:uuid:a5844953-00ef-4318-a6d2-490c2bff2b9c> | CC-MAIN-2016-26 | http://alfred.com/Products/Basic-Electric-Bass-Volume-II-Basic-Repertoire--82-34943.aspx | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396027.60/warc/CC-MAIN-20160624154956-00143-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.924915 | 126 | 2.515625 | 3 |
Our country is on a fast track of development. Population pressure demands rapid strides in the housing sector. There has been a tendency to develop housing colonies as close to the city centre as possible. Consequently there is now a shortage of land. Take Mumbai for example. Land is costlier than life there. The concept of Navi Mumbai did come, but it seems everyone wants a flat at Colaba only. Naturally buildings have to grow skywards. More human beings mean more hospitals, more schools and more shopping malls. Now even these structures are rising more towards the sky than earlier. Are we aware of the possibility of killer earthquakes in this sprawling country of ours? Do we have a roadmap to seismic safety? These are some questions that often haunt the minds of geologists and earthquake engineers. We shall try to analyze them and seek answers.
Our subcontinent is unique, because it is a product of past turmoil. Had the landmass not broken from Africa, Australia and Antarctica some 270 million years (m.y.) ago, moved like a Noah's Ark and started to collide with the stable Asian landmass some 20 m.y. ago, the story today might have been different! The union of smaller plates and their collision with the Asian plate did cause gigantic earthquakes, witnessed today in the form of seismites, or evidence of past earthquakes. Even today, the imperceptible movements of the Indian plate underneath the Asian plate cause tremors powerful enough to shake us from our slumber.
Some of the biggest earthquakes in the world have occurred in the subcontinent, and we have learnt some major lessons in the science of seismology and earthquake engineering. How serious the British geologists and engineers were about the earthquake problem and safety in India can be understood from several examples. When an earthquake shook Kutch in 1819, it was found and established for the first time that the tremors were due to a subterranean fault. It was then also established that buildings on rocky foundations remain safe while those on alluvium shake like a fig leaf.
'Necessity is the mother of invention', goes the saying. True to this, after the Baluchistan earthquake of 1931, earthquake-resistant buildings were constructed for the first time for the officers of the Railways at Quetta. The effort paid dividends, as these were the only houses that remained unruffled during the 1935 earthquake at Quetta. The 1935 earthquake was quite devastating; it took a toll of nearly 25,000 lives. Taking a cue from the earthquake-safe houses built by the Railways, earthquake safety codes were refined and developed for the army and civil authorities at Quetta.
Professor S.K. Jain of IIT Kanpur has drawn attention to 'safe housing' through his papers published in various national and international forums. In one of his papers, published in the April issue of Current Science, he says that the collapse of 135 modern multistorey buildings in Ahmedabad, 245 km from the epicentre, clearly shows that Indian civil construction practices leave much to be desired with respect to seismic safety. Comparing the problem with South America, he says that earthquake-safe houses in Peru, which is rocked frequently by earthquakes, are within the reach of only the higher echelons of society. Unfortunately, he says, in India no one is safe with the type of so-called earthquake-safe houses being built.
Professor Jain further elaborates that the damage during the 2001 Bhuj earthquake and the tsunami of 2004 has left ample scars on the soils of Gujarat and the Andamans. He says that about 6000 school buildings were constructed in Gujarat between 1999 and 2000. During the earthquake on 26 January 2001, more than three-quarters of these school buildings either collapsed or were seriously damaged. School buildings and hospitals are two places where utmost care is taken in developed countries. Perhaps lives are more precious there than material costs! The Andamans fall in a high seismic risk zone. The Austin Creek bridge connecting North Andaman with Middle Andaman was constructed in 2002. Seismic codes were set aside; this negligence was pointed out, yet the authorities did not bother. The bridge became non-functional after the tsunamigenic Sumatran earthquake of 2004.
As per the structural engineers, seismic codes are often flouted, false certificates are obtained by greasing the palms of the authorities, and skyscrapers are built. Immediately after an earthquake the state and central governments announce measures to be taken to safeguard the future. However, the tall promises are never kept, and a repeat earthquake exposes the myth.
A question thus emerges: what is the way out? If by a magic wand all buildings and structures could become earthquake resistant, the problem would be over. Is it possible? To some extent it is. Certain precautions are mandatory in the building codes. For example, for a hospital or a school building, where there is a larger congregation of people, precise information about past ground motion and the intensity of earthquakes that have occurred within a radius of 300 km has to be collected. In developed countries this is a normal practice. But in this part of the subcontinent, the safety of school buildings and hospitals is perhaps the last priority. We have recent examples of 400-plus children perishing in Anjar during the Bhuj earthquake of 2001, and another 400 children crushed alive in Muzaffarabad in Pakistan during the 2005 earthquake.
Examples of such earthquake-generated tragedies are well recorded in the annals of the Geological Survey of India (GSI). The first semi-scientific record in India is that of the Delhi-Agra earthquake of July 15, 1505. A major earthquake again rocked Delhi on July 15, 1720. It was the 22nd of Ramzan, and people had assembled in mosques to offer prayers. Many people lost their lives in Shahjahanabad (New Delhi) and Old Delhi. After this, Kaifikhan recorded in 'Muntakhabul-Ul-Lulab' that similar shocks continued to terrorize Delhi for 40 days.
On September 1, 1803, the Mathura-Delhi area was rocked at 3 AM. Mathura suffered the maximum damage. Open fissures formed in the ground and water gushed out with force. Delhi was also severely affected, and the top portion of the famous Kutubminar came tumbling down.
The earthquake of June 16, 1819, in Kutch was no less devastating. Its intensity was such that eight kilometers north of Sindri, in the Rann of Kutch, a three-metre-high and 65-km-long ridge of clay and shell was formed. The local population named it 'Allah Bund'. Bhuj, a major town then, had perished in this earthquake. About 2000 people lost their lives. This earthquake took a toll of 500 lives during a religious congregation in a mosque at Ahmedabad. So powerful was the earthquake that shocks were felt in the far north at Sultanpur, Jaunpur, Mirzapur and Kolkata.
Thomas Oldham of the Geological Survey of India calculated the magnitude of this earthquake on the Richter scale as 8.3. The gravity of the situation can be understood from the fact that an earthquake beyond magnitude 8 causes total devastation.
Despite such precise information being available, we have not drawn up any concrete roadmap to safeguard our lives. Immediately after an earthquake, team after team of 'experts' visits the affected areas and describes the devastation like the 'five blind men' describing an elephant. Each expert has his own notions of safety and plans for future safety. Naturally there are clashes of views, and finally the government accepts some recommendations and does implement some of them. However, the masses remain deprived of the provisions made by the government, because there is hardly any awareness amongst the masses.
The two recent earthquakes of 2001 (Bhuj) and 2005 (Kashmir) have been an eye-opener, and the experts have aired their views on what should be done for future safety. There is a tendency to gather expertise from developed countries; no harm, as long as it is for the betterment of our own people. Loma Prieta in California is prone to frequent earthquakes. Famous for its historical buildings and churches, the area went in for large-scale retrofitting, the government and society there deciding on it after the devastating earthquake of 1989. The technique saved hundreds of lives and scores of buildings too. Thereafter the term 'retrofitting' came to stay for the earthquake safety of buildings in India. But before going for it, one has to ask: 'Is it really viable?' and 'Where do we need it and where can we avoid it?'
Retrofitting is a specialized technique. Apart from skill, it involves huge costs. Before embarking upon the project one has to evaluate the cost-benefit ratio. For example, for a heritage building retrofitting may be extremely beneficial, but for comparatively recent constructions, dismantling the present structure and reconstructing it with earthquake-safe designs may work out cheaper, says Prof Jain.
Construction activity in our country, especially in the urban housing sector, has been following age-old techniques. There are vast areas that have not experienced a severe earthquake in the recent past. Buyers of apartments in the high-rise buildings of Ahmedabad learnt the hard way about these primitive techniques. Their houses crumbled like a pack of cards during the Bhuj earthquake of 2001.
People of Gujarat and Kashmir have become quite aware of the significance of earthquake-proofing of houses. But alas, people in the rest of the country either hoodwink themselves or the government and avoid the mandatory aseismic designs. Therefore a massive awareness drive is required to enlighten the masses across the country. It is time that big builders are brought under a confederation which keeps updating their knowledge about the advantages of earthquake-resistant houses.
Rules for earthquake safe housing have been implemented, but down the ladder slackness is always there. Issuing fake certificates for earthquake safe houses has to be made a cognizable offence wherein the builder and the municipal authority both should be made parties.
Commonly an Indian house builder avoids engaging an architect, let alone a structural engineer. For him, the all-in-one expert is the mason. There is a strong need for creating a force of technicians and skilled hands guided by structural engineers so that civil constructions are made earthquake proof.
Prof Jain says that in the field of medicine it is a criminal offence for anyone without a medical degree to practice it. The same applies to law as well. However, in the case of civil construction there are no restrictions, and there are several 'self-styled' experts in the field. Such practice has to be stopped. The best way is to educate the masses: just as literate people generally avoid quacks for treatment, they will stop going to engineering quacks.
Implementation of regulations is very necessary. A person caught without a valid driving license is penalized heavily, whereas a person hoodwinking the building rules, or the official keeping his eyes shut while the rules are not followed, goes scot-free.
These days we are reading a lot about seismic micro-zonation. That is a long-drawn process, and it is very much needed, especially in the metros and cities where high-rise buildings are coming up in bulk. But unless precise details of anticipated ground shaking are available at a given spot, and measures are taken in the construction to counter that, a mere map will hardly be of any use. Thus seismic zonation studies need to be implemented in letter and spirit.
In a nutshell, tighter control and monitoring are required on the part of the government in civil constructions vis-a-vis earthquakes, and closer cooperation is needed from the builders and the like.
J.A. Dunn of GSI wrote in Memoir 73 of GSI in 1939, after completing the study of the Nepal-Bihar earthquake of 1934:
"Leprosy is not a common disease, but the medical profession has done its utmost to eradicate it for the sake of humanity. Great earthquakes are not part of the earth's crust, but it should be our duty to do all that we can to reduce its effects. Unless this matter is looked upon in a broad way, posterity may yet look back upon our short-sightedness with regret".
What Dunn wrote 68 years ago holds true even now. The science of seismology and structural engineering has grown in leaps and bounds in these years. Now at least let us have a road map to seismic safety. | <urn:uuid:83d63bb5-cc3f-49fc-b221-ec4693faf36e> | CC-MAIN-2016-26 | http://cms.boloji.com/index.cfm?md=Content&sd=Articles&ArticleID=4973 | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783408828.55/warc/CC-MAIN-20160624155008-00129-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.972093 | 2,486 | 3 | 3 |
Brian Palmer covers daily environmental news for OnEarth. His science writing has appeared in Slate, The Washington Post, the New York Times, and many other publications. This article first appeared in the Natural Resources Defense Council (NRDC) publication OnEarth. Palmer contributed this article to Live Science's Expert Voices: Op-Ed & Insights.
A stretch of the Gulf of Mexico spanning more than 5,000 square miles along the Louisiana coast is nearly devoid of marine life this summer, according to a study from the Louisiana Universities Marine Consortium released this week. Caused largely by nutrient runoff from farm fertilizer, this oxygen-deprived "dead zone" is approximately the size of Connecticut. Although slightly smaller than last summer's edition, the Gulf dead zone is still touted by some as the largest in the United States and costs $82 million annually in diminished tourism and fishing yield.
Which makes you wonder…
How many other dead zones are out there?
There are probably around 200 dead zones in U.S. waters, alone. After reviewing the academic literature on "hypoxic zones" in 2012, Robert Diaz, professor emeritus at the Virginia Institute of Marine Science at the College of William and Mary, identified 166 reports of dead zones in the country. Coastal waters contain the vast majority, though some exist in inland waterways. A handful of the 166 dead zones have since bounced back through improved management of sewage and agricultural runoff, but as fertilizer use and factory farming increase, the United States is creating dead zones faster than nature can recover.
There are more than 400 known dead zones worldwide, covering about 1 percent of the area along the continental shelves. That number is almost certainly a vast undercount, however, since researchers have yet to adequately study large parts of Africa, South America and Asia. Diaz estimates that a more accurate count is 1,000-plus dead zones, globally.
What causes dead zones?
Agricultural practices are the biggest culprit for dead zones in the United States and Europe. Rains wash excess fertilizer from farms into interior waterways, which eventually empty into the ocean. At the mouths of rivers, such as the Mississippi, the glut of phosphorous and nitrogen intended for human crops instead feeds marine phytoplankton. A phytoplanktonic surge leads to a boom in bacteria, which feed on the plankton and consume oxygen as part of their respiration. That leaves very little dissolved oxygen in the subsurface waters. Without oxygen, most marine life cannot survive. [Mississippi Floods May Cause Record-Breaking Dead Zone in Gulf]
Sewage causes the majority of dead zones in Africa and South America. That's a good thing, in a way, because engineers have been working for hundreds of years on sewage management solutions. In the early 19th century, London built a sewer system to divert waste from newfangled flush toilets into the River Thames. With this influx of nutrients — one creature's sewage is another's sustenance — bacterial populations multiplied and depleted the river's oxygen. The circumstances chased off aquatic life and enveloped the city in a horrific stench, culminating in the Great Stink of 1858. Sewage treatment and managed releases remedied the situation back then, and similar infrastructure investments could likely alleviate the excrement-fueled dead zones of the modern world.
Airborne nitrogen also contributes to the world's dead zones. When cars, trucks and power plants burn fossil fuels, they emit nitrogen-laden particulates into the air. These particulates eventually settle into waterways and head for the sea. Nitrification is a special problem in Long Island Sound and the Chesapeake Bay, which have absorbed large amounts of nitrogen from coal-burning power plants in the Midwest.
Do I live near a dead zone?
The largest U.S. dead zones are in the Gulf of Mexico and off the coast of Oregon. But, everyone in the eastern and southeastern United States lives close to a dead zone of some size.
There are two reasons for the density of dead zones along the Atlantic and Gulf coasts. First, look at a heat map of U.S. population density. There is an astonishing concentration of people, as well as animals and farms to feed them, in the East.
Second, there simply aren't that many rivers draining into the Pacific Ocean. With fewer rivers to carry farm runoff to the sea, fewer dead zones form.
The eastern portion of Long Island Sound has suffered dead zones nearly every year for the last two decades. Even halfway across the Sound — more than 50 miles from the most densely populated parts of New York City — the waters have been hypoxic in at least 10 of the last 20 summers.
The Chesapeake Bay hosts several dead zones, each from the drainage of a different river. According to Diaz, agricultural runoff and sewage account for about three-quarters of the problem. The other quarter is the result of airborne nitrogen.
You needn't live near a coast to have a dead zone. Lake Erie is likely in for a serious case of hypoxia this summer. The cyanobacteria that recently contaminated Toledo's drinking water will soon die and sink to the bottom, where other bacteria will feast on their remains and consume large quantities of the lake's dissolved oxygen.
Are humans solely responsible for dead zones?
No, but we almost always play a role. Natural processes, such as the churning of ocean waters, can form dead zones on their own. The massive dead zone born in 2002 near the coast of Oregon — which rivals the Gulf of Mexico dead zone in area — is the result of the upwelling of nutrients that fed an algal bloom. As the algae died and settled, they created a hypoxic area. Not all scientists think the dead zone was entirely natural, though — many believe changes in wind circulation related to global warming played a part.
Can dead zones be brought back to life?
Absolutely. The Black Sea once hosted one of the largest hypoxic zones in the world, stretching 15,000 square miles. When agricultural subsidies from the Soviet Union collapsed in the late 1980s, fertilizer runoff dropped by more than 50 percent. The waterways took three years to recover, and international support for runoff management has helped keep the Black Sea alive and well ever since.
There's no reason the United States can't adopt those practices, too — we simply need to implement the science that we already have. Agricultural researchers have made countless recommendations to minimize farm runoff, but the advice hasn't been heeded. Other property owners can help by taking it easy on the fertilizer and resisting the urge to install impermeable surfaces like concrete. And we already have plenty of other reasons to retire coal-fired power plants — dead zones are just one more. After all, it needn't take the fall of an empire to improve a nation's coastal areas.
This article is adapted from one that appeared in the NRDC publication OnEarth. Follow all of the Expert Voices issues and debates — and become part of the discussion — on Facebook, Twitter and Google +. The views expressed are those of the author and do not necessarily reflect the views of the publisher. This version of the article was originally published on Live Science. | <urn:uuid:95bf349d-60d1-4bc3-9297-107c0212d2e2> | CC-MAIN-2016-26 | http://www.livescience.com/47274-dead-zones-in-united-states.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783400031.51/warc/CC-MAIN-20160624155000-00099-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.944011 | 1,468 | 3.34375 | 3 |
Bones of the Upper Limb
This contains the bones of the superior appendicular skeleton:
- The clavicle and scapula (pectoral girdle)
- Humerus (arm)
- Ulna and radius (forearm)
- Carpal bones (wrist)
- Metacarpals (hand)
- Phalanges (fingers)
Bones of the Pectoral Girdle
- This bone extends laterally and almost horizontally
across the root of the neck. It extends from the
manubrium of the sternum to the acromion of the scapula.
- The clavicle (L. little key) connects
the upper limb to the axial skeleton and the
- The triangular-shaped medial (sternal) end of the
clavicle articulates with the sternum at the sternoclavicular joint.
- The medial two-thirds of the body (shaft) of the clavicle
are convex anteriorly, whereas the lateral one-third is
flattened and concave anteriorly.
- Its curvature increases its resilience. The broad lateral
(acromial) end of the clavicle articulates with the
acromion of the scapula at the acromioclavicular joint.
The clavicle has three functions:
- To act as a strut for
holding the upper limb free from the trunk so it may have
maximum freedom of action;
- To provide attachments for muscles;
- To transmit forces from the upper
limb to the axial skeleton.
The Scapula
- This flattened, triangular bone lies on the
posterolateral aspect of the thorax, covering parts of
the 2nd to 7th ribs.
- The scapula connects the clavicle to the humerus. It is
highly mobile and has a head, neck, and body.
- The body is thin and translucent.
- The scapula has a concave costal or anterior surface (subscapular fossa) and a
convex posterior surface from which the spine of the scapula projects.
- The smaller part, which is superior to the spine, is
called the supraspinous fossa,
and the larger part, which is inferior to the spine, is
called the infraspinous fossa.
- The spine continues laterally into a flattened process
called the acromion. It
projects anteriorly and articulates with the clavicle.
- Superolaterally, the scapula has a shallow glenoid fossa for articulation
with the head of the humerus. This part of the scapula,
called the head, is
connected to its blade-like body by a short neck.
- The coracoid process,
like a bird's beak, arises from the superior border of
the head and projects superoanteriorly.
- The scapular notch is in
the superior border.
The Bone of the Arm
The Humerus
- The humerus is the largest bone in the upper limb.
- Its smooth, ball-like head articulates with the glenoid
fossa of the scapula.
- Close to the head are the greater
and lesser tubercles for the insertion of the
muscles that surround and move the shoulder joint.
- The lesser tubercle is separated from the greater
tubercle by the intertubercular
groove (sulcus), in which lies the tendon of
the long head of the biceps brachii muscle.
- The anatomical neck
separates the head and tubercles.
- Distal to the anatomical neck is the surgical
neck. This is where the bone narrows to become
the shaft. The region is called the surgical neck because
it is the most frequent fracture site of the proximal end
of the humerus.
- The body, or shaft, of the humerus is easy
to palpate, as are its medial and lateral epicondyles.
Its superior half is cylindrical.
- Anterolaterally, there is a roughness known as the deltoid tuberosity for the
insertion of the deltoid muscle.
- There is a shallow, oblique radial
groove for the radial nerve that
extends inferolaterally on the posterior aspect of the body of the humerus.
- The distal end of the humerus is expanded from side to
side. The trochlea (L.
pulley) fits into the trochlear
notch of the ulna, which swings on this pulley
when the elbow is flexed.
- Just proximal to the trochlea are the coronoid
fossa and the olecranon
fossa, which accommodate corresponding parts
of the ulna.
- Adjoining the lateral part of the trochlea is a rounded
ball of bone called the capitulum
(L. little head).
- A prominent process, the medial
epicondyle, projects from the trochlea, and
the lateral epicondyle
projects from the capitulum. The epicondyles, being
subcutaneous, are easily felt. The medial epicondyle is the larger and more prominent of the two.
- From each epicondyle, a bony ridge runs proximally; these
are known as the medial and lateral supracondylar ridges.
Fractures of the Humerus
- Fractures of the surgical neck
are common in elderly persons
and usually result from falls on the elbows when the arm is abducted.
- The fracture line occurs superior to the insertion of the
pectoralis major, teres major and latissimus dorsi muscles.
- Because nerves are in contact with the humerus, the axillary, radial, and ulnar nerves may be
injured in fractures of the humerus.
- Traumatic separation of the proximal epiphysis of the
humerus can occur in young persons because this epiphysis
does not fuse with the body of the humerus until about 18
years of age in females and 20 years of age in males.
- Fracture-separation of the proximal
epiphysis occurs in children because the
articular capsule of the shoulder joint is stronger than
the epiphyseal cartilaginous plate.
Bones of the Forearm
The Radius
- This is the shorter of the two forearm bones.
- It was given its name because it resembles the spoke of a
wheel (in Latin).
- The proximal end of the radius has a disc-shaped head, a smooth cylindrical neck, and an oval prominence
or tuberosity, distal to the neck.
- The body (shaft) of the
radius increases in size from its proximal to its distal
end; it has a slight lateral convexity or bowing.
- The body is concave anteriorly in its proximal
three-fourths and flattened in its distal one-fourth.
- The anterior oblique line
of the radius runs obliquely across the body from the
region of the radial tuberosity to the area of greatest convexity.
- The medial aspect of the body has a sharp interosseous border for
attachment of the interosseous
membrane. Its lateral border is rounded.
- The distal end of the radius has a medial ulnar notch into which the
head of the ulna fits, forming the distal radioulnar joint.
- Laterally the distal end of the radius tapers abruptly
into a prominent pyramidal styloid process.
- The inferior surface of the distal end of the radius is
smooth and concave where it articulates with the wrist or carpal bones.
- Posteriorly there is a prominent dorsal
tubercle on the distal end of the radius.
Fractures of the Radius
- A fall on the outstretched hand may result in a fracture of the distal end of the radius.
- Sometimes there is also a fracture of the styloid process
of the ulna.
- In the common Colles' fracture,
the distal fragment of the radius is displaced posteriorly.
The result is the radial and ulnar styloid processes being
at approximately the same horizontal level, which is an
abnormal condition (dinner fork deformity).
The Ulna
- The ulna (L. elbow) is the longer bone of the forearm.
- This prismatic bone looks somewhat like a pipe wrench,
with the olecranon
resembling the upper jaw, the coronoid
process the lower jaw, and the trochlear notch the mouth.
- The olecranon and coronoid processes clasp the trochlea
of the humerus; somewhat like a pipe wrench clasps a pipe.
- The proximal "wrench-like" end of the ulna is
larger than the small, rounded distal end called the head.
- The lateral side of the coronoid
process has a small, shallow radial notch for the
disc-shaped head of the radius.
- Inferior to the radial notch is the triangular supinator fossa, which
provides an attachment for the supinator muscle.
This fossa is bounded posteriorly by a distinct supinator crest.
- The irregular anterior surface of the coronoid
process is rough and ends distally in a tuberosity onto which the brachialis, the
chief flexor muscle of the forearm, inserts.
- The body (shaft) of the
ulna is thick proximally. Its prominent lateral edge, the
interosseous border, is
where the interosseous membrane attaches.
- The small, slender distal end of the ulna has a rounded head and a conical styloid
process. The styloid process
projects distally, about 1 cm proximal to the styloid
process of the radius.
- The distal end of the ulna has a convex articular surface
on its lateral side for articulation with the ulnar notch
of the radius.
Bones of the Wrist and Hand
The Carpus
- The eight small bones of the wrist, called carpal bones, are referred to
collectively as the carpus.
- They are arranged in proximal and distal rows, each
containing four bones.
- The proximal row of carpal bones
(lateral to medial) consists of the scaphoid (navicular),
lunate, triquetrum, and pisiform.
- The boat-shaped scaphoid
is the largest bone of the proximal row and was given its
name because of its resemblance to a rowboat (G. scaphe).
- The lunate is moon-shaped (L. luna, moon).
- The pea-shaped pisiform
(L. pisum, pea) is included in the proximal row,
even though it is a sesamoid bone in the tendon of the flexor carpi
ulnaris muscle. The pisiform bone is a clinically
important landmark that is easily palpable.
- The distal row of carpal bones
(lateral to medial) consists of the trapezium, trapezoid,
capitate, and hamate.
- The hamate can be
identified by its prominent process, the hook of the hamate, which projects anteriorly.
- The capitate has a
rounded head (L. caput).
- The carpal bones articulate with each other at synovial intercarpal joints and are
bound together with ligaments to form a compact mass.
- The carpus has an anterior concavity known as the carpal groove (sulcus). The
groove is converted into an osseofibrous carpal tunnel (canal) by the flexor retinaculum, which is
attached to the scaphoid and trapezium laterally and to
the pisiform and the hook of the hamate bone medially.
- The carpal tunnel is filled with tendons and the median nerve.
- Compression of the median nerve in the carpal tunnel
produces the carpal tunnel syndrome.
Metacarpus (pp. 561, 565)
- The five metacarpal bones
are miniature long bones.
- They extend from the carpus (wrist) to the digits (thumb
and fingers) and are numbered from the lateral side.
- The first metacarpal is much shorter than the others.
- Although covered with tendons, the metacarpals can easily
be palpated throughout their whole length on the dorsum
of the hand.
- The heads of the metacarpals
are at their distal ends, where they articulate with the phalanges (bones of the digits).
- They form knuckles of the hand
that become visible when the fist is clenched.
- On the dorsal surface of each head is a small tubercle on
each side for attachment of collateral ligaments and
- The bodies (shafts) of the
metacarpals are slightly concave on their
medial and lateral sides, where the dorsal interosseous
muscles attach.
- The bases of the metacarpals
are arranged in a fan-shaped manner from the distal row
of carpal bones.
The Phalanges
- Each phalanx (bone of a digit) is a miniature long bone,
which consists of a body
(shaft), a larger proximal end or base,
and a smaller distal end or head.
- The thumb (first digit) has two phalanges (proximal and
distal) and each finger (second to fifth digits) has
three phalanges (proximal, middle and distal). The
phalanges in the first digit are shorter and broader than
those in the other digits.
- The proximal phalanges are the longest and the distal
ones are the shortest. | <urn:uuid:1b8704eb-2acf-402e-be87-baa4bae8d490> | CC-MAIN-2016-26 | http://download.videohelp.com/vitualis/med/uppbone.htm | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783394414.43/warc/CC-MAIN-20160624154954-00000-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.826201 | 3,018 | 3.65625 | 4 |
A look at the genealogy, history, folk art and archeology of the PA Germans and their gravestones, with German language translations, at the Ephrata Cloister.
Lilies and Crown at Ephrata - Lilies
A little north of Bergstrasse, in the same township in Lancaster Co., stands the Ephrata Cloister, a German Protestant religious settlement founded by Conrad Beissel during the Great Awakening of religious revivalism which swept the colonies in the 1730s and '40s, fed by German Pietists fleeing religious persecution in Europe. This group was drawn from the surrounding congregations of Dunker, Mennonite, Lutheran and Reformed German settlers. The main tenets advocated by Beissel were Saturday worship, celibacy, prayer, work, singing and vegetarianism. His sermons and rituals incorporated Old Testament teachings, Rosicrucian thought and alchemical practices, mixed in with basic Christian doctrine.5 The order thrived well into the 1770s, its fame reaching as far as Europe, where Voltaire mentions it in his "Dictionaire Philosophique".6
The Cloister proper was the home of the celibate "brothers" and "sisters", who wore distinctive habits and lived in separate buildings. Two stones, found in its rock-walled graveyard, are representative of the people, and beliefs, of this sect. The first stone is that of a young man named Friedrick Keller. Its carvings are fine examples of the Ephrata Cloister calligraphy which made their hand-illuminated hymn books and manuscripts famous. A large lily flanked by two small blooms tops the stone's front, below which Friedrick's life dates are given in Roman script. The back has two arches composed of running wedges that could symbolize the rays of the rising sun. There is a star at the apex of the upper arch, while the lower bows over a stylized flower. The verse below is beautifully done in fraktur script and fits well on the stone face. As translated by Dr. Leroy Hopkins of Millersville Univ. it reads:
Be still again my soul
The lily pictured on this stone was a favorite flower symbol of the sect and was used to represent Christ and/or one of his true believers.7 In 1760, Georg Adam Martin, a pietist and revivalist, visited the Cloister, and the sisters sang him the "Song of the Lilies." The following verse from that song would have fit just as well under the lily on Bruder Frederick's stone:
My life I would give it Forever to Thee,
Copyright ©1985-2005 Sandra J. Hardy. All rights reserved.
Those more interested in the genealogy, history, folk art and archeology of the PA Germans and their gravestones, with German language translations, at the Ephrata Cloister, see the Links Page. | <urn:uuid:5e40873f-9c1e-42ec-b7dd-6b0d6be3a7d0> | CC-MAIN-2016-26 | http://www.pagstones.com/pgs_cal_eph.page.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393997.50/warc/CC-MAIN-20160624154953-00131-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.956032 | 610 | 3.234375 | 3 |
Definition of opah
n. - A large oceanic fish (Lampris guttatus), inhabiting the Atlantic Ocean. It is remarkable for its brilliant colors, which are red, green, and blue, with tints of purple and gold, covered with round silvery spots. Called also king of the herrings.
The word "opah" uses 4 letters: A H O P.
No direct anagrams for opah found in this word list.
Words formed by adding one letter before or after opah (in bold), or to ahop in any order:
c - poach s - opahs
Shorter words found within opah:
ah ha hao hap ho hop oh op pa pah poh
List shorter words within opah, sorted by length
Words formed from any letters in opah, plus an optional blank or existing letter
List all words starting with opah, words containing opah or words ending with opah
All words formed from opah by changing one letter
Other words with the same letter pairs: op pa ah
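The lists above all come down to a simple letter-count check. As a rough illustration (not how this site actually computes its lists), the following Python sketch uses a small stand-in WORDS set to reproduce the "shorter words within" and "add one letter" lists for opah:

from collections import Counter

# Stand-in word list; a real tool would check against a full dictionary.
WORDS = {"ah", "ha", "hao", "hap", "ho", "hop", "oh", "op",
         "pa", "pah", "poh", "opah", "opahs", "poach"}

def formable(candidate: str, letters: str) -> bool:
    """True if candidate can be spelled from `letters`, each letter used at most once."""
    need, have = Counter(candidate), Counter(letters)
    return all(have[ch] >= n for ch, n in need.items())

# Shorter words found within "opah"
within = sorted(w for w in WORDS if w != "opah" and formable(w, "opah"))

# Words formed from all the letters of "opah" plus one extra letter, in any order
plus_one = sorted(w for w in WORDS
                  if len(w) == len("opah") + 1
                  and any(formable(w, "opah" + extra) for extra in "abcdefghijklmnopqrstuvwxyz"))

print(within)    # ['ah', 'ha', 'hao', 'hap', 'ho', 'hop', 'oh', 'op', 'pa', 'pah', 'poh']
print(plus_one)  # ['opahs', 'poach']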
Browse words starting with opah by next letter
Previous word in list: opacity
Next word in list: opahs
Some random words: roach | <urn:uuid:a0cc56e5-3b7d-489d-b6a3-d389e370643e> | CC-MAIN-2016-26 | http://www.morewords.com/word/opah | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783400031.51/warc/CC-MAIN-20160624155000-00005-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.818428 | 259 | 2.609375 | 3 |
INDONESIA - The Provincial Food Security & Vulnerability Atlas (FSVA) of Nusa Tenggara Barat (NTB) Province 2010
The collaboration between the Provincial Food Security Office (FSO) NTB and the United Nations World Food Programme (WFP) brings us the Provincial Food Security & Vulnerability Atlas (FSVA) of Nusa Tenggara Barat (NTB) Province 2010. The provincial FSVA covers 105 sub-districts in 8 rural districts and consolidates many variables across the aspects of food security, such as food availability, food access and distribution, and health and nutrition. The provincial FSVA serves as an important tool for decision making in targeting and developing recommendations for responding to food and nutrition insecurity at the district and sub-district levels.
The atlas analyzes 13 indicators related to food security for the period 2007-2009, and a composite analysis of 9 of them allows the FSVA to answer three key questions on food security and vulnerability: Where are the areas most vulnerable to food insecurity (by district and sub-district)? How many people are affected (estimated population)? And why are they more vulnerable (main determinants of food insecurity)?
The provincial FSVA are available in bilingual (Bahasa Indonesia and English). | <urn:uuid:1adf2bac-02dc-4d2f-80bb-0740ee2a4773> | CC-MAIN-2016-26 | http://m.wfp.org/content/indonesia-provincial-food-security-vulnerability-atlas-fsva-nusa-tenggara-barat-ntb-province?device=mobile | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396027.60/warc/CC-MAIN-20160624154956-00101-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.894734 | 253 | 2.828125 | 3 |
Under a moonless sky in the North Carolina mountains, a Democratic gubernatorial candidate named Terry Sanford stood on the steps of the Henderson County courthouse and made a proposal that seemed audacious for 1960. It had been a stinking hot May day, but the night was cooling rapidly, and 350 voters had shown up to hear this former FBI agent and World War II paratrooper describe his vision.
Sanford spoke slowly and deliberately. He warned that North Carolina’s economic growth was being stymied by a school system that ranked among the ten worst nationwide. “This is not good enough for my children or for yours,” he said that night. “We can do no less than to offer the individual child the educational opportunity to compete in today’s competitive world.” Better jobs required better schools, even if that meant raising taxes. “We cannot put our children in deep freeze,” waiting for the state’s tax base to grow, he said.
His detractors had been warning voters to guard their wallets from Sanford’s profligacy. “The primary need is not an outpouring of funds but a revival of learning,” said Sanford’s main opponent, a segregationist attorney named I. Beverly Lake. Another candidate, John Larkins, called Sanford’s education agenda “pure tommy-rot,” warning that it consisted of “cure-all programs, many of which have dubious merit and all of which are expensive.” Voters disagreed. That November they elected Sanford as their governor, then watched as he launched an ambitious campaign to modernize North Carolina.
During his single, constitutionally limited term, from 1961 through 1965, Sanford persuaded the legislature to levy a tax on food and medicine sales, then used the revenues to hire 2,800 new teachers and raise their pay by more than one-fifth. He helped consolidate the state’s university system and build a network of community colleges. He founded the North Carolina Fund, a five-year effort to eradicate poverty and discrimination. He took a measured approach to desegregation at a time when other Southern governors were calling for resistance. Many historians and policy experts say that Sanford—who later became Duke University’s president, serving from 1970 to 1985—helped set in motion a moderate bipartisan consensus that, over the past half century, has fostered a robust and stable business climate.
"Terry Sanford, as much as anybody, helped create the North Carolina brand. This was a clear marking point where North Carolina emerged on a different path than the rest of the South.”
That consensus held until 2010, when voters elected a Republican legislature committed to dramatically overhauling state policy. Then, in 2012, they elected former Charlotte Mayor Pat McCrory as governor, giving the GOP its first lock on state government since 1870. The new majority has made broad changes to tax policy, school funding, and social-welfare programs; loosened regulations on businesses; expanded gun owners’ rights; and passed new restrictions on voter registration and poll access.
“There’s almost nothing the legislature did that doesn’t have a precedent in some other state or country,” says John Hood, president of the John Locke Foundation, North Carolina’s most influential conservative think tank. “What was truly unprecedented was action on all of those issues in one year.” Hood calls the sweep “spectacular” and says it was “based on the best available empirical data about what makes state economies prosper.”
Critics don’t believe the shift has been data-driven at all and fear it will harm both commerce and social and economic equity. Throughout the 2013 session, North Carolinians descended on the Legislative Building in Raleigh for a series of exuberant and peaceful protests, known as Moral Mondays, which garnered international headlines and more than 900 arrests. Within the Duke community, where Sanford casts a long shadow fifteen years after his death, some faculty members and alumni describe the rightward turn as a deliberate dismantling of Sanford’s legacy.
“TERRY SANFORD'S a hero of mine, but he wouldn’t want me to tell you he was a saint,” says Pope “Mac” McCorkle III J.D. ’84, director of graduate studies for the Master of Public Policy program at Duke’s Sanford School of Public Policy. At his idealistic best, Sanford envisioned a future in which the South would shed its reputation as a moral and economic drag on the country. But he also knew that winning elections required circumspection. During the 1960 Democratic primary battle against Lake—who had defended North Carolina’s single-race schools during the arguing of Brown v. Board of Education—Sanford offered himself as a more modulated supporter of segregation. “It was not a time to be a purist,” he told William Chafe, now the Alice Mary Baldwin Professor of history emeritus, around 1975. “I was trying to keep the banner flying, but I was trying to mute it enough so that I didn’t get slaughtered on pure principle.”
If Lake hovered to Sanford’s right, on his left were the civil rights activists who found their collective voice first at the Greensboro Woolworth’s lunch-counter sit-ins in February 1960 and later at demonstrations throughout North Carolina. Sanford didn’t like the protests; he preferred that enlightened leaders like himself quietly enact reforms. But once he became governor, the protesters lent him political cover as he set out to tackle issues involving race, poverty, and education. “The Greensboro sit-ins liberated Terry Sanford,” says Chafe. “They changed the terrain. Moderation becomes different.”
Sanford knew there were still compromises to be made. Investing in public schools meant imposing the only tax to which North Carolina’s large landowners, high-wage earners, and tobacco executives would consent: a regressive sales tax on food and nonprescription medicine. But Sanford calculated that better education would help the poor more than the extra pennies on each food dollar would hurt them.
Sanford also wanted to address other root causes of the state's 37 percent poverty rate, from racial bias to low industrial wages. "He realized the poverty that he saw all around him, from the mountains to the coast, was going to hold the state back," says Robert Korstad, the Kevin D. Gorter Professor of public policy and history at Duke. Sanford knew the legislature wouldn't allocate a penny to these efforts. So he persuaded the Ford Foundation, along with the North Carolina-based Z. Smith Reynolds and Mary Reynolds Babcock foundations, to finance the North Carolina Fund, a precursor to President Lyndon Johnson's War on Poverty. (It received federal dollars, too.) Led by a board drawn from the state's bankers, industrialists, and educators, the fund was best known for sending racially mixed teams of volunteer college students into low-income communities. But it evolved to support efforts to organize poor North Carolinians, black and white, to advocate for themselves. And it spun off organizations that focused on job training, rural development, and affordable housing. "That's really the apogee of progressivism in North Carolina," says Korstad, who coauthored a book about the fund called To Right These Wrongs.
"TERRY SANFORD, as much as anybody, helped create the North Carolina brand,” says John Drescher M.P.P. ’88, executive editor of The News & Observer in Raleigh and author of Triumph of Good Will, a chronicle of the 1960 gubernatorial race. “This was a clear marking point where North Carolina emerged on a different path than the rest of the South.” Sanford’s legacy endured most visibly in the area of public education. Republican Governor Jim Holshouser expanded kindergarten statewide during the 1970s. Democrat Jim Hunt began an early-childhood initiative called Smart Start in 1993. Eight years later, Democrat Mike Easley championed More at Four, an academic pre-kindergarten for at-risk children. Measured by teacher pay and student-teacher ratios, North Carolina stayed in the middle of the national pack but ahead of most of its Southern neighbors. Education fueled economic expansion—witness the tech and pharmaceutical sectors in Research Triangle Park and the banking industry in Charlotte—which in turn bolstered school spending without major tax hikes. “It was a virtuous circle,” says McCorkle.
North Carolina took a leadership role on other issues, too, ranging from coastal protection to fairness in criminal sentencing. And it expanded access to the polls through policies like early voting (with same-day registration) and preregistration for sixteen- and seventeen-year-olds.
This was hardly a straight-line path. Sanford’s agenda produced a backlash that elected a conservative successor, Democrat Dan Moore, as governor in 1964. And in federal elections, North Carolinians have wandered all over the ideological map, most notably sending one of the nation’s most rock-ribbed civil rights opponents, Republican Jesse Helms, to the U.S. Senate from 1973 until 2003.
“We can’t over-mythologize the moderate nature of North Carolina,” says the Reverend William Barber II M.Div. ’89, state president of the NAACP—noting for example, that five rural school districts had to sue the state in the 1990s for adequate funding. “And yet, when I’ve traveled south, people in Mississippi [and] Alabama would always say, ‘We’re looking to North Carolina,’ in terms of our universities and the Research Triangle Park and all of those things that would not be possible if North Carolina had not taken some deliberate steps away from the philosophy of the segregated South.”
Not everyone shares this narrative linking prosperity to moderate politics and activist government. “North Carolina’s economic history is not an uninterrupted climb until 2007, when suddenly we fell,” says Locke’s John Hood. He notes that the past half century has been filled with peaks and dips, which can be attributed to factors ranging from state highway spending to international manufacturing trends. Hood also says that Texas and Virginia have developed strong economies with more conservative governance. “To suggest that a Southern state cannot make progress unless it has a moderate-to-liberal Democratic political culture would strike the rest of the South as parochial,” he says. “They would pat you on the arm and say, ‘That’s very nice.’ ”
It took a confluence of factors to set the moderate consensus crumbling recently. North Carolina’s Democrats, who dominated politics for a century, fell into disarray. Governor Easley, House Speaker Jim Black, and Agriculture Secretary Meg Scott Phipps were all criminally convicted in separate corruption scandals. The party had lost considerable credibility by 2009, when Governor Bev Perdue discovered that the Great Recession had ground the “virtuous circle” to a halt. “When Bev got shellacked with a budget that said, just to keep pace, with some cuts, we’re going to have to raise taxes $1 billion, North Carolina was not ready for that,” McCorkle says.
Meanwhile, Democrats had done little to cultivate fresh leaders. “Terry always had young people around him, giving them influence, talking with them, working with them,” says Korstad. “The Democratic Party had lost its ability to perpetuate itself.”
At the same time, conservatives were creating a brain trust—groups like the John Locke Foundation, Civitas Institute, and North Carolina Institute for Constitutional Law—funded in part by the family fortune of lawyer, retailer, and former state legislator Art Pope J.D. ’81, who is now state budget director. “[We were] building out a policy infrastructure in response to what the left had already done with greater amounts of money and more organizations,” says Hood. (Indeed, Sanford’s North Carolina Fund helped turn the Reynolds foundations into major funders of social-justice and community-development organizations.)
In 2010, this conservative infrastructure was able to seize on the public’s economic despair and diminished faith in its leaders. “The opportunity was created by events,” Hood says. “But the ability to respond was absolutely the result of years of investment and years of planning.”
THE NEW LEGISLATURE'S most direct confrontation with the Sanford legacy came in the area of K-12 school funding. Its $7.868 billion appropriation for 2013-14 represents a $117 million cut from the “base budget,” which is defined as what’s “necessary to continue the current level of services.” (Budget director Pope disputes that funding was cut, saying the base budget is “based on preliminary information and arcane budget rules.”) The legislature reduced funding for teacher assistants by 20 percent and eliminated bonuses for future teachers with master’s degrees. The Locke Foundation has opposed both of these budget items, saying they don’t demonstrably boost student achievement.
The new budget also created a voucher system for low-income families to send their children to private schools at taxpayer expense. “Too many minority children are lagging academically,” wrote Bob Luebke, senior policy analyst with the Civitas Institute, in a June 2013 column. “Many of these children are trapped in schools that are struggling or failing or don’t fit their needs.” Vouchers, he wrote, provide families “the ability to choose the type of school that is best for their child.”
Critics put a harsher spin on the $10 million voucher program. “They are paying people to leave the public schools,” says historian Tim Tyson Ph.D. ’94, senior research scholar at Duke’s Center for Documentary Studies. If Sanford were alive today, “he would be cutting them a new”—Tyson pauses here—“angle of vision. He would be serving it up red hot.”
The two sides disagree on what the budget will mean for the number of teachers in North Carolina classrooms. Pope says the final figures will not come out until February, but that “based on our estimates, there is sufficient budgeting to hire more teachers per student this year than last.” The state Department of Public Instruction, by contrast, estimates that 5,200 positions will be lost because the legislature altered the student-teacher ratio used for hiring. “We have started a spiral where we are slowly starving our public schools,” State Superintendent June Atkinson, a Democrat, told a television reporter in August.
The legislature also cut unemployment insurance; rejected a federally funded Medicare expansion; repealed the Racial Justice Act, which gives relief to death-row inmates who can prove that race influenced their prosecutions; passed abortion restrictions that will limit insurance coverage for some women and tighten licensure requirements for clinics; and expanded the venues where permit holders can carry concealed weapons, including playgrounds and funeral processions. It lowered the corporate income-tax rate and let expire the earned-income credit for low-paid workers.
Few measures stirred more discussion than the one dialing back North Carolina’s expansive voting policies. The new law reduces the number of early-voting days, requires voters to show government-issued IDs at the polls (college IDs don’t count), ends same-day registration and youth pre-registration, makes it easier to challenge a voter’s eligibility, and bans local election boards from extending polling hours because of extraordinary circumstances, like long lines. Defenders call the law, especially its photo-ID provision, an anti-fraud measure; the State Board of Elections documented two cases of voter impersonation between 2000 and 2012. “Part of the problem is it’s hard to detect voter fraud when there’s such loose standards,” says Pope.
Opponents call the fraud argument a smokescreen, arguing that the law is intended to reduce turnout among more liberal constituencies. Take early voting, for example. “Black American churches, where I’m from in North Carolina, during election season we have an abbreviated Sunday service and have buses that transport folks who otherwise would not have transportation to the polls,” says Jay Pearson, assistant professor of public policy at Duke. Extended voting also benefits workers with inflexible schedules. Curtail the number of days that the polls are open, he says, and “you have an institutionalized mechanism that has been altered, systematically disenfranchising working-class, blue-collar folks.”
Pearson argues that, as governor, Sanford recognized that poverty stemmed from “structural inequality,” rather than individual failings and understood how the machinery of government could be mobilized to give poor people power. The new majority, he says, understands how the machinery of government can be used to take that power away.
HISTORIAN CHAFE, who was arrested during a Moral Monday demonstration in May, sees a connection between Sanford’s governorship and the 2013 protests. “How would Terry handle it?” he asks of the rightward shift. “I think Terry would probably try to rally his network of business leaders and political elites and create some kind of official opposition. The Democrats have not known what the hell to do. Instead, those of us who are protesting and getting arrested are taking the place of what Terry would have done.”
“But the fact that these demonstrations are respectful and controlled and ‘moderate’ gives you some sense that that Sanford tradition is still in place,” Chafe adds. “People are not fighting the police. They are not aggressively transgressing the boundaries that have been established. These are polite protests.”
Like Chafe, others in the Duke community who knew Sanford wonder how he would have responded to a wholesale undoing of his policies. McCorkle, who worked closely with Sanford after graduating from law school, believes the former governor would have invested his energy developing new leaders to recapture power.
“It would be very clear to him: Go young, and go diverse,” McCorkle says. “He would be counseling people: Step aside. Be the elder statesmen. But bring in the young. They’re going to make mistakes, but they’re the future.”
And Tyson, who also got arrested during a Moral Monday protest, believes Sanford would reach out to the twenty-first-century demonstrators, just as he did in the 1960s to civil rights activists like North Carolina A&T student-body president Jesse Jackson. “Sanford would have immediately sent out trays of sandwiches and urns of coffee, and maybe deviled eggs—there would be a little Southern touch to it—and said, ‘Come, let us reason together,’ ” Tyson says. “Without clogging the engine of the movement, he would have tried to get it tied to a crankshaft that was going to do something positive and powerful.”
During the 1960s, protesters made Sanford uneasy. But Tyson believes the former governor would have appreciated today’s racially diverse expressions of outrage. “We are the embodiment of the values he tried to advance in this state,” the historian says. “I think he would say, ‘At long last. At long last. My people.’ ”
— Yeoman is a journalist based in Durham. His recent work has been published in OnEarth, Audubon, The American Prospect, Parade and The Saturday Evening Post. | <urn:uuid:d35bcde8-52a2-4e29-b32f-e0d334b09e13> | CC-MAIN-2016-26 | http://dukemagazine.duke.edu/article/end-moderation | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396224.52/warc/CC-MAIN-20160624154956-00151-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.968728 | 4,234 | 2.640625 | 3 |
WASHINGTON (Reuters) - Fewer African Americans are dying from cancer, but compared with white Americans their length of survival is shorter and the fatality rate is still far higher, according to a report released on Tuesday.
The conclusions were part of a new American Cancer Society report on African-Americans and cancer.
While there are many reasons for the racial disparity, the main cause is that a larger proportion of African Americans are poor, American Cancer Society chief medical officer Otis Brawley said in a statement.
"African Americans are disproportionately represented in lower socioeconomic groups," he said. "People with lower socioeconomic status have higher cancer death rates."
Tim Byers, a doctor from the Colorado School of Public Health, published research in 2008 that shows the link between socioeconomic status and cancer. Those of lower socioeconomic status who were diagnosed with cancer were 35 percent more likely to die.
The most common form of cancer among African American males is prostate cancer, accounting for 40 percent of cases, followed by lung cancer at 15 percent and colon and rectal cancer at 9 percent.
For African-American women, breast cancer is the most common, with lung cancer second at 13 percent and colon and rectal cancer at 11 percent.
Compared to whites, death rates were 32 percent higher among African-American men and 16 percent higher among African-American women in 2007, the last year measured.
The study said that the reduction in smoking-related cancer was due to the fact that the percentage of African-American men who smoke has fallen faster than that of their white counterparts.
There are expected to be 168,900 new cancer cases and 65,540 cancer deaths among African Americans in 2011, the study said.
SOURCE: http://bit.ly/g1mlbg American Cancer Society | <urn:uuid:907b190d-683e-4545-8cc5-fa5f212f7919> | CC-MAIN-2016-26 | http://www.lifescript.com/health/centers/crohns/news/2011/02/01/african_americans_have_higher_cancer_fatality_rate.aspx | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395346.72/warc/CC-MAIN-20160624154955-00199-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.966092 | 356 | 2.671875 | 3 |
A major social-cognitive achievement of young children is the understanding that other people act on the basis of their own representations of reality rather than on the basis of reality itself. Developmental psychologists have explored the refinement of mental-state reasoning in children, typically by measuring their ability to pass false-belief tasks, such as the example above. Yet previous research has only been conducted in Western cultures, where children pass such tests around the age of 5. New research reveals that children reach this false-belief milestone at about the same age the world over.
The findings appear in the report, "Synchrony in the Onset of Mental-State Reasoning: Evidence from Five Cultures," published in the May 2005 issue of Psychological Science, a journal of the American Psychological Society. Researchers Tara Callaghan, St. Francis Xavier University; Mary Louise Claux, Catholic University of Peru; Shoji Itakura, Kyoto University; Angeline Lillard, University of Virginia; Hal Odden, Emory University; Philippe Rochat, Emory University; Saraswati Singh, M.K.P. College; and Sombat Tapanya, Chiang Mai University, tested the false-belief understanding of children in Canada, India, Peru, Samoa, and Thailand.
The test group consisted of 267 children, approximately 50 from each country, ranging from 30 to 72 months in age. The false-belief task involved the following test: One experimenter hid a trinket such
Contact: Tara Callaghan
Association for Psychological Science | <urn:uuid:f6a446ee-c0fb-4cc1-99ef-1bdcc2a15d9a> | CC-MAIN-2016-26 | http://news.bio-medicine.org/medicine-news-3/Mental-state-reasoning-is-universal-milestone-in-child-development-8845-1/ | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783408828.55/warc/CC-MAIN-20160624155008-00151-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.89166 | 315 | 3.125 | 3 |
University of Arizona
The Idea: For years, scientists have been trying to figure out a way to eradicate one of the world's biggest health problems: malaria. Malaria medicine exists, but is not always available nor is it completely foolproof (not to mention how inconvenient it is to have to swallow all those pills). Mosquito nets are a solution, as well, but with the nets come the difficulties of distribution.
The best solution is to eliminate malaria altogether, killing it at the onset before giving it a chance to spread. A new genetically engineered mosquito has come close to meeting this task.
The key? It is completely - yes, completely - immune to the Plasmodium parasite, the agent that causes malaria when mosquitoes transmit it through their bite. Although it will take some time to release this mosquito into the wild, the first crucial step - removing the malaria-spreading capabilities from the mosquito - has been taken.
Whose idea: scientists at the University of Arizona
Why it's a brilliant idea: Malaria afflicts 250 million people a year, and about 1 million of those cases are fatal. This new malaria-proof mosquito should make a huge dent in those numbers when, ideally, within ten years, it replaces the malaria-carrying, havoc-wreaking mosquito already out there. Rather than just reducing the chances of malaria spreading, or simply counteracting when it does spread, this new genetically engineered mosquito will catch and entirely wipe out the malaria from its origin.
The malaria-proof mosquito shows that health and medicine-focused science has come a long way and is truly something to invest time and money in.
Have a million dollar idea of your own? Send it to firstname.lastname@example.org and see if it stands up to our critical readers. Just be sure to include your name and a photo of yourself, or your idea, in the email. | <urn:uuid:df51dac4-fb2f-48f1-bac5-d61c6ad8222c> | CC-MAIN-2016-26 | http://www.businessinsider.com/million-dollar-idea-mosquito-resistant-to-malaria-2010-12 | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393093.59/warc/CC-MAIN-20160624154953-00047-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.936013 | 382 | 3.21875 | 3 |
Ant networks could be more complex than Google algorithms
When ants raid a family picnic to pick away at leftovers, it might look like pure chaos. But, that's really not the case. (Flickr / Dawn Camp)
In fact, it's the opposite. And, new research suggests the tiny crawlers' rank-and-file methods might be so good, so systematic and efficient, that they beat out perhaps our most prolific Internet search engine.
A new study published in the Proceedings of the National Academy of Sciences claims the ants devise "highly complex networks" to collect food that are "far more efficient" than all the algorithms and formulas used by Google - yes, Google - to shoot out its search results. (Via Potsdam Institute for Climate Research)
A study co-author told The Independent: "I'd go so far as to say that the learning strategy involved in that, is more accurate and complex than a Google search. These insects are, without doubt, more efficient than Google in processing information about their surroundings."
Still, an individual ant is not the smartest creature. The magic happens when they come together. Here's how the little buggers do it.
First, a single ant takes whatever crumb it finds back to its nest, but, in doing so, leaves behind a trail of pheromones that marks the path. Then, other ants follow suit, with more pheromones released, creating a more refined path. Which then attracts even more ants. Lather, rinse and repeat, and pretty soon an optimal path is forged between food source and their nest. (Via YouTube / Karl Westworth, politplatschquatsch, yannigk)
The study had a second finding as well - that the age of the ant made a difference in how well it could track down food.
As Time explains, older ants have more experience, and, therefore, better street smarts, allowing them to create those optimal pathways to food more easily even though younger ants might move quicker. The younglings then just kind of learn what they can and wait their turn.
Changes in climate are transforming our planet. To adapt, we must rethink traditional approaches to conservation and development, moving beyond managing for persistence to managing for change. Climate change adaptation—the process of adjusting to the changing climate and its cascading impacts—seeks to reduce the vulnerability and build the resilience of people and nature to the current and anticipated effects of climate change while managing the uncertainties of the future.
Crowdsourcing is a way to find solutions to problems by asking a large group of people to contribute information, ideas, data, and content about a certain idea. WWF is using this tool to address knowledge gaps about climate change, and help implement solutions.
Climate change poses new challenges to conservation. We are committed to promoting far-reaching and aggressive reductions in greenhouse gas emissions while in equal measure helping communities, companies, governments and international institutions anticipate and adapt to climate change. WWF’s adaptation and resilience program works with a wide range of partners to accomplish three outcomes:
manage the uncertainties of climate change at different scales;
reduce social and environmental risk and vulnerability to multiple hazards; and,
increase the social, ecological and institutional resilience of our many partners.
We develop tools to assess and map climate vulnerability and build capacity among WWF field staff and partners to develop climate-smart approaches to conservation. We have developed a trait-based wildlife vulnerability assessment to update action plans for WWF priority species, and Flowing Forward, a participatory framework that helps landscape stakeholders assess the vulnerability of their surrounding ecosystems to the combined impacts of climate change and economic development.
We are working with leading humanitarian organizations and governments to provide advice and training on better practices for integrating the environment in disaster response and building resiliency for communities impacted by or at risk to disasters.
We are exploring the connections between Snow leopard habitat and water provision throughout Asia’s major mountain ranges, working to improve watershed management and enhance the resilience of local communities to the impacts of climate change.
ADVANCE is a partnership between WWF and the Columbia University Center for Climate Systems Research (CCSR) at The Earth Institute. Launched in 2015, ADVANCE facilitates adaptation by providing new ways of generating and integrating climate risk information into conservation and development planning, policies and practice. | <urn:uuid:6884a8fc-b3f4-4f9d-89ca-c348c5f19b25> | CC-MAIN-2016-26 | https://www.worldwildlife.org/initiatives/adapting-to-climate-change | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397749.89/warc/CC-MAIN-20160624154957-00185-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.914415 | 456 | 2.921875 | 3 |
Tell-tale Clues To A 335-year-old Mystery Spotted In Cassini Images
MEDIA RELATIONS OFFICE CASSINI IMAGING CENTRAL LABORATORY FOR OPERATIONS (CICLOPS) SPACE SCIENCE INSTITUTE, BOULDER, COLORADO http://ciclops.org
Preston Dyches (720) 974-5859 CICLOPS/Space Science Institute, Boulder, Colo.
For Immediate Release: Oct. 8, 2007
TELL-TALE CLUES TO A 335-YEAR-OLD MYSTERY SPOTTED IN CASSINI IMAGES
The appearance of two-toned Iapetus has been deeply mystifying ever since the moon was first discovered by Jean-Dominique Cassini in the late seventeenth century.
Now, high-resolution images of Iapetus recently acquired by the spacecraft named after the Italian/French astronomer during its low pass over the moon last month have uncovered telling details on the moon's surface that may well yield the reason for its strange bright and dark patterns.
The images show that on the moon's bright trailing hemisphere, especially in the equatorial regions, dark material tends to coat the equator-facing slopes of ridges and crater walls and also many crater floors. This finding strongly suggests the warming action of the Sun in removing bright ice from these sunward-facing surfaces and leaving behind the native dark material that is normally mixed with the ice. Subsequent downslope motion is very likely responsible for collecting much of the dark material in the floors of craters and other low lying regions.
"This is somewhat reminiscent of the vineyards in Germany," said Tilmann Denk, imaging team associate at the Free University in Berlin, Germany and an expert on Iapetus. "The grapes get more sunlight when the vine is planted on a south-facing slope. The same mechanism works on Iapetus: the equator-facing slopes get more sunlight, and the bright ice there evaporates, leaving behind the darker stuff."
In this particular characteristic, Iapetus is similar to another moon in the Saturn system with large surface contrasts, Hyperion. "The craters on Hyperion have dark floors, probably for a similar reason, but with a twist," said Paul Helfenstein, an imaging team associate at Cornell University and an icy satellite expert. "Sunlight also warms the surface, but on Hyperion, the terrain is so rugged that it is believed all dark material moves downward to collect on crater floors."
The fact that this process of thermal segregation is so clearly operating on the bright face of Iapetus lends confidence to the two-part suggestion--the first half of which was made by scientists thirty years ago and the second half made more recently by Cassini scientists--that the infall of a thin coating of dark material onto Iapetus' leading side long ago initiated a runaway version of thermal segregation there. With a coating of dark material scooped up by the moon in its orbit around Saturn, all surfaces on the entire leading hemisphere except at high latitudes, regardless of the direction to the sun, became warm enough to evaporate the ice beneath. Once warmed, evaporation proceeded even more quickly until all the surface ice was gone.
The result: a layer of dark material, consisting of both foreign and native material, coating most of the leading hemisphere. The differing colors of the leading and trailing hemispheres on Iapetus observed in Cassini images indicate slight differences in composition, as would be expected if the leading side also had mixed in with it material that derived from elsewhere in the Saturn system. The origin of this foreign material remains a mystery, but potential candidates are the small moons at large distances from Saturn or a previously existing outer moon that was broken apart long ago.
Observations of very small bright craters, seen for the first time in the recent Cassini images, point to impactors that punched through the dark upper layer to the bright ice beneath and reveal that the layer is no more than a few meters thick.
In addition to the new revelations about the moon's brightness asymmetry, the recent Cassini images revealed that the tall equatorial ridge bisecting Iapetus' leading side appears to be a competent structure, and most likely tectonic in origin. They also showed, for the first time, giant impact basins on the trailing hemisphere. Enormous basins had previously been observed by Cassini on the leading side, but the new images confirm that the cratering record is similar across the entire surface and that the surface is very old.
Surveying the surface of Iapetus, and determining the origin of the moon's peculiar asymmetry in brightness were two of the key science objectives for this international mission--two that can now be essentially checked off as "done."
"While there are many details yet to be worked out, we think we now understand the essence of why Iapetus looks the way it does," said Carolyn Porco, the leader of the imaging team. "And this discovery too will go down as a major legacy of Cassini's historic exploration of Saturn."
The Cassini-Huygens mission is a cooperative project of NASA, the European Space Agency and the Italian Space Agency. The Jet Propulsion Laboratory (JPL), a division of the California Institute of Technology in Pasadena, manages the Cassini-Huygens mission for NASA's Science Mission Directorate, Washington. The Cassini orbiter and its two onboard cameras were designed, developed and assembled at JPL. The imaging team consists of scientists from the U.S., England, France, and Germany. The imaging operations center and team leader (Dr. C. Porco) are based at the Space Science Institute in Boulder, Colo. | <urn:uuid:dff5d5b8-6114-4dcd-aa66-6287450e1468> | CC-MAIN-2016-26 | http://ciclops.org/view.php?id=3810 | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399106.96/warc/CC-MAIN-20160624154959-00195-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.941492 | 1,193 | 3.34375 | 3 |
It started with a three-part question: What’s the URL to run the DB Console for Oracle Database 11gR2 on Windows 7, what’s the ORACLE_UNQNAME, and why isn’t it defined by the installation? The first part is easy (shown further below), but the second and third parts were more involved.
ORACLE_UNQNAME is an operating system environment variable that holds the database’s unique name value. You can find it with the following query as the SYSTEM user (through SQL*Plus):
SELECT name, db_unique_name FROM v$database;
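On a stock single-instance installation the two values typically differ only in letter case. The output looks something like the following sketch (it assumes the default orcl database used in the examples below; your unique name may differ):

NAME      DB_UNIQUE_NAME
--------- ------------------------------
ORCL      orcl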
By the way, it’s not set as a Windows environment variable by default. You would need to do that manually (an example of setting an environment variable is here). The Oracle Universal Installer (OUI) actually used it to configure the already running DB Console service (with a successful installation). Once there, it didn’t need to set it as a system-level environment variable.
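If you would rather persist the values across sessions than set them each time, a minimal sketch on Windows uses the setx utility. This assumes the localhost host name and orcl unique name used throughout the examples below; note that setx writes the user environment, so it only affects command shells opened afterward:

C:\> rem Persist the values for future command shells (not the current one).
C:\> setx ORACLE_HOSTNAME localhost
C:\> setx ORACLE_UNQNAME orcl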
You may be wondering what generated the question if there’s already a configured service. You encounter the error when dropping down to the command line. First, you verify that the DB Console port is listening with this command:
C:\> netstat -an | findstr /C:1158
  TCP    0.0.0.0:1158    0.0.0.0:0    LISTENING
While this blog discusses the hard way to determine whether the DB Console is running, you can simply open Windows Services and check the service's status there.
You can see your Windows services by typing services.msc in the Start->Run Command field. That way you don’t need to navigate the various links that differ between Windows releases.
Many know that you can check the status of the running DB Console with the emctl utility at the command line. It lets you find the URL that you should enter for the DB Console in a browser. This is where users encounter the problem with the %ORACLE_UNQNAME% environment variable ($ORACLE_UNQNAME on Linux or Unix).
For example, running the following command raises an error that instructs you to set the %ORACLE_UNQNAME% environment variable, although it leaves many wondering what the right value is.
C:\> emctl status dbconsole
Environment variable ORACLE_UNQNAME not defined. Please set ORACLE_UNQNAME to database unique name.
If you object to using Windows Services to start and stop the OEM tool, you can do it at the command line like the status example above. Having set the environment variables, you can start the DB Console with this command-line syntax:
C:\> emctl start dbconsole
Having set the environment variables, you can stop the DB console with this command-line syntax:
C:\> emctl stop dbconsole
It’s not hard to find this information when you know how. While the error message complains about one environment variable, there are actually two environment values you need to set. They are ORACLE_HOSTNAME and ORACLE_UNQNAME.
You can find them by navigating to the %ORACLE_HOME%\oc4j\j2ee\ folder (or directory). The name of the DB Console folder there tells you the values for these environment variables because they’re embedded in the folder’s name. A snapshot from Windows Explorer shows them both.
You can set these environment variables as shown below in the Windows command shell (Linux or Unix users should use a terminal), and then successfully run emctl from the command line.
C:\>set ORACLE_HOSTNAME=localhost
C:\>set ORACLE_UNQNAME=orcl
C:\>emctl status dbconsole
Oracle Enterprise Manager 11g Database Control Release 11.2.0.1.0
Copyright (c) 1996, 2010 Oracle Corporation.  All rights reserved.
https://localhost:1158/em/console/aboutApplication
Oracle Enterprise Manager 11g is running.
------------------------------------------------------------------
Logs are generated in directory C:\app\McLaughlinM\product\11.2.0\dbhome_1/localhost_orcl/sysman/log
If you’re using Linux or Unix, the export commands differ. You can check this other post for those; they’re under step 8 in that post.
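For reference, a minimal bash equivalent would look something like this. It is only a sketch that assumes the same localhost host name and orcl unique name, and that $ORACLE_HOME/bin is already on your PATH; substitute your own values:

# Set the two values for the current shell session, then check the DB Console.
$ export ORACLE_HOSTNAME=localhost
$ export ORACLE_UNQNAME=orcl
$ emctl status dbconsole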
You then enter the following URL in a browser to use the newly installed DB Console:

https://localhost:1158/em
The browser will prompt you with a security warning like the following:
Click the Add Exception button and you’ll see the following Windows dialog.
Having granted the exception, you arrive at the following credential web page. Connect as SYSDBA using the SYS user’s account when you require extraordinary privileges; doing so shows a security risk in the console. You should generally connect as the SYSTEM user with NORMAL access, as shown below.
The following home page appears after your credentials are validated.
Hope that helps those trying to sort out running the DB Console and finding the magic %ORACLE_UNQNAME% value. Check this other blog post for instructions to reconfigure OEM.
Effects of management practices on the ground beetle assemblages of grassland and related habitats (Coleoptera: Carabidae).
PhD thesis, University of Glasgow.
In a comparison of grassland, moorland and woodland habitats in north-east England, moorland sites were found to be the most diverse and species-rich and to support a carabid fauna of larger body size than grassland sites. Within the grassland sites, intensification of management resulted in a reduction both in species richness and in body size. The species composition of intensively managed sites differed from that of the less intensive, with management appearing to favour species associated with drier conditions.
Similarly, a study of data from 110 sets of pitfall traps in managed and unmanaged grassland in Scotland found a general reduction in diversity, rarity and body size as management intensified, with silage fields having especially low values of WML. Diversity and rarity fell sharply between the second and third levels of management. Multivariate analysis of the species composition also made a clear distinction between these levels, grouping sites in bands 1 and 2 separately from those in bands 3 to 5. A more detailed examination of the effects of the different components of management found that body size was dependent mostly on the type and age of the sward, while diversity and rarity responded to nutrient inputs.
In a subset of 36 of the 110 Scottish sites, the carabid assemblages of sown wildflower swards, sown grass and clover, and uncultivated grassland were compared. Body size, species richness and diversity were all highest in the unmanaged swards, and species richness and diversity were higher in wildflower swards than in sown grasses. The effects of organic nutrient input were investigated at sites receiving input of slurry, sewage sludge or faecal material from flocks of grazing geese, but no significant relationships could be elucidated due to overwhelming effects of sward type and management intensity.
Dear Mr. Henshaw, a Newbery medal-winning book by Beverly Cleary, is a great way to get students to think about some of the therapeutic benefits of writing. Of course, you don’t have to mention how helpful writing can be when you need to sort out feelings but you can let students figure this out on their own as they read the book.
Leigh Botts writes to his favorite author, Mr. Henshaw, as part of a school assignment, and when the author writes back and asks Leigh questions, his mother says he has to respond. Through his correspondence with Mr. Henshaw, Leigh learns about accepting life’s difficulties and — with the encouragement of Mr. Henshaw — starts to keep a journal.
In addition to coping with his parents’ divorce and missing his father, Leigh also deals with moving, adjusting to a new school, and having his lunch continually stolen — certainly timeless topics.
While some children may not think of writing letters to an author, they may keep a journal or know someone who keeps one. There are a lot of projects that can be added to the study of this book, including writing letters or journal entries as one of the characters. Students could also write to offer advice to the characters. Introducing students to the basic format of a personal letter (or e-mail) will provide valuable experience.
Mr. Henshaw certainly proves to be more interesting (and interested) that Leigh probably imagined. Reading this book could also foster discussion about the kinds of people your students admire (authors, celebrities, athletes) and what makes a person worthy of admiration. Ask if there are any local, “hometown heroes” that your students admire in addition to people who are nationally or internationally famous.
One of the many takeaways from the book for adults is that adults encourage Leigh to write, and while he is hesitant at first, writing grows on him. Students who would not write on their own may learn to enjoy it more if a teacher or parent lays the groundwork for them to get comfortable first.
Research Poster Format
The poster presentations provide a forum for you to present your experiences in either research projects or internship. This serves a two-fold purpose in communication: it allows you the opportunity to present your project/internship and it allows the content of your experience to be shared with other students and faculty. In order to fulfill this purpose, the poster should be legible from a distance (~ 5 feet) to permit viewing by more than one person at a time. The guidelines below will give you some of the criteria which will increase the clarity and effectiveness of your poster. There are trifold poster boards available for display. These boards are 48” x 36”. (See picture below.)
B) General Format
Use large type -- at least 72 point font for the title, 20 point for major headings and 16 point for the text. In this document “Poster Format” is in 18 point bold font, the major headings are in 16 point bold font, the A,B,C headings are in 14 point bold font, and the text is in 12 point font. Imagine reading these from a distance of 5 feet. (I am using the Times font -- remember different fonts vary in size.)
Choose a clear font (not too fancy) and use a single font type throughout the poster.
Prepare a title banner for the poster, including the TITLE OF PRESENTATION with the names of the authors and any affiliations listed below. (For the affiliations -- indicate if you worked at Wittenberg, another university, the forestry service, etc.)
The poster should flow either down columns or along rows -- natural patterns for Westerners.
Keep the poster simple. The challenge is to maintain a balance between providing information and not confusing the viewer.
Figures and tables should cover roughly 50% of the viewing area. Each should be clearly labeled and have adequate description so that the reader knows what s/he is looking at!
Text portions of the poster should be concise while providing the necessary information. (Balance, again!) Feel free to use bulleted lists and outlines for clarity. The sections to be included for each type of poster are given below.
The tri-fold cardboard display units are 48” x 36” (w x h).
C) Tips for preparation
Plan ahead! Make a rough sketch of the poster first. What are the main points you wish to convey? What is the best format for figures? Should you use photographs? Graphs? Charts? Tables? What can best be said in prose? Does color help?
What are the appropriate major headings and topics for each section? Make a mock up of the figures and text you plan to include and experiment with layout. Get some feedback from peers and professors. Sleep on it. (No, not literally!) Does the poster convey the message you intend?
Formalize the figures and the text. Don’t forget to check spelling and grammar! Lay out the poster. Stand back and look. Are all the figures clear. Can a reasonably intelligent individual navigate the poster without your assistance?
Add finishing touches, altering figures and text to enhance clarity and flow.
Note: These are general guidelines, not rules. If you have prepared a presentation of your work for a meeting within your discipline, that format is fine as long as it lends itself to a gallery-style presentation.
For research projects, your major task is to clearly define your project for a general audience. This should include the purpose or intent of the project, project design, and any results or conclusions. A clear description of the question addressed and the methods for trying to answer that question is important. Remember that your audience is the Wittenberg community. In general, the following sections should be included.
The abstract provides a brief overview of the project and any conclusions. These abstracts will be compiled into a booklet for the poster session.
In the introduction include necessary background information and the rationale for the experiment. What is the question being asked, why is it being asked, and how does this fit into the larger scientific framework?
Figures and tables with brief text explanations
Figures and tables might include a flow chart of the steps of the project, diagrams of the methods of research, photos of relevant sites, support for your conclusions. Where relevant, this should include any data and/or data interpretation. Aim for clarity in all cases.
Briefly summarize the project, including any conclusions which can be drawn from your project.
The posters describing internships should convey the essence of the internship to the observer. You may choose to highlight areas of the internship you found particularly intriguing. Include your duties in the internship: what were your responsibilities? What did you learn from the internship?
The abstract provides a brief, general description of the internship. What was the focus of the internship? What areas were explored? These abstracts will be compiled into a booklet for the poster session.
Describe the internship in a bit more detail than in the abstract. What were your responsibilities? Who were you working with? Did you work in the context of a large organization? If so, how did your internship relate to the overall mission of that organization? What were the goals of the internship?
Figures, tables, and photographs with brief text explanations
Provide graphic (but G-rated) depictions of the components of your internship. What were the high points? Would photographs help to convey your experience more clearly? You may need to rely more heavily on text than some of the experimental projects or than some of the internships. There is wide variety on what will be included in this section depending on the nature of the internship.
Briefly summarize the internship. The challenge here is to provide synthesis without repetition. Were the goals of the internship achieved? What more might you hope to have learned? | <urn:uuid:0e759501-f3ad-4fa9-9ced-f8055f685dd3> | CC-MAIN-2016-26 | http://www.wittenberg.edu/academics/biology/researchposterformat.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399428.8/warc/CC-MAIN-20160624154959-00081-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.920201 | 1,203 | 2.59375 | 3 |
AfriGeneas Canada Research Forum
Bibliography of African Canadian History Books
I found this list online (see link, below), which contains dozens of titles, most of which I'd never heard of before. Here's a sampling of some of the books cited on his list:
Aylward, Carol A. Canadian Critical Race Theory: Racism and the Law. Halifax: Fernwood, 1999.
Bartolo, Oswald. The History of Blacks in Canada: 1608 to Now. Montreal: National Black Coalition of Canada, 1976.
Black, Ayanna. Voices: Canadian Writers of African Descent. Toronto: Harper Collins, 1992.
Black Cultural Centre of Nova Scotia. Traditional Lifetime Stories: A Collection of Black Memories. 2 vols. Dartmouth: Black Cultural Centre for Nova Scotia, 1987 and 1988.
Black Learner's Advisory Committee. BLAC Report on Education: Redressing Inequity-Empowering Black Learners. Halifax: Black Learners Advisory Committee, 1994.
Campbell, Mavis Christine. The Maroons of Jamaica 1655-1796: A History of Resistance, Collaboration and Betrayal. Trenton: Africa World Press, 1990.
Cassidy, Ivan. Nova Scotia: All About Us. Scarborough: Nelson Canada, 1983.
Clairmont, Donald H., and Dennis William Magill. Nova Scotian Blacks: An Historical and Structural Overview. Halifax: Institute of Public Affairs, Dalhousie University, No. 83, 1973.
Dejean, Paul. The Haitians in Quebec: A Sociological Profile. Ottawa: The Tecumseh Press, 1980.
Govia, Francine, and Helen Lewis. Blacks in Canada: A Bibliographical Guide to the History of Blacks in Canada. Edmonton: Harambee Centers Canada, 1988.
Grant, John N. The Immigration and Settlement of the Black Refugees of the War of 1812 in Nova Scotia and New Brunswick. Dartmouth: The Black Cultural Centre of Nova Scotia, 1990.
Henry, Frances. Forgotten Canadians: The Blacks of Nova Scotia. Don Mills: Longmans Canada Ltd., 1973.
Oliver, Pearleen. An Historic Minority: The Black People of Nova Scotia, 1781-1981. Dartmouth: Metrographic Printing Services Ltd., 1981.
Robinson, Carey. The Iron Thorn: The Defeat of the British by the Jamaican Maroons. Kingston: Kingston Publishers, 1993.
Talbot, Carol. Growing Up Black in Canada. Toronto: Williams-Wallace Productions, 1984.
Thompson, Colin A. Blacks in Deep Snow: Black Pioneers in Canada. Toronto: J.M. Dent & Sons, 1979. | <urn:uuid:ac98f6dd-4130-4a1c-87e2-4f15023e195a> | CC-MAIN-2016-26 | http://www.afrigeneas.com/forum-canada/index.cgi/md/read/id/241/sbj/bibliography-of-african-canadian-history-books/ | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396459.32/warc/CC-MAIN-20160624154956-00129-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.745477 | 556 | 2.84375 | 3 |
After the meltdown of the Chernobyl nuclear power plant in 1986, Soviet authorities established an “exclusion zone” within a 19-mile radius of the power plant to protect nearby inhabitants from dangerous radiation.
The towns of Chernobyl and Pripyat, which had nearly 120,000 residents before the disaster, are now almost empty.
Although a small number of squatters stayed within the exclusion zone, defying the law, that number has dwindled to less than 200 in 2012, according to Ukraine Inform, and most of those who stayed are elderly.
After the meltdown, the number of children born with birth defects in the surrounding area increased by 200 percent, according to Chernobyl International.
The drone footage from above was taken in Pripyat, starting from the Palace of Culture and surveying sets of 16 story buildings, before flying around a Ferris wheel, a hospital, and a swimming pool. | <urn:uuid:4e9f1fb7-9f21-41eb-9dbd-c2bee58886e8> | CC-MAIN-2016-26 | http://www.theepochtimes.com/n3/1956646-watch-drone-explores-ghost-town-near-chernobyl-power-plant/ | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783392527.68/warc/CC-MAIN-20160624154952-00117-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.957392 | 186 | 3.265625 | 3 |
Niall Ferguson, Harvard professor of history and the author of The Great Degeneration: How Institutions Decay and Economies Die (Penguin, 2013), warns that Western democracies have entered a phase of decline.
Excerpt: The Great Degeneration by Niall Ferguson
The voguish explanation for the Western slowdown is ‘deleveraging’: the painful process of debt reduction (or balance sheet repair). Certainly, there are few precedents for the scale of debt in the West today. This is only the second time in American history that combined public and private debt has exceeded 250 per cent of GDP. In a survey of fifty countries, the McKinsey Global Institute identifies forty-five episodes of deleveraging since 1930. In only eight was the initial debt/GDP ratio above 250 per cent, as it is today not only in the US but also in all the major English-speaking countries (including Australia and Canada), all the major continental European countries (including Germany), plus Japan and South Korea.[i] The deleveraging argument is that households and banks are struggling to reduce their debts, having gambled foolishly on ever rising property prices. But as they have sought to spend less and save more, aggregate demand has slumped. To prevent this process from generating a lethal debt deflation, governments and central banks have stepped in with fiscal and monetary stimulus unparalleled in time of peace. Public sector deficits have helped to mitigate the contraction, but they risk transforming a crisis of excess private debt into a crisis of excess public debt. In the same way, the expansion of central bank balance sheets (the monetary base) prevented a cascade of bank failures, but now appears to have diminishing returns in terms of reflation and growth.
Yet more is going on here than just deleveraging. Consider this: the US economy created 2.6 million jobs in the three years beginning in June 2009. In the same period, 3.1 million workers signed up for disability benefits. The percentage of working-age Americans collecting disability insurance rose from below 3 per cent in 1990 to 6 per cent.[ii] Unemployment is being concealed – and rendered permanent – in ways all too familiar to Europeans. Able-bodied people claim to be disabled and never work again. And they also stay put. Traditionally around 3 per cent of the US population moves to a new state each year, usually in pursuit of work. That rate has halved since the financial crisis began in 2007. Social mobility has also declined. And, unlike the Great Depression of the 1930s, our ‘Slight Depression’ is doing little to reduce the yawning inequality in income distribution that has developed over the past three decades. The income share of the top one per cent of households rose from 9 per cent in 1970 to 24 per cent in 2007. It declined by less than four percentage points in the subsequent three years of crisis.
You cannot blame all this on deleveraging. In the United States, the wider debate is about globalization, technological change, education and fiscal policy. Conservatives tend to emphasize the first and second as inexorable drivers of change, destroying low-skilled jobs by ‘offshoring’ or automating them. Liberals prefer to see widening inequality as the result of insufficient investment in public education, combined with Republican reductions in taxation that have favoured the wealthy.[iii] But there is good reason to think that there are other forces at work – forces that tend to get overlooked in the tiresomely parochial slanging match that passes for political debate in the United States today.
The crisis of public finance is not uniquely American. Japan, Greece, Italy, Ireland and Portugal are also members of the club of countries with public debts in excess of 100 per cent of GDP. India had an even larger cyclically adjusted deficit than the United States in 2010, while Japan faced a bigger challenge to stabilize its debt/GDP ratio at a sustainable level.[iv] Nor are the twin problems of slow growth and widening inequality confined to the United States. Throughout the English-speaking world, the income share of the top ‘1 per cent’ of households has risen since around 1980. The same thing has happened, albeit to a lesser extent, in some European states, notably Finland, Norway and Portugal, as well as in many emerging markets, including China.[v] Already in 2010 there were at least 800,000 dollar millionaires in China and sixty-five billionaires. Of the global ‘1 per cent’ in 2010, 1.6 million were Chinese, approaching 4 per cent of the total.[vi] Yet other countries, including Europe’s most successful economy, Germany, have not become more unequal, while some less developed countries, notably Argentina, have become less equal without becoming more global.
By definition, globalization has affected all countries to some degree. So, too, has the revolution in information technology. Yet the outcomes in terms of growth and distribution vary hugely. To explain these differences, a narrowly economic approach is not sufficient. Take the case of excessive debt or leverage. Any highly indebted economy confronts a narrow range of options. There are essentially three:
- raising the rate of growth above the rate of interest thanks to technological innovation and (perhaps) a judicious use of monetary stimulus;
- defaulting on a large proportion of the public debt and going into bankruptcy to escape the private debt; and
- wiping out of debts via currency depreciation and inflation.
But nothing in mainstream economic theory can predict which of these three – or which combination – a particular country will select. Why did post-1918 Germany go down the road of hyperinflation? Why did post-1929 America go down the road of private default and bankruptcy? Why not the other way round? At the time of writing, it seems less and less likely that any major developed economy will be able to inflate away its liabilities as happened in many cases in the 1920s and 1950s.[vii] But why not? Milton Friedman’s famous dictum that inflation is ‘always and everywhere a monetary phenomenon’ leaves unanswered the questions of who creates the excess money and why they do it. In practice, inflation is primarily a political phenomenon. Its likelihood is a function of factors like the content of elite education; competition (or the lack of it) in an economy; the character of the legal system; levels of violence; and the political decision-making process itself. Only by historical methods can we explain why, over the past thirty years, so many countries created forms of debt that, by design, cannot be inflated away; and why, as a result, the next generation will be saddled for life with liabilities incurred by their parents and grandparents.
[i] McKinsey Global Institute, Debt and Deleveraging: The Global Credit Bubble and its Economic Consequences (January 2010).
[ii] Peter Berezin, ‘The Weak U.S. Labor Market: Mainly a Cyclical Problem … for Now’, Bank Credit Analyst, 64, 1 (July 2012), p. 40.
[iii] See e.g. Jeffrey Sachs, The Price of Civilization: Reawakening American Virtue and Prosperity (New York, 2011).
[iv] See e.g. International Monetary Fund, ‘Navigating the Fiscal Challenges Ahead’, Fiscal Monitor, 14 May 2010.
[v] Anthony B. Atkinson, Thomas Piketty and Emmanuel Saez, ‘Top Incomes in the Long Run of History’, Journal of Economic Literature, 49, 1 (2011), pp. 3–71.
[vi] Credit Suisse, Global Wealth Databook (October 2010), tables 3-1, 3-3 and 3-4.
[vii] For a brilliant analysis, see Jamil Baz, ‘Current Crisis Merely a Warm-up Act’, Financial Times, 11 July 2012.
From the book The Great Degeneration. Copyright (c) 2013 by Niall Ferguson. Reprinted by permission of The Penguin Press. | <urn:uuid:7948a587-230d-43b8-9134-8aea1ad761bd> | CC-MAIN-2016-26 | http://www.wnyc.org/story/300140-civil-society-decline/ | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398628.62/warc/CC-MAIN-20160624154958-00163-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.947022 | 1,650 | 2.625 | 3 |
Definition of Aegean
1 : of or relating to the arm of the Mediterranean Sea east of Greece
2 : of or relating to the chiefly Bronze Age civilization of the islands of the Aegean Sea and the countries adjacent to it
Origin and Etymology of aegean
Latin Aegaeus, from Greek Aigaios
First Known Use: 1513
Seen and Heard
What made you want to look up Aegean? Please tell us where you read or heard it (including the quote, if possible). | <urn:uuid:9a08e1f4-44da-431d-a6d6-3e0562c562b6> | CC-MAIN-2016-26 | http://www.merriam-webster.com/dictionary/Aegean | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393533.44/warc/CC-MAIN-20160624154953-00110-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.879968 | 110 | 2.828125 | 3 |
Initial tests are revealed today which found the compound can reduce the spread of cancerous cells in human cells. The researchers also believe it can reduce cholesterol levels and they hope to begin large-scale human trials to monitor the effects of regularly drinking orange juice.
In laboratory tests with animals and with human
cells, the limonin was found to help fight cancers of the mouth, skin, lung, breast, stomach, and colon. California-based Dr Gary Manners said: "Limonin is present in citrus and citrus juices in about the same amount as vitamin C." For the latest experiment, 16 volunteers were given a drink made from limonin in quantities equivalent to drinking seven glasses of orange juice. The researchers also found the limonin stayed in the body far longer than they had expected.
It showed up in the blood of all the volunteers except one. Five still had traces of it after 24 hours. Dr Manners believes this long-term effect could hold the key to why limonin seems able to stop cancer cells from spreading.
The team plans more experiments to monitor the effect of limonin on cholesterol levels, as the initial research found a reduction in the "bad" cholesterol of some volunteers.
However, nutritionist Natalie Savona warned: "This is great news and could lead to new natural remedies, but it is all about having a balanced diet. Yes, limonin can help, but drinking seven glasses of juice a day isn't really practical. But oranges are a great source of a lot of different chemicals, and people should definitely have them as part of their diet."
Oranges also contain antioxidants which protect against heart disease; flavones which can cut cholesterol; carotenoids which may reduce the risk of eye disease; vitamin C which can boost the immune system, and fibre which helps regulate digestion. | <urn:uuid:bc37853d-0ef5-48a3-8a63-30199390bd14> | CC-MAIN-2016-26 | http://www.standard.co.uk/news/an-orange-a-day-helps-fight-cancer-7190131.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393463.1/warc/CC-MAIN-20160624154953-00166-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.966347 | 375 | 3.046875 | 3 |
Kartick Satyanarayan has devoted his life to protecting animals from harm. The organization that he co-founded, Wildlife SOS, has helped elephants, reptiles, leopards, monkeys and other creatures that have been mistreated. In many cases, the group also carries out the dangerous work of pursuing poachers.
And while all that would be noteworthy alone, Satyanarayan is best known for his leadership in saving the dancing bears of India.
The dancing bears rescue
That rescue effort actually began as a research project in 1995. Satyanarayan and his Wildlife SOS co-founder, Geeta Seshamani, wanted to find out why the number of sloth bears had become depleted in the wild. As they gathered information, they learned more about how the sloth bears were captured in the wild and later bought, sold and traded throughout India.
Along the way, Satyanarayan realized that there was another dilemma that had to be dealt with if they hoped to eliminate the problem in a sustainable manner.
On one hand, it was obvious the sloth bears — which had been used for centuries to entertain tourists and wedding guests — needed to be rescued. Even though India banned the practice in 1972, the animals could still be found on the country’s streets where they were routinely mistreated. Most of the time, the animals also had their teeth knocked out and a hole bored through their snouts where a rope was attached so that their handlers could control them more easily.
The dancing bears represented a cruel vestige of a bygone era when a marginalized, semi-nomadic community used the animals to earn a living.
Satyanarayan realized at the end of a two-year research investigation in 1997 that rescuing the performing sloth bears would mean disrupting a centuries-old way of life for the Kalandar people — many of whom depended on the bears for their livelihood.
Without providing an alternate way for the Kalandar people to earn money, Satyanarayan worried that the illegal practice would continue. Meanwhile, the Kalandar community would sink even deeper into poverty and continue to get into trouble with the law.
“The Kalandars were sadly trapped in an evil cycle, which was a glorified form of begging – this was keeping the community well isolated from mainstream society, away from education and a better quality of life for themselves and their families,” Satyanarayan told MNN.
Breaking the cycle
He and Seshamani wanted Wildlife SOS to help the community break out of this cycle while also ensuring the safety of the bears.
To convince the bear owners to give up their animals, Wildlife SOS used seed money to help the Kalandars get started in new businesses. In one instance, a bear owner traded his bear for money to open a small beverage shop. Another man surrendered his bear to Wildlife SOS and now he has a cattle fodder and grain store. Others became rickshaw drivers and some became carpet weavers. Some even became employees at Wildlife SOS.
In addition, Wildlife SOS now provides education for more than 790 Kalandar children and turns Kalandar women into additional wage-earners for their families through vocational training.
Running sting operations
Others who took part in the sloth bear trade were not so lucky. Satyanarayan has helped run sting operations in which bear sellers were duped into meeting with phony buyers. Instead of securing a deal, the sellers were arrested and taken to jail while the bears were sent to one of Wildlife SOS’s four animal refuges.
Fortunately, the last of the dancing bears was rescued from the streets of India in December 2009. Satyanarayan said he considers the rescue of the sloth bears to be his greatest accomplishment.
“Bringing an end to a 400-year-old illegal and brutal practice that was pushing the existence of an endangered species to its brink would, in my opinion, constitute a lifetime achievement,” he states. “I can now die without guilt of not having done my bit for nature and this planet!”
More work to be done
Of course, the work of Wildlife SOS continues. Earlier this year, a team from Wildlife SOS assisted police in arresting some poachers in eastern India. They’ve also rescued a monitor lizard at a Delhi market. And, they saved a python from an angry mob outside a small village.
In addition, Satyanarayan said the dancing bear trade remains active in the region along the India-Nepal border. The two countries share a porous border and, according to Satyanarayan, “there is evidence of Kalandars settled in Nepal crossing over to the Indian side to make a quick buck and then pop back into Nepal to avoid arrests by Indian authorities.”
“We are currently working on establishing intelligence networks in Nepal and collaborations with the Nepal government as well as other NGOs in Nepal to work on this issue,” Satyanarayan tells MNN. “Our success at bringing an end to the dancing bears in India encourages us to emulate this model in Nepal and bring an end to this practice in Nepal as well to protect this endangered species.”
More than 400 bears to feed
Satyanarayan says the biggest challenge for the organization is finding funding for the care of the animals, the education programs for the Kalandar people and the other conservation projects Wildlife SOS hopes to pursue. In addition to the more than 400 bears at its four refuges, the organization will have a total of six elephants under its care by the end of next month.
“Wildlife SOS has recently rescued in collaboration with the government, five abused elephants from very cruel owners who had them in illegal custody,” he tells MNN. “These elephants form the pillars of a very important conservation education platform to create awareness about the plight of working elephants in India. We also have our guns trained on the illegal trafficking on elephant calves and are working towards preventing elephant poaching and creating awareness to protect corridors used by wild elephants.”
It’s hard to imagine the wild animals of India having a better ally than Kartick Satyanarayan.
Get inspired: Learn about others who are making a difference with MNN's Innovation Generation project. | <urn:uuid:aa9e756a-8680-427d-882f-737fbeff5910> | CC-MAIN-2016-26 | http://www.mnn.com/lifestyle/arts-culture/stories/kartick-satyanarayan-animal-rescue-champion | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396106.25/warc/CC-MAIN-20160624154956-00110-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.974158 | 1,294 | 3.25 | 3 |
Inositol, unofficially referred to as "vitamin B 8 ," is present in all animal tissues, with the highest levels in the heart and brain. It is part of the membranes (outer coverings) of all cells, and plays a role in helping the liver process fats as well as contributing to the function of muscles and nerves.
Inositol may also be involved in depression. People who are depressed may have lower than normal levels of inositol in their spinal fluid. In addition, inositol participates in the action of serotonin, a neurotransmitter known to be a factor in depression. (Neurotransmitters are chemicals that transmit messages between nerve cells.) For these two reasons, inositol has been proposed as a treatment for depression, and preliminary evidence suggests that it may be helpful.
Inositol has also been tried for other psychological and nerve-related conditions.
Inositol is not known to be an essential nutrient. However, nuts, seeds, beans, whole grains, cantaloupe, and citrus fruits supply a substance called phytic acid (inositol hexaphosphate, or IP6), which releases inositol when acted on by bacteria in the digestive tract. The typical American diet provides an estimated 1,000 mg daily.
Experimentally, inositol dosages of up to 18 g daily have been tried for various conditions.
Inositol has also been studied for bipolar disorder , 5panic disorder , 6,7bulimia , 8 and obsessive-compulsive disorder , 9,10 but the evidence remains far from conclusive. Other potential uses include Alzheimer's disease11 and attention deficit disorder . 12
Inositol is sometimes proposed as a treatment for diabetic neuropathy , but there have been no double-blind, placebo-controlled studies on this subject, and two uncontrolled studies had mixed results. 13,14
Inositol has also been investigated for potential cancer-preventive properties, 15-22 and there is some evidence that it may help reduce the side effects of chemotherapy in women with breast cancer. 37
Small double-blind studies have found inositol helpful for depression . 23,24 In one such trial, 28 depressed individuals were given a daily dose of 12 g of inositol for 4 weeks. 25 By the fourth week, the group receiving inositol showed significant improvement compared to the placebo group.
However, a double-blind study of 42 people with severe depression that was not responding to standard antidepressant treatment found no improvement when inositol was added. 26
People with panic disorder frequently develop panic attacks, often with no warning. The racing heartbeat, chest pressure, sweating, and other physical symptoms can be so intense that they are mistaken for a heart attack. A small double-blind study (21 participants) found that people given 12 g of inositol daily had fewer and less severe panic attacks as compared to the placebo group. 27
A double-blind, crossover study of 20 individuals compared inositol to the antidepressant drug fluvoxamine (Luvox), a medication related to Prozac. 28 The results over 4 weeks of treatment showed that the supplement was at least as effective as the drug.
In a 6-week, double-blind study, 24 individuals with bipolar disorder received either placebo or inositol (2 g three times daily for a week, then increased to 4 g three times daily) in addition to their regular medical treatment. 5 The results of this small study failed to show statistically significant benefits; however, promising trends were seen that suggest a larger study is warranted.
Polycystic ovary syndrome (PCOS) is a chronic endocrine disorder in women that leads to infertility, weight gain, and many other problems. In a double-blind, placebo-controlled trial, 136 women with PCOS were given inositol at a dose of 100 mg twice daily, while 147 were given placebo. 31 Over the study period of 14 weeks, participants given inositol showed improvement in ovulation frequency as compared to those given placebo. Benefits were also seen in terms of weight loss and levels of HDL ("good") cholesterol. Other studies have also found positive results. 33,35,36
Metabolic syndrome consists of a cluster of conditions that promote cardiovascular disease, including obesity, unhealthy cholesterol and triglyceride levels, high blood pressure, and pre-diabetes. In one study, 80 postmenopausal women with metabolic syndrome were treated with diet plus inositol (2 grams, twice daily) or diet plus placebo. 34 After 6 months of treatment, those in the inositol group had improvements in cholesterol and triglyceride levels, blood pressure, and insulin resistance (an indicator of pre-diabetes).
No serious ill effects have been reported for inositol, even with a therapeutic dosage that equals about 18 times the average dietary intake. However, no long-term safety studies have been performed.
Although inositol has sometimes been recommended for bipolar disorder, there is evidence to suggest inositol may trigger manic episodes in people with this condition. 29 If you have bipolar disorder, you should not take inositol unless under a doctor's supervision.
Safety has not been established in young children, women who are pregnant or nursing, and those with severe liver and kidney disease. As with all supplements used in very large doses, it is important to purchase a reputable product, because a contaminant present even in small percentages could add up to a real problem.
10. Fux M, Benjamin J, Belmaker RH. Inositol versus placebo augmentation of serotonin reuptake inhibitors in the treatment of obsessive-compulsive disorder: a double-blind cross-over study. Int J Neuropsychopharmcol. 1999;2:193-195.
14. Gregersen G, Bertelsen B, Harbo H, et al. Oral supplementation of myoinositol: effects on peripheral nerve function in human diabetics and on the concentration in plasma, erythrocytes, urine and muscle tissue in human diabetics and normals. Acta Neurol Scand . 1983;67:164-172.
31. Gerli S, Mignosa M, Di Renzo GC. Effects of inositol on ovarian function and metabolic factors in women with PCOS: a randomized double blind placebo-controlled trial. Eur Rev Med Pharmacol Sci . 2003;7:151-9.
33. Gerli S, Papaleo E, Ferrari A, et al. Randomized, double-blind, placebo-controlled trial: effects of myo-inositol on ovarian function and metabolic factors in women with PCOS. Eur Rev Med Pharmacol Sci. 2007;11:347-354.
34. Giordano D, Corrado F, Santamaria A, et al. Effects of myo-inositol supplementation in postmenopausal women with metabolic syndrome: a perspective, randomized, placebo-controlled study. Menopause. 2011;18(1):102-104.
35. Costantino D, Minozzi G, Minozzi E, Guaraldi C. Metabolic and hormonal effects of myo-inositol in women with polycystic ovary syndrome: a double-blind trial. Eur Rev Med Pharmacol Sci. 2009;13(2):105-110.
37. Bacić I, Druzijanić N, Karlo R, Skifić I, Jagić S. Efficacy of IP6 + inositol in the treatment of breast cancer patients receiving chemotherapy: prospective, randomized, pilot clinical study. J Exp Clin Cancer Res. 2010;29:12.
Last reviewed September 2014 by EBSCO CAM Review Board
Please be aware that this information is provided to supplement the care provided by your physician. It is neither intended nor implied to be a substitute for professional medical advice. CALL YOUR HEALTHCARE PROVIDER IMMEDIATELY IF YOU THINK YOU MAY HAVE A MEDICAL EMERGENCY. Always seek the advice of your physician or other qualified health provider prior to starting any new treatment or with any questions you may have regarding a medical condition.
Copyright © 2012 EBSCO Publishing All rights reserved.
What can we help you find?close × | <urn:uuid:f27a1bfa-6506-4bdf-b526-b7c1abfdcbf8> | CC-MAIN-2016-26 | http://mbhs.org/health-library?ArticleId=21766 | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783394414.43/warc/CC-MAIN-20160624154954-00031-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.929507 | 1,706 | 2.984375 | 3 |
Muscle Memory, Fact or Fiction?
A: A few days ago, I would’ve been forced to say that muscle memory as conceived by bodybuilders is mostly myth.
Scientific thinking is that muscle memory exists, but only in terms of movement patterns. Examples are riding a bicycle or swimming; once learned you never forget how. Exercise physiologists are dubious about the belief of bodybuilders that muscles remember size and strength and bounce back rapidly when training resumes. They generally believe that muscle memory atrophies along with unused fibers.
That may be changing. An impressive new study from Norway found that muscles do remember, at least in mice.
Led by Kristian Gundersen, a physiologist at the University of Oslo, the study was reported online August 16, 2010, in the Proceedings of the National Academy of Sciences.
Gundersen’s team found that memory is stored as DNA-containing nuclei, which multiply when a muscle is exercised. Contrary to previous thinking, those nuclei aren’t lost when muscles atrophy.
Nuclei is the plural of nucleus, the central controlling body within a living cell. It contains the genetic codes for maintaining life and issuing commands for growth and reproduction. If muscles have memory, it’s stored in the nuclei.
Gundersen explained to Science News (web edition) that more than one nucleus is needed to supply DNA guidelines for making the proteins that give a muscle fiber its mass and strength. Muscle cells generate additional nuclei to support growth. Researchers had previously believed that the extra nuclei die when muscle fibers atrophy, Gundersen said.
In the new study, Gundersen’s team used advanced imaging techniques to observe—day to day—nuclei production and durability in mice, during exercise and during periods of prolonged inactivity.
Using ingenious methods to overload a specific muscle in mice (you can read about it in the study), the team observed that the number of nuclei increased, starting on day six. Over the course of 21 days, the hard-working muscle increased the number of nuclei in each fiber cell by about 54 percent. Starting on day nine, the muscle cells also started to grow in size. This shows that the nuclei come first and muscle mass is added later. Like building a house, the blueprint comes first and the bricks and mortar follow.
In another set of experiments, the researchers worked the mice's muscle for two weeks, and then stopped (nerves were severed) to allow the fibers to atrophy. As the muscle atrophied, the cells deflated to about 40 percent of their working size—but the number of nuclei in the cells did not change.
The extra nuclei stick around for at least three months, Gundersen told Science News. That’s a long time for mice, which live a couple of years on average, he emphasized.
“I don’t know if it lasts forever,” Gundersen said, “but is seems to be very long-lasting.” Since extra nuclei don’t die, they could be poised to make muscle proteins again, providing a type of muscle memory, he concluded.
In addition, Science News asked two other experts to comment on Gundersen’s report. (They were not involved in the study.)
“That’s fascinating thinking, and there’s nice proof in this article to support it,” said Bengt Saltin, a muscle physiologist at the University of Copenhagen in Denmark. “It’s really novel and helps to explain descriptive findings that muscles are quick to respond upon further training.”
“It does fly in the face of a lot of peer-reviewed, published data,” said Lawrence Schwartz, a cell biologist at the University of Massachusetts Amherst. “The conventional wisdom doesn’t make much sense from a cell and molecular perspective,” he added. Gundersen’s group has come up with an explanation that seems more plausible. “Their data just feels right.”
Stock Up Early
It appears that muscles remember peak condition for a very long time. “There is currently no compelling evidence that nuclei are ever lost from intact muscle fibers,” the Gundersen group wrote. “Our findings suggest that it may be beneficial to ‘fill up’ muscle fibers with nuclei by exercise before senescence.” They suggest that individuals begin strength training at an early age, when they can stockpile the maximum number of muscle building nuclei.
Even better, start early—and never stop.
Why New Year’s Resolutions Fail—What Works
Q: You never write about New Year’s resolutions. Why not?
A: I don’t believe in New Year’s resolutions. It’s common knowledge that they rarely succeed. What’s not well understood is why.
If a person is not willing to eat sensibly and exercise during the year, it’s unlikely they’ll do so when the new calendar goes up. A better plan is to start gradually and spread resolutions over the entire year. Jonah Lehrer explained the problem with New Year’s resolutions in terms of how the brain works (The Wall Street Journal, December 26, 2009). His explanation provides the underpinning for the gradual approach.
Physiology, not character, is the problem, according to Mr. Lehrer. “Willpower,” he says, is an “extremely limited mental resource.” Most New Year’s resolutions, of course, rely on willpower. That’s why 88% of all resolutions fail, according to a survey of 3000 people conducted by a UK psychologist. Three main factors are involved.
First, the part of the brain responsible for willpower, the prefrontal cortex, has responsibility for many other functions. These include mental focus, short-term memory, and abstract thinking. This helps to explain why, after a long day at the office, we’re more likely to indulge in a pint of ice cream, or eat too many slices of leftover pizza. “A tired brain,” Lehrer writes, “preoccupied with its problems, is going to struggle to resist what it wants, even when what it wants isn’t what we need.”
An overloaded prefrontal cortex, in spite of best intentions, has limited capacity to following through on New Year’s resolutions.
Secondly, willpower is a high-energy activity. It requires a well-fed prefrontal cortex. That can be a problem when we’re dieting and exercising. Starving the brain of calories, even for a few hours, Lehrer explains, makes it significantly harder to stick to a weight-loss regimen. Waning blood sugar can torpedo even the best of plans.
An overeager dieter cutting calories a little too close is likely to have difficulty making wise choices. As happened in a study cited by Lehrer, he or she might be disposed to choose a snack of chocolate cake over a bowl of fruit.
The final willpower drain involves negative thinking. Resolving not to repeat bad habits doesn’t work well—because willpower is weak. “Gritting your teeth isn’t the best approach,” Lehrer writes. “Instead, find a way to look at something else.”
A simple example: Kids who are better at resisting the urge to eat a marshmallow—they are promised seconds if they can wait 20 minutes—are the ones who sing songs, play with their shoelaces or pretend the marshmallow is a cloud. “In other words,” Lehrer explains, “they’re able to temporarily clear the temptation out of consciousness.”
Better yet, focus on positive action. Forget what not to do. Focus on what you can do.
Don’t waste precious willpower worrying about your bad habits. Focus on realistic positive steps on the way to achieving your goals.
Use limited willpower sparingly by making small changes and building on your successes. If you’ve been inactive, begin with a walking program; any time and pace that feels good is fine. The important thing is to walk regularly; three days a week is a good place to start. Work up to 30 minutes five or six days a week. When you feel ready, add a simple strength training program twice a week. The workout should take 30 minutes or less. As your stamina and strength improve—and they will—you can add more challenging endurance training. A good target would be two days of endurance challenge and two days of strength, between two and four hours a week total time.
Take your time; there’s no hurry. A reasonable timetable would be to allow a full year to progress from walking to a balanced program of strength and endurance. Walking alone will pay big dividends and put you far ahead of your sedentary peers.
On diet, the biggest mistake is rushing the fat loss process. Remember to keep your prefrontal cortex well fed. Don’t allow yourself to become hungry or dissatisfied. Keep blood sugar on an even keel by eating regular meals. DON’T skip breakfast.
The best plan is to eat a balanced diet of healthy foods; see Simple Diet Patterns for Health: http://www.cbass.com/SimpleDiet.htm
Don’t worry about calories. For most people, replacing fatty, sugary foods with a balanced diet of wholesome foods will put bodyweight on a sustainable downward path. (Exercise is an important factor in making this work.) Take your time and you’ll be amazed.
Don’t bite off more than you’re willing and able to chew. That’s very important—for diet and exercise. If you can’t realistically see yourself sticking to the plan, dial it back until you can.
The only diet and exercise regimen most people are willing to do regularly is one they enjoy. If you don’t enjoy your training and what you eat, something is wrong. Change it. Try something else. Don’t give up.
That’s a resolution that will work. It’s based on human nature—not willpower.
* * *
You’ll find many more details in our books and DVDs; we offer 10 and 3, respectively. Here’s a brief synopsis of my diet and training philosophy: http://www.cbass.com/PHILOSOP.HTM
If that interests you, a good place to start is our new book, TAKE CHARGE: Fitness at the Edge of Science: http://www.cbass.com/PROD08.htm
Eggs: Cooked or Raw?
Q: Is there any problem eating eggs raw in a smoothie or, like Rocky Balboa, swallowing them whole?
A: Surprising as it may be to many bodybuilders (including me), eating eggs raw can get in the way of muscle growth. I can’t tell you how many raw eggs I’ve eaten over the years, but it’s a bunch. In my early years of lifting and right through law school, it was not unusual for me to plop six raw eggs in my breakfast malt. My doctor dad also swallowed eggs raw from time to time. We were not alone, of course. Steve Reeves famously ate raw eggs for breakfast every morning. Arnold mixed raw eggs with thick cream. Sly Stallone’s boxing hero Rocky Balboa downed eggs raw in the Rocky movies; my guess is that Stallone did/does as well. Vince Gironda probably tops the list by recommending up to 36 raw eggs a day.
It never hurt any of us as far as I know--there is some danger of food poisoning; see below--but recent studies described by Harvard biological anthropologist Richard Wrangham indicate that we were squandering much of the high-quality protein in eggs. That seems strange, because eggs require no chewing and their chemical composition is almost perfect. “The amino acids of chicken eggs come in about forty proteins in almost exactly the proportions human require,” Dr. Wrangham writes in his groundbreaking book Catching Fire: How Cooking Made Us Human (Basic Books 2009). “The match gives eggs a higher biological value—a measure of the rate at which the protein in food supports growth—than the protein of any other known food, even milk, meat, or soybeans.”
It’s no secret that eggs are at the top of the totem poll in protein quality. Few, however, know that ancient man and more recent hunter-gathers probably ate most of their eggs cooked. The details are in Wrangham’s book. They apparently sensed what we now know to be a scientific fact.
Wrangham explains that we now have research tools that assess the fate of egg protein as it passes through our digestive tract. Isotopic tracers are fed to hens that attach to the protein in their eggs, allowing scientists to monitor what happens to the protein when the eggs are eaten. Any protein that comes out of the body undigested is “metabolically useless to the person who ate it.” That’s a simple explanation of a complex process, but the logic is clear.
Researchers fed healthy subjects raw or cooked eggs. “When the eggs were cooked, the proportion of protein digested averaged 91 percent to 94 percent," Wrangham reports. On the other hand, the digestibility of raw eggs was a meager 65 percent. “The results showed that 35 percent…of the ingested protein was leaving the small intestine undigested. Cooking increases the protein value of eggs by around 40 percent.” (Emphasis mine)
For those who are interested,Wrangham provides a detailed explanation of how cooking improves the digestion of eggs.
The bottom line is clear: Maximize the growth potential of the ideal protein in eggs. Cook them.
(The danger of food poisoning from eating raw eggs, even with the shell intact, is discussed in The Lean Advantage 2. In a section called “Raw-Egg Danger,” I explained why I stopped eating raw eggs. http://www.cbass.com/PROD02.HTM )
Coffee: Good or Bad?
Q: Not long ago we were told that coffee was harmful, but now we’re bombarded with reports on the benefits. What’s your take?
A: Recent news on coffee has been good. Long-term studies have found associations between coffee drinking and lower rates of advanced prostate cancer, Alzheimer’s disease, strokes, type 2 diabetes, and more. Keep in mind, however, that association means connection; it doesn't prove a cause and effect. The majority of the studies look for patterns of coffee drinking and health, which leaves a lot of questions unanswered.
The same trends are often found for decaf and regular coffee. Coffee contains traces of hundreds of substances, including vitamins, minerals, and antioxidants. Any one or more of these ingredients could be responsible for the positive associations.
Some of the early studies which found an association between coffee and illness have been discredited. Once researchers started adjusting for smoking and other bad habits the risk associations were weakened or disappeared. Many people enjoy having a cigarette with their coffee. That's probably less true now it once was.
Unfortunately, randomized controlled trials are not feasible for the decades-long testing required to assess the effects coffee drinking. Observational studies may be the best we can do.
We do have evidence that coffee can be harmful for people with high blood pressure, high blood sugar, low bone density, insomnia, and pregnant women.
The jury is still out on coffee, and that’s the way it’s likely to be for the foreseeable future. If you don’t drink coffee, there is probably no compelling reason to start now. On the other hand, coffee drinking in moderation is probably safe for most people.
Melinda Beck, a well regarded health reporter for The Wall Street Journal, concluded a comprehensive update on coffee (December 29, 2009) with this advice: “People who love coffee probably don’t need to worry that they are harming their health by drinking it—unless they already have high blood pressure or are pregnant or are having trouble sleeping, in which case it’s prudent to cut down.”
Sounds like good advice to me.
I drink about three cups of coffee a day—two in the morning and one in the afternoon. I'm careful not to have coffee after 3 pm, because it keeps me awake. As many know, I make my coffee with two-thirds skim milk, and a teaspoon of canola oil to slow absorption. I usually have my coffee with food, and never black.
“Moderation in all things,” including coffee.
Ripped Enterprises, P.O. Box 51236, Albuquerque, New Mexico 87181-1236 or street address: 528 Chama, N.E., Albuquerque, New Mexico 87108, Phone (505) 266-5858, e-mail: email@example.com, FAX: (505) 266-9123. Office hours: Monday-Friday, 8-5, Mountain time. FAX for international orders: Please check with your local phone book and add the following: 001-505 266-9123
[Home] [Philosophy] [What's New] [Products] [FAQ] [Feedback] [Order]
Copyright © 2009-2016 Clarence and Carol Bass. All rights reserved. | <urn:uuid:08f4829c-f885-40ca-968e-9b07241510ac> | CC-MAIN-2016-26 | http://www.cbass.com/Faq(8).htm | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783392069.78/warc/CC-MAIN-20160624154952-00186-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.95024 | 3,692 | 3.25 | 3 |
Displacement: 16,000 tons normal, 17,650 tons full load
Length at the waterline: 456' 4"
Beam: 76' 10"
Draft: 24' 6" (mean)
Speed: 18 kts.
Complement: 42 officers, 838 enlisted
Armament as built 1907: see notes below
Armour (Krupp): 9" belt (amidships), 4" belt (ends), 3" deck (slopes), 7" lower deck side, 10" barbettes, 12" turrets, 7" battery, 2" casemates, 6" small turrets, 9" conning tower, 5" director station (near conning tower)
Machinery: 2 sets of Newport News vertical 4-cylinder triple expansion engines, 2 screws (outward turning)
Boilers: 12 Babcock
Designed H.P.: 16,500 at 18 kts.
Coal: 900 tons normal, 2,420 tons maximum
Class: Kansas
Armament as built 1907:
An undated photo of the Minnesota. She is shown with her two cage-style masts, which dates the photo to after 1911.
The second Minnesota’s keel was laid down by the Newport News Shipbuilding Co., Newport News, Virginia, on October 27, 1903. The Minnesota’s hull was launched on April 8, 1905, sponsored by Miss Rose Marie Schaller. The new battleship was formally commissioned into the United States Navy on March 9, 1907, with Captain John Hubbard in command. Following her shakedown off the New England coast, Minnesota was assigned to duty in connection with the Jamestown Exposition, Jamestown, Virginia, from April 22 to September 3, 1907.
A view of the Minnesota's bridge showing her forward 12-inch main gun. Letters spelling her name are mounted on her bridge; these were lighted, and at night during public events she would turn them on, showing her name to all who could see it. The photo is not identified, but her original fore-mast, not the cage style of her later years, is clearly visible. This, together with the fact that the letters appear on her bridge, dates the photo to the Great White Fleet days of 1908-1909. Several civilian visitors can be seen on her decks, among them at least one lady standing on the left side with a man in a derby hat carrying a walking cane or umbrella.
A horrible and mysterious accident killed 11 midshipmen and sailors from the Minnesota and Connecticut on the night of Monday, June 10, 1907. Both battleships had been in the Hampton Roads, Virginia area to participate in the Jamestown Exposition. The 11 men were heading back to their ships aboard a steam launch from the Minnesota just after midnight when the accident occurred. The launch never returned to the ship, and the men were not seen alive again. The seas were rough, and some thought a wave had overturned the craft; others thought a tug pulling a barge had plowed into the small boat. At first no bodies were found, only some caps, capes and clothing belonging to the men and a torn awning from the launch.
By Wednesday, June 12, more facts about the disaster in Hampton Roads on Monday night were being uncovered. The sinking of the Minnesota’s launch had caused the drowning of 11 navy men. The dead included six midshipmen returning from an army and navy ball at the Jamestown Exposition, and five seamen, the crew of the steam launch. The sinking resulted from the launch being run down by a tug towing a coal barge which, owing to the darkness, did not see the launch from the Minnesota.
That same day Rear Admiral Evans, Commander-in-Chief of the fleet, sent a dispatch to the Navy Department informing it that the sinking had killed six young midshipmen fresh from Annapolis, and the crew of the motor launch, a boatswain and four enlisted men. His dispatch read in part, “A ditty box belonging to the fireman of the Minnesota's missing launch has been picked up near berth No. 47, and I am forced to conclude that the launch with all on board is lost. I have ordered a board of investigation. Steamer last seen at exposition pier about midnight last night.”
The Acting Secretary of the Navy, Truman H. Newberry, sent telegrams to the relatives notifying them of the disappearance of the Minnesota's launch. Following are the facts regarding the next of kin and other details so far as known at the Navy Department:
Phillip H. Field was born in Albemarle County, Virginia, January 3, 1885, and is the son of Wm. C. Field, of Denver, Colo. He graduated from the naval academy in 1906 and was appointed to that institution from Colorado on recommendation of Senator Patterson.
William Hollister Stevenson, of Newbern, North Carolina, is the son of M. B. Stevenson. He graduated in 1906.
Franklin Portens Holcomb, born at Newcastle, Delaware, son of Thomas Holcomb, a clerk in the comptroller’s office at the treasury department and brother of Lieutenant Thomas Holcomb, of the U. S. Marine Corps. He was appointed to the naval academy as a cadet at large from Delaware on the recommendation of Representative Houston. He graduated in February of 1907.
Herbert Leander Holden, son of Susan A. Holden, of Portage, Wisconsin, was born in Chicago, May 6, 1885, and was appointed from Wisconsin on the recommendation of Representative Adams. He graduated in February of 1907.
Henry Clay Murfin, Jr., son of Henry Clay Murfin, of Jackson, Ohio, was born in that city January 1, 1885. He was appointed to the academy from Ohio at the instance of Representative Morgan. He graduated in February of 1907.
Walter Carl Ulrich, son of Carl Ulrich, of Milwaukee, born at La Crosse, Wisconsin, April 10, 1884. He was appointed to the academy at the instance of Representative Otjen and graduated in February of 1907.
The crew of the Minnesota’s steam-launch:
Seaman, Robert H. Dodson, next of kin, father, E. F. Dodson, 158 West Eighty-Fourth Street, New York.
Coal Passer, Jesse Conn, next of kin, father, J. C. Conn, 2824 Cleveland Avenue, Louisville, Kentucky.
Boatswain, Frank R. Plumber, next of kin, mother, Eada Kitchen, of Mabton, Washington.
Ordinary Seaman, Harley L. VanDorne, next of kin, father, C. L. VanDorne, 318 Sixth avenue west, Cedar Rapids, Iowa.
Fireman, First Class, George Westphal, next of kin, sister, Mrs. C. B. Harding of Meenab, Wisconsin.
The conclusion reached at the Navy Department was that the Minnesota's launch, hurrying on account of the lateness of the hour, had either been driven hard into the heavy sea that prevailed in Hampton Roads at the time, or had been run down by one of the giant tramp steamers that used the roads as a refuge in time of storm.
Lieutenant Randall, USMC, who was included in the first list of the missing, arrived safely in Norfolk. He had not taken passage on the Minnesota’s steam launch, as had been supposed, having missed the launch and stayed overnight at a hotel. While the launch carried a good-sized party, no one could be found who could say exactly how many occupants it contained. The men in the launch, according to those who saw it leave the dock, were in high spirits after an evening of dancing. How the launch, with so many airtight compartments, could have been lost was at first a matter of speculation at the Navy Department. One theory was that it had been run into and cut in half by a passing vessel, which may have passed completely over the unfortunate occupants; a second theory was that the boiler in the launch had exploded, tearing the launch apart and killing the occupants.
Eight days after the accident, on June 18, only eight bodies had been recovered. Only the bodies of Midshipman Henry C. Murfin and of Plumber and Conn had not yet been recovered at that time.
By the 19th of June all but Midshipman Henry Clay Murfin, Jr. had been recovered. Just as the battleships Ohio, Iowa, Maine, and Indiana sailed from Hampton Roads on June 19th for the Southern target ranges, an order was posted from the flagship aboard each battleship offering a reward of $50 for the recovery of the body of Midshipman Murfin.
Four days after the accident, following a long and exhausting search using dredgers worked from launches of the battleships Ohio and Iowa, the Minnesota’s sunken launch was discovered in 27 feet of water near Fort Wool, two miles from the pier it had left.
Divers from the USS Indiana went down to observe the launch at 5 o'clock on the afternoon of June 14 and found a piece of towline across the crushed-in canopy of the launch. The divers also reported that the heads and arms of three of the men still in the launch were protruding from beneath the canvas covering. It appeared that these men had made a desperate fight for life as they were carried down, trapped inside.
The officers who found the sunken launch surmised that when the Minnesota's launch left the pier Monday night it would have started for the USS Connecticut to put Midshipman Holcomb aboard, since that was his ship. They guessed that the launch had been run down somewhere between the point where she would have started to cross the main channel and the battleship Connecticut. Sure enough, this theory was correct, and that was where they found the sunken launch. It was then discovered that the tug Crisfield, coming from Cape Charles, had been in that exact area at the time. It was surmised that as the launch approached the towline from the Crisfield she ran under it, the line pinning the smaller boat and preventing her escape, and that she then struck the square bow of the barge and was rolled under its hull. This would in effect have quickly forced the launch into the depths and rolled her along the length of the barge until the barge had completely run over her, giving the men in the launch no chance of escape.
On the morning of June 15, at daylight, divers again went down to raise the craft. On the surface were two floating derricks to which the divers attached cables. As far as the divers could tell, the hull of the launch was not damaged; only the canopy frame was crushed in. This indicated that a collision with a tug's bow had not taken place, supporting the theory that the launch had rolled under the square bow of the barge while caught under the towline.
By June 20 a Naval Board of Investigation had begun its work and found that the steam launch from the Minnesota had somehow become tangled in a steel hawser from the tug Crisfield, which was at the time towing a barge carrying a number of loaded freight cars from Cape Charles to Norfolk.
The findings of the Naval Board stated that no criminal responsibility attached to the officers of the Crisfield, who did not know they had fouled the Minnesota’s launch that evening. Upon recovery of the launch from the bottom of the bay it was found that the machinery was intact, showing that she had not broken down and foundered out of control. Naval officials concluded that the sinking was not due to unseaworthiness but to some sort of collision.
The recovered steam launch of the Minnesota, in which 11 midshipmen and sailors lost their lives in the early morning hours of June 11, 1907.
An article in the 18 July 1907 edition of The Washington Post reported many desertions from some of the navy ships in the Hampton Roads area, including as many as 100 desertions from the Minnesota alone. The article reads: Norfolk, VA, 17 July 1907. "That there are wholesale desertions from warships at Hampton Roads is indicated by the statement that in the past few weeks 100 deserters have been listed and advertised from the battleship Minnesota alone. The local police yesterday were notified of 15 desertions. The lists are coming in daily. It was stated at the Navy Department last night that there was no official information there regarding wholesale desertions from the Minnesota. Captain Hubbard, of the Minnesota, was at the Navy Department yesterday, but made no report on the subject. The department has several times investigated reports regarding desertions at that port, but, according to the department, without finding the situation very serious."
On 16 December 1907 Minnesota departed Hampton Roads as one of the 16 battleships sent by President Theodore Roosevelt on a voyage around the world, more commonly known as the cruise of the “Great White Fleet.” In 1907, President Theodore Roosevelt, for reasons of national prestige and to test the ability of the American Navy to respond to potential crises in the Pacific, decided to dispatch the battleships of the Atlantic Fleet on what became an around-the-world cruise. The voyage, regarded by President Roosevelt as a dramatic gesture to the Japanese, who had only recently emerged on the world stage as a power to be reckoned with, proved to be a signal success, with the ships performing so well as to confound the doomsayers who had predicted a fiasco. This force, the largest concentration of American naval power sent to the Pacific to that time, was known as the Great White Fleet, owing to the soon-to-be-discarded practice of painting American warships with white hulls and spar-colored upper works. Commanded by Rear Admiral Robley Evans, the last Civil War veteran on active naval duty, the fleet of battleships, along with a torpedo flotilla and some auxiliaries, sailed from Hampton Roads in December 1907, arriving in San Francisco the next May after traveling around South America.
As the Minnesota steamed out to sea with the fleet, there was under Captain Hubbard’s command a young officer who had graduated from the Naval Academy in 1903. This officer, Harold Rainsford Stark, would serve aboard the Minnesota throughout the entire cruise of the fleet. Stark would rise through the ranks of the navy, eventually reaching flag rank as a Rear Admiral. Before America entered World War II, Stark became Chief of Naval Operations, with the rank of Admiral. In that position, he oversaw the great expansion of the Navy during 1940-41, its involvement in an undeclared war against German submarines in the Atlantic during the latter part of 1941, and the combat operations against Japan and the European Axis powers that began in December 1941. Admiral Harold R. Stark died on 21 August 1972 after a career of over 43 years of active service.
The fleet arrived in San Francisco on May 6, 1908 from Magdalena Bay, Mexico for a huge celebration hosted by the City of San Francisco. As each ship passed Fort Point it fired a 21-gun salute, which was answered with a salute from land. Crowds flocked to San Francisco to see the fleet, and on May 8, 1908 "The Great Naval Parade" was held in the city. Standing on the decks of the Minnesota was a young junior officer by the name of Raymond Spruance, and serving aboard the USS Kansas was another junior officer, William Halsey; both Spruance and Halsey were to play major roles in the Pacific Theater during World War II.
Approximately 14,000 sailors made up the crews of the ships of the Great White Fleet. During the voyage, 300 sailors deserted their ships, and more deserted in California than anywhere else. More than 200 of the deserting sailors stayed behind to marry local girls, and so the postcard claiming that "California Captured the Atlantic Fleet in 1908" has some merit.
An undated photo of a burial at sea from the after quarter deck of the Minnesota. Note the line of Battleships behind the Minnesota. They are painted in the "Spar and White" colors that the fleet was painted
during the cruise of the Great White Fleet, so this may date the photo sometime during 1908-09.
The photo on the right shows a view of the Minnesota's stern showing her 12-inch after turret. This photo was taken when the Great White Fleet visited Sydney, Australia in August of 1908. During the visit the ships of the fleet were opened to visitors and the Minnesota's decks are crowded with many curious "Aussies"
Russell Witherow who today lives in Australia was looking into his family history and in the family stories there is one of a man referred to only as "J. Witherow" who supposedly jumped ship from the Minnesota while she was in Australia. In further investigating this story there is a name that appears on a list of sailors from the Great White Fleet, Atlantic Fleet bound for the Pacific 16 December 1907. On this list a name of "J. E. Withrow" appears from the USS Minnesota. So it can be guessed that these two names of "J. Witherow" and "J. E. Withrow" could be one in the same man. There is a difference in spelling of the last names but they are so close and one could imagine that the spelling on the 16 December 1907 list may not be correct.
Additionally there is an article in the Thursday Evening edition of the Oakland Tribune for August 27, 1908 newspaper, which details how that when the Great White Fleet was visiting Sydney, Australia there were over 80 sailors who missed their ships when the fleet sailed from Sydney bound for Melbourne. It was said that due to the large number of sailors who missed their ships that they would not be charged as deserters and would be listed as accidental. Later that day 50 of the sailors were embarked on the Yankton and ferried to Melbourne to meet up with their ships. This may have been the event that was spoken about in the Russell Witherow family stories.
Nothing more about the "J. E. Withrow" from the 16 December list is known, but there are two names that could match this listing. One is a John Withrow who was listed as a prisoner serving sentence at the Folsom State Prison in California in April of 1910. This man was born about 1872 in California and he could have been the "J. E. Withrow" listed from the Minnesota. He would have been old enough and if he was caught as a deserter he may have served his sentence in Folsom. Or there is a second man named James B. Withrow listed as a sailor on the USS Stockton in 1920. This man was born in Indiana about 1882 and was a storekeeper aboard the Stockton. Being that he was a single man and serving in the navy and could have been old enough in 1908 to be in the navy so this also could be the "J. E. Withrow" from the 16 December list. It will never be known for sure who this man was but it is very likely that the "J. E. Withrow" from the 16 December list and the story from Russell Witherow about "J. Witherow" are one in the same.
In San Francisco Admiral Evans, in reality too ill to have even sailed with the fleet, turned over command, first to Rear Admiral Charles Thomas for a week, then to Rear Admiral Charles Sperry. On July 7, 1908 the fleet was reassembled under the command of Rear Admiral Charles Sperry and bid farewell to San Francisco and departed for Honolulu, Territory of Hawaii and then to New Zealand, Australia, the Philippines, China, and, most notably, Japan before returning to the US in February 1909 via Ceylon, the Suez Canal, and the Mediterranean. The cruise began eight days before Christmas of 1907, and ended on Washington's Birthday, 22 February 1909. During the course of the voyage, the ships called at ports along both coasts of South America; on the west coast of the United States; at Hawaii; in the Philippines; Japan; China; and in Ceylon.
During October of 1908 Minnesota was anchored in Manila Bay, Philippines. While aboard the Minnesota on November 4, 1908, Fireman Second Class John Henry Clear is badly scalded by steam in an accident in the engine room of the ship. He is rushed ashore to the Naval Hospital in Canacao, Philippines but died of his burns on November 9, 1908.
The Minnesota was flagship of the Third Division with Captain Hubbard still in command. The Third Division was under the command of Rear Admiral Charles Thomas. The other three Battleships in the Third Division were the USS Maine, USS Missouri and the USS Ohio. On the return leg of the cruise Minnesota was shifted into the First Division under the command of Rear Admiral Charles Sperry, along with the battleships Connecticut (Flagship), Kansas and Vermont.
Returning from her world cruise in 1909, Minnesota resumed operations with the Atlantic Fleet. In early 1910 she underwent some refitting and her original foremast was replaced with the newer cage style foremast leaving her aft-mast in the original form, as well as her superstructure was modified. It would not be until the next year in 1911 that her original aft-mast was removed and replaced with the cage style mast. Also during her 1910 re-fit she was completely repainted from the Great White Fleet colors of Spar and White to the standard Navy Gray paint. During the next three years she operated primarily along the east coast, with one brief deployment to the English Channel.
On 31 October 1911 Secretary of the Navy Meyer reviewed 102 Naval vessels in New York harbor, which was the largest assemblage of United States warships reviewed at that time. The crowd assembled to look at the great warships numbered in the hundreds of thousands. Each ship was decked out with all the trimmings and each sailor was dressed in his whites making quite a sight to the onlookers. The Minnesota was one of the 17 battleships there that day along with the cruisers Washington and North Carolina.
In 1912, her employment schedule began to involve her more in inter-American affairs. During the first half of that year she cruised in Cuban waters and was stationed at Guantanamo Bay, from June 7-22, to support actions aimed at establishing order during the Cuban insurrection.
The following spring and summer 1913 she cruised in Mexican waters. Life was somewhat mundane during her Mexican duty as recorded by her logs. On 11 June 1913 Minnesota was anchored in a Mexican harbor with the USS Idaho. During the day the Idaho left the harbor for sub-caliber target practice. The Division Commander inspected the boats of the Minnesota, under oar and sail. Red Cross relief steamer Mexicano sailed with American refugees for Tempico and Galveston. Thursday the 12th of June the Minnesota held general quarters drill and the mail left via the French steamer Respangne and Ward-Line steamer Esperania. On Friday the crew preformed routine maintenance and painting and the mail was received. Saturday the ship and her crew were inspected by her Commanding Officer. Later in the day a recreation party landed at Sacrifacio Island. The crew of the Minnesota was invited to a smoker on board the German cruiser Bremen, which was enjoyed very much by those who attended from the Minnesota. On Sunday the 15 of June, the German cruiser Bremen left the harbor for Trinidad. More sailing and swimming parties from the Minnesota landed at Verile and Sacrifacio Islands. Monday the 16th, routine maintenance and painting was again the duty of the day. Minnesota sent her mail via Hamburg-American line steamer. The next day on the 17th of June 1913 brought continuous rain from midnight throughout the day. The most exciting thing that happened all week long was a small fire on board when the lead from the wireless aerial burned out. Fire-quarters were sounded and the ships crew put out the fire quickly.
In 1914, Minnesota under command of Captain Edward Simpson, USN, twice returned to Mexican waters (January 26 to August 7 and October 11 to December 19) as that country continued in the throes of political turmoil. On January 21, a battalion of marines, consisting of 11 officers and 387 enlisted men, under the command of Major Smedly D. Butler, U.S.M.C., stationed at Panama, reported on board the U.S.S. Minnesota at Cristobal, Canal Zone and sailed the same day for Vera Cruz, Mexico, where the Minnesota arrived on January 26, 1914. The Marine battalion participated in the occupation of Vera Cruz and in the engagement actions that followed. The battalion was designated as the Third Battalion, Second Advance Base Regiment, and was detached for duty with the U.S. Army, April 30, 1914. While in the Canal Zone on March 7, 1914 the Minnesota became the first warship to tie up at the newly finished government docks in Colon where she loaded 600 marines bound for Mexican waters. The landing force from the Minnesota that landed from April 22 through June 20, 1914 was under command of Lt. R. R. Adams.
In 1915, Minnesota resumed east coast operations, with occasional cruises to the Caribbean area, which she continued until November 1916 when she became flagship, Reserve Force, Atlantic Fleet. During this time she was under the command of Captain Casey B. Morgan and his Executive Officer was Lt. Commander Thomas C. Hart.
On 6 April 1917, as the United States entered World War I, Minnesota rejoined the active fleet at Tangier Sound, Chesapeake Bay, and was assigned to Division 4, Battleship Force as flagship. The 4th Division was made up of the Minnesota, South Carolina and the Michigan. During World War I she was assigned to be the gunnery and engineering training ship, and cruised off the middle Atlantic seaboard until September 29, 1918.
On the 29th of September, 20 miles from Fenwick Island Shoal Lightship (38 d. 11'N, 74 d. 41'W.) Minnesota struck a mine, apparently laid by the German submarine U-117. Suffering serious damage to the starboard side, but with no loss of life, she managed to reach Philadelphia where she underwent 5-months of repairs. During this time of repairs at the Philadelphia Navy Yard she had her 7-inch 45 cal. Broadside Batteries removed.
Captain Jehu Valentine Chase was the commanding officer of USS Minnesota when she struck the mine in September of 1918. Chase was awarded the Distinguished Service Medal in recognition of his splendid seamanship and leadership in bringing his ship safely to port without loss of life. Captain Chase was promoted to Admiral and was Commander-in-Chief, United States Fleet, from 17 September 1930 to 15 September 1931, and Chairman of the General Board from April 1932 until his retirement in February 1933. Jehu Valentine Chase was born in Pattersonville, Louisiana, 10 January 1869, and graduated from the Naval Academy 6 June 1890. He died at Coronado, Calif., 24 May 1937. USS Chase (DE-158) was named in his honor. Admiral Chase was buried with full military honors in Section 1 of Arlington National Cemetery. His wife, Mary Taylor Chase (1873-1950) is buried with him.
During the events of the mine explosion on September 29, Officers and men throughout the ship sprang into action to save the ship. Men in the Engineering division worked tirelessly while risking their own life. One such sailor was Murphy G. Carpenter who was mentioned in the Captains Report for the “Efficient and prompt manner in which he directed the shoring of bulkheads and compartments.” His work was fearless and untiring and required him to enter and work in compartments permeated with gas, as a result of which he was eventually overcome.
But the Engineering Division men where not the only ones to show coolness and quick thinking that day, Lt. F. M. Smith of the Medical Corps distinguished himself that day also. He was recognized by the Secretary of the Navy for Heroic work in removing the sick and wounded men to the upper decks due to the Sick Bay being filled with asphyxiating gasses at the time.
An undated photo of the Minnesota's Engineering Division taken on the after turret.
Minnesota put back to sea on March 11, 1919 as a unit of the Cruiser and Transport Force. In a report of ship locations of the Cruiser and Transport Force the Minnesota was at Hampton Roads and sailed for Brest, France on April 1, 1919. Other battleships at Hampton Roads with the Minnesota on April 1 were the Connecticut, Georgia, Kansas and the Vermont. She was assigned to that force until July 23, where she had completed three round trips to Brest, France, to return over 3,000 veterans to the United States.
The Minnesota was now commanded by Captain Raymond D. Hasbrouck who had commanded the troopship USS Covington when she was Torpedoed and sunk on July 1-2 1918. Primarily employed thereafter as a training ship, Minnesota conducted two midshipmen summer cruises (1920 and 1921) under the command of Captain George Loring Porter Stone. On September 24, 1921 Captain Stone was relieved of duty as the skipper of the Minnesota and was given command of the battleship USS Connecticut. Captain Powers Symington took command of the ship after Captain Stone left, making Captain Symington the last Commanding Officer of the Minnesota when she was decommissioned on December 1, 1921.
The Minnesota, now obsolete in the eyes of the Navy Department, was struck from the Naval Register the same day. She lingered on in storage until finally arriving at the Philadelphia Navy Yard on January 23, 1924 where she was dismantled and sold for scrap.
Undated port side view of the Minnesota.
These are men who served the mast on board the USS Minnesota. If you know of someone or have a family member who served this ship please contact me and I will add a profile of that man.
James Hurd was from Italy, Texas and was the son of Mr. and Mrs. G. W. Hurd. James entered the Navy in April of 1917 and took his basic training at Great Lakes Naval Station. He was assigned to duty on the Minnesota which was then in Cuban waters. He made at least one trip to France during WWI and at wars end was still in the Navy.
William Raymond Rawlings was born on July 9, 1898 in Granite City, IL to John H. and Minnie Belle (DeClare) Rawlings. As young William developed into a young man war clouds over Europe were also developing. William, who was working as a shipping clerk applied and was accepted into the United States Marine Corps at Chicago, IL and then reported to Paris Island, SC for service during the duration of the war on July 17, 1917.
Private Rawlings was promoted to Private First Class on December 22, 1918 and on January 10, 1919 was advanced temporary warrant and then on March 24, 1919 in a letter from the Major General Commandant was reduced back to Private. The reason was his rank of Corporal was a temporary appointment during the war.
As a Private he reported for sea duty on board the battleship USS Minnesota on October 24, 1917 and he would remain on her until April 23, 1918. During his service Pvt. Rawlings held Excellent marks in Military Efficiency, Obedience and Sobriety.
Among the many papers that survive today preserved by his son, William R. Rawlings, Jr. is a document dated June 12, 1918 where private Rawlings signed in receipt of 1 rubber poncho from the Quartermaster Department.
After his sea duty aboard the USS Minnesota, Pvt. Rawlings was retained in the Marine Corps but it is not known for sure where he served. He remained in the Marine Corps until he was discharged on September 10, 1919 where he was paid $131.55 upon his discharge from the Marine Corps.
On November 9, 1920 William Rawlings received from the Headquarters of the Marine Corps, a Good Conduct Medal, No. 30508 and a certificate signed by Captain E. H. Jenkins Aide-de-Camp, USMC, for his service in the Marine Corps from July 17, 1917 September 10, 1919.
William returned to his home in Illinois for a short time and then in 1921 married. He married Helen Clark who was from New Jersey. William now worked as an auto mechanic and did work for the Lynch Brothers Auto in Boston, MA for a time. William continued to work as an auto mechanic for the rest of his life. As early as 1929 William and Helen moved to Belmont, MA where they started a family. Robert the eldest son was born about 1922 and later another son named William R. Rawlings, Jr. was born sometime after 1930. William Rawlings, Sr. was active in the local Belmont, A.F. & A. M. Lodge from at least 1929 1934. The Rawlings family lived in rented homes and during the time they lived in Belmont lived in at least 4 different addresses during the 1930’s before finally settling in the home at 124 Pine Street in Belmont, MA.
William Had hearing loss from his service in the Marine Corps and did wear a hearing aid during the 1950’s as this is known from the family and also from documents from the Veterans Administration.
On May 14, 1959 in Belmont William R. Rawlings, Sr. passed away in his sleep of heart failure. His wife and two sons survived him. William’s funeral was held by the Rev. Dr. D. Joseph Imier of the Belmont Methodist Church and then was buried in the Lawnside Cemetery in Woodstown, NJ.
Pvt. Rawlings, WWI
Rawlings in Dress USMC Uniform
A small metal button with a
hand colored photo of Pvt. Rawlings
Photos and information of Pvt. Rawlings provided by William R. Rawlings, Jr.
The grandfather of Steve Matthews of Savannah, Georgia was aboard the Minnesota struck a mine on September 30, 1918. His name was Emmett C. Matthews and among his effects that was left to his grandson, was a small picture of the ship undergoing repairs enclosed with a record of some of the details. The discription on the photo is very brief and the paper is deteriorating. It reads as follows;
|"U.S.S. Minnesota on September 30th, 1918. Torpedoed off of Delaware Capes, Longitude 39 West, Latitude 73' 30" North. Hit on starboard bow, tearing a forty foot hole from armor plate to keel, and from beam 10 in compartment A to beam 50 in compatrment B. Made port of Philadelphia in 14 hours under own power. 1200 men and 100 Officers in crew. No lives lost but several injured and overcome by gas."|
Emmett C. Matthews, born about 1898 was from Owensboro, Kentucky and it is not known what his rating was in the navy but later in his life he was a Steam Fitter. Shortly after leaving the navy Emmett was married about 1920. His wife's first name was Ethyln and was 2 years younger than Emmett and was from Kentucky. In April of 1930 Emmett and Ethlyn Matthews lived in Cincinnati, Ohio where he worked as a barber. At the time they lived in a rented house where the rent was $30 per month. The family in April of 1930 consisted of Emmett and Ethyln with eldest son Emmett, jr., born about 1922; a second son named Milton born about 1925 and a third son named Raymond born about the end of 1929. All three of the boys were born in Ohio.
Emmett and his wife live most all of thier life in the Cincinnati, Ohio area and on October 31, 1985 at the age of 87 Emmett Matthews passed away in the University Hospital in Cincinnati.
Gunners Mate 1c, Demah Henry Jacob Higginbotham, USN
When raw steel is formed into the hull of a United States Navy ship of the line, it becomes the most important part of the vessel. In fact if the steel of the hull were not of the highest quality and well cared for the ship would sink to the bottom becoming useless. But much of the time the hull is taken for granted, much like many of the men who sail that ship, the men who give life to the steel hull. Such a man was Demah Henry Jacob Higginbotham who was a sailor aboard the USS Minnesota (BB-22).
Demah Henry Jacob Higginbotham was born on February 24, 1890 in Bowling Green, Kentucky. When the Federal Census of 1910 was taken aboard the Battleship USS Minnesota the name of Seaman Demah Higginbotham appears. Seaman Higginbotham was at the time a 24-year old seaman and was single. Higginbotham would serve aboard the Minnesota from 1910 through at least 1912 steaming along the east coast of the United States as a ship in the Atlantic Fleet.
During World War One Higginbotham served in the navy and may have continued serving in the navy from 1910. On his gravestone his final rating of Gunners Mate First Class is listed, so it seems that he worked his way up through the ranks. While in the navy he held two sharpshooter medals.
After he was discharged from the Navy Higginbotham settled back into civilian life and in April of 1930 was living in a home he owned on Williams Avenue in Barrington, NJ. The home in 1930 was valued at $4,500 and he was now married to Wilhelmina who he had married about 1913 while likely still in or just out of the Navy. About 1914 the first child a son named Demah H. Higginbotham was born. And then two years later a girl Katherine W., followed by another girl in 1922 named Virginia and then still a third girl in 1928 named Jessie R. All the children were born in Pennsylvania so it seems that the family must have lived in that state for some time before moving to New Jersey. Demah was working as a machinist in a mill when the family lived in Barrington. According to the 1930 Federal Census the Higginbotham family owned a radio set in the home so being that was one of the few luxury items at the time they must have been frugal enough to purchase a radio set.
Demah Henry Jacob Higginbotham would pass away on April 7, 1955 in Roanoke, Virginia at the age of 65 years.
This photo is the grave stone of Ordinary Seaman Robert Patterson. He is buried in the Hollywood Cemetery in McComb, Mississippi beside his sister who died in 1904, age 4; his father John P. who died in 1910 and his mother Elizabeth who died in 1964. Nothing more is known about the circumstances of his death. The enscriptios reads,
This page is owned by Joe Hartwell ©2004-2014
This page was created on 5 January, 2004 and last modified on: | <urn:uuid:d78243b1-9033-42af-bc36-8a98be7c7a74> | CC-MAIN-2016-26 | http://freepages.military.rootsweb.ancestry.com/~cacunithistories/USS_Minnesota.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395346.6/warc/CC-MAIN-20160624154955-00089-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.983114 | 8,103 | 2.53125 | 3 |
The week of May 3-9 is Severe Weather Awareness Week in the Pacific Northwest, including the states of Idaho, Oregon and Washington.
This is an excellent time for all individuals, families, businesses, schools, radio and television stations to review their spring and summer storm preparedness plans. It is especially important for new arrivals to the Pacific Northwest to become familiar with NOAA’s National Weather Service Watch and Warning definitions, and their safety procedures.
Spring in the Pacific Northwest can bring snow one day, then thunderstorms the next. The chance of severe thunderstorms will be increasing through the next several weeks. Are you prepared for severe thunderstorms that produce large hail, tornadoes, flash flooding, mudslides and even lightning caused wildfires? Are you ready for storms along the coast? This is the time to learn more about severe weather, develop severe weather preparedness plans, and test vital communications.
To help our communities learn more about these dangers, NOAA’s National Weather Service will issue Public Information Statements throughout the week to discuss:
MONDAY – Flood and flash flood safety
TUESDAY – Tornadoes and tornado safety, or Special Marine Warnings
WEDNESDAY – Wind, Hail, and Lightning safety
THURSDAY – Wildfires
FRIDAY – NWS Watch and Warning program
SATURDAY – NOAA Weather Radio
Remember, in times of severe weather, you can get all these vital NOAA/National Weather Service messages via NOAA Weather Radio, your favorite local media, or through NOAA’s National Weather Service websites.
This message is brought to you by your local NOAA National Weather Service office in Portland. | <urn:uuid:fb0d7cf1-2658-438d-a126-76bbc605981f> | CC-MAIN-2016-26 | http://www.642weather.com/weather/wxblog/announcements/severe-weather-awareness-week-2009/ | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783404826.94/warc/CC-MAIN-20160624155004-00011-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.888705 | 339 | 2.859375 | 3 |
Researchers Infiltrated Facebook Using One Experimental Botnet
A botnet that 4 security researchers built was recently used to infiltrate Facebook to show how easy it was for exploiting social-networking websites. PCWorld.com published this on November 2, 2011.
Reportedly, the experiment had the researchers from the University of British Columbia use a huge 102 fake friends on Facebook to demonstrate how it was possible to dig out personal information related to users that otherwise weren't openly shared on the website and the insufficiency of its protective measures for tackling an enormous size of infiltration.
The researchers, who carried out the scheme for 8 weeks, garnered 250GB of data from numerous members of the social network and over 3,000 members befriended the "sockpuppet" bots; as a result, the network accessed over 1m profiles.
Also, the researchers in their simulated attack against Facebook utilized one novel type of bot-network known as "socialbot." This one is different from other bots in that it behaves like a human. Consequently, it gains an advantageous stature within the social-networking website that of a "friend."
Incidentally, it costs just $29 to buy socialbots that cyber-criminals profusely use today, the researchers note.
Elaborating on the experiment, the researchers stated that when socialbots invaded one targeted social-networking site, they could additionally dig out its members' information including phone numbers, e-mail ids along with their financial information. Thinq.co.uk published this on November 2, 2011.
The above data could be useful for miscreants seeking to create profiles online as well as execute huge phishing or spam scams, the researchers added.
Responding to the aforementioned experiment, Facebook stated that the situation could hardly happen for real as the bots' Internet Protocol addresses had a connection with an academic institutional source that everyone trusted, while the Internet Protocol addresses the actual miscreants utilized would have resulted in anxiety. Bbc.co.uk published this on November 2, 2011.
A Spokesperson of Facebook stated that the company had many systems created for identifying bogus accounts as also restricting abrasion of data. Further, there was a continuous updating of such systems for enhancing their efficacy as also for tackling fresh types of assaults, the Spokesperson added. Bbc.co.uk reported this.
» SPAMfighter News - 09-11-2011 | <urn:uuid:315870cc-7e62-45cd-85ee-c968880a89a2> | CC-MAIN-2016-26 | http://www.spamfighter.com/News-17017-Researchers-Infiltrated-Facebook-Using-One-Experimental-Botnet.htm | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783403508.34/warc/CC-MAIN-20160624155003-00183-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.952435 | 496 | 2.625 | 3 |
LDAP, Lightweight Directory Access Protocol, is an Internet protocol that email and other programs use to look up information from a server.
LDAP is mostly used by medium-to-large organizations. If you belong to one that has an LDAP server, you can use it to look up contact info and the like. Otherwise, if you were just wondering about this acronym, you probably don't need it. But feel free to read on to learn the story of this bit of Internet plumbing.
Every email program has a personal address book, but how do you look up an address for someone who's never sent you email? How can an organization keep one centralized up-to-date phone book that everybody has access to?
Those questions led companies such as Microsoft, IBM, Lotus, and Netscape to support a standard called LDAP. "LDAP-aware" client programs can ask LDAP servers to look up entries in a wide variety of ways. LDAP servers index all the data in their entries, and "filters" may be used to select just the person or group you want, and return just the information you want. For example, here's an LDAP search translated into plain English: "Search for all people located in Chicago whose name contains "Fred" that have an email address. Please return their full name, email, title, and description."
LDAP is not limited to contact information, or even information about people. LDAP is used to look up encryption certificates, pointers to printers and other services on a network, and provide "single sign-on" where one password for a user is shared between many services. LDAP is appropriate for any kind of directory-like information, where fast lookups and less-frequent updates are the norm.
As a protocol, LDAP does not define how programs work on either the client or server side. It defines the "language" used for client programs to talk to servers (and servers to servers, too). On the client side, a client may be an email program, a printer browser, or an address book. The server may speak only LDAP, or have other methods of sending and receiving dataLDAP may just be an add-on method.
If you have an email program (as opposed to web-based email), it probably supports LDAP. Most LDAP clients can only read from a server. Search abilities of clients (as seen in email programs) vary widely. A few can write or update information, but LDAP does not include security or encryption, so updates usually require additional protection such as an encrypted SSL connection to the LDAP server.
If you have OS X and access to an LDAP server, you can enter your LDAP account into System Preferences--Internet Accounts. At bottom of the right pane, click Add Other Account, then choose the LDAP account option. This lets Address Book look up info from your server.
LDAP also defines: Permissions, set by the administrator to allow only certain people to access the LDAP database, and optionally keep certain data private. Schema: a way to describe the format and attributes of data in the server. For example: a schema entered in an LDAP server might define a "groovyPerson" entry type, which has attributes of "instantMessageAddress", and "coffeeRoastPreference". The normal attributes of name, email address, etc., would be inherited from one of the standard schemas, which are rooted in X.500 (see below).
LDAP was designed at the University of Michigan to adapt a complex enterprise directory system (called X.500) to the modern Internet. X.500 is too complex to support on desktops and over the Internet, so LDAP was created to provide this service "for the rest of us."
LDAP servers exist at three levels: There are big public servers, large organizational servers at universities and corporations, and smaller LDAP servers for workgroups. Most public servers from around year 2000 have disappeared, although directory.verisign.com exists for looking up X.509 certificates. The idea of publicly listing your email address for the world to see, of course, has been crushed by spam.
While LDAP didn't bring us the worldwide email address book, it continues to be a popular standard for communicating record-based, directory-like data between programs. | <urn:uuid:3220d904-7def-4deb-820d-5b32628cfbf6> | CC-MAIN-2016-26 | http://www.gracion.com/server/whatldap.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783394414.43/warc/CC-MAIN-20160624154954-00053-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.930637 | 897 | 3.125 | 3 |
Most people in churches nowadays have never read through the Bible even once; the older Christian habit of reading it from start to finish as a devotional discipline has virtually vanished. So in describing the Bible we start from scratch, assuming no prior knowledge.
The Bible consists of 66 separate pieces of writing, composed over something like a millennium and a half. The last 27 of them were written in a single generation: they comprise four narratives about Jesus called Gospels, an account of Christianity’s earliest days called the Acts of the Apostles, 21 pastoral letters from teachers with authority, and a final admonition to churches from the Lord Jesus himself, given partly by dictation and partly by vision. All these books speak of human life being supernaturally renovated through, in, with, under, from and for the once crucified, now glorified Son of God, who fills each writer’s horizon, receives his worship, and determines his mind-set at every point.
Through the books runs the claim that this Jesus fulfills promises, patterns and premonitions of blessings to come that are embodied in the 29 pre-Christian books. These are of three main types: history books, telling how God called and sought to educate the Jewish people, Abraham’s family, to worship, serve and enjoy him, and to be ready to welcome Jesus Christ when he appeared; prophetic books, recording oracular sermons from God conveyed by human messengers expressing threats, hopes and calls to faithfulness; and wisdom books which in response to God’s revelation show how to praise, pray, live, love, and cope with whatever may happen.
Christians name these two collections the Old and New Testament respectively. Testament means covenant commitment, and the Christian idea, learned from Paul, from the writer to the Hebrews, and from Jesus himself, is that God’s covenant commitment to his own people has had two editions. The first edition extended from Abraham to Christ; it was marked throughout by temporary features and many limitations, like a non-permanent shanty built of wood on massive concrete foundations. The second edition extends from Christ’s first coming to his return, and is the grand full-scale edifice for which the foundations were originally laid.
The writer to the Hebrews, following Jeremiah’s prophecy, calls this second superstructure the new covenant, and explains that through Christ, who is truly its heart, it provides a better priesthood, sacrifice, place of worship, range of promises and hope for the future than were known under its predecessor. Christians see Christ as the true center of reference in both Testaments, the Old always looking and pointing forward to him and the New proclaiming his past coming, his present life and ministry in and from heaven, and his future destiny at his return, and they hold that this is the key to true biblical interpretation.
Christians have maintained this since Christianity began.--J. I. Packer, Taking God Seriously: Vital Things We Need to Know (Crossway, 2013), 21-22 | <urn:uuid:c5d2960e-777e-43e3-bd7c-447cde6e4c78> | CC-MAIN-2016-26 | http://dogmadoxa.blogspot.com/2013/02/what-is-bible.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391766.5/warc/CC-MAIN-20160624154951-00156-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.971628 | 621 | 2.75 | 3 |
|P is for Practical|
Re: permutation understandingby Parham (Friar)
|on May 27, 2002 at 20:55 UTC||Need Help??|
i actually thought i'd do a followup on this post, cuz i got a little better understanding of how it works.
Take the inputted number, now move the first number to the beginning of a new list. Take the old and new lists and permute those. Take the first number of the old list and take it to the new list. Permute the two lists. Take the first number of the old list and move it to the new list.
Each time, check to see if the old list is completely empty, if it is, start moving backwards in foreach loops. Go back one foreach loop, move the second number to the new list, and permute those two lists. It's weird to explain, so i'd rather show you what i came up with hoping that this will help those who didn't understand how the permutation worked.
The follwing two URL's show how the permutation works. The first link is a simple explanation of a list of three elements (x, y, and z) while the second takes a deeper look at a list with four elements (1, 2, 3, and 4)
actual example using four elements
i hope these help people who don't quite know how the function works :). | <urn:uuid:cced7eeb-fbe0-40e9-8384-fd9a85bf9128> | CC-MAIN-2016-26 | http://www.perlmonks.org/index.pl?node_id=169644 | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395992.75/warc/CC-MAIN-20160624154955-00078-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.885397 | 298 | 2.90625 | 3 |
SN's UNIQUE NUMBER BOXES
Among the most distinctive features of Sacramento Northern locomotives were the four-faced number boxes used on the line's 44-ton GE diesels. Although some other Western Pacific-family locomotives had roof-mounted number boards (WP FTs and TS 70-tonners), none were like the SN's.
The boxes were constructed of steel sheet, strip and angle stock. They measured 13 inches high by 30 inches wide on each face. The top was a very flat pyramid shape to shed rainwater. Curiously, the boxes were not rectangular, but were slightly diamond-shaped, probably to allow better visibility when viewed from the sides. The plans presented here do not show the bottom, but it was apparently a flat steel sheet. Inside the box was a two-bulb light fixture (one bulb as a back-up on a separate circuit). It is not known what voltage the lights used, but current was apparently fed by a conduit which ran from the forward engine hood up the face of the cab. The lights were serviced by unbolting the cover.
The boxes were mounted on short legs of different lengths to allow the whole assembly to sit level. The legs were welded directly to the cab roofs. In all available photographs, the boxes appear to be mounted above the engineer's seat, with one corner in line with the forward window post.
The boxes were apparently applied to the locomotives within a year of their 1946 delivery. A photo of 145 in Joseph Strapac's Western Pacific's Diesel Years dated 1947 clearly shows a box. Assuming this date is correct, at least some of the locomotives had their number boxes applied during 1947. The work was done at the Mulberry shops in Chico.
Our drawing is based on measurements taken in 1960 of the box used on SN 147. There is no guarantee that this box is identical to those used on the other 44-tonners. This box was applied to SN 147 at the WP's Sacramento shops when the locomotive was purchased in May 1957 (though it had been on the SN since August 1956). This was almost ten years after the boxes were applied to her sister units. It is possible that SN 147's box was not built to the original plans. It is also possible that SN 147 received the box from SN 141, which had been sold in September 1956. The box was not on this engine while it served its next owner, the Springfield Terminal, though it might also have been removed by General Electric when they remanufactured this locomotive.
The box on ex-SN 145 was the last one known to survive. It was still attached to this engine when received by Shepard Grain Company, but removed and scrapped when the locomotive went into service as their No. 3. The box from SN 146 was removed by the Northwest Oklahoma Railroad during their ownership of this locomotive. It apparently sat around their shops for sometime before being junked. In any case, it was not on the locomotive when it arrived at the Western Pacific Railroad Museum. All the other boxes were apparently scrapped with their locomotives. | <urn:uuid:f1d2b817-2ae4-4917-8b9e-215f802c16f6> | CC-MAIN-2016-26 | http://www.wplives.org/sn/number.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398216.41/warc/CC-MAIN-20160624154958-00095-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.990226 | 639 | 2.640625 | 3 |
Wiring. That’s one answer to this question. We know this from topographic maps in the thalamus and neocortex, where the basic units of sensory information are neatly represented in spatially-arranged populations of neurons – the various body parts are represented in specific locations, as are the different frequencies of sound, the different parts of the retina, and different odors and tastes. This basic sensory information has to be represented (i.e. we all need a faithful representation of visual elements, we all need to hear the various frequencies of sound that make up human speech etc.) so why not hard-wire it and make its representation the same for all of us?
It’s often thought that things change as you move into parts of the brain that represent more complex and abstract concepts. For example, in the hippocampus, many neurons receive the same inputs so it’s generally assumed that different neurons are equally capable of representing a given piece of information. While wiring between neurons must play a role in determining which neurons are activated, the diffuseness of the wiring means that related information need not be stored in spatially neighboring neurons as in the sensory regions of neocortex. Indeed, if you look at hippocampal neurons activated by a given experience they don’t appear to have any particular spatial arrangement but are randomly distributed, anatomically. Alternatively, it could be that certain hippocampal neurons are hard-wired to respond to specific stimuli, it’s just that we don’t understand the wiring.
I’ve mentioned before (here and here) how anatomical patterns of activity in the hippocampus are not always so random – in the dentate gyrus the same neurons are often repeatedly activated and by very different experiences. Furthermore, half of the dentate gyrus (the infrapyramidal blade) never seems to be noticeably active, period. But anatomical biases have been reported outside of the dentate gyrus too. Hampson and Deadwyler showed that spatial and nonspatial information is segregated in distinct septotemporal regions of CA1/CA3. Also, Nakamura et al. have suggested that CA1 neurons that represent a given spatial environment are more likely to be spatially clustered together.
While these studies suggest there may be a hard-wired anatomical pattern by which information is represented in regions such as the hippocampus, we really have have no idea how that pattern might be established. I was therefore intrigued to see a couple papers shed new light on this issue. One is a recent paper by Yassin et al. who used a Fos-GFP mouse to identify and record from neurons recently activated by behavioral experience. Fos is an immediate-early gene that is upregulated in neurons that are involved in learning and so, in this mouse, those neurons fluoresced green and could be examined electrophysiologically. They found that the Fos-GFP neurons fired at higher rates than neighboring neurons that were not expressing GFP and that they tended to be more connected to one another (and thus they were dubbed Facebook neurons), suggesting that there may be a subset of neurons that is preselected to be involved in representing experiences (perhaps not unlike the population of highly-active dentate gyrus neurons). There is a bit of a chicken and egg problem here, because we don’t know if the GFP+ neurons always fire at higher rates (and are hard-wired to be more involved in representing experience) or if they only fire at higher rates because they were recently activated (i.e. behavior-induced plasticity changed them). Intriguing nonetheless and a good approach for future studies I think.
The other study is pretty revolutionary I think and also has to do with predetermined, hard-wired patterns of neuronal activity. One of the exciting developments of the last 15 years has been the finding that patterns of neuronal activity are replayed during sleep. It is thought that this “replay” is the physiological correlate of memory consolidation, i.e. the rehearsal of recent experience and integration of that new information into the brain’s circuitry. Now, Dragoi and Tonegawa have found that the patterns of neuronal activity, seen as a mouse explores a novel environment, can also be seen during rest/sleep episodes before the mouse has ever been in that environment. Essentially, they discovered that the brain has created a representation (or at least a fraction) of an experience that has not even happened yet. They call the phenomenon “preplay”.
The preplay phenomenon does fit with previous data. The Mosers, in their News and Views piece on this study, note that “…place cells continue to fire in regular sequences when an animal’s position is fixed, for example, when a rat is running in a wheel. Moreover, rat pups exploring an open space for the first time show adult-like place cell sequences, which indicates that path sequences are hard-wired in the synaptic connection matrix by either genetic programs or early experience.” Also relevant is the finding from John Guzowski’s lab showing that very brief experiences (perhaps too brief to be even remembered) are capable of inducing transcription of the plasticity-related gene, Arc, in a full complement of CA3 neurons. In contrast, CA1 neurons were only fully activated after multiple experiences over multiple days, suggesting less of a role for hard-wiring and more of a role for plasticity and learning in shaping neural representations in this region.
Why preplay? One interesting hypothesis is that the hippocampus is needed to imagine the future (a reasonable role for a structure responsible for remembering the past). Could preplay be an attempt to predict future experience? Or might a shared pattern of activity simply be a way to bind together two events and create a coherent history? Don’t worry – I’m sure that, as we speak, there are rodents with implanted electrode arrays running around, working hard, to give us the answer.
Yassin L, Benedetti BL, Jouhanneau JS, Wen JA, Poulet JF, & Barth AL (2010). An embedded subnetwork of highly active neurons in the neocortex. Neuron, 68 (6), 1043-50 PMID: 21172607
Dragoi G, & Tonegawa S (2011). Preplay of future place cell sequences by hippocampal cellular assemblies. Nature, 469 (7330), 397-401 PMID: 21179088 | <urn:uuid:90bb616b-0bcc-4983-b4c6-65ba82de8481> | CC-MAIN-2016-26 | http://www.functionalneurogenesis.com/blog/2011/02/how-does-the-brain-pick-which-neurons-to-use/ | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396949.33/warc/CC-MAIN-20160624154956-00122-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.955946 | 1,334 | 3.21875 | 3 |
WASHINGTON (AP) ― Tropical cyclones worldwide are moving out of the tropics and more toward the poles and generally larger populations, likely because of global warming, a surprising new study finds. Atlantic hurricanes, however, don’t follow this trend.
While other studies have looked at the strength and frequency of the storms, which are called hurricanes in North America, this is the first study that looks at where they are geographically when they peak. It found in the last 30 years, tropical cyclones, regardless of their size, are peaking 53 kilometers farther north each decade in the Northern Hemisphere and 61 kilometers farther south each decade in the Southern Hemisphere.
That means about 160 kilometers toward the more populous mid-latitudes since 1982, the starting date for the study released Wednesday by the journal Nature.
“The storms en masse are migrating out of the tropics,’’ said study lead author James Kossin of the National Climatic Data Center and the University of Wisconsin.
Kossin used historical tracks of storms in the Western Pacific, Eastern Pacific, North Indian Ocean, South Indian Ocean, South Pacific and the Atlantic.
That means more people at risk, especially in the Northern Hemisphere, because “you’re going to hit more population areas,’’ said Yale University historian and cartographer Bill Rankin, who wasn’t part of the study. In the region where Japan tracks cyclones, they are peaking 68 kilometers farther north each decade. That means cyclones that used to hit their strongest around the same latitude as the northern Philippines are now peaking closer to Hong Kong, Taiwan, Shanghai, Japan and South Korea, Kossin said. | <urn:uuid:adf0c06f-892c-4026-a4df-a435fb10979d> | CC-MAIN-2016-26 | http://www.koreaherald.com/view.php?ud=20140515001399 | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399522.99/warc/CC-MAIN-20160624154959-00002-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.92318 | 347 | 3.421875 | 3 |
Students attending the Kentucky All-State Choir leave their Hyatt Hotel rooms a little before 11 pm each night during the Kentucky Music Educators Association conference, and stand on the 18 inside surrounding balconies and sing the National Anthem. They begin with those on the bottom floor. Someone begins humming in the key of E, which is where the song begins and then a beautiful rendition begins, with those above joining-in. Close to 1,000 student participate in the singing of the Star-Spangled Banner, along with any hotel guests who want to join-in. The choir members make-up three different state choirs, boys, girls, and a third, I guess a combined choir, not sure. The choirs from across the state compete during the week. In the video, you’ll hear a slower version than usual. I just heard a young choir member on Fox say the accoustics inside the hotel are great and that a reverb lingers awhile — which I’m assuming is the reason for the slower pace. Beautiful.
Some history behind the Star Spangled Banner begins at Fort McHenry, Maryland with Francis Scott Key aboard an anchored ship at the breaking of day in the summer of 1813. No wonder Key wrote of a “star-spangled” banner – each star measured two feet across:
…the commander, Maj. George Armistead, asked for a flag so big that “the British would have no trouble seeing it from a distance.” Two officers, a Commodore and a General, were sent to the Baltimore home of Mary Young Pickersgill, a “maker of colours,” and commissioned the flag. Mary and her thirteen year old daughter Caroline, working in an upstairs front bedroom, used 400 yeards of best quality woold bunting. They cut 15 stars that measured two feet from point to point. Eight red and seven white stripes, each two feet wide, were cut. Laying out the material on the malthouse floor of Claggett’s Brewery, a neighborhood establishment, the flag was sewn together….It measured 30 by 42 feet and cost $405.90.
At 7 a.m. in the morning of September 13, 1814, the British bombardment began, and the flag was ready to meet the enemy. The bombardment continued for 25 hours….The Americans had sunk 22 vessels so a close approach by the British was no possible. That evening the connonading stopped, but at about 1 a.m. on the 14th, the British fleet roared to life, lighting the rainy night sky with grotesque fireworks.
At dawn, Key anxiously peers into the distance to see if the flag still waves, proving the battle for God and country has not been lost. And indeed it was waving. Could there be more poignant words to describe our flag than “a star-spangled banner?”He sees her broad stripes and bright stars, still waving over a perilous battle.At twilight, the bombs are still raining down, the glare of the bursts again show proof that the symbol of freedom, the Star Spangled Banner, is still waving over the Land of the Free and the Home of the Brave. | <urn:uuid:51f1a962-8783-477f-b679-407d5a1bef5d> | CC-MAIN-2016-26 | http://www.maggiesnotebook.com/2014/02/kentucky-3-all-state-choruses-sing-star-spangled-banner-our-flag-with-stars-2-across/ | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783403825.35/warc/CC-MAIN-20160624155003-00108-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.953782 | 660 | 2.96875 | 3 |
- The circumstances in which an event occurs; a setting.
- That which surrounds, and gives meaning to, something else.
GTD encourages the use of contexts to break down long and expansive to-do lists. Without them where would you start? What would you choose to do at any particular time? By breaking down your lists according to different settings and situations, it becomes a simple matter of selecting a list and tasks appropriate to your current context. For instance, if you are near a phone, you only need look at those next actions that require you to make a phonecall.
An extra benefit of contexts is that it it stops you from being distracted by next actions that are not relevant to your current circumstances. For instance, you don’t have to look at any tasks that are to do at home when you are at work. This is known as contextual limitation as it stops your attention being taken up by work you can’t do at that time.
Contexts can be as simple or as complicated as required depending on the actual depth and size of your to-do lists. Traditionally, the author of GTD, David Allen, puts a @ symbol in front of all contexts, and while this is not a requirement, it has become the common defining symbol for them. The symbol means location as in “Where are you at?”. Thus, @computer means “At your computer”. | <urn:uuid:4fc6162d-58f2-4731-8182-7f9775a2d08c> | CC-MAIN-2016-26 | http://www.organizeit.co.uk/2007/06/ | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396224.52/warc/CC-MAIN-20160624154956-00164-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.968673 | 293 | 3.578125 | 4 |
Young children who behave badly in school can do just fine academically, new research suggests. But if the bad behavior persists until age eight, education can be compromised, and professional success later in life is less likely.
One new study examined data from six previous large-scale studies of almost 36,000 preschoolers in which the same subjects were observed repeatedly over time. The research included two national studies of U.S. children, two multi-site studies of U.S. children, one study of children from Great Britain and another of children from Canada.
The conclusion: Surprisingly, difficulty getting along with classmates, aggressive or disruptive behaviors, and sad or withdrawn behaviors in kindergarten did not detract from academic achievement in childhood and early adolescence, according to Northwestern University professor Greg Duncan and 11 co-authors.
The researchers examined several indicators, including picking fights, interrupting the teacher and defying instructions. They found that kindergartners who did these things performed surprisingly well in reading and math when they reached the fifth grade, keeping pace with well-behaved children of the same abilities.
Although Duncan's team found no predictive power in early behavior problems for later learning, another study, which examined older children, found such a connection.
Persistent behavior problems in eight-year-olds are powerful predictors of educational attainment and of how well people will do in middle-age, according to the second study's leader, Rowell Huesmann at the Center for the Analyses of Pathways from Childhood to Adulthood (CAPCA) at the University of Michigan.
If behavior problems of the kind seen in younger children continue until age eight, they can create other challenges, Huesmann said.
Huesmann based his conclusion on a prior research study and a recent analysis by CAPCA researchers, who studied data from 856 U.S. children and 369 Finnish children. They found that children who engaged in more frequent aggressive behaviors as eight-year-olds had significantly lower educational success by their 30s and significantly lower status occupations by their mid-40s. The results were published in the journal Developmental Psychology.
"It makes perfectly good sense that persistent behavior problems would have a substantial impact on later success," said Amy Sussman, director of the Developmental and Learning Sciences Program at the the National Science Foundation, which funded both new studies. "When interviewing for jobs and progressing through one's career trajectory, personality and other characteristics that are not measured by tests certainly come into play."
There's a good chance that personality traits also come into play in the classroom. Huesmann and his colleagues hypothesize that children with persistent behavior problems lasting into the third grade are those who cannot be easily socialized to behave well and who therefore are more likely to experience a "hostile learning environment."
They speculate that teachers and peers likely "punish" these children, reducing or eliminating positive support for learning. But researchers note that if a child's aggression is short-lived, it is unlikely to have the same long-term consequences.
"Socialization of disruptive preschoolers by teachers and peers may ensure that a child's behavioral problems do not affect his or her educational achievement," Huesmann said. "Attending class, spending time with classmates, observing the rewards of proper behavior, and being told, 'No,' to correct disruptive behavior can benefit unruly children."
Researchers also noted that popularity and positive social behavior in childhood and adolescence predicted higher levels of educational attainment in early adulthood. They said it is possible that children with stable positive social skills experience a supportive and conducive learning environment. | <urn:uuid:5974a7b6-0ba2-4f6c-a9d9-deae0a09fb9b> | CC-MAIN-2016-26 | http://www.livescience.com/7406-bad-behavior-youth-linked-career-problems.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398209.20/warc/CC-MAIN-20160624154958-00039-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.970796 | 730 | 3.109375 | 3 |
- Taking Control of Your Desktop
- Managing Open Windows with Expos
- Creating, Using, and Managing Spaces
- Mac OS X to the Max: Controlling Running Applications
Creating, Using, and Managing Spaces
Being able to use Exposé to manage open windows and applications is nice, but if you use a large amount of applications and windows at the same time, getting to the windows you want to work with can still be tedious. Spaces enables you to create environments that contain specific applications so that you can easily switch between them to perform specific actions. For example, you can create an Internet space that has all your Internet applications open and a Project space that contains applications and documents related to a project you are working on. When you want to move from the project to the Internet, simply move into the Internet space and all your applications are immediately available. Getting back to your project space is just as easy.
Enabling and Building Spaces
To get started with Spaces, you need to enable the feature and build your spaces. After you create spaces, you assign applications to those spaces. Only applications that are bound to a space are available to you when you access that space. To enable Spaces and create spaces, perform the following steps:
- Open the Spaces tab of the Exposé & Spaces pane of the System Preferences application (see Figure 12.4).
Figure 12.4 Use the Spaces tab to create and configure spaces on your Mac.
- Check the Enable Spaces check box. If you want to have the Spaces menu appear in the menu bar, check that check box, as well.
- At the top of the pane, you see the preview window that shows a thumbnail of each space you have configured; initially there are two spaces in the same row. As you create spaces, you'll add them by row or by column. Below the preview window is the Application Assignments section; here, you see the applications are part of the selected space. At the bottom of the pane are the controls you use to set keyboard shortcuts for specific actions.
- Use the keyboard pop-up menu to set the keyboard shortcut to activate Spaces.
- If you have a multi-button mouse, use the mouse pop-up menu to set the mouse control to activate Spaces.
- Add applications that you want to be part of the spaces you are creating. There are several ways to do this:
- Drag an application's icon from the desktop and drop it on the space where you want it contained; when the icon is over a space, the space's thumbnail is highlighted to show you that you can drop it on the space.
- Drag an application's icon from the desktop and drop it on the application list. Then, choose the space where you want the application to be used on the pop-up menu in the Space column (more on that in a bit).
- Click Add Application and use the resulting sheet to move to and select the application you want to add. Then use the pop-up menu in the Space column to choose the space where you want that application to appear.
- Add spaces by clicking the Add button in the Row section to add a new row of spaces or the Add button in the Column section to add a new column of spaces.
- Add spaces until you've added all that you want to have available.
- To remove spaces, you must remove an entire row or column by clicking the Remove button in the Row or Column section.
- Use the pop-up menu in the Space column to assign an application to a space (see Figure 12.5). The choices on the menu are each space you've created and Every Space which makes the application available in all spaces. When you select an application, the spaces to which it is assigned are highlighted in the preview window.
Figure 12.5 Assign applications to spaces to make them available when you choose a space.
- To remove an application from the list, select it and click Remove.
- Use the "To switch between spaces" pop-up menu to choose the keyboard shortcut to move among the spaces you have created.
- Use the "To switch directly to a space" pop menu to choose the keyboard shortcut you can use to jump directly into a space.
Using and Managing Spaces
After you've created spaces, you can use them to more efficiently manage your desktop. Here are some space pointers:
- Press the keyboard shortcut you set for switching between spaces (the default is Ctrl-Arrow key). The Spaces manager appears on the screen (see Figure 12.6). The manager has a box representing each of the spaces you've created. To jump to a space, keep pressing the shortcut keys until the space you want to use is highlighted. When you release the keys, you jump into that space and return to the last application you were using in that space.
Figure 12.6 When you activate Spaces, you see the Spaces manager that indicates how many spaces are available to you.
- Press the keyboard shortcut for jumping directly to a space (the default is Ctrl-number key) to move directly into a space. When you do, the Space manager will appear briefly, you move into the space you selected, and applications in that space are available to you.
- When you are in a space, you can open applications that aren't part of that space, just as you can when you aren't using Spaces. That application will be available in the current space, but not in any others. If you open an application that is already assigned to a different space, you jump to the space that it has been assigned to.
- The Finder is available in all spaces.
- If an application isn't running when you move into the space that it has been assigned to, you need to launch it to be able to use it.
- Spaces retain window configurations. If you use multiple monitors and have windows on each display in a space, they will resume their former positions as soon as you move back into that space.
- You can use the Dock to move into open or closed applications. If you open an application in a space, you move into that space. If the application is not part of a space, it opens as usual, but is available only when you are using the space you were using when you opened it. In other words, it is temporarily bound to the space you were using when you launched it.
- If you assign an application to all spaces, its windows will always appear in the same positions in all spaces.
- If you press the Spaces keyboard shortcut (the default is F8), you see large thumbnails of all your spaces (see Figure 12.7). In each space, you see smaller thumbnails of all the applications running in that space. If you use multiple displays, you see a thumbnail for each display. Click a space to move into it.
Figure 12.7 Pressing the Spaces keyboard shortcut shows you all your spaces.
- To turn Spaces off, open the Spaces tab of the Exposé & Spaces pane of the System Preferences application and uncheck the Enable Spaces check box. All open applications will return to the desktop. You can start using your spaces again by checking the Enable Spaces check box. | <urn:uuid:82e33f16-27a9-4ddc-b007-997b6e8fcf89> | CC-MAIN-2016-26 | http://www.quepublishing.com/articles/article.aspx?p=1157196&seqNum=3 | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393997.50/warc/CC-MAIN-20160624154953-00085-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.90233 | 1,485 | 2.765625 | 3 |
1 Answer | Add Yours
The title presents us with a question that is answered during the course of the story and also points us towards the way in which the story serves as a kind of parable, which is a short, simple story that presents us with some kind of moral lesson. Of course, the question is very careful to ask how much land a man needs, as opposed to wants. This is crucial to the understanding of the story, and is also refered to at the very end of the story in its final paragraph, which ironically answers this question:
His servant picked up the spade and dug a grave long enough for pahom to lie in, and buried him in it. Six feet from his head to his heels was all he needed.
The question is thus answered: a man only needs enough earth to be buried in, which of course ironically contrasts with the vast stretches of land that the ever-more greedy Pahom desires. This of course supports the theme of the story, which is that unchecked ambition and greed destroys people. It was Pahom's desire to secure ever-greater plots of land that directly lead to his death and made him unable to enjoy the simple pleasures of life that he had. Having the title expressed as a question thus focuses our attention as readers on the moral message of the story and also points towards the irony of how this question is answered.
We’ve answered 327,892 questions. We can answer yours, too.Ask a question | <urn:uuid:e3d6fd69-b974-4d09-a821-87011ed93707> | CC-MAIN-2016-26 | http://www.enotes.com/homework-help/justify-title-how-much-land-does-man-require-273707 | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397636.15/warc/CC-MAIN-20160624154957-00117-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.983658 | 308 | 3.171875 | 3 |
Fountain and Manitou Springs are among the first Colorado cities to enact laws meant to protect kids from tobacco products other than cigarettes.
With popularity growing for electronic cigarettes, edible tobacco and more, cities are aiming to create better regulation that could keep the products out of kids' hands.
More information from El Paso County Public Health:
Ordinances Unanimously Pass to Keep Tobacco Out of Teens’ Hands
El Paso County, Colo. — Manitou Springs and Fountain are among the first communities in Colorado to pass a tobacco licensing ordinance aimed at reducing underage tobacco use.
Manitou Springs City Council unanimously approved a new city ordinance Tuesday, Nov. 15, that allows stricter enforcement for the illegal sales of tobacco to minors. Fountain’s city council unanimously passed a similar ordinance Oct. 11. Steamboat Springs passed an ordinance in July.
The new laws will become effective Jan. 1, 2012 and require non-cigarette tobacco retailers to obtain an annual retail license. Non-cigarette tobacco products include chew or spit tobacco, cigars, snus, sticks, orbs, strips, electronic cigarettes, and hookah.
Both communities adopted the following additional provisions:
· Minors younger than 18 are not permitted to sell, stock or handle non-cigarette tobacco products.
· Self-service displays of any non-cigarette tobacco products are prohibited.
· If a retailer receives four violations within one year, their license to sell non-cigarette tobacco may be suspended or revoked.
“We’re proud of Fountain and Manitou Springs to be among the first communities to further restrict youth access to non-cigarette tobacco products,” said Kiti Hall, the Tobacco Education and Prevention Partnership (TEPP) supervisor at El Paso County Public Health. “Tobacco retailer licensing is an effective method to help reduce youth access to tobacco products and hold retailers accountable for selling tobacco products to minors.”
Despite an existing law prohibiting the sale of tobacco to minors, a 2008 Healthy Kids Colorado Survey found that more than 60 percent of Colorado youth under the age of 18 who attempted to purchase tobacco reported being able to do so, and nearly half of the kids who purchased tobacco illegally say they were not asked to show any proof of age. This ordinance is intended to lower these statistics.
“It was a real pleasure to work on this effort,” said Marc Snyder, mayor of Manitou Springs. “I was especially impressed by the many outstanding high school students who advocated so passionately for the adoption of these ordinances.”
Tobacco use remains the leading cause of preventable death. More than 4,300 Coloradans die each year from smoking-related illnesses. Nearly 90 percent of adult smokers become addicted to tobacco before the age of 18.
“My hope is that this ordinance will contribute to overall prevention efforts aimed at helping youth resist tobacco use and potentially negative health consequences,” said Fountain Mayor Jeri Howells.
The local ordinances were introduced by the Tobacco Education and Prevention Partnership at El Paso County Public Health, Fountain Valley Community Activity and Nutrition group, Partners for Healthy Choices, and students and teachers of Fountain-Fort Carson and Manitou Springs school districts. | <urn:uuid:ea7da229-089f-464f-8aa5-1d86a0030cf8> | CC-MAIN-2016-26 | http://www.csindy.com/IndyBlog/archives/2011/11/18/fountain-manitou-take-aim-at-tobacco-products | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783403826.29/warc/CC-MAIN-20160624155003-00050-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.920006 | 654 | 2.515625 | 3 |
Multinational companies are giant firms with their origin in one country, but their operations extending beyond the boundaries of that nation. For reasons of marketing, financial and technological superiority, these multinationals are generally considered as a sine qua non of the modernisation of an economy.
They have been responsible for the rapid economic liberalisation in India in 1991, the question of the entry of multinational corporations (MNCs) has assumed significance.
Multinationals corporations, mostly from the United States, Japan and other industrialised nations of the world, have entered our life in a big way. Foreign investment proposals and commercial alliance have been signed on an unprecedented scale, thus giving rise to the controversy whether these multinational corporations are our saviours or saboteurs.
This is so because of the vital difference between the economies of developed and developing nations. This requires that the entry of multinational corporations in India be examined from this angle.
According to A.K. Cairn cross, “It is not possible to buy development so cheaply. The provision of foreign capital may yield a more adequate infrastructure, but rarely by itself generates rapid development unless there are already large investment opportunities going a begging
That is why the intervention of multinational corporations is imperative in the context of the economic growth and modernisation of developing economies where ample investment avenues lie open and yet due to lack of capital and technical know-how, these potentials remain unexploited.
Multinational corporations help in reorganising the economic infrastructure in collaboration with the domestic sector through financial and technical help.
If we consider the case of our country immediately after Independence, ours was an agrarian economy with a weak industrial base and low level of savings.
“Though the public sector was supposed to cure these ills, with problems like paucity of funds, lack of technical know-how and other amenities, it seemed an impossible proposition. Hence, the help of multinational corporation was sought in terms of finance and technology.
As a consequence of the public sector multinational corporation nexus, from a miniature one, the Indian industrial economy assumed colossal dimensions and India is considered one of the most industrialised nations of the world today.
However, there is another school of thought, which denounces multinationals as an extension of imperialist power and potency source of exploitation of the Least Developed Countries (LDCs) by the developed economies of the world.
According to them, MNCs are an expensive bargain for a developing economy from the foreign exchange point of view. These days when developing countries are struggling with massive foreign debts and their development plans are held up due “to paucity of funds” .this may be considered a serious drawback.
Second, multinationals evade paying taxes in most countries by concealing profits. Government agencies entrusted with the task of collecting the taxes and scrutinising their accounts are often bluffed by them as they do not know enough about the industries they are asked to deal with.
Third, multinationals often provide inappropriate technology to the developing nations. The technology provided by them is very often too sophisticated to adopt or too absolute by international standards. Further, transfer to technology in accordance with resource endowment of LDCs involves high cost and this may prevent MNCs from transferring appropriate technology to these countries.
Fourth, some of the evils of the multinationals emanate out of their oligopolistic character. Collision is the main determinant of its price policy, which ensures profit at the cost of high level of consumption at a lower price. Even the impact of high productivity brought about by them through the technology-cal advancement is not conducive to the working class because of pre-determined level of profit under oligopolistic criterion.
Fifth, concentration of economic power is the main charge against MNCs.This economic power is often used to distort national politics and international relations by multinationals. These enterprises build up a power entity of their own. They never hesitate in exploiting the social and political weakness and economic backwardness of the LDCs to their own benefit.
A multinational corporation is neither a saviour as its protagonists claim, nor a saboteur as its detractors make it out to be. It is a mix of virtues and vices, boons and banes.
Charges levelled against multinationals are serious, yet it also remains a fact that, despite all these disastrous consequences of their working, multinationals have emerged as the most dominant institutions of the late twentieth century. As such, third world countries in general, and India, in particular, will have to deal with multinationals despite their ugly designs.
The Government must, therefore, have an optimally balanced policy towards MNCs after weighting the various pros and cons of the issue.
It would not go for foreign collaboration in areas where adequate Indian skills and capital are available. Whenever the need for foreign collaboration is felt in areas of high priority, emphasis should be on purchasing outright technical know-how, technological skills and machinery. But only if this is not possible, should MNCs be allowed to operate in India?
Once these safeguards are taken, multinational corporations will give an uplift to national economy by bringing in quality goods and services to the country. They will reward enterprise and talent; the inefficient would, of course, have no place in the new scheme of things. Hence, the hue and cry by interested party, who, dub MNCs as saboteurs.
Multinational corporations will demand efficiency, punctuality and dedication things which are deadly lacking in national life today. They will demand a certain work culture from the employees as well as the employers besides offering the best of goods and services to their clientele.
They should, therefore, be viewed as saviours of national economy rather than saboteurs because we have seen where our previous policies, have landed us right at the bottom of the list of industrialised nations. The economy has steadily picked up since the liberalisation measures were introduced.
This must ‘continue if we are to emerge as a global economic power in the next century. And multinational corporations are the only answer. | <urn:uuid:cdf47365-88df-4ad6-91a7-a756c9f30dde> | CC-MAIN-2016-26 | http://www.shareyouressays.com/6719/essay-on-multinational-companies | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.2/warc/CC-MAIN-20160624154951-00043-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.951951 | 1,244 | 2.78125 | 3 |
This detail of a Dawn FC (framing camera) image shows dark colored mountains (top of the image) in the northern region of Vesta. The origin of such mountains is currently being investigated. The largest crater, near the center of the image, contains both bright and dark material. This material, also visible in many of the other craters, mostly crops out from the crater rims and then slumps towards the crater's centers. The bottom part of the image includes many areas of dark material, several of which are not associated with any impact structures visible at this resolution. Better resolution images are necessary to understand the origin of this 'unassociated' dark material.
NASA's Dawn spacecraft obtained this image with its framing camera on August 14th 2011. This image was taken through the camera's clear filter. The distance to the surface is 2740km and the image resolution is about 260 meters per pixel.
The Dawn mission to Vesta and Ceres is managed by NASA's Jet Propulsion Laboratory, a division of the California Institute of Technology, Pasadena, Calif., for NASA's Science Mission Directorate, Washington. UCLA is responsible for overall Dawn mission science. The Dawn framing cameras were developed and built under the leadership of the Max Planck Institute for Solar System Research, Katlenburg-Lindau, Germany, with significant contributions by DLR German Aerospace Center, Institute of Planetary Research, Berlin, and in coordination with the Institute of Computer and Communication Network Engineering, Braunschweig. The Framing Camera project is funded by the Max Planck Society, DLR, and NASA/JPL.
More information about Dawn is online at http://www.nasa.gov/dawn and http://dawn.jpl.nasa.gov. | <urn:uuid:0331a511-2dfd-4686-8128-ad19c57ab25d> | CC-MAIN-2016-26 | http://photojournal.jpl.nasa.gov/catalog/PIA14793 | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783408840.13/warc/CC-MAIN-20160624155008-00182-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.923653 | 357 | 3.390625 | 3 |
A vaccine used to prevent tuberculosis in other parts of the world may help prevent multiple sclerosis in people who show the beginning signs of the disease, according to a new study published in the December 4, 2013, online issue of Neurology, the medical journal of the American Academy of Neurology. The vaccine, called Bacille Calmette-Guerin (BCG), contains a weakened bacterium that induces an immune response that might, in turn, dampen inflammatory factors.
The study involved 73 people who had a first episode that was suggestive of MS, such as numbness, vision problems, or problems with balance, and an MRI that showed signs of possible MS. About half of all people in this situation, called clinically isolated syndrome, develop definite MS within two years, while 10 percent have no more MS-related problems.
For the study, 33 of the participants received one injection of BCG, a live vaccine which is used in other countries to prevent tuberculosis, but is not used for that in the United States. The other participants received a placebo. All of the participants had brain scans once a month for six months. They then received the MS drug interferon beta-1a for a year. After that, they took the MS drug recommended by their neurologist. The development of definite MS was evaluated for five years after the start of the study.
After the first six months, the people who received the vaccine had fewer brain lesions that are signs of MS than those who received the placebo, with three lesions for the vaccinated and seven lesions for the unvaccinated. By the end of the study, 58 percent of the vaccinated people had not developed MS, compared to 30 percent of those who received the placebo. There were no major side effects during the study. There was no difference in side effects between those who received the vaccine and those who did not.
"These results are promising, but much more research needs to be done to learn more about the safety and long-term effects of this live vaccine," said study authorGiovanni Ristori,MD, PhD, of Sapienza University of Rome inItaly. "Doctors should not start using this vaccine to treat MS or clinically isolated syndrome."
The results provide support to the "hygiene hypothesis" that better sanitation and use of disinfectants and antibiotics may account for some of the increased rate of MS and other immune system diseases in North America and much of Europe compared with Africa, South America and parts of Asia, according to Dennis Bourdette, MD, of Oregon Health & Science University in Portland and a Fellow of the American Academy of Neurology, who wrote an accompanying editorial. "The theory is that exposure to certain infections early in life might reduce the risk of these diseases by inducing the body to develop a protective immunity." | <urn:uuid:0a454605-944f-41d7-896b-e9d6ceb36432> | CC-MAIN-2016-26 | http://msfocus.org/news-details.aspx?newsID=263 | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783400572.45/warc/CC-MAIN-20160624155000-00175-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.974095 | 568 | 3.453125 | 3 |
Updated 3/13/13 to reflect new grades for NY and RI.
WASHINGTON, DC — A new analysis from the Sunlight Foundation presents a “Transparency Report Card” on how well state legislative information is made available to the public. Using data collected from our Open States project, Sunlight ranks the good, the bad and the ugly of state websites.
Evaluated across six criteria, the Sunlight Foundation developed a scorecard and letter grades for all 50 states and the District of Columbia. The Transparency Report Card judges legislative websites in relation to how government information is publicly available. Factors include: completeness, timeliness, ease of electronic access, machine readability, use of commonly owned standards and permanence.
Full state rankings and our methodology are below but here is how the top and bottom of the class fared:
Today marks the start of the Sunshine Week, an annual celebration of policies, programs and activities that maintain the public’s right to know and the importance of open government. But sometimes, getting that information can be a challenge.
That’s what the developers at Sunlight Labs and a group of civic hacker volunteers discovered as they collected legislative data from state websites across the country to build OpenStates.org. Creating a comprehensive database of bill text, roll call votes and lawmaker contacts that is both developer- and user-friendly came with its own frustrations.
“In the course of writing code to scrape data for all state legislatures, our Open States team and volunteers spent a lot of time looking at state websites and struggled with the often inadequate information made available,” said James Turk, a Sunlight Labs developer. “We hope states will use this report card as a guidepost to improve how they present what their legislature is doing online. Having this data released the right way is important for holding our state governments accountable.”
The Sunlight Foundation’s methodology used the Ten Principles for Opening Up Government Information. We used six of the principles to evaluate the openness of the each state’s legislative data. Letter grades were calculated based on the score received for each criterion.
Completeness — We evaluated each state on the data collected by Open States: bills, legislators, committees, votes and events. We also took note if a state went above and beyond to provide this information and other relevant contextual information such as supporting documents, legislative journals and schedules. Points were deducted for missing data, often roll call votes.
Timeliness — Legislative information is most relevant when it happens, and many states are publishing information in real time. Unfortunately, there are also states where updates are more infrequent and showing up days after a legislative action took place. States were dinged if data took more than 48 hours to go online.
Machine Readability — For many sites, the Open States team wrote scrapers to collect legislative information from the website code — a slow, tedious and error prone process. We collected data faster and more reliably when data was provided in a machine-readable format such as XML, JSON, CSV or via bulk downloads. If a state posted PDF image files or scanned documents, it received the lowest score possible.
Use of Commonly Owned Standards — This provision measured how a state made their bill text available. Making text available in HTML or PDF is the norm and was considered acceptable since any one could view them within a web browser. States that only make documents available a Microsoft Word or other text document format require the user to have (or purchase) the proper software to read the bill got a negative score.
Permanence — Many states move or remove information when a new session starts, leaving 404 pages and broken links where there was once bill text, resulting in a lower score. Most (but not all) states are good about at least preserving bill information. Few were equally as good about preserving information about out-of-office legislators and historical committees.
Review the scores for all U.S. states and the District of Columbia and point system for each criterion here.
Open States Transparency Report Card Letter Grades
State Letter Grade
New Hampshire A
New York A
North Carolina A
New Jersey B
New York B
West Virginia B
South Carolina C
South Dakota C
District of Columbia C
New Mexico C
North Dakota C
Rhode Island D
About Open States
OpenStates.org is a website anyone can use to discover more about lawmaking in their state. Open States is a comprehensive database of legislative information for all 50 states, plus the District of Columbia and Puerto Rico. The website makes it easy to find state lawmakers, review their votes, search legislation and track bill progress, as well as compare legislation from state to state.
The Sunlight Foundation is a non-partisan non-profit that uses cutting-edge technology and ideas to make government transparent and accountable. Visit http://SunlightFoundation.com to learn more about Sunlight’s projects, including http://PoliticalPartyTime.org and http://influenceexplorer.com. | <urn:uuid:7018a873-165b-4eb6-8db4-6d16249f9849> | CC-MAIN-2016-26 | http://sunlightfoundation.com/press/releases/2013/03/11/how-does-your-state-rank-legislative-transparency/ | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783392099.27/warc/CC-MAIN-20160624154952-00134-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.907093 | 1,031 | 2.65625 | 3 |
|Product #: CD-404094|
STRENGTHING PHYSICAL SCIENCE SKILLS
128 pages - Grades 4 - 12
Develop interest and confidence in advanced science by building science vocabulary and math skills while exploring physical science concepts. Topics include matter, gravity, density, motion, simple machines, electricity, light, and more. Includes CD-ROM with interactive exercises that are automatically scored and printed plus printable worksheets and reading activities. Supports NSE standards.
Submit a review | <urn:uuid:72b6469b-28be-4e82-8ec6-dff6be370273> | CC-MAIN-2016-26 | http://www.schoodoodle.com/home/sch/page_15819_843/strengthing_physical_science_skills.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397562.76/warc/CC-MAIN-20160624154957-00047-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.83989 | 104 | 3.0625 | 3 |
This is the text of an essay on the web site “The Discovery of Global Warming” by Spencer Weart, January 2007. For an overview see the book of the same title (Harvard Univ. Press, 2003).
Copyright © 2003-2007 Spencer Weart & American Institute of Physics. Reprinted here with permission.
Here are gathered in chronological sequence the most important events in the history of climate change science. (For a narrative see the Introduction: Summary History.) The list of milestones includes major influences external to the science itself.
On Weart's web site, nearly all items have links to essays.
Level of carbon dioxide gas (CO2) in the atmosphere, as later measured in ancient ice, is about 290 ppm (parts per million).
First Industrial Revolution. Coal, railroads, and land clearing speed up greenhouse gas emission, while better agriculture and sanitation speed up population growth.
Joseph Fourier calculates that the Earth would be far colder if it lacked an atmosphere.
Tyndall discovers that some gases block infrared radiation. He suggests that changes in the concentration of the gases could bring climate change.
Arrhenius publishes first calculation of global warming from human emissions of CO2.
Chamberlin produces a model for global carbon exchange including feedbacks.
Second Industrial Revolution. Fertilizers and other chemicals, electricity, and public health further accelerate growth.
World War I. Governments learn to mobilize and control industrial societies.
Opening of Texas and Persian Gulf oil fields inaugurates era of cheap energy.
Global warming trend since late 19th century reported.
Milankovitch proposes orbital changes as the cause of ice ages.
Callendar argues that CO2 greenhouse global warming is underway, reviving interest in the question.
World War II. Grand strategy is largely driven by a struggle to control oil fields.
U.S. Office of Naval Research begins generous funding of many fields of science, some of which happen to be useful for understanding climate change.
Ewing and Donn offer a feedback model for quick ice age onset.
Phillips produces a somewhat realistic computer model of the global atmosphere.
Plass calculates that adding CO2 to the atmosphere will have a significant effect on the radiation balance.
Launch of Soviet Sputnik satellite. Cold War concerns support 1957-58 International Geophysical Year, bringing new funding and coordination to climate studies.
Revelle finds that CO2 produced by humans will not be readily absorbed by the oceans.
Telescope studies show a greenhouse effect raises temperature of the atmosphere of Venus far above the boiling point of water.
Downturn of global temperatures since the early 1940s is reported.
Keeling accurately measures CO2 in the Earth’s atmosphere and detects an annual rise. The level is 315 ppm.
Cuban Missile Crisis, peak of the Cold War.
Calculations suggest that feedback with water vapor could make the climate acutely sensitive to changes in CO2 level.
Boulder meeting on causes of climate change, in which Lorenz and others point out the chaotic nature of the climate system and the possibility of sudden shifts.
Emiliani’s analysis of deep-sea cores shows the timing of ice ages was set by small orbital shifts, suggesting that the climate system is sensitive to small changes.
International Global Atmospheric Research Program established, mainly to gather data for better short-range weather prediction but including climate.
Manabe and Wetherald make a convincing calculation that doubling CO2 would raise world temperatures a couple of degrees.
Studies suggest a possibility of collapse of Antarctic ice sheets, which would sea levels catastrophically.
Astronauts walk on the Moon, and people perceive the Earth as a fragile whole.
Budyko and Sellers present models of catastrophic ice-albedo feedbacks.
Nimbus III satellite begins to provide comprehensive global atmospheric temperature measurements.
First Earth Day. Environmental movement attains strong influence, spreads concern about global degradation.
Creation of U.S. National Oceanic and Atmospheric Administration, the world’s leading funder of climate research.
Aerosols from human activity are shown to be increasing swiftly. Bryson claims they counteract global warming and may bring serious cooling.
SMIC conference of leading scientists reports a danger of rapid and serious global climate change caused by humans, calls for an organized research effort.
Mariner 9 spacecraft finds a great dust storm warming the atmosphere of Mars, plus indications of a radically different climate in the past.
Ice cores and other evidence show big climate shifts in the past between relatively stable modes in the span of a thousand years or so.
Oil embargo and price rise bring first “energy crisis.”
Serious droughts and other unusual weather since 1972 increase scientific and public concern about climate change, with cooling from aerosols suspected to be as likely as warming; journalists talk of ice age.
Concern about environmental effects of airplanes leads to investigations of trace gases in the stratosphere and discovery of danger to ozone layer.
Manabe and collaborators produce complex but plausible computer models which show a temperature rise of several degrees for doubled CO2.
Studies find that CFCs (1975) and also methane and ozone (1976) can make a serious contribution to the greenhouse effect
Deep-sea cores show a dominating influence from 100,000-year Milankovitch orbital changes, emphasizing the role of feedbacks.
Deforestation and other ecosystem changes are recognized as major factors in the future of the climate.
Eddy shows that there were prolonged periods without sunspots in past centuries, corresponding to cold periods.
Scientific opinion tends to converge on global warming as the biggest climate risk in next century.
Attempts to coordinate climate research in U.S. end with an inadequate National Climate Program Act, accompanied by temporary growth in funding.
Second oil “energy crisis.” Strengthened environmental movement encourages renewable energy sources, inhibits nuclear energy growth.
U.S. National Academy of Sciences report finds it highly credible that doubling CO2 will bring 1.5-4.5EC global warming.
World Climate Research Programme launched to coordinate international research.
Election of Reagan brings backlash against environmental movement; political conservatism is linked to skepticism about global warming.
IBM Personal Computer introduced. Advanced economies are increasingly delinked from energy.
Hansen and others show that sulfate aerosols can significantly cool the climate, raising confidence in models showing future greenhouse warming.
Some scientists predict greenhouse warming “signal” should be visible by about the year 2000.
Greenland ice cores reveal drastic temperature oscillations in the span of a century in the distant past.
Strong global warming since mid-1970s is reported, with 1981 the warmest year on record.
Reports from U.S. National Academy of Sciences and Environmental Protection Agency spark conflict, as greenhouse warming becomes prominent in mainstream politics.
Villach conference declares expert consensus that some global warming seems inevitable, calls on governments to consider international agreements to restrict emissions.
Antarctic ice cores show that CO2 and temperature went up and down together through past ice ages, pointing to powerful biological and geochemical feedbacks.
Broecker speculates that a reorganization of North Atlantic Ocean circulation can bring swift and radical climate change.
Montreal Protocol of theVienna Convention imposes international restrictions on emission of ozone-destroying gases.
News media coverage of global warming leaps upward following record heat and droughts plus testimony by Hansen.
Toronto Conference calls for strict, specific limits on greenhouse gas emissions.
Ice-core and biology studies confirm living ecosystems make climate feedback by way of methane, which could accelerate global warming.
Intergovernmental Panel on Climate Change (IPCC) is established.
Level of CO2 in the atmosphere reaches 350 ppm.
After 1988 it is difficult to identify historical milestones. Not only do we lack perspective, but the effort was so large that progress on a given topic, even more than before, came through a variety of results spread over several groups and several years.
A TENTATIVE LIST:
Fossil-fuel and other industries form Global Climate Coalition in US to lobby politicians and convince the media and public that climate science is too uncertain to justify action.
First IPCC report says world has been warming and future warming seems likely. Industry lobbyists and some scientists dispute the tentative conclusions.
Mt. Pinatubo explodes; Hansen predicts cooling pattern, verifying (by 1995) computer models of aerosol effects.
Global warming skeptics emphasize studies indicating that a significant part of 20th-century temperature changes were due to solar influences. (The correlation would fail in the following decade.)
Studies from 55 million years ago show possibility of eruption of methane from the seabed with enormous self-sustained warming.
Conference in Rio de Janeiro produces UN Framework Convention on Climate Change, but US blocks calls for serious action.
Study of ancient climates reveals climate sensitivity in same range as predicted independently by computer models.
Greenland ice cores suggest that great climate changes (at least on a regional scale) can occur in the space of a single decade.
Second IPCC report detects "signature" of human-caused greenhouse effect warming, declares that serious warming is likely in the coming century.
Reports of the breaking up of Antarctic ice sheets and other signs of actual current warming in polar regions begin affecting public opinion.
Toyota introduces Prius in Japan, first mass-market electric hybrid car; swift progress in large wind turbines and other energy alternatives.
International conference produces Kyoto Protocol, setting targets to reduce greenhouse gas emissions if enough nations sign onto a treaty.
The warmest year on record, globally averaged (1995, 1997, and 2001-2006 were near the same level). Borehole data confirm extraordinary warming trend.
Qualms about arbitrariness in computer models diminish as teams model ice-age climate and dispense with special adjustments to reproduce current climate.
Criticism that satellite measurements show no warming are dismissed by National Academy Panel.
Ramanathan detects massive "brown cloud" of aerosols from South Asia.
Global Climate Coalition dissolves as many corporations grapple with threat of warming, but oil lobby convinces US administration to deny problem.
Variety of studies emphasize variability and importance of biological feedbacks in carbon cycle, liable to accelerate warming.
Third IPCC report states baldly that global warming, unprecedented since end of last ice age, is "very likely," with possible severe surprises. Effective end of debate among all but a few scientists.
Bonn meeting, with participation of most countries but not US, develops mechanisms for working towards Kyoto targets.
National Academy panel sees a "paradigm shift" in scientific recognition of the risk of abrupt climate change (decade-scale).
Warming observed in ocean basins; match with computer models gives a clear signature of greenhouse effect warming.
Studies find surprisingly strong "global dimming," due to pollution, has retarded arrival of greenhouse warming, but dimming is now decreasing.
Variety of studies increase concern that collapse of ice sheets (West Antarctica, perhaps Greenland) can raise sea levels faster than most had believed.
Deadly summer heat wave in Europe accelerates divergence between European and US public opinion.
In controversy over temperature data covering past millenium, most conclude climate variations were substantial, but not comparable to the post-1980 warming.
First major book, movie and art work featuring global warming appear.
Kyoto treaty goes into effect, signed by major industrial nations except US. Japan, Western Europe, regional US entities accelerate work to retard emissions.
Hurricane Katrina and other major tropical storms spur debate over impact of global warming on storm intensity.
Level of CO2 in the atmosphere reaches 380 ppm.
—From “The Discovery of Global Warming” by Spencer Weart | <urn:uuid:e0b05ace-6dc8-4716-ae1e-330878cbc2da> | CC-MAIN-2016-26 | http://www.livescience.com/1292-history-climate-change-science.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783400031.51/warc/CC-MAIN-20160624155000-00140-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.895213 | 2,442 | 3.359375 | 3 |
Body Measurements Help Determine Healthy Weight Status
A healthy weight is a range that relates statistically to good health. Being overweight or obese is statistically related to weight-related health problems, such as heart disease and hypertension.
Healthcare professionals use three key measurements to determine whether a person is at a healthy weight:
Body Mass Index (BMI): A measure that correlates to how much fat is on your body.
Waist size: A measure that helps to indicate whether the location of your body fat is a health hazard.
Risk factors for developing weight-related health problems: For example, your cholesterol level, blood pressure, and family history.
What you should weigh for optimal health may be quite different from what someone else should weigh, even if that someone is your same height, gender, and age.
When you step onto your bathroom scale, the number shows you how much your total body weighs. This total includes fat, muscle, bone, and water. Even though a healthy weight depends on more than the number on the scale, that number is a general starting point that you can use to assess your weight.
After you know your weight, you can compare it to the healthy weight ranges of the quick estimate method or use it to calculate your BMI. But what if your weight falls above these ranges? For most people, that’s less healthy. The more that you weigh beyond and above the healthy weight range for your height, the greater your risk for weight-related health problems. | <urn:uuid:fe26f634-2a56-4e16-951d-931af82435cd> | CC-MAIN-2016-26 | http://www.dummies.com/how-to/content/body-measurements-help-determine-healthy-weight-st.navId-323469.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391634.7/warc/CC-MAIN-20160624154951-00025-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.939843 | 306 | 3.75 | 4 |
|Name: _________________________||Period: ___________________|
This test consists of 15 multiple choice questions and 5 short answer questions.
Multiple Choice Questions
1. Isabel Allende is a native of what country?
(d) Puerto Rico
2. What does Allende wish for more than anything at this point?
(a) For Paula to reawaken
(b) For Pancho to return
(c) For Willie to return
(d) For Ernesto to get on with his life
3. How old is Allende when she returns to Tata's house from Lebanon?
4. Who lures Allende into the woods for inappropriate reasons?
(a) A cab driver
(b) A traveling salesman
(c) A local fisherman
(d) A policeman
5. To what city do Allende, her mother and brothers follow Tio Ramon when he is transferred?
6. Who is Willie?
(a) Allende's editor
(b) Allende's driver
(c) Allende's husband
(d) Allende's accountant
7. What is important to know about Paula at this time?
(a) She has just graduated
(b) She has just married
(c) She is critically ill
(d) She is starting a business
8. What did Paula do prior to her illness?
(a) Taught music theory
(b) Taught art history at a university
(c) Volunteered as a school psychologist
(d) Volunteered at the art institute
9. Allende states that the only certainty of her life was _____________________.
(a) Her husband
(b) Her books
(c) The hospital
10. What had Allende recently completed?
(a) Knitting a sweater
(b) A new novel
(c) A cooking course
(d) A trip around the world
11. In what country was Allende visiting when she was notified about her daughter?
12. Who is Ernesto?
(a) Allende's husband
(b) Paula's husband
(c) Paula's son
(d) Paula's stepson
13. Where was Allende's home at the time?
(b) San Francisco
(c) New York
14. What disease was Paula diagnosed with?
15. What was Pablo's passion?
Short Answer Questions
1. When does Allende see her father again?
2. What did Allende's mother do to make some extra money?
3. Which one of Allende's brothers was a great scholar?
4. With whom does Allende and her family live with after their return home?
5. What does Allende try to force Paula's doctors to do?
This section contains 361 words
(approx. 2 pages at 300 words per page) | <urn:uuid:94557e16-ff17-489e-a3bc-72e38f5b8dc8> | CC-MAIN-2016-26 | http://www.bookrags.com/lessonplan/paula/test1.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395346.6/warc/CC-MAIN-20160624154955-00108-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.870987 | 607 | 2.640625 | 3 |
Périgord (pārēgôrˈ) [key], region of SW France, now included in Dordogne and parts of Lot-et-Garonne depts. Périgueux (the capital) and Bergerac are the chief cities. The region consists of low, arid limestone plateaus, the deep and fertile valleys of the Lot and Dordogne rivers, and extensive oak forests. Périgord is noted for its truffles and goose livers, which are its major exports. Its farms produce wheat, corn, and tobacco, and raise livestock. The traditional metallurgical industry is concentrated at Fumel. Near Madeleine and Moustier are numerous cave dwellings from the Paleolithic period. Occupied during Gallic and Roman eras by the Petrocorii, Périgord became a county under the Merovingians (9th cent). First enfeoffed to the dukes of Aquitaine, it later passed to England, was returned to France c.1370 as a fief of the French crown, and passed eventually, through a complicated succession, to the house of Bourbon (1574). It was inherited by Henry of Navarre and, after he became king of France as Henry IV (1589), was incorporated (1607) into the royal domain as part of the province of Guienne.
The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved. | <urn:uuid:21379372-7763-42b3-90e6-357c4845a397> | CC-MAIN-2016-26 | http://www.factmonster.com/encyclopedia/world/perigord.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396029.85/warc/CC-MAIN-20160624154956-00015-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.960044 | 309 | 2.5625 | 3 |
It's heavy trucks that wear out highways, but truckers pay a hefty price for use of the public roads, officials say.
"It is pretty well documented that trucks are responsible for most damage to road surfaces," says Raymond Forsyth, chief of the pavement laboratory of the California Department of Transportation in Sacramento.
"The next time you're on a four-lane highway, compare the outside lane where the trucks travel with the inside lane where there are only cars," he suggests. "The outside lane is likely to be very badly distressed, but the inside lane may be almost pristine."
45,000 Miles of Highway
There are about 45,000 miles of highway in the California road system. About one-third of that is freeway.
Even though trucks are responsible for most of the wear and tear, they account for only a fraction of the miles traveled. Of the 196 billion miles driven on California roads last year, only 7.2 billion, or 3.7%, were logged by trucks.
"Cars aren't even considered in designing a pavement," Forsyth says. "Passenger vehicles add almost nothing to the equation."
The most striking example of that, he says, is the Arroyo Seco Parkway section of the Pasadena Freeway, where trucks are restricted.
"Normally, such a road surface (with trucks) would be expected to last 12 to 14 years," he says. "It has had a service life of 35 years without need for major rehabilitation."
More Axles Help
Damage to roads isn't caused by the overall weight of trucks, Forsyth says. Rather, it's how the weight is distributed.
"The more axles you have, the less weight is transferred to the pavement on each axle, and hence the less damage to the road," he says.
California law limits loads to 20,000 pounds per axle. Those who exceed it and get caught face fines of thousands of dollars.
The state spends about $510 million a year on highway rehabilitation (major reconstructive surgery) and maintenance (everything else, including patching potholes, clearing snow and litter, landscaping and fixing roadside rest stops).
Of that, about $172 million is spent on road surfaces alone--$110 million for rehabilitation and $62 million for maintenance.
Truckers pay the state $387 million a year in fuel taxes and commercial license fees.
Depending on how you look at it, that's either too much or too little.
It's 225% of the amount spent annually to repair road surfaces, but it's about 75% of the total when you include other items like snow clearing, litter pickup and landscaping.
Fuel taxes and commercial license fees are a major factor in making the truckers one of the most heavily taxed industries in the nation, with an average corporate tax rate of about 38%. That compares to the average for all companies of about 18%. | <urn:uuid:ab136aed-5bab-4a37-bb56-d907abba2489> | CC-MAIN-2016-26 | http://articles.latimes.com/1986-01-12/news/mn-27161_1_road-surface | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395546.12/warc/CC-MAIN-20160624154955-00166-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.963236 | 595 | 2.609375 | 3 |
James Watson Hypothesis Links Cancer to Antioxidants
In a paper he called “among my most important work since the double helix,” James D. Watson. Ph.D., posited a theory that links cancer progression in late stages of the disease to the presence of antioxidants.
The paper was published today in Open Biology, a journal of Great Britain’s Royal Society. According to the DNA co-discoverer and chancellor emeritus of Cold Spring Harbor Laboratory, reactive oxygen species (ROS) pose a dichotomy: They play a positive role in mediating the killing of stressed cells during apoptosis. But when no such cells exist, however, ROS’s are constantly being neutralized by antioxidative proteins, causing irreversible damage to DNA and RNA among key nucleic acid molecules and proteins.
“Unless we can find ways of reducing antioxidant levels, late-stage cancer 10 years from now will be as incurable as it is today,” Dr. Watson said in a statement. “Although mortality from many cancers has been steadily falling, particularly those of the blood [i.e., leukemias], the more important statistic may be that so many epithelial cancers (carcinomas) and effectively all mesenchymal cancers (sarcomas) remain largely incurable.”
So much, he adds, for consuming higher levels of antioxidants found in fruits: “Blueberries best be eaten because they taste good, not because their consumption will lead to less cancer,” Dr. Watson wrote.
Dr. Watson has long theorized on cancer and cancer policy. In 2009, writing in The New York Times, he restated his call for concentrating federal cancer spending on basic research rather than clinical centers. Two years later, in the journal Cancer Discovery, Dr. Watson suggested that more effective anticancer drug targets may be found through RNAi technologies designed to pinpoint key regulatory and metabolic weaknesses of "always-on" cancer cells, rather than through reversal of "always-on" signals.
In his latest paper, Dr. Watson theorized that the cell-killing ability of current anticancer therapies—from radiation to Taxol and other chemotherapeutic agents—occurs mainly due to the induction of apoptosis by ROS. This, he wrote, would explain why cancers that resist chemotherapy also equally resist ionizing radiotherapy: Both depend upon a ROS-mediated cell-killing mechanism.
High levels of ROS-destroying antioxidants, Dr. Watson added, accounts for cancer cells largely driven by mutant proteins being among the hardest to produce a response via treatment. He cited recent research showing up-regulation of the gene transcription factor Nrf2—which controls the synthesis of antioxidants—both when cells proliferate as well as when such oncogenes as RAS, MYC, and RAF are active.
“This makes sense,” Dr. Watson concluded, “because we want antioxidants present when DNA functions to make more of itself.” | <urn:uuid:6659353b-8049-4875-b048-c6b51ac5d953> | CC-MAIN-2016-26 | http://www.genengnews.com/keywordsandtools/print/4/29968/ | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396106.71/warc/CC-MAIN-20160624154956-00006-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.931181 | 619 | 3.140625 | 3 |
"To see what is in front of one's nose requires a constant struggle."--George Orwell
Organisms play two roles in evolution. The first consists of carrying genes; organisms survive and reproduce according to chance and natural selection pressures in their environments. This role is the basis for most evolutionary theory, it has been subject to intense qualitative and quantitative investigation, and it is reasonably well understood. However, organisms also interact with environments, take energy and resources from environments, make micro- and macrohabitat choices with respect to environments, construct artifacts, emit detritus and die in environments, and by doing all these things, modify at least some of the natural selection pressures present in their own, and in each other's, local environments. This second role for phenotypes in evolution is not well described or well understood by evolutionary biologists and has not been subject to a great deal of investigation. We call it "niche construction" (Odling-Smee 1988) and it is the subject of this book.
All living creatures, through their metabolism, their activities, and their choices, partly create and partly destroy their own niches, on scales ranging from the extremely local to the global. Organisms choose habitats and resources, construct aspects of their environments such as nests, holes, burrows, webs, pupal cases, and a chemical milieu, and frequently choose, protect, and provision nursery environments for their offspring. Niche construction is a strongly intuitive concept. It is far more obvious than natural selection because it is far easier to observe individual organisms doing niche construction than to observe them being affected by natural selection. It is self-evident that all organisms must interact with their environments to stay alive, and equally obvious that, when they do, it is not just organisms that are likely to be affected by the consequences of these interactions, but also environments. That organisms actively contribute toward both the "construction" and "destruction" of their own and each other's niches is scarcely news. So why write a book about it?
The answer is that, when subject to close scrutiny, it becomes clear that niche construction has a number of important, but hitherto neglected, implications for evolutionary biology and related disciplines. In fact, in this book we go so far as to argue that niche construction changes our conception of the evolutionary process. Niche construction should be regarded, after natural selection, as a second major participant in evolution. Rather than acting merely as a static "enforcer" of natural selection through fixed physical factors such as temperature, humidity, or salinity, the environment will be viewed here as changing, because of the actions of organisms, and coevolving with the organisms on which it acts selectively.
Using a combination of empirical data, comparative argument, and mathematical modeling we will try to convince the reader of the merits of this new way of thinking about evolution. We will illustrate how niche construction can change the direction, rate, and dynamics of the evolutionary process. Niche construction is a potent evolutionary agent because it introduces feedback into the evolutionary dynamic. Niche construction by organisms significantly modifies the selection pressures acting on them, on their descendants, and on unrelated populations. The later chapters of this book describe how niche construction can be incorporated into empirical and theoretical evolutionary analyses, and how it can be used to generate hypotheses. We will present methods for testing these hypotheses and point to the broad areas of biology and the social sciences to which they are applicable. Our hope is that the niche-construction perspective will prove fruitful by leading to the development of testable new theories and facilitating greater understanding of the evolutionary process.
In this first chapter we introduce the concept of niche construction, and spell out its major consequences with illustrative examples from natural history. We describe four major ramifications of niche construction. Niche construction may (1) in part, control the flow of energy and matter through ecosystems (ecosystem engineering), (2) transform selective environments to generate a form of feedback that may have important evolutionary consequences, (3) create an ecological inheritance of modified selection pressures for descendant populations, and, finally (4) provide a second process capable of contributing to the dynamic adaptive match between organisms and environments (see fig. 1.1). We then consider some of the implications of these consequences for three different bodies of biological theory, namely, evolutionary theory itself, the relationship between evolutionary theory and ecosystem ecology, and the relationship between evolutionary theory and the human sciences.
1.1 THE CONSEQUENCES OF NICHE CONSTRUCTION
1.1.1 Ecosystem Engineering
We begin with an example of a potent niche constructor, the genus of leaf-cutter ants, Atta, as described by the myrmecologists Bert Hölldobler and Edward Wilson (1994). At present, 15 species of leaf-cutter ants are known to science. All of them live in the New World across a geographical range that stretches from the southern states of the United States of America to the south of Argentina. The most salient niche-constructing activity of this genus is "agriculture." Leaf-cutter ants grow fungi on substrates of fresh vegetation that they initially cut and collect from outside their nests and then carry into their nests to form the basis of fungal gardens (fig. 1.2). The fungal crop that the ants grow consists of a fluffy white mold, resembling bread mold, made up of masses of thread-shaped hyphae. The ants' agriculture is so efficient that it not only provides them with an abundant supply of food, but enables individual colonies to reach staggeringly large sizes, with a single colony containing millions of workers. In one extreme case described by Hölldobler and Wilson, a nest of the species Atta sexdens consisted of about a thousand chambers, with the chambers varying in size from that of a closed fist to that of a soccer ball. Three hundred and ninety of its chambers were still in use when it was discovered, and they were filled with both fungal gardens and ants. This particular nest was so huge that the loose soil that had been brought out and piled on the ground by the ants in the course of making their nest occupied over 22 cubic meters and weighed approximately 40,000 kilograms, or 44 tons. Such an example makes it clear that the collective leaf-cutting activities of such large colonies of ants can have enormous impacts on the ants' surrounding environment.
Given such a prodigious capacity for niche construction, it is not surprising that several species of leaf-cutter ants, including Atta cephalotes and Atta sexdens, turn out to be among the worst pests of Central and South America. They destroy billions of dollars worth of agriculturally valuable crops each year. For instance, in Brazil, leaf-cutter ants are especially destructive in eucalyptus and citrus plantations. What is, perhaps, more surprising is that the same ants produce beneficial effects in ecosystems. For example, the ants turn over and aerate large quantities of soil in forests and grasslands, and they also circulate nutrients that are essential to the lives of many other species of organisms with whom they share their ecosystems. Moreover, it has recently been discovered that leaf-cutter ants can help the recovery of rainforests in areas where the primary forest has been destroyed by human farmers and loggers. Here, the ants' activities benefit newly established plants because the soil from their nests is much easier than the surrounding soil for young plant roots to penetrate. Also, the decomposition of the plant material that the ants store in their nests increases the soil's pH, thereby increasing its capacity to retain its nutrients, preventing them from being washed away out of reach of the plants.
Leaf-cutter ants are a good illustration of the first major consequence of niche construction. The activities of organisms can result in significant, consistent, and directed changes in their local environments. Simply by choosing or perturbing their habitats, for example, by repeatedly consuming the same resource, or repeatedly emitting the same detritus, organisms can substantially modify their worlds, and do so in a nonrandom or predictable manner. As a consequence, niche-constructing organisms frequently modify the environments of other organisms too, including organisms in other species. They also affect some of the properties of the ecosystems that they share with other species, in ways that may either harm or benefit other organisms. For instance, as the major herbivores of the neotropics, leaf-cutter ants clearly have an impact on the growth and density of those species of plants that they exploit, as well as on those plants that grow in the improved soil of their nests and those species that rely on the ants to disperse their seeds. Moreover, leaf-cutter ants have glands that secrete substances that kill virtually all bacteria and fungi, except for the single fungus that they cultivate.
While the leaf-cutter ants provide a particularly striking example, there is nothing remarkable about the fact that they have an impact on their local ecology. In chapter 2 of this book we will demonstrate that niche construction is extremely common. Population-community ecologists know a good deal about how organisms can affect each other's environments, both inter- and intraspecifically, and how, by doing so, they can influence such phenomena as the distribution and abundance of organisms, population and community structures, food webs, and trophic dynamics (Begon et al. 1996; DeAngelis 1992; Rosenzweig 1995). Similarly, ecosystem ecologists already have a good understanding of the many ways in which organisms can influence energy and matter flows through ecosystems when they take resources from them, or return detritus to them, and also how their influence can, in turn, affect the structure and function of ecosystems, the resistance and the resilience of ecosystems to perturbations, and the nature of various biogeochemical cycles (O'Neill et al. 1986; Odum 1989; Jones and Lawton 1995; Patten and Jorgensen 1995).
For our purposes, however, a recent insight from a team of ecosystem ecologists, Jones et al. (1994, 1997) and Jones and Lawton (1995), is particularly valuable. Jones et al. describe organisms that choose or perturb their own habitats as "ecosystem engineers," where "ecosystem engineering" is essentially the same as "niche construction." Jones et al. claim that when organisms invest in ecosystem engineering they not only contribute to energy and matter flows and trophic patterns in their ecosystems but in part also control them. They propose that organisms achieve their control via an extra web of connectance in ecosystems, which they call an "engineering web," and which is established by the interactions of diverse species of engineering organisms (Jones et al. 1997). This engineering web operates in conjunction with the familiar material (stoichiometric) and energy (thermodynamic) webs of connectance in ecosystems that are already studied by ecologists (Reiners 1986). Jones et al. also suggest that it is not always necessary for ecosystem engineers to contribute directly to a particular energy or material flow among a set of trophically connected organisms in an ecosystem for them to control the flow (Jones et al. 1997, p. 1952).
We can illustrate these ideas by using two of Jones et al.'s own examples, both taken from the Negev desert in Israel. The first is a case of engineering by microorganisms. In many deserts, including the Negev, the soil is extensively covered by dominant microphytic communities of blue-green algae, cyanobacteria, and fungi. Although these microorganisms are barely visible to the naked eye, they nevertheless have a powerful engineering effect because they secrete polysaccharides that bind the desert's soil and sand together to form a crust that not only protects their own colonies from heat, but also controls erosion, runoff, and site availability for the germination of higher plants in the desert (West 1990; Zaady and Shachak 1994; Jones et al. 1997). After rain, the asphaltlike patches that are created by these microorganisms reduce the absorption of water by about 30%, and this increases the runoff of water, allowing the water to form pools in pits previously dug, for example, by desert porcupines digging for geophytes. Windblown seeds then germinate in these moist pits and give rise to lush oases that may eventually harbor dozens of other species (Alper 1998). Yet all of this ultimately depends on the long reach of the engineering activities of microorganisms.
The second example is provided by three species of snail, Euchondrus spp., that eat endolithic lichens that grow under the surface of limestone rocks in the Negev desert. One consequence of this unusual form of herbivory is that the snails are major agents of rock weathering and also of soil formation in this desert. Their agency, however, is not due to the amount of lichens they consume, which is actually rather little. Instead, it is due to the unexpected fact that these snails have to physically disrupt and ingest the rock substrate in order to consume the lichens. They later excrete the rock material ingested as feces, which they deposit on the soil under the rocks. Shachak et al. (1987) estimated that the annual rate of biological weathering of these rocks by snails is 0.7 to 1.1 metric tons per hectare per year, which is sufficient to affect the whole desert ecosystem (Shachak et al. 1987; Shachak and Jones 1995). By converting rock to soil at this rate, the snails become major agents in soil formation.
So ecosystem control is one major new idea associated with the ecological effects of niche construction. It stems from the capacity of niche-constructing organisms to modify not only their own environments but also the environments of other organisms in the context of shared ecosystems.
1.1.2 The Modification of Selection Pressures
The second consequence of niche construction, and its first evolutionary consequence, derives from these ecological effects. If organisms modify their environments, and if in addition they affect, and possibly in part control, some of the energy and matter flows in their ecosystems, then they are likely to modify some of the natural selection pressures that are present in their own local selective environments, as well as in the selective environments of other organisms. In fact, it is difficult to see how organisms can avoid doing this. Environmental change modifies natural selection pressures (Endler 1986), while organisms are a known source of environmental change in ecology (Jones et al. 1997).
However, in order for niche construction to be a significant evolutionary process, it is not sufficient for niche-constructing organisms to modify one or more natural selection pressures in their local environments temporarily, because whatever selection pressures they do modify must also persist in their modified form for long enough, and with enough local consistency, to be able to have an evolutionary effect. Often this criterion will not be met. Moreover, independent agents in a population's environment may erase or overwhelm the effects of the population's niche construction, thereby ensuring that there is no persistent environmental change caused by the population's activities. For instance, other environmental agents may disperse a population's detritus by dissipating it over time, or, if the agents are detritivores, they may consume the population's detritus, or recycle it, instead of allowing the detritus to accumulate.
There are, however, at least two ways in which this persistence criterion can be satisfied. If, in each generation, each individual repeatedly changes its own ontogenetic environment in the same way, for instance, because each individual inherits genes that express the same niche-constructing phenotypes, then ancestral organisms may modify a source of natural selection by repetitive niche construction. The immediate environmental consequences of this kind of niche construction may be transitory, and may be restricted to single generations only, but if the same environmental change is reimposed sufficiently often and persists for a sufficient number of generations, it may modify the pressures of natural selection in local environments and therefore drive a new evolutionary episode.
For example, individual web spiders repeatedly build webs in their local environments, generation after generation, because they repeatedly inherit genes from their ancestors that are expressed in web construction. Even though spiders' webs are transitory objects, and are only too likely to be destroyed on a daily basis by other agents in the environment, such as other animals, or the weather, every time a spider's web is destroyed the spider's genes "instruct" the spider to make a new one. As a result there is almost always a web in the local environments of these spiders. The omnipresent web appears to have fed back, over many generations, in the form of modified natural selection. For instance, spiders on a web are exposed to the threat of avian predators, but they frequently engage in courtship and process prey on the web. Thus the web may have been a source of selection to favor further phenotypic changes in these species, including the marking of their webs to enhance crypsis, differential responses to the frequency of web vibration for prey and for a potential mate, or, as in the case of one genus of African orb-web spider, Cyclosa, the building of dummy spiders in the web probably to divert the attention of birds that prey on them (Edmunds 1974; Preston-Mafham and Preston-Mafham 1996).
The second way of satisfying the same persistence criterion occurs when all or a part of the consequences of one generation's niche-constructing activities persist in their modified form in the selective environments of a succeeding generation. By this means, ancestral organisms can bequeath legacies of modified natural selection pressures to their descendants via the external environment. This between-generational transmittal may be restricted to just two generations, as happens, for example, in maternal inheritance (Kirkpatrick and Lande 1989; Cowley and Atchley 1992; Schluter and Gustafsson 1993; Mousseau and Fox 1998a,b) when mothers modify the selection pressures in the local environments of their offspring. Alternatively, it may be a multiple-generation phenomenon in which the cumulative effects of generations of niche construction modify the selective environments of more distant descendants.
The common cuckoo, Cuculus canorus, provides a familiar two-generation example. In this species of brood parasite, cuckoo mothers repeatedly select a host belonging to some other bird species and lay their eggs in the chosen host's nests, subsequently relying entirely on this host to incubate the cuckoo eggs and raise the cuckoo young to independence. Cuckoo mothers have parasitized other birds in this way for generations and, as a result, have apparently bequeathed modified natural selection pressures to their offspring, in the form of these alien nurseries. The modified natural selection pressures have probably contributed to several novel adaptations in cuckoo chicks, including an extremely short incubation period, which ensures that the cuckoo chicks usually hatch before the host's chicks, and the ejection of the host's eggs from the nest or the killing of any of the host's chicks that have managed to hatch. These latter acts are themselves further examples of niche construction, this time via the agency of the cuckoo chicks rather than their mothers. The effect is that each cuckoo chick is raised on its own by its host and does not have to compete with any rival chicks when its foster parent arrives with food. However, having killed its rivals, the cuckoo chick must stimulate an adequate rate of feeding by its host. It appears to accomplish this task by behaving as if it were the equivalent of a whole brood of its host's chicks, instead of just a singleton. It does so by emitting a rapid begging call that mimics the begging sounds, as well as the calling rate, of a complete brood of its host's chicks (Davies et al. 1998). The initial choice of host's nests by cuckoo mothers may also have made possible some additional adaptations in their offspring when the latter become parents. For example, cuckoos that were raised in the nests of a particular host species subsequently tend to parasitize the same host species, possibly because as they developed they learned their hosts' characteristics (Krebs and Davies 1993). The mother's niche construction has modified the selection on her offspring, resulting in a cascade of evolutionary events, including the selection of further niche construction on the part of the chick. In recent years there has been increasing recognition that such maternal effects are both taxonomically widespread and evolutionarily significant (Wade 1998; Mousseau and Fox 1998a).
Earthworms provide an equally familiar multigenerational example, one which has the added distinction of having been described by Darwin (1881). Through their burrowing activities, their dragging organic material into the soil, their mixing it up with inorganic material, and their casting, which serves as the basis for microbial activity, earthworms dramatically change the structure and chemistry of the soils in which they live, often on a scale that exceeds even the soil-perturbing activities of leaf-cutter ants. For instance, in temperate grasslands earthworms can consume up to 90 tons of soil per hectare per year. Similarly, as a result of their industry, earthworms affect ecosystems by contributing to soil genesis, to the stability of soil aggregates, to soil porosity, to soil aeration, and to soil drainage. Also, because their casts contain more organic carbon, nitrogen, and polysaccharides than the parent soil, earthworms can also affect plant growth by ensuring the rapid recycling of many plant nutrients. In return, the earthworms probably benefit from the extra plant growth they induce by gaining an enhanced supply of plant litter (Kretzschmar 1983; Hayes 1983; Stout 1983; Lee 1985; Ellis and Mellor 1995). All of these effects typically depend on multiple generations of earthworm niche construction, leading only gradually to cumulative improvements in the soil. It follows that most contemporary earthworms inhabit local selective environments that have been radically altered, not just by their parent's generation, but by many generations of their niche-constructing ancestors. It is likely that some earthworm phenotypes, such as epidermis structure, or the amount of mucus secreted, coevolved with earthworm niche construction over many generations. Moreover, because these originally aquatic creatures are able to solve their water- and salt-balance problems through tunneling, exuding mucus, eliminating calcite, and dragging leaf litter below ground, that is, through their niche construction, earthworms have retained the ancestral freshwater kidneys (or nephridia) and have evolved few of the structural adaptations one would expect to see in an animal living on land (Turner 2000). For instance, earthworms produce the high volumes of urine characteristic of freshwater rather than terrestrial animals.
The production of oxygen by photosynthetic organisms is another multiple-generation example, which illustrates the extreme effects that niche construction can have on a global scale if its consequences happen to build up over long periods of time. When photosynthesis first evolved in bacteria, particularly in cyanobacteria, a novel form of oxygen production was created. The contribution of these ancestral organisms to the earth's 21% oxygen atmosphere must have occurred over billions of years, and it must have taken innumerable generations of photosynthesizing organisms to achieve. It is highly likely that modified natural selection pressures, stemming from the earth's changed atmosphere, played an enormous role in subsequent biological evolution. For example, many organisms have evolved a capacity for aerobic respiration, and they have also evolved other mechanisms, such as the enzyme superoxide dismutase, that protect cells against oxidation (Futuyma 1998).
In the next chapter we illustrate traits in many species that appear to have evolved as a consequence of selection generated by prior niche construction. However, if organisms evolve in response to selection pressures modified by their ancestors, there is feedback in the evolutionary dynamic, and it is well established that biological systems with feedback behave quite differently from those without it (Robertson 1991). This is a further point to which we shall repeatedly return in this book.
1.1.3 Ecological Inheritance
With the exception of the special cases of maternal and cultural inheritance (reviewed in chapter 3) standard evolutionary theory is typically concerned with only a single general inheritance system in evolution. It assumes that natural selection among individual organisms influences which individuals survive and reproduce to pass on their genes to the next generation (fig. 1.3a), and this genetic inheritance is generally regarded as the only inheritance system to play a major role in biological evolution. This assumption is not affected by niche construction as long as the physical consequences of the niche-construction process are erased in the selective environments of populations between each generation, and therefore last only a single generation. For instance, in the orb-web spider case, the repetitive construction of webs by spiders owes its capacity to influence the evolution of populations of spiders not to any between-generation persistence of the webs themselves (spiders' webs are far too transitory for that), but rather to the spider's genetic inheritance system. This ceases to be true, however, when the physical consequences of one generation's niche construction are not completely erased in the environments of its descendants but are instead bequeathed, either wholly or in part, from one generation to the next, in the form of legacies of modified natural selection pressures. This is what happens in the case of cuckoos over two generations, and in earthworms and in cyanobacteria over multiple generations. Here, then, is a third major consequence of niche construction. Where niche construction affects multiple generations, it introduces a second general inheritance system in evolution, one that works via environments. This second inheritance system has not yet been widely incorporated by evolutionary theory.
We call this second general inheritance system ecological inheritance (Odling-Smee 1988; Odling-Smee et al. 1996). It comprises whatever legacies of modified natural selection pressures are bequeathed by niche-constructing ancestral organisms to their descendants. Ecological inheritance differs from genetic inheritance in several important respects. First, genetic inheritance depends on the capacity of reproducing parent organisms to pass on replicas of their genes to their offspring. Ecological inheritance, however, does not depend on the presence of any environmental replicators, but merely on the persistence, between generations, of whatever physical changes are caused by ancestral organisms in the local selective environments of their descendants. Thus, ecological inheritance more closely resembles the inheritance of territory or property than it does the inheritance of genes. Although the inheritance of property is common enough among human beings, it is not restricted to humans. As we have seen, cuckoos inherit an alien nest while earthworms inherit a modified soil environment. Ecological inheritance also has a lot in common with the more familiar concept of ecological succession, except that it has evolutionary as well as ecological consequences because it involves the inheritance by populations of modified natural selection pressures, via a succession of environmental states, which may then drive further evolutionary changes in those populations.
Second, when organisms inherit naturally selected genes, they are, in effect, inheriting information molecularly encoded in the nucleotide sequences of DNA. Genetic information is, of course, noncognitive (see chapter 4). Nevertheless, it is information that is used to inform the expression of phenotypes in ontogenetic environments, relative to their local selective environments (J. Holland 1992, 1995; Eigen 1992). In contrast, when organisms inherit legacies of modified natural selection pressures they typically do not inherit information. Instead they inherit some of the agents in their environments that select for their genes and that thereby determine which information the organisms express (J. Holland 1995).
Third, genes and biotically modified natural selection pressures are passed on from one generation to the next by completely different processes. Genetic inheritance depends on the between-generation processes of reproduction, including sexual reproduction, which means that genes can only be transmitted to new organisms once during their lives. It also means that genes can only be transmitted to organisms by parents, and in one direction only, from parents to offspring, rather than the other way round. However, an ecological inheritance, in the form of one or more biotically modified natural selection pressures, can potentially be bequeathed by any organism to any other organism, at any stage during an organism's lifetime, and therefore within as well as between generations. It is also possible for an ecological inheritance to travel backward in generational terms because offspring may sometimes modify their parents' selective environments, as well as their own and those of their descendants.
Finally, the selective environments of organisms can be modified either by their genetic relatives or by other unrelated organisms. In fact, any organism's selective environment is potentially modifiable by any other organism that happens to be a neighbor or that shares, or that has previously shared, some common physical aspect of a mutual environment or that is capable of exerting an indirect influence by affecting the flow of energy or materials through that environment. All such neighbors are ecologically related but they need not be genetically related. Ecological and genetic ancestors are not necessarily identical.
The way in which the two general inheritance systems operate in evolution, and how they interact with each other, is summarized in figure 1.3b. On the right of figure 1.3b genes are shown being transmitted by genetically related ancestral organisms at time t, to their genetic descendants at time t + 1, in the usual way. On the left, however, selected habitats, modified habitats, artifacts, or in general, ancestrally modified sources of natural selection persist or are actively or effectively transmitted by these same organisms to their descendants in their local environments (E). Thus, the selective environments encountered by the descendant organisms at time t + 1 do not just comprise independent sources of natural selection pressures as evolutionary theory currently implies. They stem partly from such independent environmental agents, for example, climate, weather, or physical or chemical events, but they also stem in part from sources of natural selection that have previously been modified by ancestral niche construction.
This capacity of organisms to modify some of their own selection pressures, whether between generations or within generations, also has a fourth consequence. It requires us to revise the concept of adaptation in evolution and to adjust its meaning along lines anticipated by Richard Lewontin (1983). Lewontin pointed out that contemporary evolutionary theory implicitly assumes that natural selection pressures in environments are decoupled from the adaptations of the organisms for which they select. Therefore, with some exceptions (reviewed in chapter 3), for example, those that involve frequency-dependent or habitat selection, standard theory treats sources of natural selection in environments and adaptations in organisms as independent of each other or, as Lewontin puts it: "The environment 'poses the problem'; the organisms 'posit solutions,' of which the best is finally 'chosen'" (1983, p. 276). What this classical approach overlooks, and what we are stressing here, is that the selective environments of organisms are themselves partly built by the niche-constructing activities of the organisms that they are selecting for. To quote Lewontin again: "Organisms do not adapt to their environments; they construct them out of the bits and pieces of the external world" (1983, p. 280). Therefore, some selection pressures cannot be decoupled from the adaptations of organisms. Instead they must be participants in a system of feedbacks between natural selection pressures in environments and adaptations in organisms.
We have already encountered several examples of this kind of feedback in action. For instance, the cuckoo chicks, having destroyed their host's brood, adapt to mimic the missing broods they have killed (Davies et al. 1998). Other equally simple examples are found in spiders. One discussed by Dawkins (1996) on the basis of work by Vollrath (1988, 1992) concerns how, when its prey crashes into its web, the prey neither breaks the web nor bounces off it, but sticks to it. Many web spiders evolved the ability to make the threads of their webs sticky enough to hang on to the prey. But how are the spiders to ensure that they themselves do not get stuck to their own webs yet are free to move around on them? Dawkins offers two answers. One involves the anointing of spiders' legs with a special oil that provides the spiders with some protection against the stickiness of their own webs, while the other involves spiders making some of the spokes of their own webs nonsticky, to allow themselves free movement along these spokes. Such examples nicely illustrate Lewontin's point.
Lewontin (1983, 2000) argued that the classical picture of evolution can be represented formally as a pair of differential equations in time:
dO/dt = f(O, E),     (1.1)
dE/dt = g(E),        (1.2)
Equation 1.1 states that evolution, or change in the organism over time, depends on both the current state of the organism and its environment, while equation 1.2 states that environmental change depends only on environmental variables. The crucial point is that these two equations are separable. Adapted organisms are not supposed to cause any of the environmental changes that subsequently select for adapted organisms. Hence, the evolution of organisms is generally assumed to be directed exclusively by independent natural selection pressures in environments, and not at all by the niche-constructing activities of organisms. Lewontin argued that what is actually happening in nature is better represented by a pair of coupled differential equations
dO/dt = f(O, E),     (1.3)
dE/dt = g(O, E),     (1.4)
in which the histories of both environment and organism are functions of both environment and organism. Equations 1.3 and 1.4 describe a situation in which niche-constructing organisms and their environments are, in effect, coevolving, because they are codetermining and codirecting changes in each other. Equations 1.3 and 1.4 describe the coevolution of organism and environment in which both are acting as both causes and effects.
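The contrast between the uncoupled and coupled regimes can be made concrete with a short numerical sketch. The Python fragment below is purely illustrative: the linear forms chosen for f and g, the rate constants, and the Euler integration are assumptions made for this example only, not anything specified by Lewontin. It simply shows that once g depends on O, the environment no longer settles to a state of its own; organism and environment determine each other's trajectories.

```python
# Minimal numerical sketch contrasting Lewontin's uncoupled (1.1-1.2) and
# coupled (1.3-1.4) dynamics. The particular response functions and rate
# constants below are illustrative assumptions, not taken from the text.

def euler(fO, fE, O0, E0, dt=0.01, steps=5000):
    """Integrate dO/dt = fO(O, E) and dE/dt = fE(O, E) with Euler steps."""
    O, E = O0, E0
    for _ in range(steps):
        O, E = O + dt * fO(O, E), E + dt * fE(O, E)
    return O, E

# Uncoupled case: the environment relaxes toward a fixed state E = 1
# whatever the organism does; the organism merely tracks the environment.
O_unc, E_unc = euler(fO=lambda O, E: E - O,
                     fE=lambda O, E: 1.0 - E,
                     O0=0.0, E0=0.0)

# Coupled case: a niche-construction term (+0.5 * O) lets the organism push
# the environment, so organism and environment co-determine the outcome.
O_cpl, E_cpl = euler(fO=lambda O, E: E - O,
                     fE=lambda O, E: 1.0 - E + 0.5 * O,
                     O0=0.0, E0=0.0)

print(f"uncoupled: O = {O_unc:.2f}, E = {E_unc:.2f}")  # both settle near 1.0
print(f"coupled:   O = {O_cpl:.2f}, E = {E_cpl:.2f}")  # both settle near 2.0
```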
Evolutionary biology has provided a compelling explanation for why organisms appear so extraordinarily well suited to the environments in which they live: namely, through the action of natural selection, species have come to exhibit those characteristics that enable survival and reproduction. However, there are in fact two logically distinct routes to the evolving match between organisms and their environments: either the organism changes to suit the environment, or the environment is changed to suit the organism. The first alternative is brought about through the process of natural selection, and the second is one possible outcome of the process of niche construction. Of course, in reality these two processes can seldom be separated.
Yet the standard view is that niche construction should not be regarded as a process in evolution because it is determined by prior natural selection. The unstated assumption is that the environmental source of the prior natural selection is independent of the organism (as formalized by eq. 1.2). However, in reality, the argument that niche construction can be disregarded because it is partly a product of natural selection makes no more sense than would the counter proposal that natural selection can be disregarded because it is partly a product of niche construction. One cannot assume that the ultimate cause of niche construction is the environments that selected for niche-constructing traits, if prior niche construction had partly caused the state of the selective environments (as formalized by eq. 1.4). Ultimately, such recursions would regress back to the beginning of life, and as niche construction is one of the defining features of life (see chapters 2 and 4), there is no stage at which we could say natural selection preceded niche construction or that selective environments preceded niche-constructing organisms. From the beginning of life, all organisms have, in part, modified their selective environments, and their ability to do so was, in part, a consequence of their naturally selected genes.
1.2 THE IMPLICATIONS
We can now start to consider some of the implications of adding niche construction to contemporary evolutionary theory. In doing so we introduce the three principal fields we shall be dealing with in later chapters: evolutionary theory itself, the relationship between evolutionary theory and ecosystem-level ecology, and the evolutionary basis of human cultural processes.
1.2.1 Implications for Evolutionary Theory
What difference does it make if the selection pressures acting on organisms stem from an independent environment or a niche-constructed environment? The principal difference is equivalent to the difference between Lewontin's coupled and uncoupled equations and can be encapsulated by one word, namely, "feedback." If organisms evolve in response to selection pressures modified by themselves and their ancestors, there is feedback in the system. In chapters 3 and 6 of this book we will describe and analyze theoretical models that illustrate some of the differences that this feedback makes to the evolutionary process. We show how traits whose fitness depends on alterable sources of selection (recipient traits) coevolve with traits that alter sources of selection (niche-constructing traits), resulting in very different evolutionary dynamics for both traits from what would occur if each had evolved in isolation. Our models demonstrate how feedback from a population's niche construction can cause either evolutionary inertia or momentum, lead to fixation of otherwise deleterious alleles, support stable polymorphisms where none are expected, eliminate what would otherwise be stable polymorphisms, and influence levels of linkage disequilibrium. There is no escaping the conclusion that niche construction is evolutionarily consequential.
A second difference is ecological inheritance. The niche-construction perspective stresses two legacies that organisms inherit from their ancestors, genes and a modified environment with its associated selection pressures. As we document in chapter 2, ecological inheritance is likely to be ubiquitous, particularly when the widespread evidence for maternal inheritance is taken into account (Mousseau and Dingle 1991; Roach and Wulff 1987; Bernardo 1996; Mousseau and Fox 1998a). Consider, for instance, the observation that most species of insects are oviparous, with the female depositing eggs on or near the food required by the offspring upon hatching (Gullan and Cranston 1994). These offspring inherit from their mother the legacy of a readily available, nutritious larval food and a nursery environment. When one considers that careful selection of appropriate sites by ovipositing females is found in the vast majority of insects and that estimates of the number of insect species range from 5 to 80 million, the pervasiveness of ecological inheritance becomes clear.
The analyses that we will present in chapters 3 and 6 demonstrate that, because of the multigenerational properties of ecological inheritance, niche construction can generate unusual evolutionary dynamics. Theoretical population-genetic analyses have established that processes that carry over from past generations can change the evolutionary dynamic in a number of ways, generating time lags (in the response to selection of the recipient trait), momentum effects (populations continuing to evolve in the same direction after selection has stopped or reversed), inertia effects (no noticeable evolutionary response to selection for a number of generations), opposite responses to selection, and sudden catastrophic responses to selection (Feldman and Cavalli-Sforza 1976; Kirkpatrick and Lande 1989; Robertson 1991; Laland et al. 1996; Mousseau and Fox 1998a,b; Wolf et al. 2000). Wherever there is ecological inheritance, a product of niche construction, the evolutionary process may include some or all of these complications.
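A simple caricature can convey why ecological inheritance generates such dynamics. The sketch below is not the model of Laland et al. (1996); it is a deliberately minimal haploid recursion, with arbitrary illustrative parameters, in which an allele A both constructs a resource and is favored by selection only to the extent that the resource has accumulated. Because the resource persists between generations, the population's response to selection first lags behind its own niche construction and then accelerates.

```python
# A deliberately minimal caricature (not the Laland et al. 1996 model) of how
# ecological inheritance produces lagged dynamics. Allele A both constructs a
# resource R and is favored only insofar as R has accumulated; because R
# persists across generations, the response to selection lags, then speeds up.
# All parameter values are illustrative assumptions.

def simulate(generations=200, p0=0.05, decay=0.9, build=0.1, s=0.2):
    p, R = p0, 0.0                  # allele frequency and resource level
    history = []
    for t in range(generations):
        R = decay * R + build * p   # resource inherited from the past, topped up by current carriers
        w_A, w_a = 1.0 + s * R, 1.0 # fitnesses depend on the resource, not directly on p
        p = p * w_A / (p * w_A + (1 - p) * w_a)
        history.append((t, p, R))
    return history

for t, p, R in simulate()[::40]:
    print(f"generation {t:3d}: freq(A) = {p:.3f}, resource = {R:.3f}")
```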
A third implication of niche construction is that it allows acquired characteristics to play a role in the evolutionary process, in a non-Lamarckian fashion, by their influence on selective environments through niche construction. When phenotypes construct niches, they become more than simply "vehicles" for their genes (Dawkins 1989), as they may now also be responsible for modifying some of the sources of natural selection in their environments that subsequently feed back to select their own genes. However, relative to this second role of phenotypes in evolution, there is no requirement for the niche-constructing activities of phenotypes to result directly from naturally selected genes before they can influence the selection of genes in populations. Animal niche construction may depend on learning and other experiential factors, and in humans it may depend on cultural processes.
The Galápagos woodpecker finch provides a specific example (Alcock 1972). These birds create a woodpecker-like niche by learning to use a cactus spine or similar implement to peck for insects under bark (Tebbich et al. 2001). While true woodpeckers' (Picidae) bills are adaptive traits fashioned by natural selection for grubbing, the finch's capacity to use spines to grub for insects is not an adaptation. Rather, the finch, like countless other species, exploits a more general and flexible adaptation, namely, the capacity to learn, to develop the skills necessary to grub in environments that reliably contain cactus spines and similar implements. The finch's use of spines develops reliably as a consequence of its ability to interact with the environment in a manner that allows it to benefit from its own experience (Tebbich et al. 2001). Moreover, the finch's learning certainly opens up resources in the bird's environment that would be unavailable otherwise and is therefore an example of niche construction. This behavior probably created a stable selection pressure favoring a bill able to manipulate tools rather than the sharp, pointed bill and long tongue characteristic of woodpeckers. Since tool manipulation can depend in part on learning, there is a further twist to this example. Niche-constructing skills influenced by learning could modify natural selection in favor of an enhanced learning ability, and it would certainly be interesting to know whether the learning capabilities and their neural substrates in this species differ from those in closely related non-tool-using species. While the information acquired by individuals through ontogenetic processes cannot be inherited because it is erased when they die, processes such as learning can nonetheless still be of considerable importance to subsequent generations because learned knowledge can guide niche construction.
Beyond individual learning, a few species, including most vertebrates, have also evolved a capacity to learn from other individuals, and to transmit some of their own learned knowledge to others. The resulting "protocultural" processes may also underlie niche construction. An example is the spread of milk-bottle-top opening in a variety of British birds (Fisher and Hinde 1949; Hinde and Fisher 1951). These birds learned to peck open the foil cap on milk bottles and to drink the cream, and this behavior spread throughout Britain and into several other countries in Europe. Hinde and Fisher found that this behavior probably spread by local enhancement, where the birds' attention was drawn to the milk bottles by a feeding conspecific, and after this initial tip-off, they subsequently learned on their own how to open the tops. However, further analysis by Sherry and Galef (1984) revealed that, in addition to social learning by local enhancement, milk-bottle-top opening could be acquired by other means, for example, it could also spread if the birds were merely exposed to opened milk bottles, even if there were no other birds present and performing the opening behavior. In this example, the birds' niche-constructing behavior is propagated by local enhancement. However, by creating opened milk bottles, this niche construction biases the probability that other birds will learn to open bottles. Moreover, any selection acting on genetic variation at loci affected by milk-bottle opening would be modified in essentially the same manner as if genes were directly responsible for the behavior. For example, the niche construction might influence selection acting on the birds' learning capacities, foraging behavior, or digestive enzymes.
Acquired niche-constructing traits have almost certainly played a significant role in the evolution of hominids among whom cultural transmission processes are ubiquitous. In chapter 6 we will describe theoretical models that reveal circumstances under which cultural transmission can overwhelm natural selection, accelerate the rate at which a favored gene spreads, initiate novel evolutionary events, and trigger hominid speciation.
1.2.2 Implications for Ecology
The niche-construction outlook may also shed light on problems traditionally considered within the domain of ecology. This is largely because of ecosystem engineering, which modulates and partly controls the flow of energy, matter, and information through ecosystems. Genes that interact via niche construction's effects on an external environment do not always have to be in the same population. In later chapters we will demonstrate how genes in different populations may interact with each other via biotic and even abiotic components in the environment to form environmentally mediated genotypic associations (EMGAs). Such associations may, of course, be present within a population as well (Wolf et al. 1998).
If, in a single population, genetic variation is expressed in a niche-constructing phenotype that affects natural selection acting on other genes in the same population, then the population will merely codirect its own evolution through niche construction. However, if the niche construction modifies natural selection acting on genes in a second population, then the first population will now codirect the evolution of that second population. Conceivably, the induced change in the second population could feed back to the first population in the form of another modified natural selection pressure. The two populations would therefore coevolve through niche construction.
This coevolution could also be indirect. For instance, the first population's niche construction could influence the evolution of the second population by changing an intermediate component of their shared environment. An example here could be two species that are competing for the same environmental resource or nutrient and that coevolve because of this competition (DeAngelis 1992).
It may be possible to model many cases of coevolution by standard coevolutionary models, in terms of standard evolutionary ecology or genetics, without making any reference to either niche construction or ecological inheritance (Futuyma and Slatkin 1983; Thompson 1994; Heesterbeek and Roberts 1995; Abrams 1996). This is either because niche construction is already implicit in some of these standard models or because in many cases the explicit inclusion of niche construction would make no difference.
In some cases, however, for instance, where there is interspecific exploitative competition or where prey species share a common predator, niche construction cannot be omitted from formal analyses without distorting the processes involved, and in order to describe coevolution accurately it is necessary to treat niche construction as a process in its own right (Tilman 1982; Holt 1985; Abrams 1988; DeAngelis 1992; Holt et al. 1994). When the coevolution of populations is indirect and depends on the modification of an intervening environmental component by the niche-constructing phenotypes of either one or more coevolving populations, then the explicit inclusion of niche construction and ecological inheritance adds significantly to the models. This is especially likely to be true when the intermediate environmental component concerned is abiotic. For example, if niche construction resulting from a gene in a plant population causes the soil chemistry to change in such a way that the selection on genes in a second population of plants, or possibly of microorganisms, is also changed, then the first population's niche construction will drive the evolution of the second population simply by changing the physical state of the intervening abiotic environmental variable, in this case the soil. This kind of indirect coevolution via intermediate abiota is not well described by conventional population-genetic coevolutionary models for the simple reason that abiotic components are not alive, they do not carry genes, and they cannot evolve. While the demographics of such interspecific interactions, and some issues, such as the conditions for coexistence, are well captured by ecological models, the evolutionary ramifications are comparatively underexplored. Yet abiota are continuously subject to change by niche-constructing organisms (Jones et al. 1997), and any changes brought about through the activities of one population of organisms may easily serve as a legacy of modified natural selection for another. Thus adding niche construction and ecological inheritance to population-genetic coevolutionary models may make it possible to capture these interspecific interactions. As the dynamics of physical change in abiota are likely to be quite different from the dynamics of evolutionary change in populations, this kind of indirect feedback among co-evolving species via intermediate abiota may generate some interesting and as yet underexplored behavior in coevolutionary systems.
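As an illustration of this kind of indirect route, consider the following minimal sketch, in which the soil variable, the selection coefficients, and the decay and build-up rates are all assumed values chosen only for the example. Allele A in one plant population alters an abiotic soil variable; the state of the soil, not the first population itself, then determines the selection acting on allele B in a second population.

```python
# A minimal sketch of indirect coevolution via an abiotic intermediary.
# Allele A in plant population 1 alters a soil variable S; the state of the
# soil, not population 1 itself, then determines selection on allele B in
# population 2. All parameter values are illustrative assumptions.

def indirect_coevolution(generations=200, p_A=0.05, p_B=0.5,
                         decay=0.8, build=0.3, s_A=0.02, s_B=0.1):
    S = 0.0                                    # abiotic soil variable
    for t in range(generations):
        S = decay * S + build * p_A            # soil tracks population 1's niche construction
        w1, w2 = 1.0 + s_A, 1.0                # population 1: small direct advantage to A
        p_A = p_A * w1 / (p_A * w1 + (1 - p_A) * w2)
        w3, w4 = 1.0 + s_B * S, 1.0            # population 2: selection on B set by the soil state
        p_B = p_B * w3 / (p_B * w3 + (1 - p_B) * w4)
        if t % 50 == 0:
            print(f"generation {t:3d}: freq(A) = {p_A:.2f}, soil = {S:.2f}, freq(B) = {p_B:.2f}")

indirect_coevolution()
```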
Ecosystem engineering (Jones et al. 1994, 1997) further illustrates the utility of the niche-construction perspective. Jones et al. point to several ecosystem phenomena that cannot be understood in terms of energy and matter flows only. They stress the critical role played by the creation of physical structures and other modifications of their environments by organisms that partly control the distribution of resources for other species. Ecosystem engineering does not always conform to the principles of mass flow and the conservation of energy, nor to stoichiometry requirements, because ecosystem engineers are not necessarily part of these flows or cycles, but they can control them (Jones et al. 1997). We elaborate on this point in chapter 5. Gurney and Lawton (1996) have demonstrated theoretically how the efficacy with which niche construction acts to degrade a virgin habitat determines not only whether there will be no engineers, a stable population of engineers, or population cycles in the frequency of engineering, but also the extent of virgin and degraded habitat.
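The flavor of such results can be conveyed by a toy habitat-engineer model. The sketch below is far simpler than, and should not be mistaken for, Gurney and Lawton's model; the state variables, update rules, and parameter values are illustrative assumptions. It nonetheless shows how the outcome for an obligate engineer depends on the efficacy of its engineering: with the default rate the engineers establish and the habitat settles into a mix of virgin, engineered, and degraded patches, whereas with a much lower rate they fail to create enough habitat to persist.

```python
# A toy habitat-engineer model, loosely inspired by (and far simpler than)
# Gurney and Lawton's analysis. Habitat moves between virgin (V), engineered
# (E), and degraded (D) states; engineers N can only breed on habitat they
# themselves engineer from virgin habitat. Parameter values are illustrative.

def step(V, E, D, N, engineer_rate=5.0, decay=0.2, recover=0.1,
         births=0.5, deaths=0.05, crowding=0.2):
    made = min(V, engineer_rate * N)          # virgin habitat converted this step
    V_next = V - made + recover * D
    E_next = (1 - decay) * E + made
    D_next = (1 - recover) * D + decay * E
    N_next = N * (1 + births * E_next - deaths - crowding * N)
    return V_next, E_next, D_next, max(N_next, 0.0)

V, E, D, N = 1.0, 0.0, 0.0, 0.01              # almost entirely virgin habitat, few engineers
for t in range(201):
    if t % 40 == 0:
        print(f"t = {t:3d}: virgin = {V:.2f}, engineered = {E:.2f}, "
              f"degraded = {D:.2f}, engineers = {N:.2f}")
    V, E, D, N = step(V, E, D, N)
# With engineer_rate lowered to about 0.5, the engineers cannot create enough
# habitat to sustain themselves and the population dwindles toward extinction.
```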
Evolutionary phenomena associated with niche construction complement and add to Jones et al.'s observations of the ecological repercussions of engineering. For example, when they engineer, niche-constructing organisms frequently influence their own evolution by modifying their own selective environments, perhaps by changing abiotic components or chains of such components. Second, niche-constructing organisms also influence the evolution of other populations, again often indirectly via intermediate abiotic components. Third, some organisms create new niches for themselves, for example, through technological innovation or relocation to a novel environment, which again can influence the dynamics of their ecosystems. Fourth, evolutionary and coevolutionary events can operate on ecological time scales, which means that the dynamics of abiotic components may reflect gene frequency changes in evolving engineering species. However, these complications do not necessarily mean that ecological analyses become intractable, and in chapter 8 we describe empirical methods and theory that can be used to investigate the ecological ramifications of niche construction.
A niche-construction perspective might also promote a much closer integration between ecosystem-level ecology and evolutionary theory. Hitherto, it has proved difficult to apply evolutionary theory to ecosystems, or even to much reduced ecosystem modules, because of the presence of nonevolving abiota in ecosystems. However, the proposed extension of evolutionary theory, illustrated in figure 1.3b, is indifferent to whether any source of natural selection that is modified by niche construction is biotic or abiotic. In chapters 5 and 8 we will show how extending evolutionary theory along these lines allows abiotic ecosystem variables to be included in both evolutionary and coevolution models.
With the omission of niche construction, standard evolutionary theory underplays the full set of interactions that occur between biotic and abiotic components in ecosystems and ignores diverse forms of feedback that contribute to coevolutionary scenarios and ecosystem dynamics. This is one reason why it has hitherto been difficult to integrate process-functional and population-community ecology with each other and with standard evolutionary theory (O'Neill et al. 1986). When niche construction is incorporated, information (in the sense spelled out in chapter 4) can be seen to flow through ecosystems, and evolutionary control webs begin to emerge.
1.2.3 Implications for the Social Sciences
We shall also address the relationship between human cultural processes and human genetic evolution. At present, contemporary evolutionary theory provides a restricted basis for understanding how human cultural processes relate to human genetic processes in evolution (Laland et al. 1999). Most theory includes only one evolutionary inheritance system, genetic inheritance. It can therefore assign only one role to phenotypes in evolution, that of contributing to genetic inheritance through their differential survival and reproduction. The theory does concede that human cultural activities may influence or may actually be human adaptations, or be the result of other human adaptations, and that cultural processes may also influence human fitness, but it does not concede anything more. In effect, the assumed exclusiveness of the genetic inheritance system, as espoused by classical sociobiology (Wilson 1975), renders all the other consequences of human cultural activities evolutionarily irrelevant.
Niche construction extends contemporary evolutionary theory by the introduction of two liberating innovations. First, as we have already seen, niche construction assigns a second role to phenotypes in evolution, while ecological inheritance provides a second inheritance system to which phenotypes can potentially contribute. In chapter 6 we will see that ecological inheritance is likely to have been of paramount importance in human evolution, where material culture has played a number of roles. Second, there is no requirement for niche construction to result directly from genetic variation before it can influence the selection of genetic variation. For example, niche construction may depend on learning, as in the case of the woodpecker finch and British birds discussed above, and in humans niche construction may also depend on cultural processes. To cite one well-known example, when our ancestors first domesticated cattle by agricultural niche construction, they apparently modified a natural selection pressure on a gene that enables the enzyme lactase, needed for the digestion of milk, to be synthesized by human adults (Feldman and Cavalli-Sforza 1989; Durham 1991; Holden and Mace 1997). This demonstrates how cultural processes are not just a product of human genetic evolution, but also a cause of human genetic evolution. Adding niche construction and ecological inheritance to contemporary evolutionary theory may therefore improve our understanding of the relationship between human genetic and cultural processes.
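The logic of the lactase example can be sketched with a toy recursion. The fragment below is not the Feldman and Cavalli-Sforza (1989) model: it assumes, purely for illustration, that a cultural practice (dairying) spreads logistically by cultural transmission and that the selective advantage of a lactase-persistence allele is proportional to how widespread the practice is. The allele then sweeps only after, and because, the cultural practice has spread.

```python
# A toy gene-culture sketch (not the Feldman and Cavalli-Sforza 1989 model).
# A cultural practice (dairying) spreads logistically by cultural transmission,
# and the advantage of a lactase-persistence allele L is assumed, purely for
# illustration, to be proportional to how common the practice is.

def coevolve(generations=300, p0=0.01, c0=0.01, s=0.05, beta=0.08):
    p, c = p0, c0                         # allele frequency, frequency of dairying
    trajectory = []
    for t in range(generations):
        c = c + beta * c * (1 - c)        # cultural diffusion of the practice
        w_L, w_l = 1.0 + s * c, 1.0       # selection on L depends on the culture
        p = p * w_L / (p * w_L + (1 - p) * w_l)
        trajectory.append((t, p, c))
    return trajectory

for t, p, c in coevolve()[::60]:
    print(f"generation {t:3d}: dairying = {c:.2f}, freq(lactase persistence) = {p:.3f}")
```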
There have been two principal reasons why many human scientists have found it difficult to make use of evolutionary theory. One is that the theory appears to offer too little. Human scientists are predominantly interested in human behavior and cultural processes, rather than just genes, and as a consequence they see little useful point of contact with evolutionary theory. Our niche-construction framework may provide such a bridge because it emphasizes the active role that organisms play in the evolutionary process. Humans are not just passive vehicles for genes, they actively modify sources of natural selection in environments. They are the ultimate niche constructors. A second reason why human scientists have difficulty with evolution is the simplicity of adaptationist accounts. Adding niche construction inevitably makes evolutionary theory more complicated, and any extra complexity must prove worthwhile to those scholars for whom environmental effects and interactions between organisms and environments are the focus of study. The relevance of the niche-construction perspective to these issues is discussed in chapter 9, where we illustrate how our framework can apply in the human sciences, providing methods and making empirically testable predictions. Indeed, many social scientists have already started to use niche construction as a useful theoretical tool.
1.3 PREVIOUS APPROACHES
If niche construction has as many consequences and implications as those we have now listed, why has it not already been incorporated into contemporary evolutionary theory? There are some theoretical devices by which contemporary evolutionary theory deals with niche construction and we discuss these in chapter 3. Here it is more appropriate to introduce some of the early forerunners of the idea, both to indicate how long the concept of niche construction has been appearing in the margins of evolutionary theory and to show that, in spite of its frequent appearances, the concept itself has received surprisingly little attention from biologists.
Perhaps the first person to draw attention to the idea of niche construction in a clear way was not even a biologist, but a physicist, Schrödinger (1944, p. 108), who did so in a lecture "Mind and Matter" given at Cambridge in 1956, as a companion to his earlier and more famous "What is Life?" lecture. It may have been because he was not a biologist that Schrödinger was able to take the outsider's advantage of being able to discriminate between the forest and the trees more easily than those who are already in the forest.
The evolutionary biologist Ernst Mayr also made an early contribution with a much-cited quotation from his book Animal Species and Evolution:
A shift into a new niche or adaptive zone, is almost without exception, initiated by a change in behavior. The other adaptations to the new niche, particularly the structural ones, are acquired secondarily. With habitat and food selection--behavioral phenomena--playing a major role in the shift into new adaptive zones, the importance of behavior in initiating new evolutionary events is self-evident (Mayr 1963, p. 604).
In this passage Mayr is clearly drawing attention not just to the importance of behavior in evolution but also to how organisms can, in part, actively determine their own selective environments by niche-constructing-type activities, which then select for different structural adaptations. However, as Plotkin (1988) pointed out, having made this emphatic claim, Mayr himself did not follow it up. The idea was left, floating and unexploited.
Conrad Waddington (1959, 1969), another biologist, thought about niche construction in the same decade, but primarily in the context of organismal development, rather than for evolving populations. Waddington was also an early advocate of bringing developmental biology and evolutionary biology closer together, and it may have been this concern that drew his attention to the many ways in which organisms modify their own selective environments throughout their lives, by choosing and changing their own environmental niches. He called this phenotype-dependent component of both development and evolution "the exploitive system," and he pointed out that, as far as evolutionary theory was concerned, the exploitive system had originally been left out of the modern synthesis (Huxley 1942) and that it was still being left out by contemporary evolutionary theory. Once again, possibly because Waddington was a developmental rather than an evolutionary biologist, his concept of the exploitive system was not taken up.
The next important figure in this story was the Harvard population geneticist Richard Lewontin. In the 1970s and 1980s Lewontin wrote a series of articles on adaptation. For example, Gould's and Lewontin's (1979) influential article "The Spandrels of San Marco and the Panglossian Paradigm: A Critique of the Adaptationist Programme" made many biologists think again about adaptation. However, that part of Lewontin's attack on adaptationism that was based on niche construction proved much less influential, and it has drawn little response (Futuyma 1998). Even for those biologists who accepted that Lewontin was correct, it was not immediately clear what to do about it.
Writing at roughly the same time as Lewontin, although from a very different point of view, Richard Dawkins came up with a pragmatic partial solution to this puzzle. In his book The Extended Phenotype, Dawkins (1982) proposed that genes not only express phenotypes, but that some of them also express "extended phenotypes" that, through the activities of organisms, reach beyond the bodies of the organisms themselves to change various components of their selective environments. To cite just one of his examples, Dawkins argued that the lodges, lakes, and dams that are built by beavers are extended phenotypes of beaver genes.
As far as this argument goes it is obviously right, but it is also too restricted. For instance, Dawkins recognized that any genes that are expressed in an extended phenotype should affect the probability of the survival and reproduction of the organism that is carrying them, and therefore their own representation in the next generation. However, Dawkins did not consider that the same gene might also affect the fitness of other genotypes, at other genetic loci, by changing their selective environment. A beaver's dam modifies many selection pressures in the beaver environment, some of which are likely to feed back to affect the fitness of genes that are expressed in quite different traits, such as their teeth, tails, feeding behavior, susceptibility to predation, diseases, life-history strategies, and social systems. Similar limitations constrain almost all the other approaches to niche construction in contemporary evolutionary theory, which are discussed in more detail in chapter 3.
Aside from the advocates of niche construction that we have mentioned, a number of other researchers have pursued and in some cases continue to pursue related ideas (Levins and Lewontin 1985; Wilson 1985; West-Eberhart 1987; West et al. 1988; Bateson 1988; Plotkin 1988; Wcislo 1989; Holt and Gaines 1992; Michel and Moore 1995; Brandon and Antonovics 1996; Moore et al. 1997; Wolf et al. 1998; Oyama et al. 2001; Sterelny 2001; Jablonka 2001; Griffiths and Gray 2001). There were, and still are, other scientists who have resisted the idea, for a variety of different reasons. For the moment, we will introduce only the two principal reasons. The first and probably the most straightforward reason for rejecting niche construction is a belief that it does not exist. For example, both George Gaylord Simpson (1949) and Theodore Dobzhansky (1955) maintained that, humans aside, organisms either do not construct or regulate their niches to any significant degree, or their impact on their environments is invariably too weak, too transient, or too capricious to have any substantial effect on selection pressures. They argued that there are always other more potent independent agents in environments that invariably override the effects of the organisms themselves, thereby preventing organisms from influencing either their own natural selection or the natural selection of their successors. Ultimately, this is an empirical issue, but there is already sufficient evidence to show that organisms can, and indeed do, modify at least some of their own natural selection pressures with sufficient consistency to render this older critical position implausible (see chapter 2).
Originally, this kind of criticism may have stemmed from an intuition that the environment is so vast, and organisms are so small, that the capacity of organisms to change their environments must be negligible. This intuition overlooks two points. One is that natural selection is local--indeed, it is famous for being "myopic." Niche construction becomes an effective codirecting agent in evolution through the modification of local selection pressures. The second point is that, in spite of its local ramifications, because niche construction may be influenced by inherited genes and the same genes may be inherited for many generations, niche construction may sometimes generate some truly large-scale changes in the wider world through the accumulation of effects over long spans of time. The production of oxygen by photosynthetic organisms is a clear example.
Resistance to the idea of niche construction usually takes a different form today. From many personal communications, we have found that most contemporary biologists are prepared to admit that niche construction occurs, and that when it occurs it is bound to have some ecological consequences, but they may still doubt whether it has anything other than trivial evolutionary consequences. Advocates of this position typically maintain that it does not matter much whether natural selection pressures originate from niche-constructing organisms or from other independent sources in environments, as the process of evolution will still be the same. Others accept that sometimes niche construction does affect the process but argue that the effect is not great enough to require anything more than some ad hoc adjustments to contemporary evolutionary theory. Such protagonists would suggest that niche construction is not sufficiently consequential to justify the kind of major revision of evolutionary theory that we are proposing here (fig. 1.3b). In the subsequent chapters we expand on the consequences of niche consequences that have been badly underestimated in the past and are still being underestimated today.
1.4 A PRECIS OF SUBSEQUENT CHAPTERS
The remaining chapters in this book represent a summary of our attempt to begin to redress the neglect of niche construction as an evolutionary agent. In chapter 2 we begin with definitions of niche construction, ecological inheritance, and other important terminology. Chapter 2 also presents a systematic collation and categorization of examples of niche construction, as well as of traits that appear to have evolved as a consequence of selection pressures modified by niche construction. These empirical data illustrate the ubiquity of niche construction.
Chapter 3 discusses previous attempts to handle aspects of niche construction, including frequency- and density-dependent selection, habitat selection, coevolution, indirect genetic effects, maternal inheritance, and various other approaches. We show that, while each of these separate bodies of theory has features germane to niche construction, none of them captures all of the pertinent characteristics. Thus, aside from our own analyses, there has been no attempt to explore the evolutionary consequences of niche construction in a systematic and general manner. Nonetheless, findings from these disparate approaches strongly suggest that niche construction is likely to be an important evolutionary process.
Chapter 3 goes on to investigate the likely evolutionary consequences of niche construction by presenting theoretical population-genetic models that explicitly incorporate the process of niche construction into the evolutionary dynamic. If niche construction is as important an evolutionary process as we claim, then its inclusion should make a significant difference to the behavior of theoretical models and should generate some unusual and hitherto unpredicted dynamics. In the text of chapter 3 we describe the findings of our formal analyses, with all technical and mathematical details relegated to the appendixes. The results of these analyses clearly demonstrate that there are myriad ways by which niche construction is likely to have an evolutionary impact.
There is one prerequisite of evolutionary theory that is often taken for granted. Natural selection can obviously only work when it is fed with a continuous supply of organismal diversity. Superficially, however, organisms appear to violate the second law of thermodynamics merely by staying alive and reproducing, since this law dictates that net entropy always increases and that complex, concentrated stores of energy will inevitably break down. In chapter 4 we ask what characteristics any organism must have merely to live. Drawing from theoretical developments in physics and thermodynamics, which offer a description of the Maxwell's-demon-type properties any agent needs to drive a system out of equilibrium, we identify the universal properties that niche construction must have if organisms are not to violate physical laws. As some characteristics of niche construction are universal, it follows that some aspects of the impact that niche-constructing organisms have on their environments will also be universal. Moreover, we suggest that, like natural selection, niche construction is a selective process and that, distinct from other evolutionary processes (e.g., drift, mutation), it introduces directedness to the evolutionary process.
If there are universal and characteristic features of niche construction then it follows that the evolutionary process must have universal and characteristic impacts on the local environments of evolving species. This raises the possibility that niche construction may have implications for ecosystem-level ecology and that a niche-construction perspective may shed light on problems traditionally considered within the domain of ecology. We spell out these implications in chapter 5, where we draw heavily on the insights of ecosystem-engineering researchers. We also illustrate how, with niche construction, evolutionary theory can help describe ecosystem dynamics in spite of the fact that ecosystems include abiotic components. An extended evolutionary theory that takes account of how evolving organisms affect both biota and abiota can provide an integrative evolutionary framework for ecology.
In chapter 6 we address the repercussions of the niche-construction perspective for the human social sciences. A focus on niche construction has important implications for the relationship between genetic evolution and cultural processes. By integrating developments in niche construction and gene-culture coevolutionary theory and explicitly recognizing the guiding role of learning and cultural processes in the niche construction of complex organisms, we develop a new evolutionary framework for the human sciences. This conceptual model is designed to act as a hypothesis-generating framework around which human scientists can structure evolutionary approaches to their disciplines.
In the final section of chapter 6 we illustrate how aspects of this new evolutionary framework can be translated into formal models that illustrate how cultural niche construction may have driven genetic evolution throughout the last two million years. Many results characteristic of gene-based niche construction are also found for cultural niche construction, although cultural niche construction may well have been, and may continue to be, even more potent. Any bias in cultural transmission, or differences in the rate at which alternative behavior patterns are acquired, can increase the impact of niche construction over and above that resulting from genes. Where cultural transmission and natural selection conflict, there is a broad range of circumstances under which cultural transmission can overwhelm natural selection. This is one reason why maladaptive behavior is possible among humans (Cavalli-Sforza and Feldman 1981).
We maintain that these proposed extensions fundamentally alter evolutionary theory. If we are correct, then there should be a set of empirical predictions that would generate data consistent with the niche-construction perspective and inconsistent with more conventional evolutionary perspectives. We acknowledge that unless and until we, or others, generate data that are irreconcilable with conventional neo-Darwinism, or at least are more consistent with the niche-construction perspective, the revisions to evolutionary thinking that we suggest are unlikely to become accepted by the biological community. Consequently, in chapters 7-9 we describe how our hypotheses concerning the evolutionary role of niche construction may be tested and suggest methods for doing so.
Empirical methods and predictions for evolutionary biology, ecology, and the social sciences, respectively, are spelled out in chapters 7, 8, and 9. These methods range from experiments that investigate the consequences of canceling or enhancing a population's capacity for niche construction, to comparative analyses that explore the phylogeny of trait evolution across related species, to directly testing the predictions of our theoretical models. In these chapters we also suggest areas in which our perspective may stimulate empirical study. There is a rich array of possibilities for testing the evolutionary credentials of niche construction, and we hope that this new perspective will stimulate empirical research in the biological and social sciences.
Finally, chapter 10 integrates these findings to make the case that niche construction should be regarded as a significant evolutionary process in its own right; part of an "extended evolutionary theory." For readers without the time or inclination to read all the preceding chapters, this final statement summarizes the contents of the book and our overall argument. It describes why we believe not only that the niche-construction perspective is a more accurate depiction of the evolutionary process than the conventional view, but that it will eventually prove to be a more useful evolutionary framework. We suggest that niche construction is not just an important addition to current evolutionary theory; it requires a reformulation of evolutionary theory. When evolutionary biologists and researchers in related disciplines start using niche construction as a means of formulating hypotheses and generating insights in their fields, then niche construction will be seen to earn its keep.
Return to Book Description
File created: 8/7/2007
Questions and comments to: email@example.com
Princeton University Press | <urn:uuid:d80476f8-cdbb-421a-98a7-5585c15fb46f> | CC-MAIN-2016-26 | http://press.princeton.edu/chapters/i7691.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395548.53/warc/CC-MAIN-20160624154955-00075-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.943918 | 14,517 | 2.9375 | 3 |
Melbournes third electric tramway was not to suffer the fate of the first two closure. Instead, it was to become a key component of what is now the largest urban electric tramway in the English-speaking world, serving suburbs to the northwest of the CBD. This tramway was built, owned and operated by a private company, the North Melbourne Electric Tramways & Lighting Company.
Despite its name, the tramway run by this company was almost entirely within the municipalities of Essendon and Flemington.
The locality of Essendon first rose to prominence in 1851, as the first nights camp on the way to the Mt Alexander (Castlemaine) diggings, hence the name of Mt Alexander Road, the major road through the area until the coming of the Tullamarine Freeway.
Growth of Essendon was steady, which boded well for promoters of railways to the area. As a result, the Melbourne & Essendon Railway Company opened with a flourish on 22 October 1860. Unlike the private rail lines operating to the southeast of Melbourne, the company was not soundly based, as it had to lease its equipment from the government-owned Victorian Railways (VR). Furthermore almost half its income went to obtain running rights over VR metals to Spencer Street Station. The company collapsed less than four years later, closing on 1 July 1864.
Some three years later the assets were purchased by VR, but it was not to re-open until 1871, as part of the North Eastern line that ultimately terminated on the NSW border at Albury-Wodonga.
This salutary lesson was to linger in the minds of public transport promoters for many years. The area only had a limited horse omnibus operation between Flemington Bridge and Essendon Town Hall (Moonee Ponds), which connected with horse cab services to the city. Local residents viewed this situation as most unsatisfactory.
In the early 1880s the Melbourne Tramway & Omnibus Company (MTOC) proposed serving the growing metropolis with an extensive cable tramway system. Essendon residents actively lobbied the company to include the area in its plans.
However, the management of MTOC was much more hard-headed than that of the former railway company. MTOC was not about offering a public service, but making substantial profits for its investors. In order to achieve these profits, MTOC extensively analysed the revenue potential of any proposed tram routes, using revenue data on its existing horse bus services as a guide. Where this data was not available, it surveyed local residents and businesses as to their proposed usage.
Unfortunately for Essendon, MTOC determined that a cable tramway to Essendon would not be a paying proposition so the closest a line came to Essendon was Flemington Bridge, which was opened in 1890.
This was the period of the notorious land boom in Melbourne, and tramway schemes were two-a-penny. All of these schemes were aimed to increase profits of real estate promoters, and Essendon was to see two of these proposals, neither of which got off the ground.
Messrs Booth, Ellison & Company put forward the first effort in June 1888. This was to build an electric tramway from the MTOC terminus at Flemington Bridge to North Essendon along Mt Alexander Road and Buckley Street. Given the primitive state of electric tramway technology, this indeed was a courageous proposal. It should not be a surprise that this initiative disappeared without a trace.
The Essendon Land, Tramway & Investment Company was an altogether different affair. It wanted to build a tramway from Essendon railway station westwards along Buckley Street to the municipal boundary with Keilor, a distance of 1.5 miles. The Essendon Council endorsed the company, but to no avail. Unlike Messrs Booth et al, this proposal did not disappear without a trace, but instead collapsed in a spectacular fashion, taking the savings of many naïve investors with it.
The collapse of the land boom in 1891 and onset of depression put paid to such proposals for the next decade.
At the beginning of the twentieth century, electric tramways were buoyant in Western Australia. New systems were opened in Perth (1899), Kalgoorlie (1902), and Fremantle (1905). There was even an electric tramway in the mining town of Leonora (1908) with one solitary tramcar. All this activity was boosted by the booming Western Australian economy, fuelled by gold.
The Perth and Kalgoorlie companies had both been floated on the London stock exchange to raise the required funds for construction. A Perth-based venture capitalist, Alfred Edward Morgans, had done very well out of these floats, and was looking for profit-making opportunities in the eastern states of the new Commonwealth. Morgans dispatched an associate, a Mr Rodgers, to Melbourne in order to test the waters.
Rodgers original proposal was for a tramway from Port Melbourne to Beaumaris, but MTOC blocked this initiative as it refused to release certain unused rights it held to portions of the route. MTOC had no wish for a new-fangled competitor to challenge its monopoly.
However, all was not lost. During the changing of ends at a game of lawn bowls, Rodgers mentioned his frustrations to a resident of Flemington, Mr W. Pridham. Pridham suggested that the Essendon area would be perfect for such a scheme, and arranged for Rodgers to inspect the locality in company with the Mayor (Cr A.E. Young) and the Town Clerk (Mr W. Cattanach).
As a result of this visit, Rodgers produced a favourable report on the business potential of an Essendon tramway. Morgans was suitably impressed by the profit making potential of such an enterprise, and made the proposal to council to build two routes, from Flemington Bridge to North Essendon and Maribyrnong.
However, his business plan was not restricted to tramway operation. An essential part of the proposal was the generation and supply of electricity for lighting, to both business and retail customers, as well as the municipal council for street lighting.
This strategy was common in the early days of electric tramways. The major advantage was that it gave a company a ready form of cash receipts through tramway operation, whilst it built up its retail customer base. Furthermore, it also ensured that generator load was present during the day, and when retail demand increased during the evening for lighting, the electricity needs of the tramway were reduced due to less frequent services. This made effective use of the heavy capital investment required for generator plant, and maximised the profit potential of the undertaking.
Other adopters of this basic strategy included the Electricity Supply Company of Victoria tramways in Ballarat (1905) and Bendigo (1903), and the Melbourne Electricity Supply Company tramway in Geelong (1912).
Both Essendon and Flemington councils were strongly behind Morgans tramway system, and given that it was proposed to be carried out by private capital, it was thought that State Government approval would be a mere formality.
But they did not reckon with the strenuous opposition of Victorian Railways, which claimed that Morgans trams would skim the cream off its traffic. When it was pointed out that the planned routes were designed to feed railway stations at four points, the VR commissioners produced a previously unknown scheme for a motor omnibus service operating on the same routes. This pronouncement was greeted with howls of derision.
The railway commissioners were extremely influential with the Irvine ministry then in State government, which opposed Morgans at every opportunity. At a large protest meeting against the Irvine government, it was accused of pursuing a policy of monopoly, conservatism and stagnation…dog-in-the-manger spirit which sought to strangle private enterprise and aim cowardly blows at municipal activities…incapable of shaping the destinies of a progressive country.
By this stage the Irvine ministry was under extreme pressure, falling on 16 February 1904. A new government was formed under Thomas Bent as Premier, who gave Morgans a much better hearing.
However, the Metropolitan Gas Company raised objections against the plan, as it was worried that its gas mains would suffer from electrolysis as a result of the introduction of an electric tramway. While this was a natural concern, the Metropolitan Gas Company was probably desperate to retain market share against a company that would be selling clean, safe electric light.
In the interim, a referendum of ratepayers regarding the tramway proposal was held, resulting in an overwhelming victory for the pro-tram lobby (2,746 to 146). With this strong showing of local support, State Cabinet approval was rapidly obtained, and an Order-in-Council was issued on 4 May 1904 authorising the Councils of Essendon and Flemington to construct a tramway in their districts.
This action was very unusual, as it was normal practice for tramway and railway construction to require the passing of a bill by Parliament before it could proceed, rather than the mere issuing of an Order-in-Council. However, this disregard for due process was very typical of Thomas Bent, who was notorious for corrupt and manipulative behaviour throughout his political career.
The Councils delegated the authority conferred by the Order-in-Council to Morgans, who subsequently transferred his rights to the North Melbourne Electric Tramways & Lighting Company (NMETL), which was incorporated under the Companies Act of the United Kingdom to carry on business in Victoria. This company was floated on the London Stock Exchange with a capital base of £200,000. Former Victorian Premier and sitting MLA for Clunes and Allandale, Sir Alexander Peacock, was appointed as local Managing Director. It is very likely that Bent rushed through the unorthodox approval in order to obtain Peacocks continuing support for his government.
The NMETL franchise gave the company the rights to construct and operate electric tramlines on specific routes through the municipalities of Flemington and Essendon, and provide a municipal electricity supply for a period of thirty years. At the end of the franchise the companies assets and operations would revert to the municipalities, with the exception of the property on which the power station and tram depot was located, which was to be purchased from the NMETL by the councils.
In addition to this, the franchise gave the councils the option of buying out the company after ten, twenty or twenty-five years. The ten-year option was specified to coincide with the termination of the MTOC franchise of the cable tram system in 1915 (subsequently extended to 1916) when the cable trams were to be acquired by the Melbourne City Council and other municipalities. Even in 1905, it was forecast that the cable tram system would be converted to electric traction, and the councils wanted to ensure through running by the trams into the CBD was possible. The purchase price was to be cost price plus 4% for each year of operation.
An acre of land on the east side of Mt Alexander Road was purchased for the site of the power station, company offices and tram depot, near the intersection with South Street. This site was chosen as it was approximately midway between the Keilor Road and Saltwater River termini. As a result the potential voltage drop to the outermost termini would be minimised.
The foundation stone for the power station was laid at a lavish ceremony on 24 May 1905, in front of a large and fashionable gathering. The Mayors of Essendon (Cr Showers) and Flemington (Cr Raisbeck) officiated at this event, after which the officials retired to a sumptuous banquet held in two marquees.
A month later, in front of a large crowd, the Premier laid the first rail at the western end of Racecourse Road. In typical Bent style, he remarked that he regretted not having been a party to the venture as he was sure that the trams would pay handsomely.
The tracks were laid on the concrete stringer principle, using 90 lb per yard rails in 30 foot lengths. The rails were laid in trenches, supported to the correct height by temporary packing. Tie bars spaced 7 feet 6 inches apart held the rails to standard gauge, and rail joints were double bonded to ensure good negative return. Concrete was rammed in the trenches up to the height of the foot of the rail, and once it had set, the road surface was reinstated with conventional bluestone macadam.
Special work (points and crossings) was of toughened cast steel, and supplied by Hadfields of Sheffield and Lloyds of Darlaston.
Both centre poles and span poles, depending on location, supported the overhead. Poles were 30 feet long and were of either ironbark or grey box timber, set 6 feet into the ground. The exception to this was in Puckle Street and around the Essendon Town Hall where steel poles were used in lieu of timber.
The tram depot building was 200 feet long and 68 feet wide, had six roads and could store 28 tramcars under cover. The company offices were located in a modest two-storey building alongside the entry track to the depot fan.
The power station was a large brick building, 90 feet by 63 feet, adjacent to the tram depot. Designed by architects Ussher & Kemp of Melbourne , it contained three 360 hp steam engines manufactured by Browett & Lindley of Manchester. Steam was supplied by three Babcock & Wilcox water-tube boilers.
The engines provided direct drive at 350 rpm, powering three General Electric 250kW generators producing electricity at 350 volts. The British Thomson-Houston Company supplied controlling switchgear, consisting of three generator panels, two traction feeder panels, four lighting panels and one Board of Trade panel.
A fifty-foot high cooling tower was to the rear of the engine house, being used to condense exhaust steam from the engines and reduce overall water consumption.
The actual generating plant was already obsolete, as in 1904 a comparative trial in Newcastle (UK) between the best technology in reciprocating steam engine generating sets and turbine generating sets had shown the turbines to be more than 25% more fuel efficient than the four cylinder triple expansion compound engines used as comparison. However, Australia did not have the technical base to support and maintain the latest in steam turbine technology, so in retrospect use of reciprocating engines for the generating plant was a sound choice.
The NMETL tramway system was officially opened on 11 October 1906 with a series of ceremonies, finishing with a banquet in a large marquee, where the official guests enjoyed many toasts and endured the many speeches expected of such an occasion. The following day, local schoolchildren were treated to a free service, before normal services began on Saturday 13 October 1906 with all twenty-five tramcars . Huge crowds turned out on the first Sunday to enjoy the new form of transport, testing the lightly built trailer cars to the limit. The cars had to be withdrawn for strengthening before they could be used in traffic again.
Initially tram crews worked 60-hour weeks. There was no provision for meal breaks, and crews had to eat their meals on the cars. Remuneration for drivers was £2/6/6 per week, conductors receiving one shilling less. Furthermore, there was no protection for drivers on the open driving platforms at the end of the cars.
These oppressive working conditions were a recipe for industrial unrest. The NMETL suffered significant amounts of strike action over the years, until a more reasonable working environment was obtained. These struggles were key in the establishment of the Australian Tramways Employees Association (later the ATMOEA), and also led to a number of key decisions in Australia's industrial legal framework.
Despite this, the crews had a reputation for friendly service. Passengers could (quite illegally) travel on the end platform and chat to the driver. The schedules were not demanding, and trams would often wait for late passengers as they ran through the paddocks that made up much of the area. However, crews had to work hard at the terminus when hauling a trailer car, as shunting facilities were limited, and the trailer had to be moved by hand when changing ends.
The councils did not exercise their option to purchase the NMETL undertaking in 1915, due to a combination of factors, including the extension of the MTOC cable tram franchise by twelve months to 1916, and the difficulty in obtaining wartime finance to fund the purchase. Furthermore, there were substantial government proposals to create a single tramway operator for Melbourne as a result of the 1911 Royal Commission on the Railway and Tramway System of Melbourne and Suburbs. In this climate, the councils were not inclined to take a risk on purchasing the NMETL.
State Parliament passed two key bills in 1918. The first bill, the Melbourne & Metropolitan Tramways Board Act 1918 (No 2995), provided for the creation of the M&MTB. It also empowered the M&MTB to acquire the tramway from the Essendon and Flemington Councils, but specifically excluded electricity generation from the Boards activities, so the M&MTB could not acquire the entire operation of the NMETL.
The second bill created the framework for the State Electricity Commission of Victoria (SECV), which was to be the state monopoly electricity generator, distributor and retailer. It was authorised to acquire private electricity companies and incorporate them into its own operations.
The NMETL could see the writing on the wall, so it did not make any significant investment into the tramway infrastructure which in any case was only a small contributor to its revenue stream. However, it added a fourth engine and generator set into the engine house to cater for increased commercial load.
The only substantial investment in the tramway after the initial opening was the extension of the terminus at Flemington Bridge to be adjacent to the cable tram terminus. This work was authorised by the Flemington Road Tramways Act 1911 (No 2333), and the extension was opened on 27 August 1913. This was the only part of the tramway outside the municipalities of Flemington and Essendon, being in the Borough of Kensington. The objective of the extension was to remove a walk of around a hundred yards for passengers transferring between the cable trams and the NMETL electric trams.
After 1918 protracted negotiations were held between the NMETL, the Essendon and Flemington Councils, and the State Government, in an attempt to come to an agreement agreeable to all parties, as there was some contention over reaching an acceptable purchase price. What finally drove the purchase of NMETL through was Sir John Monashs actions to cement the position of the SECV as the prime power generator, distributor & retailer in Victoria.
Basically, the Essendon and Flemington Councils were unhappy with the electricity supply service provided by NMETL, and had been trying to replace it by the electricity supply business of the Melbourne City Council (MCC), but the MCC was not interested as it believed that significant profits would not be generated. Monash was facing resistance from the MCC over the SECV taking over state-wide generation & retailing, as the MCC was enjoying good profits from its own electricity supply business. Therefore Monash drove the acquisition of the NMETL by the SECV, but as he did not want the tramway part of the business it was spun off to the M&MTB, which then had to provide £31,250 of the total £110,000 purchase price as compensation. The government passed the North Melbourne Electric Tramways and Lighting Company Act (No 3247) authorizing the acquisition of the NMETL in its entirety from its British parent, taking effect as from 1 August 1922.
Monash’s objective was to use the NMETL acquisition as a weapon against the MCC, by supplying cheap power to Essendon ratepayers at substantially less than the MCC. This action was to strengthen his assertion that the MCC was overcharging its customers, and that the SECV would undercut these prices, expand the market and benefit a broader range of voters. Naturally, he received support from both the Essendon and Flemington Councils. The MCC tried to run a vilification campaign on Sir John in order to try to derail this initiative, but he was well known and respected for his impeccable business ethics. In any case, Sir John Monash was a bona fide Gallipoli veteran, and victorious commander of the Australian Imperial Force in the Great War, so his reputation was unassailable. Therefore, by using the threat of the NMETL electricity supply business to move into MCC supply areas as leverage, and by demonstrating the unjustified cost overheads in the MCC electricity business in great detail, Monash imposed a deal favourable to the SECV regarding municipal power retailing businesses. The result was a significant expansion of the powers of the SECV in another five Acts of Parliament. This arrangement in relation to municipal electricity supply businesses stood basically unchanged until the breakup and privatization of the SECV seven and a half decades later.
So the NMETL tramway acquisition by the M&MTB was just a footnote in the struggle to control the electricity industry in Victoria.
The NMETL tramcars were renumbered into the M&MTB sequence by adding 201 to the original number. The trailer cars were not renumbered, as the Board planned to replace them. However the M&MTB immediately banned the use of trailers after a serious accident when a crowded tramcar and trailer went out of control in Mount Alexander Road on 10 September 1923, injuring many of the passengers.
In the aftermath of the accident, the M&MTB fitted all the former NMETL tramcars with airbrakes and eighteen of the new W class tramcars (Nos 219-235 and 262) were rushed to Essendon Depot (which is actually in Ascot Vale) to supplement services as a result.
The Puckle Street line was closed on 12 January 1924, whilst the original Victoria Street connection between Mt Alexander Road and Racecourse Road was closed on 4 August 1929, being replaced by the current direct Racecourse Road line to Flemington Road. The long sort-after direct city connection opened in 1925, to William Street.
Despite this, the Essendon lines were still isolated from the rest of the M&MTB electric network, so a third track was laid parallel to the Elizabeth Street tramway as far as Victoria Street, where it diverged to connect with the main system in Swanston Street. This enabled the easy transfer of tramcars to and from Essendon Depot.
Over the years, the former NMETL lines underwent many improvements and extensions, but they remain a core part of the Melbourne network. The original depot building survives as roads 13-18 of the current Essendon Depot, whilst the company offices and power station are long gone.
Against all the odds, three NMETL tramcars survive today. NMETL cross-bench car 13 is owned by VicTrack and is in the care of the Friends of Hawthorn Tram Depot. The Tramway Museum Society of Victoria owns NMETL saloon car 4 and ballast trailer 24.
Brimson, S. (1983) The Tramways of Australia, Dreamweaver Books
Cross, N., Budd, D., and Wilson, R. (1993) Destination City (Fifth Edition), Transit Australia Publishing
Keating, J. D. (1970) Mind the Curve!, Melbourne University Press
M&MTB (1930) Melbourne & Metropolitan Tramways Board - Its Progress and Development 1919-1929
Perry, R. (2004) Monash: The Outsider Who Won A War, Random House Australia
Richardson, J. (1963) The Essendon Tramways, Traction Publications
Richardson, J. (1967) Destination Subiaco, Traction Publications
Richardson, J. and Kings, K.S. (1960) Destination City (Second Edition), Traction Publications
Sheard, H. (1972) The Melbourne Tramway & Omnibus Company Limited Running Journal June 1972, Tramway Museum Society of Victoria
Van Riemsdijk, J.T. and Brown, K. (1980) The Pictorial History of Steam Power, Octopus Books
Public Records Office Victoria Series VA 2974 North Melbourne Electric Tramways & Lighting Company
The Melbourne-based partnership of Ussher & Kemp had a major influence in development of the Federation style of domestic architecture, sometimes known as the domestic Queen Anne style. The firm was notable for designing a number of buildings across Victoria, besides their work on Essendon Depot, and several of these buildings have been placed on the Victorian Heritage Register. Some examples of their work are:
All the twenty-five tramcars of the NMETL were manufactured by J.G. Brill of Philadelphia, and imported in complete knock-down condition. The Adelaide firm of Duncan & Fraser assembled the single truck cars, which were of three types.
|Truck type||Brill 21E||Brill 21E||Brill 74T|
|Motors||2 x 45 hp (GE67)||2 x 45 hp (GE67)||-|
|Controllers||GE K10||GE K10||-|
|Weight||12.65 tons||10.32 tons||n.a.|
|Length||31 feet 11 inches||31 feet 10 inches||28 feet 6 inches|
The first of these to be withdrawn from passenger service were the trailer cars, in 1923. One of these was sold to the Geelong system, motorised and rebuilt into a scrubber car. Five were cut down into ballast trailers, and the remaining cars were scrapped. Airbrakes were fitted to the fifteen motorised cars over 1924-5.
The V class crossbench cars were withdrawn from passenger service in 1925 and used as locomotives by the M&MTB permanent way branch. In 1927 two of these cars were converted for other purposes 214 as a freight car (2A renumbered as 17 in 1934) and 216 as a ballast car (4A renumbered as 11 in 1934). The latter car was withdrawn and scrapped in 1948, whilst the freight car survived in service until 1977. The other cars were scrapped in 1928.
The saloon cars were fitted with windscreens prior to the M&MTB takeover, in order to give the drivers some scant protection in wet weather. Numbers 202, 208 and 211 were modified with a full length railroad clerestory roof. Some cars were fitted with Malvern type destination boxes, but all were converted to the standard M&MTB type, as well as being fitted with end platform doors. Five cars were withdrawn in 1929 (cars 203, 204, 207, 208, 210) but several of them lingered in the boneyard at Preston Workshops until 1945.
U class 202 was converted into service stock as a breakdown/freight car 19 in 1934 and scrapped four years later after an accident. In 1930 cars 205, 209 and 211 were fitted with bow collectors in place of trolley poles for use on the Holden Street shuttle, until they were withdrawn from passenger service in 1938 along with number 206. Numbers 209 and 211 saw no further service but joined the scrapped cars in the boneyard. Number 205 was converted to advertising car 19 the same year, whilst 206 replaced the former 202 as freight car until it was withdrawn in 1947 and scrapped in 1950. Number 19 then continued as freight car until its withdrawal in 1978. | <urn:uuid:6a5868ee-edcc-4f39-8728-ded1d5b23f37> | CC-MAIN-2016-26 | http://www.hawthorntramdepot.org.au/papers/nmetl.htm | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397795.31/warc/CC-MAIN-20160624154957-00093-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.975118 | 5,670 | 2.65625 | 3 |
What color is your pond water?
If you can see the bottom of your new pond after it has settled down for a few weeks, you likely have no problems with your water quality.
However, if your fish are dying, you likely have levels of chlorine and chloramines that are excessive. Chlorine and chloramines are toxic to fish. You should add New Pond to control these levels, every time new water is added to your pond. This treatment can also be used in existing pond water to reduce the chlorine and chloramines to a safe level.
All bodies of water go through what we call an "algae bloom" that will turn the water green. It’s a very natural occurrence that happens whenever the water heats up and there is enough "fuel" in the water to feed the algae. Mother Nature has her way of clearing the water. One day, after weeks of not being able to see your fish (much less the bottom of the pond), you may walk out and find that your pond is clear. The following is a list of things that you can do to help Mother Nature do her job.
- Sun / Shade
Algae, like most plants, need sunlight to survive. Most of us can’t move our pond to the shade, but there are ways to simulate shade. There are products available that color your water blue, such as Pond Shade. You can also create shade for your sunny pond by adding floating plants such as water hyacinths, water lettuce and waterlillies.
- Starve the Algae
Water plants, especially floaters and anacharis, compete with algae for nutrients in the water. The more plants you have, the more the algae starves and reproduces less. Stock up with plants. You may not want to use fertilizer in your plants until your ecological balance has been met.
Do not scrub the sides of your pond. The green coat that forms on the liner and on the sides and the bottom of the pond is beneficial to the pond itself. The jelly-like substance is algae that is packed with nitrifying bacteria. Nitrifying bacteria is paramount in order to limit the Ammonia levels in the pond. If you want to give your pond a thorough cleaning, start with the bottom of the pond, where parasites and bad bacteria usually forms.
Prevent Runoff by using plants that cover the entire ground surface. Runoff can carry harmful chemicals, pesticides and excessive amounts of phosphorus and nitrogen into waterways.
If your water is a white, milky color or cloudy you are probably experiencing a bacterial bloom. Nature Clear is a perfect remedy for this situation. It is important that you dose the water correctly and have plenty or aeration (if you have fish) because the coagulation that occurs (after Natural Clear is applied) will consume a great amount of the water’s dissolved oxygen.
Brown water indicates that there is floating dirt and particles in the water. Rotting leaves and debris create "tanning" of the water. There are three things that you can do to clear the brown water.
- Clean the Filter.
Don’t wash all the filter material with chlorinated water. Instead, take the least dirty pads and wash them with water from your pond (this keeps the good bacteria alive).
- Use a Water Clarifyer.
Applying the Natural Clear pond treatment can help. It binds minute particles in your water together and forces it to the bottom of the pond. Again, follow precautions and make sure that your system is highly aerated during the process. If in doubt, we have aeration equipment that you can rent for this reason.
- Vacuum the Pond.
Now you should be able to see the bottom of the pond and all the debris and trash you never knew existed. One of the ponder’s best tools is a shop vac or wet vac. Use it to vacuum the bottom and sides of the pond. Don’t scrub the slime off the sides. It’s beneficial to your eco-system.
Keep an Eye on the pH
pH affects all aquatic life, including plants. A stable pH of 6.8 to 8.0 is the best suited for pond fish. Concrete used in the waterfall or to hold rocks in place can leach into the water in new ponds, which causes the pH become too basic and rise above 8.0. pH changes throughout the day, so testing should be done about three times a day until the stability of the pH has been established. If the pH is still fluctuating, there are commercially available products to change and stabilize the pH. | <urn:uuid:b82377ac-2f41-4c8f-ba10-8537d7e4ee7c> | CC-MAIN-2016-26 | http://www.gardensupermart.com/articles/garden-pond-care.php | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783408840.13/warc/CC-MAIN-20160624155008-00183-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.936165 | 952 | 3.4375 | 3 |
Microwave radiation as a heat source for young livestock
|Paper title||Microwave radiation as a heat source for young livestock|
|Paper author(s)||L. A. Braithwaite, W. D. Morrison, L. Otten, Lius A. Bate|
|Conference||E., Collins, C., Boon|
|Abstract||Several types of equipment utilizing microwave (MW) radiation were developed to provide supplemental heat to young animals including broiler embryos and chicks, newly-weaned piglets and hypothermic piglets and lambs. Exposure of broilers to MW during incubation or brooding resulted in no change in growth, reproduction, or egg production, however, immunocompetence may have been altered. Newly-weaned piglets in a cool environment with supplemental MW performed similarly to piglets with infrared radiation (IR). Hypothermic piglets and lambs were rewarmed significantly faster with MW than IR and MW-rewarmed piglets exhibited no change in growth to weaning. This work indicates that MW may be a more efficient heat source for young animals..|
Using APA 6th Edition citation style.
Times viewed: 547 | <urn:uuid:0f84447d-527f-4ba7-8cad-c5a683eb4bbe> | CC-MAIN-2016-26 | http://www.islandscholar.ca/fedora/repository/ir%3Air-batch6-132 | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397797.77/warc/CC-MAIN-20160624154957-00090-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.908201 | 251 | 3.03125 | 3 |
Japan is introducing a new system of lay judges who will serve as judges on important cases brought to court. The idea is to more actively engage citizens in the operation of government and make them more aware of responsibilities of being a citizen in a modern society. Under the lay judge system to be introduced in May, 2009,. about half of people who are registered on the list of candidates for being a lay judge may wind up actually sitting as a judge in a court case. The actual number of cases to be tried before lay judges will be about 1,511. According to the process, each district court will pick by lot 50 to 100 people who will be summoned to court to be interviewed by a judge regarding their suitability to serve as a judge.
Some people who received the letter were somewhat surprised, but several said they were now more closely following news in the media in case they would up as a judge in one of the cases. This is an interesting experiment in justice and the results will be fascinating in terms does the prosecution or the defense find the new system more beneficial. | <urn:uuid:d1db1dbf-e958-402f-82a5-69a5929d7401> | CC-MAIN-2016-26 | http://theimpudentobserver.com/world-news/japan-experiments-with-lay-judge-system/ | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398873.39/warc/CC-MAIN-20160624154958-00003-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.985632 | 215 | 2.59375 | 3 |
The Unicode Standard, Version 2.0 (TUS2.0) provides different ways to encode accented characters, either decomposed (a combining character sequence [CCS]) or composed (as a single precomposed character). For example, the following are equivalent:
|Ã||A + ~|
The TUS2.0 specifies an algorithm for determining whether any two sequences of Unicode characters are canonical equivalent (see TUS2.0, pages 3-9 through 3-10). This algorithm basically decomposes any precomposed characters, then sorts them according to special rules, based on each character's combining class. This produces a normalized form.
Two common functions on Unicode text are to fully decompose the text (as far as possible), and to fully compose the text (as far as possible). In both cases, the correct result can only be achieved if the text is first converted to a normalized form.
The following describes mechanisms for composing and decomposing Unicode text that do not require fully normalizing the text, and yet produce the correct results. By avoiding the normalization phase, they represent significant performance advantages.
|Note:||In the following discussion, we will abbreviate the Unicode names for brevity. Thus LATIN CAPITAL LETTER G WITH BREVE will be represented as G-breve. A plus sign will be used to indicate a sequence of characters.|
The following discussion requires that the reader have first read Chapter 3 of TUS2.0.
The simple method for producing a normalized decomposed form is to replace each character by its decomposition, then normalize the entire string. However, this does more work than is necessary, especially in the common cases. The optimized method works as follows:
This method avoids bubble-sorting all of the combining marks in a string, and optimizes for the common cases:
Since you are guaranteed that the decomposition is already in normalized order, as each successive combining character is appended, it is bubble-sorted up in the decomposition. Since the sequence starts in normalized order, and after each successive character the result is in normalized order, then the final result is in normalized order.
|Ã`||A + ~||A + ~ + `|
|Ã.||A + ~||A + . + ~|
The simple method for producing a normalized composed form is to match each possible CCS against a database to see what matches, then replace the CCS with the result. However, this does more work than is necessary, especially in the common cases. The optimized method works as follows:
The following algorithm depends on the fact that except for one anomolous case, every CCS of length greater than two (which is canonical equivalent to a precomposed character) is also equivalent to a CCS of length exactly two. For example, C + cedilla + acute is equivalent to C-cedilla + acute, and C + acute + cedilla is equivalent to C-acute + cedilla.
Since all combinations of characters that could combine are in the mapping table, in every order that they could occur in, all the precomposed forms will be generated. Since we scan for illegal reversals, we eliminate non-canonical equivalents. At each point in this process, the result string contains a valid composition of the initial portion of the source string.
|Notes:||If we didn't scan the intervening combining characters, then we could end up with a sequence that is not canonically equivalent to the original. For example, consider the following sequence: G + acute + breve. If we didn't scan, then this would produce G-breve + acute, since G-breve is a precomposed Unicode character, but G-acute is not. When decomposed, this represents G + breve + acute, which is not canonically equivalent to the original string, since breve and acute have the same canonical class.|
|The one anomalous precomposed character does require a special case in this algorithm--for simplicity of presentation, this complication is omitted.|
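Purely as an illustration under the same assumptions, the sketch below composes a string that is already decomposed and in canonical order. The pair table is built here by inverting two-character canonical decompositions from unicodedata; a real implementation would also honor the Unicode composition-exclusion list, Hangul composition, and the anomalous case noted above, all of which are omitted for brevity.

    import unicodedata

    def build_pair_table():
        """Map (starter, combining character) pairs back to their precomposed form."""
        table = {}
        for cp in range(0x110000):
            mapping = unicodedata.decomposition(chr(cp))
            if mapping and not mapping.startswith("<"):
                parts = mapping.split()
                if len(parts) == 2:
                    pair = (chr(int(parts[0], 16)), chr(int(parts[1], 16)))
                    table[pair] = chr(cp)
        return table

    PAIRS = build_pair_table()

    def compose(normalized_text):
        out = []
        last_starter = -1   # index in `out` of the most recent class-0 character
        last_cc = 0         # combining class of the last character not composed away
        for ch in normalized_text:
            cc = unicodedata.combining(ch)
            if last_starter >= 0 and cc != 0 and last_cc < cc:
                # Not blocked by an intervening mark of equal or higher class:
                # try to combine this mark with the last starter.
                combined = PAIRS.get((out[last_starter], ch))
                if combined is not None:
                    out[last_starter] = combined
                    continue            # last_cc deliberately left unchanged
            out.append(ch)
            if cc == 0:
                last_starter = len(out) - 1
            last_cc = cc
        return "".join(out)

    # A + dot below + tilde (canonical order): the dot below composes with A,
    # while the tilde stays a combining mark because no precomposed form exists.
    print([hex(ord(c)) for c in compose("\u0041\u0323\u0303")])
    # ['0x1ea0', '0x303']

The blocking test (last_cc < cc) is what prevents the illegal reversal described in the note above: a mark can be folded into the starter only if no mark of equal or higher combining class stands between them.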
Kyphosis is a spinal deformity characterized by a rounding of the back. While some rounding of the back is normal, a kyphotic curve refers to an exaggerated rounding of more than 50 degrees. This condition is also referred to as a dowager's hump, round back (postural kyphosis), or hunchback. Patients may develop kyphosis as a congenital disease or as a result of:
Kyphosis can affect both children and adults and can start at any age.
At UPMC, we treat kyphosis with a combination of bracing and medication. In children and adolescents, the sooner the treatment begins, the more effective it will be in stopping the deformity. For severe cases, more aggressive treatment is required. Spinal fusion can be an effective treatment to reduce the degree of the curvature.
Most cases of kyphosis can be diagnosed during a physical exam. The doctor will ask the patient about his or her symptoms and medical history, and check for an abnormal curve in the spine, rounded shoulders, and a hump on the back.
Symptoms may include:
The doctor may order an x-ray of the spine to confirm a diagnosis of kyphosis. A pulmonary function test may also be ordered to measure how well the patient is able to breathe, since some severe cases of kyphosis can impair breathing.
At UPMC, we typically treat kyphosis with a combination of pain medication and bracing. Non-steroidal anti-inflammatory drugs (NSAIDs) may be given for pain, as well as medicine to treat any underlying conditions, such as osteoporosis. Braces can help correct kyphosis or reduce discomfort.
Surgery is reserved for severe cases of kyphosis. Surgeons will straighten the spine by fusing the backbones (vertebrae) together. This is done using bone grafts from the pelvis, or with a metal rod inserted into the spine to straighten it. If the kyphosis is caused by a compression fracture, it may be treated with special cement, which is injected into the affected vertebrae during procedures called vertebroplasty and kyphoplasty.
It’s been known for well over a century that different parts of the brain handle different tasks. This was certainly true for the autonomic functions, such as breathing and hormone activity, but it was also apparently true for higher-level functions such as speech and language. Two regions of the brain, Broca’s area and Wernicke’s area, are known to be necessary for speech production and language capacity. It’s long been thought that they have specific neuron patterns and nerve connections, which make them language specialists. In a similar way, for hearing, sight, smell, and touch it was thought that specific areas of the brain were involved – more or less exclusively. It’s that last bit, ‘more or less exclusively,’ that is now in doubt.
Research by Marina Bedny and colleagues at the Massachusetts Institute of Technology (USA) and published in the Proceedings of the National Academy of Sciences, 28 February 2011 [Language processing in the occipital cortex of congenitally blind adults] has shown that in people born blind, part of the visual cortex is converted to language processing.
It’s important to understand that this does not mean language and speech can be produced without Broca’s or Wernicke’s areas, but it does mean that other parts of the brain previously thought dedicated to a specific function can be marshaled for other purposes. The implications are many and important. This discovery, borne out by fMRI (functional magnetic resonance imaging) of brain areas in people born without sight, implies that even very complex, so-called higher-level functions such as language can be performed by ‘non-language-specialist’ regions of the brain.
In fact, the research group is conducting a follow-up study to see whether the additional brain cells acquired from the visual cortex may give blind people certain advantages in language processing. It’s a matter of common observation that people who have lost one of their senses tend to have one or more of the other senses strengthened. This was borne out by research, for example animal studies by Mriganka Sur (also at M.I.T.), in which brain regions were surgically rewired early in life and the brain cells eventually adapted to the new role. However, the Bedny study is the first to indicate the same thing can happen with more complex mental processes.
They found that was indeed the case — visual brain regions were sensitive to sentence structure and word meanings in the same way as classic language regions, Bedny says. “The idea that these brain regions could go from vision to language is just crazy,” she says. “It suggests that the intrinsic function of a brain area is constrained only loosely, and that experience can have really a big impact on the function of a piece of brain tissue.”
In short, this is another piece of evidence that the brain is more flexible than thought. The word is plastic, flexible. Under certain conditions, the brain can do extraordinary things by way of rerouting neuron circuits, changing or developing different neuron patterns, and coordinating previously unconnected regions. It’s almost ironic that the tool that has done the most to help neuroscientists isolate the functioning of brain regions, the fMRI, is now showing that the concept of dedicated brain regions needs something of a re-think. | <urn:uuid:aa915290-8b72-4813-b553-8d5472496da2> | CC-MAIN-2016-26 | http://scitechstory.com/2011/03/01/the-visual-cortex-can-learn-to-do-speech-and-language/ | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399117.38/warc/CC-MAIN-20160624154959-00127-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.96768 | 694 | 3.75 | 4 |
Epping, Randy Charles. A Beginner's Guide to the World Economy: Eighty-One Basic Economic Concepts That Will Change the Way You See the World. 3d ed. New York: Vintage Books, 2001.
Lindblom, Charles E., and David K. Cohen. Usable Knowledge: Social Science and Social Problem Solving. New Haven, Conn.: Yale University Press, 1979.
Shostak, Arthur B. Modern Social Reforms: Solving Today's Social Problems. New York: Macmillan, 1974.
Gordon, Stafford. Applying Economic Principles. New York: Glencoe/MacMillan McGraw Hill, 1994.
Krulik, Stephen, and Jesse A. Rudnick. The New Sourcebook for Teaching Reasoning and Problem Solving in Junior and Senior High School. Boston: Allyn and Bacon, 1996.
Prehn, Edward C. Teaching High School Economics: The Analytical Approach. 2d ed. Columbus, Ohio: McGraw Hill Textbooks, 1966.
The Economics Classroom: This site accompanies the Annenberg Media Social Studies and History video workshop and contains resources and lesson descriptions.
The Shell Island Dilemma: This site features an investigative problem-solving exercise for high school students, in which they must consider political, economic, and social issues.
Web Quests (Problem Solving Techniques): This Web quest addresses problem solving in a group context, focusing on community concerns and cooperation to find a solution.
Foundation for Teaching Economics: A companion to the FTE program, this site offers lesson plans, curriculum guides, and extensive resource information on economics.
Online High School Economics Lessons: The James Madison University site provides links to economics lesson plans for high school students.
National Council on Economic Education: Designed for teachers and students, the NCEE's site offers online lessons and reference material devoted to the study of economics.
The power to bring new hope to a sick child; to be able to free a dialysis patient; to offer another human being the most wonderful gift: organ transplantation, a gift of life. It is with this perception that we have to approach the transplantation of human organs. Nevertheless, despite the complicated moral and ethical issues, transplants have become a standard medical procedure to preserve the lives of thousands of otherwise hopeless patients throughout the world.
Organ transplantation, once an uncommon, often extraordinary event, has rightfully assumed a prominent position among routine procedures performed at most medical centers. Few endeavors in medicine enable the physician to profoundly influence change in a patient's physiology and thereby improve the quality of life. Rapid expansion of the scientific foundations of immunology and surgery, as they apply to transplantation, has contributed to the accelerated growth in this field. No other development in the history of medicine has had the conceptual and philosophic implications of organ transplantation. In all past times, the objective of physicians and surgeons faced with diseases of an specific organ system was to extract the last moment of function from a failing organ using medicines or with surgical procedures that often were poorly conceived, but brilliantly executed. When the function of a vital organ system reached a certain level, the whole body died even though all the other organ systems were without defect.
It is breathtaking to contemplate the departure from this “rear-guard” approach, which has been made possible through transplantation. With one bold stroke, health and life can be restored with considerable reliability and safety. The ability to provide these services has entered the consciousness of a new generation of interdisciplinary observers, including physicians, psychologists, and social workers, as well as patients.
At present, there is a sig | <urn:uuid:70ff0003-fc89-456b-88b5-4a515957ec9f> | CC-MAIN-2016-26 | http://www.exampleessays.com/viewpaper/90861.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783394605.61/warc/CC-MAIN-20160624154954-00149-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.965721 | 348 | 2.859375 | 3 |
Standardizing prescribing practices for single-fraction radiation therapy (SFRT) for palliation of bone metastases could lead to cost savings and improvement in patients' quality of life, according to a study published in the August 1, 2014 edition of the International Journal of Radiation Oncology • Biology • Physics (Red Journal), the official scientific journal of the American Society for Radiation Oncology (ASTRO).
Bone metastases are a common manifestation of distant spread of disease, occurring most frequently with prostate, breast and lung cancers. Of these patients, two-thirds develop bone metastases to the spine, pelvis or extremities. Radiation therapy is an effective form of palliative treatment for bone metastases. More than 25 randomized controlled trials have demonstrated that SFRT provides the same amount of pain control as multiple-fraction radiation therapy (MFRT); however, SFRT is used at low rates internationally for bone metastases.
"Use of Single- versus Multiple-Fraction Palliative Radiation Therapy for Bone Metastases: Population-Based Analysis of 16,898 Courses in a Canadian Province," is one of the largest, current studies on the use of SFRT. The study was designed to determine the use of SFRT in British Columbia, a publicly funded health care system where there is no financial incentive for extended fractionation and all radiation therapy is provided by the BC Cancer Agency with no direct cost to patients.
Patients who received palliative radiation therapy for bone metastases, regardless of the primary cancer site at diagnosis, from 2007 to 2011 were identified using the BC Cancer Agency's Cancer Agency Information System (CAIS). During the study period, 8,601 patients received 16,898 courses of radiation therapy. Patients who received re-irradiation for bone metastases were included, and patients who received more than one course of radiation therapy were considered independently for each course (patients could be counted more than once). Radiation therapy fractionation was divided into two categories: SFRT or MFRT. The most common primary disease site was breast (23.4 percent), and the most frequently treated bony metastatic site was the spine (42.2 percent).
SFRT was used to treat bone metastases in 49.2 percent (7,097) of the radiation therapy courses. SFRT was most commonly used to treat bone metastases that originated from hematological (56.6 percent) and prostate (56.1 percent) cancers; the most common bony metastatic sites treated with SFRT were the ribs (83 percent) and extremity (66.4 percent).
There was a significant variation in the use of SFRT by each of the five cancer centers operated by the BC Cancer Agency during the time of the study, with a range of 25.5 percent to 73.4 percent (p<.001). The study found that the overall utilization rate of SFRT in British Columbia is 49.2 percent, a rate consistent with other Canadian and European data that shows SFRT use ranges from 32 percent to 64 percent. SFRT use is much higher, however, than in the United States, where SFRT use ranges from only 3 percent to 13 percent.
"Previous research has shown that single-fraction radiation therapy is equally as effective as longer multiple-fraction courses. Single-fraction radiation therapy offers greater convenience for patients, is associated with fewer side effects and incurs a lower cost. Even a modest change in the frequency of single-fraction radiation therapy use, in Canada and America, could lead to meaningful cost-savings, improved patient convenience and reduced patient side effects, thereby increasing patients' quality of life," said Robert A. Olson, MD, MSc, lead author of the study, and the research and clinical trials lead and a radiation oncologist at the BC Cancer Agency Centre for the North. "As a result of discussing our study outcomes among radiation oncologists in British Columbia, we have already seen an increase in the use of single-fraction radiation therapy for bone metastases. We are hopeful that these results will motivate practice change worldwide."
“Moon to Moon to Mons”: Synergies for Moon and Mars development
by Al Anzaldua and Dave Dunlop
|The potential extraction of resources from Deimos regolith makes that moon a particularly tempting target.|
At a regional conference presentation in St. Louis on November 8, 2014, Schubert indicated that his proposed dust roaster, delivered on one mission to the lunar surface, could produce sufficient oxygen to combine with hydrogen for refueling its lander four times a year. Because the molecular weight of hydrogen relative to oxygen is 1:16, even hydrogen brought from Earth would be feasible. Schubert’s regolith separator would provide a pragmatic “lunar ferry” capability based on lunar in-situ production of resources for E-M L1/L2 locations. A number of such production facilities could provide ever more local oxygen production for life support and a higher flight rate.
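As a rough back-of-the-envelope illustration (the mixture ratio below is an assumed typical value for LOX/LH2 engines, not a figure from Schubert's presentation), only a small fraction of the propellant mass would need to come from Earth if the oxygen is produced locally:

    # Assumed figures for illustration only.
    mixture_ratio = 6.0          # kg of oxygen burned per kg of hydrogen (typical LOX/LH2 value)
    propellant_mass = 10_000.0   # kg, an arbitrary example load
    hydrogen_mass = propellant_mass / (1.0 + mixture_ratio)
    oxygen_mass = propellant_mass - hydrogen_mass
    print(f"hydrogen from Earth: {hydrogen_mass:.0f} kg (~{hydrogen_mass / propellant_mass:.0%}); "
          f"locally produced oxygen: {oxygen_mass:.0f} kg")
    # hydrogen from Earth: 1429 kg (~14%); locally produced oxygen: 8571 kg

In other words, at typical mixture ratios roughly six-sevenths of the propellant mass could be supplied as locally produced oxygen.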
The potential extraction of resources from Deimos regolith makes that moon a particularly tempting target. First, Deimos is easier to get to energetically from LEO than the lunar surface (Logan et al. 2015). Deimos is also easier to reach than Phobos, by a delta-V of 400 meters per second (Hopkins et al. 2011). At around 20,000 kilometers from the Martian surface, telerobotics from Deimos would be nearly real-time. Even better, because Deimos orbits just above Mars synchronous orbit (MSO), from the perspective of Deimos, Mars would appear to slowly rotate eastward at only 2.7 degrees per hour, thus offering a generous line-of-sight telerobotics time that is unavailable from Phobos. In fact, if several Deimos surface assets were placed at regularly spaced longitudes, Deimos-based human teleoperators could circulate westward or rotate control from one to the next and explore 24/7. Over a period of nearly five and a half days, the entire planet could be seen except for its extreme polar regions (Logan et al. 2015 and Hopkins et al. 2011).
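A quick check using only the numbers quoted above bears out the coverage figure:

    # Values from the text above.
    apparent_rate = 2.7                        # degrees per hour, Mars' apparent eastward drift seen from Deimos
    hours_full_rotation = 360.0 / apparent_rate
    print(f"{hours_full_rotation:.0f} hours, or {hours_full_rotation / 24:.1f} days")
    # 133 hours, or about 5.6 days -- in line with the roughly five-and-a-half days quoted above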
Deimos as a resource also goes far beyond being an ideal telerobotics platform. Deimos, measuring 15 x 12.2 x 10.4 kilometers, is much bigger than the near-Earth asteroids NASA is considering visiting at considerable expense. Yet escape velocity from Deimos would still be only five meters per second (Logan et al. 2015), making it a fuel-sparing staging platform for solar system transit. In addition, several meters of Deimos regolith between a Deimos-based crew and interplanetary space would provide shelter from cosmic radiation equivalent to the protection provided by the Earth's atmosphere at sea level.
Better still, although the origins of both Deimos and Phobos are yet unsettled, both appear to have the characteristics of dark carbonaceous asteroids, with anhydrous silicates, carbon, organic compounds, and ice (Bell et al. 1992). If this bears out, Deimos’ regolith would be able to provide water and other volatiles for life support and propellant. Besides silicates, its regolith will also likely contain metals and other valuable materials for construction and manufacturing (Norton 2002).
Simple heating in an enclosed environment could recover Deimos volatiles (Nichols 1993). For non-volatile material extraction, Schubert has developed an “Isotope Separator” with three patents for various configurations. One of these configurations could be the basis for a device to separate Deimos regolith into components for life support and fabrication (Schubert 2008). Other ideas and devices may also work well. Because the light intensity reaching the Mars PhD system is only around 44 percent as strong as the intensity reaching Earth's Moon, powering a Deimos regolith separator will be harder, but not impossible. Luckily, Deimos does not have a light-scattering atmosphere like Mars. Sufficiently large solar panels could therefore be constructed to power the regolith separator.
Deimos-sourced propellant could power reusable spacecraft (ferries and tankers) going to cislunar space from the Mars-Deimos-Phobos (Mars PhD) system and back; similar reusable spacecraft to Mars and back; and similar spacecraft bound for solar system destinations beyond Mars and back.
To sum up, the resources potentially provided by Deimos includes telerobotics, oxygen and water for life support, vehicle staging and fueling, shelter from cosmic radiation, and construction and manufacturing materials. Deimos would be the best ISRU site for a Mars Orbital Complex (MOC), Mars settlement, and further solar system expansion.
Effective Mars surface exploration will require a phased strategy that will include access to shelter and natural resources for life support and construction, while respecting both forward and back planetary protection. In this regard, the shield volcanos of the Tharsis Montes ridge (Arsia Mons, Pavonis Mons, Ascraeus Mons), as well as Olympus Mons and outlying Elysium Mons, might be excellent sites for initial Mars bases.
A topographical map of Mars, showing the line of volcanoes that comprise the Tharsis Montes ridge just left of center, and Olympus Mons to the northwest. (credit: NASA)
Olympus Mons and the Tharsis Montes volcanos, in particular, would provide relatively close access to shelter, natural resources, and equatorial sites. These basaltic shield volcanoes are laced with many cubic kilometers of lava tubes likely containing useful frozen volatiles (Wall 2012 & Bleacher 2011). Judging from lava tubes on Earth, the yields for science, life support, and infrastructure development on both the Moon and Mars will be significant. Moreover, our experience exploring the lava tube environments on the Moon (Redd 2014) will likely advance settlement strategies for Mars. The Moon could therefore be a major engineering onramp for the practical experience of exploring and modifying lava tube environments on Mars.
Olympus Mons is about 600 kilometers wide, and its summit reaches an altitude of 25 kilometers, where it is very cold. Nevertheless, the north and west sides of this majestic volcano plunge to below Mars zero-elevation, or datum, offering relatively warm sites for exploration. Moreover, near-surface ice is likely on the western side. (See map below.)
Dr. William Feldman of the Planetary Science Institute in 2012 analyzed data from NASA's Mars Odyssey Neutron Spectrometer and found evidence of massive amounts of water ice just beneath the surface (more than 4.5 percent water-equivalent hydrogen in the orange and red areas).
The typical atmospheric pressure at the top of Olympus Mons is about 12 percent of the average Martian surface pressure, which in turn is less than 1 percent Earth atmospheric pressure at sea level. Even so, high-altitude orographic clouds frequently drift over the Olympus Mons summit, and airborne Martian dust is still present (Hartmann 2003). Analogous to how our experience with lunar lava tubes will inform us about how to utilize Martian lava tubes, our experience dealing with dust on the Moon should inform us on dealing with dust on Mars.
|Occupying the “high ground” on Mars from any of the Mons volcanos would provide a springboard for human expansion into lower altitude sites of interest.|
The middle Tharsis volcano, Pavonis Mons, lies smack on the equator. However, because it also lies on the Tharsis bulge, its base remains at a high altitude. Pavonis Mons is the smallest of the Tharsis Montes volcanos. Yet it is still 367 kilometers across and its summit 14 kilometers high (Scott 1998 & Gazetteer). The summit experiences an atmospheric pressure of about 21% of Mars’ mean surface pressure. Data from the Mars Global Surveyor and Mars Odyssey spacecraft suggest that glaciers once existed on Pavonis Mons and significant amounts of near surface, equatorial ice may remain within the deposit today (Shean 2005).
Occupying the “high ground” on Mars from any of the Mons volcanos would provide a springboard for human expansion into lower altitude sites of interest, including the Echus Chasma, Valles Marineris and the lowlands west of Olympus Mons. In sum, Olympus Mons, Pavonis Mons, and the other Mons volcanos are potential base sites with readymade shelter from radiation, materials for fabrication and construction, and volatiles for propellant and life support.
Assuming we are able to successfully establish a permanent base on Earth's Moon and on Deimos, the conditions (in terms of near vacuum and high radiation) at the summit of the volcanoes should allow for infrastructure and life-support technologies similar to those previously demonstrated on the Moon. These technologies should include waste recycling to produce useful products, like fertilizer and fuels (Schubert 2011). In the latter case, for instance, researchers at NASA Ames Research Center have produced a small bioreactor that uses urine as a feedstock to produce, with the help of microbes, electricity, methane, and clean water (Verger 2014). Thermal management in these cold locations will also be critical.
With regard to mission sequencing, the appropriate volatile heater-extractors and element separator systems could begin robotically producing water and oxygen for life support and propellant, as well as metals and other fabrication materials, all before humans arrive. As on Deimos, however, a Schubert-type isotope separator on a Mons volcano will require a great deal of energy from very large solar arrays. Assuming the lack of nuclear power, fabrication of large arrays from silicon and other materials would therefore be a critical first step.
Reaching a volcano from a forward orbital base on Deimos would give us supporting infrastructure that can back up Martian surface operations. Surface operations can “abort to orbit” if needed, or conversely, a rescue from orbit could also be realized. The ability to support precursor surface missions with real-time telepresence from orbit will also be of great advantage in setting up surface accommodations for a sustained presence, whether on a volcano or elsewhere. Robotic and human exploration sorties will be easier and more effective after a Mars ferry system fueled and maintained with Deimos resources is operating.
|We do not argue that our proposed architecture will be the fastest way for humans to step on the Martian surface. We argue instead for sustainability and cooperation.|
There is still another advantage. Bases on the surface of Mars and in Mars orbit will need resupply of some Earth-sourced materials, especially in the beginning. There will also likely be a need to return materials such as soil samples or possibly even astrobiological samples to Earth-Moon Lagrange points, the lunar surface, or terrestrial labs for analysis. However, that will only be the beginning. The movement of people and materials beyond samples is inevitable, and complete protection from forward and back contamination will become impossible. Nevertheless, the planetary protection protocols that should really count for us are those that address risks to human and other terrestrial life. For this reason, the sequestration of potential lifeform samples arriving from Mars should be restricted to isolation within the life-hostile context of Deimos and the lunar surface. A volcanic summit base might provide an additional way at least to minimize forward and back contamination of life forms.
To underestimate the challenges of Moon and Mars settlement is to fail in overcoming them. Somehow Antarctica, with a relatively low profile, has for decades successfully drawn the kind of sustained support we need for the settlement of both the Moon and Mars. Apollo, although started with huge national effort, did not create a sustainable political constituency for the settlement of the Moon. Our government sold the Apollo program like a football game between the US and Soviet Union. This time around, space development advocates must sell “season tickets” to a sustainable Moon, Mars PhD, and solar system settlement program.
A long-term commitment to an Antarctic-style research station will not do. Nor is the simple goal of exploration sufficient. For self-sufficiency and sustainability, the selling of goods and services must be an integral part of the settlement mix. Tourism will undoubtedly be one of the first services companies will provide for profit, and the nascent space tourism industry has already taken off. Mining of water and mineral resources will be a close second. Eventually, refined goods and sophisticated services will also evolve. A “Moon to Moon to Mons” strategy could rapidly extend engineering, technological, and commercial advances from cislunar space to the Mars PhD system, greatly facilitating solar system development for the benefit of humankind.
NASA has plans to spend a considerable sum of money on its Asteroid Redirect Mission (ARM). Meanwhile, Deimos is an orbiting “platform” already in place and ready for staging, communications, and telepresence to explore and sustainably settle Mars. In addition, the vast resources on Deimos are likely similar to those found on carbonaceous asteroids, and Phobos is nearby for continued resource utilization. Deimos is relatively accessible compared to most other solar system sites. Methods and technologies for utilizing asteroidal resources could be tested and developed on Deimos, while we otherwise use that moon to enhance lunar and Martian infrastructure. In other words, a “Moon to Moon to Mons” campaign is just the ticket for developing asteroid utilization technologies, and in so doing, vastly accelerate sustainable solar system settlement. It is time to replace the ARM with a Moon to Moon to Mons strategy.
We do not argue that our proposed architecture will be the fastest way for humans to step on the Martian surface. We argue instead for sustainability and cooperation. We urge that various competing camps for destinations such as the Moon, Mars, asteroids, and orbital spaces drop their “us first” postures and see a compelling case unifying their priorities in the context of a Moon to Moon to Mons strategy. We believe that pioneers from Earth can use such a strategy to undertake a rational, cooperative, sustainable campaign for the expansion of human space settlement throughout the solar system.
Bleacher, J. E., Richardson, P. W., Garry, W. B., Zimbelman, J. R., Williams, D. A., Orr, T. R. (2011). Identifying Lava Tubes and Their Products on Olympus Mons, Mars, and Implications for Planetary Exploration. 42nd Lunar and Planetary Science Conference 2011.
Gazetteer of Planetary Nomenclature. IAU Working Group for Planetary Nomenclature. Pavonis Mons.
Hartmann, W. K. (2003). A Travelers Guide to Mars: The Mysterious Landscapes of the Red Planet.
Hopkins, J. B. & Pratt, W. D. (2011). Comparison of Deimos and Phobos as Destinations for Human Exploration and Identification of Preferred Landing Sites. AIAA 2011 Conference and Exposition, September 27–29, 2011.
Logan, J. S. & Adamo, D. R. (2014). Destination Deimos, Part I. The Space Review, November 3, 2014; Destination Deimos, Part II. The Space Review, November 10, 2014.
Milestones to Space Settlement: An NSS Roadmap Milestone 15: Creation of a Logistics System for Transporting Humans and Cargo to the Martian Surface. Ad Astra. Spring 2014, pgs. M17 – M19.
Nichols, C.R. (1993). Volatile Products from Carbonaceous Asteroids. Bose Corporation.
Norton, R. O. (2002). The Cambridge Encyclopedia of Meteorites. Cambridge: Cambridge University Press. p. 139. ISBN 0-521-62143-7
Redd, N.T. (2014). Home, Sweet, Moon Cave: Astronauts Could Live in Lunar Pits. SPACE.com, July 23, 2014.
Scott, D.H., Dohm, J.M., Zimbelman, J.R. (1998) Geologic Maps of Pavonis Mons, Mars. USGS, I-2561.
Schubert, P. J. (2008). US Patent No.: US 7,462,820 B2. December 9, 2008.
Schubert, P. J., Williams, J., Bundorf, T., Di Sciullo Jones, A. P. (2010). Advances in Extraction of Oxygen and Silicon from Lunar Regolith, AIAA SPACE 2010 Conference and Exposition, August 30 –September 2, 2010, Anaheim, California.
Schubert, P. J. (2011). Dual Use Technologies for Self-Sufficient Settlements: From the Ground Up. International Space Development Conference (ISDC) 2011 in Huntsville, Alabama.
Schrunk, D., Sharpe, B., Cooper, B. L., & Thangavelu, M. (2007). The Moon: Future Resources and Settlement, August 14, 2007.
Shean, D.E., Head, J.W., Marchant, D.R. (2005). Origin and Evolution of a Cold-Based Tropical Mountain Glacier on Mars: the Pavonis Mons Fan-Shaped Deposit. Journal of Geophysical Research. Volume 110, Issue E5, May 2005.
Spudis, P. (2010). The Four Flavors of Water. AirSpaceMag.com. May 2, 2010.
Spudis P. D. and Lavoie A.R. (2011). Using the Resources of the Moon to Create a Permanent Cislunar Space Faring System. Space 2011 Conference and Exposition, American Institute of Aeronautics and Astronautics, Long Beach CA, AIAA 2011-7185, 24 pp. See also, Spudis P.D. (2011). The Moon: Port of Entry to Cislunar Space. In Toward a Theory of Space Power: Selected Essays, C.D. Lutes and P.L. Hays, eds., Institute for National Strategic Studies, National Defense University, Washington DC, Chapter 12.
Verger, Rob (2014). Recycling on Mars. Newsweek, May 22, 2014.
Wall, M. (2012). Mars Cave-Exploration Mission Entices Scientists. SPACE.com, November 20, 2012.
Zubrin, Robert, The Case for Mars, 1996. | <urn:uuid:54e0a5a1-dbe9-446a-a5b7-47af7f463883> | CC-MAIN-2016-26 | http://www.thespacereview.com/article/2725/1 | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398209.20/warc/CC-MAIN-20160624154958-00018-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.885919 | 3,825 | 2.953125 | 3 |
Harvesting and Storing Pumpkins, Winter Squash, and Gourds
Pumpkins: Halloween pumpkins are harvested September through October. Sometimes harvesting may start in mid August to early September, which requires good handling and storage of the pumpkin fruit before selling to the customers in late October. The first frost occurs in early to mid October in northern parts of the state, when the pumpkin fruits are still curing outside in the fields. Growers in pick-your-own pumpkin operations use this method to ensure that pumpkins are well cured in the field before being picked up by their customers. Some growers practicing conventional pumpkin marketing systems, where the fruit is picked, washed, dried and sold to customers on a weight or per-fruit basis, also use this method. It is important to note that pumpkin fruits can tolerate a light frost that kills the vines only, but more fruit loss can occur if the frost causes injury on the fruit surface, as the damaged areas act as avenues for fungal and bacterial fruit rot pathogens. Remove pumpkins from the fields before a hard freeze (when night temperatures are less than 27 degrees F) or else you may risk losing 80-90 percent of the crop.
The pumpkin fruit is harvested when it is uniformly orange and the rind is hard. Green immature fruits may ripen during the curing process but not after the vines are killed by frost. The vines need to be dry when fruits are mature. Handle the fruit with care to avoid cuts and bruises. Harvest the fruit by cutting it off the vine with a sharp knife or a pair of lopping shears, leaving 3-6 inches of the stem attached to the fruit. This makes the fruit look more attractive and less likely to be attacked by fruit rot pathogens at the point of stem attachment. Do not carry the pumpkin fruit by its stem, because the fruit is very heavy and the stem may detach. Wash the fruit with soapy water containing one part of chlorine bleach to ten parts of water to remove the soil and kill the pathogens on the surface of the fruit. Make sure the fruits are well dried before setting them in a shed to cure.
Pumpkin fruits are cured at 80-85F and 80-85 percent relative humidity for 10 days. This is done to prolong the post-harvest life of the pumpkin fruit, because during this process the fruit skin hardens, wounds heal and immature fruit ripens. After curing, the fruits can be sold to the customers and the remaining fruits stored.

Store the fruits in a cool dry place. Put the fruits in a single layer on wooden pallets with enough space in between (the fruits should not touch each other) and do not place them on a concrete floor. Improve the air circulation within the storage area by letting in cool air at night and using a fan to circulate air during the daytime. Do not let warm air from outside into the storage during the daytime. The optimal storage condition is a temperature of 50-55F and relative humidity of 50-70 percent. The relative humidity is very important within the 50-70 percent range because high humidity leads to settling of moisture on fruit surfaces, which increases decay of the fruit, and low relative humidity may cause dehydration of the fruit. Under these conditions you can keep the fruits for about 2-3 months. Store the fruits away from apples, since apples produce ethylene gas as they ripen, which speeds up the ripening process in pumpkins and hence decreases shelf life. Check the fruits regularly and remove the ones that are rotten because, if not removed, they will spread the pathogens in the storage area.
Winter squash such as Butternut, Acorn, Hubbard, and other types are mature when the skin (rind) is hard and cannot be punctured by a thumbnail. The mature fruit has a dull, dry skin compared to the shiny, smooth skin of immature fruits. Remove the stem completely from Hubbard types or, if desired, leave only a 1-inch stump on the fruit. Stems longer than 1 inch tend to puncture adjacent fruits in transit or storage. Butternut, Hubbard and other squash types do not need to be cured, as the benefits are less than for pumpkins, while curing is very detrimental in Acorn types as it leads to a decline in quality. Acorn types have the shortest storage time of 5-8 weeks at 50F and relative humidity of 50-75 percent. Butternut, Turban, and Buttercup types can be stored at the same temperature and relative humidity as Acorn types but have a longer storage time of 2-3 months. The Hubbard types can be stored much longer than the rest (5-6 months) at 50-55F and relative humidity of 70-75 percent. Winter squash should be marketed or used immediately when taken out of storage to avoid development of fruit rot diseases.
Gourds come in different colors, shapes and sizes. They should be harvested before frost, when the fruit is mature. As gourds mature, stems turn brown and become dry. Don't use the "thumbnail" test on gourds, as it can dent the shell of an unripe gourd and lower its quality. Harvest the fruit by using a sharp knife or shears to cut the stem from the vine, leaving a few inches of the stem attached to the fruit. Do not handle the gourd by its stem, since the stem can easily detach from the fruit and lower its decorative value. If the fruit is dirty, wash it in soapy water to remove soil and rinse in clean water with household bleach (one part bleach to 10 parts water) to kill soil-borne pathogens. Then dry each fruit with a soft cloth. Spread the fruits so that they do not touch each other on shelves lined with newspapers in a well-aerated shed. Turn the gourds daily and change damp newspapers for 1 week. The outer skin will harden during this time and surface color develops. The gourds then need to be wiped with a damp cloth soaked in household disinfectant and placed in a warm, dry, dark area for 3-4 weeks for further curing. The decorative gourd can stay in its natural state for 3-4 months, and as long as six months with a protective coat of paint or wax on the surface.
Chinese researchers alter embryo DNA: Do results cross ethical tripwires?
The study has added urgency to recent calls from scientists, ethicists, and leaders in the biotech industry to take immediate, serious steps globally to weigh the legal, ethical, and social implications of manipulating DNA within heritable cells.
For the first time, scientists have altered genes in human embryos and allowed them to briefly develop.
The research crosses a boundary that scientists have avoided since the dawn of genetic engineering 42 years ago. It marks the first time researchers have attempted to modify the genetic makeup of a human embryo in ways that would allow the genetic change to be passed to succeeding generations.
For some researchers working to develop gene therapies, in principle such genome editing could be used to combat inherited diseases. But to others, it also could lead to attempts to genetically enhance humans in ways that could last for generations with unpredictable results.
Once again, humanity is faced with the fundamental question of “whether humans should have this degree of control over their own physical futures,” says Alta Charo, a professor of law and bioethics at the University of Wisconsin at Madison. “We’re hitting the point where people are asking: Do we really want to have the power not just to select among the choices given to use by nature, but to create entirely new choices of our own specification?”
The ethical concerns surrounding genetic manipulation of reproductive cells are so weighty that many scientists and bioethicists are urging colleagues to begin substantive conversations with colleagues, biotech-industry leaders, regulators, interest groups, and the public to figure out what clinical uses, if any, would be acceptable. Others, including some in the biotech industry, argue for a moratorium on any germ-line research involving human reproductive cells.
One measure of the ethical minefield into which the Chinese team marched: Two of the most high-visibility science journals, Science and Nature, refused to publish the results on ethical grounds, the project’s lead scientist, Junjiu Huang, explained to reporters from Nature's news division.
Many of the initial concerns in the scientific community involve safety and efficacy for a tool they see as having potentially powerful therapeutic applications. The Chinese team shares this concern and cites its own results as evidence that CRISPR-Cas9 is nowhere near ready for the clinic.
"That type of use of the technology needs to be on hold pending a broader societal discussion of the scientific and ethical issues surrounding such use," writes Jennifer Doudna, a molecular biologist at the University of California at Berkeley and a member of the team that developed the tool, in an e-mail.
The tool is designed to cleave DNA, the molecule that carries the basic instructions covering the formation and functions of most organisms. The ability to intentionally snip a segment of DNA and replace it has been around for four decades, but in more cumbersome forms.
The new tool, first described in the journal Science in 2012, combines small strands of RNA, molecular middlemen influential in a range of genetic processes, with an enzyme effective at snipping DNA.
This new system is unique because the enzyme, Cas9, is delivered to a specific sequence of DNA "with something really easy to generate – this guide RNA," which researchers can program to hunt for the DNA segments they are interested in cleaving, explains Kate O'Connor-Giles, a molecular biologist at the University of Wisconsin at Madison.
This relative ease of production and use has led labs around the world to quickly adopt the tool for genetic research, particularly related to disease. But it also has brought the potential for conducting experiments in so-called germ-line engineering within the range of even modest laboratories that at least in some countries can fly under the regulatory radar.
“The technology is getting ahead of the ethical and regulatory discussions,” Dr. O’Connor-Giles says. “This is evolving rapidly because it's so easy to use. But it's easy to use in the sense of making something happen. It's not easy to use necessarily in the sense of making what you want to have happen.”
The new study helps make that point. Dr. Huang’s team at Sun Yat-sen University started with 86 fertilized but defective eggs from a fertility clinic. They used defective eggs that couldn’t have resulted in live births because of ethical concerns surrounding the use of normal embryos, the researchers explained.
Seventy-one of the resulting embryos survived the introduction of the CRISPR-Cas9 package, which was aimed at a gene researchers have associated with a serious blood condition that can be fatal. They also introduced a molecular template for repairing the sequence they aimed to snip.
Of the 71, the team analyzed 54 embryos; 28 showed evidence that the CRISPR-Cas9 package had snipped the DNA. Seven of the 28 showed evidence that they had undergone some form of repair, but only four used the template the team introduced. The researchers also detected what they interpret as so-called off-target activity – genetic mutations that the CRISPR-Cas9 package had triggered elsewhere along the genome and whose numbers could well be underestimated.
Those inadvertent mutations underscore concerns that even a single edit could set into motion an unpredictable series of events that could, in theory, be worse than the problem being targeted.
In 1975, when scientists and ethicists gathered in Asilomar, Calif., to recommend boundaries to recombinant DNA research, most of the concerns centered on the risk of altered organisms escaping into the environment. Even then, however, some were envisioning the possibility of treating disease by repairing genetic defects associated with a disease.
In one sense, CRISPR-Cas9 and the Chinese experiment "is an attempt to do what gene-transfer folks have been trying to do all along, but to do it better," says Erik Parens, a senior scholar at The Hastings Center, a bioethics-research institution in Garrison, N.Y.
The new tool for editing genomes is valuable for basic research with model organisms, such as fruit flies, to understand how genes, individually and in groups, function and the interplay between various components of a genome, notes the University of Wisconsin's O'Connor-Giles. That work lays a foundation for understanding genetic mechanisms in more complex organisms, including humans.
Beyond basic research, scientists also see the new tool as useful for genetic therapies whose repairs cannot be passed from one generation to the next.
But when it comes to germ-line editing, in the eyes of many, what didn't seem possible until now “all of a sudden is possible,” Dr. Parens says.
In addition to the issue of introducing unintended mutations, stability remains a question: If a new gene is introduced, how likely is it to mutate at a pace faster than the one it replaced?
The sheer complexity of the human genome, along with the many factors beyond the presence of a particular DNA segment that influence its expression, presents its own challenge.
Yet for many people, safety is important but misses what they see as a bigger picture. Increasing control could breed hubris, leading to applications of gene editing that are either inadvertently harmful or harmful by design, ethicists say.
For others, the unease is likely to be deeply rooted in religious convictions or in a reverence for natural order that deems human intervention as contamination.
Whatever the mix, even those in the scientific community are raising the caution flag. In March, two groups published calls to begin a serious general conversation on germ-line editing now.
Writing in the journal Science, a group of scientists and bioethicists that includes Charo as well as Nobel Prize winners David Baltimore and Paul Berg – who also played key roles in the 1975 Asilomar meeting – offered several recommendations. These range from strongly discouraging work on clinical applications of germ-line editing, particularly in countries with lax regulations, to conducting the needed studies with a high degree of transparency, using "human and non-human model systems" to resolve questions about what, if any, clinical applications might be acceptable.
A second group, writing in the same week's issue of Nature, called for a halt to research on editing genes in reproductive cells.
"What we've called for is a moratorium on research in fertilized human embryos such that we don't perfect and publish on the Internet these techniques before there's an opportunity to have a robust discussion about whether there is any circumstance where we as a human species would think this makes sense," says Edward Lanphier, president and CEO of Sangamo Bioscience in Richmond, Calif., and chairman of the Alliance for Regenerative Medicine in Washington.
Once the germ-line Rubicon is crossed, the group noted, even clearly identified therapeutic uses could eventually lead to the use of germ-line editing to enhance humans, rather than treat them for a disease.
Outside of the science community and the biotech industry, CRISPR-Cas9's potential impact has been flying under the radar, says George Annas, who heads the department of health law, bioethics, and human rights at Boston University.
But now that it’s appearing on the radar screen, researchers, ethicists, and the public are going to have to confront the issues it raises sooner rather than later, he suggests.
“If we keep waiting until it can be done, is that too late?” Dr. Annas says. “It would be nice if we could all agree on a line we don't want to cross, knowing that when we get to that line, we’ll revisit it again.” | <urn:uuid:8c8e1490-fa5c-47d5-bc9f-47c4efec45ba> | CC-MAIN-2016-26 | http://www.csmonitor.com/Science/2015/0424/Chinese-researchers-alter-embryo-DNA-Do-results-cross-ethical-tripwires | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395560.69/warc/CC-MAIN-20160624154955-00139-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.948547 | 1,994 | 2.5625 | 3 |
orion coloring scribes
S.G. took issue with a comment in one of my recent posts, suggesting that I
may have responded without thinking. Well, that's possible. But I continue
to maintain that it is patently absurd to suggest that scribes colored their
texts with different inks.
First, to drag in late talmudic or midrashic material is a false methodology.
When S.G. suggests that the later Rabbis were concerned with ink color it
does not mean that such a concern existed in the first couple of centuries
BCE. In fact, there is no evidence at all regarding such concerns.
Second, and most important, if we assume that S.G. and Crowder are correct,
we must posit scribes sitting around with different ink bowls around them.
We must further posit that these same scribes would choose these colors for
some purpose. What might that purpose be? Why would a scribe want to color
his text? Are we now to suppose that the Qumran mss are the earliest
examples of medieval illustrated manuscripts?
In short, it is just silly to suggest that scribes in Palestine in the era
of our concern would do such a thing. As an earlier poster pointed out, it
is not just the letters that are colored - so are some portions of the
leather. It's time for common sense to return to this debate. Those who
wish to see scribal coloring practices are now obliged to give some reason
for it. Or desist.
Jim West, ThD
Quartz Hill School of Theology | <urn:uuid:7283c82c-20b8-4b54-9aed-96b10a3c3062> | CC-MAIN-2016-26 | http://orion.mscc.huji.ac.il/orion/archives/1998a/msg00465.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395620.9/warc/CC-MAIN-20160624154955-00018-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.949867 | 358 | 2.625 | 3 |