The History Show, Sunday 28 April 2013

Marriage in Medieval Ireland, by Gillian Kenny

In Gaelic Ireland during the medieval period, the laws and practices surrounding marriage were secular in nature. People didn't get married in church: marriage was a civil contract between two individuals (representing their family interests), it was celebrated with a feast, and then that was that. Divorce was common, illegitimacy was an unknown concept (there was no idea of bastardy as it would have been understood elsewhere), and the lawyers tried to make allowances for all types of unions, listing nine different types of legal union which people could engage in. These lawyers formulated the legal code known as the Brehon Laws, which were first committed to paper in about the 8th century and were adhered to for the rest of the Middle Ages.

Women could control the wealth they brought with them to their marital contract, and this enabled married women to have a real say in how family life and finances were managed. Some women's wealth consisted of soldiers, and there are accounts of women leading those soldiers into battle. Other women patronised cultural pursuits such as poetry and became noted builders of churches. Women could act quite independently as wives, which is remarkable considering the times. Women were also crucial in building up alliances with other families by becoming loving foster mothers to children whom they took into their households. Fosterage was very common in medieval Ireland, and the word 'Mammy' originally referred not to the birth mother (the máthair) but to one's foster mother (the muimme, whence the modern 'Mammy'), with whom many children had extremely close relationships.

Of course, later medieval Ireland was split between two cultures: the 'English' of Ireland and the Gaelic Irish. It is also worth pointing out that, as far as I can tell, many people in 'English' Ireland also got married outside church.
These were called 'clandestine' marriages and were, in fact, backed by the Catholic Church. You could marry in front of witnesses, but even they weren't strictly necessary, and no ceremony in church was needed. According to my research, this appears to have remained extremely common until the later sixteenth century. The sacramentalisation of marriage in Ireland on a popular scale is a relatively recent phenomenon: for century after century, people engaged in marriage with no reference at all to the Church. Of course, all this complexity began to come to an end during the 17th century, as Gaelic Ireland was wiped out as a viable and strong culture within Ireland.

Listowel Military Weekend Events

Listowel Military Weekend aims to commemorate Irish people who served in wars throughout history, most notably the American Civil War, both World Wars, and UN peacekeeping missions. The main event of the weekend is the German 'invasion' of the town on the Saturday, followed by Listowel's 'liberation' on the Sunday by American troops.

Programme of Events: Listowel Military, Agricultural & Vintage Weekend, May Bank Holiday, 3rd-6th May.

Friday May 3rd
Meet and greet for participants at The Listowel Arms Hotel, 7pm. Followed by a lecture at 8pm by Mark McShane, author of "Neutral Shores: Ireland and the Battle of the Atlantic". Venue: The Seanchai Centre.

Saturday May 4th
Military exhibits and stalls in The Square from 12pm. 3pm: invasion and occupation by German forces in The Square. 4pm: parade from Market Street to St John's Arts Centre, The Square, by members of the Irish Army and various veterans' associations, where a memorial plaque will be unveiled honouring Irish men and women who gave their lives in the line of duty. Wreath-laying ceremony by the ambassadors of France and Belgium and veterans' associations. Brief prayer service and Coast Guard helicopter flyover. Reception afterwards in the Listowel Arms Hotel. The Irish Military Vehicle Group will have a display of vehicles in The Square.
Living History display by American Civil War re-enactors; a great photo opportunity for all the family.

Sunday May 5th
12pm to 5pm: Farmers' Market and stalls, Living History display in The Square. Scale Model & Diorama Display in The Listowel Arms Hotel, 12pm-5pm. 3pm: liberation battle to free Listowel, with over 50 re-enactors using blank rounds, smoke bombs and flash bombs. A loud afternoon in Listowel. 4pm: live music and pig on the spit in The Square. 9pm-1am: Hangar Dance in The Listowel Arms Hotel, with music by The Bombshell Belles from the UK. This is a 1940s-themed dance with prizes for the best dressed lady, best dressed man, and best dressed couple. Vintage and military clothing optional.

Monday May 6th
From 11am: Military Living History Display in The Square. Agricultural machinery (new and vintage) display in Market Street from 11am to 6pm. Vintage Car Rally, gathering in The Square at 11am and leaving to tour the villages of North Kerry, returning to The Square at approximately 1pm; cars will be on display till 6pm. Children's attractions will include a bouncing castle, slide, zorb ball in a pool, pet farm display, face painting, and various stalls. BBQ and music in The Square to finish off what we hope will be a great weekend of fun for all the family.
Source: http://www.rte.ie/radio1/the-history-show/programmes/2013/0428/385889-the-history-show-sunday-28-april-2013/?clipid=1065350
Drawing a Blank (October 14, 2004) Hi-res TIF image (5.0M). What is remarkable about this image of the Sun (Oct. 11, 2004)? Nothing at all. And that precisely is the point. For the first time in almost six years, the Sun was blank. No sunspots. Not one. Of course, it only remained that way for a day or two before one popped up. But it is a strong reminder that, day by day, the Sun is inexorably approaching the minimum period of activity in its 11-year solar cycle. The phase of the solar cycle is most usually determined from the average number of sunspots, and here is a case in point. The actual solar minimum will occur around 2006-07; then activity will begin its slow climb back up again. Fewer sunspots translate into fewer solar storms and less "space weather" in general, and thus fewer opportunities for seeing aurora.

SOHO began its Weekly Pick some time ago by sending a weekly image or video clip to the American Museum of Natural History (Rose Center) in New York City. There, the SOHO Weekly Pick is displayed with some annotations on a large plasma display. If your institution would also like to receive the Weekly Pick from us for display (usually in Photoshop or QuickTime format), please send your inquiry to firstname.lastname@example.org.
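The "average number of sunspots" used to date solar minima such as the 2006-07 one is conventionally a 13-month running mean of monthly sunspot numbers, with half weight on the two end months. A minimal Python sketch of that smoothing (the function name and input format are my own; only the weighting convention is standard):

```python
def smoothed_ssn(monthly, i):
    """13-month smoothed sunspot number centred on month i.

    `monthly` is a list of monthly mean sunspot numbers. The two
    end months of the 13-month window get half weight, and the
    total is divided by 12 -- the convention used to date solar
    minima and maxima.
    """
    window = monthly[i - 6:i + 7]
    return (window[0] / 2 + sum(window[1:12]) + window[12] / 2) / 12

# Example: a constant series of 100 smooths to 100.
# smoothed_ssn([100.0] * 13, 6) -> 100.0
```

A blank day like the one pictured drags the monthly mean down only slightly; it is the smoothed series sinking toward zero that marks the actual minimum.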
Source: http://sohowww.nascom.nasa.gov/pickoftheweek/old/14oct2004/
The state’s contract is with a California-based renewable energy company called Green Planet Power Solutions. It plans to locate the chicken-litter plant in Federalsburg, in Caroline County. The governor’s office said it hopes the initiative will save the state between $53 million and $80 million over the course of the 15-year contract period. The governor’s statement also said construction of the plant would create 200 construction jobs and 24 permanent ones, as well as reduce nitrogen runoff into the Chesapeake Bay by 230,000 pounds annually. The state advertised in late 2011 that it would be accepting proposals for animal-waste-to-energy initiatives. In order to qualify, “The successful supplier must have an electric generating capacity of up to 10MW from animal waste – such as poultry litter or livestock manure – and must be directly connected to the regional electricity grid. The selected supplier must begin providing electricity to the State by December 31, 2015.” (Wash. Post, 1/25/2013)
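The quoted savings range can be turned into a rough per-year figure with simple division. An illustrative back-of-envelope sketch (the per-year split is my own arithmetic, not a figure from the contract):

```python
# Average annual savings implied by the $53M-$80M total quoted
# over the 15-year contract term (illustrative only).
low_total, high_total, years = 53e6, 80e6, 15
low_per_year = low_total / years    # about $3.5 million per year
high_per_year = high_total / years  # about $5.3 million per year
```

So the state's projection works out to roughly $3.5-5.3 million in savings per year of the contract, before any construction-job or runoff benefits.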
Source: http://cenvironment.blogspot.com/2013/01/chicken-litter-to-power-maryland.html
Energy conservation is a cornerstone of Kendall-Jackson’s sustainability program. Electricity production is responsible for 35% of all U.S. greenhouse gas (GHG) emissions. That means generating power contributes nearly twice as much pollution as all the cars, planes and other forms of transportation in the world. North Americans alone consume 2.3 trillion pounds of coal each year to generate electricity, which is 18 pounds per day per person. Or to put it another way, a single train carrying that much coal would be 102,000 miles long and reach almost halfway to the moon. (Check out Chris Jordan’s website and click on the photo to get a better sense of scale.) I’m not making this stuff up – it’s real. Now don’t get me wrong, I’m not the doom-and-gloom type; I’d rather spend my time trying to find solutions to problems. But I came across this info and it kind of blew my doors off, so I thought I might share it.

Here at Kendall-Jackson we’ve followed a three-step progression to most effectively save energy across all of our operations. It’s kind of like the reduce, reuse, recycle motto you might know for waste materials, but this one’s for energy:

1. Conserve by simply eliminating wasted energy – turn off office equipment at night and use cold water instead of hot.
2. Optimize equipment by updating our processes to best use the efficient gear we have, then retrofit old, inefficient equipment with new technology.
3. Produce renewable energy on-site or buy it from third parties.

You might think the first thing to do is to put some solar panels on the roof. Turns out, that’s the last project we should work on, after we’ve found every opportunity to reduce the amount of energy we use. This way we’ll need fewer solar panels, in turn requiring fewer natural resources to build those solar panels. So, we started with conservation and optimization.
We performed detailed audits of our buildings, equipment and processes to find every opportunity to save energy; we then recommissioned and retrofitted our buildings. Recommissioning – or fine-tuning – our equipment and processes essentially sets the efficiency clock back, saving energy without having to buy lots of new gear. Over time our wineries drift from how they were originally designed and built; after 10, 15 or even 20 years, almost nothing is operating the way it was originally designed. We feel pretty good about our early results: we’re now reducing our GHG production by over 6,600 tons per year. Check out this chart to see where we made the most significant energy savings.
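The per-person coal figure quoted at the top of this post checks out arithmetically, assuming a North American population of roughly 350 million (the population number is my assumption, not the author's):

```python
# Sanity-check of the "18 pounds per day per person" coal figure.
# The 2.3 trillion lb/year total is from the text; the ~350 million
# North American population is an assumed round number.
coal_lb_per_year = 2.3e12
population = 350e6
lb_per_person_per_day = coal_lb_per_year / population / 365
# -> roughly 18 lb per person per day, matching the text
```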
Source: http://blog.kj.com/energy-crunch/
19 Oct, 10 | by BMJ Group

At the recent European Health Forum Gastein, a group of “young Gasteiners” blogged live from the talks. A selection of the blogs are on the BMJ blogsite. Tessa Richards, assistant editor, BMJ, also attended the conference. You can read her blog and introduction to the “young Gasteiners” here.

Health literacy is a big challenge in Europe nowadays. It is the capacity of people to meet the complex demands of health in modern society. So why is it so important? Research has shown that people with low health literacy are less knowledgeable about the importance of preventive health measures, that they have a higher risk of hospital admission, and that the additional costs of limited health literacy amount to 3-5% of total health costs per year. In this forum the pilot results of the European Health Literacy Survey (HLS-EU) were presented. The HLS-EU is a project which will measure health literacy in various European regions and create awareness of its societal and political impact in Europe. In this pilot study 99 people were interviewed. The results presented suggest that the health information provided by family doctors is the most trusted by respondents: it is the best way to pass the message, and respondents are best informed about physical activity and nutrition. The main conclusions of the pilot results are that there are differences between younger and older generations; that the ability to access, understand, appraise, and apply information is related to life events; and that people are confident in terms of managing their health and navigating the system, but place less trust in authorities as health advisors.

Key messages from speakers at the forum on health literacy:
- “Recognizing health literacy as a public health goal.” (K. Sorensen)
- “HLS-EU helps establish the issue of health literacy in Europe www.health-literacy.eu.” (G. Doyle)
- “Health literacy is something that saves the next generations. I really believe in the importance of this.” (A. Parvanova)
- “Health literacy is health promotion by another name (health education) plus empowerment.” (J. Wills)
- “Health literacy has universal relevance in prevention, health promotion, and patient-centred chronic disease management.” (N. Bedlington)
- “Health literacy should be a real priority at EU level – a comprehensive strategy.” (N. Bedlington)

Can the promotion of health literacy really help to create a better future in Europe?

Ana Rita Pedro is a research assistant in the Department of Policies and Health Administration at the Portuguese School of Public Health.

You can read more blogs from the Young Gasteiners on the BMJ blogsite. The rest can be read on www.ehfg.org
Source: http://blogs.bmj.com/bmj/2010/10/19/ana-rita-pedro-on-health-literacy-in-europe/
Question: "What is the Nazarene Church, and what do Nazarenes believe?" Answer: The Church of the Nazarene is a denomination in the Wesleyan-Holiness tradition. Their roots go back to the teachings of John Wesley, as well as to various elements of the Holiness movement of the 19th century. Today there are about 1.8 million members in the Church of the Nazarene, making it the largest of the Holiness movement denominations. The full history of the Nazarene Church is woven with threads from many sources, but the primary ones will be identified here. In 1895, Phineas Bresee and others formed a church in Los Angeles, California, which they named the Church of the Nazarene. This church was organized as the “first of a denomination that preached the reality of entire sanctification received through faith in Christ.” Entire sanctification means that the soul of the believer attains such a work of the Spirit that it loses its desire to participate in acts of sin. Most churches within the Methodist movement taught some form of this doctrine, but none had formalized it and made it a distinctive of their organization. In 1907, sensing a need to draw together the various independent groups that were part of the Holiness movement, a general conference was called, which resulted in the merger of the Church of the Nazarene and the Association of Pentecostal Churches of America. The merged body was named the Pentecostal Church of the Nazarene. In 1908, the Holiness Association of Texas, the Pennsylvania Conference of Holiness Christian Churches, and the Holiness Church of Christ were merged in. The Pentecostal Church of Scotland and the Pentecostal Mission joined in 1915. In 1919, the church dropped the name “Pentecostal” because of the rise of the modern Pentecostal tongues movement. From the very beginning, the focus of the Nazarene Church has been personal holiness for believers. 
According to the church website, their goal is that all believers “experience a deeper level of life in which there is victory over sin, power to witness and serve, and a richer fellowship with God, all through the filling of the Holy Spirit.” In contrast to the modern Pentecostal tongues movement, which teaches that the evidence of Spirit baptism is speaking in tongues, the Nazarene Church teaches that the evidence of Spirit baptism is the fruit of the Spirit (Galatians 5:22-23). Following the Arminian doctrine of Wesley, the Nazarene Church believes that a person can renounce or walk away from his/her saving relationship with God, and, therefore, Nazarenes have no assurance of salvation. As a result, there is a real emphasis on working to maintain that right relationship with God. The Church of the Nazarene is an evangelistic, missions-minded body that takes its relationship with God seriously and desires to share that reality with the world around it.
Source: http://gotquestions.org/Printer/Nazarenes-PF.html
Common name(s): Longan witches’ broom
Host(s): Dimocarpus longana (longan) (DOA, 2003b; Qui, 1941); Litchi chinensis (lychee) (Chen I., 1996), but questioned by AQSIQ (2003).
Plant part(s) affected: Flower, leaf, seed, shoot (Chen et al., 2001; Menzel et al., 1989).
Distribution: Brazil, China (Guangdong, Guangxi, Hainan, Hong Kong), Taiwan, Thailand (Kitajima et al., 1986; Koizumi et al., 1995; Menzel et al., 1989; So and Zee, 1972; Zhu et al., 1994).

Longan: The earliest description of this disease on longan is by Qui (1941). Young leaves of infected shoots are small and light green in colour, with curved margins. They appear stunted and deformed, and tend to roll up rather than expand (Zhang and Zhang, 1999). Adult leaves are light yellow-green with marbled yellow spots and brown veins. Leaves form blisters and become distorted and dry before falling off (Menzel et al., 1989; Zhang and Zhang, 1999). Shoots on infected branches become compacted clusters, and the inflorescences are unable to extend. The flower organs develop abnormally and, consequently, the flowers either fail to produce fruit or develop into small, empty fruit. A characteristic symptom of the disease is the loss of flowers from panicles, resulting in a ‘broom-like’ appearance of the inflorescences (Menzel et al., 1989). In Thailand there are reports of fine light green hairs forming an erineum on both sides of affected leaves; Aceria dimocarpi mites reside inside the erineum mass (Visitpanich et al., 1996). Different cultivars of longan vary in their sensitivity to the disease (So and Zee, 1972). Although the causal organism appears to be systemic, not all branches of an infected tree show symptoms of the disease (So and Zee, 1972). Symptoms of longan witches’ broom resemble those described for lychee witches’ broom (Chen et al., 1996; Koizumi, 1995).
A study conducted in Hong Kong revealed that disease symptoms were more frequent on younger trees (10-25 years) than on older trees (30 years) (So and Zee, 1972). However, there is disagreement in the available literature as to whether longan witches’ broom is caused by a virus (Chen et al., 1996; Chen et al., 2001; So and Zee, 1972; Ye et al., 1990), a mycoplasma-like organism (MLO) (Menzel et al., 1989), or a mite (He et al., 2001). Several studies indicate that the causal agent of the disease is viral. So and Zee (1972) carried out electron microscopy of ultrathin sections from diseased leaves and found filamentous particles measuring about 12 nm in diameter and about 1000 nm in length. The virus particles seemed to be restricted to the sieve tubes, and in mature sieve tubes appeared to be closely associated with the plasmalemma and the cell wall. They were rarely present in the lumen of the sieve tubes and were never seen in non-infected tissues. These virus particles seldom occurred singly, but usually in clusters. Ye et al. (1990) partially purified a filamentous virus from the leaves and bark of infected longan trees and reported filamentous virions with a diameter of about 15 nm and a length of 300-2,500 nm, most being 700-1,300 nm long. Longan witches’ broom virions from diseased trees were also detected by means of an enzyme-linked immunosorbent assay (ELISA). Chen et al. (1996) likewise found filamentous virus particles in leaf phloem cells of infected plants. Using the immunosorbent electron microscopy (ISEM) technique, filamentous viral particles were trapped from extract preparations of diseased plant materials and from the salivary glands of Cornegenapsylla sinica (longan psylla) and Tessaratoma papillosa (litchi stink bug) (Chen et al., 1994). From these results, Chen et al. (2000) concluded that the disease is caused by a filamentous virus.
Since no photos of the virus were available and the experimental results were not replicated, the existence of a viral pathogen of the disease remained controversial. In order to clarify the cause, a series of research projects have been conducted since 1986 at the Fujian Academy of Agricultural Sciences. Other organisms, such as a phytoplasma (= MLO) and twig borer insects, were suspected of being the causal agent (Li, 1983), although administration of antibiotic treatments to seedlings failed to suppress the disease, indicating that a phytoplasma was unlikely to be the cause (Chen et al., 1989). He et al. (2001) carried out investigations in orchards in Guangdong Province between 1995 and 1998, and reported that longan witches’ broom is caused by the mite Aceria dimocarpi Kuang, and not by a virus or a twig borer. They observed that witches’ broom diseased shoots could occur both in the presence and absence of twig borer tunnel damage. However, when longan seedlings were inoculated with mites, 50% developed symptoms of witches’ broom disease and hosted mites, whilst no mites were found on the leaves of the symptomless plants. The mite was always found on diseased shoots and spikes, and the number of mites was positively correlated with the severity of the disease. Integrated management of pruning and spraying with a miticide on diseased shoots restored blossoming and reduced the average incidence of diseased spikes from 80% to 9% in three orchard trials. Further evidence of a mite being implicated in the witches’ broom disease has been reported in Chiang Mai and Lam Phun provinces of Thailand, where the aetiology is thought to be the mite Aceria dimocarpi (Kuang) and a transmitted phytoplasma (Chantrasri et al., 1999; Visitpanich et al., 1999). After one month, feeding by the mites caused witches’ broom symptoms along the shoots of seedlings.
Electron micrographs revealed phytoplasma cells in the cytoplasm of infected sieve tube elements, and these were confirmed by PCR techniques (Chantrasri et al., 1999). However, Sdoodee et al. (1999) were not able to confirm the presence of phytoplasmas in infected longan tissue by PCR, although the DNA indicated the presence of a prokaryote. Studies relating to the transmission of the ‘virus’ were undertaken from 1985-89 by Chen et al. (1992). It was found that longan witches’ broom was transmitted from one longan tree to another, and from longan to lychee trees, by the vectors Tessaratoma papillosa Drury (litchi stink bug) and the longan psylla, Cornegenapsylla sinica (Koizumi, 1995). The transmission rate by adults and nymphs of the litchi stink bug was 18.8-36.7% and 26.7-45%, respectively, with latent periods ranging from 53-72 days up to one year. The transmission success rate by the longan psylla was 23.3-36.7%, with a latent period of 80-88 days up to one year. Transmission has also been demonstrated by inarching or marcotting from diseased parent trees (Li L.R., 1955; Menzel et al., 1989). Another possible vector of longan witches’ broom is dodder. A study of transmission by Cuscuta campestris (dodder) conducted in 1987 and 1988 in China found that infectivity caused by the dodder was 20-40%, with a latent period of 130-136 days (Chen et al., 1990b). Dodder feeding on infected longan shoots was able to transfer the phytoplasma and produce symptoms in periwinkle plants (Catharanthus roseus) (Chantrasri et al., 1999). A preliminary survey of the incidence of the disease in Hong Kong indicated that witches’ broom of longan was most likely to have originated from Kwantung, China (Li L.R., 1955), where the proprietors of the local orchards obtained planting materials. A study that followed the discovery of the disease in Hong Kong indicated transmission of the disease via seeds and grafting, prompting Li L.R.
(1955) to suggest that the cause of longan witches’ broom may be viral. So and Zee (1972) grafted material from seriously infected longan trees onto two-year-old disease-free trees. Seven months later, typical symptoms were evident on the young foliage of all test plants, with the exception of one that failed to graft. The controls did not show virus symptoms. These results agreed with preliminary findings by Li L.R. (1955) on the transmission of the disease in China. Chen and Ke (1994) reported that the incidence of the disease on seedlings in Fujian Province was 5-30%, while the incidence after grafting onto three different longan varieties was 4.3%, 14.0% and 19.4%, respectively. Longan witches’ broom has spread quickly in Guangdong Province in China, with 11% of trees infected in 1995 rising to 50% by 1997. Results obtained in a grafting test indicated that scions may have caused 4.26-19.44% morbidity among the graftlings, which showed symptoms of the disease within 3-10 months (Chen et al., 1990b). An investigation revealed that the morbidities of seedlings, aerial layerings and tongue graftings in the field were 0.1-45.2%, 21-32% and 5-20%, respectively (Chen and Ke, 1994). The extremely high morbidity of seedlings in the field was most likely caused by repeated infection by insect vectors. Seedlings grown from the seeds of infected trees of cultivars ‘Youtanben’ and ‘Dongbi’ showed an average morbidity of 2.17% (0.19-4.41%) (Chen et al., 1990b), suggesting that seed was one of the factors spreading the virus (Chen et al., 1992), supporting the work of Li (1955). In another test, pollen from diseased longan flowers was aseptically cultured, and typical symptoms of longan witches’ broom were present on some of the anther-derived plantlets, indicating that the pathogen may be transmitted by pollen (Chen et al., 1990b).
It remains uncertain whether pollen of infected longan flowers carried the virus; however, healthy leaves smeared with the juice of young leaves from diseased trees did not develop any symptoms of the disease (Chen et al., 1990b), excluding the possibility of virus transmission by sap smearing. After conducting further transmission tests, Chen et al. (2001) reported that the seeds and budwood of longan, the insect vectors litchi stink bug (Tessaratoma papillosa) and longan psylla (Cornegenapsylla sinica), and dodder plants (Cuscuta campestris) all transmitted the virus. Witches’ broom has variously been described as ‘the only significant disease affecting longan in Asia’ (Menzel et al., 1989), as ‘a widely spread and most important longan disease in China’, and as the ‘most serious disease to the crop’ (Chen et al., 1992). An early survey in China revealed that 80-100% of longan trees in an old orchard, and 5-10% in newly established orchards, were attacked by witches’ broom disease (So and Zee, 1972). According to an investigation of longan production areas in 17 counties or cities in Fujian Province of China, the percentage of diseased trees varied from 20-100%, with higher infestation in mature groves. The disease causes crop losses of 10-20% in average years, whilst crop losses of over 50% have been recorded in some severe cases (Chen et al., 1990a).

Lychee: Chen et al. (1992) report that witches’ broom symptoms had been observed on lychee in Fujian Province for 10 years. The disease is transmitted by seedlings, by inarching and by the vector Tessaratoma papillosa, and is also associated with the presence of filamentous virus particles in leaf phloem cells. This suggests that lychee and longan witches’ broom disease are caused by the same virus (Chen et al., 1996). Lychee witches’ broom is known to infect seedlings, juvenile and adult trees.
Young leaves on the shoots of infected plants become rolled and reduced in size, with excessive proliferation of shoots that become broom-like in appearance. The flowering panicles become considerably aggregated in clumps and resemble those described for longan witches’ broom. Chen et al. (1992) reported that longan witches’ broom disease is closely related to that of lychee, because Tessaratoma papillosa can successfully transmit the pathogen of longan witches’ broom to lychee. However, other Chinese technical experts reported a lack of adequate evidence to prove that witches’ broom disease infects lychee fruit or that the disease exists in lychee (AQSIQ, 2003). Witches’ broom disease has never been recorded on lychee in Thailand (DOA, 2003).

Control: The pathogen may be controlled by integrated methods, including strict quarantine of longan material from infected areas; use of resistant varieties; careful selection of propagating material and virus-free seedlings; and chemical control of the vectors (Coates et al., 2003). The best strategy for disease management appears to be controlling the vectors (Chen et al., 2001; Zhang and Zhang, 1999). Spraying with chlorophos (trichlorfon) or with Sumicidin was found to give good control of the vector (Chen et al., 1999b). In Thailand, sucking insects were controlled with carbaryl, and infected trees were injected with the antibiotic pyrrolidinomethyl tetracycline (PMT) near the affected tip. The tip was then cut, and within 1-2 months the disease allegedly disappeared (Ungasit et al., 1999). Experiments to eliminate the virus from planting material showed that alternating heat treatments at 40°C in the daytime and 30°C at night for 40-90 days gave a disinfection rate of 10-20%. Shoot-tip culture gave a rate of 18.5%, and the combination of alternating heat treatment and shoot-tip culture gave 47.3%. Virus-free plantlets were obtained by heat treatment and used as scions (Chen et al., 1999a).
Biological and timely chemical control of insect vectors, and removal of infected branches and inflorescences, are also important measures for the management of the disease (Chen, 1990; Chen et al., 1990). The close relationship between different varieties of longan and the incidence of the disease was first observed in China in the mid-1980s (Chen et al., 1990a), but few further investigations have been made since. Chen et al. (1998) found great differences in susceptibility to the disease among longan varieties, and suggested careful selection and breeding as an important means of control. Varieties such as ‘Lidongben’ and ‘Shuinan No. 1’ were found to be highly resistant, whilst ‘Pumingyan’, ‘Youtanben’, ‘Dongbi’ and ‘Honghezgi’ were more susceptible. Top-grafting with scions of resistant varieties effectively reduced the morbidity caused by the disease in severely infected orchards. However, none of the longan cultivars from China, Hong Kong or Thailand can be guaranteed to be free of the virus. Consequently, Menzel et al. (1989) advised that all longan [nursery] material introduced into Australia should be closely examined for symptoms of the mycoplasma. In Thailand, the popular longan cultivars ‘Biew Kiew’, ‘Deang Klom’ and ‘Ma Teen Klong’ are the most prone to witches’ broom and develop severe symptoms (Ungasit et al., 1999; Visitpanich et al., 1996); however, cultivars ‘Daw’ and ‘Heaw’ are only mildly affected (Visitpanich et al., 1996). The longan cultivar of choice for export is ‘Daw’, which is considered resistant (DOA, pers. comm., 2003). Based on knowledge of the pathogen, its transmission sources and vectors, and the principles of pest control (Chen et al., 1999b), six measures have been proposed for an integrated pest management program: strict quarantine inspection; selection and use of disease-resistant varieties (e.g. ‘Lidongben’ and ‘Shuinan No.
1’); establishment of virus-free nurseries; timely control of vectors; removal of infected branches, inflorescences and trees from nurseries and orchards; and judicious fertilisation, irrigation and soil management to promote tree vigour and enhance resistance to the disease (Chen et al., 2001).

AQSIQ (2003). Comments provided on the Technical Issues Paper on the IRA on Longan and Lychee Fruit from China. State General Administration for Quality Supervision and Inspection and Quarantine of the People’s Republic of China (AQSIQ), 18 June 2003.
Chantrasri, P., Sardsud, V. and Srichart, W. (1999). Transmission studies of phytoplasma, the causal agent of witches’ broom disease of longan. Abstract, The 25th Congress on Science and Technology of Thailand, 20-22 October 1999, Pitsanulok, Thailand. 188.8.131.52/info&research/cmuabstract/00/Istrd.pdf
Chen, J.Y. (1990). The spreading period of longan witches’ broom disease by insect vectors and their timing control. Fujian Agricultural Sciences and Technology 1: 18.
Chen, J.Y., Chen, J.Y., Fan, G.C. and Chen, X. (1999a). Preliminary study on the elimination of the virus of longan witches’ broom disease. Advances on Plant Pathology. Yunnan Science and Technology Publishing House, pp. 163-166.
Chen, J.Y., Chen, J.Y., Fan, G.C. and Chen, X. (1999b). The integrated control method for longan witches’ broom disease. South China Fruits 28(3): 29.
Chen, J.Y., Chen, J.Y. and Xu, X.D. (2001). Advances in research of longan witches’ broom disease. In: Huang, H.B. and Menzel, C. (eds). Proceedings of the First International Symposium on Litchi and Longan, Guangzhou, China, June 2000. ISHS Acta Horticulturae 558: 413-416.
Chen, J.Y., Chen, J.Y., Xu, X.D., Fan, G.C. and Chen, X. (1998). An investigation into the susceptibility of varieties to longan witches’ broom disease and some considerations about the breeding and utilisation of resistant varieties. Prospects of Plant Protection in the 21st Century. Beijing Press of Science and Technology of China, pp. 410-413.
Chen, J.Y., Chung, K. and Ke, X. (1991). Studies on longan witches’ broom disease. III. Affirmation of viral pathogen. Virologica Sinica 9: 138-143.
Chen, J.Y. and Ke, C. (1994). The preliminary study on the transmission of longan witches’ broom disease by seedlings. China Fruits 1: 14-16.
Chen, J.Y., Ke, C. and Lin, K.S. (1990a). Studies on longan witches’ broom disease. History, symptoms, distribution and damage. Journal of Fujian Academy of Agricultural Sciences 5: 34-38.
Chen, J.Y., Ke, C., Xu, C.F., Song, R.L. and Chen, J.Y. (1990b). Studies on longan witches’ broom disease. Transmissive approaches. Journal of Fujian Academy of Agricultural Sciences 5(2): 1-6.
Chen, J.Y., Ke, C. and Ye, X.D. (1989). A brief report on the pathogen of longan witches’ broom disease. Fujian Agricultural Sciences and Technology 5: 42.
Chen, J.Y., Ke, C. and Ye, X.D. (1994). Studies on longan witches’ broom disease, confirmation of viral pathogen. Virologica Sinica 9: 138-142.
Chen, J.Y., Li, K.B., Chen, J.Y. and Fan, G.C. (1996). A preliminary study on litchi witches’ broom and its relation to longan witches’ broom. Acta Phytopathologica Sinica 26: 331-335.
Chen, J.Y., Xu, C.F., Li, K.B. and Xia, Y.H. (1992). On transmission of longan witches’ broom disease by insect vectors. Acta Phytopathologica Sinica 22: 245-249.
Chen, Y.H., Lin, L.Q. and Chen, J.Y. (1990). Study on control of litchi stink bug (Tessaratoma papillosa Drury) by release of parasitic wasps. Fujian Agricultural Sciences and Technology 2: 15-16.
Coates, L.M., Sangchote, S., Johnson, G.I. and Sittigul, C. (2003). Diseases of lychee, longan and rambutan. In: Ploetz, R.C. (ed.) Diseases of Tropical Fruit Crops. Wallingford, UK: CABI Publishing, pp. 307-325.
DOA (2003). Personal communication, Department of Agriculture plant pathologist, Chatuchak, Bangkok, Thailand, 21 May 2003.
He, D.P., Zhou, B.P., Zeng, M.L., Lin, S.X., Peng, J.X., Li, J.Y. and Huang, W.M. (2001). Occurrence, cause and control of longan witches’ broom in Guangdong Province. In: Huang, H.B. and Menzel, C. (eds). Proceedings of the First International Symposium on Litchi and Longan, Guangzhou, China, June 2000. ISHS Acta Horticulturae 558: 407-412.
Kitajima, E.W., Chagas, C.M. and Crestani, O.A. (1986). Virus and mycoplasma-associated diseases of passionfruit in Brazil. Fitopatologia Brasileira 11: 409-432.
Koizumi, M. (1995). Problems of insect-borne virus diseases of fruit trees in Asia. Food & Fertiliser Technology Center Extension Bulletin. http://www.fftc.agnet.org/library/article/eb417b.html (accessed 21 July 2000).
Li, L.R. (1955). Preliminary study on viral diseases of longan. Acta Phytopathologica Sinica 1: 211-217.
Li, L.R. (1983). Longan Cultivation. Beijing, China: Agricultural Press, pp. 128-131.
Li, L.Y. (1955). A virus disease of longan, Euphoria longana, in Southeast Asia. Lingnan Science Journal 1: 211-215.
Menzel, C.M., Watson, B.J. and Simpson, D.R. (1989). Longans – a place in Queensland’s horticulture? Queensland Agricultural Journal, September-October 1989: 251-264.
Qui, W.F. (1941). Records on diseases of plants of economic importance in Fujian (1). Quarterly Journal of New Agriculture 1: 70-75.
Sdoodee, R., Schneider, B., Padovan, A.C. and Gibbs, K.S. (1999). Detection and genetic relatedness of phytoplasmas associated with plant diseases in Thailand. Journal of Biochemistry, Molecular Biology and Biophysics 3: 133-140.
So, V. and Zee, S.Y. (1972). A new virus of longan (Euphoria longana Lam.) in Hong Kong. Agriculture and Fisheries Department, Hong Kong 18: 283-285.
Ungasit, P., Lamphong, D.N. and Apichartiphongchai, R. (1999). An Important Economic Fruit Tree for Industry Development. Chiang Mai, Thailand: Faculty of Agriculture, Chiang Mai University, 137 pp. (Translation by Srisuda MacKinnon).
Visitpanich, J., Sittigul, C. and Sardsud, V. (1996). Longan leaf curl symptoms in Chiang Mai and Lam Phun. Journal of Agriculture 12(3): 203-218.
Visitpanich, J., Sittigul, C., Sardsud, V., Chanbang, Y., Chansri, P. and Aksorntong, P. (1999). Determination of the causal agents of decline, witches’ broom and sudden death symptoms of longan and their control. Final report, Thailand Research Fund Project, Department of Plant Pathology, Chiang Mai University, Chiang Mai, Thailand. (English abstract).
Ye, X., Chen, J. and Chong, K. (1990). Partial purification of a filamentous virus from longan (Dimocarpus longana Lam.) witches’ broom disease trees. Chinese Journal of Virology 6: 284-286.
Zhang, Q. and Zhang, Q. (1999). Investigation of the occurrence of longan witch-broom and its control. South China Fruits 28: 24.
Zhu, W.S., Huang, H.Y., Huang, T.L., Lei, H.D. and Jiang, Y.H. (eds). (1994). The Handbook of Diseases and Pests of Fruits in Southern China. Beijing, China: Agricultural Press, 258 pp.
Autodesk Software Helps Increase Profitability and Efficiency for Civil Engineers with Intuitive Functionality 9.8.04 Civil 3D is a powerful civil engineering tool for land planning, subdivision design, parcel layout, road design, and grading. Because the software creates intelligent relationships between objects, design changes are now dynamically updated; when an engineer changes one element of a design, all related elements respond accordingly. Because Civil 3D is built on AutoCAD software, civil engineers can immediately leverage their AutoCAD skills. Additionally, with round-trip data exchange from Autodesk Land Desktop, they can seamlessly transition projects and realize the benefits of 3D visualization and design by gradually building Civil 3D into their workflow. - Solid User Interface: Civil 3D offers standard AutoCAD menu interaction, design and layout toolbars for use with alignments, profiles, parcels, and grading, and direct interaction with graphical objects. - Flexible Object Styles: Civil 3D enables engineers to control object and label appearance via style settings, maintain drafting standards, and simplify the process of sharing drawings with non-Autodesk Civil 3D users. - Points and Surfaces: Civil 3D can create points using a variety of creation methods and collect points in logical groups based on advanced criteria. Point appearance and labeling are controlled from the Point Group feature. Civil 3D is equipped with a robust description key mechanism and can import and export points in a range of formats, including custom formats, MDB, and LandXML. Users can store and retrieve points from a project for collaborative engineering. - Alignments, Profiles, and Sections: Civil 3D can extract existing ground profiles from multiple surfaces and design proposed vertical alignments through graphical layout, tabular input, and dynamic editing. Civil 3D also allows engineers to select sections at specific stations or at intervals along the alignment. 
Engineers can create section plots (single station, full section sheets) and finished drafting with dynamic annotation, control drafting standards, station offset, and grade labels. - Complex Corridor Modeling: Civil 3D corridor modeling is used to model lanes, grading, side slopes, ditches, medians, and barriers in complex roadway corridor designs. Changes made to the model-based design are dynamically updated and seen in the model, which improves design iteration time and directly affects the billable hours the engineers spend on the design. The corridor model also generates geometry, terrain models, site volumes, and visualization, which allows engineers to get crucial data from designs quickly and easily. The corridor model takes advantage of the Civil 3D style-based functionality, allowing the drafting and annotation of the design to adhere to company standards. Autodesk Civil 3D 2005 will be available in the United States later in the quarter. It is also planned to be available in the United Kingdom, France, Germany, Italy, China, Japan, and Korea at a later date. Source: Autodesk, Sept. 7, 2004
Mercury is darker than the moon, something that has perplexed scientists, especially because Mercury lacks iron, the darkening agent found on the moon. Now, scientists think they know the reason why: The surface of the innermost planet is enriched in carbon, in the form of graphite, aka “pencil lead.” Scientists using NASA’s MESSENGER orbiter found evidence for carbon at levels of a few percent—much higher than is typically found on Earth, the moon, and Mars. The observations came from the last days of the MESSENGER mission, just before it crashed into the surface in 2015, when the spacecraft got up close and personal to large craters (seen above) where the darkening agent is most prevalent, scientists report today in Nature Geoscience. Scientists suspect that the graphite comes from Mercury’s original crust 4.5 billion years ago, when the planet was solidifying from a ball of molten magma. Whereas most minerals crystallizing out of the magma ocean would sink, graphite would have floated to the top.
Both fossil evidence and studies of the molecules of heredity themselves - DNA - suggest not only that the common ancestor walked the Earth far more recently than Darwin could have imagined, but that it was more like a human being than like a chimp or gorilla. To use the same sloppy, but emotive, language used by those Victorians, the best evidence now is that "apes are descended from man". The key word in that description of the common ancestor is "walked". Upright walking - proper upright walking - is a uniquely human characteristic. We know from the fossil evidence that our ancestors (including the famous Lucy) were walking upright by about five million years ago. Traditionally, this was regarded as long after the "man-ape split", which was set (on the basis of extremely limited fossil evidence, and a lot of wishful thinking) as about 15 million years ago. By the time Lucy walked the Earth, the story ran, humans had been evolving separately from other species for at least 10 million years. But the molecular studies show that this is not the case. DNA analysis (just like the genetic fingerprinting that can be used to identify individual people) is now so accurate that it can tell us that we share 98.6 per cent of our genetic material with the hairy apes. This makes us extremely close relations - more closely related, for example, than the horse and the donkey, or sheep and goats. And because molecular biologists know how long it takes for changes to accumulate in the DNA, this also tells us that the man-ape split actually happened a bit less than five million years ago - crucially, after our ancestors on the human line had learned to walk upright. Just after that time, the fossil evidence shows that the line that leads to us (Homo sapiens) shared the plains of Africa with two close relations, the Australopithecines, very similar species but with one larger than the other. 
On the traditional picture, they both vanished from the scene a couple of million years ago, leaving no descendants today. On the other hand, nobody has identified fossils ancestral to the modern chimpanzee and the gorilla - but they must have had ancestors! And they are very similar species, one larger than the other. It scarcely takes a Sherlock Holmes to solve the mystery of what happened to the Australopithecines, once the evidence is presented like that. The new dating of the man-ape split matches up these fossils without descendants with the descendants without fossils. The Australopithecines, it seems, did not die out, but gave up the ability to walk upright and re-adapted to a life in the trees, becoming the chimpanzee and the gorilla. There were three closely related species around three to four million years ago, and there are three closely related species around today. The techniques on which these conclusions are based are all well established, and entirely non-controversial. At least, they are not controversial as long as you don't apply them to human beings. But even today, more than a hundred years after Darwin, there are many people who still want to think that we are special, and somehow not subject to the same evolutionary rules as other animals. But they are wrong. Human beings are just one twig on the bush of evolution, growing right alongside the chimpanzee twig and the gorilla twig. It isn't that we are descended from the apes, or the apes from us. We are apes, whether we like it or not. John Gribbin presents the Radio 4 series `Evolution After Darwin' at 9 pm on Wednesdays, starting 14 October
Changes in transit design that aim to make roads and car traffic safer are one critical component of the complete streets movements underway across North America. Vehicle usage is responsible for staggering CO2 emissions, human injury and death, energy consumption, and more. Still, cars remain a part of the urban landscape, and street design that integrates them safely is imperative. Speed bumps, street markings, speed limits and other measures have all been used to create safer conditions for all users of the road. But what about trees?
CMFRI, Kochi (2005). Mangroves for livelihood and coastal protection. CMFRI Newsletter No. 105, January-March 2005, pp. 1-2.
The ecotones between aquatic and terrestrial environments are unique habitats. The ecosystem includes mainly mangroves, estuaries and other wetlands. Mangroves dominate the coastal areas in tropical countries; they also represent rich and diverse living resources and are essential to both the economy and the protection of coastal areas.
Uncontrolled Keywords: Mangrove; coastal protection
Subjects: Marine Ecosystems > Mangroves
Divisions: CMFRI-Kochi > Biodiversity; Subject Area > CMFRI Brochures
Depositing User: Arun Surendran
Date Deposited: 22 Nov 2010 09:24
Last Modified: 09 Sep 2015 15:39
3.11 Pensions: How Good is a Teacher’s Pension? Teaching pays better than you think, because… Obviously, no one gets filthy rich on a teacher’s pension. But it is easy to underestimate the value of the pension system in the “big picture” of teacher compensation. Teachers aren’t lavishly paid, but each year of teaching comes with a significant promise toward a financially secure retirement. Teacher pay is better than it looks A teacher’s total pay is better than it looks. Unless something goes terribly wrong. It might. Teachers’ pensions in most states, including California, are “defined benefit” systems. That is, when a teacher retires, he or she receives payments in a manner defined by the rules of the pension system. Few jobs in the private sector offer a pension in this manner anymore. In the private sector, employees commit a portion of their pay into personal retirement savings accounts such as an IRA or 401(k) account. This approach is known as a “defined contribution” model. The difference in approach creates an apples-and-oranges problem. It is difficult to accurately compare teacher pay with private sector pay, and in a simple comparison teacher salaries seem worse than they are. Private-sector workers’ retirement dollars flow through paycheck deductions and build up in a way that is easy to count. They show up on a monthly statement and accumulate in an account. Teacher pensions, by contrast, don’t accumulate, they just exist. Like a life insurance contract, teacher pensions are a promise of future payments that will vary depending in large part on how long an individual lives. It’s like Social Security, but less secure California public school teachers do not pay Social Security taxes or earn Social Security benefits. Instead, they participate in the California State Teachers’ Retirement System (STRS). Retirement benefits are a very important and significant element of teacher compensation. 
California teachers pay into their STRS system through an 8.15% withholding on gross wages (a rate higher than Social Security’s 6.2% but less than STRS rates in other states; in Louisiana the withholding rate exceeds 20%). In return, in an average retirement lifetime of thirty years, CalSTRS has historically paid back about five or six times what teachers have put in, adjusted for inflation. Pension benefit calculations are complicated, and few teachers fully understand them. The graph below expresses the total financial compensation a hypothetical teacher in Oakland receives each year, including each year’s increase in promised lifetime STRS pension benefits. This example is based on the Oakland salary schedule in 2006, a suitably representative example. The chart requires a bit of explanation. In year ten, for example, the teacher receives gross pay of about $59,000, from which 8% (about $4,700) is withheld for contribution to the California STRS system. (In the graph, this is the very small negative portion below each column.) The school district matches this contribution, plus an extra 0.25%. The plan includes a few specific anomalies: A teacher qualifies for no pension at all unless he or she works at least five years. There is a very significant pension incentive to stay for a thirtieth year. And in years 32-37 teachers receive pension commitments virtually equivalent to their full salary. Examining a sample year for a sample teacher can help you understand how it works. For completing his or her 10th year in the system, the teacher’s defined monthly pension check upon retirement increases by about $100. You can think of this as an upgrade in the value of the teacher’s pension which will benefit him or her each month, upon retirement. This small monthly increase, granted in year ten, will add up to about $36,000 (in 2006 dollars) over the course of an average 30-year retirement. 
(Of course the “increase in value” shown here is an estimate based on a typical 30-year retirement. The actual value of these bumps will vary, depending on how long a teacher lives in retirement.) A person who works part of a career in STRS employment and part of a career in Social Security employment receives retirement benefits from both systems, but Social Security benefits, which are progressively indexed to favor initial earnings, are reduced by STRS receipts, which are indexed to favor end-of-career earnings. The complexity and interaction of these systems creates barriers to entry and exit from the teaching profession. Most criticism of STRS is similar to criticism of other defined benefit pension systems. For example, it’s pretty clear that $36,000 is a very strong risk-free inflation-adjusted return on a $4,700 contribution. This payout ratio worked fine until about 2008, thanks to strong stock market performance and steady growth in the number of teachers paying into the fund. Concerns About STRS in the Long Term What the market gives it can also take away. Over the long term, STRS benefitted greatly from steady increases in California public school enrollment, which added to the number of teachers paying into the system faster than the growth in the number of retired teachers served by it. As enrollment has flattened and investment returns have swooned, teachers and analysts began to express concerns about whether these pensions were safe. Oh give me a pension / can trust / don’t mention / of a bust In 2008 the Pew Center on the States raised eyebrows by estimating California’s unfunded current public liabilities at more than $60 billion, much of it from STRS. By 2013, even after two years of strong growth in fund assets, the unfunded liabilities of the STRS system alone were reckoned to be nearly $167 billion – equivalent to hundreds of thousands of dollars per teacher. 
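The year-ten arithmetic described above reduces to a few lines. This is a minimal sketch using the article's own approximations ($59,000 gross pay, 8% STRS withholding, a $100/month lifetime pension bump, a 30-year retirement); it is not an official CalSTRS benefit formula.

```python
# Year-ten figures from the Oakland 2006 example in the article.
# All numbers are the article's approximations, not CalSTRS rules.
gross_pay = 59_000
teacher_contribution = gross_pay * 0.08             # about $4,720 withheld for STRS
district_match = gross_pay * (0.08 + 0.0025)        # district matches, plus an extra 0.25%

monthly_pension_bump = 100                          # lifetime bump earned in year ten
retirement_years = 30
lifetime_value = monthly_pension_bump * 12 * retirement_years

print(round(teacher_contribution))                  # 4720
print(lifetime_value)                               # 36000
print(round(lifetime_value / teacher_contribution, 1))  # 7.6 (payout multiple on that year's contribution)
```

The 7.6x figure applies to this single year's slice; the article's "five or six times" estimate is an inflation-adjusted average over a whole career.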
The Legislative Analyst Office (LAO), charged with providing impartial analysis to the state legislature, reported in its sober style that unfunded teacher pensions could be California’s “most difficult fiscal challenge.” David Crane, president of the nonpartisan advocacy organization Govern for California, compared the pension obligation to being on the wrong end of a giant zero-coupon bond. He said that the state needed to quickly begin setting aside significant funding each year to pay down this debt, or the interest would swell into an unfunded “$600 billion dollar sinkhole.” How much funding is “significant”? The LAO analysis suggested that the problem could be stabilized with a steady investment of $4.6 billion per year. The legislature responded to these concerns in June 2014. Over a period of seven years, AB1469 phases in changes to the amounts that teachers, school districts (which employ teachers) and the state each pay into the STRS system. The bulk of the changes fall on school districts. As employers, districts previously contributed the equivalent of 8.25% of teacher salaries to STRS, but under the new law that will eventually rise to 19.1%. Could be Worse! All pension systems are complex, and teacher pension systems vary enormously from state to state. Overall, comparisons with other states tend to show California’s system as fairer and more stable than most. Because of the difficulty in comparing the personal value of a defined benefit system with that of a defined contribution system, teachers and prospective teachers in California tend to underestimate the substantial value of the pension benefits they are earning. In the long run, California teacher pay is better than it looks on a pay stub. This post concludes the “Teachers” section of Ed100. The overall structure of Ed100 is “Education is Students and Teachers spending Time in Places for Learning with the Right Stuff in a System with Resources for Success. 
So Now What?” In the next chapter, we tackle the educational implications of life’s most precious resource: Time.
DANGERS ON THE ICE OFF THE COAST OF LABRADOR With Some Interesting Particulars Respecting the Natives of that Country Printed for the Religious Tract Society [Price One Penny] The Moravian Missionaries on the coast of Labrador (a part of North America) for many years suffered much from the severity of the climate, and the savage disposition of the natives. In the year 1782, the brethren, Liebisch and Turner, experienced a remarkable preservation of their lives; the particulars show the dangers the Missionaries underwent in pursuing their labours. To this Narrative are added some further particulars, which show their labours were not without success. Early on March the 11th, they left Nain to go to Okkak, a journey of 150 miles. They travelled in a sledge drawn by dogs, and another sledge with Esquimaux joined them, the whole party consisting of five men, one woman, and a child. The weather was remarkably fine, and the track over the frozen sea was in the best order, so that they travelled at the rate of six or seven miles an hour. All therefore were in good spirits, hoping to reach Okkak in two or three days. Having passed the islands in the bay, they kept at a considerable distance from the shore, both to gain the smoothest part of the ice, and to avoid the high and rocky promontory of Kiglapeit. About eight o’clock they met a sledge with Esquimaux driving towards the land, who intimated that it might be well not to proceed; but as the missionaries saw no reason for it, they paid no regard to these hints, and went on. In a while, however, their own Esquimaux remarked, that there was a swell under the ice. It was then hardly perceptible, except on applying the ear close to the ice, when a hollow grating and roaring noise was heard. The weather remained clear, and no sudden change was expected. But the motion of the sea under the ice had grown so perceptible as rather to alarm our travellers, and they began to think it prudent to keep closer to the shore. 
The ice in many places had fissures and cracks, some of which formed chasms of one or two feet wide; but as they are not uncommon, and the dogs easily leap over them, the sledge following without danger, they are terrible only to new comers. As soon as the sun declined, the wind increased and rose to a storm. The snow was driven about by whirl winds, both on the ice and from off the peaks of the high mountains, and filled the air. At the same time the swell had increased so much, that its effects upon the ice became very extraordinary and alarming. The sledges, instead of gliding along smoothly upon an even surface, sometimes ran with violence after the dogs, and shortly after seemed with difficulty to ascend the rising hill; for the elasticity of so vast a body of ice, of many leagues square, supported by a troubled sea, though in some places three or four yards in thickness, would, in some degree, occasion a motion not unlike that of a sheet of paper upon the surface of a rippling stream. Noises were now likewise heard in many directions, like the report of cannon, owing to the bursting of the ice at some distance.
Hitchhiker's Guide to Publishing in the Wizarding World by Diana Patterson Wizards are very fond of books, newspapers, and magazines, ones filled with moving photographs. But their presence in the world of quills, ink and parchment raises some problems about publishing - at least for us Muggles. What is Publishing? Before beginning to decipher the ways of publishing, it may be necessary to define exactly what publishing is: it is making ideas public. Technically, yelling in the street or scribbling in chalk on a pavement would be a form of publishing. But more practically, publishing involves making copies of material in such a way that someone can make money from the exercise. A publisher need not be anyone who makes copies of something (such as a printer), but must be someone who distributes the copies. Medieval Setting and its Implications Now the wizarding world is mainly medieval. Wizards wear robes, presumably in such a way as to get a 'healthy breeze' around their privates (GF7) in the medieval manner, and similarly they use drippy ink with mainly quills (reed pens for thicker writing) on parchment, in the late Classical or early Medieval manner. There certainly was publishing of books in the Classical and Medieval times. One person copied a book from another, or, for more 'mass-market items', copying was done in scriptoria, where either a text was read aloud to several copyists, or the exemplar was taken apart and each copyist had a portion of the exemplar to work on, then everyone traded until each copy had all the parts (this is called the pecia system of copying, and need not be carried on in one room at the same time). Of course what I have just described is the Muggle Medieval world. Once we have magic, why things change. What do Wizard Books Look Like Inside? The Muggle technology to replace magic was the printing press, a device that delineates the Middle Ages from the Renaissance. 
But we are never told whether particular books or newspapers or magazines are done in type, or in manuscript (handwriting). The films (non-canonical) tell us that the Daily Prophet is in manuscript, but books are nineteenth-century, printed, Victorian items, like Moste Potente Potions, although the miraculous book that Hermione refers to that mentions Nicolas Flamel, and updates itself to record his current age, is in manuscript with metal tags sewn to the presumably parchment leaves. This makes sense, since Nicolas Flamel was born around 1330, before paper was much used in the West. We know that some books are as big as paving stones, and some are very tiny. We know that they are in the familiar form of books in our era: the codex, as opposed to the Classical form of the book, the scroll. The only books whose pages we have been able to turn are Quidditch through the Ages and Fantastic Beasts and Where to Find Them. These are printed books on paper, at least to my Muggle eyes, although they could be done by magical quills that draw extremely good book hands, and the paper might just be for us Muggles. Of course neither of these books contains photographs, only some sketches that appear to have been done in manuscript, not like the images in Moste Potente Potions [film version], which appear to be wood blocks. The spelling of Moste Potente Potions suggests that the book ought to be a Renaissance production, not a Victorian one, however; in which case, in the Muggle world, illustrations would have been either more elaborate wood blocks, or engravings on copper. Parchment and Paper We do know that the students write essays using quills on parchment. Now in the Muggle world, parchment is a really tough, whitish substance made of animal skin. The skin has the hair removed, and it is stretched and dried, and has all the fat scraped off it as it stretches and dries. 
The best parchment is made from young sheep or goats, but really, it could be made from many animals, including dog and cat. It looks vaguely like paper, but it is not paper. (My publishing students have trouble describing a sheet of writing material as anything but paper, but parchment is not a kind of paper — paper is vegetable, and parchment is animal.) The animal skin is enormously difficult to tear. It is impossible to crumble. But when it gets wet, it warps and thickens: it tries to turn back into the skin of the animal it came from. Nobody dealing with Muggle parchment could wad it up into a ball, tear off a strip to hand somebody a note, or do any of the many things that wizards seem able to do with their parchment.

We do not know whether most books are made of paper or parchment, although we do know that at least one book, the one containing information about the basilisk (CS16), is paper. Paper came into the western world from China, through the Arabic countries, at about the same time that the printing press (actually movable type, not really the printing press, but . . .) came to be used, in 1450. Thus the particular copy of the basilisk book is that late. We also know that, for some odd reason, although temporary lists for bulletin boards are on parchment, Snape uses a roll of paper to give the clues about the vials guarding the Philosopher's Stone. So we see elements of the Renaissance here. Still, very old books, even printed ones, are likely to be of parchment.

It takes many dead animals to make a parchment book. One complete sheep is required to make between four and eight pages of a book as big as a paving slab (depending on the size of the paving slab and the size of the sheep). Thus a book of, say, 394 pages, in a size we would recognize as the size of a textbook (octavo), would require 394/16 sheep, or about 25 sheep. If everyone in a class of 20 had such a textbook, a Muggle would have to have 500 sheep on hand to slaughter.
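The sheep arithmetic above can be checked in a few lines. This is just a sketch using the essay's own figures: the 16-octavo-pages-per-sheep rate is implied by the author's division 394/16, and the 394-page book and class of 20 come straight from the text.

```python
import math

# Implied by the essay's own arithmetic (394 / 16): one sheep yields
# about 16 octavo-sized pages once the large sheets are folded down.
PAGES_PER_SHEEP = 16

def sheep_for_book(pages: int) -> int:
    """Whole sheep needed to supply parchment for one octavo book."""
    return math.ceil(pages / PAGES_PER_SHEEP)

def sheep_for_class(pages: int, students: int) -> int:
    """Sheep needed if every student owns a copy of the textbook."""
    return sheep_for_book(pages) * students

book = sheep_for_book(394)        # about 25 sheep for one textbook
flock = sheep_for_class(394, 20)  # about 500 sheep for the whole class
```

The ceiling rounds up because a partial sheep is still a whole slaughtered sheep, which is why 394/16 = 24.6 becomes "about 25" in the essay.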
Presumably wizards have developed some magical-synthetic parchment, since we don't run into many sheep around Flourish and Blotts or Hogwarts. Not only does this magical material tear and crumple, but it also withstands being thrown into toilets: nobody seemed surprised when the diary of Tom Riddle survived exactly that. In the Muggle world, books made of parchment were bound in wood (which is why we use the word 'board' to describe the heavy paper used for covers). To keep the parchment from changing shape, the boards had latches on them to press the parchment flat.

One possibility for producing a Daily Prophet would be a collection of enslaved copyists of some kind that would write out thousands of copies of Rita Skeeter's text at one go, so that they would be available next morning for owl post. Or, there might just be a reproduction spell for making copies, the wizard meanwhile imagining how much abundance he wants in order to produce the correct number of copies. Integrating photographs into this process would be difficult if not impossible with Muggle technology, but clearly not with magic. The photographs, so far as we know from seeing Colin Creevey using a camera and the photographer from the Daily Prophet with his purple flash, are the result of a mechanical process, much like Muggle photography, only using potions to create the moving images (CS6). How one copies the photograph into a manuscript is, well, magic.

One of the problems of publishing is distributing the weight of paper and cardboard. Magically transporting parchment and wood solves this problem, of course. But as with the Summoning Charm, the materials requested fly through the air as heavy objects rather than, presumably, as atoms. Similarly, sending material by Floo powder would create flying objects coming through chimneys: imagine attempting to go to Diagon Alley and meeting a shipment of the Monster Book of Monsters coming from the other direction!
We don't know how published material gets distributed other than by owl post and by carrying books around after a purchase in a book shop. The HP Lexicon lists the known publishers of the wizarding world, including their addresses and published books. Two of these have offices in Diagon Alley, where we know the Daily Prophet also has its offices, but whether they have other premises at which they reproduce their materials, or warehouse them, we know not. Somehow, one is sceptical that The Quibbler has offices in Diagon Alley.

One last problem in considering publishing in the wizarding world is the quality of the books. Literature, composition, and editing are not skills taught at Hogwarts, and no books described so far seem to have fiction in them, the only stories being histories, biographies or travels. So far we know of only one play (Hélas, Je me suis Transfiguré mes Pieds [Alas, I Have Transfigured My Feet], by a wizard named Malecrit ['he writes badly'] (QA8)) and one book of poetry (Sonnets of a Sorcerer) among all those tens of thousands of wizarding books. How good is the writing and editing in these many wizard books?

(Editor's note: Since this essay was first written, further wizarding books have come to light; see books by title for more information.)

To learn more about parchment and medieval books: Brown, Michelle P. The British Library Guide to Writing and Scripts: History and Techniques. Toronto: U of Toronto P, 1998. ISBN 0-8020-8172. (Available from the British Library bookshop on-line.) To look at some manuscript books, browse the Bodleian Library's on-line samples. To see how books changed between manuscript and printing, you might look at the University of South Carolina's on-line exhibit.
Killer whales have been revealed to be one of only three species whose females are known to undergo menopause—living on long after their reproductive years in order to help their offspring, particularly their sons, survive the rigors of young adulthood and later to help raise their grandchildren. It is a rare evolutionary trait, shared with humans and pilot whales, that may have much to do with the killer whales' success as a species, say British researchers at the University of Exeter and the University of York, who have conducted a 30-year study.

Like human mothers, female killer whales can bear offspring well into their 30s, but then may live on for another 50 years or so to look after their families, pass on their knowledge and skills to the younger generation, and assume a leadership role in the community. Their steadying presence is believed to dramatically improve a young male offspring's survival chances. Males are particularly vulnerable when young; one study suggests there is a 14-fold increase in mortality in the first year after the death of their mother. The survival of female offspring seems less dependent on the presence of the mother.

"Killer whales have a very unusual social system whereby sons and daughters don't disperse from their social group but instead live with their mother her entire life," says Darren Croft, a lecturer in animal behavior at the University of Exeter, who led the study with University of York biologist Dan Franks.

How did you determine that killer whales go through menopause?

Franks: The data comes from tracking over 500 whales for more than 30 years. Looking at this, we can see that female killer whales stop reproducing in their 30s or 40s, yet live into their 90s. This is a striking 50 to 60 years of their life in which they no longer give birth.

Does menopause occur among other species of whales too?

Franks: We know that short-finned pilot whales undergo menopause.
Anecdotally, there are some suggestions that other species of whales may undergo menopause as well, particularly sperm whales. More data is needed, however, to establish whether these other species actually do undergo menopause.

You mention that only three mammals undergo menopause—humans, killer whales, and pilot whales. How do we know there are no others?

Croft: Biologically speaking, menopause is a bizarre concept, and very few species have a prolonged period of their life span in which they no longer reproduce. There are only three species that we can be sure undergo menopause. It is certainly possible that other species may also have menopause, particularly species that live in close-knit family groups, but this is not yet known.

Do the whales, like humans, appear to undergo any other physiological symptoms or changes, such as hot flashes, as they go through menopause, or is that impossible to know?

Croft: Unfortunately, we don't have the physiological data on the whales to know what changes females experience as they go through menopause. Given the size of the animals and the fact that they live in the open ocean, it is difficult to get such data.

What are the evolutionary advantages to having older menopausal females in a population? What specific things do older females do to enhance the survival of young males?

Franks: We know that older females increase the chances of their offspring's survival, but exactly how they do this is a topic of our current research. Our team has a few ideas about what could be going on here, and we can speculate that they may provide support during encounters with other whales and help with foraging through knowledge and leadership.

If there are clear evolutionary advantages to menopause, why haven't other creatures developed this? Why whales? What's so special about them?

Croft: For menopause to evolve, the benefits of stopping reproduction in late life have to outweigh the costs.
Theory suggests that the key to understanding why menopause has evolved in killer whales lies in understanding their social structure and how a female's relatedness to her group changes as she ages. Killer whales live in close-knit family groups where both sons and daughters stay with their mother her entire life. Under these conditions, as females age, their relatedness to the group increases, and there comes a point where females benefit more from switching from having offspring themselves to helping care for their existing offspring or grand-offspring.

Given the perceived rarity of menopause across the animal kingdom, what would whales and humans have in common?

Croft: Theory suggests that the key to understanding why menopause has evolved in whales and humans is in understanding their social structure. In both humans and whales, as females age, their relatedness to their local group increases. This increase in relatedness means that evolution will favor females that switch from having their own offspring in late life to helping their offspring and grand-offspring survive and reproduce.
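The "relatedness rises with age" argument can be made concrete with a toy model. To be clear, this sketch and its numbers are mine, not the researchers': it simply assumes a female shares a group with some non-kin (relatedness 0) and that each breeding year adds one philopatric offspring (relatedness 0.5) who never leaves, so her average relatedness to groupmates can only climb.

```python
def mean_relatedness(n_unrelated: int, n_offspring: int) -> float:
    """Average coefficient of relatedness between a female and her
    groupmates, given n_unrelated non-kin (r = 0) and her own
    non-dispersing offspring (r = 0.5) in the group."""
    group_size = n_unrelated + n_offspring
    if group_size == 0:
        return 0.0
    return (0.5 * n_offspring) / group_size

# A female in a group of 10 non-kin: each accumulated offspring
# raises her average relatedness to the group.
trajectory = [mean_relatedness(10, kids) for kids in range(0, 11)]
```

Under these (invented) assumptions the trajectory rises monotonically from 0 toward 0.25, which is the qualitative point Croft and Franks make: late in life, helping kin can pay more than breeding again.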
Inspecting Sump Pumps

When a home inspector is inspecting the basement or crawlspace of a home, they may encounter something called a sump pump. A sump pump is a small pump that sits in a sump pit/crock and helps keep the area under the structure dry, helping prevent flooding. Water enters through the perimeter drainage system or by moving through the soil.

In a basement, for example, the sump pit/crock is located below the basement slab. It typically has a plastic liner but may be concrete or have earth walls. The electric sump pump is located in or above the sump pit/crock. Water is pumped away from the house through a discharge pipe. Some areas may allow the sump pump to discharge into the storm sewer, but more and more jurisdictions are getting away from allowing this. It should never discharge into the sanitary sewer.

Ideally the sump pump will have a dedicated circuit breaker for the outlet that the pump gets plugged into. You may encounter a sump pump that is plugged into a GFCI-protected receptacle, but in my experience it is best that the sump pump be plugged into a normal, properly grounded receptacle without GFCI protection. The reasoning is that few people pay attention to the sump pump/pit, and if the GFCI trips without their noticing, the basement/crawlspace could flood, causing damage. The codes may vary from this opinion and state that any receptacle in an unfinished basement or a crawlspace should have GFCI protection.

The home inspector should test the pump for operation. It can be a pedestal-style pump, which has the motor mounted on a shaft sitting above the water level. A lever will stick out of the pit/crock. To test the operation of the pump, pull up on this lever. Be careful when doing this, as electrical safety is something to consider. In addition to the pedestal-style sump pump, the home inspector may encounter a submersible type, which sits below the water level in the sump pit/crock.
To test this type of sump pump, use a plastic or wooden stick to pull up on the pressure switch or float. Another way to test either kind of sump pump is to run water into the pit/crock with a hose, or pour water from a bucket into the sump pit/crock, to activate the float. The sump pump motor should run quietly and should discharge water; you may even see a check valve on the discharge line. The pit/crock should be covered and kept free of silt buildup and debris.
For air at low speeds, the ratio of the specific heat capacities is a numerical constant equal to 1.4. If the specific heat capacity is a constant value, the gas is said to be calorically perfect, and if the specific heat capacity changes with temperature, the gas is said to be calorically imperfect. At subsonic and low supersonic Mach numbers, air is calorically perfect. But under low hypersonic conditions, air is calorically imperfect. The specific heat capacity changes with the temperature of the flow because of excitation of the vibrational modes of the diatomic nitrogen and oxygen of the atmosphere. As the temperature increases, the vibration increases and more energy is associated with the vibration.

The equations shown here were developed using the kinetic theory of gases, including a simple harmonic vibrator model for the diatomic gases. The details of the analysis were given by Eggers in NACA Report 959. A synopsis of the report is included in NACA Report 1135.

The equation for the specific heat capacity at constant volume is:

cv / (cv)perf = 1 + (gamp - 1) * [(theta/T)^2 * e^(theta/T) / (e^(theta/T) - 1)^2]

where cv is the specific heat capacity at constant volume, (cv)perf is the specific heat capacity for a calorically perfect gas, gamp is the ratio of heat capacities for a perfect gas, theta is a thermal constant equal to 5500 degrees Rankine, and T is the static temperature.

Similarly, for the specific heat capacity at constant pressure:

cp / (cp)perf = 1 + ((gamp - 1) / gamp) * [(theta/T)^2 * e^(theta/T) / (e^(theta/T) - 1)^2]

where cp is the specific heat capacity at constant pressure, and (cp)perf is the specific heat capacity for a calorically perfect gas.

The ratio of specific heats is designated by gam, which is given by:

gam = 1 + (gamp - 1) / (1 + (gamp - 1) * [(theta/T)^2 * e^(theta/T) / (e^(theta/T) - 1)^2])
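These relations are straightforward to evaluate numerically. The sketch below assumes the standard simple-harmonic-vibrator form for the vibrational term, with theta = 5500 degrees Rankine and gamp = 1.4 as stated in the text; treat the exact expressions as a reconstruction consistent with that description rather than a quotation of the original figure.

```python
import math

GAMP = 1.4      # ratio of specific heats for calorically perfect air
THETA = 5500.0  # thermal constant, degrees Rankine

def vib(T: float) -> float:
    """Harmonic-oscillator vibrational term:
    (theta/T)^2 * e^(theta/T) / (e^(theta/T) - 1)^2."""
    x = THETA / T
    ex = math.exp(x)
    return x * x * ex / (ex - 1.0) ** 2

def cv_ratio(T: float) -> float:
    """cv / (cv)perf for calorically imperfect air."""
    return 1.0 + (GAMP - 1.0) * vib(T)

def cp_ratio(T: float) -> float:
    """cp / (cp)perf for calorically imperfect air."""
    return 1.0 + (GAMP - 1.0) / GAMP * vib(T)

def gam(T: float) -> float:
    """Effective ratio of specific heats as a function of static temperature."""
    return 1.0 + (GAMP - 1.0) / (1.0 + (GAMP - 1.0) * vib(T))

# At ordinary temperatures vibration is frozen and gam stays near 1.4;
# it drops as the vibrational modes of N2 and O2 become excited.
```

As a consistency check, the three expressions satisfy cp = cv + R at every temperature (i.e., gamp * cp/cpperf = cv/cvperf + gamp - 1), so the gas remains thermally perfect even while it is calorically imperfect.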
There is nothing mysterious or even particularly unusual about the things that scientists do. There are many ways to work on scientific problems. They all require common sense. Beyond that, they all display certain features that are especially — but not uniquely — characteristic of science.

Sometimes, as one listens to scientists vigorously defending their views, their confidence seems absolute. But deep in their hearts, they know that their views are based on probabilities and that a new piece of evidence may turn up at any time and force a major shift in their views.

Although scientific methods are as varied as science itself, there is a pattern to the way that scientists go about their work. Scientific advances begin with observations. But science is more than a catalog of facts. The goal of science is to find an explanation for why the facts are as they are. Such an explanation is a hypothesis.

If a generalization is valid, then certain specific consequences can be deduced from it.

The hypothesis that the experimental treatment had no effect is called the null hypothesis. Most workers feel that if the probability (designated p) of the observed difference is less than 1 in 20 (p < 0.05), then the null hypothesis is disproved and the observed difference is significant.

But significance is not proof. In fact, hypotheses can never be proven to be absolutely "true" in the sense that a theorem in geometry can. The most we can say is that there is a high probability that the hypothesis provides a valid explanation of the phenomenon being studied. Hypotheses that are supported by many observations come to be called theories. So, in contrast to some areas of human thought, science can never prove that a theory is "true".
But it can show that a theory is false.

Lest the tentative nature of science cause you to lose confidence in it, think of what science has produced. The many achievements of scientific methods, despite the absence of absolute certainty, have been well expressed in the sonnet "Paradox" by the late mathematician Clarence Wylie, Jr.:

Not truth, nor certainty. These I foreswore
In my novitiate, as young men called
To holy orders must abjure the world.
"If ..., then ...," this only I assert;
And my successes are but pretty chains
Linking twin doubts, for it is vain to ask
If what I postulate be justified,
Or what I prove possess the stamp of fact.
Yet bridges stand, and men no longer crawl
In two dimensions. And such triumphs stem
In no small measure from the power this game,
Played with the thrice-attenuated shades
Of things, has over their originals.
How frail the wand, but how profound the spell!

When published results cannot be reproduced, it is acutely embarrassing for the original investigators, but it represents one of the great strengths of science: its built-in system for self-correction. In the vast majority of cases, irreproducible results in science are caused by honest errors. On rare occasions, however, laboratory reports cannot be confirmed because they are fraudulent. This is distressing to all concerned. If such a fraud becomes widely known, it is also likely to cause a great deal of excitement among the general public. I believe, however, that rather than casting a cloud over the scientific enterprise, these rare aberrations reveal its great strength. There is probably no other area of human activity where error is detected and corrected more rapidly. I am confident that you can think of a number of other fields of human study and activity where errors have been made that went uncorrected for years and caused widespread harm. Dishonest scientists usually harm only themselves.
They are disgraced, and their careers are often at an end. But the progress of science usually moves forward as fast as (sometimes faster than) before.

Only rarely does a scientific discovery spring full-blown on the scene. When it does, it is likely to create a revolution in the way scientists perceive the world around them and to open up new areas of scientific investigation. Darwin's theory of evolution and Mendel's rules of inheritance are examples of such revolutionary developments. Most science, however, consists of adding another brick to an edifice that has been slowly and painstakingly constructed by prior work. In fact, it is possible to construct a genealogical tree that traces the historical development of any scientific discovery (even, to a degree, Darwin's and Mendel's). The way in which science builds on the work of others is another illustration of what a communal activity science is.

The development of a new technique often lays the foundation for rapid advances along many different scientific avenues. Just consider the advances in biology that the light microscope and, later, the electron microscope have made possible. Throughout these pages, there are many examples of experimental procedures. Each was developed to solve a particular problem. However, each was then taken up by workers in other laboratories and applied to their problems.

In a similar way, the creation of a new explanation (hypothesis) in a scientific field often stimulates workers in related fields to reexamine their own field in the light of the new ideas. Darwin's theory of evolution, for example, has had an enormous impact on virtually every subspecialty in biology (and in other fields as well). To this very day, biologists in specialties as different as biochemistry and animal behavior are guided in their work by evolutionary theory.

The distinction between basic and applied science is more one of goals than of methods.
The same rules and standards apply to each. However, the motivation behind the work is somewhat different. Researchers in applied science have before them a practical problem to be solved. Much of the research that goes on in medicine and in agriculture is applied. The researcher in basic science, on the other hand, is primarily driven by curiosity - the desire to find out more about how nature works. Both types of research are not only honorable and demanding professions; they are mutually dependent as well.
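The p < 0.05 criterion mentioned earlier can be illustrated with a small permutation test. This is a generic example of mine, not drawn from the page: shuffle the group labels many times and count how often chance alone produces a difference in means at least as large as the observed one.

```python
import random

def permutation_p_value(treated, control, n_perm=10_000, seed=42):
    """Fraction of random label shufflings whose absolute difference in
    group means is at least as extreme as the observed difference.
    This fraction estimates p under the null hypothesis of no effect."""
    rng = random.Random(seed)
    observed = abs(sum(treated) / len(treated) - sum(control) / len(control))
    pooled = list(treated) + list(control)
    n = len(treated)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        a, b = pooled[:n], pooled[n:]
        if abs(sum(a) / len(a) - sum(b) / len(b)) >= observed:
            hits += 1
    return hits / n_perm

# Two clearly separated (made-up) samples: almost no shuffling matches
# the observed gap, so p falls well below 0.05 and the null hypothesis
# of "no treatment effect" is rejected.
p = permutation_p_value([12.1, 11.8, 12.5, 12.0, 12.3],
                        [10.2, 10.5, 9.9, 10.4, 10.1])
```

The test captures the logic described in the text: significance is a statement about how improbable the data would be if the treatment had no effect, not a proof that the hypothesis is true.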
Browser anonymization functions compared

Firefox has enjoyed years of growing popularity. The browser by the Mozilla project is one of the most used browsers. In our lab, I used version 21.

Firefox clears the browsing history, cookies, cache, active logins, all search terms entered, and form data, as well as the off-site data (files stored by a website) and site settings (exceptions saved by the user) with one click. Users cannot delete or view web storage – at least not manually. However, instructing Firefox to eliminate all data on closing also includes the web storage contents. Speaking of which, web storage usually is available to web applications, and Firefox users do not even know which sites have stored data.

By default, Firefox accepts cookies from all sites, but on request, only from the original page, or not at all. Alternatively, the browser can ask, before accepting a cookie, what to do with it; users can also define rules for individual sites. In the well-hidden cookie management, users can then view and delete cookies, either individually or in bunches for a domain. The dialog also manages geolocation queries, access to offline storage (but not web storage), and switching to full-screen mode. Access to the geolocation API is only possible through this dialog; a general setup is not available.

While you type a keyword into the search box on the right, Firefox contacts the selected search engine and looks for suggestions. The browser natively includes various search engines depending on your distro, but you can add more engines with a mouse click. The settings for the engine, however, only apply to the search field. If you type a term in the address bar, Firefox always uses the distro's default search engine. In the preferences, users can prohibit the integration of web fonts.
In the Privacy tab, the configuration dialog lets you specify if and when the browser sends do-not-track messages, when and how it generates a history, whether it remembers forms you completed and search keys you typed, and whether it saves cookies (Figure 5). Firefox users also can choose to surf in private browsing mode at all times. In this mode, the browser does not store a history, does not save form entries, and discards the associated cookies and web storage data when you close the private window.

On request, Firefox will synchronize your settings, bookmarks, and private data with other browser instances via the Mozilla cloud. Unlike Chrome, you need to switch this feature on explicitly, and you can set up your own servers as data stores. Mozilla turns out to be a data collector elsewhere: If the user does not expressly disable this, Firefox wires crash and status reports to the manufacturer that include statistics on the total running time and the number of installed add-ons. Firefox can also send telemetry data to Mozilla, such as information about the processor, memory, and IP address. Fortunately, this feature is disabled by default. The browser also points out its acquisitiveness when first launched and at the same time offers to remedy the settings.

Firefox warns against access to sites with phishing or malware and orients its warnings on the Mozilla blacklist. Unlike Epiphany, it is not possible to add to the list or add your own list. Anonymizing add-ons are available a dime a dozen – no other browser has so many extensions.

Spoofing the browser identification is another matter: other browsers need add-ons to do this, if they can do it at all, but Konqueror has this ability natively and can report a false name either to all sites or just to the currently accessed site. In the settings, users can even switch off the ID completely or define their own (Figure 6). The Tools menu also lets users set up a proxy and disable the browser cache. The individual functions can only be switched on or off here.
To fine-tune, you need to switch to the program settings. One exception is the cache, for which three options are available in Tools | HTML Settings | Cache Policy: Keep Cache in Sync, Use Cache if Possible, and Offline Browsing Mode.

Konqueror optionally displays Bookmarks, History, and other items in a sidebar on the left side of the screen. Right-clicking lets users remove individual items or the entire history. In the browser settings, you can determine how many addresses to store in the history and the number of days to keep entries. For each URL, the KDE program remembers by default how often it is visited and the first and last dates it was visited. Users can prevent this behavior in the settings.
Osteosarcoma is an oft-seen bone tumor in the canine realm. Dogs of all breeds can potentially experience bone cancer, though bigger pooches are often more susceptible to it. Immediate veterinary care is key for all dogs with bone tumors, even though the symptoms of this type of cancer are frequently mild.

Varieties of Osteosarcoma

As far as histologic forms of osteosarcoma go, three varieties are often seen, according to veterinarian Jaime Modiano of the AKC Canine Health Foundation: fibroblastic, osteoblastic and chondroblastic. Fibroblastic osteosarcoma involves mostly fibroblast tumor cells that are capable of manufacturing both tumor osteoid and collagen. With osteoblastic osteosarcoma, tumor cells manufacture osteoid in abundance. With chondroblastic osteosarcoma, however, the tumor cells make both cartilage and osteoid.

Osteosarcoma and Chondrosarcoma Difference

When dogs have chondroblastic osteosarcoma, their tumor cells make cartilage and osteoid. When their tumor cells don't make osteoid and exclusively make cartilage, they have a condition that's referred to as chondrosarcoma. Chondrosarcoma is another type of canine bone cancer that expands rapidly. As with osteosarcoma, chondrosarcoma is especially prevalent in bigger dogs. Elderly dogs are also particularly vulnerable to chondrosarcoma.

If you notice that your pet is showing signs of problems and discomfort walking, pay close attention to him. Lameness is a typical indication of canine osteosarcoma. Some dogs develop trouble walking gradually, while in others the difficulties are much more abrupt and seemingly "out of nowhere." Weight loss and exhaustion also frequently denote bone cancer in affected canines. Some dogs even get conspicuous swellings on their legs, although many others do not. Aching of the joints and bones sometimes occurs, too.
Prompt Veterinary Management

Since osteosarcoma can be fatal to dogs, it's important never to brush off any potential symptoms of the disease. Osteosarcoma is a fierce condition and can swiftly expand throughout your pet's body. If your dog has chondroblastic osteosarcoma, your vet can analyze his situation and determine which mode of management is most appropriate for his needs. These management options include radiation therapy, chemotherapy and surgery. The goal of canine osteosarcoma management is both to slow down metastasis and to get rid of the tumor. Chemotherapy and surgery often greatly increase dogs' chances of survival.
Int. J. Environ. Res. Public Health 2010, 7(6), 2607-2619; doi:10.3390/ijerph7062607

Abstract: The short-term effects of high temperatures are a serious concern in the context of climate change. In areas that today have mild climates, research activity has been rather limited, despite the fact that differences in temperature susceptibility will play a fundamental role in understanding the exposure, acclimatization, adaptation and health risks of a changing climate. In addition, many studies employ biometeorological indexes without careful investigation of the regional heterogeneity in the impact of relative humidity. We aimed to investigate the effects of summer temperature and relative humidity, and regional differences, in three regions of Sweden, allowing for heterogeneity of the effect over the scale of summer temperature. To do so, we collected mortality data for ages 65+ for Stockholm, Göteborg and Skåne from the Swedish National Board of Health and Welfare, and weather data from the Swedish Meteorological and Hydrological Institute, for the years 1998 through 2005. In Stockholm and Skåne, on average 22 deaths per day occurred, while in Göteborg the mean frequency of daily deaths was 10. We fitted time-series regression models to estimate relative risks of high ambient temperatures on daily mortality, using smooth functions to control for confounders, and estimated non-linear effects of exposure while allowing for auto-regressive correlation of observations within summers. The effect of temperature on mortality was found to be distributed over the same and following day, with a statistically significant cumulative combined relative risk of about 5.1% (CI = 0.3, 10.1) per °C above the 90th percentile of summer temperature. The effect of high relative humidity was statistically significant in only one of the regions, as was the joint effect of relative humidity (above the 80th percentile) and temperature (above the 90th percentile).
In the southernmost region studied there appeared to be a significant increase in mortality with decreasing low summer temperatures that was not apparent in the two more northerly situated regions. The effects of warm temperatures on the elderly population in Sweden are rather strong and consistent across different regions after adjustment for mortality displacement. The impact of relative humidity appears to differ between regions, and relative humidity may be a more important predictor of mortality in some areas.

There is a growing literature on the impacts of exposure to heat on morbidity and mortality [1,2]. The physiological effects of heat on the thermoregulatory system are well documented: heat can, for example, cause dehydration, cardiovascular illness, endocrine diseases and kidney dysfunction, while respiratory effects are less well understood but are still strongly associated with high temperatures. However, the effects of weather on mortality in more northern regions are sparsely studied, and therefore less is known about potential impacts of climate change there, as well as about contrasts with the more frequently studied regions, such as central and southern Europe and the US [1,2]. Several studies have been published on differences in susceptibility to heat and cold in warm and temperate climates [3–9]. However, so far very few studies have assessed heat susceptibility near the borders of the arctic area [10,11], although susceptibility to warm temperatures has been shown to increase over time in northern Europe. General estimates of mortality rates and temperature have shown a region-specific minimum mortality point that differs with climate [6,8,13]. The minimum mortality point has also been shown to change over time [7,14]. Some of this change is explained by socio-economic development, but it seems also to be influenced by influenza and seasonality [6,11]. So far, one study has proposed that the effect of heat depends on early-childhood programming by climatologic conditions.
A few studies discuss the homogeneity in how populations at widespread latitudes react to low temperatures and the heterogeneity in how they react to heat [9,13], and found that socio-economic conditions were associated with risks at both high and low temperatures [6,9]. It has been suggested that acclimatization to low temperatures takes place, but not to high temperatures. Another study found heat thresholds to depend on climate. The distribution of temperature effects over the days or weeks after exposure is a main issue, often accounted for by distributed lag models. In general, low temperatures act on longer lag times and high temperatures on shorter lag times, as do the two major outcomes, cardiovascular and respiratory deaths, respectively [4,5]. It is even possible that a more resistant population has a longer delay between exposure and effect for warm temperatures. Longer lag times will also reveal a deficit in mortality rates if there is mortality displacement associated with the exposure effect [17,18]. Cumulative effects, spanning a number of lag strata, will therefore estimate the effect more accurately by taking such tendencies into account where they occur. Many studies of mortality associated with heat waves reveal greater consequences under very extreme conditions, when there is no relief over a longer period [11,19,20]. Studies of the impacts of ambient temperature often use either simple daily maximum, minimum or mean temperatures, or a temperature index that also indirectly incorporates levels of relative humidity or dew-point temperature [2,4,5,13]. This can partly be motivated by the thermo-physiological impact of reduced sweating capability at high relative humidity. However, so far the introduction of such weather indexes has been shown to make a very small contribution, if any, to model predictive performance [2,20].
Few studies have described the direct impacts of relative humidity on mortality rates, or the effect modification between high temperatures and high relative humidity. We aim to study the similarities and differences in how the Swedish elderly population (aged 65 and above) responds to summer ambient temperatures, in terms of population-level all-cause mortality, in three regions of Sweden ranging from the southern border of Scandinavia to the Stockholm region. Further, we aim to study the lag structure and the general effect of temperature, and to assess the effect of high temperatures as well as the effect of low temperatures within summers. We also aim to study the effect of high and low ambient relative humidity, and the effect modification between high relative humidity and high temperatures.

Mortality data from the Cause of Death Register at the Swedish Board of Health and Welfare were collected for the period 1998–2005 as region-specific daily mortality from natural causes (excluding external causes) in people aged 65 and above. The regions studied were Skåne (the whole county), the Göteborg region (Göteborg and Mölndal municipalities) and Greater Stockholm. These regions represent large parts of Sweden’s densely populated areas, from the south (Skåne) to the north (Greater Stockholm), corresponding to latitudes from 54° to 59° (polar circle: 66.6°). Temperature and relative humidity for the same period were collected from the Swedish Meteorological and Hydrological Institute. In Skåne these observations were measured at the meteorological station at Jägersro, in Göteborg at Säve airport and in Stockholm at Bromma City airport. All weather data were delivered as daily measurements. In the models we considered the daily mean values, since these have been found to best predict mortality in previous time-series studies. We avoided unequal spacing of the observations by imputing missing values in the weather data as the mean of the four surrounding observations.
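The imputation rule described above (a missing value replaced by the mean of the four surrounding observations) can be sketched in a few lines. This is an illustrative reading, not the authors' code, and it assumes "surrounding" means the two valid values on each side of the gap:

```python
def impute_surrounding(series, window=2):
    """Fill None gaps with the mean of up to `window` valid values on
    each side, i.e. the 'four surrounding observations' for window=2."""
    filled = list(series)
    for i, v in enumerate(series):
        if v is None:
            # take the neighbourhood around the gap, dropping missing values
            neighbours = [x for x in series[max(0, i - window):i + window + 1]
                          if x is not None]
            filled[i] = sum(neighbours) / len(neighbours)
    return filled

temps = [15.0, 16.0, None, 18.0, 19.0]
print(impute_surrounding(temps))  # gap filled with mean(15, 16, 18, 19) = 17.0
```

This keeps the daily series equally spaced, which the smooth-function models assume.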
In the meteorological data for Göteborg the whole first summer (1998) was missing, and it was therefore excluded from the analysis. Weather and mortality statistics and the percentage of missing observations are presented in Table 1 and Table 2.

2.1. Statistical Analysis

In the first step of the analysis we established models including parametric smooth functions revealing the temperature–mortality relations for the three regions over the whole scale of temperatures (all-year). Counts of daily deaths at ages 65+ were assumed to follow an over-dispersed Poisson distribution in a generalized linear model, fitted in the R statistical software [21]. The lag strata of daily temperature were moving averages of lag 0–1, lag 2–6 and lag 7–13, while for relative humidity lag 0 performed well as a predictor according to the UBRE criterion. The UBRE criterion, implemented in the mgcv package of R, can be described as a generalized Akaike Information Criterion. The smooth functions (cubic splines) of temperature and relative humidity were controlled for factors representing weekdays and national holidays, and for smooth functions of trend (between-year changes) and seasonality (within-year changes; unit: days). The smooth function of seasonality was fitted using periodic boundary conditions and, like all the parametric smooth functions, with a fixed degree of freedom (df) [16,22]. The spline function of mean temperature was allowed 5 df in each lag stratum. Lag 0 of mean relative humidity was allowed 4 df, within-year patterns (season) were allowed 5 df and between-year patterns (trend) were allowed 4 df. In the next step of the analysis we focused on the summer period, June–August, and on patterns of mortality associated with daily mean temperature and daily mean relative humidity, using the same lag strata and the same control for confounders as in the initial approach.
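The lag strata above are plain moving averages of the daily exposure series. A minimal sketch of how such strata could be computed (the function name and the None convention for incomplete windows are our own, not the paper's):

```python
def lag_stratum(series, t, lo, hi):
    """Moving average of `series` over lags lo..hi (inclusive) relative to
    day t, e.g. lo=0, hi=1 for lag 0-1 and lo=2, hi=6 for lag 2-6.
    Returns None when the window reaches before the start of the record."""
    if t - hi < 0:
        return None
    window = series[t - hi : t - lo + 1]
    return sum(window) / len(window)

temps = list(range(20))  # toy series: day t has temperature t
t = 15
print(lag_stratum(temps, t, 0, 1))   # mean of days 14-15 -> 14.5
print(lag_stratum(temps, t, 2, 6))   # mean of days 9-13  -> 11.0
print(lag_stratum(temps, t, 7, 13))  # mean of days 2-8   -> 5.0
```

Each stratum then enters the regression as its own covariate, so short- and long-lag effects get separate coefficients.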
Studying only the summer period, we fitted generalized estimating equations in STATA, treating daily mortality between summers as independent observations and observations within summers as dependent on each other according to a covariance structure. The covariance structure we chose was a first-order autocorrelation structure, as has been used previously in such models [4,5,20]. In these models, the daily counts of mortality were assumed to follow a Poisson distribution. The temperature lag-stratum variables were introduced into the model as linear spline functions over the 0th–50th, 50th–90th and 90th–100th percentile segments of the first lag-stratum variable (the mean of temperature at lag 0 and 1). The placement of the spline knots (at the 50th and 90th percentiles) was chosen to relax the often assumed linear relationship between mortality and high temperatures. Because the knots of the spline functions of the temperature lag variables were set at the same values, the calculation of cumulative effects is more meaningful. The most intuitive and appealing interpretation is the net effect on a particular day induced by a 1-degree increase of mean temperature above a knot (threshold) over the past two weeks. Daily relative humidity was centered and modeled as a piecewise linear spline function with a breakpoint at the mean value. Additionally, effect modification between high levels of relative humidity and high temperatures was modeled by a dummy variable that was non-zero when relative humidity was above the 80th percentile and the first lag-stratum variable of temperature (lag 0–1) was above the 90th percentile. The estimates for relative humidity and temperature did not appear sensitive to collinearity. Moreover, we controlled for several variables: trend, as a parametric cubic spline function with 2 df per the eight years of the study; a factor for month; a factor for weekday; and an indicator for national holidays.
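The linear-spline parameterization above can be illustrated with a small basis-construction helper (hypothetical, not the authors' code). Placing knots at the 50th and 90th percentiles yields one slope coefficient per segment, and because the knots are shared across lag strata, the above-threshold slopes can later be summed into a cumulative effect:

```python
def linear_spline_basis(x, knots):
    """Linear spline basis for a scalar x with the given knots:
    [x, (x - k1)+, (x - k2)+, ...], where (u)+ = max(0, u).
    The coefficient on (x - k)+ is the *extra* slope above knot k."""
    return [x] + [max(0.0, x - k) for k in knots]

# Knots at the 50th and 90th percentiles of lag 0-1 temperature; the
# Stockholm values from the paper's table are 16.4 and 20.7 degrees C.
hot_day = linear_spline_basis(22.0, [16.4, 20.7])
mild_day = linear_spline_basis(15.0, [16.4, 20.7])
print(hot_day)   # [22.0, excess over 16.4, excess over 20.7]
print(mild_day)  # below both knots, so both excess terms are 0
```

With this basis, the regression remains linear in its parameters while the fitted temperature–mortality curve can bend at each knot.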
The standard deviations and confidence limits that resulted from the models are robust to misspecification of the covariance structure within summers (the Huber/White/sandwich variance estimator was used instead of the generalized least-squares estimator). Models were tested according to the Wald goodness-of-fit test based on the chi-square statistic, and residual diagnostics were run to further evaluate model fit. There were no obvious indications of model misspecification. In addition, we assessed meta-estimates and the homogeneity of effects using the Metareg package in STATA. All estimates for temperature are presented as relative risks (RR) per one-unit increase in temperature, with 95% confidence limits (CI).

In Table 1 we present weather statistics for daily temperature and relative humidity for the whole study period, while Table 2 describes the range of daily temperatures and relative humidity in summer, together with the daily mean and range of mortality (natural causes of death). The all-year mean temperature during the study period differs somewhat between regions, but the maximum daily temperatures are similar, as are the summer mean temperatures. The standard deviation (std dev) of the daily mean summer temperature increases with increasing latitude, as does the range of maximum and minimum temperatures. The summer mean relative humidity decreases a little with latitude. The range of relative humidity in summer is similar in Stockholm and Göteborg, but much smaller in the most southerly situated region, Skåne. Generally, the variation of summer relative humidity seems to take on slightly greater values in the north and to decrease the further south the region is situated. In Figure 1 the smooth functions of the effect of temperature on mortality (lag 0–1) over the whole span of temperature are presented as risks relative to the minimum point of the curve for the three regions.
The minimum mortality point of the curves is higher in the most southerly region and decreases with higher latitudes. As can be seen from the figure, natural mortality at ages 65+ tends to increase more steeply for warm temperatures than for cold temperatures in lag stratum 0–1. Note, however, that these curves are controlled for influences that act on longer lag times (the lag strata 2–6 and 7–13). The sharpest increase in relative risks is seen for the most southerly situated region, while the others reach similar levels but over a larger range of temperature. In Figure 2 we have plotted the distributed lagged relative risks of a unit increase in temperature at lag 0–1, lag 2–6 and lag 7–13 in the study regions, together with the combined estimate for all regions. Statistically significant relative risks can be seen in the models for temperatures above the 90th percentile for Stockholm and Skåne (confidence interval not including 1). The relative risks at lag 0–1 are all of about the same size. At longer lag times the relative risks indicate increased mortality in Skåne and Göteborg, while lag 7–13 in Stockholm shows a statistically significant reduction in mortality rates in the second week following exposure. Below the 50th percentile of summer temperatures in Skåne there is a significant increase in mortality associated with decreasing temperatures in lag stratum 7–13, of RR = 0.982 (95% CI = 0.969, 0.994) per degree increase. This effect is also indicated in the Göteborg region, with about the same size of effect. The piecewise linear functions of relative humidity reveal no apparent increase in mortality associated with high levels of relative humidity, with the exception of the Stockholm region, where this effect is statistically significant. In Figure 3 the effects of high and low levels of relative humidity at lag 0, with high and low defined relative to the mean level in each region, are shown with confidence limits for a 20-unit increase.
The effect below the threshold indicates increasing mortality with decreasing levels of relative humidity. The combined relative risks (i.e., for all three regions studied) for temperature are presented in Figure 2. The combined relative risk associated with a 1 °C increase in lag 0–1 temperature is 1.057 (CI = 1.047, 1.068). The estimated effect modification of temperatures above the 90th percentile and humidity above the 80th percentile indicates increasing mortality rates in all regions after controlling for the main effects (TMP + RH + RH*TMP), but the only statistically significant effect modification was seen for the Stockholm region, corresponding to RR = 1.338 (95% CI = 1.141, 1.569), as can be seen in Figure 3. In Figure 3 the relative risk for relative humidity is also graphed, corresponding to a 20-unit increase in relative humidity, for low (below summer mean) and high (above summer mean) levels of relative humidity. The cumulative relative risk of temperature over a two-week period is shown in Figure 4. The RR corresponds to the effect on a single day per one-degree increase in temperature over two weeks above the threshold of the 90th percentile. The combined effect for a one-unit increase in temperature above the 90th percentile corresponds to a relative risk of 1.051 (CI = 1.003, 1.101).

The relative risk of same- or previous-day temperature above the 90th percentile on mortality at ages 65+ is rather similar in all three regions studied, even though the climate in the regions is somewhat different. We believe the populations in the three regions are quite homogeneous in terms of health and standard of living, since there are only small differences between regions in Sweden. However, the effect of high relative humidity and the effect modification of high relative humidity and high temperature were largest in the most densely populated area, Stockholm.
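The arithmetic linking per-stratum log-relative-risks to the cumulative RR can be sketched as follows. The coefficients here are illustrative values chosen to reproduce a combined excess of about 5.1% per °C, not the paper's fitted estimates:

```python
import math

def cumulative_rr(log_rrs):
    """Cumulative RR over lag strata = exp(sum of per-stratum log-RRs).
    Summing before exponentiating is what makes the cumulative measure
    robust to collinearity between the individual lag-stratum estimates."""
    return math.exp(sum(log_rrs))

# Illustrative per-degree log-RRs above the threshold for lag 0-1, 2-6
# and 7-13: a strong immediate effect, a small delayed effect, and a
# small deficit consistent with mortality displacement.
betas = [0.055, 0.010, -0.015]
rr = cumulative_rr(betas)
print(f"cumulative RR = {rr:.3f}, excess mortality = {(rr - 1) * 100:.1f}%")
# prints: cumulative RR = 1.051, excess mortality = 5.1%
```

Note how a negative long-lag coefficient (displacement) partially offsets the short-lag excess in the cumulative figure.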
Moreover, susceptibility to cold temperatures during the summer appears greater in the south. The overall combined relative-risk estimate for the three regions over two weeks shows a significant increase in death rates in the population aged 65+ when temperatures increase above the 90th percentile, while the location-specific cumulative effects do not. This is likely due to the weak signal at lag times longer than lag 0–1. Not surprisingly, the two regions with the most events per day, Greater Stockholm and Skåne, show more stable estimated effects; interpretations of, and contrasts between, those models will therefore be more trustworthy. The lags of the high-temperature effect in Skåne and Göteborg are similar, with indications of an effect at lag 0–6, while the positive effect for the Stockholm region is apparent only at lag 0–1. The latter is consistent with prior studies for Sweden [11,12]. The longer lag times in the two southerly situated regions, together with indications of mortality displacement in the Stockholm region, may indicate that less susceptible people fall victim to higher temperatures in the more southerly regions. A prior study indicated no confounding with air pollution. Possible effects of air pollution, in particular ozone, were therefore not controlled for; moreover, local formation of ozone is small in Sweden, so ozone levels are not strongly dependent on sunlight and temperature there. The support for effects of other pollutants, such as particulate matter, as confounders during summer is weak, given the relatively low summer levels in Sweden. However, the lack of control for influenza in the all-year smooth functions may be a more serious concern for the all-year estimates and the cold impacts on mortality. We focused, though, on shorter lag times, and cold effects are generally seen at longer lag times [4,11,17].
Also, this study focuses on summer mortality, and influenza is, as is well known, generally bound to the winter season in temperate countries. The cumulative relative risks have wide confidence limits, with the exception of the combined cumulative effect. The cumulative measures of effect are unbiased even if there is collinearity between the explanatory lag-stratum variables of temperature, and they also account for mortality displacement. Moreover, since the heat effect acts on short timescales, e.g., the same or next day after exposure, the cumulative effects over two weeks are likely to have large variance, owing to the greater uncertainty and smaller effects in the following days that add up to two weeks. The cumulative effect indicates that the effect of temperature is greater the further south the region is situated. This has also previously been established in a larger European study. These models do not take heat-wave effects into account: first, because the estimates do not necessarily correspond to a two-week prolonged exposure, but rather to individual consecutive and non-consecutive lag-stratum exposures; and secondly, because daily temperatures can be lower than the threshold but still add up to meet the inclusion criteria of the moving-average model. However, prior studies have shown that there might be an additional heat-wave effect [11,19,20]. The less precise effect of weather on mortality in the Göteborg region may be because the predictors and stratum definitions fit less well here, possibly because of climatologic differences due to more direct maritime influence from the Atlantic Ocean. However, to obtain combined effect estimates we needed to use the same predictors, lag strata and threshold in all regions studied.
It is of course possible that other weather predictors would have explained mortality even better in other regions, but the mean temperature, the lag strata and the threshold used have been shown to explain mortality in Stockholm well [11,12], and generally only small differences have been found between different predictors [2,20]. A more detailed approach taking many weather indexes into account might have revealed interesting differences, but our approach still allows the contribution of relative humidity to differ between regions. In fact, our results highlight regional differences in the impacts of relative humidity on mortality rates. Some studies investigate heat-related mortality in several cities or regions with meta-estimates as an endpoint, and utilize a predefined estimation procedure in all regions analyzed, with some but relatively little flexibility, owing to the wish to preserve comparability within the study. However, lag strata and biometeorological indexes of temperature and relative humidity may to varying extents explain mortality rates at different locations, with their different climates, urban structures, demography, population characteristics, health conditions and behavior. Estimating multiple exposure–response relationships while assuming similar relationships (including lag structures) at different locations, e.g., for relative humidity and temperature and their interaction, gives location-insensitive estimates. The appropriateness of biometeorological indexes would therefore vary between the regions studied, depending on the impact of, and effect modification with, relative humidity. Using relative humidity as a predictor adds valuable information on how it affects mortality rates in the regions. However, temperature and relative humidity are correlated, and the simultaneous introduction of these two predictors may induce bias through collinearity.
In this study this did not appear to be a problem, with the estimates appearing robust to changes in the models. The overall results indicate similar heat susceptibility in different parts of Sweden. Interestingly, cold-related mortality in summer was observed in the most southerly of the regions, but not in the others. It is well known that southern Europe experiences more cold-related mortality than northern Europe does. This might be due to mal-adaptation, in part influenced by behavioral factors that have been shown to be important in reducing mortality in the winter season. The burden of temperature-related mortality will likely increase with the ageing population in Sweden; projected numbers from Statistics Sweden are a general 38% increase in ages 65–79 and a 57% increase in ages 80+ for Sweden by 2030. This would have a large impact on the annual attributed burden of heat according to the impact estimates from this study. However, to understand how temperature-related mortality will change with a changing climate, we need to synthesize knowledge from a wide variety of locations which, when contrasted, can be used to identify the contribution of climate change both as a relative measure and as an absolute measure (depending not on the change but on the actual level). More research on both cold-related and heat-related mortality, as well as on climatologic determinants of death and disease patterns such as seasonality, is needed to prevent increases in mortality in the future as a consequence of climate change.

This study was supported by grants from the Swedish Environmental Protection Agency as part of the Climate Change Adaptation Research Program Climatools.

References

1. Basu, R; Samet, JM. Relation between elevated ambient temperature and mortality: a review of the epidemiologic evidence. Epidemiol. Rev. 2002, 24, 190–202.
2. Kovats, RS; Hajat, S. Heat stress and public health: a critical review. Annu. Rev. Public Health 2008, 29, 41–55.
3. The Eurowinter Group. Cold exposure and winter mortality from ischaemic heart disease, cerebrovascular disease, respiratory disease, and all causes in warm and cold regions of Europe. Lancet 1997, 349, 1341–1316.
4. Analitis, A; Katsouyanni, K; Biggeri, A; Baccini, M; Forsberg, B; Bisanti, L; Kirchmayer, U; Ballester, F; Cadum, E; Goodman, PG. Effects of cold weather on mortality: results from 15 European cities within the PHEWE project. Am. J. Epidemiol. 2008, 168, 1397–1408.
5. Baccini, M; Biggeri, A; Accetta, G; Kosatsky, T; Katsouyanni, K; Analitis, A; Anderson, HR; Bisanti, L; D'Ippoliti, D; Danova, J. Heat effects on mortality in 15 European cities. Epidemiology 2008, 19, 711–719.
6. Curriero, FC; Heiner, KS; Samet, JM; Zeger, SL; Strug, L; Patz, JA. Temperature and mortality in 11 cities of the eastern United States. Am. J. Epidemiol. 2002, 155, 80–87.
7. Davis, RE; Knappenberger, PC; Novicoff, WM; Michaels, PJ. Decadal changes in summer mortality in U.S. cities. Int. J. Biometeorol. 2003, 47, 166–175.
8. Keatinge, WR; Donaldson, GC; Cordioli, E; Martinelli, M; Kunst, AE; Mackenbach, JP; Nayha, S; Vuori, I. Heat related mortality in warm and cold regions of Europe: observational study. BMJ 2000, 321, 670–673.
9. Medina-Ramon, M; Schwartz, J. Temperature, temperature extremes, and mortality: a study of acclimatization and effect modification in 50 United States cities. Occup. Environ. Med. 2007, 64, 827–833.
10. Nayha, S. Heat mortality in Finland in the 2000s. Int. J. Circumpolar Health 2007, 66, 418–424.
11. Rocklov, J; Forsberg, B. The effect of temperature on mortality in Stockholm 1998–2003: a study of lag structures and heatwave effects. Scand. J. Public Health 2008, 36, 516–523.
12. Rocklov, J; Forsberg, B; Meister, K. Winter mortality modifies the heat-mortality association the following summer. Eur. Respir. J. 2009, 33, 245–251.
13. McMichael, AJ; Wilkinson, P; Kovats, RS; Pattenden, S; Hajat, S; Armstrong, B; Vajanapoom, N; Niciu, EM; Mahomed, H; Kingkeow, C. International study of temperature, heat and urban mortality: the ‘ISOTHURM’ project. Int. J. Epidemiol. 2008, 37, 1121–1131.
14. Carson, C; Hajat, S; Armstrong, B; Wilkinson, P. Declining vulnerability to temperature-related mortality in London over the 20th century. Am. J. Epidemiol. 2006, 164, 77–84.
15. Vigotti, MA; Muggeo, VM; Cusimano, R. The effect of birthplace on heat tolerance and mortality in Milan, Italy, 1980–1989. Int. J. Biometeorol. 2006, 50, 335–341.
16. Armstrong, B. Models for the relationship between ambient temperature and daily mortality. Epidemiology 2006, 17, 624–631.
17. Braga, AL; Zanobetti, A; Schwartz, J. The time course of weather-related deaths. Epidemiology 2001, 12, 662–667.
18. Dominici, F; Zeger, SL; Samet, JM. Response to Dr. Smith: timescale-dependent mortality effects of air pollution. Am. J. Epidemiol. 2003, 157, 1071–1073.
19. Anderson, BG; Bell, ML. Weather-related mortality: how heat, cold, and heat waves affect mortality in the United States. Epidemiology 2009, 20, 205–213.
20. Hajat, S; Armstrong, B; Baccini, M; Biggeri, A; Bisanti, L; Russo, A; Paldy, A; Menne, B; Kosatsky, T. Impact of high temperatures on mortality: is there an added heat wave effect? Epidemiology 2006, 17, 632–638.
21. R Development Core Team. R: A Language and Environment for Statistical Computing; R Foundation for Statistical Computing: Vienna, Austria, 2007.
22. Peng, RD; Dominici, F; Louis, TA. Model choice in time series studies of air pollution and mortality. J. R. Statist. Soc. A 2006, 169, 179–204.
23. Stata Statistical Software: Release 10; StataCorp LP: College Station, TX, USA, 2007.
24. Zeger, SL; Liang, KY; Albert, PS. Models for longitudinal data: a generalized estimating equation approach. Biometrics 1988, 44, 1049–1060.
25. Healy, JD. Excess winter mortality in Europe: a cross country analysis identifying key risk factors. J. Epidemiol. Community Health 2003, 57, 784–789.

Table 1 (weather statistics: mean daily temperature, daily temperature, daily relative humidity; values not preserved in this extraction).

Summer daily mortality (natural causes, ages 65+):

| Series | No. obs | Mean | Std dev | Minimum | Maximum | Missing |
| --- | --- | --- | --- | --- | --- | --- |
| Number of deaths | 736 | 22 | 5.1 | 9 | 38 | |
| Number of deaths | 644 | 10 | 3.4 | 1 | 21 | |
| Number of deaths | 736 | 22 | 5.3 | 7 | 39 | <4% |

| Region | Temperature lag 0–1: 50th percentile | Temperature lag 0–1: 90th percentile | 80th percentile of relative humidity |
| --- | --- | --- | --- |
| Skåne | 16.4 °C | 20.1 °C | 82.8 |
| Göteborg | 16.6 °C | 21.0 °C | 79.1 |
| Stockholm | 16.4 °C | 20.7 °C | 79.1 |

© 2007 by the authors; licensee Molecular Diversity Preservation International, Basel, Switzerland. This article is an open-access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).
Adjectives and adverbs are very similar, except that an adverb can modify everything that is not a noun or a pronoun, while adjectives only modify or explain nouns and pronouns. An adverb can modify or explain a verb, an adjective, or another adverb. Adverbs can modify clauses and phrases, and some modify whole sentences. An adverb usually answers a question such as how? when? or where? Adverbs can describe how much, or in what way, something happened. A list of adverbs quickly reveals that an adverb usually ends in “ly,” as in “quickly,” “audibly,” or “calmly.” Examples of adverbs also include words like “behind,” “beside,” “before” and “between.”

An adverb can be classified by type. A manner adverb describes how something is done. An adverb of degree explains how much. An adverb of time explains when something is done. A manner, time or degree adverb can be found after the verb or at the end of a sentence. A frequency adverb explains how often, and can be found before the main verb. An adverb of comment is always found at the very beginning of a sentence; this kind of adverb expresses an opinion or comment about the situation described in the sentence.

Using an adverb in a certain way, to modify a verb, creates special adverbs known as intensifiers. Intensifiers increase, decrease or otherwise define the level of importance of the verb. Intensifiers can be used to manipulate human emotion. Imagine the sentence, “You are slightly infected with swine flu.” Swine flu is scary, but “slightly” makes it sound a lot better. “He is extremely contagious” has a much stronger negative impact.

Telling adverbs from adjectives is simple. Adjectives describe nouns and pronouns, while an adverb describes almost everything else. An adverb usually ends in “ly”; in fact, most of the time an adverb is just an adjective with an “ly” ending. For example: “slow,” in the phrase “a slow train,” is an adjective describing the train. “Slowly” is an adverb.
In the sentence “The train moved slowly,” the word “slowly” modifies the verb “moved,” and that makes it an adverb.
How's this for a sad, haunting image of what old age is like? The phrase "thine own deep-sunken eyes" is meant to stand for the opposite of everything that is beautiful about the young man, and to paint a scary picture of what he will become. Line 7: In this moment, we can see those eyes sinking back in the old man's head until he almost looks like a skull. The speaker of the poem wants to remind us that death is always waiting for us, so there's no sense in wasting time.
An infant’s hearty appetite may appear to be a healthy sign, but could signal a predisposition to obesity, scientists have warned. New research has found that infants with a heartier appetite grew more rapidly up to age 15 months, potentially putting them at increased risk of obesity. The research linked appetite to more rapid infant growth and to genetic predisposition to obesity. It investigated how weight gain is linked to two key aspects of appetite: lower satiety responsiveness (a reduced urge to eat in response to internal ‘fullness’ signals) and higher food responsiveness (an increased urge to eat in response to the sight or smell of nice food). The authors used data from non-identical, same-sex twins born in the UK in 2007. Twin pairs were selected that differed in measures of satiety responsiveness (SR) and food responsiveness (FR) at 3 months, and their growth up to age 15 months was compared. Within pairs, the infant who was more food responsive, or less satiety responsive, grew faster than their co-twin. The more food responsive twin was 654g heavier than their co-twin at six months and 991g heavier at 15 months. The less satiety responsive twin was 637g heavier than their co-twin at six months and 918g heavier at 15 months. “Obesity is a major issue in child health,” said Professor Jane Wardle, lead author of the study, from the University College London (UCL) Health Behaviour Research Centre. “Identifying factors that promote or protect against weight gain could help identify targets for obesity intervention and prevention in future. These findings are extremely powerful because we were comparing children of the same age and same sex growing up in the same family in order to reveal the role that appetite plays in infant growth.”
“It might make life easy to have a baby with a hearty appetite, but as she grows up, parents may need to be alert for tendencies to be somewhat over-responsive to food cues in the environment, or somewhat unresponsive to fullness. This behaviour could put her at risk of gaining weight faster than is good for her,” Wardle said. The research was published in journal JAMA Pediatrics.
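The discordant twin-pair design described above can be sketched in a few lines of code. Everything here is invented for illustration: the article reports only the mean weight gaps (654g/991g and 637g/918g), not individual data, so the pairs and the function name below are hypothetical.

```python
# Illustrative sketch (not from the study): a discordant twin-pair
# comparison. Each pair differs in food responsiveness (FR); we
# average the weight gap between the higher-FR twin and the co-twin.

# (pair_id, weight of higher-FR twin in g, weight of co-twin in g) at 6 months
pairs_6mo = [
    (1, 8100, 7500),
    (2, 7900, 7200),
    (3, 8400, 7700),
]

def mean_within_pair_difference(pairs):
    """Average within-pair weight gap (grams) across all twin pairs."""
    diffs = [high - low for _, high, low in pairs]
    return sum(diffs) / len(diffs)

print(mean_within_pair_difference(pairs_6mo))  # ~666.7 g with these toy numbers
```

Comparing within pairs is the point of the design: because co-twins share age, sex, and family environment, the residual weight gap can be attributed more plausibly to the appetite difference itself.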
http://indianexpress.com/article/lifestyle/health/controlling-appetite-key-to-prevent-obesity/
RTI-CCP is a grant-funded project to enhance the knowledge, skills, and dispositions of in-service teachers, pre-service teachers, and education faculty around the federal law of Response to Intervention (RTI) and the theoretical and instructional concepts of Culturally-Congruent Pedagogy (CCP). RTI is a process for providing high-quality instruction, assessment, and intervention that allows schools to identify struggling students early, provide appropriate instructional interventions, and increase the likelihood that those students can become successful learners. According to Geneva Gay, author of the book Culturally Responsive Teaching: Theory, Research, and Practice, “the way to improve the achievement of marginalized students of color is to change classroom instruction, not to change students” (2000). For a quick overview of the grant project, see our RTI-CCP PowerPoint Presentation.
http://www.uwec.edu/RTI-CCP/index.htm
One of the interesting things I learned from reading Nicholas Stargardt's The German War (see my review) is that the Nazis created a cult of the nineteenth-century Romantic poet Friedrich Hölderlin (number 2 on my list of the 10 greatest German poets). This evolved throughout the Third Reich but reached a climax in 1943 with celebrations throughout Germany commemorating the 100th anniversary of the poet's death.

How this rather obscure, emotionally hypersensitive German poet, who had only contempt for his fellow German "philistines," became transformed into a spiritual Führer of the German Volk - the "purest of pure," the "most German" of all German poets - is a fascinating story. What interests me here is that the peak of the Hölderlin cult coincided with the realization among most Germans that the war effort was lost. Stargardt writes:

"Listening to Hyperion's Song of Fate (DV note: from Hölderlin's 1797 novel Hyperion oder Der Eremit in Griechenland) provided a glimpse into the abyss, a retreat into reverie, a safe haven into which readers could surrender - momentarily - and marshal their own inner, moral reserves. Hiding the war behind a veil of lyrical abstractions, this literary canon helped 'apolitical Germans' to reinvent themselves, unwilling to be preached to by the Nazi hacks, but at the same time blocking out the possibility that the war would confront them with immediate moral and political choices. Instead, they ransacked their cultural heritage to help bear its burdens."

So, while Schiller and Kleist were the poets of German military success in the early days of the war, Hölderlin was the poet of German military defeat. Still, the Nazis kept positioning the Third Reich as the fulfillment of Hölderlin's vision.
Here is the Nazi ideologue Alfred Rosenberg writing in the Nationalsozialistische Monatshefte:

Es war die Tragik Hölderlins, dass er sich aus der Gemeinschaft der Menschen lösen musste, ohne dass ihm die Gestaltung der kommenden Gemeinschaft beschieden war. Er blieb ein Einsamer, ein Unverstandener seiner Zeit, der aber die Zukunft als Gewissheit in sich trug. Er wollte eine Wiedererlebung, kein neues Griechenland, aber er fand im Griechentum die nordisch-heidnische Lebenshaltung wieder, die in dem Deutschland seiner Zeit verkümmert war, aus der jedoch allein die kommende Gemeinschaft wachsen kann... Und der Kampf um die Gestaltung des Reiches aber ist das Ringen um die gleiche Tat, die Hölderlin nicht tun konnte, weil die Zeit noch nicht erfüllt war.

(It was Hölderlin's tragedy that he had to detach himself from the community of men, without being granted the shaping of the community to come. He remained a solitary figure, misunderstood by his time, yet carrying the future within him as a certainty. He wanted a revival, not a new Greece, but in the Greek world he rediscovered the Nordic-heathen way of life that had withered in the Germany of his day, and from which alone the coming community can grow... And the struggle for the shaping of the Reich is the striving for the same deed that Hölderlin could not accomplish, because the time was not yet fulfilled.)

But while the Nazis were promoting Hölderlin as a spiritual forerunner of a national-socialist Volksgemeinschaft, the poet also served as inspiration for those who would try to sabotage the Nazi regime. Stargardt writes about how Sophie Scholl, a leading member of the White Rose student opposition group, revered Hölderlin. Likewise Claus von Stauffenberg, the leader of the failed 20 July plot to assassinate Hitler, saw in Hölderlin the prophet of a "secret Germany" which had been perverted by Hitler and the Nazis.

With the collapse of the Reich, the cult of Hölderlin lost its appeal, especially to the troops returning from the front. Günter Eich expressed his disgust in his famous 1946 poem Latrine:

Irr mir im Ohre schallen
Verse von Hölderlin.
In schneeiger Reinheit spiegeln
Wolken sich im Urin.

(Mad in my hearing echo / verses of Hölderlin. / In snowy pureness clouds / are reflected in pools of urine.)

Here is a recording of the Nazi philosopher Martin Heidegger reading Hölderlin's poem Der Ister.
http://www.dialoginternational.com/dialog_international/german_culture/
Bernal Hill provides visitors with a breathtaking 360-degree panorama and clear views of San Francisco Bay, the Golden Gate Bridge, downtown, San Bruno Mountain, and the hills of the East Bay. These windswept slopes are still sunny when Twin Peaks is shrouded in afternoon fog. Red-tailed hawks soar overhead, the breeze sends waves undulating through the native grassland community, and visitors hike around the hill’s peaceful summit to escape from the complexities of urban life. As one of the few remaining natural refuges in San Francisco, Bernal Hill is a special place for the city’s human and wildlife inhabitants. A paved limited-access road and a network of well-defined dirt trails wind around the hill’s flanks and provide access to the summit.

Bernal Hill was originally part of a 4,446-acre land grant awarded to José Cornelio de Bernal, a soldier in Juan Bautista de Anza’s 1776 expedition. The grant extended south from current-day Cesar Chavez Street to Daly City. The Bernal region became a squatter’s paradise over the next several decades, until the Van Ness Ordinance of 1855 decreed that these illegitimate tenants were in fact bona fide landowners and citizens of the city.

In the mid-1800s, as San Francisco began to outgrow the capacity of the downtown district, the Bernal Heights neighborhood emerged. The streets were laid out and many small- to moderate-sized homes were built. A tight-knit community, including many Irish, Scots, and Scandinavians, took up residence in the shadow of the hill, which residents used extensively for cattle and dairy ranching.

One of the most dramatic tales of Bernal Hill took place in May of 1876. The Bernal Heights community caught the California gold rush fever after Frenchman Victor Resayre announced his discovery of gold on the Bernal summit, ore that he claimed would fetch $1 million per ton. 
For several days the hill was the site of extensive mining efforts, until it was revealed that the original discovery consisted of the considerably less valuable quartz. During the 1906 earthquake, homeowners in Bernal Heights were some of the luckiest in the city because of the hill’s stable rock. Few homes here were damaged, compared to those in parts of the city that were built on landfill or former sand dunes.

Bernal Hill’s steep slopes support a thriving grassland community, suggesting how much of the northern San Francisco peninsula might have looked 250 years ago. In the summer and fall its grasslands are dry and parched, and Bernal appears from a distance to be a tawny, uninhabitable monolith. The native grasses and wildflowers have dropped their seeds, which wait patiently in the soil for the winter rains to awaken them. By early February, the hill is transformed into a palette of brilliant colors as a multitude of native wildflowers bloom, including footsteps of spring, sun cup, blue-eyed grass, checkerbloom, and shooting star. Native purple needlegrass and red fescue blow in waves from the almost constant ocean breeze.

With a diversity of plant life, an assortment of native animals can survive. More than 40 species of birds are known to use Bernal Hill, including Anna’s hummingbirds, dark-eyed juncos, American kestrels, western meadowlarks, and Townsend’s warblers. These birds forage and hunt in the grasslands in search of a tasty meal of seeds, berries, nectar, or invertebrates. The American kestrel effortlessly hovers above the steep slopes of the hill, waiting patiently for its next meal to reveal itself among the lupines, fragrant coyote mint, and red fescue. Kestrels, like all raptors, are excellent hunters, aided by their specialized hooked beaks, keen eyesight, swift flight, and curved talons. Any insects, field mice, or small birds and reptiles on Bernal Hill could fall prey to this falcon. 
As rodents scurry through grasslands, they mark their trails with urine, which absorbs ultraviolet light. Raptors such as kestrels can see UV light, a trait that enables them to track the rodents along these “urine highways” through the grasslands. Smaller rodents such as voles urinate almost continuously, so a kestrel can simply follow the fresh trail directly to its prey.

California alligator lizards and Pacific gopher snakes bask in the sunny warmth provided by Bernal’s rocky outcrops. Other wildlife, such as Botta’s pocket gophers, California slender salamanders, arboreal salamanders, and raccoons, call Bernal Hill home. In recent years coyotes have been spotted at Bernal Hill; this native mammal, once prevalent in San Francisco, had disappeared from the city but is now making its way back. If you are lucky enough to see one of these elusive creatures, please don’t try to approach or feed it, for its safety and yours. Remember that a fed coyote is a dead coyote: While not normally a threat to humans, coyotes that become too accustomed to being approached can become aggressive. Keep your dog on leash in areas that coyotes are known to frequent, for its own safety. For more information about learning to coexist with coyotes, see Project Coyote.

In 1973, Bernal Heights residents Barbara and Roland Pitschel began organizing volunteer projects on Bernal Hill. Originally the work concentrated on park beautification and trash removal, but gradually the focus turned more to invasive plant removal. In 1980 the group received official authorization from the San Francisco Recreation and Park Department to pursue grassland habitat restoration, the first project of its kind in the city. Today, the Natural Areas Program and the Bernal Hilltop Restoration Project are working together to preserve and restore Bernal Hill’s natural history. 
Through invasive plant removal, revegetation with native species, erosion control, and biological monitoring, they hope to re-establish the indigenous, healthy grassland ecosystem of Bernal Hill. If you are interested in helping to continue transforming Bernal Hill into a thriving natural area, please visit our opportunities page for information on volunteering.
http://sfrecpark.org/destination/bernal-heights-park/
And it may be observed, that the word moral is not to be understood here according to the common and vulgar acceptation of the word when men speak of morality, and a moral behavior; meaning an outward conformity to the duties of the moral law, and especially the duties of the second table; or intending no more at farthest, than such seeming virtues, as proceed from natural principles, in opposition to those virtues that are more inward, spiritual, and divine; as the honesty, justice, generosity, good nature, and public spirit of many of the heathen are called moral virtues, in distinction from the holy faith, love, humility, and heavenly-mindedness of true Christians: I say, the word moral is not to be understood thus in this place. But in order to a right understanding what is meant, it must be observed, that divines commonly make a distinction between moral good and evil, and natural good and evil. By moral evil, they mean the evil of sin, or that evil which is against duty, and contrary to what is right and ought to be. By natural evil, they do not mean that evil which is properly opposed to duty; but that which is contrary to mere nature, without any respect to a rule of duty. So the evil of suffering is called natural evil, such as pain and torment, disgrace, and the like: these things are contrary to mere nature, contrary to the nature of both bad and good, hateful to wicked men and devils, as well as good men and angels. So likewise natural defects are called natural evils, as if a child be monstrous or a natural fool; these are natural evils, but are not moral evils, because they have not properly the nature of the evil of sin. 
On the other hand, as by moral evil, divines mean the evil of sin, or that which is contrary to what is right; so by moral good, they mean that which is contrary to sin, or that good in beings who have will and choice, whereby, as voluntary agents, they are, and act, as it becomes them to be and to act, or so as is most fit, and suitable, and lovely. By natural good, they mean that good that is entirely of a different kind from holiness or virtue, viz., that which perfects or suits nature, considering nature abstractly from any holy or unholy qualifications, and without any relation to any rule or measure of right and wrong. Thus pleasure is a natural good; so is honor, so is strength; so is speculative knowledge, human learning, and policy.--Thus there is a distinction to be made between the natural good that men are possessed of, and their moral good; and also between the natural and moral good of the angels in heaven: the great capacity of their understandings, and their great strength, and the honorable circumstances they are in as the great ministers of God's kingdom, whence they are called thrones, dominions, principalities, and powers, is the natural good which they are possessed of; but their perfect and glorious holiness and goodness, their pure and flaming love to God, and to the saints and to one another, is their moral good. So divines make a distinction between the natural and moral perfections of God: by the moral perfections of God, they mean those attributes which God exercises as a moral agent, or whereby the heart and will of God are good, right, and infinitely becoming and lovely; such as his righteousness, truth, faithfulness, and goodness; or, in one word, his holiness. 
By God's natural attributes or perfections, they mean those attributes, wherein, according to our way of conceiving of God, consists, not the holiness or moral goodness of God, but his greatness, such as his power, his knowledge, whereby he knows all things, and his being eternal, from everlasting to everlasting, his omnipresence, and his awful and terrible majesty. The moral excellency of an intelligent voluntary being is more immediately seated in the heart or will of moral agents. That intelligent being, whose will is truly right and lovely, is morally good or excellent. This moral excellency of an intelligent being, when it is true and real, and not only external or merely seeming and counterfeit, is holiness. Therefore holiness comprehends all the true moral excellency of intelligent beings: there is no other true virtue, but real holiness. Holiness comprehends all the true virtue of a good man, his love to God, his gracious love to men, his justice, his charity, and bowels of mercies, his gracious meekness and gentleness, and all other true Christian virtues that he has, belong to his holiness. So the holiness of God in the more extensive sense of the word, and the sense in which the word is commonly, if not universally used concerning God in Scripture, is the same with the moral excellency of the divine nature, or his purity and beauty as a moral agent, comprehending all his moral perfections, his righteousness, faithfulness, and goodness. As in holy men, their charity, Christian kindness and mercy, belong to their holiness; so the kindness and mercy of God belong to his holiness. Holiness in man is but the image of God's holiness; there are not more virtues belonging to the image than are in the original: derived holiness has not more in it than is in that underived holiness which is its fountain: there is no more than grace for grace, or grace in the image, answerable to grace in the original. 
As there are two kinds of attributes in God, according to our way of conceiving of him, his moral attributes, which are summed up in his holiness, and his natural attributes of strength, knowledge, &c., that constitute the greatness of God; so there is a twofold image of God in man, his moral or spiritual image, which is his holiness, that is the image of God's moral excellency (which image was lost by the fall), and God's natural image, consisting in man's reason and understanding, his natural ability, and dominion over the creatures, which is the image of God's natural attribute. From what has been said, it may easily be understood what I intend, when I say that a love to divine things for the beauty of their moral excellency, is the beginning and spring of all holy affections. It has been already shown, under the former head, that the first objective ground of all holy affections is the supreme excellency of divine things as they are in themselves, or in their own nature; I now proceed further, and say more particularly, that that kind of excellency of the nature of divine things, which is the first objective ground of all holy affections, is their moral excellency, or their holiness. Holy persons, in the exercise of holy affections, do love divine things primarily for their holiness: they love God, in the first place, for the beauty of his holiness or moral perfection, as being supremely amiable in itself. Not that the saints, in the exercise of gracious affections, do love God only for his holiness; all his attributes are amiable and glorious in their eyes; they delight in every divine perfection; the contemplation of the infinite greatness, power, knowledge, and terrible majesty of God, is pleasant to them. But their love to God for his holiness is what is most fundamental and essential in their love. 
Here it is that true love to God begins; all other holy love to divine things flows from hence: this is the most essential and distinguishing thing that belongs to a holy love to God, with regard to the foundation of it. A love to God for the beauty of his moral at tributes leads to, and necessarily causes a delight in God for all his attributes; for his moral attributes cannot be without his natural attributes: for infinite holiness supposes infinite wisdom, and an infinite capacity and greatness; and all the attributes of God do as it were imply one another. The true beauty and loveliness of all intelligent beings does primarily and most essentially consist in their moral excellency or holiness. Herein consists the loveliness of the angels, without which, with all their natural perfections, their strength, and their knowledge, they would have no more loveliness than devils. It is a moral excellency alone, that is in itself, and on its own account, the excellency of intelligent beings: it is this that gives beauty to, or rather is the beauty of their natural perfections and qualifications. Moral excellency is the excellency of natural excellencies. Natural qualifications are either excellent or otherwise, according as they are joined with moral excellency or not. Strength and knowledge do not render any being lovely, without holiness, but more hateful; though they render them more lovely, when joined with holiness. Thus the elect angels are the more glorious for their strength and knowledge, because these natural perfections of theirs are sanctified by their moral perfection. But though the devils are very strong, and of great natural understanding, they be not the more lovely: they are more terrible indeed, but not the more amiable; but on the contrary, the more hateful. The holiness of an intelligent creature, is the beauty of all his natural perfections. 
And so it is in God, according to our way of conceiving of the divine Being: holiness is in a peculiar manner the beauty of the divine nature. Hence we often read of the beauty of holiness, Psal. 29:2, Psal. 96:9, and 110:3. This renders all his other attributes glorious and lovely. It is the glory of God's wisdom, that it is a holy wisdom, and not a wicked subtlety and craftiness. This makes his majesty lovely; and not merely dreadful and horrible, that it is a holy majesty. It is the glory of God's immutability, that it is a holy immutability, and not an inflexible obstinacy in wickedness. And therefore it must needs be, that a sight of God's loveliness must begin here. A true love to God must begin with a delight in his holiness, and not with a delight in any other attribute; for no other attribute is truly lovely without this, and no otherwise than as (according to our way of conceiving of God) it derives its loveliness from this; and therefore it is impossible that other attributes should appear lovely, in their true loveliness, until this is seen; and it is impossible that any perfection of the divine nature should be loved with true love until this is loved. If the true loveliness of all God's perfections arises from the loveliness of his holiness; then the true love of all his perfections arises from the love of his holiness. They that do not see the glory of God's holiness, cannot see anything of the true glory of his mercy and grace: they see nothing of the glory of those attributes, as any excellency of God's nature, as it is in itself; though they may be affected with them, and love them, as they concern their interest: for these attributes are no part of the excellency of God's nature, as that is excellent in itself, any otherwise than as they are included in his holiness, more largely taken; or as they are a part of his moral perfection. As the beauty of the divine nature does primarily consist in God's holiness, so does the beauty of all divine things. 
Herein consists the beauty of the saints, that they are saints, or holy ones; it is the moral image of God in them, which is their beauty; and that is their holiness. Herein consists the beauty and brightness of the angels of heaven, that they are holy angels, and so not devils. Dan. 4:13, 17, 23; Matt. 25:31, Mark 8:38, Acts 10:22, Rev. 14:10. Herein consists the beauty of the Christian religion, above all other religions, that it is so holy a religion. Herein consists the excellency of the word of God, that it is so holy: Psal. 119:140, "Thy word is very pure, therefore thy servant loveth it." Ver. 128, "I esteem all thy precepts concerning all things to be right; and I hate every false way." Ver. 138, "Thy testimonies that thou hast commanded are righteous, and very faithful." And 172, "My tongue shall speak of thy word; for all thy commandments are righteousness." And Psal. 19:7-10, "The law of the Lord is perfect, converting the soul; the testimony of the Lord is sure, making wise the simple. The statutes of the Lord are right, rejoicing the heart: the commandment of the Lord is pure, enlightening the eyes. The fear of the Lord is clean, enduring forever: the judgments of the Lord are true, and righteous altogether. More to be desired are they than gold, yea, than much fine gold: sweeter also than honey, and the honey comb." Herein does primarily consist the amiableness and beauty of the Lord Jesus, whereby he is the chief among ten thousands, and altogether lovely, even in that he is the holy one of God, Acts 3:14, and God's holy child, Acts 4:27, and he that is holy, and he that is true, Rev. 3:7. All the spiritual beauty of his human nature, consisting in his meekness, lowliness, patience, heavenliness, love to God, love to men, condescension to the mean and vile, and compassion to the miserable, &c., all is summed up in his holiness. 
And the beauty of his divine nature, of which the beauty of his human nature is the image and reflection, does also primarily consist in his holiness. Herein primarily consists the glory of the gospel, that it is a holy gospel, and so bright an emanation of the holy beauty of God and Jesus Christ: herein consists the spiritual beauty of its doctrines, that they are holy doctrines, or doctrines according to goodness. And herein does consist the spiritual beauty of the way of salvation by Jesus Christ, that it is so holy a way. And herein chiefly consists the glory of heaven, that it is the holy city, the holy Jerusalem, the habitation of God's holiness, and so of his glory, Isa. 63:15. All the beauties of the new Jerusalem, as it is described in the two last chapters of Revelation, are but various representations of this. See chap. 21:2, 10, 11, 18, 21, 27, chap. 22:1, 3. And therefore it is primarily on account of this kind of excellency, that the saints do love all these things. Thus they love the word of God, because it is very pure. It is on this account they love the saints; and on this account chiefly it is, that heaven is lovely to them, and those holy tabernacles of God amiable in their eyes: it is on this account that they love God; and on this account primarily it is, that they love Christ, and that their hearts delight in the doctrines of the gospel, and sweetly acquiesce in the way of salvation therein revealed. 
Under the head of the first distinguishing characteristic of gracious affections, I observed, that there is given to those that are regenerated, a new supernatural sense, that is as it were a certain divine spiritual taste, which is, in its whole nature, diverse from any former kinds of sensation of the mind, as tasting is diverse from any of the other senses; and that something is perceived by a true saint, in the exercise of this new sense of mind, in spiritual and divine things, as entirely different from anything that is perceived in them by natural men, as the sweet taste of honey is diverse from the ideas men get of honey by looking on it or feeling it. Now this that I have been speaking of, viz., the beauty of holiness, is that thing in spiritual and divine things, which is perceived by this spiritual sense, that is so diverse from all that natural men perceive in them; this kind of beauty is the quality that is the immediate object of this spiritual sense; this is the sweetness that is the proper object of this spiritual taste. The Scripture often represents the beauty and sweetness of holiness as the grand object of a spiritual taste and spiritual appetite. This was the sweet food of the holy soul of Jesus Christ, John 4:32, 34: "I have meat to eat that ye know not of--My meat is to do the will of him that sent me, and to finish his work." 
I know of no part of the holy Scriptures, where the nature and evidences of true and sincere godliness are so much of set purpose and so fully and largely insisted on and delineated, as the 119th Psalm; the Psalmist declares his design in the first verses of the Psalm, and he keeps his eye on this design all along, and pursues it to the end: but in this Psalm the excellency of holiness is represented as the immediate object of a spiritual taste, relish, appetite, and delight; and God's law, that grand expression and emanation of the holiness of God's nature, and prescription of holiness to the creature, is all along represented as the food and entertainment, and as the great object of the love, the appetite, the complacence and rejoicing of the gracious nature, which prizes God's commandments above gold, yea, the finest gold, and to which they are sweeter than the honey and honey comb; and that upon account of their holiness, as I observed before. The same Psalmist declares, that this is the sweetness that a spiritual taste relishes in God's law: Psal. 19:7, 8, 9, 10, "The law of the Lord is perfect; the commandment of the Lord is pure; the fear of the Lord is clean; the statutes of the Lord are right, rejoicing the heart;--the judgments of the Lord are true, and righteous altogether; more to be desired are they than gold, yea, than much fine gold; sweeter also than honey, and the honey comb." A holy love has a holy object. The holiness of love consists especially in this, that it is the love of that which is holy, as holy, or for its holiness; so that it is the holiness of the object, which is the quality whereon it fixes and terminates. 
A holy nature must needs love that in holy things chiefly, which is most agreeable to itself; but surely that in divine things, which above all others is agreeable to a holy nature, is holiness, because holiness must be above all other things agreeable to holiness; for nothing can be more agreeable to any nature than itself; holy nature must be above all things agreeable to holy nature: and so the holy nature of God and Christ, and the word of God, and other divine things, must be above all other things agreeable to the holy nature that is in the saints. And again, a holy nature doubtless loves holy things, especially on the account of that for which sinful nature has enmity against them; but that for which chiefly sinful nature is at enmity against holy things, is their holiness; it is for this, that the carnal mind is at enmity against God, and against the law of God, and the people of God. Now it is just arguing from contraries; from contrary causes to contrary effects; from opposite natures to opposite tendencies. We know that holiness is of a directly contrary nature to wickedness; as therefore it is the nature of wickedness chiefly to oppose and hate holiness; so it must be the nature of holiness chiefly to tend to, and delight in holiness. The holy nature in the saints and angels in heaven (where the true tendency of it best appears) is principally engaged by the holiness of divine things. This is the divine beauty which chiefly engages the attention, admiration, and praise of the bright and burning seraphim: Isa. 6:3, "One cried unto another, and said, Holy, holy, holy is the Lord of hosts, the whole earth is full of his glory." And Rev. 4:8, "They rest not day and night, saying, Holy, holy, holy, Lord God Almighty, which was, and is, and is to come." So the glorified saints chap. 15:4, "Who shall not fear thee, O Lord, and glorify thy name? For thou only art holy." 
And the Scriptures represent the saints on earth as adoring God primarily on this account, and admiring and extolling all God's attributes, either as deriving loveliness from his holiness, or as being a part of it. Thus when they praise God for his power, his holiness is the beauty that engages them: Psal. 98:1, "O sing unto the Lord a new song, for he hath done marvellous things: his right hand, and his holy arm hath gotten him the victory." So when they praise him for his justice and terrible majesty: Psal. 99:2, 3, "The Lord is great in Zion, and he is high above all people. Let them praise thy great and terrible name; for it is holy." Ver. 5, "Exalt ye the Lord our God, and worship at his footstool; for he is holy." Ver. 8, 9, "Thou wast a God that forgavest them, though thou tookest vengeance of their inventions. Exalt ye the Lord our God, and worship at his holy hill: for the Lord our God, is holy." So when they praise God for his mercy and faithfulness: Psal. 97:11, 12, "Light is sown for the righteous, and gladness for the upright in heart. Rejoice in the Lord, ye righteous; and give thanks at the remembrance of his holiness." 1 Sam. 2:2, "There is none holy as the Lord: for there is none besides thee; neither is there any rock like our God." By this therefore all may try their affections, and particularly their love and joy. Various kinds of creatures show the difference of their natures, very much in the different things they relish as their proper good, one delighting in that which another abhors. 
Such a difference is there between true saints, and natural men: natural men have no sense of the goodness and excellency of holy things at least for their holiness; they have no taste for that kind of good; and so may be said not to know that divine good, or not to see it; it is wholly hid from them; but the saints, by the mighty power of God, have it discovered to them; they have that supernatural, most noble and divine sense given them, by which they perceive it; and it is this that captivates their hearts, and delights them above all things; it is the most amiable and sweet thing to the heart of a true saint, that is to be found in heaven or earth; that which above all others attracts and engages his soul; and that whereby above all things, he places his happiness, and which he lots upon for solace and entertainment to his mind, in this world, and full satisfaction and blessedness in another. By this, you may examine your love to God, and to Jesus Christ, and to the word of God, and your joy in them, and also your love to the people of God, and your desires after heaven; whether they be from a supreme delight in this sort of beauty, without being primarily moved from your imagined interest in them, or expectations from them. There are many high affections, great seeming love and rapturous joys, which have nothing of this holy relish belonging to them. Particularly, by what has been said you may try your discoveries of the glory of God's grace and love, and your affections arising from them. The grace of God may appear lovely two ways; either as bonum utile, a profitable good to me, that which greatly serves my interest, and so suits my self-love; or as bonum formosum, a beautiful good in itself, and part of the moral and spiritual excellency of the divine nature. In this latter respect it is that the true saints have their hearts affected, and love captivated by the free grace of God in the first place. 
From the things that have been said, it appears, that if persons have a great sense of the natural perfections of God, and are greatly affected with them, or have any other sight or sense of God than that which consists in, or implies a sense of the beauty of his moral perfections, it is no certain sign of grace; as particularly men's having a great sense of the awful greatness and terrible majesty of God; for this is only God's natural perfection, and what men may see and yet be entirely blind to the beauty of his moral perfection, and have nothing of that spiritual taste which relishes this divine sweetness. It has been shown already, in what was said upon the first distinguishing mark of gracious affections, that that which is spiritual, is entirely different in its nature, from all that it is possible any graceless person should be the subject of, while he continues graceless. But it is possible that those who are wholly without grace should have a clear sight and very great and affecting sense of God's greatness, his mighty power, and awful majesty; for this is what the devils have, though they have lost the spiritual knowledge of God, consisting in a sense of the amiableness of his moral perfections; they are perfectly destitute of any sense or relish of that kind of beauty, yet they have a very great knowledge of the natural glory of God (if I may so speak), or his awful greatness and majesty; this they behold, and are affected with the apprehensions of, and therefore tremble before him. 
This glory of God all shall behold at the day of judgment; God will make all rational beings to behold it to a great degree indeed, angels and devils, saints and sinners: Christ will manifest his infinite greatness, and awful majesty, to everyone, in a most open, clear, and convincing manner, and in a light that none can resist, "when he shall come in the glory of his Father, and every eye shall see him;" when they shall cry to the mountains to fall upon them, to hide them from the face of him that sits upon the throne, they are represented as seeing the glory of God's majesty, Isa. 2:10, 19, 21. God will make all his enemies to behold this, and to live in a most clear and affecting view of it, in hell, to all eternity. God hath often declared his immutable purpose to make all his enemies to know him in this respect, in so often annexing these words to the threatenings he denounces against them: "And they shall know that I am the Lord;" yea he hath sworn that all men shall see his glory in this respect: Numb. 14:21, "As truly as I live, all the earth shall be filled with the glory of the Lord." And this kind of manifestation of God is very often spoken of in Scripture, as made, or to be made, in the sight of God's enemies in this world, Exod. 9:16, and chap. 14:18, and 15:16, Psal. 66:3, and 46:10, and other places innumerable. This was a manifestation which God made of himself in the sight of that wicked congregation at Mount Sinai; deeply affecting them with it; so that all the people in the camp trembled. 
Wicked men and devils will see, and have a great sense of everything that appertains to the glory of God, but only the beauty of his moral perfection; they will see his infinite greatness and majesty, his infinite power, and will be fully convinced of his omniscience, and his eternity and immutability; and they will see and know everything appertaining to his moral attributes themselves, but only the beauty and amiableness of them; they will see and know that he is perfectly just, and righteous, and true, and that he is a holy God, of purer eyes than to behold evil, who cannot look on iniquity; and they will see the wonderful manifestations of his infinite goodness and free grace to the saints; and there is nothing will be hid from their eyes, but only the beauty of these moral attributes, and that beauty of the other attributes, which arises from it. And so natural men in this world are capable of having a very affecting sense of everything else that appertains to God, but this only. Nebuchadnezzar had a great and very affecting sense of the infinite greatness and awful majesty of God, of his supreme and absolute dominion, and mighty and irresistible power, and of his sovereignty, and that he, and all the inhabitants of the earth were nothing before him; and also had a great conviction in his conscience of his justice, and an affecting sense of his great goodness, Dan. 4:1, 2, 3, 34, 35, 37. And the sense that Darius had of God's perfections, seems to be very much like his, Dan. 6:25, &c. But the saints and angels do behold the glory of God consisting in the beauty of his holiness; and it is this sight only that will melt and humble the hearts of men, and wean them from the world, and draw them to God, and effectually change them. 
A sight of the awful greatness of God, may overpower men's strength, and be more than they can endure; but if the moral beauty of God be hid, the enmity of the heart will remain in its full strength, no love will be enkindled, all will not be effectual to gain the will, but that will remain inflexible; whereas the first glimpse of the moral and spiritual glory of God shining into the heart, produces all these effects as it were with omnipotent power, which nothing can withstand. The sense that natural men may have of the awful greatness of God may affect them various ways; it may not only terrify them, but it may elevate them, and raise their joy and praise, as their circumstances may be. This will be the natural effect of it, under the real or supposed receipt of some extraordinary mercy from God, by the influence of mere principles of nature. It has been shown already, that the receipt of kindness may, by the influence of natural principles, affect the heart with gratitude and praise to God; but if a person, at the same time that he receives remarkable kindness from God, has a sense of his infinite greatness, and that he is but nothing in comparison of him, surely this will naturally raise his gratitude and praise the higher, for kindness to one so much inferior. 
A sense of God's greatness had this effect upon Nebuchadnezzar, under the receipt of that extraordinary favor of his restoration, after he had been driven from men, and had his dwelling with the beasts: a sense of God's exceeding greatness raises his gratitude very high; so that he does, in the most lofty terms, extol and magnify God, and calls upon all the world to do it with him; and much more if a natural man, at the same time that he is greatly affected with God's infinite greatness and majesty, entertains a strong conceit that this great God has made him his child and special favorite, and promised him eternal glory in his highest love, will this have a tendency, according to the course of nature, to raise his joy and praise to a great height. Therefore, it is beyond doubt that too much weight has been laid, by many persons of late, on discoveries of God's greatness, awful majesty, and natural perfection, operating after this manner, without any real view of the holy majesty of God. And experience does abundantly witness to what reason and Scripture declare as to this matter; there having been very many persons, who have seemed to be overpowered with the greatness and majesty of God, and consequently elevated in the manner that has been spoken of, who have been very far from having appearances of a Christian spirit and temper, in any manner of proportion, or fruits in practice in any wise agreeable; but their discoveries have worked in a way contrary to the operation of truly spiritual discoveries. Not that a sense of God's greatness and natural attributes is not exceeding useful and necessary. For, as I observed before, this is implied in a manifestation of the beauty of God's holiness. Though that be something beyond it, it supposes it, as the greater supposes the less. 
And though natural men may have a sense of the natural perfections of God; yet undoubtedly this is more frequent and common with the saints than with natural men; and grace tends to enable men to see these things in a better manner than natural men do; and not only enables them to see God's natural attributes, but that beauty of those attributes, which (according to our way of conceiving of God) is derived from his holiness.
2. GREGORIAN NOTATION

Manuscript: "A Child was born for us and the Son has been given to us"

By the 11th century the repertoire of chants in the Church already covered the feast of each day and each event of the Liturgy. The sacred texts had their own characteristic variety of forms: Introits, Antiphons, Graduals, Alleluias, Offertories, Communions, Sequences, etc., to which should be added those parts of the Liturgy called the "Ordinary": Kyries, Glorias, Creeds, etc. All this had to be entrusted to the memory of singers, who had no musical aid except some marks on the text indicating simply when the melody rose or descended, just as is shown in the above manuscript. Of course, entrusting the conservation of the chants to memory alone meant that they were in danger of disappearing. Initially, musical notation served as an aid to the memory of someone who already had an idea of how the chant should sound; it was not intended that notation be "scientifically" precise. The idea that a melody can be sung correctly by reading the score, without having heard it beforehand, is relatively recent. The oldest examples of musical notation in Western Europe were more like annotations on the texts that were sung. Moreover, the purpose of notation was more to indicate the expressive character and bring out the subtleties of the vocal delivery than to fix the height of the melodic notes (a great deal of research on this point is still being carried out by musicologists specialized in Medieval music). A Benedictine monk named Guido d'Arezzo (Italy, 990-1050) found the solution. From the hymn for Vespers of the feast of St John the Baptist, d'Arezzo organized what would later become the scale: UT queant laxis (C), REsonare fibris (D), MIra gestorum (E), FAmuli tuorum (F), SOLve polluti (G), LAbii reatum (A), Sancte Ioannes (SI, i.e. B). See the score of the hymn.
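The hymn-to-scale correspondence described above can be sketched as a small lookup table. This code is only an illustration, not part of the original article; the note names follow modern convention:

```python
# Guido d'Arezzo's solmization syllables, taken from the first syllable of
# each line of the hymn "Ut queant laxis", mapped to modern note names.
SOLMIZATION = {
    "UT":  "C",   # UT queant laxis   (UT was later renamed DO)
    "RE":  "D",   # REsonare fibris
    "MI":  "E",   # MIra gestorum
    "FA":  "F",   # FAmuli tuorum
    "SOL": "G",   # SOLve polluti
    "LA":  "A",   # LAbii reatum
    "SI":  "B",   # Sancte Ioannes   (SI was added after Guido's time)
}

def to_modern(syllables):
    """Translate a sequence of solmization syllables to modern note names."""
    return [SOLMIZATION[s.upper()] for s in syllables]

print(to_modern(["ut", "re", "mi", "fa", "sol", "la", "si"]))
# ['C', 'D', 'E', 'F', 'G', 'A', 'B']
```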
He invented the staff of four lines; on it, a yellow line would mark UT (which subsequently became DO, i.e. C) and a red line would mark FA (F); this would later give rise to the notion of the clef.

1. HEIGHT OF THE SOUNDS

The height of the sounds is indicated by the position of the notes on a staff of four lines, with the possibility of using additional lines below and above. The clefs are those of DO (C) and of FA (F), which can be placed on the second, third or fourth line. The possible extension is shown in the score figures, where the primitive notation, the present Gregorian notation and its equivalent in modern notation are presented in order.

The simple neumes take their names from Latin and Greek: Virga = stick; Punctum quadratum = square point; Punctum inclinatum = inclined point; Pes or Podatus, from the Latin pes = foot; Torculus, from the Latin torquere = to twist, for its bent form; Porrectus, from the Latin porrigere = to extend, for the extended form of its lines; Climacus, from climax = stair; Scandicus, from scandere = to climb; Salicus, from salire = to leap.

The compound neumes include:
- Those formed by joining simple neumes on a single syllable. They are called resupini when complemented with an added higher note, and praepunctis or subpunctis if notes are included before or after them.
- The neumes whose last one or two notes are of smaller size receive the name liquescent (or semivocal); the purpose of these notes is to call attention to the correct pronunciation of the text. The liquescent Pes, Clivis and Climacus are also called epiphonus, cephalicus and ancus, respectively. The smaller size of the liquescent note does not imply any modification of its duration.
- Those that contain a pressus (from the Latin premo = to press, to stop), that is, the coincidence in height of the final note of one neume with the initial note of another on the same syllable. It also occurs between a punctum and a neume.
- Those that contain a quilisma (from the Greek külío = to revolve, to roll), a jagged note that serves to join two notes separated by an interval of a third. It never appears alone.
The note that precedes the quilisma is lengthened moderately, but must not be doubled in length.
- The strophicus (from the Greek strophao = to rotate) is a punctum quadratum and may appear in three forms.
- The oriscus (from the Greek óros = limit or height, hill) is a punctum quadratum placed at the end of a neume.
- Bivirga and trivirga are formed by the union of two or three virgas respectively (virga = stick; bivirga and trivirga = two or three sticks).

2. SPECIAL CASES OF EXECUTION

- The horizontal episema is placed over one or more notes and indicates an expressive, light lengthening of those sounds; it is a horizontal line. The note with ictus in the salicus should be prolonged as if it had an episema. The episema lengthens the note a little, but does not double it. It should not be confused with the vertical episema, which is almost always placed under the note and marks the binary or ternary steps (see the chapter devoted to Rhythm).
- Distropha and tristropha should be executed in a flexible and light manner. The repercussion is mandatory on the first note of each of them, and on the first note of the neume that follows them if it is at the same height.
- When the third note of a tristropha carries the ictus, it can be executed with repercussion. The oriscus is always of smooth character. The two notes of the pressus should be executed as a single clear, strong, doubled sound (the distropha, the tristropha and the oriscus never form a pressus).
- Bivirga and trivirga should be executed like the strophicus, but their repercussion is more marked. The scandicus with the melodic form D-A-B is a special case. (Examples of repercussion are given in the score figures.)

4. SIGNS OF PAUSE (bars)

The signs of pause, determined by the structure of the text, are:
a) The minimum dividing line, which separates the clauses or smaller parts into which the text is divided; it does not imply a breath.
b) The smaller dividing line, which separates the members of a phrase.
These are no more than clauses of greater scope; it almost always implies a breath.
c) The greater dividing line, which separates the phrases; it equals a rest of the simple duration of one note and obliges one to breathe.
d) The double dividing line, which indicates a greater conclusive sense, or the end of the composition. It equals a simple rest of one note, at times prolonged a little more.

5. OTHER SIGNS

The custos is a sign placed at the end of each staff. It is not sung; instead it serves as a visual cue to the pitch of the first note on the next line. It is also used when there is a change of clef within the same piece. In Gregorian chant the only accidental is B flat (used rarely), indicated by a flat sign. The flat affects not only the B that carries it but also those that appear later; it is canceled by a change of word, by any dividing line, or by the natural sign. A B flat at the foot of the clef remains in force throughout the piece and is canceled only by the natural sign. The Liber Usualis provides a cue for singing the "Gloria Patri" after the introit verse: "Euouae" indicates the vowels of the syllables of "saeculorum Amen", which ends the "Gloria Patri".

MARTINEZ SOQUES, Fernando. Método de Canto Gregoriano, Capítulos VI y VII. Ed. Pedagógica. Barcelona, 1943.
Researchers have recalculated the mass of a gigantic black hole at the core of the M87 galaxy, and found that it’s about two times as massive as previously estimated: The new study says that M87’s black hole weighs the same as 6.4 billion suns. Researchers say the findings may indicate that many black holes have been underestimated, and also say that the results from this “local” galaxy only 50 million light-years away may solve a mystery regarding the extremely distant black holes known as quasars. Astronomers had previously estimated M87’s total mass, calculating how much of that mass came from both the galaxy’s stars and its central black hole. But previous models didn’t have the supercomputing power to estimate the mass contributed by the galaxy’s “dark halo.” The dark halo is a spherical region surrounding the galaxy that extends beyond its main visible structure. It contains “dark matter”, an as yet unidentified material that cannot be directly detected by telescopes but which astronomers know is there from its gravitational interaction with everything else that can be seen [BBC News]. For the new study, which was presented at the American Astronomical Society meeting and will be published later this year in the Astrophysical Journal, researchers employed the gargantuan computing power of the Lonestar system … at the University of Texas. The Lonestar has 5,840 processing cores and can perform 62 trillion “floating-point operations” per second. For comparison, the most state-of-the-art laptop computer has only two processing cores and performs only 10 billion such operations per second [AFP]. With that computational firepower, researchers determined that a large bulk of the mass initially thought to belong to stars at M87’s core is actually locked up in the halo at the galaxy’s outer edge. But the actual mass of the core is still thought to be the same. 
So if the extra mass isn’t tied up in stars, it must belong to the supermassive black hole, Gebhardt explained [National Geographic News]. While this new calculation of the black hole’s mass was determined solely via computer modeling, researcher Karl Gebhardt says that not-yet-published observations from the world’s most sophisticated telescopes back up his findings. The new numbers also make sense of previous observations of quasars, the distant black holes that shine brightly as material spirals towards the black hole’s event horizon–the point beyond which nothing, not even light, can escape. These quasars were believed to be colossal, around 10 billion solar masses, “but in local galaxies, we never saw black holes that massive, not nearly,” Gebhardt said. “The suspicion was before that the quasar masses were wrong,” he said [SPACE.com]. Now, by boosting the mass of local black holes, researchers have bolstered the case for prior estimations of the quasars’ mass. 80beats: Chicken-or-Egg Problem: Did Black Holes Form Before the Galaxies That Surround Them? 80beats: Two Stars Are Born Near the Perilous Edge of a Black Hole 80beats: Confirmed: Monstrous Black Hole Lurks in Our Galaxy’s Center 80beats: Researchers Look Into a Black Hole (But Does The Black Hole Look Back?) Image: NASA/CXC/CfA/W. Forman et al./NRAO/AUI/NSF/W. Cotton;/ESA/Hubble Heritage Team (STScI/AURA), and R. Gendler. The M87 galaxy.
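A quick back-of-the-envelope calculation, not in the article itself, shows what the revised figure implies physically. The Schwarzschild radius formula r_s = 2GM/c² gives the size of the event horizon mentioned above for a 6.4-billion-solar-mass black hole:

```python
# Back-of-the-envelope: event-horizon (Schwarzschild) radius of M87's
# black hole at the revised mass of 6.4 billion solar masses.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg
AU = 1.496e11        # astronomical unit, m

M = 6.4e9 * M_SUN                 # revised mass of M87's black hole, kg
r_s = 2 * G * M / c**2            # Schwarzschild radius, m

print(f"r_s = {r_s:.2e} m = {r_s / AU:.0f} AU")
# roughly 1.9e13 m, i.e. on the order of a hundred AU
```

That is several times the radius of Pluto's orbit, a useful sense of scale for a "local" supermassive black hole.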
3. The primary purpose of assessment is to improve teaching and learning. Assessment is used in educational settings for a variety of purposes, such as keeping track of learning, diagnosing reading and writing difficulties, determining eligibility for programs, evaluating programs, evaluating teaching, and reporting to others. Underlying all these purposes is a basic concern for improving teaching and learning. In the United States it is common to use testing for accountability, but the ultimate goal remains the improvement of teaching and learning. Similarly, we use assessments to determine eligibility for special education services, but the goal is more appropriate teaching and better learning for particular students. In both cases, if improved teaching and learning do not result, the assessment practices are not valid (see standard 7). If an educational assessment practice is to be considered valid, it must inform instruction and lead to improved teaching and learning. The assessment problem then becomes one of setting conditions so that classrooms and schools become centers of inquiry where students and teachers investigate and improve their own learning and teaching practices, both individually and as learning communities. This in turn requires teachers, schools, and school districts not only to use assessment to reflect on learning and teaching but also to examine, constantly and critically, the assessment process itself and its relation to instruction. No matter how elaborate and precise the data provided by an assessment procedure are, its interpretation, its use, or the context of its use can render it useless or worse with respect to improving teaching and learning. For example, climates in which perfectly useful assessment data are employed to place blame can lead to defensiveness rather than to problem solving and improved learning. 
Ensuring that assessment leads to the improvement of teaching and learning is not simply a technical matter of devising instruments for generating higher quality data. At least as important are the conditions under which assessment takes place and the climate produced by assessment practices. Sometimes the language we choose to frame assessment distracts us from this standard. We believe that the commonly expressed need for “higher standards” is better expressed as the need for higher quality instruction, for without it, higher standards simply means denying greater numbers of students access to programs and opportunities. The central function of assessment, therefore, is not to prove whether teaching or learning has taken place, but to improve the quality of teaching and learning and thereby to increase the likelihood that all members of the society will acquire a full and critical literacy (see standard 1).
This is a DIGITAL EDITION book. You will receive an email within 48 hours of purchase with complete download instructions. "Say and Do" Holiday Worksheets will improve students' language skills with vocabulary word-pictures, riddles, stories, spinning games, yes/no questions, coloring sheets, and following-directions activities.
- Activities for Halloween, Thanksgiving, Christmas, Hanukkah, Valentine's Day, Easter, St. Patrick's Day, and the birthdays of George Washington, Abraham Lincoln, and Martin Luther King
- 179 perfect-bound reproducible pages
One gram of turmeric at breakfast has been shown by a new study to improve memory in people with memory problems. In the study itself participants were given 1 gram of turmeric mixed into their ordinary breakfasts (Lee et al., 2014). Their working memory was tested before and some time after their breakfast, and the results were compared with a placebo-control condition. Professor Wahlqvist, who led the Taiwanese study, explained the results: “We found that this modest addition to breakfast improved working memory over six hours in older people with pre-diabetes.” Diabetes and memory problems are linked because having diabetes makes it more likely that a person will also develop dementia if the diabetes is not well controlled. Turmeric is a yellow spice already widely used in cooking, especially in Asia. Its distinctive yellow colour is given to it by a substance called curcumin, which makes up between 3-6% of turmeric. It is the curcumin which is thought to have an active effect in reducing the memory problems associated with dementia. Professor Wahlqvist explained the importance of working memory, which was tested in this study: “Working memory is widely thought to be one of the most important mental faculties, critical for cognitive abilities such as planning, problem solving and reasoning. Assessment of working memory is simple and convenient, but it is also very useful in the appraisal of cognition and in predicting future impairment and dementia.” “Our findings with turmeric are consistent with these observations, insofar as they appear to influence cognitive function where there is disordered energy metabolism and insulin resistance.”
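Given the 3-6% curcumin fraction quoted above, the study's one-gram dose of turmeric corresponds to only a few tens of milligrams of the presumed active compound. A quick sketch of the arithmetic:

```python
# Curcumin content implied by the study's 1 g turmeric dose, using the
# 3-6% curcumin-by-weight range quoted in the text.
turmeric_g = 1.0
curcumin_low_mg  = turmeric_g * 0.03 * 1000   # 3% of 1 g, in milligrams
curcumin_high_mg = turmeric_g * 0.06 * 1000   # 6% of 1 g, in milligrams

print(f"{curcumin_low_mg:.0f}-{curcumin_high_mg:.0f} mg curcumin")  # 30-60 mg
```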
Speech/Language Therapy in The Day School In The Day School, speech/language therapy is provided with the unique opportunity to support communication and language development for students throughout every aspect of the school day and to maximize language in those ongoing, everyday situations. We provide a wide variety of services related to communication disorders and communication development, and the speech/language pathologist is the individual whose expertise supports and guides many of the diagnostic and educational services aimed at improving the communicative ability of our students. While services to students may be provided in individual sessions or in group settings — in a classroom, in a speech/language pathologist’s office or other area — the preferred location for any service is the setting in which the skills of communication are to be used. The educational process may involve helping students to: - develop the ability to understand, recall and use words - develop the ability to understand and use sentences - develop the grammatical aspects of language - develop the ability to understand and generate a series of related sentences (as in writing paragraphs, giving directions, telling stories, making needs and wants known) - develop the ability to effectively use language as a tool for communication in a variety of educational and family situations such as social conversation, explanations, descriptions of events, classroom interactions, etc. - develop the ability to use language as a tool for thinking and learning – learning to ask questions and use language to think through problems and develop plans Alternative and Augmentative Communication Our speech/language pathologists can also assist students in developing the use of alternate means of communication. This approach to therapy is appropriate when speech is severely impaired and cannot be used as the primary means of communication. 
After gathering input from the student, family, teacher and other professionals involved with the student to identify communication needs, our speech/language pathologist will: - design and fabricate appropriate boards/books for the student - select and/or recommend appropriate speech output prostheses (electronic communication system) and/or computer system - along with other members of the classroom team, teach the student sign language or a gestural system - train the student to use the recommended system - consult with other members of the classroom team to maximize opportunities for the student to practice functional use of the system - train families and significant others in the use of communication systems - provide guidance to the classroom team to offer a total communication experience For more information, please call 412-420-2487.
West Nile Virus Cases Causing Concern in Texas
Blue Cross and Blue Shield of Texas (BCBSTX) continually monitors events that could impact the health and wellness of your patients. The Centers for Disease Control and Prevention (CDC) reports the highest number of cases documented through the end of July since 2004. The high count is attributed in large part to the much greater than average number of infected mosquitoes, brought on by the mild winter, early spring and very hot summer we have experienced this year. In Texas, over 100 people are confirmed to have fallen ill with West Nile infection, more than double the 10-year average for cases reported before August. Of that total, the majority were in the Dallas, Tarrant, Collin and Denton counties of North Texas. Some deaths have also been connected to the illness, according to the Texas Department of State Health Services. We encourage you to communicate to your patients the preventive measures they can take to reduce their risk of contracting West Nile, including:
- Wear an insect repellent, preferably one with DEET, when outdoors
- Avoid being outdoors between dusk and dawn, when mosquitoes are biting
- Install or repair screens to keep mosquitoes outside
- Drain standing water to eliminate breeding habitats
- Keep pools, saunas and hot tubs chlorinated
- Wear light-colored clothes when outdoors, and dress in long sleeves and long pants if possible
The CDC also provides a fact sheet with more information about West Nile on its website: cdc.gov/ncidod/dvbid/westnile/wnv_factsheet.htm.
<urn:uuid:13355046-30f7-4a18-9f67-23aa662234ad>
CC-MAIN-2016-26
http://www.bcbstx.com/provider/news/2012_08_29.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397695.90/warc/CC-MAIN-20160624154957-00145-ip-10-164-35-72.ec2.internal.warc.gz
en
0.940949
332
3.03125
3
Difference in the potentials of the electrodes on the right and left of a galvanic cell. When E_cell is positive, positive charge flows from left to right through the cell. The limiting value of E_cell for zero current flowing through the cell, all local charge-transfer and chemical equilibria being established, was formerly called the electromotive force (emf). The name electromotive force and the symbol emf are no longer recommended, since a potential difference is not a force.
PAC, 1996, 68, 957 (Glossary of terms in quantities and units in Clinical Chemistry (IUPAC-IFCC Recommendations 1996)) on page 971
IUPAC. Compendium of Chemical Terminology, 2nd ed. (the "Gold Book"). Compiled by A. D. McNaught and A. Wilkinson. Blackwell Scientific Publications, Oxford (1997). XML on-line corrected version: http://goldbook.iupac.org (2006-) created by M. Nic, J. Jirat, B. Kosata; updates compiled by A. Jenkins. ISBN 0-9678550-9-8. doi:10.1351/goldbook
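As an illustration of this sign convention, here is a minimal sketch that computes the zero-current cell potential as the difference between the right- and left-hand electrode potentials. The standard reduction potentials and the Daniell-cell example are textbook values, not data from the entry above:

```python
# Zero-current cell potential as a difference of electrode potentials:
# E_cell = E(right electrode) - E(left electrode).
# Standard reduction potentials in volts vs. SHE (textbook values).
STANDARD_POTENTIALS = {
    "Zn2+/Zn": -0.76,
    "Cu2+/Cu": +0.34,
}

def cell_potential(left: str, right: str) -> float:
    """Potential difference of a cell written left | ... | right."""
    return STANDARD_POTENTIALS[right] - STANDARD_POTENTIALS[left]

# Daniell cell: Zn | Zn2+ || Cu2+ | Cu
e = cell_potential("Zn2+/Zn", "Cu2+/Cu")
print(f"E_cell = {e:+.2f} V")  # positive sign: charge flows left -> right
```

Swapping the two electrode names flips the sign of the result, consistent with the convention that a positive value means positive charge flows from left to right through the cell.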
<urn:uuid:b59ec927-d4d1-4dec-af49-5e68d94c35fe>
CC-MAIN-2016-26
http://goldbook.iupac.org/E01934.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393533.44/warc/CC-MAIN-20160624154953-00198-ip-10-164-35-72.ec2.internal.warc.gz
en
0.77811
242
3.390625
3
Makin' The SoilThe process of soil formation happens in many ways. Here are five of the most important factors involved in soil formation. There are others, but these are the biggies. (1) It can be created because of the shape of the landscape. That shape is called the topography. When you have mountains, the sides of the mountains are said to have a slope. When you have a slope and it rains, there will be drainage. The runoff carries away small rocks and minerals. This runoff winds up in valleys or in the ocean. It slowly builds up and the small pieces make soil. (2) There are climatic effects that create soil. Moisture and rain combine with the temperature to do amazing things to rocks. We just explained that when it rains you have runoff and erosion. Those physical activities break down the rocks and hard surfaces. Temperature plays a role when you move below and above the freezing point. When water freezes, it expands. Rocks and soil that hold water can be cracked when the water freezes and expands. They pop open with a cracking sound! (3) What's in the soil is dependent on geologic factors. The type of soil under your feet is dependent on the bedrock deep below the surface. As the bedrock breaks down, smaller pieces move to the surface and mix with the existing soil. (4) In the same way that there are large geologic factors, chronological factors play an important part in the process. Chronological means time. You need time to make soil. That's it. Sediment can move around quickly but it takes a long time to break down bedrock. We can't just sit and watch this process happen. We have to study it over many years. Also, if we pollute our soil we can't renew it in our lifetime. It takes hundreds to thousands of years. (5) Soil is also created by biological factors. You'll find that soil is half minerals/rocks and half air/water. All sorts of biological things are happening in the air/water space. The organic material is most important. 
There are tiny living organisms (like bacteria) that break down organic stuff. The "stuff" could be dead leaves or dead animals. The organic stuff is called humus. There are also roots and tunneling creatures that work like the microbes. They turn the soil around and move it. They churn the pieces of soil.
<urn:uuid:d43e8f44-abb0-46b0-b0f9-cc9edb2f6a3f>
CC-MAIN-2016-26
http://www.geography4kids.com/files/land_soil2.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398873.39/warc/CC-MAIN-20160624154958-00149-ip-10-164-35-72.ec2.internal.warc.gz
en
0.921998
583
3.953125
4
It is hard to imagine life in America without crowds. Concerts, athletic events, conventions, theater productions, shopping centers, night clubs, movies and other public assembly venues are woven throughout the fabric of American life. Most people never really stop to consider the subject of crowds until tragedy strikes. Unfortunately, as you will learn in this course, there have been a number of incidents in which people within crowds were seriously injured or killed because they were unable to reach safety. The National Fire Protection Association, or NFPA, is an organization dedicated to improving the safety of crowds within large public assembly venues. NFPA has long realized that crowds are often unfamiliar with their surroundings, and that if someone were there to help guide people to safety, more people would survive. This is the job of the crowd manager, and this course is designed to train you to be a crowd manager. The IAVM Trained Crowd Manager, or TCM, is the first course for crowd managers to be endorsed by the NFPA. The TCM course will give you the skills you need to help guide people to safety in an emergency. The responsibilities of the crowd manager may be conferred upon various employees of a venue and are not necessarily limited to security, ushers, or other employees traditionally associated with crowd control. The TCM Program is designed to equip these crowd managers through a standardized certificate program comprising two phases: Phase One: The first part of the TCM course is a self-paced, web-based program of computer-based training. Most people complete Phase One in about six hours. To get started you will click “Get Started” at the top of the TCM homepage, register as a student, and begin the course.
Phase One has four modules:
- Module 1: Introduction and Administration
- Module 2: Risks and Remedies
- Module 3: Crowd Movement
- Module 4: Moving People with Disabilities
The web-based training will begin with some administrative items and a pre-course assessment that allows you to get a feel for your knowledge of the subject. Module 2 will introduce you to the risks that may present themselves within public assembly venues and the life safety equipment and other safety measures that help mitigate those risks. In Module 3 you will learn about different types of crowds and the fundamentals of emergency and non-emergency crowd movement. And finally, in Module 4 you will be introduced to different methods for assisting people with disabilities in an emergency. To earn your certificate you must successfully complete a 60-question, multiple-choice post-course assessment. Once you pass this test you will be able to save and print a completion certificate. With the certificate in hand you can proceed to the second phase of the TCM course. Phase Two: The second phase of the TCM course is all about the specific venue where you will serve as a crowd manager. No two nightclubs, shopping malls, or sports stadiums are the same. This is why Phase Two of the TCM program is so important. Phase Two allows the student to apply the general knowledge presented in the web-based course (Phase One) to a specific venue. Phase Two training will vary from venue to venue but takes a little more than two hours to complete. It is important to understand that Phase Two training must be completed for each venue where you serve as a crowd manager. So, while every crowd manager completes Phase One, those crowd managers who serve in multiple venues will complete several different versions of the Phase Two (venue-specific) training.
Phase Two comprises three modules:
- Module A: Venue familiarization
- Module B: Venue emergency plans and procedures
- Module C: Venue knowledge assessment
Phase Two will be different for each venue but will always begin by familiarizing you with the venue and its policies and procedures. In Module B, you will learn specific details about the venue’s emergency plans and procedures so that you, as a crowd manager, are able to direct people to safety in accordance with the venue’s plan. Finally, to activate your certificate, you must successfully complete a 25-question, multiple-choice post-course assessment about the venue’s emergency procedures. Once you successfully complete this assessment, the venue administrator will sign the certificate you brought to the venue and you are ready for work. As a trained crowd manager you will have access to this web portal and may review information, continue your education, and update your skills. This is important, because as a trained crowd manager you are the most important piece of safety equipment in the crowd.
<urn:uuid:7126967a-74fd-4f47-8cb1-8f607f6fd126>
CC-MAIN-2016-26
http://www.trainedcrowdmanager.com/about_TCM.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783402699.36/warc/CC-MAIN-20160624155002-00144-ip-10-164-35-72.ec2.internal.warc.gz
en
0.955412
928
2.703125
3
Tips on cot death
What are the principal measures to prevent cot death? We wish to point out that the causes of cot death are still not 100% clear. However, scientific studies have highlighted a number of risk factors which undeniably have a negative effect on sleeping babies. These risks can be reduced if you take the following preventive measures.
- Putting the baby on its back is the only safe sleeping position
Always put a baby to sleep on its back. You should absolutely avoid putting it on its stomach or side. These positions should be avoided for a number of reasons. Babies will fall asleep sooner on their back if they are put to sleep in this position right from the start. From the age of three months, however, your child may roll over from its back onto its stomach. Even if you put him on his back, your baby may unconsciously roll over onto his stomach in his sleep. Moreover, the risk of cot death is highest in children who sleep on their stomach for the first time. Try to avoid this by letting your baby play on his tummy now and again when he's awake, but only under the constant supervision of an adult. Doctors and experts recommend laying your baby in different positions while he is awake; this is ideal for your baby to learn to lift his head, crawl and sit. When your baby is sleeping, absolutely avoid letting him lie on his tummy!
- Smoking during and after pregnancy is bad for mother and child
Smoking increases the risk of lifelong damage to health and of cot death. Nicotine impedes the function of particular heart receptors which accelerate the heart rhythm. This slows down the response in the event of a shortage of oxygen, e.g. when the baby is asleep. For this reason, both mum and dad should avoid smoking during and after pregnancy. Learn more about how you can avoid contact between your baby and smoke. The consumption of alcohol and drugs by the mother also increases the risk of cot death.
- Make sure the baby does not get too hot
The baby must not get too hot, so as to prevent the risk of heat congestion. This applies not only when the baby is asleep but also when awake. Pay particular attention to the combination of clothing, ambient temperature and bedding used. If in spite of these measures your baby does get too hot (e.g. when he has a fever or in high summer temperatures) and he sleeps on an AeroSleep mattress cover, he will be better able to keep his body temperature under control. Air can circulate freely through our 3D structure: the hot air from the body is discharged and fresh air is supplied from the surroundings. In this way, AeroSleep reduces the chance of overheating and heat congestion.
- The bedding must guarantee a safe sleeping environment
Always use a safe bed with safe bedding. Avoid all material in which the baby may suffocate or get stuck.
- Keep a regular and direct eye on your baby while he is asleep
Always stay near your baby. Preferably keep your baby in your room for the first six months, where he should sleep in a separate cot. Also make sure that the baby sleeps near you during the day. Through your natural vigilance, you will notice sooner if something is wrong. If possible keep this up until the child can easily turn around by itself (at about six months). Please note: never sleep with your baby in the same bed, because an adult bed is more likely to lead to cot death than a baby bed. Even when you take these precautionary measures into account, it remains important to know how to recognize the risks of cot death. Read here how you can detect potentially dangerous situations.
What other factors increase the risks when your baby sleeps?
Cot death occurs in particular in children under the age of six months, with a peak between two and four months. Cot death occurs about twice as often in male babies.
- Prematurity
Prematurity increases the chance of cot death: the risk is inversely proportional to birth weight and length of pregnancy. The lower the birth weight and the shorter the pregnancy, the higher the chance of cot death.
- Sleep rhythm
Try to respect the child's life rhythm and follow a regular time schedule. This ensures that the child does not get too little sleep, which may also increase the risk of cot death.
- Breast feeding
Breast feeding is recommended as the gold standard to reduce baby mortality in the post-natal and post-neonatal period. However, breast feeding is currently not considered a significant preventive factor against cot death.
- Medicines
Medicines with a sleep-inducing side effect should be avoided. They may cause a baby to sleep too deeply. Give your baby a medicine only if the doctor advises you to do so, because for many medicines too little is known about their effect on small children. If you are breast feeding, you should also avoid these medicines because you pass them on via your breast milk. Vaccination, infections and ALTE (apparent life-threatening events) have no risk-enhancing effect on cot death.
- Season
Cot death occurs more often in winter. Nevertheless, we recommend dressing your child in accordance with the ambient temperature in the room where he sleeps and not on the basis of the outdoor temperature. Even if it is very cold outside, the temperature inside the room may be much higher, e.g. as a result of heating. Air the child's room as much as possible. Where appropriate, use a fan.
- Rest and regularity
Babies are sensitive to disruptions in rest. It is better to avoid restless situations (travel, staying with other people, moving house, etc.) during the child's first year. They can easily upset a baby, leading to disturbed sleep. This is why it is important to keep an even closer eye on the child in such situations. A baby also experiences a sudden change in environment as stressful, causing a change in its sleep pattern.
It therefore appears that the chance of cot death during day care is higher, in particular during the first days.
- Stay at the same altitude
At high altitudes or during air travel, there is lower oxygen tension. Babies still have foetal haemoglobin, which helps them tolerate a relative decrease in oxygen. However, long exposure to low oxygen tension may increase the risk of cot death.
- Parent-related factors
These are often linked with the young age of the parents, a low level of education, single parenthood, etc. These elements together may be responsible for an increased risk of cot death. Moreover, it has been noted that the safety recommendations are followed less closely among this group of parents. Unfortunately it is impossible to guarantee complete safety, because babies remain vulnerable. The risk that something suddenly happens to a baby can never be completely excluded. However, if the above advice and tips are followed, the chance of particular possible risks of cot death becomes much smaller. Never hesitate to ask your doctor for medical advice. You should do this in any case if you think there is something wrong with your baby, if you have any doubts or questions, or in the following possibly threatening situations.
When is lowered blood pressure dangerous for babies?
Babies between the ages of two and four weeks and between five and six months compensate for a drop in blood pressure with an increased heart rate, restoring their blood pressure to normal. Babies between two and three months, who are at the greatest risk of cot death, are unable to compensate for a drop in blood pressure by increasing their heart rate. This means that the blood pressure remains low, which may make the baby slower to wake.
<urn:uuid:6a760f71-f30f-4e2b-91c8-aeb800ff5e50>
CC-MAIN-2016-26
http://www.aerosleep.com/en/technology/tips-on-cot-death
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397842.93/warc/CC-MAIN-20160624154957-00183-ip-10-164-35-72.ec2.internal.warc.gz
en
0.955246
1,571
3.046875
3
It’s not easy to reach the Cofán indigenous community of Dureno in northeastern Ecuador. First you have to travel 15 miles down a winding rural road from the nearest town. Then you have to ask permission to cross a river by boat to reach the reserve. Dureno’s isolation, in the midst of a great forest, made it a perfect, if somewhat inconvenient, place to hold a workshop for indigenous peoples on climate change. It was there, last month, that the Coordinating Body of the Indigenous Organizations of the Amazon Basin (COICA) held the last in a series of multi-day workshops that focused on the effects of climate change on indigenous communities. The workshops are part of a larger project funded by the Inter-American Development Bank that counts Environmental Defense Fund (EDF) and Woods Hole Research Center among its partners. The overarching goal is to teach indigenous communities about the effects of climate change and about the ability of a program called REDD+ (Reducing Emissions from Deforestation and Degradation) to help them preserve their tribal lands. Passing the Torch The project first began with a week-long ‘train-the-trainers’ workshop in November 2012. EDF’s Chris Meyer was among the experts who trained eight technicians from Ecuador, Colombia, Peru and Brazil, who have in turn held workshops in their own countries over the last seven months to disseminate what they learned. Topics covered in these workshops vary widely: forest carbon measurements, human rights, international climate change negotiations and more. Individuals from hundreds of indigenous communities have now participated. Victims or Game Changers? Indigenous groups in the Amazon and elsewhere will be hard hit by climate change. Steve Schwartzman, EDF’s director of tropical forest policy, points out in a recent publication in Philosophical Transactions of the Royal Society B that these communities are often the first to pick up on changes to the climate on the ground. 
Participants at the Dureno workshop were no exception: they spoke of changing rainfall patterns, shifting agricultural seasons and unpredictable river levels. But participants in the climate change workshops also learn that their communities possess the tools to help stop deforestation, a major driver of climate change. They can make their voices heard in national and international discussions on REDD+ and the creation of carbon markets that will compensate them for preserving their forests, providing an economic alternative to logging or clearing forest for agriculture. You need only look at satellite images of the Cofán territory (image at top) to understand what’s at stake: while surrounding forests have been cleared for small-scale farming in past decades, the Cofán forests remain standing. This pattern of indigenous stewardship is evident throughout the Amazon, but it needs to be supported by economic incentives. Future of Dureno There is a reason for Dureno’s inaccessibility: after decades of exploitation by oil companies, loggers and farmers that contaminated their streams and threatened their forests, Cofán leaders sealed off their borders from outsiders. They clearly marked their territory, installed indigenous park rangers, and opted for greater isolation over unsustainable economic pursuits. REDD+ offers Dureno and countless other communities the chance to be rewarded for their stewardship – ending the false choice between earning a living and conserving the forest. For Dureno, though, this will involve reengaging with the world. Participants at the Dureno workshop said that the Ecuadorian government needed to include them in developing a national REDD+ strategy for the country. What is true for the Cofán is true for indigenous people throughout the Amazon. REDD+, as a program for fighting climate change by preserving forests, will only work if it serves the needs and aspirations of the people who control those lands.
The COICA workshops are an important first step in that direction.
<urn:uuid:bccc1d2d-3af4-4b6e-9a9a-f04a32ce5641>
CC-MAIN-2016-26
http://www.edf.org/blog/2013/11/14/advancing-indigenous-climate-action-heart-ecuadors-amazon
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396455.95/warc/CC-MAIN-20160624154956-00014-ip-10-164-35-72.ec2.internal.warc.gz
en
0.950544
794
3.28125
3
meninx tenuis
The two delicate layers of the meninges, the arachnoid mater and pia mater (vs. the tough pachymeninx or dura mater), considered together; by this concept, the arachnoid and pia are two parts of a single layer, much like the parietal and visceral layers of a serous membrane or bursa; although separated by the subarachnoid space they are connected via the arachnoid trabeculae and become continuous where the nerves and filum terminale exit the subarachnoid space (the cerebrospinal fluid-filled space bounded by the leptomeninges).
Origin: lepto- + G. meninx, pl. meninges, membrane (05 Mar 2000)
<urn:uuid:b8e82535-2370-4e7f-bc83-aea15a4a9195>
CC-MAIN-2016-26
http://www.mondofacto.com/facts/dictionary?meninx+tenuis
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399117.38/warc/CC-MAIN-20160624154959-00135-ip-10-164-35-72.ec2.internal.warc.gz
en
0.777481
187
2.90625
3
Galileo Finds Veritable Chemical Factory On Europa News story originally written on March 29, 1999 Scientists have found another chemical on Jupiter's moon Europa. The chemical is hydrogen peroxide. You might have used hydrogen peroxide in toothpaste to whiten your teeth. Or you might have used it to clean germs out of a cut. There are other chemicals on Europa. The Galileo spacecraft has found water, carbon dioxide, and maybe even salt.
<urn:uuid:dc65b605-4191-466c-87c5-6f7cbe93af1a>
CC-MAIN-2016-26
http://www.windows2universe.org/headline_universe/peroxide.html&edu=elem
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393442.26/warc/CC-MAIN-20160624154953-00189-ip-10-164-35-72.ec2.internal.warc.gz
en
0.94592
457
2.796875
3
The growing use of routine mammograms over the past 30 years has done little to lower the death rate from breast cancer but has sharply increased the number of women who are wrongly diagnosed with the disease, a new study reported. The study, published Wednesday in the New England Journal of Medicine, is sure to intensify the already fierce debate over how often women should get mammograms, a controversy that has embroiled policymakers, politicians and physicians — not to mention their female patients. The researchers didn’t make recommendations about how frequently women should get mammograms, but their findings put them squarely in the camp of those who are increasingly cautious, if not downright skeptical, about the screening test. The study estimated that over the past 30 years, a total of 1.3 million women have been misdiagnosed with the disease. In other words, while their mammograms revealed tumors considered potentially cancerous at the time, the women, if left untreated, never would have developed cancer.
<urn:uuid:89e32480-0951-4c4e-a877-9e5689e77c74>
CC-MAIN-2016-26
http://www.tvnewslies.org/tvnl/index.php/news/health/26224-mammograms-barely-lower-death-rate-lead-to-many-wrong-diagnoses-study-finds.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395560.14/warc/CC-MAIN-20160624154955-00019-ip-10-164-35-72.ec2.internal.warc.gz
en
0.967934
196
2.6875
3
Focus: New View of Magnetic Atoms Researchers have taken one of the clearest looks at the magnetic behavior of atom-thin layers of metal. The team grew very uniform layers of cobalt and measured the direction of the magnetism inside. They found that as the thickness of the metal film increases stepwise from one to three atomic layers, its magnetism changes direction twice, by 90 degrees each time. The team’s calculations completely explain this surprising, rarely observed effect, as they report in the 14 April PRL. Understanding atomic-scale magnetic interactions like these is essential for researchers hoping to decrease the size of magnetic memories. Computer hard drives encode 0s and 1s as the alignment directions of tiny bar magnets. Each bar magnet is a chunk of metal whose atomic spins are aligned; the amount and direction of magnetism from those spins is called the magnetization of that chunk. To build smaller units of memory, researchers need to understand how smaller numbers of spins orient themselves, such as in sheets one or a few atoms thick. Complicating this task, metal atoms deposited across a surface normally form many small hills and valleys, with no broad plateaus of uniform height. Determining a sample’s thickness or magnetization often involves some guesswork, says Juan de la Figuera of the Autonomous University of Madrid. To make an unambiguous measurement, de la Figuera, his student Farid El Gabaly and colleagues used an electron microscope to image a layer of cobalt atoms growing on a ruthenium surface. Using the images, they continually fine-tuned the cobalt source during the growth process to create uniform layers one, two, or three atoms thick and 10 microns across. With wide, atomically flat areas, the team could measure magnetization knowing exactly what thickness they were observing. 
They used a so-called spin-polarized electron beam, in which the electron spins point the same direction, and imaged the surface with the beam polarized both perpendicular to the surface and parallel to it. Combining the images, the team generated a map of the magnetization in different regions of the surface, like a topographical map of a country. The one- and three-atom-thick layers were both magnetized parallel to the surface, like a compass needle, while the two-atom layer was magnetized perpendicular to the surface. “The surprising thing was why for a very, very thin film it would want to be in plane and then it would go out of plane,” says de la Figuera. Such flip-flopping magnetization also occurs in iron, he says, where its precise cause has remained obscure. Knowing the structure of the cobalt layers unambiguously was key to explaining the effect theoretically, says de la Figuera. Left to themselves, the cobalt atoms would magnetically orient “in-plane,” like an array of bar magnets lying close together, flat on the floor. But other influences, such as the distorted cobalt crystal structure caused by the ruthenium below, can favor an out-of-plane alignment. That’s because the electrons involved in the magnetization also participate in the bonds between atoms, so the magnetic alignment is influenced by the positions of neighboring atoms. The team calculated that for exactly two atomic layers, these other influences overcome the atoms’ intrinsic tendency to align in-plane. “The paper confirms in a beautiful and spectacular way our understanding of nanomagnetism,” says André Thiaville of the University of Paris-South. “It is very nice to have real space observations on very clean samples, in order to see the very fundamental mechanisms at work.” JR Minkel is a freelance science writer in New York City.
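The image-combination step described above can be sketched numerically. In this minimal sketch (the array values, shapes, and the simple two-component model are illustrative assumptions, not data from the study), one asymmetry image taken with the beam polarized in the surface plane and one taken with it polarized perpendicular to the surface are combined pixel by pixel into a map of the magnetization direction:

```python
import numpy as np

# Toy asymmetry images from a spin-polarized measurement (illustrative values).
# Each pixel value is proportional to the magnetization component along the
# beam's polarization direction.
asym_in_plane = np.array([[0.9, 0.8],
                          [0.0, -0.9]])
asym_out_of_plane = np.array([[0.1, 0.0],
                              [0.95, 0.1]])

# Tilt of the magnetization out of the surface plane, per pixel:
# 0 deg = fully in-plane, +/-90 deg = fully out-of-plane.
tilt_deg = np.degrees(np.arctan2(asym_out_of_plane, asym_in_plane))

# Classify each region by its dominant component, as in the layer-by-layer
# description in the article (one- and three-layer regions in-plane,
# two-layer regions out-of-plane).
orientation = np.where(np.abs(asym_out_of_plane) > np.abs(asym_in_plane),
                       "out-of-plane", "in-plane")
print(tilt_deg)
print(orientation)
```

The point of the sketch is only that two orthogonally polarized images together determine the full magnetization direction at each pixel, which is what lets a map like the team's distinguish in-plane from out-of-plane regions unambiguously.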
<urn:uuid:e0471a8d-37a0-48a6-9872-00888ce2a8e0>
CC-MAIN-2016-26
http://physics.aps.org/story/v17/st13
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397562.76/warc/CC-MAIN-20160624154957-00145-ip-10-164-35-72.ec2.internal.warc.gz
en
0.942142
780
3.609375
4
WHY YOU SHOULD FEED PETS NON-GMO & ORGANIC FOOD or what is really in commercial pet foods Do not read this article while eating your dinner or feeding your pet! If you think your pet food contains healthy ingredients like whole chicken, choice cuts of beef, fresh grains and all the nutrition your dog or cat will ever need...think again! If you care at all for your beloved pets, don’t believe the ads from the $20 billion per year U.S. pet food industry when it comes to their health. Rather than being scientifically designed to provide everything your pet needs for good health, as advertised, commercial pet foods actually lack sufficient minerals, enzymes and vitamins for good health - and often contain ingredients, food additives, pesticides and GMOs that are actually harmful to pet health. You see, the pet food industry is just an extension of the human food (agriculture) industry, and its products are at least as bad as fast food or processed foods and probably much worse! Pet food, from the corporate point of view, is just a place for slaughterhouse waste and grains considered "unfit for human consumption" to be turned into profit. What you will probably find in commercial pet foods are the contaminated or condemned remains of animals - that means Dead, Dying, Diseased or Disabled livestock. They use absolutely ALL of these waste products, including tongues, esophagi, nails, claws, feathers, beaks, tendons, lungs with pneumonia and other diseased and cancerous meat - nothing goes to waste in the name of profit. You may even find blood and fecal wastes. These are all listed on the label as byproducts, which are found in moist as well as dry pet foods. Those nutritious-sounding "whole grains" used in pet foods are just a cheap filler.
In addition to being hard to absorb in a carnivore’s digestive system, these grains are waste products, too, and have had the starch and oil removed for use in other products (usually by chemical processing), or they are the hulls and other remnants from the milling process. Some of the grains used may have been deemed unfit for human consumption because of mold, contaminants or poor storage practices. The protein in pet food comes from a variety of sources. When cattle, swine, chickens, lambs, or any number of other animals are slaughtered, the choice cuts such as lean muscle tissue are trimmed away from the carcass for human consumption. Whatever remains of the carcass -- bones, blood, pus, intestines, ligaments, and almost all the other parts not generally consumed by humans -- is used in pet food. These are called "byproducts" or other names on pet food labels, but labels are not required to define what these byproducts actually contain. Many of these remnants are indigestible and provide a questionable source of nutrition for animals. In addition, the amount of nutrition provided by byproducts can vary from vat to vat. Another source of meat you won't find mentioned on pet food labels is dogs and cats that have been put to sleep, dead zoo animals and roadkill. Once it is rendered, protein is protein - right? Rendering is the process of melting animal carcasses in a huge vat to extract oil, fat, bone meal and meat. The high heat used in rendering is supposed to make all this toxic waste safe - but it doesn’t. Though some bacteria are destroyed initially, little attention is paid to the process afterwards, leaving a high probability of later contamination from contact with the raw material. The manufacturers are not required to test for recontamination. They also don’t test for endotoxins - toxins released from a bacterium when it dies. What can the feeding of such ingredients do to your companion animal?
Some veterinarians claim that feeding slaughterhouse wastes to animals increases their risk of getting cancer and other degenerative diseases. For example, feeding byproducts of dead cows to live cows has been linked to mad cow disease. One factor is that the cooking methods used by pet food manufacturers and rendering plants can’t destroy the hormones used to fatten livestock, or medications, such as those used to treat diseased animals or those used to euthanize dogs and cats. Animal & Poultry Fat You may notice a pungent odor when you open a container of pet food. What is the source of that delightful smell? It is refined animal fat, kitchen grease, and other oils too rancid or deemed inedible for humans, doctored up for the noses of your pets and then added to the pet food. Restaurant grease has been a major component of feed-grade animal fat for over fifteen years. This grease is held in fifty-gallon drums outside of restaurants for weeks, exposed to extreme temperatures with no regard for its future use. The next few times you dine out, be sure to look out back behind the restaurant for a container with a rendering company's name on it. It is almost guaranteed that you will find one. Rendering companies pick up this rancid grease and mix the different types of fat together, stabilize them with powerful antioxidants to retard additional spoilage, and then sell the blended products to pet food companies. These fats are sprayed directly onto dried kibble or extruded pellets to make an otherwise bland or distasteful product palatable to your pet. The fat also acts as a binding agent to which manufacturers add other flavor enhancers as well. Pet food scientists have discovered that animals love the taste of these sprayed fats. Manufacturers are masters at getting a dog or a cat to eat something she would normally turn up her nose at. Wheat, Soy, Corn, Peanut Hulls & Other Vegetable Proteins The amount of grain used in pet food has risen over the last decade.
Once considered filler by the pet food industry, grain products now make up a considerable portion of pet food. The availability of nutrients in grain products is dependent upon the digestibility of the grain. The amount and type of carbohydrate in pet food determines the amount of nutrient value the animal actually gets. Dogs and cats can almost completely absorb carbohydrates from some grains, such as white rice. Up to 20% of other grains can escape digestion. The availability of nutrients from wheat, beans, and oats is poor. The nutrients in potatoes and corn are far less available than those in rice. Carbohydrate that escapes digestion is of little nutritional value; it is simply fermented by bacteria in the colon. Some ingredients, such as peanut hulls, are used strictly as "filler" and have no nutritional value at all! Two of the top three ingredients in pet food are almost always some form of grain. But cats and dogs are carnivores -- they must eat meat to fulfill certain physiological needs. So why are we feeding them a corn-based product? The answer is that corn is cheaper than meat. In 1995 Nature's Recipe pulled thousands of tons of dog food off the shelf after consumers complained that their dogs were vomiting and losing their appetite. Nature's Recipe's loss amounted to $20 million. The problem was a fungus contaminating the wheat that produced vomitoxin, a mycotoxin - a class of poisons given off by molds. Although it caused many dogs to vomit, stop eating and have diarrhea, vomitoxin is a milder toxin than most. The more virulent strains of mycotoxins can cause weight loss, liver damage, lameness, and even death. The Nature's Recipe incident prompted the Food and Drug Administration (FDA) to intervene.
Dina Butcher, Agriculture Policy Advisor for North Dakota Governor Ed Schafer, concluded that the discovery of vomitoxin in Nature's Recipe wasn't much of a threat to the human population because "the grain that would go into pet food is not a high quality grain." In other words, the grain used in pet food is not fit for human consumption and therefore poses no threat to the human population. Soy is another common ingredient sometimes used as filler in pet food. It adds bulk so that when an animal eats a product containing soy it will feel more satisfied. While soy has been linked to gas in some dogs, other dogs do quite well with it. Vegetarian dog foods use soy as a protein source. Industry critics note that many of the ingredients used as humectants -- ingredients such as corn syrup and corn gluten meal which bind water to prevent oxidation -- also bind the water in such a way that the food actually sticks to the colon and may cause blockage. The blockage of the colon may cause an increased risk of cancer of the colon or rectum. Additives & Preservatives Additives are used in commercial pet foods to improve stability or appearance, and of course provide no nutritional value. These include emulsifiers to prevent water and fat from separating, antioxidants to prevent fat from turning rancid and antimicrobials to reduce spoilage. Added color and flavor make the product more attractive to consumers and their pets. Two-thirds of the pet food manufactured in the United States contains preservatives. Of the remaining third, 90% includes ingredients already stabilized by synthetic preservatives. Premixed vitamin additives used to supplement pet food can also contain preservatives. This means that your pet may eat food with several types of preservatives that have been added at the rendering plant, the manufacturing plant and in the supplemental vitamins. In the last 40 years, the number of food additives has greatly increased.
Of the more than 8,600 recognized food additives today, no toxicity information is available for 46% of them. Cancer-causing agents are sometimes permitted if they are used at low enough levels. The risk of continued exposure to these cancer-causing agents has not been studied, and their build-up may be harmful. Ethoxyquin (EQ), for example, was found in dogs' livers and tissues months after it had been removed from their diet, and as of July 31, 1997, the FDA's Center for Veterinary Medicine requested that manufacturers cut the maximum level of EQ in half, to 75 parts per million. Though the law requires studies of the direct toxicity of additives and preservatives, most of them have not been tested for their combined effect after ingestion. Three commonly used preservatives, BHA, BHT, and EQ, have a proven synergistic effect that may lead to the development of certain types of cancer. Butylated hydroxyanisole (BHA) and butylated hydroxytoluene (BHT) are the most commonly used antioxidants in processed food for human consumption. For these antioxidants, there is little information documenting their toxicity or the safety of long-term use in pet food. In animal feeds, the most commonly used antioxidant preservative is ethoxyquin. Some pet food critics and veterinarians claim ethoxyquin is a major cause of disease, skin problems, and infertility in dogs. Ethoxyquin is not approved for use as a preservative in human food. Nitrate, also used in meat for human consumption, is converted by bacteria into another chemical form with carcinogenic properties, called nitrosamines. Very small amounts of this chemical can cause acute and chronic liver damage. "Natural" preservatives and antioxidants like Vitamin C and Vitamin E may seem better than chemical preservatives, but they may also be less effective.
To make pet food nutritious, manufacturers "fortify" it with vitamins and minerals, which are also just more highly processed, hard-to-digest chemicals. They have to do this, however, because the other ingredients in the pet food have little or no nutrition left after processing. The answer, of course, is to use fresh, whole raw foods that don’t need preservatives, that aren’t rancid and rendered, that aren’t made from by-products, and that are too disgusting to discuss during dinner! Commercially manufactured or rendered meat meals are highly contaminated with bacteria because their source includes animals that have died because of disease, injury, or natural causes. These dead animals may not be rendered until days after death, so the carcass is often contaminated with bacteria. While cooking may kill bacteria, it does not eliminate the endotoxins that can cause disease. Pet food manufacturers do not test their products for endotoxins. Escherichia coli (E. coli) is another bacterium that can be found in contaminated pet foods. E. coli, like Salmonella, can be destroyed by cooking at high temperatures; however, the endotoxin produced by the bacteria will remain. Aflatoxin comes from mold or fungi. Improper drying and storage of crops causes mold growth, which results in aflatoxin contamination. Ingredients that are most likely to be contaminated with this toxin are cottonseed meal, peanut meal, and fish meal. The National Research Council (NRC) of the Academy of Sciences set the nutritional standards for pet food until 1974, when the pet food industry created a group called the American Association of Feed Control Officials (AAFCO). At that time AAFCO chose to adopt the NRC standards rather than develop its own. The NRC standards required feeding trials for pet foods that claimed to be "complete" and "balanced."
The pet food industry found the feeding trials to be too restrictive, so AAFCO designed an alternate procedure for claiming the nutritional adequacy of pet food. Instead of feeding trials, chemical analysis would be done to determine if a food met or exceeded the NRC standards. But chemical analysis does not address the palatability, digestibility and biological availability of nutrients in pet food. So it is unreliable for determining whether a food will provide an animal with sufficient nutrition. To compensate for the limitations of chemical analysis, AAFCO added a "safety factor," which was to exceed the minimum amount of nutrients required to meet the complete and balanced requirements. By establishing its own standards and disregarding the NRC standards, AAFCO established itself as the governing body for pet food. In essence, the pet food industry developed its own standards for nutritional adequacy. Genetically Modified Ingredients Most pet foods on the market today are probably made mostly from corn products that are genetically modified (GMO) -- the seeds engineered to produce plants that can withstand repeated spraying with Monsanto's Roundup, a glyphosate-based weed killer demonstrated to have adverse health effects in animals! In 2009 about 60 percent of all the corn grown in the U.S. was genetically modified. Studies have shown that genetically modified corn causes significant kidney and liver disease in rats after only a 90-day feeding trial, and has a negative effect on the heart, spleen and other organs. A new lifetime study of rats fed GMO corn shows they died earlier than rats on a standard diet, and they developed tumors and severe kidney and liver damage as well. Half the male rats and 70 percent of females died prematurely, compared with 30 percent of males and 20 percent of females in the control group.
A 2009 article in the journal Critical Reviews in Food Science and Nutrition asserted that "The results of most of the rather few studies conducted with GM foods indicate that they may cause hepatic, pancreatic, renal, and reproductive effects and may alter hematological, biochemical, and immunologic parameters the significance of which remains unknown. The above results indicate that many GM foods have some common toxic effects." The toxic insecticidal agent Bacillus thuringiensis is present in most GMO crops in the U.S. that wind up in animal feed and pet food. Glufosinate and glyphosate are herbicides applied to millions of acres of genetically modified crops in the U.S. These herbicides cause kidney damage in animals, endocrine disruption and birth defects in frogs, and are lethal to many amphibians. Glyphosate has also been linked to miscarriages, premature births, and non-Hodgkin's lymphoma in humans. Health experts are connecting the rise in human allergies, including skin conditions and inflammatory GI disorders, to consumption of GMO foods – in particular, GMO soy. Independent animal feeding safety studies show adverse or unexplained effects of GMO foods, including inflammation and abnormal cell growth in the GI tract, as well as in the liver, kidney, testicles, heart, pancreas and brain. GMO crops have also been shown to be unstable and prone to unplanned mutations. Also, remember that corn and soy ingredients are not biologically appropriate for dogs and cats, even if they're not GMO. Both of these ingredients are associated with a variety of health problems in animals, from allergies and skin disorders to oral disease, inflammatory bowel disease, and cystitis. Problems Caused by Inadequate Nutrition in Pet Foods The idea of one pet food providing 100% of a pet’s nutrition for its entire life is a myth.
Since cereals are the primary ingredients in most commercial pet foods, and dogs and cats need protein and variety, commercial pet foods can’t provide adequate nutrition. The problems associated with a commercial diet are seen every day at veterinary establishments. Chronic digestive problems, such as chronic diarrhea, are among the most frequent illnesses treated. Allergy or hypersensitivity to foods is a common problem, usually seen as diarrhea or vomiting. The market for "hypoallergenic" pet foods is now a multimillion dollar business. These diets were formulated to address the increasing intolerance that animals have developed to commercial pet foods. Even the actual meat that is used in commercial pet foods has poor protein digestibility. Diets containing protein with less than 70% digestibility cause diarrhea in dogs. Some fillers used in these foods can also cause colitis, which is inflammation of the colon. Most pet food companies do not publish digestibility statistics, and they are never seen on pet food labels. Acute vomiting and diarrhea are also symptoms of bacterial contamination. Dry commercial food is often contaminated with bacteria, which may cause problems. Improper food storage and some feeding practices may result in the multiplication of this bacteria. For example, adding water to moisten pet food and then leaving it at room temperature causes bacteria to multiply. Yet this practice is suggested on the back of some kitten and puppy foods. Pet food formulas and the feeding practices that manufacturers recommend have contributed to other digestive problems. Feeding only one meal per day can cause irritation of the esophagus by stomach acid. Feeding two smaller meals is better. Urinary tract disease is directly related to diet in both cats and dogs. Plugs, crystals, and stones in cat bladders are caused by commercial pet food formulas. Dogs can also form stones as a result of their diet.
Rapid growth in large breed puppies has been shown to contribute to bone and joint disease. Excess calories in manufactured puppy food formulas promote rapid growth. There are now special puppy foods for large breed dogs. But this recent change will not help the countless dogs who lived and died with hip and elbow disease. There is also evidence that hyperthyroidism in cats results from commercial pet food diets. This is a new disease that first surfaced in the 1970s, when canned food products first came on the market. The exact cause and effect are not yet known. This is a serious and sometimes terminal disease and treatment is expensive. Many nutritional problems appeared with the popularity of cereal-based commercial pet foods. Sometimes this is because the diet is incomplete. Sometimes it’s a result of additives or a result of contamination with bacteria, toxins and other organisms. In some diseases the role of commercial pet food is understood, in others, it is not. The bottom line is that diets composed primarily of low quality cereals and rendered meat meals are not as nutritious or safe as you should expect for your cat or dog. The answer is the same as it is for you and the rest of your family - whole, live, fresh raw food! And thankfully, it isn’t as difficult as you may think. This web site provides some recipes and links to other raw pet food web pages. Just take a dash of your love for your pet and mix it thoroughly with a little online research and you’re on your way! Do your pets eat food industry waste? To multi-national food companies, a pet food company is just a marketing strategy for turning their waste products and garbage into profits. Four of the five major U.S.
pet food companies think profiting on waste products is more important than your pet's health (as of 2008): - Colgate-Palmolive (Hills Science Diet Pet Food) - Heinz (9 Lives, Amore, Gravy Train, Kibbles n Bits, Recipe, Vets) - Nestle (Friskies, Fancy Feast, Alpo, Mighty Dog) - Mars (Kal Kan, Mealtime, Pedigree, Sheba). The Pet Food Institute -- the industry's trade association -- acknowledges the use of by-products (i.e., waste) as extra income for producers and farmers: "The purchase and use of these ingredients by the pet food industry not only provides nutritional needs for pets at reasonable costs, but provides an important source of income to American farmers and processors of meat, poultry and seafood products for human consumption.” "There is virtually no information on the bioavailability of nutrients ... in many of the dietary ingredients used in pet foods. These ingredients are generally byproducts of the meat, poultry and fishing industries, with the potential for a wide variation in nutrient composition. Claims of ... the Association of American Feed Control Officials (AAFCO) ... do not give assurances of nutritional adequacy ..." - James Morris, Quinton Rogers, Dept. of Molecular Biosciences, University of California at Davis Veterinary School of Medicine (2008) Of the top 4 ingredients in Purina O.N.E. Dog Formula (Chicken, Ground Yellow Corn, Ground Wheat, and Corn Gluten Meal), 2 are corn-based products ... the same product. This industry practice is known as splitting. When components of the same whole ingredient are listed separately -- such as Ground Yellow Corn and Corn Gluten Meal -- it appears there is less corn than chicken, even though the combined corn ingredients outweigh the chicken.
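The "splitting" trick is easy to see with a toy calculation. The sketch below uses hypothetical per-batch weights (the real figures are not published); the only rule it relies on is that U.S. pet food labels list ingredients in descending order by weight.

```python
# Ingredient "splitting": listing corn-derived components separately
# pushes each one below chicken on the label, even when their combined
# weight exceeds the chicken. Weights here are hypothetical (kg per batch).

ingredients = {
    "Chicken": 300,
    "Ground Yellow Corn": 250,
    "Corn Gluten Meal": 200,
    "Ground Wheat": 150,
}

# Labels list ingredients in descending order by weight,
# so chicken appears first, suggesting a meat-based food:
label_order = sorted(ingredients, key=ingredients.get, reverse=True)

# But combine the corn-derived entries and corn outweighs chicken:
corn_total = ingredients["Ground Yellow Corn"] + ingredients["Corn Gluten Meal"]
print(label_order[0], corn_total > ingredients["Chicken"])  # Chicken True
```

Any real comparison would need the actual batch weights, which manufacturers are not required to disclose.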
<urn:uuid:15f93666-f7db-4a91-866b-b2a230a891a3>
CC-MAIN-2016-26
http://www.rawfoodlife.com/Raw_Pets/raw_pets.htm
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396538.42/warc/CC-MAIN-20160624154956-00050-ip-10-164-35-72.ec2.internal.warc.gz
en
0.94921
4,592
2.5625
3
WEDNESDAY, June 27 (HealthDay News) -- Standing for long periods of time or working more than 40 hours a week while pregnant may affect the baby's development, Dutch researchers report. In the new study, women who had jobs in sales, child care and teaching, which required spending many hours on their feet, had infants with heads about 3 percent smaller than those of infants born to women who worked in other jobs during their pregnancies, the researchers found. Whether this makes a difference in the long-term development of the child isn't known, said lead researcher Alex Burdorf, from the department of public health at Erasmus University Medical Center in Rotterdam. "We are not sure about that," he said. "But there are clear indications that a smaller head may negatively affect cognitive [brain] development." Exactly how it might play a part in any one child's development isn't predictable, "but at a group level a smaller head is seen as a negative start," Burdorf said. The only women who need to be concerned are those who stand all day and "whose doctor has indications that weight gain or fetal growth is less than expected," he added. The study established an association, and not a cause-and-effect link, between working conditions and baby size. The report was published in the June 27 online edition of Occupational and Environmental Medicine. For the study, Burdorf's team collected data on more than 4,600 pregnant women. The women were asked about their work situations, including whether their jobs required lifting, standing, walking, long hours or night work. The researchers measured the development of the babies throughout the pregnancy and after birth. They found that physically demanding work had no effect on the infant's size, weight or whether the child was premature. There was also no effect on infants of mothers who worked right up to the month before giving birth.
Women who spent a lot of time standing, however, had infants with an average head size 1 centimeter smaller than infants of women who didn't spend a lot of work time on their feet. In addition, women who worked more than 40 hours a week were more likely to give birth to infants who had smaller heads and weighed less than infants of women who worked fewer than 25 hours a week, the researchers found. These findings may mean that standing and working very long hours have a negative effect on the infant's development, Burdorf's group said. Apart from these exceptions, work is generally a good thing during pregnancy, Burdorf's team noted. Women who work have pregnancies with fewer complications and have fewer stillbirths or infants with birth defects, compared with women who don't work, they said. Dr. Jill Rabin, chief of ambulatory care, obstetrics and gynecology at Long Island Jewish Medical Center in New Hyde Park, N.Y., was skeptical about the findings. "The study poses more questions than it answers," she said. One problem with the study is that all the data were self-reported, so there is a possibility they are not completely accurate. In addition, while the researchers took into account some other factors, such as drinking and smoking, height and weight, they didn't account for others. Particularly important are diet and previous pregnancies, Rabin said. "These factors may be the most important," she noted. It is also not possible to know from the study if this small difference in head size will have long-term consequences. "Whether or not these infants will have long-term effects can only be determined by following them over time," she said. For more information on having a healthy pregnancy, visit the U.S. Department of Health and Human Services Office on Women's Health. SOURCES: Alex Burdorf, Ph.D., department of public health, Erasmus University Medical Center, Rotterdam, the Netherlands; Jill Rabin, M.D.
chief, ambulatory care, obstetrics and gynecology, Long Island Jewish Medical Center, New Hyde Park, N.Y.; June 27, 2012, Occupational and Environmental Medicine, online. Copyright © 2012 HealthDay. All rights reserved.
<urn:uuid:662b8b43-2f18-4642-bce5-a45b4d7f54fe>
CC-MAIN-2016-26
http://www.doctorslounge.com/index.php/news/hd/30140
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397873.63/warc/CC-MAIN-20160624154957-00096-ip-10-164-35-72.ec2.internal.warc.gz
en
0.973635
907
2.65625
3
People often dislike, criticise and put themselves down for their procrastination. In a new study, though, Wohl et al. (2010) wondered if this self-blame may be counter-productive. By following 119 first-year students through two midterm examinations, the researchers tested whether self-forgiveness about procrastination before the first midterm was associated with less procrastination before the second midterm. Although we tend to think that letting ourselves off easy will lead to more procrastination, Wohl et al. found the reverse: “Forgiveness allows the individual to move past their maladaptive behaviour and focus on the upcoming examination without the burden of past acts to hinder studying.” This may work because: “…forgiving oneself for procrastinating has the beneficial effect of reducing subsequent procrastination by reducing negative affect associated with the outcome of an examination.” Another way of thinking of this is in terms of approach and avoidance behaviours. Because we tend to avoid things that make us feel bad, pent-up guilt about a task will make us avoid that task in the future. Self-forgiveness, though, may reduce guilt and so make us more likely to approach the task. This explanation highlights the fact that we don’t just have emotional relationships with people, we also have them with tasks. Some tasks we like and look forward to like trusted old friends, while others feel more like muggers stealing away hours of our lives. The design of this study doesn’t tell us how easy it is for those who are hard on themselves to begin exercising self-forgiveness, because it only examined what participants did naturally. Unfortunately, psychologists have little evidence about the process of self-forgiveness; they only know it’s ‘A Good Thing’. Perhaps just knowing that self-forgiveness is healthy is beneficial. I hope for all our sakes it is. → Also check out: how to avoid procrastination. Image credit: Emilie Ogez
<urn:uuid:924fab69-7c5c-48a6-b831-e1960558981e>
CC-MAIN-2016-26
http://www.spring.org.uk/2010/05/procrastinate-less-by-forgiving-yourself.php
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395166.84/warc/CC-MAIN-20160624154955-00181-ip-10-164-35-72.ec2.internal.warc.gz
en
0.93893
426
2.515625
3
Some fortunate parents have enough cash on hand to pay for their child’s college education in full. However, this is a rare occurrence. Most parents deal with many sleepless nights as they try to determine how to best pay for their child’s schooling. Typically, college-age kids don’t think too much about money. They know that college is expensive, but are not too concerned with how to pay for tuition, room and board, books, and other associated fees. As a parent, it is your responsibility to educate your child on the financial impacts of attending college. Along with this, you should do your best to offer assistance to obtain as much financial aid as possible. Here are three important steps to take: 1. Speak With the College Financial Aid Offices As your child compares schools and works toward making a decision, be sure to speak with a representative of the financial aid office of each college he or she is considering. By doing so, you can learn more about the overall cost of attending the school, as well as the financial aid options that are available. You should ask the same questions of each school. That way, you and your child are “comparing apples to apples” when it comes time to make a final decision. 2. Think Outside the Box Believe it or not, some parents and students let the financial aid office do all the work for them. They receive their package in the mail and do whatever they are told. While it is essential to stay in close contact with the school and consider all options that are presented, there are additional steps you should take to increase the overall financial aid package. Have you helped your child find and apply for third party scholarships and grants? There are many available. Organizations all over the country offer funds to students based on everything from racial status, to academic performance, to chosen degree paths. 3.
Focus on Student Loans Last Upon receiving a financial aid package, you may notice that the school always includes student loans to make up any difference between grants and scholarships and the total cost. There is nothing wrong with utilizing a student loan, if it is necessary. However, before you and your child begin to search for the best type of student loan, you should exhaust every other option. For example, you may be willing to give your child a personal loan to help him or her avoid large interest payments in the future. You may discover that other loans are available as well – and they may offer better interest rates. Don’t jump the gun and accept student loans before you absolutely have to. College kids often do not consider the financial impact of their education. As a parent, you need to educate them on everything from their financial aid options, to what their choices will mean in the future. With this advice, you should be able to help your child obtain more financial aid. Anything you can do to assist is valuable, especially when considering the continually rising costs of college.
<urn:uuid:9a2f8619-1f1a-47ca-a3de-536acd1b62c1>
CC-MAIN-2016-26
http://mainstreammom.com/help-child-more-financial-aid-college/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397567.28/warc/CC-MAIN-20160624154957-00073-ip-10-164-35-72.ec2.internal.warc.gz
en
0.973738
610
2.5625
3
T2 and its close relative T4 are viruses that infect the bacterium E. coli. The infection ends with destruction (lysis) of the bacterial cell, so these viruses are examples of bacteriophages ("bacteria eaters"). Each virus particle (virion) consists of: - a protein head (~0.1 µm) inside of which is a single molecule of double-stranded DNA containing 166,000 base pairs; - a protein tail from which extend thin protein fibers. The infection proceeds as follows: - The virus attaches to the E. coli cell (a). This requires a precise molecular interaction between the fibers and the cell wall of the host. - The DNA molecule is injected into the cell (b). - Within 1 minute, the viral DNA begins to be transcribed and translated into some of the viral proteins, and synthesis of host proteins is stopped. - At 5 minutes, viral enzymes needed for synthesis of new viral DNA molecules are produced (c). - At 8 minutes, some 40 different structural proteins for the viral head and tail are synthesized. - At 13 minutes, assembly of new viral particles begins (d). - At 25 minutes, the viral lysozyme destroys the bacterial cell wall and the viruses burst out — ready to infect new hosts (e). - If the bacterial cells are growing in liquid culture, the culture turns clear. - If the bacterial cells are growing in a "lawn" on the surface of an agar plate, then holes, called plaques, appear in the lawn. Occasionally, new phenotypes appear, such as a change in the appearance of the plaques or even a loss of the ability to infect the host. As with so many organisms, the occurrence of mutations provides the tools to learn about such things as gene function and gene location. - Some strains of E. coli, e.g. one designated B/2, gain the ability to resist infection by normal ("wild-type") T2. The mutation has caused a change in the structure of their cell wall so that the tail fibers of T2 can no longer bind to it. However, T2 can strike back. Occasional T2 mutants appear that overcome this resistance.
The mutated gene, designated h (for "host range"), encodes a change in the tail fibers so they can once again bind to the cell wall of strain B/2. The normal or "wild-type" gene is designated h+.
- When plated on a lawn containing both E. coli B and E. coli B/2,
- the mutant (h) viruses can lyse both strains of E. coli, producing clear plaques, while
- the wild-type (h+) viruses can only lyse E. coli B, producing mottled or turbid plaques.
- Occasional T2 mutants appear that break out of their host cell earlier than normal.
- The mutation occurs in a gene designated r (for "rapid lysis"). It reveals itself by the extra-large plaques that it forms.
- The wild-type gene, producing a normal time of lysis, is designated r+. It forms normal-size plaques.
Mutations like these provide the tools to learn about:
- the function of the gene;
- its location in the DNA molecule (mapping).
As we have seen, E. coli strain B can be infected by both h+ and h strains of T2. In fact, a single bacterial cell can be infected simultaneously by both. Let us infect a liquid culture of E. coli B with two different mutant T2 viruses, one h r+ and the other h+ r. When this is done in liquid culture, and the product is then plated on a mixed lawn of E. coli B and B/2, four different kinds of plaques appear. The most abundant (460 each) are those representing the parental types; that is, the phenotypes are those expected from the two infecting strains. However, small numbers (40 each) of two new phenotypes appear. These can be explained by genetic recombination having occasionally occurred between the DNA of each parental type within the bacterial cell. Just as in higher organisms, one assumes that the frequency of recombinants is proportional to the distance between the gene loci. In this case, 80 out of 1000 plaques were recombinant, so the distance between the h and r loci is assigned a value of 8 map units or centimorgans (cM). Now coinfect E.
coli B with two other strains of T2. Again, four kinds of plaques are produced: parental (470 each) and recombinant (30 each). The smaller number of recombinants indicates that these two gene loci (h and m) are closer together (6 cM) than h and r (8 cM). But the order of the three loci could be either m—h—r or h—m—r. To find out which is the correct order, perform a third mating, this time between strains differing at the m and r loci. This makes it clear that the order is m—h—r, not h—m—r. But why only 12 cM between the outside loci (m and r) instead of the 14 cM produced by adding the map distances found in the first two matings? The answer comes from performing a mating between T2 viruses differing at all three loci. (Note: this time one parent carries all mutant alleles and the other all wild-type alleles — don't be confused!) The result: 8 different types of plaques are formed:
- parentals, that is, nonrecombinants, in Groups 1 and 2;
- recombinants — all the others.
Analyzing these data shows how the two-point cross between m and r understated the true distance between them. Let's first look at single pairs of recombinants as we did before (thus ignoring the third locus).
- If we look at all the recombinants between h and r but ignore m (as in the first experiment), we find that they are contained in Groups 5, 6, 7, and 8 — giving the total of 80 that we found originally.
- If we look at recombinants between h and m but ignore r (as in the second experiment), we find that they are contained in Groups 3, 4, 7, and 8 — giving the same total of 60 that we found before.
- But if we focus only on m and r (as we did in the third experiment), we find that the recombinants are contained in Groups 3, 4, 5, and 6 — giving the same total of 120 as before, while the non-recombinants are not only in Groups 1 and 2 but also in Groups 7 and 8. The reason: a double crossover occurred in these cases, restoring the parental configuration of the m and r alleles.
- Because these double crossovers were hidden in the third experiment, the map distance (12 cM) was understated. To get the true map distance, we add their number to each of the other recombinant groups (Groups 3, 4, 5, and 6), so 25 + 5 + 25 + 5 + 35 + 5 + 35 + 5 = 140, and the true map distance between m and r is the 14 cM that we found by adding the map distances between h and r (8 cM) and h and m (6 cM). The three-point cross is also useful because it gives the gene order simply by inspection:
- Find the rarest genotypes (here Groups 7 and 8), and
- the gene NOT in the parental configuration (here h) is always the middle one.
There is another mapping technique — deletion mapping — that was used with T4, another "T-even" bacteriophage.
19 February 2011
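The map-distance bookkeeping above is easy to verify mechanically. Below is a minimal Python sketch (not from the original page): the recombinant counts for Groups 3 through 8 are taken from the text, while the parental counts for Groups 1 and 2 are an assumption, chosen so that the total comes to 1000 plaques as in the earlier crosses.

```python
# Plaque counts from the three-point T2 cross over loci m, h, r.
# Groups 3-8 come from the text; Groups 1-2 (parentals) are assumed
# so that the total is 1000 plaques.
counts = {
    "g1": 435, "g2": 435,  # parentals (assumed counts)
    "g3": 25, "g4": 25,    # single crossover between h and m
    "g5": 35, "g6": 35,    # single crossover between h and r
    "g7": 5,  "g8": 5,     # double crossovers (the rarest classes)
}
total = sum(counts.values())  # 1000

def cM(recombinants: int, total: int) -> float:
    """Map distance in centimorgans: percent recombinant plaques."""
    return 100 * recombinants / total

# Two-point views of the same data, each ignoring the third locus:
h_r = cM(counts["g5"] + counts["g6"] + counts["g7"] + counts["g8"], total)
h_m = cM(counts["g3"] + counts["g4"] + counts["g7"] + counts["g8"], total)
m_r_naive = cM(counts["g3"] + counts["g4"] + counts["g5"] + counts["g6"], total)

# Each double crossover hides two exchange events between m and r,
# so count the double-crossover plaques twice to correct the distance.
doubles = counts["g7"] + counts["g8"]
m_r_true = m_r_naive + cM(2 * doubles, total)

print(h_r, h_m, m_r_naive, m_r_true)  # 8.0 6.0 12.0 14.0
```

The corrected m-r distance (14 cM) equals the sum of the two flanking intervals (8 cM + 6 cM), which is exactly the consistency check the text describes.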
<urn:uuid:7099426e-6a19-41ca-9ac9-0eef7c706602>
CC-MAIN-2016-26
http://users.rcn.com/jkimball.ma.ultranet/BiologyPages/L/Luria.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398873.39/warc/CC-MAIN-20160624154958-00119-ip-10-164-35-72.ec2.internal.warc.gz
en
0.91913
1,622
4.28125
4
Two Months In and 2015 Is Record Warm We may only be two months into 2015, but already the year is burning up the charts, setting up the possibility that it could topple 2014's newly minted record for hottest year. Together, January and February were the warmest such period on record, according to global data released Wednesday by the National Oceanic and Atmospheric Administration. With an El Niño (albeit a weak one) in place, there's potential for that warmth to stick around and elevate temperatures for more of the year. How much temperatures around the globe differed from average during the first two months of 2015. Credit: NOAA NCDC Of course, two months is only a small portion of the year, and it's impossible to say for sure how the remainder will turn out. But regardless of its final ranking, 2015 will almost certainly be much warmer than most years in the records (which stretch back to 1880), thanks to the steady rise in global temperatures fueled by the unabated release of greenhouse gases into the atmosphere. January 2015 was the second warmest January in NOAA records, as was February, which checked in at about 1.5°F warmer than the 20th century average for the month. Combined, the first two months of the year were 1.42°F above average and nearly half a degree above the same point last year. February 2014 ranked only as the 21st warmest for the month. "So we are much warmer for the year to date this year compared with last," Jessica Blunden, a climate scientist with ERT, Inc., at NOAA's National Climatic Data Center, said in an email. (NASA's temperature rankings put February in the No. 2 spot and January tied for fourth. The rankings differ because each agency uses slightly different methods to process data.) As was the case for much of last year, the eastern U.S.
was the only very cold spot on the planet this winter, with many more areas of abnormal warmth. In particular, the western half of the U.S. was a hot spot, as was a large swath of Russia and parts of Scandinavia. This pattern was a shift from much of last year, when the most sweltering spots were over parts of the oceans, particularly in the Pacific and Indian basins. “The ocean heat does appear to be tapering off a bit overall compared with summer/fall last year, but we're still seeing record-warm temperatures in many regions, including, of course, the eastern North Pacific, which appears to have had a big influence over our weather in the US,” Blunden said. Ocean water has a long “memory” for heat, as more energy must be absorbed or released to change its temperature than is the case for air. That ocean heat helped propel 2014 to the top of the standings and will continue to play a role in 2015. Of the past 12 months, nine of them have been the warmest or second warmest for that particular month, NOAA said in its latest data release. The El Niño that NOAA has declared to have started in February could also keep temperatures elevated, as historically the climate state is associated with higher global temperatures. “So if El Niño holds, and especially if it strengthens, we could very well be looking at another record warm year,” Blunden said. Regardless of whether 2015 bests 2014 or comes up short, it is part of the larger, decades-long warming trend fueled by growth of carbon dioxide and other heat-trapping gases released by human activity. Nine of the 10 warmest years in the books have occurred in the 21st century, and no record cold year has been set since 1911. Watch out 2014, as 2015 may be coming for your crown. Editor's note: This story was updated to reflect February 2014 was the 21st warmest, and not 44th. 
<urn:uuid:10dcb113-2ac5-43a9-b008-91ae8e166bd1>
CC-MAIN-2016-26
http://www.climatecentral.org/news/two-months-in-and-2015-is-record-warm-18790
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783394937.4/warc/CC-MAIN-20160624154954-00007-ip-10-164-35-72.ec2.internal.warc.gz
en
0.961206
894
2.796875
3
A primary purpose of the creation of the Senate, as a part of the federal legislative process, was, therefore, to afford protection to the various sectional interests in Canada in relation to the enactment of federal legislation. — The Supreme Court of Canada in Re: Authority of Parliament in relation to the Upper House, 1 S.C.R. 54, p. 67 In creating the Senate in the manner provided in the Act, it is clear that the intention was to make the Senate a thoroughly independent body which could canvass dispassionately the measures of the House of Commons. This was accomplished by providing for the appointment of members of the Senate with tenure for life. — The Supreme Court of Canada in Re: Authority of Parliament in relation to the Upper House, 1 S.C.R. 54, p. 77 ... it is our opinion that while [the Constitution] would permit some changes to be made by Parliament in respect of the Senate as now constituted, it is not open to Parliament to make alterations which would affect the fundamental features, or essential characteristics, given to the Senate as a means of ensuring regional and provincial representation in the federal legislative process. — The Supreme Court of Canada in Re: Authority of Parliament in relation to the Upper House, 1 S.C.R. 54, pp. 77-78 "Must legislation be approved by the Senate? Can the Senate propose legislation?" Every system needs checks and balances, and the legislative system is no exception. One house may have passed legislation too quickly, or certain concerned groups may feel they did not get a chance to be heard. That's why Canada's Constitution states that both the Senate and the House of Commons must approve bills separately in order for them to become law. The lawmaking process starts with a bill — a proposal to create a new law, or to change an existing one.
Most of the bills considered by Parliament are public bills, meaning they concern matters of public policy such as taxes and spending, health and other social programs, defence and the environment. A bill can be introduced in the House of Commons (C-bills) or the Senate (S-bills), but most public bills get their start in the Commons. A bill goes through certain formal stages in each house. These stages include a series of three readings during which parliamentarians debate the bill. Prior to third and final reading, each house also sends the bill to a committee where members examine the fine points of the legislation. Committee members listen to witnesses give their opinions on the bill, and then subject it to clause-by-clause study based on the testimony. Either house can do four things with a bill: pass it; amend it; delay it; or defeat it. Sometimes, one house refuses changes or amendments made by the other, but they usually both agree eventually. All laws of Canada are formally enacted by the Sovereign, by and with the advice and consent of the Senate and House of Commons. Once both houses have approved a bill, it is presented for Royal Assent and becomes law. 1. Passage through first house (sometimes the Senate, usually the House of Commons) 2. Passage through the second house (usually the Senate, sometimes the House of Commons) 3. Royal Assent given by the Governor General (the bill is made law on the advice and with the consent of both houses) • First reading (the bill proposing a law is received and circulated) • Second reading (the principle of the bill is debated: does the bill represent good policy?)
• Committee stage: – members of the public appear as witnesses to comment – committee members study the bill in detail, clause-by-clause – the committee adopts a report, with or without amendments • Report stage (the committee report is considered by the whole house) • Third reading (final approval of the bill) • The bill is either re-sent to the other house or is set aside for Royal Assent When senators see a need for a law, they can respond individually by introducing bills of their own. The bill may or may not make it through all the stages and become law. Even if it does not, a bill can still give visibility to an issue and so encourage debate and action. Here is an example of a Senate bill that did become law: In late November 2005, Parliament passed Senator Jean-Robert Gauthier's Bill S-3, An Act to amend the Official Languages Act, into law. Bill S-3's amendments to the Official Languages Act have given it teeth by allowing Canadians to take the federal government to court if it does not live up to its obligation to protect and promote both French and English minorities in Canada. The government can now be held to account for its progress, or lack of progress, in fulfilling our national objective of bilingualism. The government can introduce its bills in the Senate and frequently takes advantage of this option. Doing so takes pressure off the House of Commons' timetable. A bill that is complex and technical rather than partisan is a perfect candidate for initial review by the Senate. Bills to implement income tax treaties are a good example. The Senate can also pre-study bills that have been introduced in the House of Commons but have not yet reached the Senate, when it considers this to be a useful initiative. Private bills are introduced on the petition of a citizen and address the needs of a single person, company or institution, rather than applying to the general public, and are usually initiated in the Senate.
In the 19th century, private bills were a popular way to incorporate and regulate the railroad companies and religious organizations that opened the West. For the greater part of the twentieth century, divorces in certain provinces were granted by private bill introduced in the Senate. More recent private bills have authorized marriages otherwise prohibited by law, revived companies, allowed companies to change jurisdiction, and incorporated and regulated charitable and other non-profit organizations. Private bills are valuable because they can point to weaknesses in the general law. The only bills that cannot be initiated in the Senate are money bills. Money bills collect or disburse public funds. They must always be proposed by the government and considered first in the House of Commons. Only then can a money bill be submitted to the Senate for its consideration. The Senate can pass or defeat a money bill and can also amend it, but only to reduce taxes or expenditures. The Senate plays a key role in amending bills passed by the House of Commons. Senators have the expertise to put a bill under the microscope and examine it in detail, and the Senate timetable is flexible enough to allow longer periods of study. The end product is a more effective and long-lasting piece of legislation. From April 2003 to March 2009, a period that covers seven sessions of Parliament, the Senate recommended amendments to 37 of the 300 bills that made it to the committee stage of consideration. That means that the Senate proposed amendments on 12 per cent of the bills it studied. On June 22, 2006, Bill C-2 arrived in the Senate. The first Act of Parliament of a recently-elected government, it was a massive and complex bill aimed at improving government accountability. The Senate's Committee on Legal and Constitutional Affairs examined the bill. It held over 100 hours of meetings, hearing 168 witnesses.
Based on witnesses' testimony, the committee proposed an unprecedented 156 amendments to the bill. After debating the bill for 14 more hours and proposing 106 additional amendments, senators finally passed it with a total of 158 changes. It had undergone what may have been the most comprehensive legislative review in Senate history. After lengthy back-and-forth between the Senate and the House of Commons, the bill finally passed with roughly 90 Senate amendments. Even when the Commons takes the step of refusing a Senate amendment, the amending process draws attention to the contentious issue. Those aspects of the bill obviously deserve — and usually get — closer scrutiny by the government, the media or both. Canada's Constitution gives both houses of Parliament the power to defeat proposed legislation sent to it by the other house. This is called the veto power. While the Senate does not oppose the will of the Commons very often, senators have rejected bills. Senators have considered this possibility on occasions when they felt the government did not have an electoral mandate for a measure opposed by the public, when the bill was obviously outside the constitutional authority of Parliament, or under other extraordinary circumstances. The Senate can defeat government bills without the dramatic political fallout that would occur if the House of Commons did the same thing. If the House of Commons defeats a major piece of legislation, the government usually resigns and an election is called. If a bill is defeated in the Senate, the government can go back to the drawing board and submit a new bill. In 1998, after extensive hearings and consultation with a broad range of witnesses, the Legal and Constitutional Affairs Committee opposed the enactment of Bill C-220. 
The bill, although not a government bill, had been passed by the House of Commons; it would have provided the government with the power to censor publications written by persons convicted of crimes where the publication in question was based substantially on the crime for which the conviction was entered. Senators on the Committee believed that the bill was a direct violation of section 2 of the Canadian Charter of Rights and Freedoms which guarantees freedom of expression. The Senate agreed with the Committee's recommendation, and the bill was rejected. The Senate can also delay a bill, or decide not to act on it. Without being formally rejected, a delayed bill dies at the end of the session. In certain circumstances, Senate action or inaction can persuade a government that it needs to go to the people for a new mandate. In 1988, Canadians got to vote on the free trade agreement with the United States because the Senate delayed Bill C-130, to implement the agreement. The government called an election on the issue. As soon as it was re-elected, the government submitted a similar bill that Parliament passed expeditiously. In other cases, the Senate can delay a bill in order to give it more careful scrutiny than it received in the House of Commons and to draw greater public attention to the issue at hand. Bill C-10 was one such case. A large and complex tax bill, it had had a quick examination in the House before arriving in the Senate in December 2007. There had been no expectation of controversy, but in March 2008 Senators responded to urgent calls from the Canadian film industry. The bill contained a clause that, they felt, would amount to censorship by allowing the Minister of Heritage to arbitrarily deny finished film productions a crucial tax credit. Other groups clamoured to address the committee about this or other concerns. The committee continued its study, and the bill died when Parliament was dissolved in September 2008.
Parliament can make constitutional amendments on its own by passing a bill, but only if the amendments operate within the federal sphere of power. The Senate has a veto power over these amendments, just as it has over all bills proposed to Parliament. Other kinds of constitutional amendments affect both federal and provincial powers. Because the legislatures of affected provinces must agree to these, both the Senate and the provinces speak for the regions on such amendments. When the Senate and the provinces do not agree on an amendment, the Constitution favours the provinces. The amendment may be made without Senate approval if the required number of provinces authorize it and if the House of Commons re-affirms its support for the amendment after the Senate concerns become apparent. However, the Commons must wait for six months from when it first approved the amendment before approving it a second time. This Senate power to require the Commons and the provinces to reflect for six months is sometimes described as its suspensive veto.
<urn:uuid:15064a3c-23a5-422b-a4d9-bb5a573ced8d>
CC-MAIN-2016-26
http://www.parl.gc.ca/About/Senate/Today/laws-e.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783392099.27/warc/CC-MAIN-20160624154952-00025-ip-10-164-35-72.ec2.internal.warc.gz
en
0.967165
2,397
3.6875
4
A person with anemia has fewer red blood cells in his or her blood than the normal level. Red blood cells carry oxygen to all the cells in the body. When the number of red blood cells is lower than normal, less oxygen is carried in the blood. Signs of anemia A person with anemia may not notice any signs. As anemia gets worse, you may have: Fatigue – feel weak or tired Dizziness or feel faint Cold hands or feet Pale skin or nails that break easily Trouble thinking clearly or a hard time concentrating Shortness of breath or chest pain A fast or irregular heart beat Fewer menstrual periods or increased bleeding during menstrual periods Talk to your doctor if you have any of these signs. Call emergency services if you have shortness of breath or chest pain. Causes of anemia The causes of anemia include: Problems with how iron is used by the body Not eating enough iron-rich foods Bleeding or blood loss, such as from heavy menstrual periods Treatments for some diseases, such as cancer, that make it harder for the body to make new red blood cells Sickle-cell disease where the body destroys too many red blood cells Immune system problems where the body destroys or cannot make red blood cells Babies less than one year old who drink cow's or goat's milk Babies who are fed formula that does not have extra iron Your doctor will do tests to find the cause of your anemia and to plan your treatment. You may need to: Eat a healthy diet that includes fruits, vegetables, breads, dairy products, meat and fish. Eat more iron-rich foods such as lean beef, pork or lamb, poultry, seafood, iron-fortified cereals and grains, green leafy vegetables such as spinach, nuts and beans. Your doctor may want you to meet with a dietitian to plan healthy meals. Take vitamin or iron supplements. Get a blood transfusion to treat blood loss. Blood is given through an intravenous (IV) line into a blood vessel. Have other treatments such as medicines or surgery to treat the cause of your anemia.
Talk to your doctor or nurse if you have any questions or concerns.
<urn:uuid:6e9eb200-3c5a-4363-ba43-f6f1bbbd98e5>
CC-MAIN-2016-26
http://imedecin.net/en/conditions/diseases/1360-anemia.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396027.60/warc/CC-MAIN-20160624154956-00103-ip-10-164-35-72.ec2.internal.warc.gz
en
0.924469
479
3.453125
3
West Virginia, Northeast Kentucky, Southwest Virginia, and Southeast Ohio all experience a variety of summer weather conditions. This page contains information about severe weather terms, safety rules, and some tornado events that affected West Virginia. Everyone should heighten their knowledge of the dangers of flooding, severe thunderstorms, and tornadoes, and this page provides information to help you get prepared for severe weather before it occurs. CHECK OUT THE ADVANCED HYDROLOGIC PREDICTION SERVICE (AHPS) Flood and Flash Flood Safety Tips: According to FEMA: Critical NWS Flood Products Hazardous Weather Outlook- This product alerts the public when flood-producing rainfall is expected in 36 to 72 hours or if any severe weather threat is expected. During the months of February and March, this product also contains information on the potential for flooding from the spring snow melt. In West Virginia, over 60% of flood fatalities (the #1 cause of weather-related fatalities) result from people driving through flooded roadways and low water crossings. Why do vehicles float? They float for the same reason a 97,000 ton aircraft carrier floats: buoyancy! Where does this idea that "my heavy vehicle will keep me safe" come from? It comes from the false trust in the weight of the vehicle you are driving. Many believe their 3,000 pound or more vehicle will remain in contact with the road surface, that it is too heavy to float. Think about that for a moment. Aircraft carriers weighing 97,000 tons float. Vehicles, including ships, float because of buoyancy. In fact, most cars can be swept away in 18-24 inches of moving water. Trucks and SUVs do not fare much better, despite an additional 6-12 inches of clearance. Are you ready for a flash flood? Know what to expect If it has been raining hard for several hours, or steadily raining for several days, be alert to the possibility of a flood. Listen to local radio or TV stations for flood information.
Reduce potential flood damage by-- Raising your furnace, water heater, and electric panel if they are in areas of your home that may be flooded. Consult a professional for further information on whether this and other damage reduction measures can be taken. Floods can take several hours to days to develop-- A flood WATCH means a flood is possible in your area. A flood WARNING means flooding is occurring or will occur soon in your area. Flash floods can take only a few minutes to a few hours to develop-- A flash flood WATCH means flash flooding is possible in your area. A flash flood WARNING means a flash flood is occurring or will occur very soon. Prepare a Family Disaster Plan Keep insurance policies, documents, and other valuables in a safe-deposit box. Assemble a disaster supplies kit containing-- First aid kit and essential medications Canned food and can opener. At least three gallons of water per person Protective clothing, rainwear, and bedding or sleeping bags. Battery powered radio, flashlight and extra batteries. Special items for infant, elderly, or disabled family members. Written instructions for how to turn off electricity, gas, and water if authorities advise you to do so. (Remember, you'll need a professional to turn natural gas service back on) Identify where you could go if told to evacuate. Choose several places...a friend's home in another town, a motel, or a shelter. When a flood WATCH is issued-- Move your furniture and valuables to higher floors of your home. Fill your car's gas tank in case an evacuation notice is issued. When a flood WARNING is issued- When a flash flood WATCH is issued-- Be alert to signs of flash flooding and be ready to evacuate at a moment's notice. When a flash flood WARNING is issued-- ...Or if you think it has already started, evacuate immediately. You may have only seconds to escape. Act quickly! Move to higher ground away from rivers, streams, creeks, and storm drains. Do not drive around barricades.
They are there for your safety. If your car stalls in rapidly rising waters, abandon it immediately and climb to higher ground. A tornado is a violently rotating column of air extending from a thunderstorm to the ground... The average forward speed is 30 mph but may vary from nearly stationary to 70 mph... Tornadoes can occur at any time of the year in any location... In homes or small buildings: Go to the basement (if available) or to an interior room on the lowest floor, such as a closet or bathroom. Wrap yourself in overcoats or blankets to protect yourself from flying debris. In schools, hospitals, factories, or shopping centers: Go to interior rooms and halls on the lowest floor. Stay away from glass enclosed places or areas with wide-span roofs such as auditoriums and warehouses. Crouch down and cover your head. In high-rise buildings: Go to interior small rooms or halls. Stay away from exterior walls or glassy areas. In cars and mobile homes: ABANDON THEM IMMEDIATELY!! Most deaths occur in cars and mobile homes. If you are in either of these locations, leave it and go to a substantial structure or a designated tornado shelter. If no suitable structure is nearby: Lie flat in the nearest ditch or depression and use your hands to cover your head. All thunderstorms produce lightning and are dangerous. Lightning kills more people each year than tornadoes. Lightning often strikes as far as 10 miles away from any rainfall. Many deaths from lightning occur ahead of the storm because people try to wait until the last minute before seeking shelter. You are in danger from lightning if you can hear thunder. If you can hear thunder, lightning is close enough that it could strike your location at any moment. Lightning injuries can lead to permanent disabilities or death. On average, 10% of strike victims die; 70% of survivors suffer serious long-term effects. Blue Skies and Lightning - Lightning can travel sideways for up to 10 miles.
Even when the sky looks blue and clear, be cautious. If you hear thunder, take cover. At least 10% of lightning occurs without visible clouds overhead in the sky. There is NO safe place to be outside in a thunderstorm. If you can't get into a fully enclosed building or vehicle, do not seek shelter under trees or partially open structures. Sitting or crouching on the ground is NOT safe and should be your last resort. Avoid leaning against vehicles. Get off bicycles and motorcycles. Avoid metal! Don't hold on to metal items such golf clubs, fishing rods, tennis rackets or tools. Get out of the water. It's a great conductor of electricity. Don't stand in puddles of water, even if wearing rubber boots. Move away from a group of people. Stay several yards away from other people. Don't share a bleacher bench or huddle in a group. Severe Weather Terms and Definitions Warning - a particular weather hazard is either imminent or has been reported. A warning indicates the need to take immediate action to protect life and property. The type of hazard is reflected in the type of warning (e.g., tornado warning, blizzard warning). Watch- a particular hazard is possible, or when conditions support its occurrence. A watch is a recommendation for planning preparation, and increased awareness (i.e., to be alert for changing weather, listen for further information, and think about what to do if the danger materializes). Tornado- A violently rotating column of air in contact with the ground and extending from the base of a thunderstorm. Severe Thunderstorm- A thunderstorm that produces tornadoes, hail 0.75 inches or more in diameter, or winds of 50 knots (58 mph) or more. Straight-line Winds- Generally, any wind that is not associated with rotation, used mainly to differentiate them from tornadic winds. Flood- The condition that occurs when water overflows the natural or artificial confines of a stream or other body of water, or accumulates by drainage over low-lying areas. 
Flash Flood- A flood that rises and falls quite rapidly, usually as the result of intense rainfall over a relatively small area. Usually it occurs within 6 hours of a rain event. Slight Risk (of severe thunderstorms)- Implies well-organized severe thunderstorms are expected, but in small numbers and/or low coverage. Moderate Risk (of severe thunderstorms)- Indicates a potential for a greater concentration of severe thunderstorms than the slight risk, and in most situations, greater magnitude of the severe weather. High Risk (of severe thunderstorms)- Suggests a major severe weather outbreak is expected, with a high concentration of severe weather reports and an enhanced likelihood of extreme severe (i.e., violent tornadoes or very damaging convective wind events occurring across a large area). Supercell- A thunderstorm with a persistent rotating updraft. Supercells are rare, but are responsible for a remarkably high percentage of severe weather events - especially tornadoes, extremely large hail and damaging straight-line winds. Squall Line- A solid or nearly solid line or band of active thunderstorms. Downburst- A strong downdraft resulting in an outward burst of damaging winds on or near the ground. Downburst winds can produce damage similar to a strong tornado. Although usually associated with thunderstorms, downbursts can occur with showers too weak to produce thunder. Funnel Cloud- A condensation funnel extending from the base of a towering cumulus or cumulonimbus cloud, associated with a rotating column of air that is not in contact with the ground (and hence different from a tornado). A condensation funnel is a tornado, not a funnel cloud, if either a) it is in contact with the ground or b) a debris cloud or dust whirl is visible beneath it. Cold-air Funnel- A funnel cloud that can develop from a small shower or thunderstorm when the air aloft is unusually cold (hence the name). On rare occasions, small, relatively weak tornadoes can occur. 
These weak tornadoes last only a few minutes and are generally much less violent than other types of tornadoes.
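As a side note on the units in the severe-thunderstorm definition above, the 50-knot wind criterion does round to the quoted 58 mph. A minimal sketch of the check (the conversion factor is the standard 1 knot = 1.15078 statute mph; the helper name is ours, not part of the NWS text):

```python
# Hypothetical helper (not from the NWS page): convert knots to mph
# using the standard factor 1 knot = 1.15078 statute miles per hour.
KNOTS_TO_MPH = 1.15078

def knots_to_mph(knots: float) -> float:
    """Convert a wind speed in knots to miles per hour."""
    return knots * KNOTS_TO_MPH

# The severe-thunderstorm wind criterion of 50 knots:
print(round(knots_to_mph(50)))  # prints 58, matching the "(58 mph)" in the definition
```

The exact value is 57.539 mph, which the definition rounds up to 58.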
15-Nov-2003 -- This confluence is located in the Sonoran Desert approximately 11 miles south of Mexico Highway 2 between San Luis and Sonoyta. This confluence would appear to be easily reachable on most maps; however, an approximately 10-mile-long mountain chain runs parallel to the highway and blocks direct access. Access to the confluence must be made around the western or eastern end of the chain. The western approach is across open desert and appears to be the easiest. However, there is a "No Entrada" sign on one of the dirt roads leading into the area. Further research on the internet indicates that this area is probably part of the Pinacate Biosphere Reserve, which covers hundreds of square miles. Off-road driving is prohibited in the area and a permit is required for access. A web search using the keyword "Pinacate" will yield several sites related to the preserve. It may be possible to obtain permission to visit this confluence from the park headquarters. However, be aware: I believe the confluence is located deep within some sand dunes in a very remote area. According to the web, this area contains the largest area of sand dunes in North America. Some of the dunes are reported to be 180 meters high. It is also one of the hottest and driest places in North America.
I'm currently working on incorporating more natural materials into my preschool classroom. The kiddos love them and seem naturally drawn to anything in nature. When I think of kids and nature, Ann of My Nearest and Dearest pops right to mind. I adore reading about what she's up to every week, and I'm blessed to have her as a friend! So I am beyond excited to have her visiting Fun-A-Day to talk about learning with natural materials! After you read her wonderfully informative post, I hope you stop by My Nearest and Dearest. She's also on Pinterest, Facebook, and Google+.

I am excited and honoured to be guest posting for Mary Catherine today. Not only is she one of my go-to bloggers for play-based learning inspiration, she is a genuinely kind and funny person who I am proud to call a friend. When Mary Catherine approached me to write a post about kids and nature, I immediately said yes, but wasn't entirely sure what I'd write about. Watching my 3.5-year-old son "cook" with acorns, shells, and rocks reminded me that giving children the opportunity to simply PLAY with natural materials is a wonderful way for them to learn about nature. It also allows them to develop an appreciation and respect for the environment. Incorporating objects from nature into play and learning activities needn't be complicated. If we, as parents and teachers, provide our children with natural materials, they WILL find ways to use them. And in playing with them, studying them, and asking questions about them, they will learn about nature.

How to Use Natural Materials in the Home or Classroom

As Loose Parts: If you'd like to bring more nature into your home or classroom, using found objects as loose parts is a good way to start. We have jars, baskets, and bowls of acorns, rocks, pine cones, and shells that my son, daycare kids, and I have collected. The kids use them every single day in many ways.
Take a look at my post Playing and Creating with Rocks for examples of how to play with rocks, which are arguably the easiest loose part to find outside. Try adding loose parts from nature to the block center or play kitchen. Use them for counting or patterning activities. Or simply present your child with a bowl of natural materials and let her play with them however she chooses to.

On a Nature Table: Setting up a nature table is a wonderful way to display all the beautiful bits and bobs that your kids pick up outside. Don't be discouraged if you don't have a spare table to devote to nature finds. A tray or shelf works just as well. The idea is to designate a specific area in your home or classroom to showcase objects from nature and (optionally) small toys, books, or tools (a magnifying glass, pencil and paper, ruler, etc.) that allow for a deeper investigation and understanding of the natural objects. See our Winter Nature Table for more information on setting up and playing at a nature table.

At the Play Dough Table: We're always looking for new and interesting items to combine with play dough. Whole spices and objects collected in nature make great additions to the play dough table. We've had fun with Play Dough and Wildflowers and with acorns and cinnamon bark. Small sticks are also great to pair with play dough.

In Sensory Bins and Small Worlds: Adding items from nature to sensory bins and small worlds can add a sense of realism to the play experience. I added sticks, rocks, pine needles, and pine cones to a simple Forest Small World. I used shells and rocks in our Ocean Floor Discovery Bin to help evoke the look and feel of a real ocean floor.

In Art and Craft Projects: Using natural materials in arts and crafts is both eco-friendly and economical. Look at the things your child has picked up outside and think about how they might be used as painting tools or how they might look decorated with glitter and paint.
We’ve stamped with acorn caps, painted pine cones, made fern prints, and painted with grape stems (I count using kitchen scraps as creating with nature!). It is important to me that my son grows up with a respect for nature and a love of the simple beauty found in it. The above examples are just some of the ways I’m ensuring that happens. Now, please tell me how you use nature in your home or classroom. I don’t know about you, but my mind is whirling with possibilities now! I cannot wait to try out some of Ann’s suggestions for bringing natural materials into my classroom (and my home). What are some of your favorite ways to incorporate natural materials in learning and play?
Studies conducted at the Catalan Institute of Oncology, Spain, suggest that the reduced risk of stomach cancer from eating flavonoid-rich foods applies to women only. The finding is based on observations of 500,000 respondents, aged 35 to 70, from 10 European countries over an 11-year research period. During the study, 683 participants developed stomach cancer, with 288 of those cases among women. Judging from the results, female respondents who consumed 580 milligrams of flavonoids per day had a 51 percent lower risk of developing stomach cancer than women who consumed less than 200 milligrams of flavonoids per day. In line with this research, previous studies have also suggested that flavonoids can reduce the risk of cancer; the present study adds that foods containing flavonoids appear particularly effective at warding off stomach cancer. A diet based on flavonoid-rich foods such as fruits, vegetables, whole grain cereals, nuts, tea, and chocolate also leads people to eat less meat, which can reduce the risk of stomach cancer, according to study leader Raul Zamora-Ros. Flavonoids are plant components that act as antioxidants protecting cells, and have antifungal, antibacterial, antiviral, and anti-inflammatory properties. To get the benefits of flavonoids, you can eat vegetables such as carrots, sweet potatoes, broccoli, kale, spinach, cabbage, peppers, and paprika, and fruits such as kiwi, mango, papaya, citrus, melon, peach, grape, or watermelon. In addition to fruits and vegetables, flavonoids are also found in tea, which is reported to have high flavonoid levels of 12.511 milligrams per 100 grams. (Reuters, Wikipedia, Healthmeup / * / OL-06)
What is Super Tuesday?
"Super Tuesday," which is scheduled for March 1, refers to the day when a dozen states (and one territory) will hold their nominating contests this year. Generally, "Super Tuesday" is the unofficial name for the Tuesday during a presidential primary season when the largest number of states hold their nominating contests.

Which states are voting on Super Tuesday?
Alabama, Arkansas, Georgia, Massachusetts, Minnesota, Oklahoma, Tennessee, Texas, Vermont and Virginia will hold contests for both Republicans and Democrats. Republicans in Alaska will hold caucuses. Democrats in Colorado will hold their caucuses as well. Finally, Democrats in American Samoa are also holding their nominating contest.

When do polls close on Super Tuesday?
Voting occurs throughout the day, but polls will close at different times. Polls in Alabama, Georgia, Vermont and Virginia close at 7 p.m. (all times Eastern). Massachusetts, Oklahoma and Tennessee close their polls at 8 p.m. Most Texas polls close at 8 p.m., but a few in the state's western region will close an hour later. Arkansas' polls close at 8:30 p.m. Minnesota's caucuses begin at 8 p.m. Alaska's caucuses close around midnight.

What is the "SEC Primary"?
The "SEC Primary" is a nickname for Super Tuesday and an ode to the Southeastern Conference, an athletic conference that includes universities in many of the Southern states holding their contests on Tuesday. The heavy concentration of Southern states in Tuesday's primaries (Alabama, Arkansas, Georgia, Tennessee and Texas) gives a regional flavor to the voting, hence the alternate name.

How many delegates are at stake on Super Tuesday?
661 Republican delegates and 865 Democratic delegates will be allocated based on the Super Tuesday results.

How are Super Tuesday delegates distributed?
Under party rules, no state holding its primary before March 15 can do a winner-take-all allocation of delegates, meaning that all Super Tuesday states will divide up their delegates in some way. In some states, that's close to directly proportional to voter results, whereas others have a "winner-take-most" allocation structure or minimum vote thresholds for scoring delegates.

Why does Super Tuesday exist?
The concept originated in 1988 for two main reasons: the consolidation of voters and the organization of campaigns. Southern Democrats wanted to highlight the electoral significance of their region by grouping states on a single day of voting. The arrangement also helps make the party primaries less parochial by forcing candidates to campaign nationwide.

Has Super Tuesday mattered in recent elections?

How is Super Tuesday different from other primary days?
No other primary day has as many delegates grouped at once, and thus no other day gives a single candidate as much of a chance to declare a sense of certainty about his or her position. The less local the race becomes, the more serious the contenders are as national candidates. Seven states will vote the following weekend, but starting on March 7, votes and delegates trickle in. Super Tuesday will therefore give the race clarity in a way no other single day can.

Will any candidates drop out afterward?
Hanging on by a thread, Ohio Gov. John Kasich and retired neurosurgeon Ben Carson could face serious losses across the country and be pressured by party officials to give up hope and help Rubio and Cruz take it to Trump. The longer the two long shots stay in the race, the harder it is to make up the gap between Trump's delegate total and everyone else's.

When are the next primaries after Super Tuesday?
On Saturday, March 5, Democrats and Republicans vote in the Kansas caucuses and Louisiana primaries. Republicans will also vote in Kentucky and Maine, while Democrats will vote in Nebraska. On Sunday, Democrats go to the polls in Maine.
When you're scrambling to make a burn feel better or find an antidote for someone who has mistakenly swallowed a toxic chemical, you might fall back on some of the folklore of first aid. Rather than helping, these common first aid mistakes can actually make matters worse. Here are a few common first aid falsehoods and what you should do instead.

Mistake: Putting butter on a burn. You've probably heard the tip to put butter on a burn, but bear in mind that it's bad advice: Any greasy substance on a burn keeps heat in and could make it hard for a burn to heal or be properly treated.
What to do: Run cold water over the burn to ease the pain. Then gently dry the area and keep it loosely covered. If it starts to blister or seems infected, get medical treatment.

Mistake: Using syrup of ipecac to cause vomiting. When someone swallows a toxic or poisonous chemical, you might think to bring it back up immediately by giving syrup of ipecac. But experts say it's best not to induce vomiting, which can cause more damage. Some substances actually can be worse for you when they are vomited up again.
What to do: Immediately call your doctor or the national Poison Control Center (800-222-1222) for advice about handling the situation.

Mistake: Putting heat on a sprain or fracture. Heat can be soothing for aches and pains, but you shouldn't apply heat to a sprain or fracture. Heat will only increase the swelling.
What to do: Apply ice, an ice pack or even frozen veggies for about 20 minutes. Make sure to wrap the ice in a towel to protect your skin. Use the RICE treatment of rest, ice, compression, elevation.

Mistake: Putting hot water on frozen skin. You might be tempted to run hot water over a frozen patch of skin or a limb to warm it up. This approach increases the risk of damaging skin if you use water that is too hot.
What to do: Gradually thaw the skin or limb with a warm (not hot) water bath.

Mistake: Using rubbing alcohol to bring down a fever. Wiping rubbing alcohol on your skin makes your skin feel cooler, but this cooling doesn't help that much when you have a fever. In addition, alcohol can be absorbed through the skin. For small children and infants in particular, this approach increases the risk of alcohol poisoning.
What to do: Try a fever-reducing medication containing ibuprofen or acetaminophen and call your doctor if the fever doesn't go away.

Mistake: Using a tourniquet for a snake bite. Tying off the flow of blood to prevent the spread of toxins from a snake bite seems like a sensible idea, but might just cause more damage. In some instances, the poison is then concentrated in one area where it can be damaging. In other cases, damage occurs with the sudden release of snake venom into the blood once the tourniquet is taken off.
What to do: The most important step is to calm the person who was bitten and help him or her to keep the bitten body part completely still to slow the flow of venom in the body. Since swelling can become severe, remove jewelry and constricting clothing from areas near the bite. Antivenin is the most effective treatment for most poisonous snake bites, but this is a complex situation that needs expert treatment, so get emergency medical aid as quickly as possible.

Mistake: Using a tourniquet to stop a bleeding gash. For a deep gash in an arm or leg, you may think about tying a tourniquet around the thigh or upper arm to stop the bleeding. But that could stop the flow of blood to the entire limb, causing more damage.
What to do: Apply direct downward pressure on the wound (use a thick layer of sterile gauze under your hands if it's available), and then wrap the wound securely when the bleeding stops. If it continues to bleed or appears to need stitches, seek medical care.

Mistake: Rubbing your eye to remove a foreign object. When you have a speck of dirt or some other debris in your eye, the sensation can be incredibly aggravating. Although you may want to rub your eye to remove the debris, hold back. Rubbing your eye when there is a foreign particle in it can cause more damage to your eye.
What to do: Since tears alone probably won't be enough to wash out the debris, rinse your eye with clean tap water. Get medical care if the sensation continues.

Mistake: Leaving an adhesive bandage on a cut. Putting antibacterial ointment on a cut and then leaving on a bandage for a few days doesn't speed healing; rather it increases unwanted moisture over the cut.
What to do: Clean the cut and apply ointment, but then let it heal in the fresh air. If you need a bandage to keep the cut clean, make sure you change it about twice a day and keep the entire area clean and dry.
Malignant Mesothelioma Cancer

What you need to know

Mesothelioma is a rare type of cancer that occurs in people who have either inhaled or swallowed asbestos fibers. These fibers travel through the body and become lodged in the lungs or other parts of the body, resulting in cancer that can appear decades later. Most often these fibers lodge themselves in the outer lining of the lungs, known as the pleura, causing pleural mesothelioma, the most common type of this asbestos cancer. With no known cure for the disease, patients often face treatments aimed at managing the symptoms and improving their quality of life. Once patients near the end of life, doctors often shift to palliative care, which is intended to control pain, stop bleeding, relieve pressure, and allow patients to be at home with their loved ones. Mesothelioma is diagnosed in close to 3,000 Americans each year, and about as many victims die from the disease each year.
The Way of Harmony with Energy

Aikido (Japanese: ai- harmony, ki- energy or power, do- way, or path) is the martial art developed in the mid-20th Century by Morihei Ueshiba, called O-sensei (great teacher) by his devotees. Aikido is a modern martial art with ancient roots; Ueshiba was a student of numerous traditional (koryu) forms of Japanese martial arts, including aiki-jutsu, sumo, and sword and spear arts. Aikido is traditionally defined as a soft, circular art that focuses on pins, throws, and grappling, with little emphasis on strikes or attacks. Aikido is sometimes called a non-violent martial art; in the ideal case, aikido is intended to neutralize an attack without causing harm to either the attacker or defender. This ideal was born of Ueshiba's devotion to the Omoto-kyo neo-Shinto movement, combined with traditional samurai notions regarding budo. It is hard to say when exactly aikido began; certainly, the art changed and grew throughout the life of its founder, whose personal history is intimately tied with the history of aikido. Furthermore, the history of aikido is intimately tied to that of the various schools of aiki-jutsu, which trace their histories back to the Minamoto clan of ancient Japan. Many date its modern emergence to 1927, when Ueshiba first began teaching martial arts in Tokyo, but certainly Ueshiba had already spent years formulating his own martial arts vision. In 1931, Ueshiba's headquarters dojo, the Kobukan, opened, and in the years that followed, the man and his teachings began to gain prominence throughout Japanese society. The name 'aikido' emerged in 1942, the system previously being known as aiki-jujutsu or aiki-budo. The stages of Morihei Ueshiba's life are commonly seen to correspond to the different schools of aikido that emerged after his death; Yoshinkan, from his early years, when the influence of Ueshiba's aiki-jutsu training was most prominent, is marked by harder, more linear techniques.
The Aikikai School, often seen as the most 'official' style, originates in the middle period of Ueshiba's life, after 1942 but before the years leading up to his death. Aikikai styles are best defined in contrast to those of the other schools - they are more rounded and softer than Yoshinkan styles, but retain more physicality than the developments of the Ki Society school. The Ki Society, having its origins in the techniques that Ueshiba taught towards the end of his life, places greater emphasis on the concept of ki and centering than either of the other schools, and tends to emphasize ki testing and rounded, soft techniques. Each of these schools represented the vision of aikido that Ueshiba had imparted to different students during different stages of his life (Gozo Shioda in the case of the Yoshinkan, and Koichi Tohei (namesake of the Tohei hop) in the case of the Ki Society), though both students began training under Ueshiba in the 1930s. Aikido employs techniques of training and instruction that are somewhat unique. They combine traditional ideas about martial arts instruction (often more so than other, more ancient arts), modern learning techniques, and temple etiquette. Specifics vary from school to school and sensei to sensei, but a few things tend to be consistent. Terminology can vary considerably - there are multiple Japanese terms for each technique or element, and multiple translations into English for each of those words. Some terms are almost always called by (one of) their Japanese name(s). Others are almost always translated. Be prepared to Build Your Vocabulary.

A traditional dojo consists of a large, rectangular mat (synthetic or tatami), oriented towards the kamiza - a small shrine, often featuring either a portrait of O-sensei or calligraphy relevant to budo. The dojo is, in a certain sense, a temple (dojo originally referred to a place where Buddhist monks were taught), with the kamiza as its altar.
Teacher and students bow to the kamiza at the open and close of each class, and it is considered polite not to turn one's back or point one's feet towards the kamiza. Likewise, shoes are not to be worn on the mat. Because all students, regardless of rank, typically train together in aikido, rank is of less concern than in other disciplines. You may never know the rank of your partners, even those you train with repeatedly, unless you ask. You can probably make guesses based on ability, though. Aikido uses the traditional system of two rankings: kyu (learner) ranks, and dan (qualified student) ranks. The kyu system usually starts at 6 or 5 for adults, and rank decreases with testing, until one reaches 1st kyu. Kyu-rank students wear a white belt and gi, and most commonly do not wear a hakama. Dan ranks begin at 1 (shodan) and increase with testing, to a theoretical maximum of 10. In practice, the highest ranked living aikidoka are of 9th dan or less, and the highest ranked Westerner is 7th or 8th. Dan ranked students wear a black belt, gi, and hakama, usually blue or black in color. Testing practices vary by school and dojo. Some require a minimum number of hours of training to test for each rank; others base their requirements on years or months of training and don't bother with the number of classes attended. At each level, the student is expected to show progression in the number of techniques known, and skill at executing the required techniques. Ki tests, ukemi ability, weapon forms and dealing with armed attackers, and randori may also play a role, particularly at higher levels of testing. Contrary to what many believe, a black belt does not indicate that you are a master of aikido, as the progression of ranks beyond the first black belt indicates. Rather, in the traditional system, the black belt indicated that the student had achieved a basic level of competency with the full range of techniques taught by the art, and was prepared for further study.
It's the martial arts equivalent of a bachelor's degree; you're educated, you know the ropes, it is a basis for continuing your studies, but you certainly don't know everything. A black belt also does not qualify one to teach; teaching certification is often handled separately from the regular rank progression, and may require special study. Full teaching authorization is often only given to those of 3rd-4th dan or higher. Aikido instruction is based on two pillars: observation and partner practice. Aikido techniques are introduced to the class by the instructor performing them with a senior student acting as uke (uke is the person who delivers the attack and receives the technique; it literally means 'the one who receives (the technique)'). The instructor may verbally describe the technique, or may repeat certain aspects of it in slow motion, or from multiple angles. In the most traditional dojos, instruction is not performed, per se. Rather, the student simply observes the technique and is expected to reproduce it (this is sometimes called 'stealing' techniques, but is actually the principle on which all ancient dojos operated). Few modern dojos go to this extreme, and usually provide at least some verbal instruction. Nevertheless, the ability to learn by observation is highly prized, and is particularly handy for learning at a seminar.

After techniques are presented by the instructor, students break up into pairs (or in the case of odd numbers, pairs and a trio), and, after an initial bow, begin to practice the technique. One partner (usually called uke) delivers the initial attack, receives the technique, and performs the roll, breakfall, or other technique (called ukemi) necessary to avoid being injured. The other partner (called, typically, nage - lit. 'thrower') receives the attack, performs the technique, and ensures that his partner comes to no harm.
Techniques are practiced twice on each side before partners switch roles- uke attacks on the left, right, left, and right, and then switches to performing the role of nage. When partners practice in threes, one student sits in seiza on the edge of the mat, observing. The standing pair perform the technique, switch, perform the technique, and then uke sits, and the sitting student rises to become the new uke; to put it more plainly, nage always remains standing after both partners have performed the technique. Students are expected to switch partners for each technique, the intention being that every student will be paired with every other student before the end of class, without repeating (in reality, there are usually either too few students, or too few techniques presented for this to be true). Students are not paired on the basis of ability, and classes are not 'tracked'; it is common, and expected, that very senior students and beginners will practice together. In these cases, it is expected that the senior student will adapt to fit the needs of the junior, providing instruction where necessary, and matching his technique to the ukemi ability of the newer learner. Training is meant to be cooperative. Nage does not 'win' by making uke fall, nor does uke 'win' by frustrating nage's technique. Both partners are expected to deliver authentic attacks, and perform the technique to the best of their ability. Likewise, it is not proper for uke to 'take a fall', acting like the technique works when it has not been performed correctly, nor does uke resist a correctly performed technique to the point that nage is forced to either give up or injure him. If attacks are performed authentically, and techniques carried out correctly with an honest nage and uke, they will work as advertised. If you give attacks with too little energy, if nage exhibits improper ma'ai or does not perform the technique correctly, you will end up with a wrestling match. More on this later. 
The sensei may observe students or groups of students during the performance of techniques, offering criticism and (hopefully) correction. Training may also include ki testing (simple exercises designed to test the students ability to move from the center and sense a partner's energy), randori (group sparring, in which a single student receives simultaneous or sequential attacks from multiple partners), ukemi practice, lectures on aikido thought, or practice of weapons kata. Two fundamental lessons of aikido training: the person you are practicing on is your partner, not your opponent and you are always responsible for the safety of your partner Perhaps the most important thing that you will ever learn from aikido is the fine art of falling down. Ukemi refers to the methods of receiving and reacting to techniques, as well as the maneuvers used to prevent injury when one is thrown or pinned. These techniques consist of rolls (forward and backward), the breakfall or high fall, and things to remember about where to put your face and hands to prevent them from being kicked, stepped on, rug burned, or otherwise abused when one is being pinned. It also includes learning how to deliver attacks that both provide enough energy for the technique to be performed and also preserve nage from danger. One should be able to throw a punch or deliver a shomen with enough force that the throw will be more or less automatic; at the same time, you should be able to stop or divert your attack any time it becomes clear that your partner is not going to respond in time to avoid injury. Please do not punch nage in the face. In the system used by the Japanese well into the 20th Century, a new student would not be permitted to perform techniques until they had spent several years taking ukemi for more senior students. Modern attitudes are more liberal; students begin taking both roles as soon as they can safely do so. 
Absolute beginners are often segregated into special classes until their ukemi skills are adequate for open training. In practice, one can train without problem, and even pass the first few kyu tests, without learning the breakfall - techniques can be modified to work around this limitation. Most people will never get into a fight with armed or unarmed attackers where it would be wise to break out the ol' aikido - or anything else, for that matter. Everybody falls. More people die of falls, especially, but not exclusively, the elderly, than die of shootings, robberies gone bad, knifings, or any other form of violence. Learning to fall can save your life in the Real World - it could mean a bruised butt instead of a cracked skull the next time you slip on an icy sidewalk. Far more aikidoka have stories about ukemi helping them survive or lessen the damage from falls than have stories about fighting off bands of armed terrorists in the supermarket. If you learn nothing else from aikido, learn how to fall. A quite thorough and enlightening explanation of the basic elements whose permutations make up the catalogue of aikido techniques exists at the so-named node. In short form, aikido builds around 3 attack forms (which can be generalized to any kind of energetic attack - kicks, punches, strikes with a sword, baseball bat, broken bottle, staff, or plush Elmo toy), as well as several dynamic or static grab techniques, and responds through 4-6 wrist locks (depending on the school, or who you ask) and seven or so basic throws. Most techniques also exist in two variations - a rotational form (omote) and a more linear 'entering' form (irimi). Aikido makes use of three basic weapons: the jo, or short staff, the bokken, or wooden sword, and the tanto, in this case a wooden knife. These three weapons were incorporated into aikido because they had been parts of Morihei Ueshiba's own training as a young man.
The weapons are used as an extension to regular training; the three strikes used in basic aikido techniques are based around Japanese fencing forms, and so can be performed either armed or unarmed. The added length of the weapons aids students in learning ma'ai, or proper distance, and the added 'threat' of the weapon can add intensity to training. In theory, aikido techniques allow one to defeat an armed attacker, disarming and immobilizing in a single move. Likewise, certain techniques show the student how to use a weapon (most commonly a jo) to perform an empty-hand technique- locking and pinning an assailant who is so foolish as to grab the end of his staff. In practice, it is not a good idea to get into fights with people with spears, knives or swords. No good can come of it. Some dojos attempt to make this clear by practicing with wooden knives with inked edges- as in, plan on a few days in the hospital for every ink-stain on your gi. Aikido weapons are also used solo, in training kata designed to enhance overall coordination and balance. These kata are often borrowed from other arts- iaido, iaijutsu, jodo, and others. They consist of a series of strikes, blocks, stance and position switches, and movements that 'rehearse' a combat, and provide practice in basic movement. Kata can be performed alone, or with a partner- make sure you have quality weapons that will not splinter or shatter if struck during partner practice. Some dojos only teach weapon techniques to more senior students, and may offer special classes in their use. Other dojos may be putting a bokken in your hand the minute you walk through the door. Some do not use them at all. Most feel that they are a valuable addition to practice, and necessary for a 'complete' and traditional aikido education- good if you plan to teach or ascend the dan ranks- but not necessary to learn and enjoy aikido. The full personal philosophy of Morihei Ueshiba is hard to put into words, or even to understand.
Many of his own students later reported that they were unable to follow the often rambling teachings of their saintly instructor. Ueshiba's views had been profoundly affected in the early 1900s by his contact with the Omoto-kyo movement, an obscure Shinto revival movement that made several attempts at founding Utopian colonies, and was eventually shut down by the government in a Waco-style confrontation. From his belief in Omoto-kyo, Ueshiba received a life-long belief in kotodama (sacred sound theory), misogi (purification by ablution), and non-violence. Ueshiba sought to make the martial way (budo) compatible with these principles, and spent his life in the effort. Aikido techniques are designed to blend the energies of attacker and defender, rather than having the two collide. The energy of the attacker provides the energy for the technique; the attacker, in effect, throws himself. Teachers sometimes speak of breaking the 'fixation' of the attacker- not only distracting his attention by atemi, but raising his awareness from the limited world of the attack to his wider environment. Functionally, this involves the control and manipulation of the attacker's point of attention and center of balance. With respect to non-violence, aikido seeks to preserve both attacker and defender from harm. This depends on several factors: the skill of the person performing the technique, the willingness of the attacker to look out for his own well-being, and the circumstances of the attack. Furthermore, aikido doesn't teach you good ways to start a fight, other than holding out a hand and saying "here, grab my wrist." Of course, the truly determined will find a way anyway, but the fundamentally reactive nature of aikido gives it an orientation towards ending violence, rather than initiating it. As mentioned above, O-sensei's own philosophies were lost even on many of his most ardent Japanese students.
T. K. Chiba, head honcho for the Western region of the U.S. Aikikai Federation, has written about the relation between Zen and aikido training- translating Ueshiba's principles into an idiom more understandable to many of aikido's current students, both Western and Japanese.

Some Notes About Aikido Training
wherein the author shoots off his mouth about aikido teaching and training techniques.

The system of teaching and training employed in aikido is unique. It's not the way you learned to play baseball, it's not the way you learned to play Nintendo, and most likely it isn't even the way you learned karate, tae kwon do, or kung fu. You will train, from the beginning, with people who know a lot more than you do, and who can really help you, having Been There Themselves. From very early on, you will train with people who know less- whom you yourself may be called on to help, rather before you feel ready or qualified to do so. You will fly through the air, and land with a resounding splat. Your knees will get sore- first from sitting in seiza, later from getting up and down a hundred times during an intense session. It can be fantastic. It can also be irritating as all hell. What follows are my opinions on a few subjects relating to making aikido training Work. They have nothing to do with 'reforming' the way that aikido is taught, or changing the way training is done, and everything to do with the attitude that people bring to the mat. My hope is that by following them, we all become one of those great people that everyone looks back on and is glad to have trained with; not that we should judge everyone that we meet, and find them wanting.

That Old Time Instruction
Pros and Cons of traditional-style instruction

Well, okay, maybe it has a little to do with changing the way aikido is taught- but really, it's just about extending a trend.
Most people teaching aikido nowadays do not teach it in the traditional way- where students are expected to simply watch and copy, with no verbal instruction. I think that this is a good thing. I am not about to criticize this as an abandonment of the tradition, as some might. Rather, I would say that this is a trend that needs to be extended, wherever possible. We know a lot more about learning than the Japanese did in the Good Old Days, when techniques of instruction like the watch-'n'-copy method were crafted. And while it can be argued that the 'old way' was good enough, and worked for a long time, there is every reason to think that it can be improved. Different people learn in different ways. This is particularly true of corporeal disciplines like martial arts, where the learning may be quite unlike any learning that we have done before. So in my view, the more ways that techniques can be presented, the better. Watch and copy is good. Describing techniques is good. Physically taking hold of people, and moving their bodies in the right way, is good. A piece at a time is good. Do-it-all-at-once is good. Fast. Slow. You get the picture. The problem is, there are a lot of teachers who still, somewhere in their minds, think the old way is best. They shy away from talking too much, or doing too many different things. The conventional wisdom remains that it is the student's job to learn how to learn by observation. And there is some truth to this- there's no reason why you shouldn't stretch yourself, and work hard at learning as much as you can, in as many ways as you can. But let's face it. There's little sacred about any one way of learning. Varying instruction styles helps everyone.

Attack of the Limp-Wristed Attacker
The importance of Uke

As I mentioned above, it is of great importance to learn how to uke properly. Too many people think that the 'fun' part is where you get to 'win' and give uke a toss.
Often, this is because they are slightly scared of being thrown themselves. The way to counter this is with better ukemi, and only practice can guarantee that. But let's look more closely at the uncooperative uke. Unhelpful ukes come in generally two varieties. One is the timid attacker. Because he is either scared of taking a real fall or roll, or scared of hurting you, uke gives you an attack with the consistency of cold, damp oatmeal. Believe it or not, this may be more dangerous to uke than a real attack. A weak attack does not provide the momentum to allow an undertrained aikidoka to get into a real (read 'safe') roll, and may result in them being dropped on their head or neck. Bad news. Furthermore, it's just damn irritating to nage, and may result in them pulling or pushing a bit harder than they should- especially if they are as undisciplined as uke! The timid uke is also likely to be a little bit too eager to respond to a technique. Before the throw is even begun, they drop to the floor- and then get back up shaking their wrist and grimacing, like you just did them a terrible injury. Make no mistake- nages determined to demonstrate their Death Grip(TM) at every opportunity are a menace, and should be informed that they are manhandling their charge. But frankly, if you were looking for a non-contact sport, you should have picked something else. You are going to get pulled, pinned, thrown, and bent. Your wrist will be manipulated in ways designed to cause you some (but hopefully not too much) pain. The guideline is to wait until you actually feel that the technique has worked, and then react. Obviously, this will come at different times for different folks. But it does nothing to help nage if you drop, roll, or fall when the technique has not yet been applied- or not yet been applied correctly. Stay up until the technique is done correctly. Doing otherwise helps no one. The other sort of uncooperative uke is the stubborn uke, or the 'it's not working' uke.
This bright bulb refuses to respond to correctly applied technique. He is determined to show that his centering is 'better', or that he knows how to 'counter' the technique. If nage was doing it right, he reasons, I would have no choice but to go down. True enough- if the initial attack is sincere, and if uke does not respond to the technique with a strength contest. Most techniques can be frustrated- temporarily- by attempting to out-muscle nage. This is particularly effective with new students; be warned, ye uncooperative ukes, that more senior members may 'counter your counter', and you will end up taking a flying fall in a rather unexpected direction, with no one to blame but yourself. Some argue that it is important for students to learn how a 'real' attacker might respond to a technique, which has merit. But there is a time for such learning, and it is not during the period when a student is struggling to understand the basic form of the technique. Such concerns are for later learning, as one continues to progress towards full competency. Resisting correctly applied technique comes of an attitude that regards training as fundamentally adversarial, rather than cooperative. That is not aikido training. Furthermore, it can result in injury to the offending uke; almost every technique provides a mechanical or anatomical advantage to nage. You WILL go down eventually- it's a matter of whether you go down with the simple application of pressure, or whether you need some tendons strained or a joint or bone seriously damaged before you get the picture. Of course, a trained and disciplined nage would never go to such lengths- but an untrained one might not know any better, and an undisciplined one might have an 'accident'. The ideal of aikido is to prevent injury to both attacker and defender- but this is only possible if both parties correctly assess what is in their own best interest.
With all this talk about bad ukes, let me pause for a moment and recognize the good ones. Training with someone who knows how to uke properly will be the best experience of your aikido career. Period. You will learn more in fifteen minutes with someone who refuses to go down when you botch the technique, always responds to correctly performed technique, and knows the difference between the two, than you will learn in a week with someone who confuses these points. A good uke will insist on correct form, and will frustrate badly executed techniques. They will help you correct mistakes, but also challenge you to correct yourself and recognize your own errors. Good partners who know how to act as uke can mean more than the instructor, and come in second only to your own effort in helping you learn. Appreciate them, thank them, and take good care of them. A periodically quoted aikido proverb (attributable, I believe, to author and Aikido-L member Carol Shifflett) reminds us that uke is, essentially, someone who has loaned us their body so we can learn something. Trust, needless to say, is essential. Which brings us to...

The Blunder Years
Placing your life in the hands of an idiot man-child

Into every life some rain must fall. And into (almost) every aikido career come a couple of periods in which you will pose a serious danger to those around you. I have humorously titled these periods 'The Blunder Years', but they may last for only a few months, weeks, or even for only a few classes. Someone in one of these stages is most likely not ever going to kill or seriously injure a partner. What they are quite likely to do is inflict a little more damage than their uke expects, resulting in bumps and bruises, lost wind, strained muscles and joints, and, most seriously, cranky partners. Let their identification and classification serve as a warning on two fronts:
- Look for these tendencies in yourself.
You may be able to prevent someone's Bad Aikido Day by heading off your bad attitude at the pass.
- Look for these symptoms in your partners, and guard yourself appropriately.
You are always responsible for your own safety in aikido. Your partner is also always responsible for your safety. If your partner fails in or neglects his responsibility, you had best not fail in yours. Bad Things will ensue. Having a dodgy partner is not an invitation to bad uke-ing (see above), but it is a good time to pay attention to your own safety. Look carefully at the number of things that you are relying on your partner to think about and take care of- the direction you will be projected, the amount of space you will have before encountering a wall or other obstacle, whether or not you are about to collide with another student. If your partner gives any indication that he is not thinking of these things, you will either think of them, or suffer the consequences. Surprisingly, raw beginners are not particularly dangerous. They are just learning their techniques, and probably cannot execute them with enough energy to throw you out a window or into a wall. They are a little frightened of everything, which can be to your advantage- they are probably really, really frightened of hurting you, unless they're heartless bastards. In fact, the first time things get dangerous is when beginners are beginning to reach competency. The time of onset varies from person to person- a few months, a year or two, maybe after passing their first kyu test. What happens is this: they begin to get very close to doing things right. Maybe they do them completely right part of the time. They want to incorporate more correct things into the technique. They think more about their posture, their centering, their ki extension, their footwork... It's a lot to think about. Wait, was there something else? Oh yeah, forgot to think about 'don't damage partner'. Crash!
Students thinking hard about their own technique are likely to forget about the safety of their partner. They may be capable of doing the technique, but as yet incapable of performing the little additional feats of timing and technique that will help keep their partner safe. They will forget where they are in the room, which way their partner is pointing, which way the technique will come off, and, perhaps most critically, how far it is to the nearest wall. Two people in this stage of development practicing together can be murder. It's like the rule that every teenager in a group reduces its IQ by half. When I was at roughly this stage of development, and a friend and partner was at roughly the same point, we managed to wander back and forth across the room several times while performing a kote-gaeshi technique that ended in a simple backroll. My partner forgot to look at where we were, and which way I was going to roll when the technique was applied. I did the same thing. The result was that I rolled back, forcefully, and smacked my head into the wall of the practice room with a crash that shook plaster from the ceiling next door. I was dazed, but managed to avoid a concussion or other serious injury by virtue of the thickness of my skull. And mom said it would never come in handy. A few months later, someone gave me a forward projection that resulted in a forward roll directly into a large post. Whoops. The next time things get dangerous is, oddly, right after someone passes their black belt (or shodan) exam. It seems counterintuitive: surely these guys know better, right? And yet, it seems to be universally true. It's called New Shodan Syndrome (NSS), and it has been observed across the world.
An illustration in the popular Aikido Student Handbook shows common mistakes made by new students in trying to don their gi: jacket closed the wrong way (indicating that you are dead), pants on backwards (kneepads now protecting the knee pit), belt tied wrong, wearing a ninja costume, etc. One illustration shows a person wearing a gi and hakama facing the wrong direction; the caption reads 'New shodan- uniform on correctly, head on backwards.' This pretty much says it all. New shodans often seem to think that they have just graduated from the Marine Corps, or BUD/S, or something, and that it is their duty to be All Out, All the Time. They do not wish to trifle with those whose skills are beneath their own; they feel it slows their learning. They are the most likely to forget to pay attention to their partner's level of ukemi. Once uke shows any signs of being able to perform elementary rolls and falls, the new shodan assumes that, since uke is practicing with them, uke must know what he is doing. They will respond to attacks with undue force without warning, and seem to take a certain amount of pride in this 'skill'. Paradoxically, after they gain their black belt, shodans seem to have more to prove than they did before they took the test. People preparing for the shodan test (1st kyu students), on the other hand, are often some of the most conscientious and helpful partners you can have. People suffering from NSS will not spike you into a convenient wall the way a neglectful junior partner might; they have too much skill for such crudities. Rather, they will slam you to the mat in neglect of the limitations of your ukemi skills, bouncing your head on the ground, knocking the wind out of you, and generally knocking you about. They may suddenly switch techniques on you without warning- disorienting you before bouncing you on the floor. You will be annoyed.
NSS may last for only a few days, weeks, or months following the shodan's ascension to the ranks of the black-belted. It may last until they make nidan. It may never end. They can be quite pleasant to train with, if one is of equal or higher rank, but are in many cases more of a menace than a help to lower-ranked students. That's the end of my commentary. All of these observations may apply more or less to your own experience. YMMV. Variation between dojos can be extreme in aikido, as with any martial art, reflecting the temperament of the instructor, and the attitudes of the students. Maybe the feng shui of the dojo too. Who knows. Some dojos are better than others, and some may just fit your own style and personality better than others. It's your money. Find somewhere that works for you, but be ready to learn and adapt. There are a lot of books about aikido around. These are a few with which I have personal experience. Consult your local Book Depository for further reading, and a convenient sniper nest.
- Aikido and the Dynamic Sphere, by A. Westbrook and O. Ratti. The classic systematic examination of the philosophy and mechanics of aikido, written by two long-time students. Considered to be the most complete study of aikido available, and perhaps the finest book on any martial art. Includes training exercises, a complete examination of techniques and theory, and numerous illustrations, classifications and diagrams.
- The Aikido Student Handbook, by Greg O'Connor. A small book containing a great deal of practical information for students of aikido. Dojo etiquette and courtesy. How to properly don a gi and fold a hakama. Helpful vocabulary, and what to expect from an aikido class.
- Abundant Peace: The Biography of Morihei Ueshiba, by John Stevens. Understanding Morihei Ueshiba's personal history can aid greatly in understanding the philosophy and formulation of aikido.
Stevens, author of a well-known biography of the Zen swordsman Tesshu, provides a concise, insightful look at Ueshiba's life, illustrated with numerous photographs of the man and his students.
We all know what botnets are (I think so), but anyway let's see a proper definition of botnets taken from Shadowserver... and I quote: A botnet is a collection of computers, connected to the internet, that interact to accomplish some distributed task. Although such a collection of computers can be used for useful and constructive applications, the term botnet typically refers to such a system designed and used for illegal purposes. Such systems are composed of compromised machines that are assimilated without their owner's knowledge. Besides DDoS, botnets have other known usages: Keylogging is perhaps the most threatening botnet feature to an individual's privacy. Many bots listen for keyboard activity and report the keystrokes upstream to the bot herder. Some bots have built-in triggers to look for web visits to particular websites where passwords or bank account information is entered. This gives the herder unprecedented ability to gain access to personal information and accounts belonging to thousands of people. Botnets can be used to steal, store, or propagate warez. Warez constitutes any illegally obtained and/or pirated software. Bots can search hard drives for software and licenses installed on a victim's machine, and the herder can easily transfer it off for duplication and distribution. Furthermore, drones are used to archive copies of warez found from other sources. As a whole, a botnet has a great deal of storage capacity. Botnets are often used as a mechanism for propagating spam. Compromised drones can forward spam emails or phishing scams to many 3rd-party victims. Furthermore, instant messaging accounts can be utilized to forward malicious links or advertisements to every contact in the victim's address book. By spreading spam-related materials through a botnet, a herder can mitigate the threat of being caught, as it is thousands of individual computers that are taking on the brunt of the dirty work.
and the one I’m gonna focus on (well, something derived from it) -> Click Fraud Botnets can be used to engage in Click Fraud, where the bot software is used to visit web pages and automatically “click” on advertisement banners. Herders have been using this mechanism to steal large sums of money from online advertising firms that pay a small reward for each page visit. With a botnet of thousands of drones, each clicking only a few times, the returns can be quite large. Since the clicks are each coming from seperate machines scattered accross the globe, it looks like legitimate traffic to the untrained investigator. My point is that many herders (botnet organizers) use a pretty raw Click Fraud mechanism, mainly just issue the command to the bot to retrieve the page and it’s advertisement and rebuild a query string to the advertisers website with the referer header set… as mentioned in the definition this may seem sometimes legitimate traffic to some, but big advertising companies would notice that something isn’t right, stuff like hundreds of clicks at (almost) the same time and similar scenario’s… The new approach (better) would be to generate only website traffic at random hours because highly visited websites use pay-per-post campaigns (more info about pay-per-post)… and there are also other advertising systems like simple banner/ad placement on the website/blog and via the traffic stats you get paid… How could botnets help? 
Well, botnets would act as general users/viewers of the blog/website, thus producing legitimate-looking traffic... masked by a randomized visit system... a general scenario:
- the herder issues the command to visit a website
- each bot receives the command and enters a random delay (in minutes) before executing it (ex: rand(60))
- the bot finally executes the visit and resets the delay time before the next visit, also adding a day to it
A very raw implementation could easily be built, though it would vary from botnet to botnet, because some botnets are simple IRC-based systems while others are not... It's unethical... to whom?! To advertising companies only...
Math Lite Feels Better
A recent Wall Street Journal editorial noted that the Everyday Math program not only promotes the use of calculators from kindergarten on but also brings a new set of feelings to math classes. Rather than having students wrestle with the rules of long division, a fifth-grade worksheet asks them to expand their minds by completing the following sentences:
A. If math were a color, it would be ________, because _______
B. If it were a food, it would be ________, because _______
C. If it were weather, it would be ________, because _______
Glyphosate: mechanism of action
Glyphosate is a herbicide used in agriculture and non-crop situations for the control of a wide range of weeds. Chemically, the active ingredient glyphosate (N-phosphonomethyl-glycine) is a derivative of glycine, the smallest amino acid found in proteins. In the glyphosate molecule, one of the amino hydrogen atoms of glycine is replaced with a phosphonomethyl group. Compared to other active ingredients in herbicides, glyphosate is a small molecule, with a molecular weight of about 169 g/mol.
Glyphosate is a derivative of the amino acid glycine, where one of the amino hydrogen atoms has been replaced with a phosphonomethyl group. (Phosphorus atoms in orange, hydrogen atoms in white, oxygen atoms in red, nitrogen atom in blue)
Once absorbed by the plant, glyphosate binds to and blocks the activity of the enzyme enolpyruvylshikimate-3-phosphate synthase (EPSPS). The EPSPS enzyme comes at the start of the shikimic acid pathway, which converts simple carbohydrate precursors derived from glycolysis and the pentose phosphate pathway into aromatic amino acids and many other important plant metabolites. The enzyme is normally located within the chloroplasts, where it catalyses the reaction of shikimate-3-phosphate (S3P) and phosphoenolpyruvate to form 5-enolpyruvyl-shikimate-3-phosphate (ESP). ESP is a precursor for aromatic amino acids and, ultimately, hormones, vitamins and other essential plant metabolites. Structural similarities to phosphoenolpyruvate enable glyphosate to bind to the substrate binding site of EPSPS, inhibiting its activity and blocking its import into the chloroplast. Since the active site of the EPSPS enzyme is highly conserved in higher plants, glyphosate affects a broad spectrum of weeds indiscriminately. Inhibiting the function of the shikimic acid pathway causes a deficiency in aromatic amino acids, eventually leading to the plant's death by starvation.
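As a quick sanity check on the molecular weight quoted above, the molar mass can be summed from glyphosate's molecular formula, C3H8NO5P. This is a back-of-the-envelope sketch using standard atomic masses rounded to three decimals; the function and variable names are illustrative, not from any source.

```python
# Sanity-check the ~169 g/mol molecular weight of glyphosate (C3H8NO5P)
# by summing standard atomic masses over the molecular formula.
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999, "P": 30.974}

def molar_mass(formula):
    """Sum atomic masses weighted by the atom counts in `formula`."""
    return sum(ATOMIC_MASS[element] * count for element, count in formula.items())

glyphosate = {"C": 3, "H": 8, "N": 1, "O": 5, "P": 1}
print(f"{molar_mass(glyphosate):.2f} g/mol")  # 169.07 g/mol
```

The total rounds to the 169 g/mol figure cited in the text.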
Despite the fact that glyphosate is a small and simple molecule, its water solubility is too low for it to be easily sprayed in the field. The most common glyphosate formulations for commercial purposes therefore mix it with other substances to improve its efficiency. In many plant protection products glyphosate acid is formulated as a salt to enhance its water solubility. A wide range of different glyphosate herbicide formulations have been registered in Europe. These include granular (SG) and liquid formulations (SL), various salts of glyphosate including isopropylamine (IPA), potassium (K), ammonium (NH4) and dimethyl ammonium (DMA). Fast uptake of glyphosate is also crucial to prevent the herbicide being washed off by rain after spraying. Many glyphosate plant protection products also contain surfactants of various types and concentrations that improve leaf absorption, retention and coverage. Last update: 19 June 2013
According to the Pew Research Center Internet & American Life Project, nearly two-thirds (63%) of cell phone owners now use their phone to go online. People who use their phone to go online are known as "cell internet users": anyone who uses their cell phone to access the internet or use email. 91% of all Americans now own a cell phone, which means that 57% of all American adults are cell internet users. Additionally, one third of these cell internet users (34%) mostly use their phone to access the internet, as opposed to other devices like a desktop, laptop, or tablet computer. We call these individuals "cell-mostly internet users," and they account for 21% of the total cell owner population. Young adults, non-whites, and those with relatively low income and education levels are particularly likely to be cell-mostly internet users.
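The derived figures above follow from multiplying the reported shares together. A quick check (variable names are mine, not Pew's):

```python
# Re-derive the combined percentages quoted from the Pew report.
cell_owners = 0.91     # share of American adults who own a cell phone
cell_internet = 0.63   # share of cell owners who go online with their phone
cell_mostly = 0.34     # share of cell internet users who mostly use the phone

# 63% of the 91% who own phones -> share of all adults who are cell internet users
print(round(cell_owners * cell_internet * 100))  # 57 (%)

# 34% of cell internet users, expressed as a share of all cell owners
print(round(cell_internet * cell_mostly * 100))  # 21 (%)
```

Both products match the 57% and 21% figures in the summary.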
Nearly fifteen years before the birth of gay liberation, the Daughters of Bilitis (DOB) was the world's first organization committed to lesbian visibility and empowerment. Like its predominantly gay male counterpart, the Mattachine Society, DOB was launched in response to the oppressive anti-homosexual climate of the McCarthy era, when lesbian and gay people were arrested, fired from jobs, and had their children taken away simply because of their sexual orientation. It was against this political backdrop that a circle of San Francisco lesbians formed a private club where lesbians could meet others in a safe, affirming setting. The small social group evolved over the next two decades into a national organization that counted more than a dozen chapters, and laid the foundation for today's lesbian rights movement. Different Daughters chronicles this movement and the women who fought the church and state in order to change not only our nation's perception of homosexuality, but how lesbians see themselves. Marcia Gallo has interviewed dozens of former DOB members, many of whom have never spoken on record. Through its leaders, magazine, and network of local chapters, DOB played a crucial role in creating lesbian identity, visibility, and political strategies in Cold War America. Different Daughters, 1st edition, by Marcia M. Gallo, is published by Seal Press.
Post-traumatic seizures (PTS) are seizures that result from traumatic brain injury (TBI), brain damage caused by physical trauma. PTS may be a risk factor for post-traumatic epilepsy (PTE), but a person who has a seizure or seizures due to traumatic brain injury does not necessarily have PTE, which is a form of epilepsy, a chronic condition in which seizures occur repeatedly. However, "PTS" and "PTE" may be used interchangeably in medical literature. Seizures are usually an indication of a more severe TBI. Seizures that occur shortly after a person suffers a brain injury may further damage the already vulnerable brain. They may reduce the amount of oxygen available to the brain, cause excitatory neurotransmitters to be released in excess, increase the brain's metabolic need, and raise the pressure within the intracranial space, further contributing to damage. Thus, people who suffer severe head trauma are given anticonvulsant medications as a precaution against seizures. Around 5–7% of people hospitalized with TBI have at least one seizure. PTS are more likely to occur in more severe injuries, and certain types of injuries increase the risk further. The risk that a person will suffer PTS becomes progressively lower as time passes after the injury. However, TBI survivors may still be at risk over 15 years after the injury. Children and older adults are at a higher risk for PTS. In the mid-1970s, PTS were first classified by Bryan Jennett into early and late seizures, those occurring within the first week of injury and those occurring after a week, respectively. Though the seven-day cutoff for early seizures is widely used, it is arbitrary; seizures occurring after the first week but within the first month of injury may share characteristics with early seizures. Some studies use a 30-day cutoff for early seizures instead.
Later it became accepted to further divide seizures into immediate PTS, seizures occurring within 24 hours of injury; early PTS, with seizures between a day and a week after trauma; and late PTS, seizures more than one week after trauma. Some consider late PTS to be synonymous with post-traumatic epilepsy. Early PTS occur at least once in about 4 or 5% of people hospitalized with TBI, and late PTS occur at some point in 5% of them. Of the seizures that occur within the first week of trauma, about half occur within the first 24 hours. In children, early seizures are more likely to occur within an hour and a day of injury than in adults. Of the seizures that occur within the first four weeks of head trauma, about 10% occur after the first week. Late seizures occur at the highest rate in the first few weeks after injury. About 40% of late seizures start within six months of injury, and 50% start within a year. Especially in children and people with severe TBI, the life-threatening condition of persistent seizure called status epilepticus is a risk in early seizures; 10 to 20% of PTS develop into the condition. In one study, 22% of children under 5 years old developed status seizures, while 11% of the whole TBI population studied did. Status seizures early after a TBI may heighten the chances that a person will suffer unprovoked seizures later. It is not completely understood what physiological mechanisms cause seizures after injury, but early seizures are thought to have different underlying processes than late ones. Immediate and early seizures are thought to be a direct reaction to the injury, while late seizures are believed to result from damage to the cerebral cortex by mechanisms such as excitotoxicity and iron from blood. Immediate seizures occurring within two seconds of injury probably occur because the force from the injury stimulates brain tissue that has a low threshold for seizures when stimulated. 
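The timing-based divisions described above (immediate, early, late) amount to a simple rule. A minimal sketch, with the thresholds taken from the text (the function name is illustrative):

```python
def classify_pts(hours_since_injury):
    """Classify a post-traumatic seizure by time since injury,
    per the divisions above: immediate (within 24 hours),
    early (between a day and a week), late (more than a week)."""
    if hours_since_injury < 24:
        return "immediate"
    elif hours_since_injury <= 24 * 7:
        return "early"
    else:
        return "late"

print(classify_pts(2))        # immediate
print(classify_pts(72))       # early
print(classify_pts(24 * 30))  # late
```

Note that, as the text says, the one-week boundary is arbitrary; a study using a 30-day cutoff would simply change the second threshold.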
Early PTS are considered to be provoked seizures, because they result from the direct effects of the head trauma and are thus not considered to be actual epilepsy, while late seizures are thought to indicate permanent changes in the brain's structure and to imply epilepsy. Early seizures can be caused by factors such as cerebral edema, intracranial hemorrhage, cerebral contusion or laceration. Factors that may result in seizures that occur within two weeks of an insult include the presence of blood within the brain; alterations in the blood brain barrier; excessive release of excitatory neurotransmitters such as glutamate; damage to tissues caused by free radicals; and changes in the way cells produce energy. Late seizures are thought to be the result of epileptogenesis, in which neural networks are restructured in a way that increases the likelihood that they will become excited, leading to seizures. Shortly after TBI, people are given anticonvulsant medication, because seizures that occur early after trauma can increase brain damage through hypoxia, excessive release of excitatory neurotransmitters, increased metabolic demands, and increased pressure within the intracranial space. Medications used to prevent seizures include valproate, phenytoin, and phenobarbital. It is recommended that treatment with anti-seizure medication be initiated as soon as possible after TBI. Prevention of early seizures differs from that of late seizures, because the aim of the former is to prevent damage caused by the seizures, whereas the aim of the latter is to prevent epileptogenesis. Strong evidence from clinical trials suggests that antiepileptic drugs given within a day of injury prevent seizures within the first week of injury, but not after. For example, a 2003 review of medical literature found phenytoin to be preventative of early, but probably not late PTS. In children, anticonvulsants may be ineffective for both early and late seizures. 
For unknown reasons, prophylactic use of antiepileptic drugs over a long period is associated with an increased risk for seizures. For these reasons, antiepileptic drugs are widely recommended for a short time after head trauma to prevent immediate and early, but not late, seizures. No treatment is widely accepted to prevent the development of epilepsy. However, medications may be given to repress more seizures if late seizures do occur.

Assessment and treatment

Medical personnel aim to determine whether a seizure is caused by a change in the patient's biochemistry, such as hyponatremia. Neurological examinations and tests to measure levels of serum electrolytes are performed. Not all seizures that occur after trauma are PTS; they may be due to a seizure disorder that already existed, which may even have caused the trauma. In addition, post-traumatic seizures are not to be confused with concussive convulsions, which may immediately follow a concussion but which are not actually seizures and are not a predictive factor for epilepsy. Seizures that result from TBI are often difficult to treat. Antiepileptic drugs that may be given intravenously shortly after injury include phenytoin, sodium valproate, carbamazepine, and phenobarbital. Antiepileptic drugs do not prevent all seizures in all people, but phenytoin and sodium valproate usually stop seizures that are in progress. PTS is associated with a generally good prognosis. It is unknown exactly how long after a TBI a person is at higher risk for seizures than the rest of the population, but estimates have suggested lengths of 10 to over 15 years. For most people with TBI, seizures do not occur after three months, and only 20–25% of people who suffer TBI have PTS more than two years after the injury. However, moderate and severe TBI still confer a high risk for PTS for up to five years after the injury.
Studies have reported that 25–40% of PTS patients go into remission; later studies conducted after the development of more effective seizure medications reported higher overall remission rates. In one quarter of people with seizures from a head trauma, medication controls them well. However, a subset of patients have seizures despite aggressive antiepileptic drug therapy. The likelihood that PTS will go into remission is lower for people who have frequent seizures in the first year after injury.

Risk of developing PTE

It is not known whether PTS increase the likelihood of developing PTE. Early PTS, while not necessarily epileptic in nature, are associated with a higher risk of PTE. However, PTS do not indicate that development of epilepsy is certain to occur, and it is difficult to isolate PTS from severity of injury as a factor in PTE development. About 3% of patients with no early seizures develop late PTE; this number is 25% in those who do have early PTS, and the distinction is greater if other risk factors for developing PTE are excluded. Seizures that occur immediately after an insult are commonly believed not to confer an increased risk of recurring seizures, but evidence from at least one study has suggested that both immediate and early seizures may be risk factors for late seizures. Early seizures may be less of a predictor for PTE in children; while as many as a third of adults with early seizures develop PTE, the portion of children with early PTS who have late seizures is less than one fifth and may be as low as one tenth. In children, the incidence of late seizures is about half that of adults with comparable injuries. Research has found that the incidence of PTS varies widely based on the population studied; it may be as low as 4.4% or as high as 53%. Of all TBI patients who are hospitalized, 5 to 7% have PTS. PTS occur in about 3.1% of traumatic brain injuries, but the severity of injury affects the likelihood of occurrence.
The most important factor in whether a person will develop early and late seizures is the extent of the damage to the brain. More severe brain injury also confers a risk for developing PTS for a longer time after the event. One study found that the probability that seizures will occur within 5 years of injury is 0.5% for mild traumatic brain injuries (defined as no skull fracture and less than 30 minutes of post-traumatic amnesia, abbreviated PTA, or loss of consciousness, abbreviated LOC); 1.2% for moderate injuries (skull fracture, or PTA or LOC lasting between 30 minutes and 24 hours); and 10.0% for severe injuries (cerebral contusion, intracranial hematoma, or LOC or PTA for over 24 hours). Another study found that the risk of seizures 5 years after TBI is 1.5% in mild (defined as PTA or LOC for less than 30 minutes), 2.9% in moderate (LOC lasting between 30 minutes and 1 day), and 17.2% in severe TBI (cerebral contusion, subdural hematoma, or LOC for over a day). Immediate seizures have an incidence of 1 to 4%, that of early seizures is 4 to 25%, and that of late seizures is 9 to 42%. Age influences the risk for PTS. As age increases, risk of early and late seizures decreases; one study found that early PTS occurred in 30.8% of children age 7 or under, 20% of children between ages 8 and 16, and 8.4% of people who were over 16 at the time they were injured. Early seizures occur up to twice as frequently in brain-injured children as they do in their adult counterparts. In one study, children under five with trivial brain injuries (those with no LOC, no PTA, no depressed skull fracture, and no hemorrhage) suffered an early seizure 17% of the time, while people over age 5 did so only 2% of the time. Children under age five also have seizures within one hour of injury more often than adults do.
One study found the incidence of early seizures to be highest among infants younger than one year and particularly high among those who suffered perinatal injury. However, adults are at higher risk than children are for late seizures. People over age 65 are also at greater risk for developing PTS after an injury, with a PTS risk that is 2.5 times higher than that of their younger counterparts. The chances that a person will suffer PTS are influenced by factors involving the injury and the person. The largest risks for PTS are having an altered level of consciousness for a protracted time after the injury, severe injuries with focal lesions, and fractures. The single largest risk for PTS is penetrating head trauma, which carries a 35 to 50% risk of seizures within 15 years. If a fragment of metal remains within the skull after injury, the risk of both early and late PTS may be increased. Head trauma survivors who abused alcohol before the injury are also at higher risk for developing seizures. Occurrence of seizures varies widely even among people with similar injuries. It is not known whether genetics play a role in PTS risk. Studies have had conflicting results with regard to the question of whether people with PTS are more likely to have family members with seizures, which would suggest a genetic role in PTS. Most studies have found that epilepsy in family members does not significantly increase the risk of PTS. People with the ApoE-ε4 allele may also be at higher risk for late PTS. Risks for late PTS include hydrocephalus, reduced blood flow to the temporal lobes of the brain, brain contusions, subdural hematomas, a torn dura mater, and focal neurological deficits. PTA that lasts for longer than 24 hours after the injury is a risk factor for both early and late PTS. Up to 86% of people who have one late post-traumatic seizure have another within two years.
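The severity tiers and five-year seizure risks reported earlier can be condensed into a small lookup. This sketch uses the first study's figures; the dictionary and function names are illustrative, and the second study's definitions and numbers differ slightly:

```python
# 5-year probability of post-traumatic seizures by TBI severity,
# using the first study's figures quoted in the article.
FIVE_YEAR_PTS_RISK = {
    "mild": 0.005,      # no skull fracture; PTA/LOC under 30 minutes
    "moderate": 0.012,  # skull fracture, or PTA/LOC 30 minutes to 24 hours
    "severe": 0.100,    # contusion, intracranial hematoma, or PTA/LOC > 24 h
}

def five_year_risk(severity):
    """Return the study's 5-year PTS probability for a severity tier."""
    return FIVE_YEAR_PTS_RISK[severity.lower()]

print(f"{five_year_risk('severe'):.1%}")  # 10.0%
```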
Children who are ready for school are more likely to have higher third grade reading scores, a predictor of high school success. Research shows that 90% of a child's brain develops during the first five years, and what parents do during these years to support their child's growth will have a meaningful impact throughout life. California voters passed Proposition 10, a tobacco tax, to fund programs for children ages 0 to 5 and their families. The distribution of funding is based on the number of children born in each of the 58 counties. Here's how you can help your child get ready for school:
- Take your child to all well-child visits.
- See a dentist by the first tooth or first birthday.
- Welcome your newborn.
- Make sure your child is developing on track.
- Choose high quality early care and education programs.
- Read with your child each day.
- Get connected: find resources in your community that can help.
All First 5 Commissions provide unique local services to help during the first five years of life. Programs vary by county and the age of your child. Most of all, First 5 Commissions support parents as their child's first teacher.
Like many organizations that work with young people who have come in contact with the justice system, we are constantly looking for interventions that show some evidence of appropriately addressing the reasons underlying youth delinquency, preferably before that behavior gets them into real trouble. We've been particularly interested in cognitive behavioral therapy (CBT), which focuses on working with young people to help them develop better decision-making and impulse control to overcome the adolescent tendency to make problematic automatic decisions. It encourages thinking about thinking, or what is called "meta-cognition." First used in the 1970s to address mental health disorders including substance abuse, anxiety, and depression, CBT is increasingly considered appropriate for helping "at-risk" youth navigate the developmental shoals of adolescence. Instead of focusing on mitigating challenges arising from structural issues in young people's lives, CBT offers a means to develop skills that young people can actively deploy in difficult situations to avoid getting into trouble. The National Bureau of Economic Research recently released a paper, Preventing Youth Violence and Dropout, on a new study evaluating the efficacy of an intervention for at-risk young people that includes CBT as a component. Researchers studied 2,740 male youth in Chicago Public Schools during the 2009-10 school year in neighborhoods on the south and west sides of the city. The young men, in grades 7 to 10, were randomly assigned either to a control group or to a group receiving an intervention called "Becoming a Man" and run by two local nonprofits. The intervention involved consistent exposure to pro-social adults, after-school programming, and CBT. The CBT component included a standard curriculum focused on the emotional reactions to events that are often influenced by automatic thoughts, which can be controlled and processed.
Participants were taught relaxation techniques to help avoid these automatic reactions. The curriculum also focused on helping youth put actions and attitudes in perspective to avoid acting out. The study found that participation in the intervention reduced violent crime arrests by 8.1 arrests for every 100 youth over the course of the program year. By comparison to the control group, this amounted to a 44% decline. Including non-violent, non-property, non-drug crimes, arrests decreased by 11.5 per 100 youth during the year, a 36% decline. Depending on how violent crime is monetized, the paper points out that the benefit-cost ratio is 30:1 based on the effects of the reduction in crime alone. The intervention may have also led to positive long term school outcomes. The increase in the grade point averages (GPA) of the study participants involved in the intervention was compared with an earlier study that looked at how increases in GPAs for 9th grade Chicago Public School students correlated with later graduation rates. Using the same correlation, the schooling impacts of the “Becoming a Man” intervention suggest potential gains in graduation rates of 7 to 22% by comparison to the control group.
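The reductions and percentage declines quoted above imply a control-group arrest rate, which can be recovered with back-of-the-envelope arithmetic (this is my inference from the quoted numbers, not a figure reported in the paper):

```python
# Infer the control group's arrest rates implied by the
# reductions and percentage declines quoted above.
violent_reduction = 8.1   # fewer violent-crime arrests per 100 youth
violent_decline = 0.44    # reported 44% decline vs. control
control_violent = violent_reduction / violent_decline
print(round(control_violent, 1))  # ~18.4 arrests per 100 youth

total_reduction = 11.5    # fewer arrests per 100 youth (broader category)
total_decline = 0.36      # reported 36% decline vs. control
control_total = total_reduction / total_decline
print(round(control_total, 1))    # ~31.9 arrests per 100 youth
```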
How big is the planet, actually? Friedrich Georg Wilhelm von Struve (1793–1864), the German geophysicist who was the director of the stellar observatory at the University of Dorpat in Russia (now Tartu in Estonia), began work on mapping the exact shape and size of the planet in 1816. In his work, von Struve used 265 measurement points – triangulation points – that form a 2820 km-long arc from Finnmark to the Danube Delta.

Science and international collaboration

By putting so many measurement points along a meridian (the vertical lines on a map), it was possible to calculate the exact size and shape of the planet, which allowed cartographers to create more accurate maps. The project thus represented a giant scientific leap forward and was also an early example of scientific collaboration across national borders. In 2005, 34 of the triangulation points – the ones that are distinguished by a landmark – became protected and were included as cultural memorials on the UNESCO World Heritage List. In the 1800s, the project involved two countries: Russia and Sweden/Norway. Today, Struve's Geodetic Arc spans ten countries: Norway, Sweden, Finland, Russia, Estonia, Latvia, Lithuania, Belarus, Moldova and Ukraine. Hammerfest, the most northerly town in the world – and at the time the most northerly place in the world that scientists could reasonably travel to – marks the northern end of the arc. It was here that the Meridian Column was erected in 1854, inscribed with the following text: The northernmost end of a geodetic arc at 25° 20' from the northern ocean to the Danube river – through Norway, Sweden and Russia. On the instructions of HM Oscar I and Emperors Alexander I and Nicolaus I, using unbroken geometries. Latitude: 70° 40' 11.3". The Meridian Column is located on Fuglenes in the built-up area of Hammerfest, 3 km from the Express Route quay.
Three other measurement points in Norway are also included: the Unna Ráipásas mountain peak in Alta and the peaks of Luvddiidcohkka and Bealjásvárri in Kautokeino. The church spire in Alatornio, Finland, cairns on the Russian island of Gogland in the Gulf of Finland, and the observatory in Tartu are among the most visible points.

More about Hammerfest

Hammerfest Turist, the local tourist board, provides excellent information about Hammerfest on its website.
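The figures quoted on the Meridian Column are enough for a rough reconstruction of what the survey measured: an arc of 2820 km subtending 25° 20' of latitude implies the Earth's radius. This is a simplified sketch treating the Earth as a sphere; Struve's actual computation accounted for the planet's flattening:

```python
import math

arc_length_km = 2820.0      # Fuglenes to the Danube Delta
arc_degrees = 25 + 20 / 60  # 25 degrees 20 minutes of latitude
arc_radians = math.radians(arc_degrees)

# For a circle, arc length = radius * angle (in radians).
earth_radius = arc_length_km / arc_radians
print(round(earth_radius))  # ~6378 km
```

The result lands close to the modern equatorial radius of about 6378 km, which is why such arc measurements let cartographers draw far more accurate maps.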
WASHINGTON--(BUSINESS WIRE)--Nearly four billion aerosol cans are produced in North America each year. More than 15,000 U.S. recycling programs accept empty steel aerosol containers. But many Americans don’t know that aerosols can be recycled. Consumers are unaware in part because only one-third of recycling programs actively publicize their acceptance of steel aerosol cans. The Consumer Aerosol Products Council (CAPCO) and the Consumer Specialty Products Association’s (CSPA) Aerosol Products Division are working to educate consumers, recycling program coordinators and municipal governments about the recyclability of empty steel containers. “Today more than 65 percent of Americans have access to aerosol recycling through curbside pickup, drop off and waste-to-energy recovery programs,” said Doug Fratz, of CSPA’s Aerosol Products Division, which represents the consumer aerosol industry. “We hope access translates to action.” Here’s what consumers need to know to increase recycling efforts this Earth Day. Many steel aerosol cans feature the “Please Recycle When Empty” logo to indicate they can be safely recycled. Here’s how: - Empty the aerosol of its contents through normal use. - Check instructions from the local collector to determine if empty aerosols are accepted. - Place empty aerosol container in bin along with other recyclables. The Consumer Specialty Products Association (CSPA) is the premier trade association representing the interests of companies that manufacture, formulate, distribute and sell more than $100 billion annually in the U.S. of familiar consumer products that help household and institutional customers create cleaner and healthier environments. CSPA member companies employ hundreds of thousands of people globally. 
Products CSPA represents include disinfectants that kill germs in homes, hospitals and restaurants; candles, and fragrances and air fresheners that eliminate odors; pest management products for home, garden and pets; cleaning products and polishes for use throughout the home and institutions; products used to protect and improve the performance and appearance of automobiles; aerosol products and a host of other products. Through its product stewardship program, Product Care®, and scientific and business-to-business endeavors, CSPA provides members a platform to effectively address issues regarding the health, safety and sustainability of their products. For more information, please visit www.cspa.org.
History of Modern Computers

An index to famous inventors in the computer business. Over twenty-six fully illustrated features cover the history of computers, from Konrad Zuse and the first programmable computer in 1936 to today's computers.
- History and Development: early history and a description of the next five generations of computers.
- Of the Nerds: computer history and some of the major figures involved.
- History of Computers During My Lifetime: the computers of the 70s, 80s and 90s.
- Mind Machine Web Museum: a history gallery full of rare and surprisingly beautiful photos (1955 Geniac computer, 1960 ThinkATron).
- The first home computer with a GUI (graphical user interface).
- Apple Macintosh, the famous Apple home computer; more Apple history.
- The entertaining game computer.
- Seymour Cray, inventor of the Cray supercomputer.
- Compact disks, floppy disks, etc.; hard to decide who was first.
- Random Access Memory: Robert Dennard invented RAM (random access memory); the device was patented in 1968.
- Herman Hollerith invented a punch-card tabulation machine system for statistical computation.
- Xerox invented network computing.
- John Bardeen, Walter Brattain and William Shockley invented the transistor in 1947.
- Recognized as the father of the modern digital computer in 1939.
- History of the Internet.
RICHMOND, Va. (WTVR) - Two of NOAA's GOES satellites worked together to produce a time-lapse view of half of Earth, capturing the birth of Sandy all the way through its landfall and progress inland. The GOES-13 and GOES-15 satellites observed Sandy from October 21 through October 30, 2012, seeing when the tropical disturbance became Tropical Storm Sandy in the Caribbean Sea. The geosynchronous satellites (meaning they orbit Earth at a fixed location, capturing an unbroken view of the spot they watch) monitored the intensification of Sandy as it tracked into the Atlantic Ocean east of Florida and through the Bahamas, hugging just east of the Outer Banks before hooking west. Category One Hurricane Sandy made landfall in southern New Jersey on October 29, 2012, then moved inland to Pennsylvania as Post-Tropical Storm Sandy. (Time-Lapse: NASA GOES Project)
By Mary Small, Colorado State University Cooperative Extension, Urban Integrated Pest Management

Leaves on houseplants can drop for many reasons. When a new plant is brought home, it may shed leaves in response to the lower light intensities there. Usually, once the adjustment is made, leaf drop ceases, unless the plant won't tolerate reduced light conditions. Temperature changes also can cause leaf drop. Low humidity in Colorado during winter can be another cause. Usually placing plants in naturally humid areas, such as the kitchen or bathroom, will help as long as other growing requirements are met. The prime suspect in leaf drop, though, is overwatering. The top of the soil in a potted plant dries out fast, especially in winter, but underneath it can still be moist. Before watering, it's a good idea to check the soil by pushing a finger into it to your first or second knuckle. If the soil is moist to the touch, wait a day or two, then recheck. When the soil feels dry at the depth you are checking, it is time to water. Put enough water on the soil so the excess comes out of the bottom of the container. Let it drain for 15 or 20 minutes, then discard the excess water. Photograph courtesy of Judy Sedbrook. © CSU/Denver County Extension Master Gardener 2010 888 E. Iliff Avenue, Denver, CO 80210 Date last revised: 01/05/2010
Frequently Asked Questions

Find answers to recurring questions and myths about Bitcoin.

Table of contents
- What is Bitcoin?
- Who created Bitcoin?
- Who controls the Bitcoin network?
- How does Bitcoin work?
- Is Bitcoin really used by people?
- How does one acquire bitcoins?
- How difficult is it to make a Bitcoin payment?
- What are the advantages of Bitcoin?
- What are the disadvantages of Bitcoin?
- Why do people trust Bitcoin?
- Can I make money with Bitcoin?
- Is Bitcoin fully virtual and immaterial?
- Is Bitcoin anonymous?
- What happens when bitcoins are lost?
- Can Bitcoin scale to become a major payment network?
- Is Bitcoin legal?
- Is Bitcoin useful for illegal activities?
- Can Bitcoin be regulated?
- What about Bitcoin and taxes?
- What about Bitcoin and consumer protection?
- How are bitcoins created?
- Why do bitcoins have value?
- What determines bitcoin’s price?
- Can bitcoins become worthless?
- Is Bitcoin a bubble?
- Is Bitcoin a Ponzi scheme?
- Doesn't Bitcoin unfairly benefit early adopters?
- Won't the finite amount of bitcoins be a limitation?
- Won't Bitcoin fall in a deflationary spiral?
- Isn't speculation and volatility a problem for Bitcoin?
- What if someone bought up all the existing bitcoins?
- What if someone creates a better digital currency?
- Why do I have to wait for confirmation?
- How much will the transaction fee be?
- What if I receive a bitcoin when my computer is powered off?
- What does "synchronizing" mean and why does it take so long?
- What is Bitcoin mining?
- How does Bitcoin mining work?
- Isn't Bitcoin mining a waste of energy?
- How does mining help secure Bitcoin?
- What do I need to start mining?
- Is Bitcoin secure?
- Hasn't Bitcoin been hacked in the past?
- Could users collude against Bitcoin?
- Is Bitcoin vulnerable to quantum computing?

What is Bitcoin?

Bitcoin is a consensus network that enables a new payment system and a completely digital money.
It is the first decentralized peer-to-peer payment network that is powered by its users with no central authority or middlemen. From a user perspective, Bitcoin is pretty much like cash for the Internet. Bitcoin can also be seen as the most prominent triple entry bookkeeping system in existence.

Who created Bitcoin?

Bitcoin is the first implementation of a concept called "cryptocurrency", which was first described in 1998 by Wei Dai on the cypherpunks mailing list, suggesting the idea of a new form of money that uses cryptography to control its creation and transactions, rather than a central authority. The first Bitcoin specification and proof of concept was published in 2009 in a cryptography mailing list by Satoshi Nakamoto. Satoshi left the project in late 2010 without revealing much about himself. The community has since grown exponentially, with many developers working on Bitcoin.

Satoshi's anonymity often raised unjustified concerns, many of which are linked to misunderstanding of the open-source nature of Bitcoin. The Bitcoin protocol and software are published openly, and any developer around the world can review the code or make their own modified version of the Bitcoin software. Just like current developers, Satoshi's influence was limited to the changes he made being adopted by others, and therefore he did not control Bitcoin. As such, the identity of Bitcoin's inventor is probably as relevant today as the identity of the person who invented paper.

Who controls the Bitcoin network?

Nobody owns the Bitcoin network, much like no one owns the technology behind email. Bitcoin is controlled by all Bitcoin users around the world. While developers are improving the software, they can't force a change in the Bitcoin protocol because all users are free to choose what software and version they use. In order to stay compatible with each other, all users need to use software complying with the same rules.
Bitcoin can only work correctly with a complete consensus among all users. Therefore, all users and developers have a strong incentive to protect this consensus.

How does Bitcoin work?

From a user perspective, Bitcoin is nothing more than a mobile app or computer program that provides a personal Bitcoin wallet and allows a user to send and receive bitcoins with it. This is how Bitcoin works for most users.

Behind the scenes, the Bitcoin network is sharing a public ledger called the "block chain". This ledger contains every transaction ever processed, allowing a user's computer to verify the validity of each transaction. The authenticity of each transaction is protected by digital signatures corresponding to the sending addresses, allowing all users to have full control over sending bitcoins from their own Bitcoin addresses. In addition, anyone can process transactions using the computing power of specialized hardware and earn a reward in bitcoins for this service. This is often called "mining". To learn more about Bitcoin, you can consult the dedicated page and the original paper.

Is Bitcoin really used by people?

Yes. There is a growing number of businesses and individuals using Bitcoin. This includes brick-and-mortar businesses like restaurants, apartments, and law firms, as well as popular online services such as Namecheap, WordPress, and Reddit. While Bitcoin remains a relatively new phenomenon, it is growing fast. At the end of August 2013, the value of all bitcoins in circulation exceeded US$1.5 billion, with millions of dollars worth of bitcoins exchanged daily.

How does one acquire bitcoins?

- As payment for goods or services.
- Purchase bitcoins at a Bitcoin exchange.
- Exchange bitcoins with someone near you.
- Earn bitcoins through competitive mining.

While it may be possible to find individuals who wish to sell bitcoins in exchange for a credit card or PayPal payment, most exchanges do not allow funding via these payment methods.
This is due to cases where someone buys bitcoins with PayPal and then reverses their half of the transaction. This is commonly referred to as a chargeback.

How difficult is it to make a Bitcoin payment?

Bitcoin payments are easier to make than debit or credit card purchases, and can be received without a merchant account. Payments are made from a wallet application, either on your computer or smartphone, by entering the recipient's address and the payment amount, and pressing send. To make it easier to enter a recipient's address, many wallets can obtain the address by scanning a QR code or touching two phones together with NFC technology.

What are the advantages of Bitcoin?

- Payment freedom - It is possible to send and receive bitcoins anywhere in the world at any time. No bank holidays. No borders. No bureaucracy. Bitcoin allows its users to be in full control of their money.
- Choose your own fees - There is no fee to receive bitcoins, and many wallets let you control how large a fee to pay when spending. Higher fees can encourage faster confirmation of your transactions. Fees are unrelated to the amount transferred, so it's possible to send 100,000 bitcoins for the same fee it costs to send 1 bitcoin. Additionally, merchant processors exist to assist merchants in processing transactions, converting bitcoins to fiat currency and depositing funds directly into merchants' bank accounts daily. As these services are based on Bitcoin, they can be offered for much lower fees than with PayPal or credit card networks.
- Fewer risks for merchants - Bitcoin transactions are secure, irreversible, and do not contain customers’ sensitive or personal information. This protects merchants from losses caused by fraud or fraudulent chargebacks, and there is no need for PCI compliance. Merchants can easily expand to new markets where either credit cards are not available or fraud rates are unacceptably high.
The net results are lower fees, larger markets, and fewer administrative costs.

- Security and control - Bitcoin users are in full control of their transactions; it is impossible for merchants to force unwanted or unnoticed charges as can happen with other payment methods. Bitcoin payments can be made without personal information tied to the transaction. This offers strong protection against identity theft. Bitcoin users can also protect their money with backup and encryption.
- Transparent and neutral - All information concerning the Bitcoin money supply itself is readily available on the block chain for anybody to verify and use in real time. No individual or organization can control or manipulate the Bitcoin protocol because it is cryptographically secure. This allows the core of Bitcoin to be trusted for being completely neutral, transparent, and predictable.

What are the disadvantages of Bitcoin?

- Degree of acceptance - Many people are still unaware of Bitcoin. Every day, more businesses accept bitcoins because they want the advantages of doing so, but the list remains small and still needs to grow in order to benefit from network effects.
- Volatility - The total value of bitcoins in circulation and the number of businesses using Bitcoin are still very small compared to what they could be. Therefore, relatively small events, trades, or business activities can significantly affect the price. In theory, this volatility will decrease as Bitcoin markets and the technology mature. Never before has the world seen a start-up currency, so it is truly difficult (and exciting) to imagine how it will play out.
- Ongoing development - Bitcoin software is still in beta, with many incomplete features in active development. New tools, features, and services are being developed to make Bitcoin more secure and accessible to the masses. Some of these are still not ready for everyone. Most Bitcoin businesses are new and still offer no insurance.
In general, Bitcoin is still in the process of maturing.

Why do people trust Bitcoin?

Much of the trust in Bitcoin comes from the fact that it requires no trust at all. Bitcoin is fully open-source and decentralized. This means that anyone has access to the entire source code at any time. Any developer in the world can therefore verify exactly how Bitcoin works. All transactions and bitcoins issued into existence can be transparently consulted in real time by anyone. All payments can be made without reliance on a third party, and the whole system is protected by heavily peer-reviewed cryptographic algorithms like those used for online banking. No organization or individual can control Bitcoin, and the network remains secure even if not all of its users can be trusted.

Can I make money with Bitcoin?

You should never expect to get rich with Bitcoin or any emerging technology. It is always important to be wary of anything that sounds too good to be true or disobeys basic economic rules. Bitcoin is a growing space of innovation, and there are business opportunities that also include risks. There is no guarantee that Bitcoin will continue to grow even though it has developed at a very fast rate so far. Investing time and resources in anything related to Bitcoin requires entrepreneurship. There are various ways to make money with Bitcoin, such as mining, speculation, or running new businesses. All of these methods are competitive and there is no guarantee of profit. It is up to each individual to make a proper evaluation of the costs and the risks involved in any such project.

Is Bitcoin fully virtual and immaterial?

Bitcoin is as virtual as the credit cards and online banking networks people use every day. Bitcoin can be used to pay online and in physical stores just like any other form of money. Bitcoins can also be exchanged in physical form, such as the Casascius coins, but paying with a mobile phone usually remains more convenient.
Bitcoin balances are stored in a large distributed network, and they cannot be fraudulently altered by anybody. In other words, Bitcoin users have exclusive control over their funds, and bitcoins cannot vanish just because they are virtual.

Is Bitcoin anonymous?

Bitcoin is designed to allow its users to send and receive payments with an acceptable level of privacy, as with any other form of money. However, Bitcoin is not anonymous and cannot offer the same level of privacy as cash. The use of Bitcoin leaves extensive public records. Various mechanisms exist to protect users' privacy, and more are in development. However, there is still work to be done before these features are used correctly by most Bitcoin users.

Some concerns have been raised that private transactions could be used for illegal purposes with Bitcoin. However, it is worth noting that Bitcoin will undoubtedly be subjected to similar regulations that are already in place inside existing financial systems. Bitcoin cannot be more anonymous than cash, and it is not likely to prevent criminal investigations from being conducted. Additionally, Bitcoin is also designed to prevent a large range of financial crimes.

What happens when bitcoins are lost?

When a user loses their wallet, it has the effect of removing money from circulation. Lost bitcoins still remain in the block chain just like any other bitcoins. However, lost bitcoins remain dormant forever because there is no way for anybody to find the private key(s) that would allow them to be spent again. Because of the law of supply and demand, when fewer bitcoins are available, the ones that are left will be in higher demand and increase in value to compensate.

Can Bitcoin scale to become a major payment network?

The Bitcoin network can already process a much higher number of transactions per second than it does today. It is, however, not entirely ready to scale to the level of major credit card networks.
Work is underway to lift current limitations, and future requirements are well known. Since inception, every aspect of the Bitcoin network has been in a continuous process of maturation, optimization, and specialization, and it should be expected to remain that way for some years to come. As traffic grows, more Bitcoin users may use lightweight clients, and full network nodes may become a more specialized service. For more details, see the Scalability page on the Wiki.

Is Bitcoin legal?

To the best of our knowledge, Bitcoin has not been made illegal by legislation in most jurisdictions. However, some jurisdictions (such as Argentina and Russia) severely restrict or ban foreign currencies. Other jurisdictions (such as Thailand) may limit the licensing of certain entities such as Bitcoin exchanges. Regulators from various jurisdictions are taking steps to provide individuals and businesses with rules on how to integrate this new technology with the formal, regulated financial system. For example, the Financial Crimes Enforcement Network (FinCEN), a bureau in the United States Treasury Department, issued non-binding guidance on how it characterizes certain activities involving virtual currencies.

Is Bitcoin useful for illegal activities?

Bitcoin is money, and money has always been used both for legal and illegal purposes. Cash, credit cards, and current banking systems widely surpass Bitcoin in terms of their use to finance crime. Bitcoin can bring significant innovation in payment systems, and the benefits of such innovation are often considered to be far beyond their potential drawbacks. Bitcoin is designed to be a huge step forward in making money more secure and could also act as a significant protection against many forms of financial crime. For instance, bitcoins are completely impossible to counterfeit. Users are in full control of their payments and cannot receive unapproved charges such as with credit card fraud.
Bitcoin transactions are irreversible and immune to fraudulent chargebacks. Bitcoin allows money to be secured against theft and loss using very strong and useful mechanisms such as backups, encryption, and multiple signatures.

Some concerns have been raised that Bitcoin could be more attractive to criminals because it can be used to make private and irreversible payments. However, these features already exist with cash and wire transfer, which are widely used and well established. The use of Bitcoin will undoubtedly be subjected to similar regulations that are already in place inside existing financial systems, and Bitcoin is not likely to prevent criminal investigations from being conducted. In general, it is common for important breakthroughs to be perceived as being controversial before their benefits are well understood. The Internet is a good example among many others to illustrate this.

Can Bitcoin be regulated?

The Bitcoin protocol itself cannot be modified without the cooperation of nearly all its users, who choose what software they use. Attempting to assign special rights to a local authority in the rules of the global Bitcoin network is not a practical possibility. Any rich organization could choose to invest in mining hardware to control half of the computing power of the network and become able to block or reverse recent transactions. However, there is no guarantee that they could retain this power, since doing so would require investing as much as all the other miners in the world.

It is, however, possible to regulate the use of Bitcoin in a similar way to any other instrument. Just like the dollar, Bitcoin can be used for a wide variety of purposes, some of which can be considered legitimate or not as per each jurisdiction's laws. In this regard, Bitcoin is no different than any other tool or resource and can be subjected to different regulations in each country.
Bitcoin use could also be made difficult by restrictive regulations, in which case it is hard to determine what percentage of users would keep using the technology. A government that chooses to ban Bitcoin would prevent domestic businesses and markets from developing, shifting innovation to other countries. The challenge for regulators, as always, is to develop efficient solutions while not impairing the growth of new emerging markets and businesses.

What about Bitcoin and taxes?

Bitcoin is not a fiat currency with legal tender status in any jurisdiction, but often tax liability accrues regardless of the medium used. There is a wide variety of legislation in many different jurisdictions which could cause income, sales, payroll, capital gains, or some other form of tax liability to arise with Bitcoin.

What about Bitcoin and consumer protection?

Bitcoin is freeing people to transact on their own terms. Each user can send and receive payments in a similar way to cash, but they can also take part in more complex contracts. Multiple signatures allow a transaction to be accepted by the network only if a certain number of a defined group of persons agree to sign the transaction. This allows innovative dispute mediation services to be developed in the future. Such services could allow a third party to approve or reject a transaction in case of disagreement between the other parties without having control over their money. As opposed to cash and other payment methods, Bitcoin always leaves a public proof that a transaction did take place, which can potentially be used in a recourse against businesses with fraudulent practices.

It is also worth noting that while merchants usually depend on their public reputation to remain in business and pay their employees, they don't have access to the same level of information when dealing with new consumers.
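The multi-signature acceptance rule described above is an m-of-n threshold: a transaction counts as approved only when enough distinct, authorized parties have signed it. The sketch below illustrates just that rule in plain Python; it is not actual Bitcoin script, and the party names are invented for the example:

```python
def approved(signatures: set[str], authorized: set[str], threshold: int) -> bool:
    """Accept only if at least `threshold` distinct authorized parties signed."""
    return len(signatures & authorized) >= threshold

# A 2-of-3 escrow between a buyer, a seller, and a dispute mediator:
escrow = {"buyer", "seller", "mediator"}
print(approved({"buyer", "seller"}, escrow, 2))    # both trading parties agree
print(approved({"buyer", "mediator"}, escrow, 2))  # mediator settles a dispute
print(approved({"buyer"}, escrow, 2))              # one signature is not enough
```

The design point is that the mediator can tip a disputed transaction either way, but can never move the funds alone.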
The way Bitcoin works allows both individuals and businesses to be protected against fraudulent chargebacks while giving the choice to the consumer to ask for more protection when they are not willing to trust a particular merchant.

How are bitcoins created?

New bitcoins are generated by a competitive and decentralized process called "mining". Through this process, individuals are rewarded by the network for their services. Bitcoin miners process transactions and secure the network using specialized hardware, and they collect new bitcoins in exchange.

The Bitcoin protocol is designed in such a way that new bitcoins are created at a fixed rate. This makes Bitcoin mining a very competitive business. When more miners join the network, it becomes increasingly difficult to make a profit and miners must seek efficiency to cut their operating costs. No central authority or developer has any power to control or manipulate the system to increase their profits. Every Bitcoin node in the world will reject anything that does not comply with the rules it expects the system to follow.

Bitcoins are created at a decreasing and predictable rate. The number of new bitcoins created each year is automatically halved over time until bitcoin issuance halts completely with a total of 21 million bitcoins in existence. At this point, Bitcoin miners will probably be supported exclusively by numerous small transaction fees.

Why do bitcoins have value?

Bitcoins have value because they are useful as a form of money. Bitcoin has the characteristics of money (durability, portability, fungibility, scarcity, divisibility, and recognizability) based on the properties of mathematics rather than relying on physical properties (like gold and silver) or trust in central authorities (like fiat currencies). In short, Bitcoin is backed by mathematics. With these attributes, all that is required for a form of money to hold value is trust and adoption.
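The halving schedule that caps issuance at 21 million can be checked with a short calculation. The constants here are Bitcoin's published protocol parameters (an initial 50 BTC block subsidy, halved every 210,000 blocks, with amounts tracked in integer satoshis):

```python
# Total bitcoin supply implied by the halving schedule.
SATOSHIS_PER_BTC = 100_000_000   # 1 BTC = 100,000,000 satoshis
HALVING_INTERVAL = 210_000       # blocks between subsidy halvings

subsidy = 50 * SATOSHIS_PER_BTC  # initial block reward, in satoshis
total = 0
while subsidy > 0:
    total += HALVING_INTERVAL * subsidy
    subsidy //= 2                # integer halving, as in the protocol

print(total / SATOSHIS_PER_BTC)  # just under 21 million BTC
```

Because the subsidy is halved with integer division, the sum lands slightly below 21,000,000 BTC rather than exactly on it.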
In the case of Bitcoin, such trust and adoption can be measured by its growing base of users, merchants, and startups. As with all currency, bitcoin's value comes only and directly from people willing to accept it as payment.

What determines bitcoin’s price?

The price of a bitcoin is determined by supply and demand. When demand for bitcoins increases, the price increases, and when demand falls, the price falls. There is only a limited number of bitcoins in circulation, and new bitcoins are created at a predictable and decreasing rate, which means that demand must follow this level of inflation to keep the price stable. Because Bitcoin is still a relatively small market compared to what it could be, it doesn't take significant amounts of money to move the market price up or down, and thus the price of a bitcoin is still very volatile.

(Chart: Bitcoin price over time.)

Can bitcoins become worthless?

Yes. History is littered with currencies that failed and are no longer used, such as the German mark during the Weimar Republic and, more recently, the Zimbabwean dollar. Although previous currency failures were typically due to hyperinflation of a kind that Bitcoin makes impossible, there is always potential for technical failures, competing currencies, political issues, and so on. As a basic rule of thumb, no currency should be considered absolutely safe from failures or hard times. Bitcoin has proven reliable for years since its inception, and there is a lot of potential for Bitcoin to continue to grow. However, no one is in a position to predict what the future will be for Bitcoin.

Is Bitcoin a bubble?

A fast rise in price does not constitute a bubble. An artificial over-valuation that will lead to a sudden downward correction constitutes a bubble. Choices based on individual human action by hundreds of thousands of market participants are the cause of bitcoin's price fluctuating as the market seeks price discovery.
Reasons for changes in sentiment may include a loss of confidence in Bitcoin, a large difference between value and price not based on the fundamentals of the Bitcoin economy, increased press coverage stimulating speculative demand, fear of uncertainty, and old-fashioned irrational exuberance and greed.

Is Bitcoin a Ponzi scheme?

A Ponzi scheme is a fraudulent investment operation that pays returns to its investors from their own money, or the money paid by subsequent investors, instead of from profit earned by the individuals running the business. Ponzi schemes are designed to collapse at the expense of the last investors when there are not enough new participants. Bitcoin is a free software project with no central authority. Consequently, no one is in a position to make fraudulent representations about investment returns. Like other major currencies such as gold, the United States dollar, the euro, the yen, etc., there is no guaranteed purchasing power and the exchange rate floats freely. This leads to volatility where owners of bitcoins can unpredictably make or lose money. Beyond speculation, Bitcoin is also a payment system with useful and competitive attributes that are being used by thousands of users and businesses.

Doesn't Bitcoin unfairly benefit early adopters?

Some early adopters have large numbers of bitcoins because they took risks and invested time and resources in an unproven technology that was hardly used by anyone and that was much harder to secure properly. Many early adopters spent large numbers of bitcoins quite a few times before they became valuable, or bought only small amounts and didn't make huge gains. There is no guarantee that the price of a bitcoin will increase or drop. This is very similar to investing in an early startup that can either gain value through its usefulness and popularity, or just never break through.
Bitcoin is still in its infancy, and it has been designed with a very long-term view; it is hard to imagine how it could be less biased towards early adopters, and today's users may or may not be the early adopters of tomorrow.

Won't the finite amount of bitcoins be a limitation?

Bitcoin is unique in that only 21 million bitcoins will ever be created. However, this will never be a limitation because transactions can be denominated in smaller sub-units of a bitcoin, such as bits - there are 1,000,000 bits in 1 bitcoin. Bitcoins can be divided up to 8 decimal places (0.000 000 01) and potentially even smaller units if that is ever required in the future as the average transaction size decreases.

Won't Bitcoin fall in a deflationary spiral?

The deflationary spiral theory says that if prices are expected to fall, people will move purchases into the future in order to benefit from the lower prices. That fall in demand will in turn cause merchants to lower their prices to try to stimulate demand, making the problem worse and leading to an economic depression.

Although this theory is a popular way to justify inflation amongst central bankers, it does not appear to always hold true and is considered controversial amongst economists. Consumer electronics is one example of a market where prices constantly fall but which is not in depression. Similarly, the value of bitcoins has risen over time and yet the size of the Bitcoin economy has also grown dramatically along with it. Because both the value of the currency and the size of its economy started at zero in 2009, Bitcoin is a counterexample to the theory, showing that it must sometimes be wrong.

Notwithstanding this, Bitcoin is not designed to be a deflationary currency. It is more accurate to say Bitcoin is intended to inflate in its early years and become stable in its later years. The only time the quantity of bitcoins in circulation will drop is if people carelessly lose their wallets by failing to make backups.
With a stable monetary base and a stable economy, the value of the currency should remain the same.

Isn't speculation and volatility a problem for Bitcoin?

This is a chicken-and-egg situation. For bitcoin's price to stabilize, a large-scale economy needs to develop with more businesses and users. For a large-scale economy to develop, businesses and users will seek price stability. Fortunately, volatility does not affect the main benefits of Bitcoin as a payment system to transfer money from point A to point B. It is possible for businesses to convert bitcoin payments to their local currency instantly, allowing them to profit from the advantages of Bitcoin without being subjected to price fluctuations. Since Bitcoin offers many useful and unique features and properties, many users choose to use Bitcoin. With such solutions and incentives, it is possible that Bitcoin will mature and develop to a degree where price volatility will become limited.

What if someone bought up all the existing bitcoins?

Only a fraction of bitcoins issued to date are found on the exchange markets for sale. Bitcoin markets are competitive, meaning the price of a bitcoin will rise or fall depending on supply and demand. Additionally, new bitcoins will continue to be issued for decades to come. Therefore, even the most determined buyer could not buy all the bitcoins in existence. This situation isn't to suggest, however, that the markets aren't vulnerable to price manipulation; it still doesn't take significant amounts of money to move the market price up or down, and thus Bitcoin remains a volatile asset thus far.

What if someone creates a better digital currency?

That can happen. For now, Bitcoin remains by far the most popular decentralized virtual currency, but there can be no guarantee that it will retain that position. There is already a set of alternative currencies inspired by Bitcoin.
It is, however, probably correct to assume that significant improvements would be required for a new currency to overtake Bitcoin in terms of established market, even though this remains unpredictable. Bitcoin could also conceivably adopt improvements of a competing currency so long as it doesn't change fundamental parts of the protocol.

Why do I have to wait for confirmation?

Receiving notification of a payment is almost instant with Bitcoin. However, there is a delay before the network begins to confirm your transaction by including it in a block. A confirmation means that there is a consensus on the network that the bitcoins you received haven't been sent to anyone else and are considered your property. Once your transaction has been included in one block, it will continue to be buried under every block after it, which will exponentially consolidate this consensus and decrease the risk of a reversed transaction. Each confirmation takes between a few seconds and 90 minutes, with 10 minutes being the average. If the transaction pays too low a fee or is otherwise atypical, getting the first confirmation can take much longer. Every user is free to determine at what point they consider a transaction sufficiently confirmed, but 6 confirmations is often considered to be as safe as waiting 6 months on a credit card transaction.

How much will the transaction fee be?

Transactions can be processed without fees, but trying to send free transactions can require waiting days or weeks. Although fees may increase over time, normal fees currently only cost a tiny amount. By default, all Bitcoin wallets listed on Bitcoin.org add what they think is an appropriate fee to your transactions; most of those wallets will also give you the chance to review the fee before sending the transaction. Transaction fees are used as a protection against users sending transactions to overload the network and as a way to pay miners for their work helping to secure the network.
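The claim above that each additional confirmation exponentially decreases the risk of a reversed transaction can be made concrete with the catch-up probability calculation from the original Bitcoin paper. This is a sketch of that published formula: `q` is the attacker's assumed share of total hash power, and `z` is the number of confirmations:

```python
import math

def attacker_success(q: float, z: int) -> float:
    """Probability that an attacker controlling a fraction q of the hash
    power ever catches up from z blocks behind (Nakamoto's calculation)."""
    p = 1.0 - q
    lam = z * (q / p)  # expected attacker progress while z honest blocks are found
    prob = 1.0
    for k in range(z + 1):
        poisson = math.exp(-lam) * lam**k / math.factorial(k)
        prob -= poisson * (1.0 - (q / p) ** (z - k))
    return prob

# With 10% of the network's hash power, each confirmation sharply
# lowers the attacker's chance of reversing a payment:
for z in (0, 2, 6):
    print(z, attacker_success(0.1, z))
```

With zero confirmations the attacker trivially "succeeds" with probability 1; by six confirmations (the common rule of thumb above), the probability has dropped to a fraction of a percent.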
The precise manner in which fees work is still being developed and will change over time. Because the fee is not related to the amount of bitcoins being sent, it may seem extremely low or unfairly high. Instead, the fee is relative to the number of bytes in the transaction, so using multisig or spending multiple previously-received amounts may cost more than simpler transactions. If your activity follows the pattern of conventional transactions, you won't have to pay unusually high fees.

What if I receive a bitcoin when my computer is powered off?

This works fine. The bitcoins will appear the next time you start your wallet application. Bitcoins are not actually received by the software on your computer; they are appended to a public ledger that is shared between all the devices on the network. If you are sent bitcoins when your wallet client program is not running and you later launch it, it will download blocks and catch up with any transactions it did not already know about, and the bitcoins will eventually appear as if they were just received in real time. Your wallet is only needed when you wish to spend bitcoins.

What does "synchronizing" mean and why does it take so long?

Long synchronization time is only required with full node clients like Bitcoin Core. Technically speaking, synchronizing is the process of downloading and verifying all previous Bitcoin transactions on the network. For some Bitcoin clients to calculate the spendable balance of your Bitcoin wallet and make new transactions, they need to be aware of all previous transactions. This step can be resource intensive and requires sufficient bandwidth and storage to accommodate the full size of the block chain. For Bitcoin to remain secure, enough people should keep using full node clients because they perform the task of validating and relaying transactions.

What is Bitcoin mining?
Mining is the process of spending computing power to process transactions, secure the network, and keep everyone in the system synchronized together. It can be thought of as the Bitcoin data center, except that it has been designed to be fully decentralized, with miners operating in all countries and no individual having control over the network. This process is referred to as "mining" as an analogy to gold mining, because it is also a temporary mechanism used to issue new bitcoins. Unlike gold mining, however, Bitcoin mining provides a reward in exchange for useful services required to operate a secure payment network. Mining will still be required after the last bitcoin is issued.

How does Bitcoin mining work?

Anybody can become a Bitcoin miner by running software with specialized hardware. Mining software listens for transactions broadcast through the peer-to-peer network and performs appropriate tasks to process and confirm these transactions. Bitcoin miners perform this work because they can earn transaction fees paid by users for faster transaction processing, as well as newly created bitcoins issued into existence according to a fixed formula. For new transactions to be confirmed, they need to be included in a block along with a mathematical proof of work. Such proofs are very hard to generate because there is no way to create them other than by trying billions of calculations per second. This requires miners to perform these calculations before their blocks are accepted by the network and before they are rewarded. As more people start to mine, the difficulty of finding valid blocks is automatically increased by the network to ensure that the average time to find a block remains equal to 10 minutes. As a result, mining is a very competitive business where no individual miner can control what is included in the block chain. The proof of work is also designed to depend on the previous block to force a chronological order in the block chain.
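The proof-of-work mechanism described above can be illustrated with a toy sketch (real mining applies double SHA-256 to an 80-byte block header on specialized hardware, but the principle is the same): vary a nonce until the hash falls below a target. Lowering the target is how the network raises the difficulty to hold the average block time at 10 minutes.

```python
import hashlib

def mine(header, zero_bits):
    """Find a nonce whose SHA-256 hash has at least `zero_bits` leading
    zero bits, i.e. falls below the target 2**(256 - zero_bits)."""
    target = 2 ** (256 - zero_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{header}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def verify(header, nonce, zero_bits):
    """Checking a proof takes a single hash, however hard it was to find."""
    digest = hashlib.sha256(f"{header}:{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") < 2 ** (256 - zero_bits)

# An assumed toy header string; 16 zero bits means ~65,536 attempts on average.
nonce = mine("example-block-v1", 16)
assert verify("example-block-v1", nonce, 16)
```

The asymmetry shown here (expensive to produce, one hash to verify) is what lets every node cheaply reject invalid blocks, and why rewriting history requires redoing all of the work for every subsequent block.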
This makes it exponentially difficult to reverse previous transactions, because this requires the recalculation of the proofs of work of all the subsequent blocks. When two blocks are found at the same time, miners work on the first block they receive and switch to the longest chain of blocks as soon as the next block is found. This allows mining to secure and maintain a global consensus based on processing power. Bitcoin miners are able neither to cheat by increasing their own reward nor to process fraudulent transactions that could corrupt the Bitcoin network, because all Bitcoin nodes would reject any block that contains invalid data as per the rules of the Bitcoin protocol. Consequently, the network remains secure even if not all Bitcoin miners can be trusted.

Isn't Bitcoin mining a waste of energy?

Spending energy to secure and operate a payment system is hardly a waste. Like any other payment service, the use of Bitcoin entails processing costs. Services necessary for the operation of currently widespread monetary systems, such as banks, credit cards, and armored vehicles, also use a lot of energy, although, unlike Bitcoin, their total energy consumption is not transparent and cannot be as easily measured. Bitcoin mining has been designed to become more optimized over time with specialized hardware consuming less energy, and the operating costs of mining should continue to be proportional to demand. When Bitcoin mining becomes too competitive and less profitable, some miners choose to stop their activities. Furthermore, all energy expended in mining is eventually transformed into heat, and the most profitable miners will be those who have put this heat to good use. An optimally efficient mining network is one that isn't actually consuming any extra energy. While this is an ideal, the economics of mining are such that miners individually strive toward it.

How does mining help secure Bitcoin?
Mining creates the equivalent of a competitive lottery that makes it very difficult for anyone to consecutively add new blocks of transactions into the block chain. This protects the neutrality of the network by preventing any individual from gaining the power to block certain transactions. This also prevents any individual from replacing parts of the block chain to roll back their own spends, which could be used to defraud other users. Mining makes it exponentially more difficult to reverse a past transaction by requiring the rewriting of all blocks following this transaction.

What do I need to start mining?

In the early days of Bitcoin, anyone could find a new block using their computer's CPU. As more and more people started mining, the difficulty of finding new blocks increased greatly to the point where the only cost-effective method of mining today is using specialized hardware. You can visit BitcoinMining.com for more information.

Is Bitcoin secure?

The Bitcoin technology - the protocol and the cryptography - has a strong security track record, and the Bitcoin network is probably the biggest distributed computing project in the world. Bitcoin's most common vulnerability is in user error. Bitcoin wallet files that store the necessary private keys can be accidentally deleted, lost or stolen. This is pretty similar to physical cash stored in a digital form. Fortunately, users can employ sound security practices to protect their money or use service providers that offer good levels of security and insurance against theft or loss.

Hasn't Bitcoin been hacked in the past?

The rules of the protocol and the cryptography used for Bitcoin are still working years after its inception, which is a good indication that the concept is well designed. However, security flaws have been found and fixed over time in various software implementations. Like any other form of software, the security of Bitcoin software depends on the speed with which problems are found and fixed.
The more such issues are discovered, the more mature Bitcoin becomes.

There are often misconceptions about thefts and security breaches that happened on diverse exchanges and businesses. Although these events are unfortunate, none of them involve Bitcoin itself being hacked, nor do they imply inherent flaws in Bitcoin, just as a bank robbery doesn't mean that the dollar is compromised. However, it is accurate to say that a complete set of good practices and intuitive security solutions is needed to give users better protection of their money, and to reduce the general risk of theft and loss. Over the course of the last few years, such security features have quickly developed, such as wallet encryption, offline wallets, hardware wallets, and multi-signature transactions.

Could users collude against Bitcoin?

It is not possible to change the Bitcoin protocol that easily. Any Bitcoin client that doesn't comply with the same rules cannot enforce its own rules on other users. As per the current specification, double spending is not possible on the same block chain, and neither is spending bitcoins without a valid signature. Therefore, it is not possible to generate uncontrolled amounts of bitcoins out of thin air, spend other users' funds, corrupt the network, or anything similar. However, powerful miners could arbitrarily choose to block or reverse recent transactions. A majority of users can also put pressure for some changes to be adopted. Because Bitcoin only works correctly with a complete consensus between all users, changing the protocol can be very difficult and requires an overwhelming majority of users to adopt the changes in such a way that remaining users have nearly no choice but to follow. As a general rule, it is hard to imagine why any Bitcoin user would choose to adopt any change that could compromise their own money.

Is Bitcoin vulnerable to quantum computing?
Yes, most systems relying on cryptography in general are, including traditional banking systems. However, quantum computers don't yet exist and probably won't for a while. In the event that quantum computing could be an imminent threat to Bitcoin, the protocol could be upgraded to use post-quantum algorithms. Given the importance that this update would have, it can safely be expected that it would be thoroughly reviewed by developers and adopted by all Bitcoin users.
Biomass Potential of Poplar Energy Crops in Minnesota and Wisconsin Assessed

Short-rotation woody crops such as poplar trees and poplar-hybrid varieties are a significant component of the total biofuel and bioenergy feedstock resource in the United States. Production of these dedicated energy crops can result in large-scale land conversion, raising questions about the economic, logistic, and ecological feasibility of the crops. To address such concerns, Forest Service scientists used available social (i.e., land ownership and cover) and biophysical (i.e., climate, soil characteristics) spatial data to map lands suitable for establishing and growing poplar biomass for bioenergy crops across Minnesota and Wisconsin. They confirmed the validity of this mapping technique by sampling and assessing biotic variables within locations identified on the maps. In addition, they estimated potential poplar productivity within identified areas using a process-based growth model to determine the spatial distribution of productive lands across the study area. Although this novel approach was validated for Minnesota and Wisconsin, the methodology is useful across a wide range of geographic conditions, irrespective of intraregional variability in site and climate parameters. Thus, this information is vital for siting poplar energy production systems to increase productivity and associated ecosystem services, and is widely applicable to woody biomass production systems worldwide.
Soda Facts 101

Research has proven a direct relationship between consumption of sugary drinks and an increase in obesity, which promotes diabetes, heart disease, stroke, and many other health problems. Now you know the plight of The Real Bears. Real human families should also know about the risks of drinking too much soda. Here are the unhappy facts.

"There is no scientific evidence that connects sugary beverages…"

Truth: Each additional sugary drink consumed per day increases the likelihood of a child becoming obese by about 60%.

Sugary drinks are connected to other health problems:

Truth: Each soda consumed per day increases the risk of heart disease by 19% in men.

Truth: Drinking one or two sugary drinks per day increases your risk for type 2 diabetes by 25%.

Truth: Diabetes can lead to erectile dysfunction.

"If you're consuming the calories from the banana and there is the same number of calories as in a beverage that you consume, the impact on your body is calories…"

Truth: Liquid calories are more conducive to weight gain than solid calories, because the human body doesn't compensate by reducing calorie intake later in the day.

Truth: Sugary drinks are the single-largest source of calories in the American diet, providing an average of about 7 percent of total calories per person, and that average includes all the people who rarely drink them. The percentage of calories from sugary drinks is much higher for people who consume them often—such as several times a day.

Truth: Most sugary drinks are devoid of nutrition—vitamins, minerals, protein, or fiber—and contain only empty calories.

Truth: It would take the average adult over one hour of walking to burn off the 240 calories in a 20-ounce Coke.

Truth: Americans consume about 38 pounds of sugar from sugary drinks each year.

"At The Coca-Cola Company, we know our business can only be as strong and sustainable and healthy as the communities we serve."

Truth: If communities were healthier, Coca-Cola Co. would be selling a lot fewer Cokes. The tripling of sugary carbonated drink consumption since the mid-1950s is one of the major causes of obesity.

Truth: Between 20% and 50% of the approximately 300 calories Americans have added to their diets in the past 30 years is attributable to increasing sugary drink consumption, now at an average of 178 calories for men and 103 calories for women per day.

Truth: Coca-Cola plans to spend more than $21 billion over the next five years to expand its business in just four countries: China, India, Brazil, and Mexico—which will undermine the health of "the communities we serve."

Truth: When Congress was considering a soda tax to help pay for health-care reform and improve the health of communities, Big Soda increased its lobbying expenses by 3,000% over the 2005 levels.

Truth: Big Soda gives generously to community groups, organizations of public officials, minority groups, and medical and health groups to influence policy positions and discourage criticism of the companies for undermining the health of communities. It often "changes the conversation" by focusing on building playgrounds and encouraging physical activity.

"[O]ur member companies do not advertise beverages other than juice, water or milk-based drinks to any audience that is comprised predominantly of children under…"

Truth: Not only do children under 12 see Coke and Pepsi logos everywhere, but Coca-Cola Co. promotes its products heavily at Disneyland, on American Idol, and on telecasts of the Olympics, all of which are seen by huge numbers of young children. The company also sells kids' tee-shirts, toys, games, and stuffed animals with Coca-Cola logos at its web store, and it licenses similar kid-friendly products at Toys "R" Us and elsewhere.

Truth: Coke has long reached millions of young children by marketing its drinks at child-friendly fast food restaurants, including McDonald's, the home of Happy Meals.

Truth: While soda companies, thankfully, have not advertised on TV shows intended for little kids, they have spent heavily to get their brand names onto school scoreboards and their products into elementary, middle, and high schools. An internal 1995 Coke newsletter exclaimed, "The Coca-Cola Company is focusing upon the education market with revitalized efforts around the world." Only recently did public pressure force them to stop.

Truth: Soft drink companies do market aggressively to teens. According to the Federal Trade Commission, in 2006, companies spent $474 million marketing carbonated beverages directly to adolescents: more than twice the marketing budget for any other consumable product.

"Coca-Cola is an excellent complement to the habits of a healthy life."

Truth: Coca-Cola and other colas undermine that healthy life with loads of obesity-promoting high-fructose corn syrup, mildly addictive caffeine, caramel coloring with its carcinogenic 4-methylimidazole contaminant, and tooth-rotting phosphoric acid.

Unfortunately, this is no lie: "There is a large portion of the population that relies on the carbohydrates and energy in our regular beverages."

Truth: Far too many people do rely too much on soft drinks for their calories. Sugary drinks' empty calories displace healthier foods, and Americans already consume hundreds more calories per day on average than they did 30 years ago.

Truth: Two-thirds of American adults and one-third of children are overweight or obese.

Truth: The American Heart Association urges Americans to consume 60% less sugary drinks by 2020.

Truth: Overall, males 12 to 19 years old consume 273 calories per day from sugary drinks; female teens down 171 per day.

Want to dive deeper into the facts? Download Soda Facts 101 with citations here
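Several of the figures above are simple energy arithmetic. As a quick sanity check, here is a sketch (the 240-kcal-per-hour walking rate is an assumption implied by the one-hour claim above, not a measured figure):

```python
# Assumed rate for illustration: the "one hour of walking per 240-calorie
# 20-ounce Coke" claim above implies roughly 240 kcal burned per hour.
WALKING_KCAL_PER_HOUR = 240

def walking_hours(drink_kcal):
    """Hours of walking needed to burn off a drink's calories."""
    return drink_kcal / WALKING_KCAL_PER_HOUR

# A 20-ounce Coke takes about an hour to walk off:
print(walking_hours(240))
# And one such soda every day adds up over a year:
print(240 * 365, "kcal per year")
```

The yearly total makes the displacement argument concrete: a single daily soda contributes tens of thousands of empty calories per year.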
While the United Nations climate talks in Durban enter their ninth day of political feet-dragging, researchers and peasants around the world are busy connecting the dots between so-called “green climate solutions”, industrialised agriculture and chronic hunger.

New research released Tuesday by the U.S.-based Oakland Institute (OI) reveals the nexus between “false” fuel alternatives such as the development of agrofuels and agroforests and the massive land grab underway in Africa that is stripping thousands of peasants of their land and means of subsistence.

The research cites the hypocrisy of major industrialised actors like the U.S. and the European Union, as well as the World Bank Group (WBG) and other development agencies, for pouring money into assisting victims of famine and natural disasters, all the while making massive investments in schemes that heat the earth and stifle local development.

Industrial agriculture and biofuels: neither clean nor green

Industrialised agricultural practices currently produce 13.5 percent of all greenhouse gas emissions, mostly methane and nitrous oxide. The latter is emitted in huge doses through the spraying of fertiliser, which is used 800 times more frequently today than it was 100 years ago. The production of fertilisers themselves requires the burning of fossil fuels, emitting up to 41 million tonnes of carbon dioxide (CO2) annually, according to the U.N. Food and Agriculture Organization (FAO).

And yet, powerful governments like the U.S. and various players from the eurozone, together with the WBG, continue to advocate for the proliferation of agrofuels, which employ the same dirty, large-scale farming techniques described above, as a “green solution” to the climate crisis.
In fact, mono-crop agrofuels guzzle thousands of gallons of freshwater, are processed into biodiesels – the very products that have overheated the planet to begin with – and create long, oil-thirsty transport chains to carry the product.

The OI report estimates that the “conversion of rainforests and native grasslands into fields to produce agrofuel crops will release between 17 to 420 times more CO2 than the amount of greenhouse gas emissions that would be reduced following the replacement of fossil fuels with agrofuels. The increase in agrofuel use may release between 44 and 73 million additional tons of CO2 equivalent per year.”

The U.S. alone has vowed to increase its use of agrofuels by 30 percent in the coming years. According to OI’s research, five million hectares of land throughout sub-Saharan Africa are currently under cultivation for agrofuel crops like palm trees and eucalyptus, in a multibillion-dollar scheme that profits major transnational corporations and their government allies. The Chinese government now owns eight million hectares of land in the Democratic Republic of Congo for palm oil production, while Crest Global Green, a British bioenergy giant, holds deeds to 900,000 hectares combined in Mali, Guinea, and Senegal.

“We were also shocked to find, during our research, several Scandinavian churches making land investments in countries like Mozambique, in schemes that involved thousands of hectares of illegally acquired land,” Frederic Mousseau, the policy director of OI, told IPS. “We have come to expect this from hedge funds, but not from churches,” he added.

“The emergence of carbon trading and carbon markets has also been a major factor in the land grab, with carbon credits being touted as a green solution to the problem of carbon emissions,” Mousseau added.
In fact, “the trade in carbon credits involves corporations and governments buying and selling credits in one part of the world in order to continue polluting domestically. Carbon trading not only assigns rights to developed countries and corporations to pollute, but also represents what some are calling ‘global climate malgovernance’,” according to the report.

“Since this is a relatively new phenomenon, we have not yet seen all possible manifestations of the problem,” Mousseau told IPS. “All we know for sure are the immediate negative consequences of this practice, such as investors planting non-native crops which destroy the local environment, replacing rich grasslands with mono crops and denying indigenous groups their rights to land and their traditional practices that respect biodiversity.”

David Deng, research director of the South Sudan Law Society, told IPS, “In South Sudan, government officials rarely know what biofuels are, much less carbon credits. As a result, they are often willing to give away these rights for free.”

“For the time being, the uncertainty of the transitional context has prevented companies from beginning operations, but if these ‘green’ deals (carbon credits and agrofuel projects) in the newly established South Sudan move forward, we will see a massive transfer of wealth from landowning communities in South Sudan to transnational companies in the global North,” he added.

Meanwhile, Green Resources Ltd, a Norwegian timber company, has embarked on a plan to replace nearly 7,000 hectares of natural Tanzanian grasslands with monocultures of pine and eucalyptus, destroying the local biodiversity, displacing smallholders and burying jobs.

The loss of local employment has been a particularly thorny issue in Sierra Leone, where investments by the Socfin Agricultural Company in the Pujehun district have marginalized workers in the area.
“Older people who have lost their land are not employed and women have to leave their homes as early as 4:30am to queue for daily wage jobs, which they seldom get,” Joseph Rahall, the director of Green Scenery in Sierra Leone, told IPS.

“Vast tracts of land are now being cleared to make way for oil palm monocultures, which cannot be compared to a biodiverse flora. Families from the upland farms used to grow multiple crops capable of absorbing the shocks of food scarcity, but many of these families have stopped planting for fear that multinationals will occupy their land,” he said.

“Community members who were peacefully protesting the illegal occupation of their land were arrested and are now facing trials in court. The Northern countries’ preference for biofuels has deprived countries like ours of basic human security,” he added.

*This is the first of a two-part series on biofuels, industrialised agriculture and hunger.
Location: U.S. Federal Courthouse, Beltsville, Maryland, U.S.A.
Materials: Granite, water, bronze cylinder with Onondaga text, miscanthus grasses, "Seneca Red" gravel, 10,000 arrowheads
Size: 100'L x 200'W

This courthouse site was visited frequently 9,000 years ago by archaic Indians in search of stone materials for the manufacture of arrowheads, spear points, and scraping tools. Hundreds of the artifacts have been excavated here, but thousands remain buried below this area. Indian Run, which at one time ran across this site, was a stream of many uses for these early inhabitants.

This small park adjacent to the courthouse draws on the knowledge of these things for its inspiration. The long stone outcropping with its stream and waterfall recalls the original Indian Run; the serpentine walkway, the archaic Indian moundbuilders; and the bronze cylinder, the indigenous peoples' contribution to our legal system. The cylinder is perforated with the text of the Iroquois Book of the Great Law. The text itself is written in Onondaga and was transcribed from the ancient oral tradition of five Iroquois nations. At night this text is projected over the entire site by a pinpoint light source located within the bronze cylinder. It is believed by many scholars that Benjamin Franklin drew from this manuscript when he was designing the U.S. Constitution.

In addition, this site was "seeded" with 10,000 arrowheads provided by the artist; these will be recoverable by visitors for many years.
April 15, 2010

EU Initiative to "AXLR8" Move to High-Tech, Animal-Free Methods For Chemical and Drug Testing

AXLR8, an ambitious European Commission-funded initiative, has launched today as part of growing international efforts to revolutionise chemical and drug safety testing with sophisticated “21st century” cell- and computer-based methods. A successful transition to 21st century toxicology could mean the end of animal testing as it exists today, sparing at least a million animals from suffering and death each year in the European Union alone.

AXLR8 (= accelerate) is a unique collaboration between Humane Society International and academic scientists and technical experts from Germany and Belgium. The initiative, which has been awarded a half-million-Euro grant from the European Commission under the 7th Framework Programme for Research and Technology Development, will help to monitor and support European research to modernise the science of safety testing, and strengthen international co-ordination in this area.

The vision is of a not-so-distant future in which most toxicity testing is carried out using a combination of computer modelling and human cell tests, which can already be performed with unparalleled efficiency using “high throughput” testing robots capable of working nearly 1,500 times faster than a human technician.

Says Troy Seidle, director of research & toxicology for Humane Society International: “Exposing relatively short-lived animals to unrealistic doses of chemicals in sterile laboratory conditions is a primitive approach to assessing chemical effects on humans in real-life conditions. The scientists and global corporations HSI is working with are only too aware of the urgent need to bring the science of safety testing into the 21st century. If utilised to their full potential, cutting-edge cell- and computer-based methods could transform toxicity testing, making it quicker, cheaper and more applicable to real-life human exposure scenarios.
As well as having enormous benefits for human health and environmental protection, this transition towards 21st century toxicology could significantly reduce and ultimately replace testing on animals.”

There is considerable momentum behind the aim of a global transition to more modern and humane approaches in toxicity testing. In 2007 the United States National Research Council published the influential report “Toxicity Testing in the 21st Century: a Vision and a Strategy”, calling for just such an overhaul of safety testing. As a result, U.S. regulatory and research agencies joined forces under the banner of the “Tox21” initiative to advance the scientific understanding of cellular mechanisms by which chemical toxicity occurs, and to develop more predictive methods for safety testing. In 2009, experts from six continents representing industry, academia, in vitro sciences and animal welfare endorsed a global resolution supporting the NRC vision.

Toxicity data are needed to evaluate chemicals used in everything from cosmetics and household cleaners to pharmaceuticals, food additives, and pesticides. However, scientists and legislators across the EU and United States are coming to recognise that conventional tests, in which animals such as rodents, rabbits and dogs are given unrealistically large doses of chemicals, are too costly, time-consuming, and of uncertain relevance to human health effects to meet the demands for better and faster data as part of new chemicals regulation such as REACH. A recent report by the U.S. Food & Drug Administration estimates that new drug candidates have only an 8 percent chance of reaching the market, in large part because animal studies so often “fail to predict the specific safety problem that ultimately halts development”. For example, to evaluate the cancer-causing potential of a single chemical in a conventional rodent test takes up to 5 years, 800 animals and 3 million Euros.
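The cost contrast quoted above can be made explicit with a little arithmetic. The sketch below simply restates the figures given in the text (3 million Euros per conventional rodent study versus roughly 350 chemicals screened for the same price); it is illustrative only:

```python
# Figures quoted in the text: one conventional rodent carcinogenicity
# study costs about 3 million Euros, takes up to 5 years, and uses 800
# animals; the same sum buys a high-throughput screen of ~350 chemicals.
CONVENTIONAL_COST_EUR = 3_000_000   # per single chemical
HTS_COST_EUR = 3_000_000            # per batch of ~350 chemicals
HTS_BATCH_SIZE = 350

cost_per_chemical_conventional = CONVENTIONAL_COST_EUR
cost_per_chemical_hts = HTS_COST_EUR / HTS_BATCH_SIZE  # ~8,571 Euros

# The per-chemical cost ratio equals the batch size:
print(round(cost_per_chemical_conventional / cost_per_chemical_hts))  # → 350
```

On these numbers, per-chemical screening cost drops from millions of Euros to under ten thousand, which is the economic case the article is making for high-throughput methods.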
For the same price, and without any use of animals, as many as 350 chemicals could be tested in less than one week in 200 different cell or gene tests using a robot-automated high-throughput approach.

This mechanistic approach involves a virtual “dissection” of the human body into its various cell types (brain, skin, lung, liver, etc.) and then tests each of these cell types individually for different types of toxic response. Computerised systems biology and pharmacokinetic models are then used to recreate the whole-body scenario and relate conditions at the cellular level to expected real-world conditions for a living, breathing human being.

Humane Society International and its affiliates are spearheading initiatives in the EU and U.S. to achieve this transition away from observing gross pathological effects in chemically overdosed animals towards one based on studying how chemicals interact with “cellular pathways” in the human body at environmentally relevant exposure levels. Some of the globe’s largest consumer product, chemical and pharmaceutical companies—Dow, DuPont, Johnson & Johnson, Procter & Gamble, Unilever—have joined forces with HSI and its affiliates to add corporate support to help bring about the evolution in toxicology. And in the EU, the HSI-envisioned AXLR8 initiative will bring together leading scientists to advise the European Commission regarding future research needs and priorities to transition towards 21st century toxicology.

Mapping the toxic pathways of the human body is an ambitious project that could take between 10 and 20 years—toxicology’s equivalent of mapping the human genome. Such a “big biology” project will require international collaboration and substantial investment, which is why HSI is working on both sides of the Atlantic to bring together international experts comprising academia, industry, government and regulators.
The European Union has long been a world leader in the development and regulatory uptake of animal replacement, reduction and refinement (3Rs) approaches in toxicity testing. Over the past 20 years, the European Commission has invested upwards of 200 million Euros in the 3Rs, and already some EU-pioneered tests have been internationally accepted, including tests for skin irritation, phototoxicity and pyrogenicity. Funding from the Commission’s 6th and 7th Research Framework Programmes is currently supporting 18 large-scale integrated projects to develop non-animal methods and strategies for reproductive toxicity and carcinogenicity, skin allergy and other health and environmental concerns.

Recent advancements in molecular and cellular biology have made possible the realisation of the vision of 21st century toxicology. A range of new tools exists – such as functional genomics, proteomics, metabonomics, high-data-content screening and systems biology – that can be used for studying the effects of chemical stressors in the human body at a molecular and cellular level in a time- and cost-efficient manner. Rapid screening using high-throughput robotic automation means that thousands of substances can be processed in a single day. In this way data can be available to regulators in hours instead of days, weeks or even years, making future chemicals regulation more intelligent and responsive.

Notes to Editors:

1. AXLR8 (website: axlr8.eu) is an EU-funded coordination project developed on the initiative of Humane Society International, the Free University of Berlin, and the Flemish Institute for Technological Research.

3. Humane Society International and its partner organisations together constitute one of the world's largest animal protection organisations—backed by 11 million people. For nearly 20 years, HSI has been working for the protection of all animals through the use of science, advocacy, education and hands-on programs.
Celebrating animals and confronting cruelty worldwide (website: hsieurope.org). 8. Call for a new approach to toxicity testing & risk assessment to move away from animal use and better protect health, safety and the environment: signed by delegates at the 7th World Congress on Alternatives and Animal Use in the Life Sciences, Rome, August 2009 (website: http://bit.ly/l7Y18). 12. Personal communication from Dr. Chris Austin, director of the US National Institutes of Health Chemical Genomics Center.
Scientific Name: Acacia pennivenia
Taxonomic Notes: May be transferred to Racosperma.
Red List Category & Criteria: Near Threatened ver 3.1
Reviewer(s): Abuzinada, A.H. & AL-Eisawi, D.M.H. (Arabian Plants Red List Authority)

Acacia pennivenia, whilst abundant at present, is lopped as livestock fodder in dry periods. If livestock numbers increase greatly, or a succession of drought years occurs, then this species will become increasingly threatened.

Range Description: Endemic to Soqotra.
Current Population Trend: Unknown
Habitat and Ecology: Widespread in drought-deciduous woodland at altitudes of 50–650 m. Balfour, in his Botany of Socotra (Bayley Balfour, 1888), records an Entada sp. No specimen can be traced and the identity of his plant is a mystery. He notes it as "a beautiful and graceful tree of which material is too fragmentary to permit identification, [which] is provisionally referred to this genus". He goes on to say that it has some resemblance to Acacia pennivenia Schweinf. and that the inhabitants give it the same name (Tomhor). No species of Entada has been recorded from the island and it seems likely that Balfour’s plant was in fact Acacia pennivenia.

References:
IUCN. 2004. 2004 IUCN Red List of Threatened Species. www.iucnredlist.org. Downloaded on 23 November 2004.
Lock, J.M. 1989. Legumes of Africa: a checklist. Royal Botanic Gardens, Kew / International Legume Database and Information Service.
Miller, A.G. 1992. List of Socotran endemics with conservation status. Revised November 1991 (unpublished).
Miller, A.G. 1997. Completed data collection forms and comments concerning the threatened trees of Socotra and Yemen.
Oldfield, S., Lusty, C. and MacKinven, A. (compilers). 1998. The World List of Threatened Trees. World Conservation Press, Cambridge, UK.

Citation: Miller, A. 2004. Acacia pennivenia. The IUCN Red List of Threatened Species 2004: e.T30425A9548309. Downloaded on 29 June 2016.
Posted on Dec 02, 2013

Proposition 39, as many California voters know, is a revolving loan fund that aims to quadruple the state's energy investments. Prop 39 California solar power was a huge subject with voters last year when they passed the "California Clean Energy Jobs Act" to help boost renewable energy resources with cutting-edge companies such as Aztec Solar. Prop 39 Sacramento solar power laws are also aimed at creating more clean energy investments to help schools.

Proposition to aid California Schools

For instance, California Gov. Jerry Brown said the proposition will do a great deal to help the state's 1,032 school districts by providing about $2.875 billion over the next five years to invest in green energy projects and improve schools. The funds will likely be used to fund green projects at schools, which creates jobs, increases the use of renewable energy and decreases schools’ utility bills. In this day and age, as energy costs rise and funding for education is often cut, Prop 39 invests in both the state’s energy and educational future.

Impact of Prop 39

While the overall impact of Proposition 39 cannot be stated down to the smallest detail, there is a clear relationship between this funding and job creation in the solar sector and savings for schools. The state will raise the money by closing loopholes that unfairly favored companies with out-of-state property and payroll, and by requiring true out-of-state companies to pay income tax on their in-state revenue. For instance, a recent report by the UCLA Luskin Center for Innovation points out that the law's revolving loan fund would not only boost natural energy investment and job creation, but also help California school districts with much-needed funding that would be recouped from this long-term investment in solar energy technologies. 
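As a rough back-of-the-envelope check on the figures above, the statewide totals imply the following averages. This is a sketch only: actual Prop 39 allocations are not split evenly across districts, so the per-district number is purely illustrative.

```python
# Figures cited above: roughly $2.875 billion over five years, 1,032 districts.
total_funds = 2.875e9
years = 5
districts = 1032

per_year = total_funds / years
# Hypothetical even split across districts (real allocations are weighted).
per_district_per_year = per_year / districts

print(f"${per_year:,.0f} per year statewide")                 # $575,000,000
print(f"~${per_district_per_year:,.0f} per district per year")  # ~$557,171
```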
Energy Investments Needed Now

Another aspect of the proposition is linked to the public demand for more energy efficiency at a time when many consumers are struggling to pay their bills while energy prices are increasing. The proposition appealed to California voters because it both invests in renewable resources, such as solar energy, and creates jobs in the process. Meanwhile, it also helps small businesses compete in the state by ensuring larger companies can’t exploit tax loopholes that hurt those of us who pay our fair share. At the same time, voters are anxious for the program to start bearing fruit, and we at Aztec Solar are excited to get started working to help rebuild our struggling school systems, invest in California’s energy independence and ensure we are saving the planet for years to come.

You can refer to this article for more information about Prop 39.
Context sensitive processes can be applied to almost any function for which an agency is responsible. These are some of the ways in which this approach can be useful: - Long-Range Transportation Plans — Attention to diverse stakeholder values can play an integral role in visioning, screening, and prioritizing projects. Long-range plans can also include policies that encourage CSS approaches during plan implementation and project development. - Area-wide Transportation Planning — This approach can be applied over a variety of project areas, including statewide, metropolitan planning organizations (MPO), and regional and area-wide planning through such techniques as integrating transportation, economic, environmental, and land use factors at the area level before projects are selected. - Agency Standards and Criteria — The application of CSS during the development of agency policies and standards may not only lead to additional criteria, but also to greater flexibility, so that a project's context can be adequately considered. - Develop Project Concepts — Using the CSS approach leads to a more comprehensive and diverse set of alternatives that may offer different ways of balancing stakeholder interests and objectives. - Project Development — CSS approaches offer a framework that fully supports National Environmental Policy Act (NEPA) documents, approvals, and permitting and other related environmental regulations considered within the NEPA decision-making framework. - Consultation and Public Involvement — Engaging resource agencies and the public in decision making ensures greater chances for project success. - Preliminary Engineering and Final Design — Applying CSS principles during the design process results in a more thorough understanding of choices, opportunities, and constraints and further clarifies purpose and need. 
- Construction — Coordination and participation in developing traffic management plans, scheduling work to limit traffic delays, minimizing impacts on businesses, and other mitigation of construction impacts.
- Maintenance and Operations — The use of CSS approaches leads to scheduling activities to avoid conflicts with major events, providing information to those affected by the activity, and use of equipment and pesticides that avoid or minimize impacts on the natural environment.

Strategies for Agency and Department Managers
- Encourage Interdisciplinary Teams — Create collaborative teams that include all relevant planning and design disciplines. Endorse policies that lead to regular cooperation as projects develop.
- Mentor Staff — Identify staff with experience using the approach and encourage knowledge/skill sharing.
- Provide Training — Send staff to seminars and conferences to learn new applications and share experiences.
- Use New Technologies — Invest in new technologies that will improve designs and public understanding and involvement.
- Adopt Performance Measures — The NCHRP report, “Performance Measures for Context Sensitive Solutions,” illustrates measures that track the use of CSS approaches, both at the project level and organization-wide.
- Document the Business Case — When CSS projects are successfully completed, summarize and distribute the information so other agencies can learn the benefits gleaned from the project, especially in relation to budget and schedule benefits.
- Incorporate Lessons into Practice — Incorporate lessons learned into an agency's way of doing business by changing internal policies and communicating among agency personnel.
- Review Design Standards — Audit current design standards. Do they hinder implementation of CSS approaches by mandating standards without considering context? The report on “Context Sensitive Solutions in Designing Major Urban Thoroughfares” offers suggestions on improving design standards. 
Strategies for Elected Officials
- Adopt Supportive Policies — Examples include public involvement policies, local planning and design, and other policy statements.
- Promote Success — Talk to constituencies about the benefits of the CSS process through annual report publications, speeches, and interviews.

Challenges of Implementing CSS

While CSS can ultimately be a rewarding approach to project development, there are also challenges. It is important to meet these challenges head on and address them up front:
- Internal Resistance to Change — Managers can help team members understand how their skills relate to job skills required for CSS approaches, provide a rationale for change that is meaningful to each team member's work, and tie performance goals to implementation of CSS approaches.
- Inflexible Design Standards — Design standards may sometimes be applied rigidly to avoid liability or simply because it is the “way designs are typically done.” Owner/agency liability can be managed when context sensitive solutions are well reasoned and comprehensively documented. To implement CSS approaches, opportunities can be provided for design staff to learn from other design practitioners. This helps designers explore strategies for overcoming barriers to flexible application of design standards and helps identify design exception policies that can be applied flexibly.
- Added Budget for Process — The stakeholder involvement process and other CSS elements can be scaled to the size and complexity of the project.
- Added Time for Process — The CSS process requires a larger investment of time early in the project. The reward comes later when the design can be advanced relatively quickly with little rework because the team thoroughly understands the context and can design within it.
- Lack of Stakeholder Trust — The CSS process can require new relationships between DOTs and regulatory agencies and other stakeholders. 
If there is resistance to shifting to collaborative relationships from traditional regulatory relationships, the DOT can provide training in CSS skills or begin with pilot projects or programs to develop a shared understanding of roles and responsibilities.

CSS at the Federal Level

The Federal Highway Administration promotes CSS approaches by establishing policy, setting funding priorities, conducting training and technical assistance, and providing financial and other support for guidance documents and demonstration projects.
- Establishing Policy — FHWA has depended on CSS approaches to improve environmental sensitivity in transportation decision making by incorporating CSS at the project level across the country.
- Providing Training and Technical Assistance — FHWA has provided a 2-day CSS overview course, Webinars, and peer exchanges to federal, state, and local partners. Agency specialists also consult and advise on technical issues for planning and project development processes.
- Administering Legislation — The Safe, Accountable, Flexible, Efficient Transportation Equity Act — A Legacy for Users (SAFETEA-LU) legislation, the current 6-year funding bill, includes a provision authorizing the Secretary of Transportation to consider CSS approaches in establishing National Highway System (NHS) standards.
- Funding to Enhance Livability — SAFETEA-LU contains specific programs, such as Transportation Enhancements and Congestion Mitigation and Air Quality, that create eligible categories and funding criteria to advance CSS projects to completion.
- Funding Pilot Studies — Following the “Thinking Beyond the Pavement” conference, FHWA selected five pilot states to implement the CSS approach: Connecticut, Kentucky, Maryland, Minnesota, and Utah. 
In addition, FHWA partners with federal and national organizations such as the American Association of State Highway and Transportation Officials (AASHTO) and the Transportation Research Board (TRB) to promote integration of CSS approaches into project development, construction, and maintenance.
- Organizing Conferences — Federal agencies and organizations have sponsored many conferences on CSS approaches since 1998.
- Developing Guidance Documents — AASHTO, FHWA, EPA, the Institute of Transportation Engineers, and the Congress for the New Urbanism are completing guidance on the design of urban thoroughfares.
- Developing Web Sites — Project for Public Spaces, FHWA, AASHTO, the Federal Transit Administration, the Institute of Transportation Engineers, the National Association of City Transportation Officials, and the National Park Service have developed a Web site and clearinghouse for context sensitive solutions: www.contextsensitivesolutions.org.

CSS at the State Level

State DOTs are playing a central role in implementing this approach. Some examples of initiatives at the state level include:
- Policy — CSS policies have been adopted by 26 states through executive orders, agency policy changes, or legislative actions. The Utah Department of Transportation (UDOT), a CSS pilot state, adopted an agency policy in 2000 and has since trained more than 400 employees in CSS approaches. UDOT's Web site describes CSS as “a philosophy that guides the Utah Department of Transportation wherein safe transportation solutions are planned, designed, constructed, and maintained in harmony with the community and the environment.”
- Training — North Carolina DOT developed a 3-day CSS training course that was offered at least once a month from 2003 to 2006. 
www.ncdot.gov
- Funding Priorities — Maryland has a transportation enhancement program that focuses on smaller transportation projects that fit the “context of the community.” www.marylandroads.com
- Demonstration Projects — State-level success stories are numerous.
- Design Manuals — Delaware DOT developed a Traffic Calming Design Manual that includes roadway design standards intended to slow traffic, discourage traffic from cutting through neighborhoods, increase safety for vehicles and pedestrians, and enhance pedestrian environments.
- Long-Range Plans — The Oregon Transportation Plan uses CSS principles to involve stakeholders from different parts of the state as advisors in the plan's development, and obtains substantial public feedback on the draft plan via online survey, online comments, and more than 20 public meetings around the state. www.oregon.gov/ODOT
- Agency Coordination — Florida DOT's Environmental Screening Tool is an online resource where resource agencies perform a review of transportation projects after inclusion in a long-range plan and before initiation of design.

CSS at the Regional Level

Implementation of CSS approaches at the regional level is usually done by metropolitan planning organizations (MPOs). Examples include:
- Regional Plans — Metro Vision 2030 is the long-range transportation plan for the Denver Regional Council of Governments and is the area's comprehensive guide for regional planning. The evaluation criteria reflect a desire to be consistent with the stakeholder vision and goals. For example, projects are given points if they are within or adjacent to an urban center, if they serve a major intermodal facility, or if they support a transit corridor. www.drcog.org
- Design Guidelines — Metro, the MPO for Portland, Oregon, developed guidelines for designing “Green Streets,” streets that are designed to incorporate stormwater treatment within the right of way. 
These guidelines provide a variety of design cross sections that accommodate bio-filtering swales, conveyance swales, detention basins, and/or detention ponds. www.oregonmetro.gov
- Corridor Plans — The Portland (Maine) Area Comprehensive Transportation Committee (PACTS) developed an arterial land use policy that requires a land use plan to preserve arterial capacity, protect mobility and public investments, and minimize sprawl for arterial corridor roadway projects that will reduce commuter travel times between an urbanized and a non-urbanized area. www.pactsplan.org

CSS at the Local Level

Local jurisdictions construct a majority of transportation improvement projects, frequently using local design standards. CSS efforts at a local scale are focused on developing these standards as well as on project development, construction, and maintenance. Some examples of how CSS principles have been applied at the local level:
- Design Guidelines — Sacramento, California, updated roadway design standards in response to concerns from residents and business owners regarding inflexible standards. New standards provide minimum and recommended street widths, allow for trade-offs, and include clearer direction on administering standards.
- Corridor Plans — Lake Worth, Florida, improved safety and livability downtown by reducing the number of lanes on two downtown streets from three to two. Width from the third lane was used to install parallel parking, paver-block sidewalks and crosswalks, and intersection bumpouts. During construction, the city regularly apprised business owners of progress and assisted with procurement of economic development grant funds. The improvements stimulated economic development in the downtown and greatly reduced vehicle speed and the number of accidents.

Unique Issues of Urban Arterials

Complementing Urban Land Uses — In urban areas, key aspects of context are often social and economic in nature. 
How will the improvement impact the way people live and work in the vicinity?

Urban Network — Streets in urban areas are part of a network; changes to one street have impacts on adjacent streets. Will planned improvements integrate the surrounding network as part of the solution?

Accommodating Multiple Modes — Solutions need to consider all users. This is especially true in urban areas, where non-auto modes are more prevalent. How will the improvement affect cyclists, pedestrians, and transit riders?

The Institute of Transportation Engineers is completing work on an important document, “Designing Walkable Urban Thoroughfares: A Context Sensitive Approach,” which will contain extensive and specific guidance on how to address the types of questions posed above.
Definition of gorilla

n. - A large, arboreal, anthropoid ape of West Africa. It is larger than a man, and is remarkable for its massive skeleton and powerful muscles, which give it enormous strength. In some respects its anatomy, more than that of any other ape, except the chimpanzee, resembles that of man.

The word "gorilla" uses 7 letters: A G I L L O R.

No direct anagrams for gorilla found in this word list.

Words formed by adding one letter before or after gorilla, or to agillor in any order:
s - gorillas

Shorter words found within gorilla:
ag agio ago ai ail air al algor all ar argil argol aril gal gall gaol gar gill girl giro glair glia glial gloria go goa goal gor goral grail grill ill la lag lair lar largo lari li liar lira lo log logia loral oar oil olla or ora oral rag ragi rail ria rial rig rill roil roll
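The letter-availability check behind word lists like this can be sketched with a multiset comparison: a word is "found within gorilla" if it uses no letter more often than "gorilla" supplies it. The function name `formable` is ours, not the site's.

```python
from collections import Counter

def formable(word, letters="gorilla"):
    """True if `word` can be spelled from the letters of `letters`,
    using each letter no more often than it appears there."""
    need = Counter(word)
    have = Counter(letters)
    return all(have[ch] >= n for ch, n in need.items())

# Spot-checking a few entries from the list above:
print(formable("grill"))     # True: g, r, i, l, l are all available
print(formable("gloria"))    # True
print(formable("gorillas"))  # False: "gorilla" has no 's'
```

Running every dictionary word through `formable` and keeping the hits reproduces the "shorter words found within" list above.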
The story behind the making of Schindler’s List and the development of a foundation dedicated to sharing Holocaust testimonies. In celebration of the 20th anniversary of the foundation and the film, this commemorative work is divided into two sections. After an introduction from Steven Spielberg, the first part recounts the process of making Schindler’s List. The material in this section is wide-ranging and includes an account of the maturation process of Spielberg’s directorial ability, the chance encounter that sparked the book behind the movie, certain cinematographic techniques utilized in the film and the experience of having Holocaust survivors visiting the movie set in Poland. The Schindlerjuden, “Schindler Jews,” interacting with the cast and crew inspired Spielberg to initiate the Shoah Visual History Foundation, which works from the premise that the “last act of genocide is always denial and silence”; the foundation collects and catalogs oral histories of the Holocaust. The second part of the book discusses the process of building, developing and expanding this foundation. Initially, Spielberg aimed for narratives from 50,000 Holocaust survivors and rescue-worker witnesses. However, once that goal was reached and exceeded, the foundation used these testimonies in its production of documentaries. More recently, this foundation, which grew out of a desire to ensure that the Holocaust will never happen again, expanded into collecting accounts from the Armenian, Rwandan and Cambodian genocides. With the connection to USC, these eyewitness stories serve as educational tools that will elicit the strong emotive response necessary for the prevention of future attempts at genocide. This general history of the film and the foundation has a promotional feel, as it also discusses exciting new technological directions for the foundation. An informative coffee table book for film buffs and those interested in Jewish history.
Three Flaws in the Education System and How to Fix Them Curriculum and Instruction | Education | Educational Assessment, Evaluation, and Research | Social and Philosophical Foundations of Education Whenever international assessments of student achievement are conducted, in which tests of subject matter knowledge are given to students around the world, the United States scores poorly. I will identify three mistakes American educators make that undermine student learning, and suggest how these problems can be corrected. ©2005 Claremont Graduate University Drew, D. (2005). Three flaws in the education system and how to fix them. The Claremont Letter. Retrieved from http://www.cgu.edu/include/ClaremontLetteri2v1.pdf.
Tres Marías, Las (läs trās märēˈäs), archipelago in the Pacific Ocean, c.60 mi (100 km) W of Nayarit state, Mexico. Of the four islands, two—María Madre, which is the largest (c.56 sq mi/145 sq km) and is also a federal penal colony, and María Magdalena—produce maguey, salt, and lumber. The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved.
BELL PERFORMANCE FUEL ISSUES SERIES: ETHANOL PROBLEMS FACING CONSUMERS

The blending of ethanol into gasoline across the nation is now a common practice due to recent EPA mandates for 10% ethanol blends. These mandates are aimed at improving air quality and reducing air pollution from fuel emissions, which ethanol blends achieve through the lowering of harmful emissions. But ethanol causes major issues for consumers, who face loss of mileage, storage issues and a tendency for ethanol to attack plastic and fiberglass tanks and parts, especially in marine applications.

POPULARITY OF ALTERNATIVE FUELS AND OXYGENATES

In many states, it's hard to find a gas station that isn't selling at least 10% ethanol in its gasoline; you see the warning stickers on all of the pumps. Most people don't really know why it's put into gasoline; they just know they may have heard bad things about it. Ethanol is classified as an "oxygenate", meaning it increases the oxygen content of the fuel it is blended into. The EPA has historically used government mandates (as allowed by the Clean Air Act) to force the introduction of oxygenates into gasoline as a way to help reduce emissions like carbon monoxide and improve urban air quality. There are really three pieces of legislation that were the biggest influencers in the rise of alternative fuels like ethanol. Actually, there were four. President Carter, some of you may remember, talked during his presidency about reducing US dependence on oil imports, and one of his efforts was spearheading the 1980 Synthetic Fuels Act - one of the initial efforts to get people to think differently about the fuels they use. Unfortunately its momentum was thwarted when the price of oil plummeted in the 1980s, and alternative fuels kind of dropped off the radar - gasoline and diesel were too cheap for people to seriously consider using other things. Three pieces of legislation followed. 
The 1988 Alternative Fuels Act required government agencies to purchase vehicles that run on alternative fuels and provided financial incentives for auto makers to develop more kinds of vehicles to run on these fuels. This was a big step of faith at the time because ethanol and biodiesel really weren't widespread in availability. The 1990 Clean Air Act gave the EPA authority to push for mandates (like requiring use of alternative fuels) in order to make air quality better. The 1992 Energy Policy Act codified a long-term goal that by 2010, non-petroleum alternative fuels would have penetrated 30% of the fuels market. So for changing the mainstream fuel supply, the EPA really first started mandating this practice on a large scale in 1992, when MTBE (which had already been used in the 1970s as an anti-knock agent in gasoline) began being blended into gasoline to help cut harmful emissions. At its peak in 1999, 200,000 barrels (8.4 million gallons) per day of MTBE were being produced, all being added to gasoline at a 10% treat ratio. Unfortunately, scientists began to find evidence that MTBE was linked to ill-health effects, and also found it easily contaminated ground water; these findings led to its widespread withdrawal from the market. This is what allowed ethanol to displace MTBE as the dominant oxygenate of choice to blend into gasoline to satisfy these EPA emissions requirements.

THE GOOD - ETHANOL ADVANTAGES

To be sure, ethanol imparts some advantageous qualities when blended into gasoline. First and foremost are reduced emissions. These may not be so important to the average consumer (unless they are concerned about going green), but this is the advantage the EPA and environmental scientists like. Ethanol blended into gasoline at a 10-85% ratio makes fuel that produces lower levels of carbon monoxide, unburned hydrocarbons, particulate matter (another form of unburned fuel) and harmful aromatic compound emissions (which have been linked to cancer) than pure gasoline. 
All of these together offer positive effects on smog and pollution levels in urban areas that may have traditionally struggled with this problem. These urban areas, even if they aren't concerned about their citizenry, have a financial incentive to care about the problem, because areas out of compliance with Federal air quality standards (hence, the EPA's jurisdiction applies here) can be at risk of losing access to important federal funds for the many things they use federal money to pay for. Oxygenates like ethanol and MTBE already had historical use as octane improvers before the 1990 Clean Air Act. Pure ethanol has an octane rating of 113, while E10 blends have the octane rating listed at the pump, which is usually the same as regular or premium gasoline. Unfortunately for the consumer, that is likely because the fuel blender, despite the ethanol additive's high octane rating, uses a lower-octane base gasoline in order to end up with the same octane rating in the E10 blend as before. So the consumer doesn't really get an added octane benefit in E10, despite the ethanol fraction having a higher octane rating. Ethanol is made in the United States from corn (in Brazil it is made from sugar cane), making it a renewable fuel that reduces (somewhat) our dependence on oil imports. This is a big plus for a lot of people who want to go more "green". No doubt you've heard of the "flex-fuel" vehicles. These are vehicles that have had engine modifications to enable them to run on either gasoline or a high concentration of ethanol like E85. Putting such a high concentration of ethanol in an engine that has not been modified is never a good idea - flex-fuel vehicles have special fuel sensors to properly read the ethanol-fuel mixture and special fuel injection changes to ensure the mixture isn't too rich or lean. Without these modifications, the vehicle won't run right and you can very easily get a damaged engine over time. 
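The octane bookkeeping described above can be sketched with a simple linear volumetric blending model. This is an assumption for illustration: real-world blending octane numbers are somewhat nonlinear, but the linear model shows why the pump rating stays flat despite ethanol's 113.

```python
def base_octane_for_blend(target, ethanol_octane=113, ethanol_frac=0.10):
    # Solve blend = (1 - f) * base + f * ethanol for the base octane,
    # assuming simple linear volumetric blending (an approximation).
    return (target - ethanol_frac * ethanol_octane) / (1 - ethanol_frac)

# Roughly 84-octane base gasoline is enough to hit an 87-octane E10,
# so the blended fuel at the pump shows no octane gain for the consumer.
print(round(base_octane_for_blend(87), 1))  # 84.1
```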
THE BAD - ETHANOL PROBLEMS FOR CONSUMERS

Loss of Mileage

Loss of mileage from use of ethanol blends results from the ethanol molecule containing less energy value than gasoline. The energy value in petroleum fuels is a function of the number of carbon bonds in the molecule. Gasoline molecules are much longer, with more carbon bonds, than the small ethanol molecule, so the blended fuel has less energy potential. Pure ethanol has a gross BTU value 35% less than the equivalent amount of gasoline. However, most cars don't run on pure ethanol - in fact, running on higher than a 15-20% ethanol concentration can cause engine damage unless the engine has been adjusted to account for the differing combustion properties of that concentration. The commonly found E10 blend has only 10% ethanol, so the actual drop in energy value is more along the lines of 3.5%-5.0%. In October 2010, Congress will consider raising the minimum ethanol requirement from 10% to 15%. When this happens, fuel mileage drops will be even larger. 5% may not seem like that much, but consumers have already demonstrated that they are extremely price conscious and do not take any added expense lightly in this economy.

Pure ethanol has a strong ability to absorb water from the atmosphere around it. This is true also of the blends made from pure ethanol and gasoline. Ethanol has such a strong attraction to water that chemical producers cannot even sell 100% pure ethanol - it is always 99.8% or less, because there will always be at least a tiny bit of water. As you may expect, attraction of water is an even bigger problem for marine users of E10-E85 than it is for on-road drivers. When water accumulates in a fuel or storage tank, it sinks to the bottom of the tank because water is heavier than fuel. 
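The mileage arithmetic quoted above checks out under a simple volume-weighted energy model. This is a sketch only: real fuel economy also depends on engine tuning, and the 65% figure is just the complement of the ~35% BTU deficit cited above.

```python
def blend_energy_fraction(ethanol_frac, ethanol_energy=0.65):
    # Ethanol carries roughly 65% of gasoline's BTUs per gallon
    # (the ~35% deficit cited above); treat the blend as a
    # volume-weighted mix of the two energy densities.
    return (1 - ethanol_frac) * 1.0 + ethanol_frac * ethanol_energy

for frac in (0.10, 0.15):
    drop_pct = (1 - blend_energy_fraction(frac)) * 100
    print(f"E{int(frac * 100)}: ~{drop_pct:.1f}% less energy than straight gasoline")
# E10: ~3.5%, E15: ~5.2% -- consistent with the 3.5%-5.0% range above
```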
It then contributes to a whole host of fuel problems, which can be summarized here:

Breeding Ground for Microbes

Microbes like bacteria and fungi need accumulated water in order to grow and thrive in a fuel storage tank. If an infestation takes hold, problems with corrosion, filter plugging and reduction in fuel quality can follow. However, ethanol blends, like gasoline, tend to be used more quickly than stored diesel fuels, so this is less of a problem in actual practice.

Phase Separation

Phase separation means the ethanol 'phase' separates from the gasoline 'phase', resulting in two layers of two different compounds instead of a homogenous mixture of gasoline and ethanol. At this point the ethanol sinks below the gasoline phase and mixes with any accumulated water, giving an ethanol-water phase mixture.

Loss of Octane

When ethanol separates from gasoline, it causes a loss of 2-4 octane points in the fuel mixture; in effect, as it separates, it drags down the octane value of the remaining gasoline. An 87-octane fuel that separates can have its octane rating drop to 83-84, which is unsatisfactory for most vehicles and will cause performance issues.

Potential for Equipment Damage

An ethanol blend that has separated will have the ethanol-water mixture settled at the bottom of the tank, where the fuel line is. The fuel line can suck this mixture up into the combustion chamber, where it will burn like an overly lean mixture (lean = not enough gasoline). Because what burns is no longer mostly gasoline, this kind of fuel carries real potential for valve damage. This becomes an expensive proposition.

Oxidation and Deposit Buildup

Water is one of the impurities that will accelerate oxidation reactions in any petroleum-based fuel, whether gasoline, diesel, biodiesel or ethanol blends. Oxidation reactions are responsible for fuel stratification and the fallout of heavy ends from the fuel mixture.
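To see why separation drags an 87-octane fuel down to roughly 83-84, consider this hypothetical sketch (my own illustration, assuming octane blends linearly by volume, a simplification; the numbers are illustrative, not measurements):

```python
# Hypothetical illustration of the octane drop on phase separation.
# Assumption: blend octane is the volume-weighted average of the
# components (a linear-blending simplification). Once the ethanol
# drops out with the water, the engine is left burning only the
# lower-octane base gasoline that remains on top.
ETHANOL_OCTANE = 113  # per the article

def base_octane_after_separation(pump_octane: float, ethanol_fraction: float) -> float:
    """Octane of the gasoline phase left behind when the ethanol separates out."""
    return (pump_octane - ethanol_fraction * ETHANOL_OCTANE) / (1 - ethanol_fraction)

# 87-octane E10 that phase-separates leaves a gasoline phase of roughly 84 octane:
print(round(base_octane_after_separation(87, 0.10), 1))  # ~84.1
```

That simplified estimate lands right in the 83-84 range the article describes for a separated 87-octane fuel.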
These heavy ends can build up in the bottom of a fuel storage tank, and when they are drawn in as fuel, they do not burn cleanly but leave deposits in all parts of the combustion system - combustion chamber, valves and fuel injectors. The result is raised emissions into the catalytic converter, rough running, poor engine performance and a drop in mileage.

Boat owners in the northeast can readily testify to how ethanol blends up to E85 attack and dissolve rubber and plastic parts, even fiberglass fuel tanks. Ethanol has always been an excellent solvent, and unfortunately this is not a good thing for engines and fuel delivery systems that rely on rubber and plastic parts to function. Repeated exposure over time causes plastic resins to dissolve in the ethanol; they subsequently build up as new deposits on valves, causing the same kinds of performance issues as carbon deposits.

In exchange for becoming more "green", consumers face a trade-off with certain problems that ethanol blends can cause in their vehicles and boats. The EPA's pending increase of ethanol concentration to 15% in all reformulated on-road gasolines will only increase these problems. Consequently, there is a substantial market for additives to treat ethanol blends and blunt these problems. Some of them are better than others. The best ethanol additives will contain combustion improvers to blunt the mileage drop, detergents to clean out deposits and any dissolved resin buildup, an ingredient to disperse and control water buildup, and an ingredient to protect rubber and plastic parts from ethanol's solvency. Beware of products that make outrageous claims and guarantees - if it seems too good to be true (a guaranteed 35% mileage increase?), it very likely is.

FOR MORE INFORMATION

For more information on these and other fuel-related problems and solutions to those issues, visit Bell Performance on the web.
Article Source: Erik Bjornstad - EzineArticles.com Expert Author
Restoring a chemical in the gut sends a message to mouse brains to stop overeating

Eating right can be a big challenge — especially with delicious, high-calorie foods all around. When those calories come from foods high in fat, the gut will reward the diner with a message to the brain that triggers feelings of pleasure. But eating lots of fatty foods can cause this pleasure-signaling pathway to fail. People with this problem tend to respond by eating more to make up for that loss, some research suggests. As a result, they are more likely to become obese. Scientists working with mice now think they've figured out why this gut-brain communication breaks down — and how to fix it. The pleasurable feelings from food are tied to a substance in the brain called dopamine. Too much fat can decrease the dopamine released after eating. In a new study, Ivan de Araujo of Yale University and his coworkers discovered the impaired dopamine signal and what causes it: A fatty diet reduces the body's production of a chemical called oleoylethanolamine (OH lee ohl eth an OHL ah meen). When the researchers injected this chemical into the guts of mice that had been fed lots of fat, the animals once again received pleasure from eating. And they stopped overeating. De Araujo's team reported its findings in the August 16 issue of the journal Science. Although mice and people are vastly different, the new study suggests the strategy to restore the connection is worth investigating for people. Indeed, the potential for it to help people is "huge," Paul Kenny told Science News. He's a neuroscientist at the Scripps Research Institute in Jupiter, Fla., and did not take part in the new study. Neuroscientists study the function of nerve cells in the brain and other parts of the body. Food isn't the only thing that can trigger dopamine's release. Illegal drugs such as cocaine or methamphetamine also can trigger a dopamine "dump" — or release — in the brain.
For the new study, de Araujo’s team performed several experiments on mice. But beforehand, the scientists fed one group of mice a high-fat diet for about three months. Another group of mice got low-fat chow for the same amount of time. All mice were healthy and ate normal diets before the study. But by the end of the three-month period, mice getting high-fat foods were now obese. They had eaten too much. Now the new experiments began. In one, the scientists confirmed that a high-fat diet sabotaged the brain’s dopamine response. They injected fat directly into the stomachs of the mice. Dopamine levels surged after the injections, but only in mice that had been eating a low-fat diet. The now-obese mice didn't get this pleasure response. In a second experiment, the researchers measured gut levels of oleoylethanolamine. Previous studies had suggested the chemical helps send a “stop eating” signal to the brain right after a meal and that mice that eat too much fat don’t make enough of it. As expected, more of the chemical showed up in the guts of mice that had been eating a low-fat diet than in the guts of animals that had pigged out for months on high-fat food. The scientists next wanted to know what would happen if they injected the chemical directly into the guts of a different group of the mice fed a high-fat diet. And their third experiment showed this made a big difference: Dopamine levels in obese mice once again surged after a fat injection in the gut. They ate less and lost weight. The chemical injection seemed to repair the broken gut-brain communication. The scientists don't know if the same method would help people. But they're planning tests to find out. Kenny, from the Scripps Research Institute, told Science News that the added chemical appears to offer real potential to help people. Drugs that modify chemicals in the brain often cause severe side effects, like mood changes. 
But since this chemical is natural and works in the gut — not in the brain — it shouldn't cause those unwanted changes. "Maybe it will address obesity and maybe it won't," Kenny said, "but I think it's a wonderful place to start."

dopamine  A neurotransmitter, this chemical helps transmit signals in the brain.
nervous system  The network of nerve cells and fibers that transmits signals between parts of the body.
neuroscience  Science that deals with the structure or function of the brain and other parts of the nervous system. Researchers who work in this field are called neuroscientists.
neurotransmitter  A chemical substance that is released at the end of a nerve fiber. It transfers an electrical signal to another nerve, to a muscle cell or to some other structure.

C. Gelling. "Gut-brain communication failure may spur overeating." Science News. Aug. 15, 2013.
S. Ornes. "Risk-taking linked to Ritalin." Science News for Kids. Oct. 5, 2012.
CDC - Guideline for Isolation Precautions in Hospitals

Julia S. Garner, RN, MN, and the Hospital Infection Control Practices Advisory Committee

From the Public Health Service, US Department of Health and Human Services, Centers for Disease Control and Prevention, Atlanta, Georgia.

Garner JS, Hospital Infection Control Practices Advisory Committee. Guideline for isolation precautions in hospitals. Infect Control Hosp Epidemiol 1996; 17:53-80, and Am J Infect Control 1996; 24:24-52.

Part II. Recommendations for Isolation Precautions in Hospitals

Hospital Infection Control Practices Advisory Committee

RATIONALE FOR ISOLATION PRECAUTIONS IN HOSPITALS

Transmission of infection within a hospital requires three elements: a source of infecting microorganisms, a susceptible host, and a means of transmission for the microorganism. Human sources of the infecting microorganisms in hospitals may be patients, personnel, or, on occasion, visitors, and may include persons with acute disease, persons in the incubation period of a disease, persons who are colonized by an infectious agent but have no apparent disease, or persons who are chronic carriers of an infectious agent. Other sources of infecting microorganisms can be the patient's own endogenous flora, which may be difficult to control, and inanimate environmental objects that have become contaminated, including equipment and medications. Resistance among persons to pathogenic microorganisms varies greatly. Some persons may be immune to infection or may be able to resist colonization by an infectious agent; others exposed to the same agent may establish a commensal relationship with the infecting microorganism and become asymptomatic carriers; still others may develop clinical disease.
Host factors such as age; underlying diseases; certain treatments with antimicrobials, corticosteroids, or other immunosuppressive agents; irradiation; and breaks in the first line of defense mechanisms caused by such factors as surgical operations, anesthesia, and indwelling catheters may render patients more susceptible to infection. Microorganisms are transmitted in hospitals by several routes, and the same microorganism may be transmitted by more than one route. There are five main routes of transmission: contact, droplet, airborne, common vehicle, and vectorborne. For the purpose of this guideline, common vehicle and vectorborne transmission will be discussed only briefly, because neither plays a significant role in typical nosocomial infections.
- Contact transmission, the most important and frequent mode of transmission of nosocomial infections, is divided into two subgroups: direct-contact transmission and indirect-contact transmission.
- Direct-contact transmission involves a direct body surface-to-body surface contact and physical transfer of microorganisms between a susceptible host and an infected or colonized person, such as occurs when a person turns a patient, gives a patient a bath, or performs other patient-care activities that require direct personal contact. Direct-contact transmission also can occur between two patients, with one serving as the source of the infectious microorganisms and the other as a susceptible host.
- Indirect-contact transmission involves contact of a susceptible host with a contaminated intermediate object, usually inanimate, such as contaminated instruments, needles, or dressings, or contaminated hands that are not washed and gloves that are not changed between patients.
- Droplet transmission, theoretically, is a form of contact transmission. However, the mechanism of transfer of the pathogen to the host is quite distinct from either direct- or indirect-contact transmission.
Therefore, droplet transmission will be considered a separate route of transmission in this guideline. Droplets are generated from the source person primarily during coughing, sneezing, and talking, and during the performance of certain procedures such as suctioning and bronchoscopy. Transmission occurs when droplets containing microorganisms generated from the infected person are propelled a short distance through the air and deposited on the host's conjunctivae, nasal mucosa, or mouth. Because droplets do not remain suspended in the air, special air handling and ventilation are not required to prevent droplet transmission; that is, droplet transmission must not be confused with airborne transmission. - Airborne transmission occurs by dissemination of either airborne droplet nuclei (small-particle residue [5 µm or smaller in size] of evaporated droplets containing microorganisms that remain suspended in the air for long periods of time) or dust particles containing the infectious agent. Microorganisms carried in this manner can be dispersed widely by air currents and may become inhaled by a susceptible host within the same room or over a longer distance from the source patient, depending on environmental factors; therefore, special air handling and ventilation are required to prevent airborne transmission. Microorganisms transmitted by airborne transmission include Mycobacterium tuberculosis and the rubeola and varicella viruses. - Common vehicle transmission applies to microorganisms transmitted by contaminated items such as food, water, medications, devices, and equipment. - Vectorborne transmission occurs when vectors such as mosquitoes, flies, rats, and other vermin transmit microorganisms; this route of transmission is of less significance in hospitals in the United States than in other regions of the world. Isolation precautions are designed to prevent transmission of microorganisms by these routes in hospitals. 
Because agent and host factors are more difficult to control, interruption of transfer of microorganisms is directed primarily at transmission. The recommendations presented in this guideline are based on this concept. Placing a patient on isolation precautions, however, often presents certain disadvantages to the hospital, patients, personnel, and visitors. Isolation precautions may require specialized equipment and environmental modifications that add to the cost of hospitalization. Isolation precautions may make frequent visits by nurses, physicians, and other personnel inconvenient, and they may make it more difficult for personnel to give the prompt and frequent care that sometimes is required. The use of a multi-patient room for one patient uses valuable space that otherwise might accommodate several patients. Moreover, forced solitude deprives the patient of normal social relationships and may be psychologically harmful, especially to children. These disadvantages, however, must be weighed against the hospital's mission to prevent the spread of serious and epidemiologically important microorganisms in the hospital.

FUNDAMENTALS OF ISOLATION PRECAUTIONS

A variety of infection control measures are used for decreasing the risk of transmission of microorganisms in hospitals. These measures make up the fundamentals of isolation precautions.

Handwashing and Gloving

Handwashing frequently is called the single most important measure to reduce the risks of transmitting organisms from one person to another or from one site to another on the same patient. The scientific rationale, indications, methods, and products for handwashing have been delineated in other publications.(64-72) Washing hands as promptly and thoroughly as possible between patient contacts and after contact with blood, body fluids, secretions, excretions, and equipment or articles contaminated by them is an important component of infection control and isolation precautions.
In addition to handwashing, gloves play an important role in reducing the risks of transmission of microorganisms. Gloves are worn for three important reasons in hospitals. First, gloves are worn to provide a protective barrier and to prevent gross contamination of the hands when touching blood, body fluids, secretions, excretions, mucous membranes, and nonintact skin (27-29); the wearing of gloves in specified circumstances to reduce the risk of exposures to bloodborne pathogens is mandated by the OSHA bloodborne pathogens final rule.(51) Second, gloves are worn to reduce the likelihood that microorganisms present on the hands of personnel will be transmitted to patients during invasive or other patient-care procedures that involve touching a patient's mucous membranes and nonintact skin. Third, gloves are worn to reduce the likelihood that hands of personnel contaminated with microorganisms from a patient or a fomite can transmit these microorganisms to another patient. In this situation, gloves must be changed between patient contacts and hands washed after gloves are removed. Wearing gloves does not replace the need for handwashing, because gloves may have small, inapparent defects or may be torn during use, and hands can become contaminated during removal of gloves.(14,15,39,72-76) Failure to change gloves between patient contacts is an infection control hazard.(32) Appropriate patient placement is a significant component of isolation precautions. A private room is important to prevent direct- or indirect-contact transmission when the source patient has poor hygienic habits, contaminates the environment, or cannot be expected to assist in maintaining infection control precautions to limit transmission of microorganisms (i.e., infants, children, and patients with altered mental status). 
When possible, a patient with highly transmissible or epidemiologically important microorganisms is placed in a private room with handwashing and toilet facilities, to reduce opportunities for transmission of microorganisms. When a private room is not available, an infected patient is placed with an appropriate roommate. Patients infected by the same microorganism usually can share a room, provided they are not infected with other potentially transmissible microorganisms and the likelihood of reinfection with the same organism is minimal. Such sharing of rooms, also referred to as cohorting patients, is useful especially during outbreaks or when there is a shortage of private rooms. When a private room is not available and cohorting is not achievable or recommended,(23) it is very important to consider the epidemiology and mode of transmission of the infecting pathogen and the patient population being served in determining patient placement. Under these circumstances, consultation with infection control professionals is advised before patient placement. Moreover, when an infected patient shares a room with a noninfected patient, it also is important that patients, personnel, and visitors take precautions to prevent the spread of infection and that roommates are selected carefully. Guidelines for construction, equipment, air handling, and ventilation for isolation rooms have been delineated in other publications.(77-79) A private room with appropriate air handling and ventilation is particularly important for reducing the risk of transmission of microorganisms from a source patient to susceptible patients and other persons in hospitals when the microorganism is spread by airborne transmission. Some hospitals use an isolation room with an anteroom as an extra measure of precaution to prevent airborne transmission. Adequate data regarding the need for an anteroom, however, is not available. 
Ventilation recommendations for isolation rooms housing patients with pulmonary tuberculosis have been delineated in other CDC guidelines.(23)

Transport of Infected Patients

Limiting the movement and transport of patients infected with virulent or epidemiologically important microorganisms and ensuring that such patients leave their rooms only for essential purposes reduces opportunities for transmission of microorganisms in hospitals. When patient transport is necessary, it is important that 1) appropriate barriers (e.g., masks, impervious dressings) are worn or used by the patient to reduce the opportunity for transmission of pertinent microorganisms to other patients, personnel, and visitors and to reduce contamination of the environment; 2) personnel in the area to which the patient is to be taken are notified of the impending arrival of the patient and of the precautions to be used to reduce the risk of transmission of infectious microorganisms; and 3) patients are informed of ways by which they can assist in preventing the transmission of their infectious microorganisms to others.

Masks, Respiratory Protection, Eye Protection, Face Shields

Various types of masks, goggles, and face shields are worn alone or in combination to provide barrier protection. A mask that covers both the nose and the mouth, and goggles or a face shield are worn by hospital personnel during procedures and patient-care activities that are likely to generate splashes or sprays of blood, body fluids, secretions, or excretions to provide protection of the mucous membranes of the eyes, nose, and mouth from contact transmission of pathogens.
The wearing of masks, eye protection, and face shields in specified circumstances to reduce the risk of exposures to bloodborne pathogens is mandated by the OSHA bloodborne pathogens final rule.(51) A surgical mask generally is worn by hospital personnel to provide protection against spread of infectious large-particle droplets that are transmitted by close contact and generally travel only short distances (up to 3 ft) from infected patients who are coughing or sneezing. An area of major concern and controversy over the last several years has been the role and selection of respiratory protection equipment and the implications of a respiratory protection program for prevention of transmission of tuberculosis in hospitals. Traditionally, although the efficacy was not proven, a surgical mask was worn for isolation precautions in hospitals when patients were known or suspected to be infected with pathogens spread by the airborne route of transmission. In 1990, however, the CDC tuberculosis guidelines (18) stated that surgical masks may not be effective in preventing the inhalation of droplet nuclei and recommended the use of disposable particulate respirators, despite the fact that the efficacy of particulate respirators in protecting persons from the inhalation of M tuberculosis had not been demonstrated. By definition, particulate respirators included dust-mist (DM), dust-fume-mist (DFM), or high-efficiency particulate air (HEPA) filter respirators certified by the CDC National Institute for Occupational Safety and Health (NIOSH); because the generic term "particulate respirator" was used in the 1990 guidelines, the implication was that any of these respirators provided sufficient protection.(80) In 1993, a draft revision of the CDC tuberculosis guidelines (22) outlined performance criteria for respirators and stated that some DM or DFM respirators might not meet these criteria. 
After review of public comments, the guidelines were finalized in October 1994,(23) with the draft respirator criteria unchanged. At that time, the only class of respirators that were known to consistently meet or exceed the performance criteria outlined in the 1994 tuberculosis guidelines and that were certified by NIOSH (as required by OSHA) were HEPA filter respirators. Subsequently, NIOSH revised the testing and certification requirements for all types of air-purifying respirators, including those used for tuberculosis control.(81) The new rule, effective in July 1995, provides a broader range of certified respirators that meet the performance criteria recommended by CDC in the 1994 tuberculosis guidelines. NIOSH has indicated that the N95 (N category at 95% efficiency) meets the CDC performance criteria for a tuberculosis respirator. The new respirators are likely to be available in late 1995. Additional information on the evolution of respirator recommendations, regulations to protect hospital personnel, and the role of various federal agencies in respiratory protection for hospital personnel has been published.(80)

Gowns and Protective Apparel

Various types of gowns and protective apparel are worn to provide barrier protection and to reduce opportunities for transmission of microorganisms in hospitals. Gowns are worn to prevent contamination of clothing and to protect the skin of personnel from blood and body fluid exposures. Gowns especially treated to make them impermeable to liquids, leg coverings, boots, or shoe covers provide greater protection to the skin when splashes or large quantities of infective material are present or anticipated.
The wearing of gowns and protective apparel under specified circumstances to reduce the risk of exposures to bloodborne pathogens is mandated by the OSHA bloodborne pathogens final rule.(51) Gowns are also worn by personnel during the care of patients infected with epidemiologically important microorganisms to reduce the opportunity for transmission of pathogens from patients or items in their environment to other patients or environments; when gowns are worn for this purpose, they are removed before leaving the patient's environment and hands are washed. Adequate data regarding the efficacy of gowns for this purpose, however, is not available.

Patient-Care Equipment and Articles

Many factors determine whether special handling and disposal of used patient-care equipment and articles are prudent or required, including the likelihood of contamination with infective material; the ability to cut, stick, or otherwise cause injury (needles, scalpels, and other sharp instruments [sharps]); the severity of the associated disease; and the environmental stability of the pathogens involved.(27,51,82-84) Some used articles are enclosed in containers or bags to prevent inadvertent exposures to patients, personnel, and visitors and to prevent contamination of the environment. Used sharps are placed in puncture-resistant containers; other articles are placed in a bag. One bag is adequate if the bag is sturdy and the article can be placed in the bag without contaminating the outside of the bag (85); otherwise, two bags are used.
The scientific rationale, indications, methods, products, and equipment for reprocessing patient-care equipment have been delineated in other publications.(68,84,86-91) Contaminated, reusable critical medical devices or patient-care equipment (i.e., equipment that enters normally sterile tissue or through which blood flows) or semicritical medical devices or patient-care equipment (i.e., equipment that touches mucous membranes) are sterilized or disinfected (reprocessed) after use to reduce the risk of transmission of microorganisms to other patients; the type of reprocessing is determined by the article and its intended use, the manufacturer's recommendations, hospital policy, and any applicable guidelines and regulations. Noncritical equipment (i.e., equipment that touches intact skin) contaminated with blood, body fluids, secretions, or excretions is cleaned and disinfected after use, according to hospital policy. Contaminated disposable (single-use) patient-care equipment is handled and transported in a manner that reduces the risk of transmission of microorganisms and decreases environmental contamination in the hospital; the equipment is disposed of according to hospital policy and applicable regulations.

Linen and Laundry

Although soiled linen may be contaminated with pathogenic microorganisms, the risk of disease transmission is negligible if it is handled, transported, and laundered in a manner that avoids transfer of microorganisms to patients, personnel, and environments. Rather than rigid rules and regulations, hygienic and common sense storage and processing of clean and soiled linen are recommended.(27,83,92,93) The methods for handling, transporting, and laundering of soiled linen are determined by hospital policy and any applicable regulations.

Dishes, Glasses, Cups, and Eating Utensils

No special precautions are needed for dishes, glasses, cups, or eating utensils.
Either disposable or reusable dishes and utensils can be used for patients on isolation precautions. The combination of hot water and detergents used in hospital dishwashers is sufficient to decontaminate dishes, glasses, cups, and eating utensils.

Routine and Terminal Cleaning

The room, or cubicle, and bedside equipment of patients on Transmission-Based Precautions are cleaned using the same procedures used for patients on Standard Precautions, unless the infecting microorganism(s) and the amount of environmental contamination indicate special cleaning. In addition to thorough cleaning, adequate disinfection of bedside equipment and environmental surfaces (e.g., bedrails, bedside tables, carts, commodes, doorknobs, faucet handles) is indicated for certain pathogens, especially enterococci, which can survive in the inanimate environment for prolonged periods of time.(94) Patients admitted to hospital rooms that previously were occupied by patients infected or colonized with such pathogens are at increased risk of infection from contaminated environmental surfaces and bedside equipment if they have not been cleaned and disinfected adequately. The methods, thoroughness, and frequency of cleaning and the products used are determined by hospital policy.

HICPAC ISOLATION PRECAUTIONS

There are two tiers of HICPAC isolation precautions. In the first, and most important, tier are those precautions designed for the care of all patients in hospitals, regardless of their diagnosis or presumed infection status. Implementation of these "Standard Precautions" is the primary strategy for successful nosocomial infection control. In the second tier are precautions designed only for the care of specified patients. These additional "Transmission-Based Precautions" are for patients known or suspected to be infected by epidemiologically important pathogens spread by airborne or droplet transmission or by contact with dry skin or contaminated surfaces.
Standard Precautions synthesize the major features of UP (Blood and Body Fluid Precautions) (27,28) (designed to reduce the risk of transmission of bloodborne pathogens) and BSI (29,30) (designed to reduce the risk of transmission of pathogens from moist body substances) and apply them to all patients receiving care in hospitals, regardless of their diagnosis or presumed infection status. Standard Precautions apply to 1) blood; 2) all body fluids, secretions, and excretions except sweat, regardless of whether or not they contain visible blood; 3) nonintact skin; and 4) mucous membranes. Standard Precautions are designed to reduce the risk of transmission of microorganisms from both recognized and unrecognized sources of infection in hospitals. Transmission-Based Precautions are designed for patients documented or suspected to be infected with highly transmissible or epidemiologically important pathogens for which additional precautions beyond Standard Precautions are needed to interrupt transmission in hospitals. There are three types of Transmission-Based Precautions: Airborne Precautions, Droplet Precautions, and Contact Precautions. They may be combined for diseases that have multiple routes of transmission. When used either singularly or in combination, they are to be used in addition to Standard Precautions. Airborne Precautions are designed to reduce the risk of airborne transmission of infectious agents. Airborne transmission occurs by dissemination of either airborne droplet nuclei (small-particle residue [5 µm or smaller in size] of evaporated droplets that may remain suspended in the air for long periods of time) or dust particles containing the infectious agent.
Microorganisms carried in this manner can be dispersed widely by air currents and may be inhaled by or deposited on a susceptible host within the same room or over a longer distance from the source patient, depending on environmental factors; therefore, special air handling and ventilation are required to prevent airborne transmission. Airborne Precautions apply to patients known or suspected to be infected with epidemiologically important pathogens that can be transmitted by the airborne route. Droplet Precautions are designed to reduce the risk of droplet transmission of infectious agents. Droplet transmission involves contact of the conjunctivae or the mucous membranes of the nose or mouth of a susceptible person with large-particle droplets (larger than 5 µm in size) containing microorganisms generated from a person who has a clinical disease or who is a carrier of the microorganism. Droplets are generated from the source person primarily during coughing, sneezing, or talking and during the performance of certain procedures such as suctioning and bronchoscopy. Transmission via large-particle droplets requires close contact between source and recipient persons, because droplets do not remain suspended in the air and generally travel only short distances, usually 3 ft or less, through the air. Because droplets do not remain suspended in the air, special air handling and ventilation are not required to prevent droplet transmission. Droplet Precautions apply to any patient known or suspected to be infected with epidemiologically important pathogens that can be transmitted by infectious droplets. Contact Precautions are designed to reduce the risk of transmission of epidemiologically important microorganisms by direct or indirect contact.
Direct-contact transmission involves skin-to-skin contact and physical transfer of microorganisms to a susceptible host from an infected or colonized person, such as occurs when personnel turn patients, bathe patients, or perform other patient-care activities that require physical contact. Direct-contact transmission also can occur between two patients (e.g., by hand contact), with one serving as the source of infectious microorganisms and the other as a susceptible host. Indirect-contact transmission involves contact of a susceptible host with a contaminated intermediate object, usually inanimate, in the patient's environment. Contact Precautions apply to specified patients known or suspected to be infected or colonized (presence of microorganism in or on patient but without clinical signs and symptoms of infection) with epidemiologically important microorganisms that can be transmitted by direct or indirect contact. A synopsis of the types of precautions and the patients requiring the precautions is listed in Table 1.

EMPIRIC USE OF AIRBORNE, DROPLET, OR CONTACT PRECAUTIONS

In many instances, the risk of nosocomial transmission of infection may be highest before a definitive diagnosis can be made and before precautions based on that diagnosis can be implemented. The routine use of Standard Precautions for all patients should reduce greatly this risk for conditions other than those requiring Airborne, Droplet, or Contact Precautions. While it is not possible to prospectively identify all patients needing these enhanced precautions, certain clinical syndromes and conditions carry a sufficiently high risk to warrant the empiric addition of enhanced precautions while a more definitive diagnosis is pursued. A listing of such conditions and the recommended precautions beyond Standard Precautions is presented in Table 2.
The organisms listed under the column "Potential Pathogens" are not intended to represent the complete or even most likely diagnoses, but rather possible etiologic agents that require additional precautions beyond Standard Precautions until they can be ruled out. Infection control professionals are encouraged to modify or adapt this table according to local conditions. To ensure that appropriate empiric precautions are always implemented, hospitals must have systems in place to evaluate patients routinely according to these criteria as part of their preadmission and admission care. Immunocompromised patients vary in their susceptibility to nosocomial infections, depending on the severity and duration of immunosuppression. They generally are at increased risk for bacterial, fungal, parasitic, and viral infections from both endogenous and exogenous sources. The use of Standard Precautions for all patients and Transmission-Based Precautions for specified patients, as recommended in this guideline, should reduce the acquisition by these patients of institutionally acquired bacteria from other patients and environments. It is beyond the scope of this guideline to address the various measures that may be used for immunocompromised patients to delay or prevent acquisition of potential pathogens during temporary periods of neutropenia. Rather, the primary objective of this guideline is to prevent transmission of pathogens from infected or colonized patients in hospitals. Users of this guideline, however, are referred to the "Guideline for Prevention of Nosocomial Pneumonia" (95,96) for the HICPAC recommendations for prevention of nosocomial aspergillosis and Legionnaires' disease in immunocompromised patients.

The recommendations presented below are categorized as follows:

Category IA. Strongly recommended for all hospitals and strongly supported by well-designed experimental or epidemiologic studies.

Category IB.
Strongly recommended for all hospitals and reviewed as effective by experts in the field and a consensus of HICPAC based on strong rationale and suggestive evidence, even though definitive scientific studies have not been done.

Category II. Suggested for implementation in many hospitals. Recommendations may be supported by suggestive clinical or epidemiologic studies, a strong theoretical rationale, or definitive studies applicable to some, but not all, hospitals.

No recommendation; unresolved issue. Practices for which insufficient evidence or consensus regarding efficacy exists.

The recommendations are limited to the topic of isolation precautions. Therefore, they must be supplemented by hospital policies and procedures for other aspects of infection and environmental control, occupational health, administrative and legal issues, and other issues beyond the scope of this guideline.

I. Administrative Controls

Develop a system to ensure that hospital patients, personnel, and visitors are educated about use of precautions and their responsibility for adherence to them. Category IB

Adherence to Precautions

Periodically evaluate adherence to precautions, and use findings to direct improvements. Category IB

II. Standard Precautions

Use Standard Precautions, or the equivalent, for the care of all patients. Category IB

- Wash hands after touching blood, body fluids, secretions, excretions, and contaminated items, whether or not gloves are worn. Wash hands immediately after gloves are removed, between patient contacts, and when otherwise indicated to avoid transfer of microorganisms to other patients or environments. It may be necessary to wash hands between tasks and procedures on the same patient to prevent cross-contamination of different body sites. Category IB

- Use a plain (nonantimicrobial) soap for routine handwashing.
Category IB

- Use an antimicrobial agent or a waterless antiseptic agent for specific circumstances (e.g., control of outbreaks or hyperendemic infections), as defined by the infection control program. Category IB (See Contact Precautions for additional recommendations on using antimicrobial and antiseptic agents.)

Wear gloves (clean, nonsterile gloves are adequate) when touching blood, body fluids, secretions, excretions, and contaminated items. Put on clean gloves just before touching mucous membranes and nonintact skin. Change gloves between tasks and procedures on the same patient after contact with material that may contain a high concentration of microorganisms. Remove gloves promptly after use, before touching noncontaminated items and environmental surfaces, and before going to another patient, and wash hands immediately to avoid transfer of microorganisms to other patients or environments. Category IB

Mask, Eye Protection, Face Shield

Wear a mask and eye protection or a face shield to protect mucous membranes of the eyes, nose, and mouth during procedures and patient-care activities that are likely to generate splashes or sprays of blood, body fluids, secretions, and excretions. Category IB

Wear a gown (a clean, nonsterile gown is adequate) to protect skin and to prevent soiling of clothing during procedures and patient-care activities that are likely to generate splashes or sprays of blood, body fluids, secretions, or excretions. Select a gown that is appropriate for the activity and amount of fluid likely to be encountered. Remove a soiled gown as promptly as possible, and wash hands to avoid transfer of microorganisms to other patients or environments. Category IB

Handle used patient-care equipment soiled with blood, body fluids, secretions, and excretions in a manner that prevents skin and mucous membrane exposures, contamination of clothing, and transfer of microorganisms to other patients and environments.
Ensure that reusable equipment is not used for the care of another patient until it has been cleaned and reprocessed appropriately. Ensure that single-use items are discarded properly. Category IB

Ensure that the hospital has adequate procedures for the routine care, cleaning, and disinfection of environmental surfaces, beds, bedrails, bedside equipment, and other frequently touched surfaces, and ensure that these procedures are being followed. Category IB

Handle, transport, and process used linen soiled with blood, body fluids, secretions, and excretions in a manner that prevents skin and mucous membrane exposures and contamination of clothing, and that avoids transfer of microorganisms to other patients and environments. Category IB

Occupational Health and Bloodborne Pathogens

- Take care to prevent injuries when using needles, scalpels, and other sharp instruments or devices; when handling sharp instruments after procedures; when cleaning used instruments; and when disposing of used needles. Never recap used needles, or otherwise manipulate them using both hands, or use any other technique that involves directing the point of a needle toward any part of the body; rather, use either a one-handed "scoop" technique or a mechanical device designed for holding the needle sheath. Do not remove used needles from disposable syringes by hand, and do not bend, break, or otherwise manipulate used needles by hand. Place used disposable syringes and needles, scalpel blades, and other sharp items in appropriate puncture-resistant containers, which are located as close as practical to the area in which the items were used, and place reusable syringes and needles in a puncture-resistant container for transport to the reprocessing area. Category IB

- Use mouthpieces, resuscitation bags, or other ventilation devices as an alternative to mouth-to-mouth resuscitation methods in areas where the need for resuscitation is predictable.
Category IB

Place a patient who contaminates the environment or who does not (or cannot be expected to) assist in maintaining appropriate hygiene or environmental control in a private room. If a private room is not available, consult with infection control professionals regarding patient placement or other alternatives. Category IB

III. Airborne Precautions

In addition to Standard Precautions, use Airborne Precautions, or the equivalent, for patients known or suspected to be infected with microorganisms transmitted by airborne droplet nuclei (small-particle residue [5 µm or smaller in size] of evaporated droplets containing microorganisms that remain suspended in the air and that can be dispersed widely by air currents within a room or over a long distance). Category IB

Place the patient in a private room that has 1) monitored negative air pressure in relation to the surrounding areas, 2) 6 to 12 air changes per hour, and 3) appropriate discharge of air outdoors or monitored high-efficiency filtration of room air before the air is circulated to other areas in the hospital.(23) Keep the room door closed and the patient in the room. When a private room is not available, place the patient in a room with a patient who has active infection with the same microorganism, unless otherwise recommended,(23) but with no other infection. When a private room is not available and cohorting is not desirable, consultation with infection control professionals is advised before patient placement. Category IB

Wear respiratory protection (N95 respirator) when entering the room of a patient with known or suspected infectious pulmonary tuberculosis.(23,81) Susceptible persons should not enter the room of patients known or suspected to have measles (rubeola) or varicella (chickenpox) if other immune caregivers are available.
If susceptible persons must enter the room of a patient known or suspected to have measles (rubeola) or varicella, they should wear respiratory protection (N95 respirator).(81) Persons immune to measles (rubeola) or varicella need not wear respiratory protection. Category IB

Limit the movement and transport of the patient from the room to essential purposes only. If transport or movement is necessary, minimize patient dispersal of droplet nuclei by placing a surgical mask on the patient, if possible. Category IB

Additional Precautions for Preventing Transmission of Tuberculosis

Consult CDC "Guidelines for Preventing the Transmission of Tuberculosis in Health-Care Facilities"(23) for additional prevention strategies.

IV. Droplet Precautions

In addition to Standard Precautions, use Droplet Precautions, or the equivalent, for a patient known or suspected to be infected with microorganisms transmitted by droplets (large-particle droplets [larger than 5 µm in size] that can be generated by the patient during coughing, sneezing, talking, or the performance of procedures). Category IB

Place the patient in a private room. When a private room is not available, place the patient in a room with a patient(s) who has active infection with the same microorganism but with no other infection (cohorting). When a private room is not available and cohorting is not achievable, maintain spatial separation of at least 3 ft between the infected patient and other patients and visitors. Special air handling and ventilation are not necessary, and the door may remain open. Category IB

In addition to wearing a mask as outlined under Standard Precautions, wear a mask when working within 3 ft of the patient. (Logistically, some hospitals may want to implement the wearing of a mask to enter the room.) Category IB

Limit the movement and transport of the patient from the room to essential purposes only.
If transport or movement is necessary, minimize patient dispersal of droplets by masking the patient, if possible. Category IB

V. Contact Precautions

In addition to Standard Precautions, use Contact Precautions, or the equivalent, for specified patients known or suspected to be infected or colonized with epidemiologically important microorganisms that can be transmitted by direct contact with the patient (hand or skin-to-skin contact that occurs when performing patient-care activities that require touching the patient's dry skin) or indirect contact (touching) with environmental surfaces or patient-care items in the patient's environment. Category IB

Place the patient in a private room. When a private room is not available, place the patient in a room with a patient(s) who has active infection with the same microorganism but with no other infection (cohorting). When a private room is not available and cohorting is not achievable, consider the epidemiology of the microorganism and the patient population when determining patient placement. Consultation with infection control professionals is advised before patient placement. Category IB

Gloves and Handwashing

In addition to wearing gloves as outlined under Standard Precautions, wear gloves (clean, nonsterile gloves are adequate) when entering the room. During the course of providing care for a patient, change gloves after having contact with infective material that may contain high concentrations of microorganisms (fecal material and wound drainage). Remove gloves before leaving the patient's room and wash hands immediately with an antimicrobial agent or a waterless antiseptic agent.(72,94) After glove removal and handwashing, ensure that hands do not touch potentially contaminated environmental surfaces or items in the patient's room to avoid transfer of microorganisms to other patients or environments.
Category IB

In addition to wearing a gown as outlined under Standard Precautions, wear a gown (a clean, nonsterile gown is adequate) when entering the room if you anticipate that your clothing will have substantial contact with the patient, environmental surfaces, or items in the patient's room, or if the patient is incontinent or has diarrhea, an ileostomy, a colostomy, or wound drainage not contained by a dressing. Remove the gown before leaving the patient's environment. After gown removal, ensure that clothing does not contact potentially contaminated environmental surfaces to avoid transfer of microorganisms to other patients or environments. Category IB

Limit the movement and transport of the patient from the room to essential purposes only. If the patient is transported out of the room, ensure that precautions are maintained to minimize the risk of transmission of microorganisms to other patients and contamination of environmental surfaces or equipment. Category IB

When possible, dedicate the use of noncritical patient-care equipment to a single patient (or cohort of patients infected or colonized with the pathogen requiring precautions) to avoid sharing between patients. If use of common equipment or items is unavoidable, then adequately clean and disinfect them before use for another patient. Category IB

Additional Precautions for Preventing the Spread of Vancomycin Resistance

Consult the HICPAC report on preventing the spread of vancomycin resistance for additional prevention strategies.(94)
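Taken together, the Transmission-Based Precautions above amount to a small decision table keyed on route of transmission, used in addition to Standard Precautions and combinable when a disease has multiple routes. The sketch below encodes that table in Python; the field names and groupings are an illustrative paraphrase of the recommendations, not an official CDC data model.

```python
# Sketch: the Transmission-Based Precautions described above as a lookup.
# The keys and summary strings paraphrase the guideline for illustration;
# they are not an official encoding of the CDC recommendations.

STANDARD = {"handwashing", "gloves for body fluids", "mask/eye protection for splashes"}

TRANSMISSION_BASED = {
    "airborne": {
        "private room": "negative pressure, 6-12 air changes/hour, door closed",
        "respiratory protection": "N95 respirator for known/suspected infectious TB",
        "transport": "surgical mask on the patient if movement is essential",
    },
    "droplet": {
        "private room": "or cohorting; 3 ft spatial separation if neither is possible",
        "mask": "when working within 3 ft of the patient",
        "transport": "mask the patient if movement is essential",
    },
    "contact": {
        "private room": "or cohorting; consult infection control otherwise",
        "gloves": "on entering the room; change after contact with infective material",
        "gown": "on entry if substantial contact with patient or surfaces is anticipated",
        "equipment": "dedicate noncritical items to a single patient where possible",
    },
}

def precautions_for(route):
    """Return Standard Precautions plus any route-specific additions.

    Transmission-Based Precautions are always used *in addition to*
    Standard Precautions, and routes may be combined for diseases with
    multiple modes of transmission.
    """
    extra = {}
    for r in route if isinstance(route, (list, tuple)) else [route]:
        extra.update(TRANSMISSION_BASED[r])
    return {"standard": sorted(STANDARD), "additional": extra}
```

For example, `precautions_for(["droplet", "contact"])` merges both sets of additions, mirroring the guideline's point that precaution types may be combined.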
Some States do not have an official list or directory of licensed wildlife rehabilitators or wildlife centers and organizations; therefore, these links will point to one of the larger online directories. Many baby birds are found by people and taken in to be cared for. People believe the baby bird has been rejected by its parents, is lost, or cannot get back into the nest. Ninety-nine percent of the time, this is not the case. The fatality rate of baby birds taken in by kind-hearted individuals is unfortunately very high. Many people ask if a baby bird will be rejected after a person handles it and the bird parents smell the human. This is just an old wives' tale. Baby birds are NOT rejected by their parents if a person handles them. In fact, most birds have a very poor sense of smell or are incapable of smelling at all.

DO NOT try to raise an infant opossum if you do not know what you are doing! It is also illegal in most States! Most opossum babies end up orphaned because their mother was hit by a car (their only real defense is to play dead...) or killed by dogs. So PLEASE, if you care and you happen to hit an opossum with your car - accidents happen - take a minute and make sure that there are no babies on the animal, because they often survive inside momma's pouch. After all, opossums are America's only marsupials. If you find orphaned babies, please do not try to feed them. Keep them warm and get them to a licensed wildlife rehabilitator as quickly as possible. Fed incorrectly, these little ones can aspirate [inhale formula into their lungs] and die. Even as they get a little older, we still must be careful to match their mother's milk and their diet as it would be in the wild. Their systems are delicate at this age, and they do not have the ability to digest many of our foods. If you do find what you believe to be an orphaned kit, please don't just snatch it up. We first need to make sure that it is indeed an orphan.
Many babies play while mother is sleeping in a tree. Mother is nocturnal, but babies are not.

Found an orphaned animal? Find out how to determine if it needs your help or not! Wild animals of all shapes and sizes are born during the spring and summer months. In your own backyard, you may come across baby birds, rabbits, squirrels, opossums, and other young wildlife as they make their way into the world. For many people, the pleasure of seeing these young creatures is mixed with a sense of protectiveness - of wanting to help them survive. But spotting a baby animal by himself doesn't necessarily mean he's an orphan. Many wildlife parents leave their young alone during the day, sometimes for long periods. The parent is usually nearby and quite conscious of her young. Also, keep in mind that despite their small size, many young animals are actually independent enough to fend for themselves. How can you tell if an animal needs your help or should be left alone? Here are some general signs to look for:

- A wild animal presented to you by a cat or dog
- An apparent or obvious broken limb
- A featherless or nearly featherless bird (nestling) on the ground
- Evidence of a dead parent nearby

If a wild animal exhibits any of the above signs, you should immediately call one of the following local resources for assistance. You will find listings for most of these in your telephone directory.
Visualizing proteins & their evolution.

We present a tutorial for Cn3D, a molecular visualization program that allows students to see the tertiary structure of a protein and compare it with the primary structure of the same protein (Sayers et al., 2009). Students can also use the program to visualize two major evolutionary mechanisms: duplication and divergence, and exon shuffling.

Key Words: Cn3D; molecular visualization; protein simulation; evolution.

Pohlman, Robert F. The American Biology Teacher (National Association of Biology Teachers), Vol. 72, No. 6, August 2010. ISSN 0002-7685.

First, install Cn3D on your computer. You can download it for free from the NCBI Web site (http://www.ncbi.nlm.nih.gov). Click on "Domains & Structures" for a page with a link for downloading Cn3D. Follow the installation directions.

I. Getting Acquainted with Cn3D

1. Go to http://www.ncbi.nlm.nih.gov/structure/MMDB/mmdb.shtml (case sensitive).

2. On the top line, type "1D5R" in the Structure search box. This is Pten tumor suppressor protein, chosen because it contains alpha-helices, beta-pleated sheets, and coils. Click on "Go." Click on "1D5R."

3. Click on "Structure View in Cn3D" below the picture of the protein.

4. Either Cn3D will open directly or a file will download. If Cn3D has not opened, double click the downloaded file to open Cn3D.

5. The protein is visible in two windows:

a. The square window, called the structure window, shows the tertiary (three-dimensional) structure.

b.
The rectangular window below, called the sequence/alignment viewer, shows the primary structure (order of amino acids). Single-letter abbreviations are used for amino acids.

6. The amino acids are the same colors in both windows, correlating primary, secondary, and tertiary structures.

7. In the default settings of Cn3D:

a. Alpha-helices are green cylinders; arrows point from the free amino end to the free carboxyl end.

b. Beta-pleated sheets (strands) are orange arrows pointing from the free amino end to the free carboxyl end.

c. Random coils are blue.

8. To rotate the molecule, place the cursor in the structure window and move it.

9. View/Restore or View/Reset restores the original rendering.

10. The sequence/alignment viewer shows that this protein contains one amino acid chain. Select some amino acids with your cursor. Looking carefully, observe that they turn yellow in both windows. You can highlight any amino acids and see where they occur in the three-dimensional structure of the protein. Hold down the shift key to highlight more than one part of the protein at the same time. Highlight several alpha-helices (green). The cylinders remain green, but the protein backbone turns yellow.

11. In the sequence/alignment viewer, the free amino end is on the left and the free carboxyl end is on the right. The gray amino acids at each end are not in the final protein.

12. Highlight some amino acids at the free amino end of the protein. In the structure window, the free amino end of the protein turns yellow. Repeat with the free carboxyl end.

13. Place the cursor on the last amino acid at the free carboxyl end. The lower left-hand corner of the sequence/alignment viewer indicates that there are 324 amino acids in this protein. You can find the number of any amino acid by placing the cursor on it.

14. Highlight several parts of the beta-pleated sheets. They turn yellow in the structure window; the arrows remain orange.

15.
In the View menu, click on "Zoom In" and then "Zoom Out." View/Restore or View/Reset restores the original setting.

16. Rotate your molecule by clicking on "Spin" in the View/Animation menu. Click on "Stop" to stop.

17. In the Style/Edit Global Style menu, uncheck "alpha-helix"; the cylinders disappear. Uncheck "Strands"; the arrows disappear. To color different parts of the molecule, set the toggle at "User Color" for the part of the molecule you want to color; manipulate the color wheel. Click on "Solvents" to see the water molecules surrounding the protein. Click on "Protein Side Chains" to see the amino acid side chains.

18. In the Style/Rendering shortcuts menu, try different rendering styles. Return to "Worms," the original rendering style.

19. In the Style/Coloring shortcuts menu, try different coloring shortcuts.

20. To see other molecules, return to the NCBI Structure home page by clicking on "Structure" in the top menu bar. Type the PDB/MMDB code into the box at the top of the page and click on "Go." To get a PDB/MMDB code, search for the protein by name from the NCBI Structure home page. Select the protein you are interested in from the many responses.

II. Human Hemoglobin

1. On the NCBI Structure home page, enter "2DN2" (human hemoglobin). How many amino acid chains are in hemoglobin? (Answer: 4.)

2. One hemoglobin molecule has two alpha-chains that are identical to each other and two beta-chains that are identical to each other. The alpha-chains are on lines A and C, and the beta-chains are on lines B and D. Hold down the Shift key and highlight the two alpha-chains; observe them in the structure window. Repeat for the beta-chains.

3. Find the sixth amino acid, glutamic acid (e), in the two beta-chains (lines B and D). Holding down the Shift key, highlight them both.

4. Rotating the molecule in the structure window, locate the glutamic acids on the outside of the protein.
Glutamic acid is negatively charged; charged amino acids are often on the outside of proteins, where they make the protein more soluble in water. In sickle cell anemia, these glutamic acids are replaced by valines, large nonpolar amino acids. The sickle cell hemoglobins aggregate into chains thousands of molecules long, as valines on adjacent hemoglobin molecules are attracted by hydrophobic interactions. These long chains of hemoglobins cause sickle-shaped red blood cells.

5. Observe the four heme groups in the molecule. Each heme group carries one oxygen molecule (O2), so one hemoglobin molecule carries four O2. In the Edit/Global Style window, click on "Heterogens"; the heme groups disappear. Clicking on "Heterogens" again makes them reappear.

III. Using VAST to Compare Two Proteins

VAST (Vector Alignment Search Tool) shows the tertiary structures of two or more proteins superimposed on each other while the sequence/alignment viewer displays the primary structure of each protein. Compare the beta-chain of human hemoglobin with sperm whale myoglobin. Myoglobin carries oxygen in muscle cells.

1. Go to the Structure Summary page for 2DN2, human hemoglobin. Click on "VAST" (after Related Structure). Click on chain B, "entire chain"; this is human beta-hemoglobin. This screen shows all the proteins that can be compared with human beta-hemoglobin. Check the box to the left of 1A6M_A, sperm whale myoglobin. Click on "View 3D Alignment" at the top left of your screen.

2. Human beta-hemoglobin and sperm whale myoglobin are superimposed in the structure window. The sequence/alignment viewer shows the primary structure of both proteins. Highlight differences in the primary structures, and find them in the structure window. Small differences in primary structure lead to subtle differences in tertiary structure; these cause important differences in the properties of the protein. In the Edit Global Style window in the Style menu, check "protein side chains."
Look at the amino acid side chains, concentrating on places where the proteins are different. Rotate the proteins in the structure window. You are comparing the entire whale myoglobin molecule with one beta-chain of human hemoglobin.

IV. Other Proteins

INSULIN--2G4M. How many amino acid chains are in an insulin molecule? (Answer: 2.) In Style/Coloring Shortcuts/Domain, color each chain a different color. Notice the three disulfide bonds (gold).

NUCLEOSOME WITH DNA--2NZD. The nucleosome contains histone proteins and DNA. It forms when chromosomes coil up during mitosis and meiosis. How many histone proteins are in one nucleosome? (Answer: 8, labeled A through H. I and J are the strands of the DNA molecule.) In the Style/Edit Global Style menu, color the proteins magenta and the DNA white. (Suggested settings: Uncheck the "helix objects" box; view both the protein and nucleotide backbone in "wire worms" rendering; click on "Nucleotide Side Chains" to see the DNA base pairs; for color, in "user selection," select magenta for proteins and white for nucleotide backbone and nucleotide side chains; uncheck "Heterogens.") Rotate the molecule; DNA winds around each nucleosome twice. DNA is negatively charged; histones are positively charged because they contain many positively charged lysines and arginines. Holding down the Shift key, highlight some lysines (k) and arginines (r); find them in the tertiary structure of the proteins.

AQUAPORIN--3D9S. Aquaporin carries water across cell membranes. It is a homotetramer (4 identical amino acid chains) with one pore in each chain. Highlight one amino acid chain. In the Style menu, click on "Edit Global Style." Uncheck "helix objects" in the Show column to see the four amino acid chains. The chain you highlighted is yellow. Rotate the molecule 90° so that you no longer see the pores. You see the alpha-helices that cross the cell membrane.
Holding down the Shift key, highlight all nine alpha-helices (green) in one amino acid chain. They are not all the same length and are in different places in the protein, but most are in the part of the protein that crosses the cell membrane. CATALASE--1DGB. Catalase, a homotetramer, catalyzes the breakdown of hydrogen peroxide. How many amino acids are in each chain? (Answer: 498.) Holding down the Shift key, highlight one of the chains and observe its location. In the Style menu, Coloring Shortcuts: Molecule colors each amino acid chain a different color. TUBULIN--1TUB (pig). Tubulin is a globular (roughly spherical) protein that makes up microtubules in the cytoskeleton. Cilia, flagella, and the mitotic spindle are made of microtubules; many vesicles move along microtubules. Tubulin molecules occur as dimers containing one alpha-tubulin (1TUB_A) and one beta-tubulin (1TUB_B) molecule. Highlight one chain to see alpha-tubulin and beta-tubulin individually. Compare alpha- and beta-tubulin using VAST. Click on "VAST," then click on chain A, "entire chain." Click on "1TUB_B," and click on "View 3D alignment." Alpha-tubulin contains 440 amino acids and beta-tubulin contains 427. Highlight an area where the two proteins are different (amino acids nos. 43-58 in alpha-tubulin) to see a difference in the tertiary structure of the proteins. V. Duplication & Divergence Genes in organisms today are descended from approximately 1200 genes that existed around the time of the universal ancestor (Nusslein-Volhard, 2006). Each of these early genes coded for a protein motif. In the ensuing 3.5 billion years, these genes have given rise to the genes that exist today through two major evolutionary mechanisms: duplication and divergence, and exon shuffling. In duplication and divergence, an ancestral gene is duplicated through a copying error. This occurs frequently over evolutionary time, although rarely in individuals. 
After a gene has duplicated, one copy of the gene performs its usual function, whereas the second copy can accumulate mutations. Occasionally, the second copy accumulates mutations and codes for a useful protein that is preserved by natural selection. This is divergence. Myoglobin and hemoglobin are similar because they evolved by duplication and divergence from an ancestral globin gene. Alpha- and beta-tubulin are similar because they evolved by duplication and divergence from another ancestral gene. Human FSH and HCG are proteins containing one alpha-chain and one beta-chain. The alpha-chains are identical; the beta-chains are different. They arose by duplication and divergence from an ancestral gene. FSH, follicle stimulating hormone, is produced by the anterior pituitary gland. In females, it induces maturation of an egg; in males, it stimulates sperm production. HCG, human chorionic gonadotropin, is produced by fertilized eggs and prevents the breakdown of the corpus luteum, allowing the fertilized egg to implant. HCG is measured in pregnancy tests. 1. Go to 1HRP, the structure page for HCG. Observe the alpha- and beta-chains. Highlight each to see its location. 2. Click on "VAST." 3. Click on B, "entire chain." 4. Click on "1FL7_B, beta-FSH." 5. Click on "View 3D Alignment." 6. Tertiary structures of beta-FSH and beta-HCG are superimposed in the structure window. There are 145 amino acids in beta-HCG and 111 in beta-FSH. 7. In the view menu, click on "Zoom Out" to see the entire molecule. 8. Highlight all the amino acids in 1HRP_B to turn beta-HCG yellow. Repeat for beta-FSH (1FL7_B). 9. Highlight areas where the primary structures differ (amino acids nos. 72-79 and 46-41 in beta-HCG). Observe how the tertiary structures also differ. VI. Exon Shuffling Exon shuffling is another major evolutionary mechanism. 
One form of exon shuffling occurs when exons from different genes combine to form a new gene; the new gene codes for a protein containing motifs from several different proteins. Notch, a protein found in all animals, has many functions during early embryonic development. Human Notch contains 2703 amino acids, with an extracellular domain outside the cell, one transmembrane segment passing through the cell membrane, and an intracellular domain inside the cell (Figure 1). The free amino end of the protein is outside the cell. The extracellular domain contains 36 EGF (epidermal growth factor) repeats, followed by 3 NL domains, then two NOD regions. The transmembrane segment, an alpha-helix between amino acids nos. 1745-1767, is next. The intracellular domain contains six ankyrin repeats, the final motif found in Notch. When Notch is activated, this part of the protein is cleaved, enters the nucleus, and binds to specific DNA sequences, turning genes on. The molecule binding to the outside of Notch causes genes to be expressed even though it never enters the cell (Barrick & Kopan, 2006). Observe these parts of the Notch protein: 1TOZ--Human Notch 1. The ligand binding region shown here contains three EGF repeats, each 38 amino acids long. There are 12 of these regions in one Notch protein, each containing three beta-pleated sheets. Each EGF repeat contains one two-stranded beta-pleated sheet. Six cysteines in each EGF repeat form three disulfide bonds (gold). 1PB5--NL domain. One NL domain contains 35 amino acids, no alpha-helices, and no beta-pleated sheets. There are three disulfide bonds (gold). Each Notch protein contains three NL domains. 2F8Y--ankyrin repeat. The six ankyrin repeats combined contain two nearly identical amino acid chains 223 amino acids long. Each amino acid chain has 12 alpha-helices and either four or six beta-pleated sheets. Each beta-pleated sheet has two strands. How did Notch arise by exon shuffling? 
Choanoflagellates are one-celled eukaryotes that are the closest relatives of animals (Figure 2). They are widely distributed in aquatic environments (King, 2005). See Choano-Wiki (http://www.choano.org/wiki/ChoanoWiki) for a movie (choanoflagellate gallery). Because choanoflagellates do not have a Notch protein and all animals do, we can infer that Notch evolved after the last common ancestor of animals and choanoflagellates, and before the earliest animal (Figure 2; King et al., 2008). Choanoflagellates have genes coding for three separate proteins, each of which has parts found in Notch. Choanoflagellate protein N1 contains six ankyrin repeats. Choanoflagellate protein N2 contains two NL domains, and choanoflagellate protein N3 contains 36 EGF repeats (Figure 3). [FIGURE 1 OMITTED] [FIGURE 2 OMITTED] [FIGURE 3 OMITTED] Early in animal evolution, over half a billion years ago, the genes that code for these three proteins in choanoflagellates recombined by exon shuffling to form the gene that codes for Notch (King et al., 2008). VII. Exporting Your Structure 1. Open the protein structure. a. In the File menu, select "Export PNG." b. Export as "protein.png." c. Add ".png" to the end of your file. 2. Dragging this file onto a Word or PowerPoint file adds the protein to your document. The idea for this tutorial and some of the initial procedures came from a bioinformatics workshop for high school biology teachers presented by Harvard University's Life Sciences--HHMI Outreach Program, and the accompanying handout entitled "Bioinformatics Lab: Viewing Proteins" by Rob Kulathinal of Harvard University and Brian Bettencourt of University of Massachusetts/Lowell. We also thank Raymond S. Broadhead, Brooks School, Massachusetts, and Leone Castles Rochelle, Ridgeview High School, South Carolina. Special thanks to Nicole King for helpful conversations and to Nadav Kupiec for expert preparation of artwork. 
We thank our students who endured many trial runs of this tutorial and contributed many thoughtful suggestions. Barrick, D. & Kopan, R. (2006). The Notch transcription activation complex makes its move. Cell, 124, 883-885. King, N. (2005). Choanoflagellates. Current Biology, 15, R113-R114. King, N., Westbrook, M.J., Young, S.L., Kuo, A., Abedin, M., Chapman, J. & others. (2008). The genome of the choanoflagellate Monosiga brevicollis and the origin of metazoans. Nature, 451, 783-788. Nusslein-Volhard, C. (2006). Coming to Life: How Genes Drive Development. Carlsbad, CA: Kales Press. Sayers, E.W., Barrett, T., Benson, D.A., Bryant, S.H., Canese, K., Chetvernin, V. & others. (2009). Database resources of the National Center for Biotechnology Information. Nucleic Acids Research, 37, D5-D15. SUSAN OFFNER (email@example.com) and ROBERT F. POHLMAN (rpohlman@sch.ci.lexington.ma.us) are biology teachers at Lexington High School, 251 Waltham Street, Lexington, MA 02421.
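The sickle-cell example earlier in the tutorial describes a single glutamic acid → valine substitution in the hemoglobin beta-chain. As a rough illustration of how such a substitution can be spotted once primary structures are aligned, here is a stdlib-only Python sketch. The eight-residue fragments and the simplified charged/nonpolar side-chain classification below are assumptions made for illustration; they are not part of the original tutorial.

```python
# Hedged sketch (not from the tutorial): locating the sickle-cell
# substitution by comparing short sequence fragments in pure Python.
# Fragments are the first eight residues of the mature human
# beta-globin chain, in one-letter amino acid codes.

NORMAL_BETA = "VHLTPEEK"   # normal beta-hemoglobin (HbA)
SICKLE_BETA = "VHLTPVEK"   # sickle beta-hemoglobin (HbS)

# Simplified side-chain classification, for illustration only.
CHARGED = set("DEKRH")       # acidic/basic residues
NONPOLAR = set("AVLIMFWPG")  # hydrophobic residues

def find_substitutions(seq_a, seq_b):
    """Return (position, residue_a, residue_b) for each mismatch.
    Positions are 1-based, matching the numbering used when
    describing mutations such as Glu6Val."""
    return [(i + 1, a, b)
            for i, (a, b) in enumerate(zip(seq_a, seq_b))
            if a != b]

for pos, a, b in find_substitutions(NORMAL_BETA, SICKLE_BETA):
    kind_a = "charged" if a in CHARGED else "nonpolar"
    kind_b = "charged" if b in CHARGED else "nonpolar"
    print(f"position {pos}: {a} ({kind_a}) -> {b} ({kind_b})")
# prints: position 6: E (charged) -> V (nonpolar)
```

In practice one would compare full sequences retrieved from a database such as NCBI or UniProt rather than hard-coded fragments; the point here is only that a charged-to-nonpolar change is easy to flag programmatically once the sequences are lined up.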
Tuesday, March 8, 2011 For Catholics and many Protestants, today is the last day before the fasting and penance of Lent. Known variously as Fat Tuesday, Mardi Gras, or Shrove Tuesday, this is a day that started as a celebration and anticipation for the joy of Easter but has lately become another commercialized excuse for bad behavior. However, what many people have forgotten is that today is also the last day of winter--or at least it should be. A popular urban legend has it that spring does not begin until March 20 this year. In recent years, calendar makers and the mainstream media have popularized the misguided idea that the equinox is the precise moment that winter ends and spring begins even though the changing of the seasons is a natural and gradual process that cannot be exactly predicted from one year to the next. As the name implies, March is a month of changes, so instead of an abrupt switch at March 20, evidence that the process of spring has already begun is everywhere. That process is reminiscent of the adage that March “comes in like a lion and goes out like a lamb.” The high winds and freezing cold of February are already giving way to the verdure of spring. We now live in an age where, for the first time in the history of the world, a majority of people live in cities and are detached from nature, so it is easy to understand why this urban legend has become so popular. Worse still, we are now subjected to faux-intellectual clichés such as the perennial, “it’s snowing in December but it’s not even winter yet.” Such oxymoronic statements are evidence that the choice of any particular date for the beginning of each season is at best arbitrary. However, if we must choose a date, Ash Wednesday seems as good a choice as any, especially if we consider that the word “Lent” is derived from the old Germanic word for "spring." In the Mid-Atlantic region, this is a particularly good fit for the reasons below. 
From a meteorological standpoint, the coldest and snowiest days of the year run from the first week in December to the first week in March: 1) In the Mid-Atlantic region, the average daily temperature drops to its lowest level almost exactly at the midpoint between December 1 and March 1, not at the midpoint between the winter solstice and the vernal equinox. Although temperatures are different, the midpoint of the temperature curve is similar in other regions. 2) Comparing daily temperatures for December and March, almost every day in December is colder than the same day in March. 3) Average daily snowfall is highest throughout the months of December to February, not from December 20 to March 20. From a botanical standpoint, deciduous plants start growing in late February and go into dormancy around early December: 4) The first blooms of spring--especially crocuses and daffodils--can be found in early to mid-March, or sometimes even late February, well before the vernal equinox. 5) Depending on location, the changing leaves achieve their peak color in late September through early November and start to fall very soon thereafter. 6) The average date of the last overnight frost of spring, and thus the time for planting crops, ranges from mid-February in the South to mid-May in the Mountain West. Thus, the equinox is actually closer to the midpoint of spring from an agricultural standpoint. 
From a cultural and historical standpoint, the season of spring has always included most or all of the month of March and possibly even February, whereas summer is understood to begin in June, May, or even late April depending on who you ask: 7) Groundhog Day (or Candlemas), which is celebrated on February 2, is a tradition which sometimes results in a proclamation of “early spring.” Although the predictions of Punxsutawney Phil and his cousins are wildly inaccurate when compared to actual weather data, the tradition is nevertheless a recognition that mild weather sometimes comes before March 1 and certainly long before the vernal equinox. 8) As mentioned above, Lent, which is derived from the Germanic root for "spring," begins between February 4 and March 10, but the average date of Ash Wednesday is February 21. 9) Especially in the South, Easter is the first day it is considered acceptable to wear white pants, shoes, etc., which are traditionally associated with summer. The average date of Easter is April 8. 10) Similarly, the feast of the Nativity of St. John is a popular holiday celebrated across Europe in late June which is known as midsummer, implying that spring has already ended and summer has begun long before the summer solstice. 11) The name of March comes from ancient Rome, when March was the first month of the year and named Martius after Mars, the Roman god of war. In Rome, where the climate is Mediterranean, March was the first month of spring, a logical point for the beginning of the year as well as the start of the military campaign season. 12) Data going back to 1974 indicate that retail prices for gasoline are higher on average for the months of May through September because the summer months are traditionally associated with car trips and vacations. 
Finally, from a bureaucratic standpoint, no agency or body of the United States government has ever declared the solstices and the equinoxes as the “official” start and end of the seasons: 13) According to the National Oceanic and Atmospheric Administration, which falls under the U.S. Department of Commerce, Atlantic hurricane season runs from June 1 to November 30 and thus coincides exactly with the traditional dates for summer and fall. 14) According to the Energy Information Administration, which falls under the U.S. Department of Energy, the “winter heating season” runs from October 1 through March 31 and thus corresponds to the traditional dates for winter, with some extra padding on either end. The midpoint of the winter heating season is December 31. 15) The U.S. Naval Observatory, which falls under the U.S. Department of Defense, provides a listing of the exact times for the equinoxes, solstices, perihelion, and aphelion which is titled “Earth’s Seasons” but does not state that these astronomical phenomena are in any way the “official” start or end of the seasons. 16) By law, Daylight Saving Time (DST) now starts on the second Sunday in March and ends on the first Sunday in November. Prior to 2007, when the new law went into effect, DST started on the last Sunday in April and ended on the last Sunday in October. Just as the government definition of “winter heating season” includes padding on both ends of the traditional season, so too does DST include padding on either end of the traditional summer season. The midpoint of DST is in mid-July. In summary, the folklore of the equinox as the first day of spring is a modern idea which did not become fashionable until very recently. The idea has been popularized in the last few decades, because the exact times of certain phenomena are variable and can only be noted after they have occurred. 
At the same time the dates of many phenomena such as the last frost or peak fall leaves vary across the vastness of our continent. However, instead of relying on the false precision of the astronomical equinox, we should base our understanding of the seasons on natural phenomena that we observe around us. Indeed, at this moment here in Virginia, the first tender green leaves and pink buds are coming out on the trees and shrubberies, the grass is starting to turn green again, and chirping birds are once again heralding the arrival of sunrise. Regardless of what the calendar makers would have you believe, spring is here. Wednesday, March 2, 2011 With an announcement today of a “federal property board” to evaluate and propose the sale of federal assets, President Obama demonstrated again his impeccable gift for exceptionally poor timing. "For too long red tape and politics have prevented the government from moving forward on these fronts," the official said, adding that the board was expected to return $15 billion over the first three years in operation.

This announcement comes just as members of Congress from Montana and Idaho are bringing legislation to the floor that would require Congressional approval before permanently closing federal lands to mineral and energy exploration. The White House estimates there are around 14,000 federal properties designated as excess, plus thousands of others which are either no longer needed, or are underutilized. Democrat Obama, under intense pressure from his Republican opponents to cut government spending and narrow a budget deficit estimated at $1.645 trillion this fiscal year, is trying to squeeze savings by making the government more efficient. Rehberg said his bill is needed because it was revealed last year that the Department of the Interior was planning to declare millions of acres in Montana as a national monument. 
"This 'Treasured Landscapes' plan relied heavily on input from a few special interest groups and seeks to abuse the Antiquities Act for the presidential designation of National Monuments across the west, including Montana," Rehberg said.

Thus, the Obama Administration is proposing the sale of federal property—although presumably only in urban areas—to generate a mere $15 billion over the next three years while at the same time closing off millions of acres of federal land to lucrative uses such as lumber, mining, cattle grazing, and energy drilling. According to the Bureau of Land Management’s own website: The public lands provide significant economic benefits to the Nation and to states and counties where these lands are located. Revenues generated from public lands make BLM one of the top revenue-generating agencies in the Federal government. In 2007, for instance, BLM’s onshore mineral leasing activities will generate an estimated $4.5 billion in receipts from royalties, bonuses, and rentals that are collected by the Minerals Management Service. Approximately half of these revenues will be returned to the States where the mineral leasing occurred.

A savings of $15 billion is trivial compared to the overall deficit and is made even more laughable when compared to the loss of federal revenue for the use of public lands. The duplicity of this maneuver is enough to shock even the most cynical observer of national politics.
Do you use the Internet a lot? Do you get twitchy, anxious, or downright homicidal when your connection is down or you're forced to pull yourself away from the manifold delights of YouTube? You may have a problem. The bible of mental disorders — officially known as the ‘Diagnostic and Statistical Manual of Mental Disorders’ or DSM-5 — has listed Internet addiction as a condition recommended for further study in its May 2013 edition, says RT.com. Internet (and technology) addiction might be relatively new, but it's certainly real, as horror stories about neglectful gamer parents and violent shootings over in-game defeat continue to be breathlessly reported. Kids are considered to be especially at risk, says RT.com, which points out that Australia was one of the first countries to offer official help for those suffering from video game addiction problems. Recent Australian research found that video game addicts suffered from 25 percent more depression and 15 percent more anxiety than more moderate gamers — although it's unclear if this is the cause or a symptom of their problem. Interestingly, researchers found that the extreme players actually had somewhat higher grades than their counterparts if they were in school, and did fine at work, indicating that gaming may be a coping mechanism for players with other issues. Think you might qualify? You can take part in an Australian study on video games and addiction here, if you're over 18. And there's always this online Internet Addiction Test. Or talking to a professional, that could work, too.
- To generate and promote equity research in music education - To foster cross-institutional conversations and research projects in this area - To disseminate this research through scholarly conferences and publications - To use this research to better prepare music teachers and music teacher educators - To increase the cultural diversity of music education researchers, music teacher educators, and music teachers by increasing the number of music education students from underrepresented, underserved populations; and by striving to create culturally relevant, inclusive, and welcoming learning environments for all students - To initiate change through an agenda of activism and community engagement that challenges exclusionary paradigms in music education* * M. Kindall-Smith, C. L. McKoy, and S. W. Mills, “Challenging Exclusionary Paradigms in the Traditional Musical Canon: Implications for Music Education Practice,” International Journal of Music Education, November 2011 29: 374-386.
Orangutans in Borneo Orangutans in Kalimantan (Borneo), Indonesia For the first seven or eight years of his life, Fred Galdikas had a best friend called Apollo Bob. Like most friends at that age, they would play together outside. And like most children at that age, their differences seemed immaterial. Kids have an ability to look beyond race, religion, or language and just see a friend for who they are. That was probably lucky for Fred. You see, Apollo Bob was an orangutan. Thirty years ago, when Fred was born, his mother was living deep inside the jungles of Borneo, in the Indonesian part of the island called Kalimantan. Dr Birute Galdikas had set up a refuge for orangutans – somewhere to protect them and to research them. And while she was there, her family grew. What she didn’t realise at first was that the family would end up including the animals around her. These days, more than forty years after she first arrived in Borneo, Dr Birute Galdikas still spends most of her time living and working with the orangutans in Tanjung Puting National Park. Fred also spends much of his life at the base camp there, when he’s not working on the administrative side of the Orangutan Foundation International organisation in the United States. His “deep innate connection” with the animals – something he’s felt since birth – means he can never be away for too long. I first meet Fred as we sit on the wooden deck of a ‘klotok’, the traditional Indonesian boat that is taking us up the river to Camp Leakey, the heart of the orangutan conservation efforts. In the trees on the water’s edge, monkeys sit in branches and watch us go past. The river winds its way through the dense jungle and the boat lethargically makes its way upstream. Around us the dense jungle is never silent – a reminder that we’re not alone out here. “Just remember, we are going into their world”, Fred explains. “We’re going into an orangutan’s world, we’re not going to our world. 
This is where they stay, where they live. So when we interrupt that flow, it’s interrupting nature a little bit.” It’s an interruption that is needed, though. The orangutans are under threat on a number of fronts, but mostly from a shrinking habitat. Many local Indonesians are destroying the natural forests in Borneo to create palm oil plantations – one of the easiest ways to make money on the island. After staring out at the endless jungle of trees along the river for the past few hours, it’s hard to imagine the devastation that’s happening just kilometres away. But Fred knows the reality all too well. “There just simply isn’t enough forest for the orangutans to roam and live”, he tells me. Visiting Camp Leakey, Kalimantan, Borneo When we finally arrive at Camp Leakey, it’s difficult to firmly grasp any sense of time. This could be 2012… or it could be the 1970s, when Birute first started her work. A small collection of wooden houses stands in a clearing. In front of one building, a wild boar is sleeping. Indonesian assistants sit in a hut smoking, waiting to show us around. They seem more comfortable in the humidity than the visitors. It takes four hours to get here by boat these days. It feels so remote but I can only imagine what it was like 40 years ago when Birute set it up. There was no electricity or phones and, more importantly, she was under immense pressure after being told by academics that she had no chance of success – that the orangutans were too elusive to be studied in the wild. How wrong they were. Or, to put it better, how wrong Dr Birute Galdikas proved them to be. And if you need reminding of that, the evidence of her success is right in front of me at the feeding station. A local assistant puts a large bunch of bananas on the elevated wooden platform in the jungle, a ten-minute walk away from the camp. He also leaves a bucket of milk and then walks away. Then the animals come. 
One orangutan appears high in a tree and slowly lowers itself down towards the platform, watching the surroundings as it descends. There’s a rustling sound on the ground behind me and I turn around to see another orangutan lumbering towards the bananas, right through a group of humans. More follow from all directions until there are about half a dozen. There’s a nonchalance from the animals, seemingly aware of the people and the role we’re playing, but without any deference. This is indeed their world and they know it. As we’re watching the orangutans grab the bananas and then climb the trees to eat them, I chat with Fred. He explains how the animals are free to come and go as they like – there are no fences here. The food is offered in case the animals need it. “We supplement their food intake”, he says. “If there’s no fruit in the forest, they’re not going to eat naturally, so they come here. But sometimes visitors come, spend all this money, spend four hours getting here and they don’t see an orangutan. Well that’s a good thing – it’s because they’re off feeding from their natural wild fruit.” But there is a bond here between human and animal that is unusual and unlike anything I have seen before. There’s almost a magic in the way the orangutans behave with Fred and the workers. Many of these animals were rescued as baby orphans and have been brought up by humans. Although they are now free and behave as such, they’re emotionally connected with their guardians. As I walk back to the klotok, I’m thinking about all that I’ve seen here in Tanjung Puting National Park. Not just the grace and beauty of the animals but the dedication and love of the humans who have looked after them for so many years. Mother orangutans have held their babies close to their chests as they’ve climbed down to take milk from the humans. One day those babies will grow up and have their own babies to care for. 
I’m lost in thought when a small commotion near one of the huts in camp catches my attention. Fred is there, sitting on the step. Next to him, sitting just as calmly, is an orangutan and her baby. They all look at each other for a moment, there are almost smiles on all their faces. Then the animal nods her head, picks up her baby, and walks away back to the forest. Friends? Family? They’re all the same here at Camp Leakey.

You can find more here about the work of Orangutan Foundation International

Time Travel Turtle was a guest of the Indonesian Ministry of Tourism but the opinions, over-written descriptions and bad jokes are his own.
Below is the summary of what I have been able to understand of Critical Reasoning (so far). I have compiled my notes from various sources, and credit goes to the original authors, such as:
1) comprehensive program 2009 and verbal workbook (good and concise)
2) Princeton, Crack the GMAT (not too good, just basic)
3) Unknown blogger (http://gmat-cr.blogspot.com/search/labe ... STRATEGIES)
4) Most importantly, a blogger by the name of Mukul Hinge (http://www.mukulhinge.com/CRBasics.html)
5) Tough diagnostic questions and good explanations of the answers from http://www.informit.com/articles/articl ... 9&seqNum=6
6) Fundamentals as explained by http://www.prepfortests.com/gmat/tutori ... ing/whatis
7) A good walk-through presentation: Critical Reasoning.PPT [200.5 KiB] Downloaded 21515 times

Hope you all find this post useful. Thanks, and all the best to all.
______________________________________________________________________________________________________

Critical Reasoning Basics
1. Critical reasoning; a system of reasoning that allows us to arrive at conclusions using available data and to critically check the validity of these conclusions.
2. Argument / Stimulus; in critical reasoning we consider groups of related propositions. An argument is a set of two or more propositions (called premises) arranged in such a way that all but one of them (the premises) provide support for the remaining one (the conclusion).
3. Line of reasoning; the transition or movement from the premises to the conclusion, the logical connection between them, is called the line of reasoning.
4. Assumption; the un-stated premises that make or break the conclusion of the argument / stimulus.
5. Conclusion; contains the central idea of the argument.
• Conclusion = Premises + Assumption (the one that links the premises)
• If the premises are valid and sufficient, then the conclusion must be true. Conversely, if the conclusion is invalid, then the premises must be invalid or insufficient or both. . . But if the premises are invalid, the conclusion is not necessarily invalid. 
6. Inference; un-stated partial conclusions that can be drawn from the given premises. They do not contain the central idea, but they lead to/support the central idea.
• An inference can serve as a link in the line of reasoning but is not the same as a conclusion. A conclusion invariably addresses the central idea of the stimulus, whereas an inference serves only to support the conclusion.
• There can be many inferences, such as an immediate inference followed by a final inference, before the argument reaches its conclusion.

Putting the above together, let's see examples.

EXAMPLE 1
I ate a mango today. I went to school. He is a good man.
The above three are just a set of statements / premises that have no connection or line of reasoning.

EXAMPLE 2
Unlike in the above example, the following statements / premises are connected:
Jimmy is a doctor. (premise)
All doctors go to medical school. (premise)
So Jimmy went to medical school. (conclusion)

EXAMPLE 3
Let's see an example that highlights conclusion vs. inference.
Premise #1: Students who get seven hours of sleep at night tend to be more alert the next day than those who don't get seven hours of sleep.
Premise #2: The ability to get good scores in any competitive exam depends on one's level of alertness.
• Inference: From the above two we can infer that "if you are a student (this may not be true for others) you are likely to be more alert the next morning if you get seven hours of sleep."
• Conclusion: From the above two we can conclude that "a student (this may not be true for others) wishing to do well in competitive exams should try to get at least seven hours of sleep on the day before the exam."

Important
• Logic can give us a consistent and reliable method of inferentially arriving at a conclusion but cannot guarantee the validity of the statements / premises used in constructing an argument.
• In critical reasoning you have to accept the truth of the stated premises and validate only the conclusion. 
That is to say, in critical reasoning A FACT is 100% TRUE; a FACT, or as we say a PREMISE, can never be strengthened or weakened; only the CONCLUSION can be strengthened or weakened.

Approaching / Solving Critical Reasoning Questions
1. Read the stimulus, fairly swiftly
2. Read the question & re-read the stimulus, a little more steadily
3. Identify the conclusion**
4. Separate the evidence from the conclusion
5. Re-arrange the premises and conclusion to get a clear line of reasoning
6. Pre-phrase the answer
7. Attack the question

**Yes!! This is the most important step. If you have identified the conclusion correctly, half your battle is won. You might be wondering what the big deal is about identifying the conclusion. Next time you practice CR, try identifying the conclusion first and you will see the difference; while reading the stimulus we tend to believe we have identified the conclusion whereas we have not. Identifying the conclusion is so very important because it's the conclusion which is verified, strengthened or weakened, not the premises (which are 100% fact and have to be taken as truth).

Remember
• To distinguish between conclusion and evidence, ask "what does the author believe?" (conclusion) or "why does the author believe it?" (evidence).
• Breaking arguments into premises/evidence & conclusions is important, as our minds naturally want to understand what we are reading, and sometimes we assume connections between the premises and conclusions of an argument that don't exist. By doing this, our minds often make sense of arguments that don't actually hold together.
• Fight your habit of making assumptions about arguments and watch out for illogical connections the test wants you to make!
• Don't assume information unless you see it in the argument.
• Memorise what the question is asking.
Before you start attacking the question, identify what is to be done: what is to be destroyed, strengthened or identified. (Mumble the reply to yourself and memorise what needs to be looked for in the answer choices; then you won't have to re-read the stimulus.)
• Think of the word "destroy" when strengthening or weakening an argument.
• Don't get stuck on one answer option; instead read the other choices, which will help you eliminate faster. Just as in data sufficiency, if the first statement seems complicated you move on to the second one, eliminate the wrong choices, and then come back to the first.
• Each of the answer choices will do one of the following:
- It will weaken the argument.
- Or it will strengthen the argument.
- Or it will not affect the argument at all (neutral).
- Or it will be out of scope.
• KISS: keep things short and simple.
• Stick within the scope; strengthen or weaken in the context of the conclusion, not otherwise. Even if an option seems OK, ask yourself whether it is in the context of the conclusion. The answer choice has to be the one which has a reference in the conclusion.
• Unlike in RC, in CR an answer choice with 'strong' wording need not be eliminated; in fact, if the wording is heavily qualified (containing words like some, occasionally, or possibly), check whether the statement is truly strong enough to affect the argument.

Types of questions

1. Identify the assumption / strengthen / weaken the conclusion
• If the assumption is true, then the conclusion has to be true, whereas if the assumption is invalid the conclusion has to be invalid; i.e. an assumption can validate/strengthen the argument if true, or invalidate/weaken the argument if false.
• While identifying the assumption you need to separate what is merely relevant from what is crucial to the discussion/argument.
• Avoid answers that re-state the premises.
• In strengthen/weaken questions, avoid answer choices that do the opposite of what is asked.
• In 'EXCEPT' questions, the right answer is the one that least significantly affects the conclusion.

2.
Identify the conclusion
• Note: for an 'identify the conclusion' question, since the conclusion is not stated in the passage, the answer can't be a restatement of the information given in the passage.
• The conclusion has to be supported by all the evidence in the passage, i.e. it should connect with / make use of all the premises, and not just link up with one (of many) premises.

3. Deduce the conclusion / get the inference
• No outside information should be added.
• If a choice needs any additional assumption to work, then it is wrong.
• Additionally, when the question says "if the information provided in the passage is true", any option that expresses doubt about the information provided is also out.
• In inference questions, be wary of answer choices that:
- Go outside the scope (the right choice should be just one step ahead in the direction of what can be deduced from the premises, which eventually leads to the conclusion).
- Are too extreme (be wary of words such as never, always, must, etc.).

4. Mimic the reasoning
Depending on the question type you will need to use one of the following three methods.

METHOD 1: Use Venn diagrams

Example 1: All vehicles with a 4-stroke engine need maintenance after a year. Your bike is a 4-stroke. Therefore, your bike will need maintenance after a year. This conclusion is perfectly fine and can be represented as a Venn diagram, with the set of 4-stroke vehicles contained in the set of vehicles that need maintenance after a year, and your bike a point inside the 4-stroke set.

Example 2:
All men are idiots.
Tom is an idiot.
Therefore Tom is a man.
This conclusion is perfectly fine and acceptable on the GMAT. But when we look carefully, this same stimulus can be logically valid without being factually correct:
B: set of all idiots
A: set of all men (contained in B)
C: Tom (can fall outside A)
So, you see that Tom could be a man or could not be a man; regardless, for the purposes of CR on the GMAT, the conclusion 'Tom is a man' is perfectly acceptable.

METHOD 2: Use x, y, z to simplify questions

METHOD 3: Use logical operators, in questions which involve straightforward negation or conditional analysis.

'If not' operator, denoted by '!':
If A then B; so if !B then !A (the contrapositive).

'And' operator, denoted by '&'. If A & B = C:
!A => !C
!A & !B => !C

'Or' operator, denoted by '||'. If A || B = C:
A, !B => C
B, !A => C
!A, !B => !C
A, B => C

5. Resolve the paradox
• Find the missing link that ensures that both/all the premises in the argument stand true.

6. Identify the flaw
• Don't confuse a flaw question with a weaken question. In a weaken question you are supposed to find the additional information which, if true, would destroy the argument, whereas in a flaw question you know the evidence doesn't support the conclusion very well at all, and it's up to you to explain why.

END of post. I love kudos; consider giving them if you like my post!!

CRITICAL REASONING FOR BEGINNERS: notes & links to help you learn CR better.
QUANT NOTES FOR PS & DS: notes to help you do better in Quant.
GMAT Timing Planner: this little tool could help you plan your timing strategy.
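P.S. One quick way to convince yourself that the operator rules in Method 3 (and the Example 2 point that Tom need not be a man) really hold is to brute-force them. Here is a small Python sketch of my own; the truth-table loops and the names in the sets are purely illustrative and not from any prep source:

```python
from itertools import product

# Method 3, '&' operator: with C defined as A AND B,
# "!A => !C" must hold on every row of the truth table.
for a, b in product([True, False], repeat=2):
    c = a and b
    if not a:
        assert not c  # !A => !C

# Method 3, '||' operator: with C defined as A OR B.
for a, b in product([True, False], repeat=2):
    c = a or b
    if a and not b:
        assert c          # A, !B => C
    if b and not a:
        assert c          # B, !A => C
    if not a and not b:
        assert not c      # !A, !B => !C
    if a and b:
        assert c          # A, B => C

# Example 2 with sets: "All men are idiots" makes the set of men a
# subset of the set of idiots, but an idiot (Tom) need not be a man.
idiots = {"tom", "dick", "harry"}
men = {"dick", "harry"}      # men is contained in idiots
assert men <= idiots         # premise: all men are idiots
assert "tom" in idiots       # premise: Tom is an idiot
assert "tom" not in men      # ...yet Tom falls outside the set of men
print("all checks passed")   # prints: all checks passed
```

The set example is exactly the B/A/C picture above: B is the set of all idiots, A (the men) sits inside B, and C (Tom) can sit inside B without sitting inside A.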
This post was written by Lisa Haugaard and Vanessa Kritzer.

In Washington, Congress is nearing a vote on the U.S.-Colombia free trade agreement (FTA). In Colombia, Afro-Colombian and indigenous communities, union members and small-scale farmers are bracing for the agreement's economic impact and enduring still more violence and conflict over rights and land. Here's why we care so much about it:

First, the slaughter of trade unionists in Colombia is far from over. More trade unionists were killed last year in Colombia than in the rest of the world combined -- see this chart, if the words alone do not resonate. Twenty-two trade unionists have been murdered this year to date. One was hung by barbed wire and tortured; another was shot by men on motorcycles as he left a union meeting; one was killed walking home with her mother; one was shot in the head as he celebrated his 46th birthday. Some ninety-four percent of the 2,900 murders since 1986 remain in impunity. Something is deeply wrong when a country leads the world in murders of people who exercise their freedom to organize.

The Obama administration and Colombia's President Juan Manuel Santos signed a Labor Action Plan in April 2011 that commits the Colombian government to protect trade unionists, prosecute cases and reform labor laws. That is a positive step. But the plan rewards promises rather than results. While so-called "cooperatives" (labor intermediaries that obscure the relationship between workers and a company to avoid respecting labor law) are supposed to be outlawed, according to the AFL-CIO workers continue to be forced into cooperative-like arrangements and to be fired for legitimate union activity. Fifteen trade unionists have been killed since the action plan took effect. Trade unionists could continue to be slaughtered in Colombia in 2012, and the FTA would remain in place forever.
We need to see a sustained reduction in violence, real progress in bringing the killers to justice, and ingrained respect for internationally recognized worker rights before any trade deal should advance. As the AFL-CIO's President Richard Trumka put it, "If 51 CEOs had been murdered in Colombia last year, this deal would be on a very slow track indeed."

Second, the trade agreement will devastate poor farmers who have borne the brunt of the country's brutal conflict. These farm families who have lost husbands, sons and daughters -- and barely eke out a living as it is -- will lose even more. Threatened Afro-Colombian and indigenous communities will have to fight even harder to stay on their ancestral lands. In the two-minute video below, activists in Colombia and the United States predict what the damage will be like for these Colombians already struggling to get by:

Colombian economists estimate that 1.8 million small-scale farmers would see their net agricultural income fall by over 16 percent on average, but that 400,000 farmers dependent on crops that would compete with U.S. products would lose 48 to 70 percent of their farm income. Moreover, as the Santos administration implements a positive plan to return land to displaced persons, a flood of U.S. imports will make it more difficult for returnees to stay and thrive. The Colombian government does not have a serious plan in place to mitigate the impact of the FTA on its vulnerable small farmers.

We should care about these families because they have suffered so much already. But we should also care because undercutting their livelihoods would push farmers back into coca production, the raw material for cocaine. U.S. taxpayers have already paid the tab on some $8 billion in aid to Colombia, supposedly with the aim of fighting illegal drugs.

Third, the trade agreement will escalate the kinds of investment that are most associated with violence. You may have heard about the campaign against conflict diamonds.
Colombia has conflict gold, conflict coal, conflict oil, conflict cattle ranching, conflict ports, conflict dams and conflict African palm plantations for biofuel. Paramilitary groups use violence to push people off their lands for these projects, or businessmen pay paramilitaries to do it for them. This is one reason Colombia is a world leader in internal displacement, with over five million people living in desperate conditions after being forced to flee their resource-rich lands.

As the Afro-Colombian and indigenous communities predict in the video below, this trade deal will favor these kinds of investment. Unless paramilitary and other criminal networks are dismantled before the deal is sealed, the FTA will escalate the violence.

President Obama promised in his campaign to stand up for human rights and oppose this agreement until conditions improved sufficiently in Colombia. That time has not arrived. We join our Colombian civil society partners in opposing this trade agreement now.
I was about to jump into bed when I remembered how much I wanted to keep my goal of writing each day. I could go through the list of things that kept my husband and me in the car or at activities all night, but you already know how that goes; we all get busy.

So, I thought I would just share something I listened to last week that helped me think about the importance of exposing our students to opportunities for digital reading. Dr. Julie Coiro is a researcher at the University of Rhode Island who has studied online reading comprehension with upper elementary and middle school learners. She has found that with so many choices online, careful reading is lost. She also mentions that offline readers aren't sure how to tackle multiple texts in different places. Her big point in this podcast is that we need to teach kids to stop and think about cues, predict, infer and make meaningful choices.

How can we help support students who are and will be online readers in their futures?
- The panda's paternity is unclear
- Mei Xiang gives birth after quick labor, zoo officials watch for second
- Giant pandas are one of the world's most endangered species
- Zoo Atlanta welcomed the first ever panda twins in July

There's a new panda in Washington. Officials at the Smithsonian's National Zoo confirmed the arrival of a cub on Friday for Mei Xiang following a quick labor.

DC's new celebrity resident was about the size of a stick of butter, and zoo officials said it appeared to be doing well, as was mom, who was cradling it in her den.

"This is a very delicate time for us. We're still on the lookout for a possible second cub," the zoo's director, Dennis Kelly, said at a news conference. There was a 50% chance of a twin being born later Friday or early Saturday.

A healthy little panda would be extra special because the female cub that Mei Xiang delivered nearly a year ago died within days. A lack of oxygen due to underdeveloped lungs was determined to be the cause. Zoo officials said they were pleased and hopeful things would work out this time, and they even brought in an expert from China to help out.

"After our last experience, and this is such a small cub, I am not going to relax," Kelly said. "We're going to be tense for the next two or three months. We have high hopes."

It will take two to three weeks to know the sex of the cub, and zoo officials won't name it for 100 days, following Chinese tradition.

The cub was conceived through artificial insemination; it was the third pregnancy for Mei Xiang, 15. The National Zoo says the cub's father is either their own Tian Tian, 15, or the San Diego Zoo's Gao Gao, who is about 23. All three pandas are on loan from China.

Mei Xiang and Tian Tian are already the parents of Tai Shan, who was born in 2005 and is now in China, the native region for the endangered animals. American zoo officials are consulting with their Chinese counterparts about panda reproduction and ways to encourage newborns to thrive in captivity.
The giant panda is one of the world's most endangered species, with an estimated 1,900 in existence. Among pandas born in captivity, about one in four males and one in four females die in the year following birth, according to the National Zoo.

America welcomed its first panda twins in 26 years in July at Zoo Atlanta. The twins were the first for Lun Lun, who has two other offspring at that zoo, and were the product of artificial insemination as well. Their father is Yang Yang, also a resident.

National Zoo experts began watching Mei Xiang a couple of weeks ago, and the anticipation peaked once she became restless and began cradling objects.