Dataset schema (per-row fields and summary statistics):

  column           type            range / values
  text             stringlengths   199 to 648k
  id               stringlengths   47 to 47
  dump             stringclasses   1 value
  url              stringlengths   14 to 419
  file_path        stringlengths   139 to 140
  language         stringclasses   1 value
  language_score   float64         0.65 to 1
  token_count      int64           50 to 235k
  score            float64         2.52 to 5.34
  int_score        int64           3 to 5
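The schema above attaches quality metadata (language score, token count, educational score) to every row. A minimal, hypothetical sketch of how such scores are typically used to filter rows downstream; the function name and threshold values are my own assumptions, not part of the dataset:

```python
def keep_row(row: dict,
             min_language_score: float = 0.90,
             min_int_score: int = 3,
             min_tokens: int = 100) -> bool:
    """Keep only rows that are confidently English, decently scored,
    and long enough to be useful. Thresholds are illustrative."""
    return (row["language"] == "en"
            and row["language_score"] >= min_language_score
            and row["int_score"] >= min_int_score
            and row["token_count"] >= min_tokens)

# Field values taken from the windmill row below.
sample = {
    "language": "en",
    "language_score": 0.918473,
    "int_score": 3,
    "token_count": 130,
}
# keep_row(sample) -> True
```

In practice the thresholds would be tuned against the score distributions summarized in the table above.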
The United States in the First World War: An Encyclopedia, by Anne Cipriano Venzon and Paul L. Miles. The Great War brings to mind the legendary exploits of Sergeant York, Lafayette Escadrille aces in aerial combat with the Red Baron, valiant bayonet charges across no-man's land, victory marches on New York's Fifth Avenue, and a parade of thousands of other events, forces, and personalities. A brand-new reference work now sorts out this complex subject and provides instant answers to thousands of questions on subjects that range from diplomatic initiatives to victory slogans, from political forces to armed forces, from legislation to the Lusitania, and every other important aspect of the war. Organized alphabetically by topic, the Encyclopedia offers a comprehensive overview from the period of preparation prior to American entry into the conflict through the signing of the Armistice. Civil topics include articles on the political, industrial, and moral support of the war and organizational and individual opposition to it. Also examined are the important roles that civilians, especially minorities and women, played in the war effort. Military coverage includes sketches of important leaders, major campaigns and battles, and individual histories of the most important divisions, enabling the user to focus on specific actions and events. Also covered are foreign leaders, both civilian and military, foreign relations, diplomatic efforts to end the fighting, and the final settlement. 
A handy reference for scholars and researchers, the Encyclopedia provides a deeper understanding of the many aspects of the conflict by placing the role of the U.S. in an international context. Major articles contain a brief bibliography.
<urn:uuid:f8e8df05-4393-45bd-9a97-b504d77aa8ee>
CC-MAIN-2016-26
http://www.barnesandnoble.com/w/united-states-in-the-first-world-war-anne-cipriano-venzon/1113962128?ean=9780815333531
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395560.69/warc/CC-MAIN-20160624154955-00068-ip-10-164-35-72.ec2.internal.warc.gz
en
0.890895
440
2.96875
3
Definition of windmill: n. A mill operated by the power of the wind, usually by the action of the wind upon oblique vanes or sails which radiate from a horizontal shaft. The word "windmill" uses 8 letters: D I I L L M N W. No direct anagrams for windmill were found in this word list. Words formed by adding one letter before or after windmill, or to diillmnw in any order: s - windmills. All words formed from windmill by changing one letter. Browse words starting with windmill by next letter.
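The "words formed by adding one letter ... in any order" listing above is just a letter-multiset check: a candidate qualifies if it contains every letter of the base word plus exactly one extra letter, in any order. A minimal sketch in Python (the function name is my own, not the word site's):

```python
from collections import Counter

def formed_by_adding_one_letter(base: str, candidate: str) -> bool:
    """True if `candidate` uses every letter of `base` (with multiplicity)
    plus exactly one additional letter, in any order."""
    base_count = Counter(base.lower())
    cand_count = Counter(candidate.lower())
    # candidate must be missing none of base's letters...
    missing = sum((base_count - cand_count).values())
    # ...and must have exactly one letter left over
    extra = sum((cand_count - base_count).values())
    return missing == 0 and extra == 1

# 'windmills' is 'windmill' plus an 's':
# formed_by_adding_one_letter("windmill", "windmills") -> True
```

The same multiset subtraction generalizes to the "changing one letter" lists by allowing one missing and one extra letter instead.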
<urn:uuid:95e6f34a-5aec-4a54-8d46-bc38710004e8>
CC-MAIN-2016-26
http://www.morewords.com/word/windmill/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395620.56/warc/CC-MAIN-20160624154955-00064-ip-10-164-35-72.ec2.internal.warc.gz
en
0.918473
130
2.78125
3
The main Meghalaya food comprises rice along with fish or meat preparations; in fact, rice is the staple food of the people of Meghalaya. In this context, it can be said that the people of Meghalaya have very liberal food habits. Beyond rice and maize, food in Meghalaya includes millet, tapioca, etc. Besides, the people of Meghalaya rear goats, pigs, ducks, and fowl and consume their meat. Furthermore, the inhabitants also eat the meat of bison, deer, wild pigs, etc. Fish, crabs, eels, prawns, and dried fish also form a major part of the food in Meghalaya. Moreover, the people of Meghalaya practice 'jhum' cultivation, and the yields from these jhum fields form an integral item in the food of Meghalaya. A characteristic habit of the people of Meghalaya is chewing betel leaf and unripe betel nut. In fact, after eating the main course, people in Meghalaya prefer having betel leaf along with dried tobacco and lime. In Meghalaya, a special kind of beer is prepared from fermented rice: the rice beer is made by fermenting the rice and then distilling it. The use of rice beer is most prevalent during the various religious ceremonies. Thus, it is evident that Meghalayan food is a distinctive cuisine with its own innovations and delicacies. Last Updated on : 14/06/2013
<urn:uuid:f0db8487-8cd8-49ac-9155-cb323aa1b3ba>
CC-MAIN-2016-26
http://www.mapsofindia.com/meghalaya/society/food.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396106.71/warc/CC-MAIN-20160624154956-00035-ip-10-164-35-72.ec2.internal.warc.gz
en
0.95144
334
2.9375
3
The US Agency for International Development and the Swedish government announced a $25 million grant program Monday to increase access to clean water for farming. The Securing Water for Food program is intended to fund innovators and help their businesses take root in countries where the technology is desperately needed. "Almost three billion people on the planet right now live in areas impacted by water scarcity," USAID Global Water Coordinator Chris Holmes told AFP. "We want to take technology that has already proven it works and use the grant money to overcome hurdles to get it into countries that no one has bothered or been able to get into, like Sub-Saharan Africa." Grants were expected to range from $250,000 to a million dollars for winning proposals. "It is not just putting up cash; it is making a commitment that we will work closely with them to overcome obstacles in a developing country to try to build out a new technology," Holmes said. Grants will be awarded in categories such as improving water reuse and countering intrusion of salt water into rivers, streams, deltas or underground aquifers. "In a finite biosphere, solutions to pressing water challenges require new thinking and innovative financing," Swedish Minister for International Development Cooperation Gunilla Carlsson said in a statement. "Through a catalytic use of aid, Securing Water for Food will be able to capture and support the implementation of innovative ideas and new technologies for better water efficiency and sustainable development." Water scarcity affects more than 40 percent of the world's population, and approximately 70 percent of fresh water is used for agriculture, according to USAID. "Water scarcity and its impact on food security affect everyone on the planet," said USAID Administrator Rajiv Shah. "By harnessing the expertise and creativity of the world's brightest innovators, we can tackle this critical challenge with new thinking and partnerships." 
USAID and the Swedish International Development Cooperation Agency will begin accepting grant proposals in early November. Information about the challenge grant program was available online at securingwaterforfood.org. "I am really excited about this," Holmes said. "I really think this is something that is going to bear some fruit."
<urn:uuid:d8472f7a-4177-4132-b960-fa293e6a8372>
CC-MAIN-2016-26
http://phys.org/news/2013-09-sweden-unveil-mln-technology-grant.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783404382.73/warc/CC-MAIN-20160624155004-00176-ip-10-164-35-72.ec2.internal.warc.gz
en
0.947911
457
2.6875
3
The future of global carbon markets Outlook: an uncertain future All countries, developed and developing, reached a climate agreement when the COP 17 Durban Platform was signed at the end of 2011. Or rather, it was an agreement to agree in the future. Until 2020, climate policy will be driven by the domestic efforts of individual countries. International policy and the Durban outcome The main decisions taken in Durban concern a mix of temporary commitments and tools. The temporary commitments were: - Implementation of the voluntary “pledge and review” agreements through to 2020, which were previously initiated at COP 15 and COP 16. - Extension of the KP for a second commitment period until 2017 or 2020 without the US, Russia, Japan and Canada. - The Durban Platform for Enhanced Action — a negotiation track that aims to agree on the targets and scope of a new future climate regime by 2015, applicable to all parties from 2020. Other important decisions taken in Durban concern mitigation, adaptation, technology and financing tools that need to be further developed over the next four years. It is important to notice that the nature and scope of the post-2020 agreements is by no means determined. Durban established an Ad Hoc Working Group on the Durban Platform for Enhanced Action, with the mandate to develop “a protocol, another legal instrument or an agreed outcome with legal force under the Convention applicable to all parties.” The clean development mechanism and new market mechanisms To date, the Clean Development Mechanism (CDM) has been the main international mechanism for mitigation in developing countries. However, its slow bureaucratic process, complex design, costs and sensitivity to fraud have all drawn criticism. In addition, to achieve the large-scale emissions reductions that are required to combat climate change, the mechanism has proved limited. 
For these reasons, the United Nations Framework Convention on Climate Change has been working on significant reforms while alternative mechanisms have also been proposed to scale up offsetting volumes. A number of new mechanisms are being considered to scale up offsetting in developing countries, namely: - Bilateral and sectoral mechanisms - Reducing Emissions from Deforestation and Forest Degradation (REDD+) - Credited Nationally Appropriate Mitigation Actions. These all aim to scale up the CDM significantly. Most of these mechanisms are still being designed and they are not likely to be launched before 2020, but can present significant opportunities to the private sector. The private sector: role and impact The effect of carbon markets on a company’s bottom line depends greatly on the maturity of the specific market, the rules and allocation methodology, and whether sectors are able to pass on costs. When assessing the economic implications for the private sector of the current new trend for carbon compliance markets expanding to developing countries, factors to consider include the type and scope of the global agreement, and whether sectors and markets will be linked. In the short to medium term, the rate of economic development will probably be much more influential than implementation of carbon legislation. Up to 2020… Although many businesses around the world will remain unaffected by carbon markets until 2020, those in the US, China, South Korea and Australia will need to look at setting up a carbon management strategy, monitoring emissions, learning how to trade carbon and lobbying the authorities. Firms in emerging markets will also need to start preparing as the shift in economic power from developed to developing economies accelerates. Until 2020, the impact on the private sector mostly relates to the administrative burden of entering an ETS rather than the negative effects on a company’s balance sheet. 
During the same period, while the Durban Platform is being implemented, opportunities will exist for businesses to get involved in the design and trials of the scaled-up market mechanisms, with the aim of capitalizing later. The main conclusion regarding the implementation of the Durban Platform is that much uncertainty remains. The world gave itself four more years to come to terms with battling climate change. Although this may concentrate minds, many questions remain unanswered. What kind of commitments can we expect? Will there be any new market mechanisms to scale up emissions reductions? What is the effect on the shape and scope of carbon markets? Will companies be carbon constrained in the same way in China, the US and the EU? These questions lead to three distinct theoretical scenarios: - Ambition. A comprehensive, clear and ambitious multilateral agreement takes effect in 2020. All countries take on targets on an equity basis, with the major emitters and industrialized nations generally agreeing to cut emissions and developing countries generally agreeing to limit the amount by which they increase emissions. New market mechanisms are implemented. - Weak agreement. A weak global agreement is reached, with targets that will not drive serious domestic mitigation. Some new market mechanisms will be implemented. - No agreement. There is no agreement whatsoever, the global financial crisis continues and countries pursue other priorities rather than responding to climate change. In effect this shifts us back to unilateralism and a bottom-up approach, where only a number of ambitious countries, regions and local authorities take the problem seriously. Businesses across the world must prepare for a future where they are carbon constrained. Firms in developing countries that proactively engage now with low-carbon and adaptation strategies will gain a competitive advantage and reap the benefits later when the countries in which they operate implement carbon policies. 
Carbon markets are a key mechanism of future climate finance, but they need to be expanded and reformed. There are high expectations from new scaled-up market mechanisms, but a demand for their credits is essential for them to be effective.
<urn:uuid:648e45eb-7b99-49a1-ad4b-9c388f6355a8>
CC-MAIN-2016-26
http://www.ey.com/GL/en/Services/Specialty-Services/Climate-Change-and-Sustainability-Services/The-future-of-global-carbon-markets_Outlook-an-uncertain-future
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397842.93/warc/CC-MAIN-20160624154957-00154-ip-10-164-35-72.ec2.internal.warc.gz
en
0.935401
1,148
2.59375
3
Imagine a knight in his breeches. It’s up to you to put the correct clothing and armor on him. Without the proper threads, he’s absolutely lost. What’s a lad to do? We’ve seen paper dolls before, which almost always cater to girls and often involve ballerinas, fairies, and princesses. But these? Different. Usborne Activities takes sticker dressing in an entirely new direction with their Sticker Dressing Knights. We’re talking jousting and shiny armor. Children learn about different types of knights, as well as the associated garb, with easy-to-follow directions; although younger ones may need some assistance with the reading component. My littlest guy put the stickers wherever he darn well pleased, and that was fine too. Stickers are a form of expression for kids that is relatively cheap, eye catching and travels well. The chance to turn them into a historical tool now is just plain cool. –Eva
<urn:uuid:4fe65808-ed1e-4257-9d7f-73eb0a3073ee>
CC-MAIN-2016-26
http://coolmompicks.com/blog/2011/07/15/stickers-for-boys-or-girls/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396538.42/warc/CC-MAIN-20160624154956-00029-ip-10-164-35-72.ec2.internal.warc.gz
en
0.963433
206
2.734375
3
Last week I addressed the Constitutionally mandated limits on the powers of Congress. Today’s topic investigates the history of these limits being violated, how the courts (Supreme, especially) reacted, and the status quo in light of this history. America’s Great Depression was a four-year period in which the gears of the worldwide economy jarringly halted. Numerous causes have been proposed, but those are a topic for another post. Today’s topic focuses on proposed and attempted solutions and their impact on the landscape of American policy. Similar to the recent election, Roosevelt and the broader Democratic party entered office with significant political capital. This capital came in the form of backlash against a Republican party that espoused “rugged individualism” in the face of an adverse economic climate. The Federal Reserve, a mere 16 years old at the time of the crash, is oft blamed for the problems leading to the economic downturn. Whether by the Monetarists’ view that the crash was due to the Fed not pumping enough liquidity into the markets or the Austrians’ view that it was a previous shift from gold that caused the crash does not change that the crash happened. Addressing that crash was viewed by the People as the responsibility of the Federal government – they created the Fed, the Fed didn’t stop the crash, they should fix the problem. The result was Roosevelt’s political capital, and its outcome – the New Deal. Roosevelt met staunch resistance from the Supreme Court during the beginning of the New Deal. Though the policies put forth by Congress and approved by Roosevelt were disparate, they shared one common thread: increased federal power. There are those that argue the expanse of federal power predates those policies Roosevelt approved. However, the policies predating Roosevelt, as well as his first-term policies, met repeated Constitutional challenge. A central piece of New Deal legislation was tested and found unconstitutional in Panama Refining Co. 
v. Ryan. Reading the majority opinion, it becomes clear that the Supreme Court was concerned that the breadth of power Congress was trying to vest in the Federal government was beyond prudence: The point is not one of motives, but of constitutional authority, for which the best of motives is not a substitute. While the present controversy relates to a delegation to the President, the basic question has a much wider application. If the Congress can make a grant of legislative authority of the sort attempted by section 9(c), we find nothing in the Constitution which restricts the Congress to the selection of the President as grantee. The court saw that, despite the best intentions of both Congress and the President to help the People, they are not granted unlimited authority in all legislative matters. Ultimately, it struck down the “Hot Oil Act” as unconstitutional, setting up the confrontation that would ultimately grant the Federal Government the growth it desired. After reelection in 1936, stinging from the rulings in Panama Refining Co. v. Ryan, Schechter Poultry Corp. v. United States, United States v. Butler, etc., and still upholding New Deal ideals, Roosevelt proposed the Judiciary Reorganization Bill of 1937. Known by many as the “Court Packing Bill,” Roosevelt saw the interference of the Supreme Court as an unacceptable hindrance to his New Deal. Roosevelt proposed appointing a new justice for every federal court justice over 70, providing a means to replace those justices standing in the path of his new policies. Though the packing bill was defeated (after a historical anecdote known as the “switch in time that saved nine“), it marked a radical shift in judicial policy. Post-West Coast Hotel, many New Deal and subsequent policies went unchallenged or were decided in favor of allowing increased Federal control. I opine the shift towards Federal control was bolstered at this point, perhaps irreversibly. 
Though the government had established over 100 years earlier, in McCulloch v. Maryland, that the Necessary and Proper clause granted broad Congressional power within Article 1 Section 8 guidelines, it was in 1937 that the US saw its first span of minimally-fettered federal growth. In what I consider a sad misappropriation of the reasons for the 10th Amendment’s existence, United States v. Darby Lumber Co. stated, in part: The amendment states but a truism that all is retained which has not been surrendered. There is nothing in the history of its adoption to suggest that it was more than declaratory of the relationship between the national and state governments as it had been established by the Constitution before the amendment or that its purpose was other than to allay fears that the new national government might seek to exercise powers not granted, and that the states might not be able to exercise fully their reserved powers. Even in writing the above, the Supreme Court acknowledged the fear that the Federal government would exceed its granted power. Moreover, it clearly shows that the purpose of the 10th Amendment was known to stand as a restriction of such excess. The court’s next paragraph even seems to contradict the thought as put forth (emphasis mine): From the beginning and for many years the amendment has been construed as not depriving the national government of authority to resort to all means for the exercise of a granted power which are appropriate and plainly adapted to the permitted end. Under this interpretation, the Supreme Court should not, in my opinion, grant such broad authority to the Federal government to exercise any power even tangentially related to those powers enumerated. The “plainly adapted” clause in Darby indicates that the laws under consideration must be obviously permitted by the Constitution when considering 10th Amendment inquiry. 
The Court dismissively refers to the 10th Amendment as “but a truism” – a tautology which does nothing but restate the basic premise of the Federal government. Somehow the entirety of the above, bolded statement is lost in pursuit of the first two words, “all means.” Bringing this back to its origin, a nationalized health care program, I invite those who think it’s within the current purview of the government’s power to show how. Moreover, that explanation should manage to stand without a tortured path of tangential relations to commerce. I don’t address the need for single-payer healthcare, because I think the forum must be decided first. Should enough of the People find it within the legitimate interests of Congressional control, I contend those people should follow the amendment process. Until then, each State, county, municipality, etc. should have the choice to enact or reject its own program. Should the experiment of such be found effective, other states can witness the resounding success, convincing them such a plan benefits them. Moreover, the state-by-state competition allows the People a manner by which to compare the real-world quality and impact of such plans, before deciding which is better, if any at all. It is then that the forum can be expanded, only after the experiment has been proven worthwhile within our system of governance. Supreme Court Justice Louis D. Brandeis is famous for his “laboratories of democracy” dissent in New State Ice Co. v. Liebmann: It is one of the happy incidents of the federal system that a single courageous state may, if its citizens choose, serve as a laboratory; and try novel social and economic experiments without risk to the rest of the country. Though others point to other countries as examples of these laboratories, the comparison falls short. Our government, its rules, and the composition of our nation’s heritage differ from those of other countries. 
While such countries may inspire states of a similar bent to follow their plans, it is hardly a just comparison to insist that all of the states must plunge into the murky waters of such a plan. This holds doubly true when one considers our legal history and the requirements placed upon our Federal government. Ultimately, the move towards a single-payer system lies in the hands of the States, or the People. Taking another path subverts and short-circuits the nature of our country’s policies. Perhaps it’s a city v. countryside issue, but that is in the realm of another post.
<urn:uuid:de185606-73d1-46c5-a51e-bb61cb7bf440>
CC-MAIN-2016-26
http://www.commonsensethoughtcontrol.com/2009/08/27/law-and-reform-healthcare-and-the-forum-of-change-part-2/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783394937.4/warc/CC-MAIN-20160624154954-00061-ip-10-164-35-72.ec2.internal.warc.gz
en
0.967317
1,693
3.203125
3
The Redskin Who Saved The White Man’s Hide. Chief Washakie earned his battle scars in the service of the Great White Father, who—for once at least—kept faith with an Indian. February 1960 | Volume 11, Issue 2. General George Crook, United States Army, angular and bearded, resisted the impulse to consult his watch again. From the opening of his tent he could have seen the wide stretch of sagebrush-covered hills to the west over the willow bottoms of Goose Creek, but he was tired of looking at it. Why didn’t Washakie come? The place was near Sheridan, Wyoming Territory, June 14, 1876. Crook, with eleven hundred cavalry and infantrymen, was campaigning against a determined alliance of Sioux, Cheyennes, and Arapahoes led by Chief Crazy Horse. They outnumbered him four to one and they were well armed. In three hot skirmishes on the march from Fort Fetterman, Crook had acquired a deep respect for the warriors from the Rosebud and their allies. Crazy Horse had sent word that “every soldier who crossed the Tongue River would die,” and though Crook had the strongest force ever seen on the frontier, he knew that he would need Indian reinforcements. He planned to cross the Tongue River the next day. Two hundred Crows under Old Crow, Medicine Crow, and Good Heart had arrived that afternoon. The Shoshones under Washakie had ended their old feud with the Crows two years previously. Now they had offered the Army full co-operation against their common enemy, the Sioux. Washakie had never broken his word yet, and he had promised to join Crook with the best of his fighting men—but where was he? Crook was determined to move the next day, without the Shoshones if necessary. But he didn’t relish the prospect. Then, over the noises of the heavily fortified camp, Crook heard the alarm call sound. A large body of horsemen, in smart columns of fours, was rushing down the steep slope from the west. 
One lone man on a magnificent pinto rode in the lead; another followed, carrying a long staff topped by an oriflamme of eagle feathers. This bearer was flanked by two horsemen who broke out huge American flags as they approached the camp. Every warrior carried a glittering lance ornamented with a small pennant. At a sharp command from their leader, the long column swung into a parade front and halted. Eyewitness accounts do not agree as to numbers, but there were between 90 and 160 superbly mounted warriors in that long, stiff line. Besides lance, shield, and pogamoggan (war club), every one carried a repeating rifle and a revolver. They were stripped to the waist, in full war paint, and decorated with brass and feather ornaments. The troops who had gathered at the alarm call cheered, and the Crows gave their ancient enemy a fierce cry of welcome. The Shoshones had arrived. Washakie dismounted and warmly took Crook’s extended hand. It was hard for Crook to realize, despite the snow-white hair that hung down over his shoulders, that this Indian was over seventy years old. Here he was—having come 160 miles and across two mountain ranges from Fort Washakie—dignified, proud, and anxious to support his white friends. Washakie asked where his warriors were to be quartered. Then he barked a command and the still-stiff line, with clockwork precision, swung into fours and trotted smartly off. Washakie promised to join Crook and the Crows in council immediately, mounted his war horse, and followed his men. At the council of war the Shoshones and Crows insisted that in the forthcoming campaign they be allowed to operate separately from the Army. Both pointed out that they had more experience in fighting the enemy and preferred their own methods. Crook agreed. He asked only that they maintain contact with him at all times. Washakie requested a detail of soldiers to be attached to his braves, so that the rest of the Army would not mistake them for the enemy. 
Crook decided to postpone his march one more day. He wanted to strengthen his present position so that he could leave his wagons there under light guard. Also, both he and Washakie had agreed that the Shoshone horses would benefit from a day’s rest. Moving west, the Army crossed the Tongue River early in the forenoon of June 16. Crook had mounted his infantry on mules from the supply wagons to give his corps greater mobility. They met no opposition from the Sioux, but returning scouts reported that they had found the trail of a large village moving north. Crook turned north into Montana, halted late in the afternoon, and bivouacked for the night. On June 17, with extreme caution, they marched into the Rosebud country. The Indian allies who had been moving on the flanks now took the lead, with their medicine men well in advance. The Army struck the headwaters of the Rosebud and marched downstream. Suddenly the Shoshone and Crow warriors returned at full gallop, shouting, “Sioux! Sioux! Heap Sioux!” Then they whirled to give battle. The Shoshones took up a position at the head of a large coulee, where they had full view of the enemy as he swept down on the main concentration of the Army five hundred yards below. Then the Shoshones, with a disengaged wing of the infantry, swung into the Sioux horde and delivered a withering fire. With a wild howl the Crows smashed into the opposite flank, and the impact of the attack was broken.
<urn:uuid:55751ec5-8ceb-4767-9e38-67ca3aaeda0b>
CC-MAIN-2016-26
http://www.americanheritage.com/content/redskin-who-saved-white-man%E2%80%99s-hide
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395560.14/warc/CC-MAIN-20160624154955-00179-ip-10-164-35-72.ec2.internal.warc.gz
en
0.982526
1,229
3.15625
3
What a Chemical Storage Building Entails Chemicals are hazardous, and if poorly placed they can cause harm to living beings. The best places to store chemicals are cabinets, stores and buildings; chemical storage is therefore the storage of hazardous chemicals in safe cabinets, stores and buildings. Whether the storage space is indoor or outdoor depends on the person or company that owns the chemicals. Most companies choose to build outdoor storage space for their chemicals, while individuals who own chemicals find it easier to store them indoors. Chemicals are very dangerous and should be placed out of reach of children; options include keeping them in locked rooms or on very high cabinets. There is a long list of things to check before choosing a good chemical store. Your site should always be well prepared. If you are using containers, they should be properly installed and tested to ensure they perform as intended. Follow up on approvals from local and national government entities; remember that there are rules governing the keeping of hazardous chemicals at home, at work and in industry. You must also ensure that the chemicals are transported to your site safely. Inspections are part of transport, as is safe offloading: if protective clothing is needed, it should not be skipped. Until the chemicals are safely placed in the store, the job is not done. Steel buildings for storing chemical substances have become increasingly common, and it is prudent that they be built according to the stipulated rules, which makes it possible to store hazardous chemicals properly in them. Designing a chemical storage building involves a number of considerations, all of which revolve around what kind of chemicals you own and the government rules regulating them. 
Materials stored in a chemical storage building fall into four types: corrosive, flammable, toxic and explosive. Corrosive materials are those with the ability to dissolve other materials; if poorly stored, they can easily destroy human skin. A good example of this kind of chemical is nitric acid, which is very corrosive in nature. Fuel and gasoline are typical flammable materials: liquids with a flash point below one hundred degrees Fahrenheit are considered highly flammable, so to avoid accidents it is important that they are stored safely. Black powder is an example of a material used to make explosives, as seen mostly in movies. Lastly, toxic materials are very dangerous when inhaled and are mostly in gaseous or liquid form. Each class of chemical must be stored differently, and buildings used to store chemicals have different features that suit them to these classes.
Relative ὡς as an Adverb

In comparative clauses, often correlated with οὕτως. Thus, ἐκέλευσε τοὺς Ἕλληνας, ὡς νόμος αὐτοῖς εἰς μάχην, οὕτω ταχθῆναι he ordered the Greeks (thus) to be stationed as was their custom for battle X. A. 1.2.15. Cp. 2462 ff. In similes and comparisons (2481 ff.): πιστὸς ἦν, ὡς ὑμεῖς ἐπίστασθε I was faithful, as you know.

ὡς is rarely used for ἤ after comparatives; as A. Pr. 629, μή μου προκήδου μᾶσσον ὡς ἐμοὶ γλυκύ care not for me further than I wish. Cp. 1071.

In adverbial clauses ὡς is often used parenthetically; as ὡς ἐμοὶ δοκεῖ as it seems to me. Instead of ὡς δοκεῖ, ὡς ἔοικε the personal construction is often preferred; as X. A. 1.4.7, ἀπέπλευσαν, ὡς μὲν τοῖς πλείστοις ἐδόκουν, φιλοτιμηθέντες they sailed away out of jealousy, as it seemed to most people.

ὡς restrictive (cp. ut), involving the judgment of the observer, occurs often in elliptical phrases; as (Βρασίδας) ἦν οὐδὲ ἀδύνατος, ὡς Λακεδαιμόνιος, εἰπεῖν Brasidas was, for a Lacedaemonian, not a bad speaker either T. 4.84; ταῦτα ἀκούσας Ξέρξης ὡς ἐκ κακῶν ἐχάρη on hearing this Xerxes rejoiced as much as could be expected considering his misfortunes Hdt. 8.101. On ὡς restrictive with the dative, cp. 1495 a, 1497; with the absolute infinitive, 2012.

ὡς is often used to heighten a superlative (1086). With numerals and words indicating degree ὡς means about, nearly, not far from; as X. A. 1.2.3, ὁπλίτας ἔχων ὡς πεντακοσίους having about five hundred hoplites; ὡς ἐπὶ πολύ for the most part (lit. about over the great(er) part) P. R. 377b.

ὡς often indicates the thought or the assertion of the subject of the principal verb or of some other person prominent in the sentence. Here ὡς expresses a real intention or an avowed plea. So often with participles (2086); and also with the prepositions εἰς, ἐπί, πρός; as ἀπέπλεον . . .
ἐκ τῆς Σικελίας ὡς ἐς τὰς Ἀθήνας they sailed away from Sicily as though bound for Athens. ὡς ἕκαστος means each for himself; as ἀπέπλευσαν ἐξ Ἑλλησπόντου ὡς ἕκαστοι (ἀπέπλευσαν) κατὰ πόλεις they sailed away from the Hellespont each to his own State.

Exclamatory ὡς (2682) may be the relative adverb ὡς how, the relative clause originally being used in explanation of an exclamation. Exclamatory ὡς has also been explained as ὡς demonstrative (so). On ὡς in wishes, see 1815.
Simulating protein folding on the millisecond timescale has been a major challenge for many years. When we started Folding@home, our first goal was to break the microsecond barrier. The millisecond barrier is 1000-fold harder, and breaking it represents a major step forward in molecular simulation. In a recent paper (http://pubs.acs.org/doi/abs/10.1021/ja9090353), Folding@home researchers Vincent Voelz, Greg Bowman, Kyle Beauchamp, and Vijay Pande have done just that. The movie below shows one of the trajectories that folded (i.e., started unfolded and ended up in the folded state). From simulations like these, we have found some new surprises in how proteins fold. Please see the paper (url above) for more details.

Why is this important? Protein misfolding occurs on long timescales, and this first millisecond simulation of protein folding demonstrates that our new Markov State Model (MSM) technology can successfully simulate very long timescales. It made sense to go after protein folding first, since there is a wealth of experimental data against which to test our simulations. While this paper on protein folding has just come out, we have already been using this MSM technology to study protein misfolding in Alzheimer's Disease, following up on our 2008 paper. While our previous paper was able to reach timescales long enough to see small molecular weight oligomers, this new methodology gives us hope to push further with our simulations of Alzheimer's, making more direct connections to larger, more complex Abeta oligomers than we were previously able to do.
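The core MSM idea is to stitch many short trajectories into a transition matrix whose eigenvalues reveal timescales far longer than any single trajectory. The sketch below is a generic illustration of that idea, not the Folding@home code: it counts transitions between discretized conformational states, row-normalizes them into probabilities, and reads off implied relaxation timescales. The toy two-state trajectory is an assumption for demonstration.

```python
import numpy as np

def estimate_msm(dtrajs, n_states, lag=1):
    """Estimate an MSM transition matrix from discretized
    trajectories (sequences of integer state indices)."""
    counts = np.zeros((n_states, n_states))
    for traj in dtrajs:
        for t in range(len(traj) - lag):
            counts[traj[t], traj[t + lag]] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0  # leave never-visited states as zero rows
    return counts / row_sums

def implied_timescales(T, lag=1):
    """Relaxation timescales implied by the transition-matrix
    eigenvalues; the slowest one corresponds to folding/unfolding."""
    eigvals = np.sort(np.abs(np.linalg.eigvals(T)))[::-1]
    # Skip the stationary eigenvalue (= 1); keep only decaying modes.
    return [-lag / np.log(ev) for ev in eigvals[1:] if 0.0 < ev < 1.0]

# Toy trajectory: dwell in state 0 ("unfolded"), hop to state 1
# ("folded"), and return -- far simpler than a real protein.
traj = [0] * 50 + [1] * 50 + [0] * 50
T = estimate_msm([traj], n_states=2)
print(T)
print(implied_timescales(T))
```

The key payoff is that the slowest implied timescale can exceed the length of any individual input trajectory, which is how aggregated short simulations can probe millisecond kinetics.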
How Hot Is Your Compost Pile? Heat and steam mean the microbes are working. The temperature in the middle of my compost pile ranges from 90 to 120 degrees. I measure using a compost thermometer with a 14-inch stem. The height of the pile has been shrinking rapidly, with the center sinking faster than the edges. Temperature and shrinkage tell me that the microbes are feasting, changing those leaves, weeds and grass clippings into compost. Heat is a by-product of composting, as are carbon dioxide and water vapor. Dig into your compost pile on a cold day and you should see steam rising and feel heat. These signs mean that the microbes are digesting the sugars, starches and cellulose and converting them to carbon dioxide. The decomposition of the leaves, grass clippings, weeds, etc., releases nutrients that become part of the compost’s slow-release fertilizer. If your compost pile is not shrinking and its temperature is similar to that of the surrounding air, that’s evidence that the pile is not composting. The leaves could be too dry. Or the pile could be inert because you failed to add either soil or active compost to the leaves when building it last fall. To get it working, you need to add nitrogen in the form of commercial fertilizer, grass clippings or animal manure to balance the carbon-to-nitrogen ratio. Active composting requires that sufficient nitrogen, organic or inorganic, be included with the leaves. Are Your Rose Canes Whipping in the Wind? If your roses are more than two feet tall, prune the canes back to about 18 inches from the ground. Fall pruning soon after they have finished flowering is best, but it’s still not too late. Off-season pruning prevents the tops of the plants from being whipped by winter winds. This is especially important for grafted roses. Pruning Knockout roses — which are grown from rooted cuttings — back to 18 inches prevents their lower branches from splitting. Ask Dr. Gouin your questions at email@example.com. 
All questions will appear in Bay Weekly. Please include your name and address. At this time of year, the most effective method of wetting down a compost pile, especially one that is mostly dry leaves, is to dump dirty, greasy dishwater over them. If you did not add compost or soil to the pile when building it, dissolve a cup of high-nitrogen lawn fertilizer in the dishwater before dumping it. A daily dose of dishwater in conjunction with a weekly treatment of fertilizer will kick-start the composting. If you wish to hasten the process, turn the pile thoroughly in early to mid February.
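The carbon-to-nitrogen balance the column describes can be estimated with simple arithmetic. In the sketch below, the C:N values are common textbook figures (not from the article, and real materials vary widely), and the mass-weighted average is a deliberate simplification.

```python
# Assumed reference C:N ratios; actual materials vary widely.
CN_RATIOS = {
    "dry leaves": 60.0,
    "grass clippings": 17.0,
    "fresh manure": 15.0,
}

def blended_cn_ratio(mix):
    """Rough mass-weighted C:N ratio for a mix of {ingredient: weight}."""
    total = sum(mix.values())
    return sum(w * CN_RATIOS[name] for name, w in mix.items()) / total

# Three parts leaves to one part grass clippings, by weight.
pile = {"dry leaves": 3.0, "grass clippings": 1.0}
ratio = blended_cn_ratio(pile)
print(f"Blended C:N ratio: {ratio:.1f}:1")
# Active composting works best near 25-30:1; a leaf-heavy pile like
# this one is carbon-rich, which is why added nitrogen kick-starts it.
```

A result well above 30:1 suggests exactly the fix the column recommends: add a nitrogen source such as grass clippings, manure, or high-nitrogen fertilizer.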
The current U.S. education system is failing to prepare millions of young adults for successful careers by providing a one-size-fits-all approach, and it should take a cue from its European counterparts by offering greater emphasis on occupational instruction, a Harvard University study published today concludes. The two-year study by the Pathways to Prosperity Project at the Harvard University Graduate School of Education notes that while much emphasis is placed in high school on going on to a four-year college, only 30 percent of young adults in the United States successfully complete a bachelor's degree. While the number of jobs that require no post-secondary education has declined, the researchers note that only one-third of the jobs created in the coming years are expected to need a bachelor's degree or higher. Roughly the same proportion will need just an associate's degree or an occupational credential. "What I fear is the continuing problem of too many kids dropping by the wayside and the other problem of kids going into debt, and going into college but not completing with a degree or certificate," said Robert Schwartz, who heads the project and is academic dean of the Harvard Graduate School of Education. "Almost everybody can cite some kid who marched off to college because it was the only socially legitimate thing to do but had no real interest." The report highlights an issue that has been percolating among education circles: that school reform should include more emphasis on career-driven alternatives to a four-year education. The study recommends a "comprehensive pathways network" that would include three elements: embracing multiple approaches to help youth make the transition to adulthood, involving the nation's employers in things like work-based learning, and creating a new social compact with young people. 
Many of the ideas aren't new, and leaders, including President Barack Obama, have advocated for an increased role for community colleges so the country can once again lead the world in the proportion of college graduates. U.S. Education Secretary Arne Duncan will deliver opening remarks at the report's release in Washington. But the idea of providing more alternatives, rather than emphasizing a four-year college education for all, hasn't been without controversy. Critics fear students who opt early for a vocational approach might limit their options later on, or that disadvantaged students at failing schools would be pushed into technical careers and away from the highly selective colleges where their numbers are already very slim. "You've got to work on both fronts at once," Schwartz said, arguing for intensifying efforts to get more low-income and minority students into selective institutions while strengthening the capacity of two-year colleges. The study recommends that all major occupations be clearly outlined at the start of high school. Students would see directly how their course choices prepare them for careers that interest them — but still be able to change their minds. Students should also be given more opportunities for work-based learning, such as job shadowing and internships. Students, the researchers recommend, should get career counseling and work-related opportunities early on — no later than middle school. In high school, students would have access to educational programs designed with the help of industry leaders, and they'd be able to participate in paid internships. The report notes that many European countries already have such an approach, and that their youth tend to have a smoother transition into adulthood. And not all separate children into different paths at an early age. Finland and Denmark, for example, provide all students with a comprehensive education through grades 9 or 10. 
Then they are allowed to decide what type of secondary education they'd like to pursue. Barney Bishop, president and CEO of Advanced Industries of Florida, said he would advocate for an approach that provides more alternatives and greater inclusion of the business community. "The problem for the business community is where you have kids who don't have the rudimentary skills, and you have to take the time and effort to train them, get them some of the rudimentary skills, plus the special skills," he said. Sandy Baum, an independent higher education policy analyst, said she thinks there needs to be more counseling in advising students about how to make choices. "I don't think the problem is too many people going to four-year colleges," she said. "The problem is too many people making inappropriate choices."
Women in the United States have more rights and equality than women in most other countries, right? Doesn’t that sound like a reasonable statement? Then how is it that the U.S. ranks 79th “in the world for women’s political representation”? And that is after last week’s elections, which brought the number of female Senators up from 17 to a whopping 20. The underrepresentation of women in government matters. In the Huffington Post, Soraya Chemaly quotes Alexander Hamilton’s gem in the Federalist Papers that “all classes of citizens should be involved so that their feelings and interests be better understood.” If you’re not on the legislative floor, neither are your interests. When more women are in government, their interests are not only better understood, they are also promoted. An American University study called “Men Rule: The Continued Under-Representation of Women in Politics” summarized other studies’ findings that when women become legislators, they:
- tend to enact “women’s health policies”
- “are more likely than men to vote for reproductive rights legislation”
- “are generally more active in sponsoring legislation with a focus on women’s interests”
- “are more supportive of ‘women’s issues’”
It’s not hard to believe that specific policies would change if women claimed their fair share of political power. For instance, in a world where women were equal, “the school day just might match the work day and health care would cover contraception and Viagra,” said Christine Bronstein, founder of the women’s social network A Band of Wives, as quoted in BlogHer. (Thankfully, President Obama is taking care of the contraception part with the Affordable Care Act.) But tectonic shifts like an alignment of the school and work days can happen only once women really do have an equal share of political power, not just some share. Tali Mendelberg and Christopher F. 
Karpowitz report in The New York Times their findings that “female lawmakers significantly reshape policies only when they have true parity with men.” As they observe, “we’ve got a very long way to go.” Where are the Women? Why are we 79th? Why don’t American state and federal legislatures have more female members? It’s hard to pin the gender gap on the electorate. It is well-documented that “when women run for office, they perform just as well as their male counterparts.” There are no differences in their electoral success. The trouble is that women rarely do run for office. The American University study found some differences between male and female potential candidates that account for the gap. - Women aren’t recruited to run for office. Politicians, mentors, advocacy groups, and others are more likely to prompt and encourage men than women to run for office. - Women believe the electoral process is biased against them. - Women believe they are not qualified to run for office, even when they have the same qualifications as men — or even better ones. - It is important to get to girls and women at early ages so they develop the “confidence, competitiveness, and ambition” that might spur them to run for office. Many men have a substantial stake in increasing the number of women in government. Given female legislators’ tendency to advocate for the weaker and less fortunate, Mendelberg and Karpowitz conclude that “at a time of soaring inequality, electing vastly more women might be the best hope for addressing the needs of the 99 percent.”
World's hungry top one billion Rome, June 20, 2009 The number of hungry in the world has reached a "historic high" of more than one billion people, the UN food agency said, blaming the global financial crisis for the surge. The Food and Agriculture Organisation (FAO) said on Friday "one sixth of humanity", or 1.02 billion people, do not get enough to eat. It predicted an 11 per cent increase for all of 2009. An estimated 642 million of the total are in the Asia-Pacific region, the agency said. Some 265 million are in sub-Saharan Africa, 53 million in Latin America and the Caribbean and 52 million in the Middle East and north Africa. But the FAO said there are some 15 million hungry in developed countries. A dangerous mix of the global economic slowdown combined with stubbornly high food prices in many countries has pushed some 100 million more people than last year into chronic hunger and poverty, FAO chief Jacques Diouf said. He called for a "new world food order" enshrining the "right to food and thus the right to exist", urging stepped-up investment in agriculture. The FAO had initially revised downward its estimate of hungry people from 963 million to 915 million because of a "better-than-expected global food supply".
Why take a vitamin D supplement? Once thought to be important only for bone health, vitamin D is now known to act in almost every cell in the human body and to control the expression of over 200 different genes. Sufficient vitamin D is crucial to the proper function of many of the body’s tissues. It is difficult (and potentially harmful) to get enough vitamin D from the sun. For most people, the main source of vitamin D is exposure to sunlight, which prompts vitamin D production in the skin. Very few foods naturally contain vitamin D, and achieving adequate levels via sun exposure is difficult, considering that most of us work indoors and cover much of our skin with clothing, especially during winter. Moreover, getting enough sun exposure to produce optimal vitamin D status can cause skin damage, aging and wrinkling, while increasing the risk of skin cancer. For the complete article, see the 10-03-2012 issue.
Six tiny University of Colorado at Boulder experiments will be lofted by a large helium balloon from Windsor, Colo., to a height of about 17 miles before drifting back to Earth on the eastern plains via parachute on Saturday, April 20. The experiments were designed and built by undergraduates affiliated with the Colorado Space Grant Consortium based in the College of Engineering and Applied Science. The high-altitude balloon will be launched by the Colorado-based "Edge of Space Sciences" group, or EOSS, a nonprofit organization that has been flying balloon payloads for Colorado students since the early 1990s. Each of the six CU-Boulder experiments is enclosed in a cube approximately four inches on a side and one pound in weight. They will be tethered beneath the balloon, which is expected to rise as high as 100,000 feet in about 90 minutes. The eight-foot-diameter balloon will expand to roughly 30 feet in diameter as it rises to roughly 17 miles, then burst and release the payloads via a parachute. The 31 students involved in the project, primarily freshmen and sophomores from various disciplines across campus, developed the experiments this semester in a class titled "Gateway to Space." The class is taught by Chris Koehler, deputy director and research coordinator for the Colorado Space Grant Consortium. EOSS volunteers will track the balloon with Global Positioning Satellite equipment following its scheduled 9 a.m. launch. The payloads are expected to drift eastward for roughly 100 miles with prevailing winds and then will be retrieved by a team of EOSS chasers. "This is a great experience for the students," Koehler said. "They take their idea, build it, test it and fly in one semester." The launch site is about one mile east of exit 262 on Interstate 25 just west of Windsor. EOSS participants also will be flying a special video camera that transmits live television back to a ground station at the Windsor launch site. 
Created with NASA funding in 1989, the Colorado Space Grant Consortium was designed to give students - primarily undergraduates - experience in designing, building and flying space instruments. Of the 50 state space grant consortia, one in every state, Colorado's has been the most active, designing, building and flying three sounding rocket payloads and three space shuttle payloads in the past decade. The consortium consists of students from CU-Boulder as well as 15 other colleges and institutions in the state. It is headquartered at CU-Boulder and directed by Elaine Hansen.
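The article's figures are roughly self-consistent, which a quick ideal-gas estimate can show. In the sketch below, only the 8-foot launch diameter comes from the story; the pressure and temperature at roughly 100,000 feet are assumed US Standard Atmosphere-style values, so the result is an order-of-magnitude check, not a prediction.

```python
# Back-of-the-envelope check of the balloon's expansion at altitude.
# Sea-level and ~100,000 ft conditions (assumed approximate values):
P0, T0 = 101_325.0, 288.0   # pressure (Pa), temperature (K) at sea level
P1, T1 = 1_100.0, 227.0     # rough values near 100,000 ft

d0 = 8.0  # launch diameter in feet (from the article)

# For a fixed amount of helium, P*V/T is constant, so the volume
# grows by (P0/P1) * (T1/T0); diameter scales as the cube root.
volume_ratio = (P0 / P1) * (T1 / T0)
d1 = d0 * volume_ratio ** (1.0 / 3.0)
print(f"Estimated diameter near burst altitude: {d1:.0f} ft")
```

Under these assumptions the estimate lands in the low 30s of feet, in the same ballpark as the roughly 30-foot burst diameter the article reports.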
INEE is happy to grant permission to print INEE materials, including the CSE Pack; contact MStraining@ineesite.org for permission and assistance.

Conflict sensitive education refers to the design and delivery of education programs and policies in a way that considers the conflict context and aims to minimize the negative impact (contribution to conflict) and maximize the positive impact (contribution to peace). INEE developed a Conflict Sensitive Education Pack to support the integration of conflict sensitivity in education policies and programs. The Conflict Sensitive Education Pack includes a Guidance Note, a Reflection Tool, and the INEE Guiding Principles, and is available for download in English, French, Arabic and Spanish.

Guidance Note on Conflict Sensitive Education

The Guidance Note offers strategies for developing and implementing conflict sensitive education programs and policies. Building upon the INEE Minimum Standards, it offers guidance on conflict sensitive education design and delivery at all levels and in all types and phases of conflict. It is a useful tool for practitioners, policy-makers and researchers working in conflict-affected and fragile contexts. The Guidance Note includes a Quick Reference Tool (available in English) offering useful guidance, key actions and suggestions for conflict sensitive education. Available in English, French, Arabic, Spanish, Dari and Pashto.

Reflection Tool for Designing and Implementing Conflict Sensitive Education Programs in Conflict-Affected and Fragile Contexts

This Reflection Tool is designed to help you reflect on the impact of conflict dynamics on education programs and how these education programs can help either mitigate or exacerbate the conflict dynamics. It can be used to integrate conflict sensitivity at all stages of the project cycle: assessment, design, implementation/management, monitoring and evaluation.
Principles of community participation, equity, access, quality, relevance and protection are included across the tool and are based on the INEE Minimum Standards for Education. This Reflection Tool can be used in the following ways: 1. for an assessment of a new education program; 2. in the design of a new education program; 3. in the implementation/management of an education program; 4. in monitoring and evaluating an education program; and 5. in the review of an education program. Available in English, French, Arabic and Spanish.

INEE Guiding Principles on Integrating Conflict Sensitivity in Education

This one-page document outlines the INEE Guiding Principles on Integrating Conflict Sensitivity in Education Policy and Programming in Conflict-Affected and Fragile Contexts. It describes the following principles: 1. assess; 2. do no harm; 3. prioritize prevention; 4. promote equity and the holistic development of the child as a citizen; 5. stabilize, rebuild or build the education system; and 6. development partners should act fast, respond to change, and stay engaged beyond short-term support. Available in English, French, Arabic, Spanish and Portuguese.

INEE has developed documents to support promotional efforts for the INEE Conflict Sensitive Education Pack: a PowerPoint presentation, Talking Points, and a User Feedback Form (Word document and online form). For more guidance and tools, please see the Implementation Tools section below and the list of additional resources on conflict sensitive education.
Lesson 3: Labeled Diagrams
Bats | 750L
- Learning Goal
- Identify facts learned from a labeled diagram.
- Explain that labeled diagrams teach you information.
- Approximately 50 minutes
- Necessary Materials
- Provided: Unit Example Chart, Independent Practice Worksheet
- Not Provided: Bats by Gail Gibbons, chart paper, markers

I will explain that another characteristic of informational books is that they include diagrams. Diagrams are drawings that can help you understand information about a topic in a book. I will add this to my Characteristics of Informational Texts Chart that I started in Lesson 1 (Example Chart is provided in Unit Teacher and Student Materials). Then, I will model how to identify information in labeled diagrams in Bats by Gail Gibbons. I will look for a drawing that is trying to show information; then, I will read the labels on the diagram to learn about the diagram’s topic. For example, on page 5, there is a diagram of a little brown bat. I will read the labels aloud to the class and explain that this diagram taught me that bats have eyes, ears, a nose, a tail, and wings, and that their fur can be red, white, black, gray or brown. I also learned that the length of their whole body is about three inches.

Ask: "How did I identify facts on a diagram?" Students should explain that you looked for labels and read them for information about the topic of the book.

We will continue to look at the labeled diagrams in Bats (pages 6-7, 16, and 22 have diagrams), and identify new information we learned about bats from the diagrams. We will reflect that we have learned many facts from the diagrams of an informational text, so we will add this title to the chart, along with 1-2 examples.

You will identify one new fact that you learned from a labeled diagram and explain how you knew the book was informational. (Independent Practice Worksheet is provided.)

Tier 2 Word: wing
- Contextualize the word as it is used in the story: [Diagram p. 5]
- Explain the meaning (student-friendly definition): A wing is a part of the body that some animals use for flying. Bats, insects, and birds all have wings. If the diagram says that the bat’s wingspan is 10 inches, that means that the bat’s wings are 10 inches long from tip to tip. (Measure approximately ten inches in the air.)
- Students repeat the word: Say the word wing with me: wing.
- Teacher gives examples of the word in other contexts: The wings of an eagle are much bigger than the wings of a ladybug! I always wished I had wings. I would love to fly!
- Students provide examples: Can you name two animals that have wings? Start by saying, “Two animals that have wings are ____________________.”
- Students repeat the word again: What word are we talking about? wing
- Additional Vocabulary Words: inches, pollen

After looking at the diagrams in Bats, explain that bats are nocturnal, which means that they sleep during the daylight and are awake at night. Bats usually live in caves, attics, barns, or other dark places. Some people are scared of bats, but they are actually very gentle animals who only eat insects. Some types of bats can eat 600 insects in an hour!

Texts & Materials
In 1911, an entire block in the Iditarod business district was destroyed by fire. In 1949, the National Labor Relations Board ruled against the CIO Longshoreman's Union in a dispute with the Juneau Spruce Corporation. In 1959, acting Alaska Gov. Hugh Wade signed into law a bill creating 12 departments within the executive branch of Alaska government (such as the Department of Transportation and the Department of Education). In 1967, nine crew members were rescued from the sinking 72-foot Canadian halibut boat, Dollina, off the southwest tip of Kodiak Island. In 1968, a fire wiped out much of the Ocean Dock complex in Cordova. In 1969, U.S. Sen. Ted Stevens revealed that Ed Nixon - President Richard Nixon's brother - would be the new head of the Federal Field Committee for Development in Alaska.
TOLEDO, Ohio — About 400,000 people in and around Ohio's fourth-largest city were warned Saturday not to drink or use its water after tests revealed the presence of a toxin possibly from algae on Lake Erie. The warning left residents across the Toledo area searching for water after they were advised not to shower with it or boil the water because that would only increase the toxin's concentration. People bought carts full of bottled water, bags of ice and flavored water, emptying store shelves within hours of the advisory that was issued overnight. "It looked like Black Friday," said Aundrea Simmons, who stood in a line of about 50 people at a pharmacy before buying four cases of water. "I have children and elderly parents. They take their medication with water." Toledo issued the warning just after midnight after tests at one treatment plant showed two sample readings for microcystin above the standard for consumption. The city's advisory said Lake Erie may have been affected by a bloom of harmful algae that produces the toxin. Consuming the tainted water could result in vomiting, diarrhea and other problems. The advisory covers city residents and those in Lucas County served by the city's water supply. The city said more tests are being run. Many restaurants were closed because of the water warning and Toledo's public school system canceled all its events Saturday. The University of Toledo closed its campus for the day and encouraged students who are from outside the Toledo area to return to their homes. Operators of water plants all along Lake Erie, which supplies drinking water for 11 million people, have been concerned over the last few years about toxins fouling their supplies. Almost a year ago, one township just east of Toledo told its 2,000 residents not to drink or use the water coming from their taps after tests on drinking water showed the amount of toxins had increased. 
That was believed to be the first time a city had banned residents from using the water because of toxins from algae in the lake. Most water treatment plants along the western Lake Erie shoreline treat their water to combat the algae. The city of Toledo spent about $4 million last year on chemicals to treat its water and combat the toxins. The annual algae blooms have been concentrated around the western end of Lake Erie. The algae growth is fed by phosphorus mainly from farm fertilizer runoff and sewage treatment plants, leaving behind toxins that can kill animals and sicken humans.
...in 1789, George Washington concluded a ten-day presidential visit to Massachusetts. Adoring crowds of grateful citizens greeted him everywhere. People preserved the dishes he used, the chairs he sat on, and the beds he slept in. Many of the streets he traveled down were renamed "Washington Street." Only Governor John Hancock slighted the president, insisting that, since he was head of the Commonwealth, Washington should come to visit him. Hancock soon saw the error of his ways. The day after the president's arrival in Boston, Hancock belatedly paid his respects. His legs covered in bandages, he claimed an excruciating attack of gout had prevented him from welcoming the president. In the interest of promoting unity, Washington accepted the explanation with characteristic grace. By the time George Washington was sworn in as the first President of the United States on April 30, 1789, he was a wildly popular war hero of almost mythic proportions. Aware that his new government would need to unite the varied, and sometimes conflicting, interests of the 13 former colonies, Washington decided to use his own personal popularity to cement this loosely knit confederation. Demonstrating his keen sense of public relations, he announced that, during his first year in office, he would personally tour every state. In the autumn of 1789, Washington spent four weeks traveling through New England. Washington's first stop on his historic visit to Massachusetts was Springfield, where he inspected the federal arsenal. Over the next three days, his entourage stopped in Palmer, West Brookfield, Brookfield, Spencer, Leicester, and Worcester, where he received a 13-gun salute. The President continued east through Marlborough, Shrewsbury, Sudbury, and Weston. He arrived in Cambridge, where his headquarters had been during the siege of Boston, on the morning of October 24th. Lt. Governor Samuel Adams escorted Washington into Boston. The day was wet and cold. 
So was the greeting from the Governor of Massachusetts, John Hancock. The proud governor refused to come out to meet the president, insisting that as head of the Commonwealth, he outranked the federal president in his own state. Hancock's slight was outweighed by the open adoration of the rest of Boston's citizens, who greeted Washington with ringing church bells, firing cannon, and a reception by the city's dignitaries (minus Hancock). Although Washington had specifically requested that there be no ceremony, the people of Boston could not be restrained. A grand procession accompanied him from the Common to the State House. Grateful citizens, grouped by trade and organized alphabetically, lined the route. Each craft group flew a white silk flag bearing its insignia. When the president reached the State House, he passed through a temporary arch designed by Charles Bulfinch on the model of the triumphal arches of ancient Rome. As Washington passed through the arch, a chorus of young men serenaded him with a song written especially for the occasion: "Washington, the hero is come." After several days of receptions and balls in Boston, Washington continued north to Marblehead, Salem, and Beverly. Here he was impressed by his visit to one of the new nation's first cotton mills. After dining in Ipswich, he lodged in Newburyport. On his last night in Massachusetts, Washington was treated to a celebratory display of rockets and other fireworks. The next day, October 31st, the president was escorted with great fanfare to the New Hampshire border. He crossed the Merrimack River into a waiting throng of adoring New Hampshire citizens. Sources: George Washington Papers at the Library of Congress; "A Chilly Reception: President George Washington's Trip to Boston, October, 1789," in The Dial, Old South Meeting House Newsletter, Spring/Summer 2004.
American lawyer, soldier, and statesman, the 14th vice president of the United States. Breckinridge was born near Lexington, Kentucky. Trained in the law, he served in the U.S. Army during the Mexican War, after which he was elected to the Kentucky legislature. From 1851 to 1855 he served in the U.S. House of Representatives. Breckinridge was elected vice president of the U.S. in 1856 on the Democratic ticket with James Buchanan. A leading spokesman of the proslavery faction of the Democratic Party, he was nominated for the presidency by that faction in 1860, but lost the election to the Republican candidate Abraham Lincoln. Shortly after the opening of hostilities between the Confederacy and the Union government, Breckinridge helped to organize the Confederate government of Kentucky, joined the Confederate army, and was made a brigadier general in 1862. In the Civil War, as a Confederate general, John Breckinridge saw action at Shiloh, Baton Rouge, Vicksburg, Chickamauga and Missionary Ridge. Breckinridge's military career was tainted from time to time by his fondness for the bottle. General Bragg, not the most competent of authorities, blamed part of his defeat at Missionary Ridge on the supposed fact that General Breckinridge was so drunk he could not stand. Breckinridge is best known for his tremendous victory over superior forces at the Battle of New Market on May 15, 1864. Quickly putting together a ragtag force from several different locations, including about 250 cadets from the Virginia Military Institute, Breckinridge marched north and met General Franz Sigel's forces at New Market, soundly routing them in a rainstorm. After serving in the Shenandoah Valley, he commanded his division at Cold Harbor and fought with CSA Gen. Early at Monocacy in Maryland. He was named the Confederate secretary of war just two months before the Confederate surrender at Appomattox in 1865. At the close of the conflict he fled to Europe. Returning to the U.S. 
in 1869, he thereafter devoted himself to the practice of law. Today, a visitor to Lexington's Cheapside, a grassy area criss-crossed with walks and dotted with streetlamps, fountains, and historical markers adjacent to the Fayette County, Kentucky, courthouse, cannot help but notice the impressive statue of John C. Breckinridge, erected by "The Commonwealth of Kentucky, A.D. 1887." The site of public slave auctions before the Civil War and of County Court Days after the war, until that public nuisance was abolished in 1921, the Cheapside area is now set aside for remembering the heroes and events of the city's past.
Using Actions to Automate Tasks in Photoshop While Adobe Photoshop has a host of automation tools, one of the most versatile and powerful of them is called an action. In fact, some of the other automation commands, such as batches and droplets, derive their functionality from actions. An action is like a macro or script; however, while scripts have a reputation for being confusing and difficult, actions are very easy to create. If you know how to use Photoshop, you know most of what you need to create your own actions. For example, let's say that you have 100 digital photos that you'd like to post on the Internet. Normally, you'd have to load each one, scale, color correct, sharpen, and then save them one at a time. Alternatively, you could create an action that does all the "dirty work" for you — and best of all, you'll get consistent results in far less time than you could've achieved by doing it manually! Although actions can be used to automate all sorts of tasks, some common uses include: - batch-processing multiple images; - applying consistent treatments; - repeating tedious or mundane tasks; or - distributing reproducible special effects. To better understand actions, let's begin with a comprehensive overview of the Actions panel, including the commands available in the Actions panel menu. We'll then use these commands to create a couple of actions of our very own! Introduction to the Actions Panel The Actions panel is sort of like a "mini action editor": it allows you to create (record), edit, load, save, delete and play actions (among other things). To show or hide the Actions panel, use the Window » Show Actions command or press the F9 key. The descriptions that follow cover the controls of the Actions panel in List View Mode (its default mode). A. Stop Playing/Recording The Stop button () stops action playback and recording. This button is equivalent to pressing Esc or Ctrl/Cmd+.. 
You can also stop recording by choosing Stop Recording from the Actions panel menu (). B. Begin Recording Push the Record button () to begin recording a new action or to add additional commands to an existing action. If an action itself is selected, new commands are appended to the end of the action. If an action step is selected, new commands are inserted after the current action step. You may also begin recording by choosing Start Recording from the panel menu. Note: You may rerecord the parameters for a command by double-clicking on its associated action step. If available, the command dialog will appear allowing you to enter new values. Choose OK to apply the new settings or Cancel to leave the original settings. C. Play Selection If an action is selected, pushing the Play button () plays the entire action. If an action step is chosen, the action will begin playback from the currently selected command. You may also choose Play from the Actions panel menu. D. Create New Set Press the Create New Set button () to create a new action set. A dialog will appear, prompting you for the set's name. This button is equivalent to choosing New Set from the Actions panel menu. E. Create New Action Press the Create New Action button () to add a new action to the selected set. A dialog will appear, prompting for the action's name, associated set, keyboard shortcut and Button Mode color. This button is equivalent to choosing the New Action command from the Actions panel menu. As you might expect, the Delete button () deletes the selected set, action or command. A dialog appears to confirm your intentions. Alternatively, you may access the Delete command from the panel menu. Delete operations performed in the Actions panel are not added to the history, nor can they be undone via the Edit » Undo command; however, you may undo/redo the last delete operation by pressing Ctrl/Cmd+Z. G. Action Sets Sets behave much like folders, in that they allow you to organize your actions. 
Double-click on a set (or choose Set Options from the panel menu) to change its name. An action is basically a Photoshop macro containing one or more pre-recorded commands that may be replayed on another image (or series of images). Actions can only exist within a set (i.e., they can't be created in the Actions panel outside of a set). Double-click on an action (or choose Action Options from the panel menu) to change its name, keyboard shortcut and Button Mode color. I. Action Steps (Commands) Action steps are pre-recorded Photoshop commands. Actions are comprised of one or more action steps. J. Command Details Expanding an action step (by clicking on its associated disclosure icon, ) reveals the details/values that were set for the command at the time it was recorded. Note: You may change the parameters for a command by double-clicking on its associated action step (or by choosing Record Again from the panel menu). If available, the command dialog will appear, allowing you to enter new values. Choose OK to apply the new settings or Cancel to leave the original settings. K. Modal Controls The modal control is used to enable/disable a command's dialog; hence, it's only available for commands that have an associated dialog. If enabled, a small dialog icon () appears next to the command, and the associated dialog is displayed for that command during action playback. The default is to not display a dialog (indicated by an empty box), and instead, to use the values that were recorded for the command when the action was created. Note: Enabling (or disabling) the modal control for a set toggles all dialogs for all actions within the set. Similarly, enabling (or disabling) the dialog checkbox for an action toggles all dialogs for all applicable commands within that action. A red modal control icon () indicates that one or more (but not all) dialogs are enabled within an action (or set). 
A grey "ghosted" modal control icon () can mean one of two things: - the dialog has been enabled, but the command, action, or set has been excluded; or - the command was inserted without values, via the Insert Menu Item command (in which case, you'll be prompted for values during action playback). L. Include Checkboxes The include checkbox () is used to turn action steps — or even entire actions or sets — on or off. By default, all commands are checked, indicating that they are to be included when an action is played. An empty checkbox indicates that the command has been disabled (excluded), meaning that it will be skipped during playback. Note: Enabling (or disabling) the include checkbox for a set toggles all steps for all actions within the set. Similarly, enabling (or disabling) the include checkbox for an action toggles all steps within that action. A red checkmark icon () indicates that one or more (but not all) steps within an action (or set) have been disabled. M. Actions Panel Menu In addition to the many controls discussed above, the Actions panel menu also contains several commands that are very useful for creating and editing actions. To access the panel menu, click on the button, near the top right corner of the Actions panel. By default the Actions panel appears in List View Mode. Button Mode turns each action into a button displaying the name, color and keyboard shortcut assigned to it in the Action Options dialog. Simply click a button to play its corresponding action. To turn Button Mode off (and return to List View Mode), simply choose Button Mode again from the panel menu. Despite how easy Button Mode is to use, its usefulness is limited because you can't create, edit or modify actions while in this mode. Insert Menu Item The Insert Menu Item command allows you to insert the selected menu item into the current action. Inserted items appear below the active action step. This command is available regardless of whether or not you are in record mode. 
Two key uses for this command are: - to insert commands that might otherwise be unavailable (or inaccessible) while in record mode (such as showing or hiding panels); or - to insert a command without values (and display a prompt during action playback). Insert Stop A stop is a dialog that pauses the action to display a user-defined message of up to 254 characters. The dialog has a Stop button (hence the name), and may also contain an optional Continue button. Typical uses for a stop include: - to display instructions or copyright/version information; or - to stop playback, allowing the user to perform manual tasks such as painting, or inserting text, prior to resuming playback. Insert Path Insert Path is only available when a path (or shape) is selected. This command inserts the selected path into the current action as a series of anchor and handle coordinates. Set Options/Action Options Use the Set Options/Action Options command to rename an action/set, or to change its function key or Button Mode color. You can also double-click on an action/set to access the Options dialog. Playback Options The Playback Options dialog allows you to set the playback speed for actions. - Accelerated plays actions as fast as possible (desirable for most circumstances); - Step by Step allows the screen to refresh between commands (useful for debugging); and - Pause For pauses between commands for the defined number of seconds (between 1 and 60). The Pause For Audio Annotation option allows you to pause action playback for documents that contain audio annotations. Clear All Actions As the name implies, Clear All Actions removes all actions (and sets) from the Actions panel. Reset Actions Use the Reset Actions command to remove all actions from the Actions panel and replace them with the default sets. A warning dialog appears, allowing you to accept the replacement, cancel it, or append the default sets to the Actions panel. Load Actions Load Actions allows you to load an existing action set. 
You can also load actions by choosing them (by name) from the bottom of the panel menu, by double-clicking on them in Explorer/Finder, or by dragging and dropping them onto the Photoshop application window. Note: In order for an action to appear in the panel menu, it must be saved in the Adobe Photoshop CS#\Presets\Actions\ folder (or subfolder). Replace Actions replaces all actions (sets) in the Actions panel with those that you select. Although the contents of the Actions panel are remembered from one session to the next, they're not actually saved until you physically save them using the Save Actions command. In fact, you can't use either of the above commands (Load Actions and Replace Actions) for a set until it's first been saved. Also, note that you can't save individual actions, only sets. If you want to save a single action, it must be placed into its own set. The best place to save your actions is in the Adobe Photoshop CS#\Presets\Actions\ folder (or subfolder). Creating a Simple Action Okay, enough theory. Time for something practical... How many times have you double-clicked on a layer to rename it? Wouldn't it be cool to have a keyboard shortcut to do this? Well, why not make your own? Once you get used to using it, you'll wonder how you ever managed without it! Before you begin recording an action, it's always useful to think through the steps involved. For this example, you could record an action that renames a layer using the Layer Properties dialog; however, it would always name layers using the same name – not very useful. We could turn on the modal control (), but the default name would still always be the name that was recorded with the action. Instead, we'll use the Insert Menu Item command, which inserts the command without assigning any values to it. Make sure the Actions panel is visible by pressing F9 (Window » Show Actions). Add a new set by pressing the Create new set button (). Name the set "Shortcuts" (or whatever you like). 
Create a new action by pressing the Create new action button (). When the New Action dialog appears, enter "Layer Properties" for the Name. Assign F2 as the Function Key, since that's the typical shortcut for rename (on Windows, at least). Leave "Shortcuts" as the designated Set, and then select a Button Mode color if you wish. Begin recording by pressing the Record button. Next, choose Insert Menu Item from the panel menu (). When the Insert Menu Item dialog appears, choose Layer » Layer Properties from the main application menus, then press OK. Notice that an action step called "Select Layer Properties menu item" has been added to the action. Stop recording by pressing the Stop button (). That's it! Now let's try it out. Create a new document (Ctrl/Cmd+N, File » New) and add several new layers by pressing Ctrl/Cmd+Alt/Opt+Shift+N. Select a layer and press F2. When the Layer Properties dialog appears, name the layer and try it again on another layer. Remember: Don't forget to save your actions by choosing the Save Actions command in the Actions panel menu! Read the Save Actions description above for more details. Creating a More Complex Action That was fun, but now let's try something a little more complicated. For this example, we'll create an action that mirrors the active layer across both the vertical and horizontal axes — and we'll do this on a separate layer to ensure that the original layer remains unaltered. Finally, we'll center the results on the canvas. Once you've finished, you'll be able to use the action to create symmetrical shapes, repeating patterns, and refrigerator art. Assuming the Actions panel is already visible, create a new set (). Name the set "Mirror Corners". Add a new action to the set (). Name the action "Mirror Corners 1.0", and assign a function key and Button Mode color if you wish. Finally, begin recording by pressing the Record button. 
Because we're going to duplicate the current layer (to preserve the original artwork), we'll hide the layer so that it doesn't obstruct the view of the final results. Click the visibility icon () associated with the current layer. Notice that a command called "Hide current layer" has been added to the action. Next, duplicate the current layer by pressing Ctrl/Cmd+J (Layer » New » Layer via Copy). Turn on the new, duplicated layer by clicking on its visibility icon. Both commands should appear in the action. In this step we'll use the Edit » Free Transform command to flip a horizontal duplicate of the current layer. Press Ctrl/Cmd+Alt/Opt+T. In the Options panel, set the Reference point location to the right side (); then right-click in the document window and choose Flip Horizontal from the context menu. Press Enter to accept the transformation. Merge the two halves/layers by pressing Ctrl/Cmd+E (Layer » Merge Down). Now flip the current layer vertically using the same technique. Press Ctrl/Cmd+Alt/Opt+T to initiate Free Transform. In the Options panel, set the Reference point location to the bottom (); then right-click in the document window and choose Flip Vertical. Press Enter to accept the transformation. Again, merge the two halves together by pressing Ctrl/Cmd+E. To center the pattern, first select the entire canvas by pressing Ctrl/Cmd+A (Select » All); then, with the Move tool selected, press the Align vertical centers button ( on the Options panel), followed by Align horizontal centers (). Drop the selection by pressing Ctrl/Cmd+D (Select » Deselect). Note: The alignment commands are also available via the menus: Layer » Align To Selection » Vertical Centers, and Layer » Align To Selection » Horizontal Centers. Stop recording by pressing the Stop button (). As a final touch, let's add a Stop message to let other users know what this action does. Choose Insert Stop from the Actions panel menu (). 
Enter a brief description about what the action does and then enable the Allow Continue option. Since we want the message to be displayed at the beginning of the action, drag the newly added stop to the top of the action, above the Hide current layer command. Note: If you want the message to be present in the action, but not displayed each time the action is played, simply uncheck the Include checkbox () for the Stop. Now try your new action on different patterns and image sizes to make sure it works properly. Finally, don't forget to save the action using the Save Actions command. Tips for Editing Actions Here are some tips for editing your actions: - To begin playback from a specific step of an action, simply choose the desired step and press the Play button (). - Drag and drop steps to reorder them. This also works for actions and sets. - Alt/Opt-drag a step to duplicate it. This also works for actions and sets. - Double-click an action step to rerecord its associated values. Alternatively, you may choose Record Again from the Actions panel menu. - Ctrl/Cmd-click the Play button () to play only the selected step. This is useful for debugging an action. - Select non-contiguous action steps using the Ctrl/Cmd key. Use the Shift key to range select contiguous action steps. You may then delete, duplicate or even play the selected steps! - Alt/Opt-click the Delete button () to delete the selected item without confirmation. This is equivalent to dragging the desired item onto the Delete button. - Even though operations performed in the Actions panel can't be undone using the Edit » Undo command or History panel, you can undo/redo the last operation by pressing Ctrl/Cmd+Z. - Hold both the Ctrl/Cmd and Alt/Opt keys when choosing Save Actions (from the panel menu) to save all actions as a text file. This is very useful for reviewing or printing the contents of an action; however, the text file can't be reloaded into Photoshop. 
- Alt/Opt-click on an action's (not a step's) disclosure icon () to expand or collapse all steps within the action. - Alt/Opt-click on a set's disclosure icon () to expand or collapse all actions and steps within the set. To include a path within an action, first create the path (before recording the action); then, begin recording, select the path and choose Insert Path from the panel menu. Note: Set the ruler units to percentage before using this command to ensure that the path is sized and positioned relative to the canvas, regardless of the canvas' dimensions; otherwise, the path may be too large, or appear completely outside the canvas boundaries. - Use the Create new snapshot button (), at the bottom of the History panel, to create a document snapshot before running an action. That way, if you do not like the results, you can revert the document to its original state without having to undo every operation performed by the action. Tips and Guidelines for Creating Actions Keep things generic Try to build your actions using commands that don't require specific layer names (unless the layers are created by the action itself). For example, instead of choosing the previous or next layer by name, use the backward layer and forward layer keyboard shortcuts: Alt/Opt+[ and Alt/Opt+], respectively. To select the top- or bottom-most layers, press Alt/Opt+. or Alt/Opt+,, respectively. - Use Percent as your units of measurement for transformations that are meant to be relative to the canvas size, rather than absolute. - Perform all operations within a single canvas (if at all possible). Photoshop doesn't refer to documents by name, but as "next" or "previous" documents; and this can cause problems if the number or order of documents changes. Reduce, reuse and recycle Optimize your actions wherever possible. The fewer steps, the faster the action will play and the easier it will be to debug when/if something goes wrong. 
For example, let's say that you have an action with four steps that duplicates a layer, names it, and then changes its blending mode and opacity. All of these operations could, instead, have been accomplished in one step using the options available in the Layer » New » Layer via Copy dialog. - If several steps need to be repeated, record them as a separate action; then have the first action refer to the second action (like an "action subroutine"). - Examine other people's actions to learn new (or better) ways to accomplish things. Consideration for other users - Include instructions (by using stops) that inform others about what the action does and what's expected/assumed. If your action requires additional plugins, provide the name and source of the plugins. You might also want to include your name and email address in case users have problems, or suggestions for improvement. - Include a version number to distinguish different (updated) versions of the action. - Test your actions in several different situations to ensure trouble-free operation. - Preserve original layers whenever possible, or duplicate the document as the first step of the action. Managing your actions - Organize your actions by creating subfolders within the Actions folder (Adobe Photoshop CS#/Presets/Actions/). Photoshop will still find these actions and even make them available from the Actions panel menu. - Add a tilde (~) to the beginning of an action's filename (or even entire subfolders) to disable them. Photoshop will ignore any filenames beginning with a tilde. - Use the File » Automate » Batch command to play an action on a series/folder of images. - Use the File » Automate » Create Droplet command to save an action as a droplet. A droplet is a small executable file that will automatically launch Photoshop and apply the embedded action to any images that you drop onto it. For a great source of free actions, visit Adobe Exchange. 
Registration is free, and allows you to upload your actions to share with other users. And if you want to learn even more about actions (a lot more), check out Danny Raphael's Photoshop Actions Tutorial, a mammoth document that covers every aspect of actions in exhaustive detail. To download the files below, you may need to right-click on the provided links (Ctrl-click on the Mac), and choose "Save Target As". Downloads: - Shortcuts (0.2 KB): the completed action from the simple example above. - Mirror Corners (2 KB): the completed action from the complex example above. - Test Pattern (49 KB): a sample pattern to be used with the complex example above.
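Everything above is driven through the Actions panel UI, but actions can also be played from Photoshop's scripting layer (ExtendScript), which is handy when Batch or droplets aren't flexible enough. The sketch below is an illustration, not a definitive recipe: app.doAction(actionName, setName) is the real ExtendScript call, but the helper is written to take the application object as a parameter (here `ps`) so the logic can be followed outside Photoshop; inside Photoshop you would simply pass the global `app` object, and the action/set names refer to the "Mirror Corners" example recorded above.

```javascript
// Sketch: play a saved action on every open document, roughly what
// File » Automate » Batch does for files on disk. `ps` stands in for
// Photoshop's global `app` object; in Photoshop, call
// playActionOnOpenDocuments(app, "Mirror Corners 1.0", "Mirror Corners").
function playActionOnOpenDocuments(ps, actionName, setName) {
  var played = 0;
  for (var i = 0; i < ps.documents.length; i++) {
    ps.activeDocument = ps.documents[i]; // make each document current
    ps.doAction(actionName, setName);    // equivalent to pressing Play
    played++;
  }
  return played; // number of documents processed
}
```

As with Batch, the action still runs against whatever state each document happens to be in, so the same caveats apply: keep the action generic, and snapshot or duplicate documents first if you want an easy way back.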
In the tenth chapter of Genesis Sheba is one of the sons of Joktan, the ancestor of the south Arabian tribes. Foremost among them is Hazarmaveth, the Hadhramaut of to-day; another is Ophir, the port to which the gold of Africa was brought. But the same chapter also assigns to Sheba a different origin. It couples him with Dedan, and sees in him a descendant of Ham, a kinsman of Egypt and Canaan. Both genealogies are right. They are geographical, not ethnic, and denote, in accordance with Semitic idiom, the geographical relationships of the races and nations of the ancient world. Sheba belonged not only to south Arabia but to northern Arabia as well. The rule of the Sabaean princes extended to the borders of Egypt and Canaan, and Sheba was the brother of Hazarmaveth and of Dedan alike. For Dedan was a north Arabian tribe, whose home was near Tema, and whose name may have had a connection with that sometimes given by the Babylonians to the whole of the west. Such, then, was Arabia in the days of the Hebrew writers. The south was occupied by a cultured population, whose rule, at all events after the time of Solomon, was acknowledged throughout the peninsula. The people of the north and the centre differed from this population in both race and language, though all alike belonged to the same Semitic stock. The Midianites on the western coast perhaps partook of the characteristics of both. But the Ishmaelites were wholly northern; they were the kinsmen of the Edomites and Israelites, and their language was that Aramaic which represents a mixture of Arabic and Canaanitish elements. Wandering tribes of savage Bedawin pitched their tents in the desert, or robbed their more settled neighbours, as they do to-day; these were the Amalekites of the Old Testament, who were believed to be the first created of mankind, and the aboriginal inhabitants of Arabia. Apart from them, however, the peninsula was the seat of a considerable culture. 
The culture had spread from the spice-bearing lands of the south, where it had been in contact with the civilisations of Babylonia on the one side and of Egypt on the other, and where wealthy and prosperous kingdoms had arisen, and powerful dynasties of kings had held sway. It is to Arabia, in all probability, that we must look for the origin of the alphabet—in itself a proof of the culture of those who used it; and it was from Arabia that Babylonia received that line of monarchs which first made Babylon a capital, and was ruling there in the days of Abraham. We must cease to regard Arabia as a land of deserts and barbarism; it was, on the contrary, a trading centre of the ancient world, and the Moslems who went forth from it to conquer Christendom and found empires, were but the successors of those who, in earlier times, had exercised a profound influence upon the destinies of the East.
" DST="" --> Harnad, S. (1995) Grounding Symbolic Capacity in Robotic Capacity. In: Steels, L. and R. Brooks (eds.) The "artificial life" route to "artificial intelligence." Building Situated Embodied Agents. New Haven: Lawrence Erlbaum. Pp. 276-286. According to "computationalism" (Newell, 1980; Pylyshyn 1984; Dietrich 1990), mental states are computational states, so if one wishes to build a mind, one is actually looking for the right program to run on a digital computer. A computer program is a semantically interpretable formal symbol system consisting of rules for manipulating symbols on the basis of their shapes, which are arbitrary in relation to what they can be systematically interpreted as meaning. According to computationalism, every physical implementation of the right symbol system will have mental states. Artificial intelligence (AI) is the branch of computer science that is concerned with designing symbol systems that have performance capacities that are useful to human beings. Cognitive science includes AI as well as mind-modelling (MM), which is concerned with building systems that are not only useful to people with minds, but that have minds of their own. According to computationalism, AI can do both these things, and for several decades it was hoped that it would. AI's advantages in this regard were the following: AI could indeed (1) generate performance that ordinarily requires human intelligence and, unlike, say, behavioral psychology (Harnad 1982, 1984; Catania & Harnad 1988), AI could explain the functional and causal basis of that performance. There was also (2) reason to be optimistic about scaling up the performance of AI's initial "toy" models to human-scale performance because of formal results on the power and generality of computation; according to one construal of the Church-Turing Thesis, computation captures everything we mean by being able to "do" just about anything, whether formally or physically (Dietrich 1993). 
Hence a computer can do anything any physical system can do -- or, conversely, every physical system is really a computer. The last of the initial reasons for optimism about AI for MM was (3) the apparent capacity of the software/hardware distinction to solve the mind/body problem: If computationalism is correct, and mental states are just implementations of certain symbolic states, then the persistent difficulty that philosophers have kept pointing out with equating the mental and the physical is resolved by the independence of a physical symbol system's formal, symbolic level (the software level) from its physical implementation (the hardware level). A symbol system is implementation-independent, and so is the mind. Unfortunately, problems arose for AI, and not just when it tried to do MM. AI systems have so far not proved to scale up readily, not only for the human-scale performance necessary for MM, but even for the kinds of performance that were merely intended to be useful to people, such as pattern recognition and robotics. Rival approaches began to appear, among them (a) robots that made minimal use (or none at all) of internal symbol systems (Brooks 1993); (b) "neural nets," which were systems of interconnected units whose parallel distributed activity likewise did not seem to have a structured symbolic level (Hanson & Burr 1990); and (c) nonlinear dynamical systems in general, including continuous and chaotic ones that were not readily covered by the Church-Turing Thesis (Kentridge 1993). In addition, conceptual challenges were posed to the computationalist thesis, the one that had made it seem that AI would be capable of doing MM in the first place. Two such challenges were Searle's (1980) "Chinese Room Argument" and my own "Symbol Grounding Problem" (Harnad 1990a). 
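What a "semantically interpretable formal symbol system" amounts to can be made concrete with a toy sketch. The rewrite rules below are invented for illustration (they come from no actual AI system): they fire purely on the shapes of the symbols, while any meaning assigned to "A", "B" or "C" remains wholly external to the system:

```python
# A toy formal symbol system. The rules operate only on the *shape*
# (string identity) of the symbols, never on any meaning assigned
# to them. Both rules are invented for illustration.
RULES = {
    ("A", "B"): ["C"],       # rewrite the adjacent pair A B as C
    ("C", "A"): ["B", "B"],  # rewrite the adjacent pair C A as B B
}

def rewrite_once(tape):
    """Apply the first rule that matches the leftmost matching pair."""
    for i in range(len(tape) - 1):
        pair = (tape[i], tape[i + 1])
        if pair in RULES:
            return tape[:i] + RULES[pair] + tape[i + 2:]
    return tape  # no rule applies: the system halts

tape = ["A", "B", "A"]
while True:
    new_tape = rewrite_once(tape)
    if new_tape == tape:
        break
    tape = new_tape

print(tape)
```

The run rewrites ["A", "B", "A"] to ["C", "A"] and then to ["B", "B"], where it halts. Any systematic interpretation of the result (reading the symbols as numerals, words, or anything else) is imposed from outside; the system itself only shuffles shapes.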
Searle pointed out that the tenets of computationalism ("Strong AI") amounted to three hypotheses: (i) mental states are implementations of symbolic states; (ii) all physical implementational details are irrelevant (because any and all implementations of the right symbol system will have the same mental states, hence the differences between them are all inessential to having a mind); (iii) performance capacity is decisive (and hence the crucial test for the presence of a mind is the Turing Test (T2), which amounts to the capacity to interact with a person as a life-long pen-pal, indistinguishable in any way from a real person; note that this test is purely symbolic). Searle then pointed out that if there were a computer that could pass T2 in Chinese, he (Searle) could become another implementation of the same symbol system it was implementing (by memorizing all the symbol manipulation rules and then performing them on all the symbols received from the Chinese pen-pal), yet he would not thereby be understanding Chinese, hence neither would the computer that was doing the same thing. In other words, there was something wrong with hypotheses (i) - (iii): They couldn't all be correct, yet computationalism depended on the validity of all three of them. Searle's (1990, 1993) recommended alternative to computationalism and AI for those whose real interest was MM was to study the real brain, for only systems that had its "causal powers" could have minds. The only problem was that this left no way of sorting out which of the brain's causal powers were and were not relevant to having a mind (Harnad 1993a). We now knew (thanks in part to Searle) that the relevant causal powers were not exclusively the symbolic ones, but we did not know what the rest of them amounted to; and to assume that every physical detail of the real brain -- right down to its specific gravity -- was relevant and indeed essential to MM was surely to take on too much. 
A form of functionalism that sought to abstract the relevant causal powers of the brain already motivated AI: The relevant level was the symbolic one, and once that was specified, every physical implementation would have the relevant causal powers. Searle showed that this particular abstraction was wrong, but he did not thereby show that there could be no way to abstract away from the totality of brain function and causal power. The symbol grounding problem pointed out another functional direction in which the causal powers relevant to having a mind may lie: The symbols in a symbol system are systematically interpretable as meaning something; however, that interpretation is always mediated by an external interpreter. It would lead to an infinite regress if we supposed the same thing to be true of the mind of the interpreter: that all there is in his head is the implementation of a symbol system that is systematically interpretable by yet another external interpreter. My thoughts mean what they mean intrinsically, not because someone else can or does interpret them (e.g., Searle understands English and fails to understand Chinese independently of whether the English or Chinese symbols he emits are systematically interpretable to someone else). The infinite regress is a symptom of the fact that the interpretation of a pure symbol system is ungrounded. I think the "frame problem," which keeps arising in pure AI -- what changes and what stays the same after an "action"? (Pylyshyn 1991, Harnad 1993f) -- is another symptom of ungroundedness. 
Another way to appreciate the symbol grounding problem is to see it as analogous to trying to learn Chinese as a second language from a Chinese/Chinese dictionary alone: All the definientia and definienda in such a dictionary are systematically interpretable to someone who already knows Chinese, but they are of no use to someone who does not, for such a person could only get on a merry-go-round passing from meaningless symbols to still more meaningless symbols in cycling through such a dictionary. Perhaps with the aid of cryptography there is a way to escape from this merry-go-round (Harnad 1993c, 1994), but that clearly depends on being able to find a way to decode the dictionary in terms of a first language one already knows. Unfortunately, however, what computationalism is really imagining is that the substrate for this first language (whether English or the language of thought, Fodor 1975) would likewise be just more of the same: ungrounded symbols that are systematically interpretable by someone who already knows what at least some of them mean. So the problem is that the connection between the symbols and what they are interpretable as being about must not be allowed to depend on the mediation of an external interpreter -- if the system is intended as a model of what is going on in the external interpreter's head too, as MM requires. One natural way to make this connection direct is to ground the symbols in the system's own capacity to interact robotically with what its symbols are about: it should be able to discriminate, manipulate, categorize, name, describe and discourse about the real-world objects, events and states of affairs that its symbols are about -- and it should be able to do so Turing indistinguishably from the way we do (I have called this the Total Turing Test or T3; Harnad 1989, 1992b, 1993a).
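The dictionary merry-go-round is easy to simulate. In the sketch below the "entries" are invented placeholder strings, not real Chinese words: every lookup terminates only in more ungrounded symbols, so traversal simply cycles:

```python
# A toy "Chinese/Chinese dictionary": every entry is defined only in
# terms of other entries. The symbol names are invented placeholders,
# not real words in any language.
dictionary = {
    "ma":   ["shou", "wu"],
    "shou": ["ma", "tiao"],
    "wu":   ["tiao"],
    "tiao": ["ma"],
}

def trace_lookups(start, steps):
    """Chase the first definiens of each entry and record the path."""
    path = [start]
    symbol = start
    for _ in range(steps):
        symbol = dictionary[symbol][0]  # a lookup yields... more symbols
        path.append(symbol)
    return path

path = trace_lookups("ma", 6)
print(path)
# The walk never bottoms out in anything non-symbolic: it just
# revisits entries, going around the merry-go-round.
print(len(set(path)) < len(path))
```

Grounding, on this diagnosis, requires that at least some symbols be connected to their referents by something other than yet another dictionary entry.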
In other words, T2 and computationalism ("symbolic functionalism") are ungrounded and hence cannot do MM, whereas T3 and "robotic functionalism," grounded in sensorimotor interaction capacity, can. In my own approach to symbol grounding I have focussed on the all-important capacity to categorize (sort and name) objects (Harnad 1987, 1992a) -- initially concrete categories, based on invariants (learned and innate) in their sensory projections, and then abstract objects, described by symbol strings whose terms are grounded bottom-up in concrete categories (e.g., if the categories "horse" and "striped" are grounded directly in the capacity to sort and name their members based on their sensorimotor projections, then "zebra" can be grounded purely symbolically by binding it to the grounded symbol string "striped horse": a robot that could only sort and name horses and striped objects before could then sort and name zebras too). What is important to keep in mind in evaluating this approach is that although all the examples given are just arbitrary fragments of our total capacity (and the initial models, e.g. Harnad et al. [1991, 1994], are just toys), the explicit goal of the approach is T3-scale capacity, not just circumscribed local "toy" capacity. In my own modelling I use neural nets to learn the sensorimotor invariants that allow the system to categorize, but it is quite conceivable that neural nets will fail to scale up to T3-scale categorization capacity, in which case other category-invariance learning models will have to be found and tried. On the other hand, rejecting this approach on the grounds that it is already known that bottom-up grounding in sensorimotor invariants is not possible (e.g. Christiansen & Chater 1992, 1993) is, I think, premature (and empirically ungrounded; Harnad 1993e). The symbol grounding approach to MM can be contrasted with other approaches that prefer to dispense with symbols altogether. I will consider two such approaches here. 
One is pure connectionism (PC), which replaces the computationalist hypothesis that mental states are computational states with the connectionist hypothesis that mental states are dynamical states in a neural net (e.g., Hanson & Burr 1990). The crucial question for connectionists, I think, is whether the critical test of the PC hypothesis is to be T2 or T3 (I think it is a foregone conclusion that mere toy performance demonstrates nothing insofar as MM is concerned). If it is to be T2, then I think PC is up against the same objections as AI, if for no other reason than because connectionist systems can be simulated by symbol systems without any real parallelism or distributedness, and if those too can pass T2, then they are open to Searle's Argument and the symbol grounding problem (Harnad 1993a, Searle 1993). On the other hand, if the target is to be T3, and PC can manage to do it completely nonsymbolically, I, for one, would be happy to accept the verdict that it was not necessary to worry about the problem of grounding symbols, because symbols are not necessary for MM. On the other hand, there do exist prima facie reasons to believe that a PC approach would fail to capture the systematicity that is needed to pass T2 (a subset of T3) in the first place (Fodor & Pylyshyn 1988, Harnad 1990b), so perhaps it is best to wait and see whether or not PC can indeed go it alone. There is a counterpart to PC in robotics -- let's call it "pure nonsymbolic robotics" (PNSR) (Brooks 1993) -- which likewise aspires to go the distance without symbols, but this time largely by means of internal sensorimotor mechanisms -- sometimes neurally inspired ones, but mostly data-driven ones: driven by the contingencies a robot faces in trying to get around in the real world. Such roboticists tend to stress "situatedness" and "embeddedness" in the world of objects (which they take to be "grounding" without symbols) rather than symbol grounding.
PNSR places great hope in internal structures that will "emerge" to meet the bottom-up challenges of navigating and manipulating its world; much has been made, for example, of a wall-following "rule" that emerged spontaneously in a locomoting robot that had been given no such explicit rule (Steels REF). As with PC, however, it remains to be seen whether such "emergent" internal structures and rules, driven only by bottom-up contingencies, can scale up to the systematicity of natural language and human reasoning (Fodor & Pylyshyn 1988) without recourse to internal symbols. My own work on categorical perception (Harnad 1987), which is pretty low in the concrete/abstract scale leading from sensorimotor categories to language and reasoning, already casts some doubt on the possibility of scaling up to T3 without internal symbols, as PNSR hopes to do. A category name, after all, is a symbol, and we all use them. Categorical perception occurs when the analog space of interstimulus similarities is "warped" by sorting and naming objects in a particular way, with the result that within-category distances (the pairwise perceptual similarities between members of the same category, bearing the same symbolic category name) are compressed and between-category distances (the similarities between members of different categories, bearing different symbolic category names) are expanded. This seems to occur because after category learning, the sensorimotor projections of objects are "filtered" by invariance detectors that have learned which features of the sensory projection will serve as a reliable basis for sorting and labelling them correctly (and the warping of similarity space seems to be part of how backpropagation, at least, manages to accomplish successful categorization; Harnad et al. 1991, 1994). The next stage is to combine these grounded symbols into propositions about more abstract categories (e.g., "zebra" = "striped horse"). 
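A minimal sketch of that next stage follows, with simple feature predicates standing in for the learned sensorimotor invariance detectors. All names, features and thresholds here are invented for illustration; the point is only the mechanism by which a new category name inherits the grounding of old ones:

```python
# Toy "sensory projections" (dicts of measured features) and feature
# predicates standing in for learned invariance detectors.
def is_horse(proj):
    return proj["legs"] == 4 and proj["mane"]

def is_striped(proj):
    return proj["stripes"] > 0

# Directly grounded categories: each name is bound to a detector that
# sorts objects from their sensorimotor projections.
grounded = {"horse": is_horse, "striped": is_striped}

def define_conjunction(*names):
    """Ground a new name purely symbolically, as a conjunction of
    already-grounded category names (e.g., "zebra" = "striped horse")."""
    return lambda proj: all(grounded[name](proj) for name in names)

grounded["zebra"] = define_conjunction("striped", "horse")

# The system was never trained on zebras directly, yet it can now
# sort them, via the inherited grounding of "striped" and "horse":
candidate = {"legs": 4, "mane": True, "stripes": 26}
print(grounded["zebra"](candidate))
```

The "zebra" detector's connection to the world runs entirely through the detectors of its grounded constituents; no new sensorimotor learning was needed.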
It is hard to imagine how this could be accomplished by a data-driven "emergent" property such as wall-following. It seems more likely that explicit internal symbolization is involved. Such internal symbols, unlike those of AI's pure symbol systems, inherit the constraints from their grounding. In a pure symbol system the only constraints are formal, syntactic ones, operating rulefully on the arbitrary shapes of the symbols. In a grounded symbol system, symbol "shapes" are no longer arbitrary, for they are constrained (grounded) by the structures that gave the system the capacity to sort and name the members of the category the symbol refers to, based on their sensorimotor projections: the shape of "horse" is arbitrary, to be sure, but not that of the analog sensory projections [see Chamberlain & Barlow 1982, Shepard & Cooper 1982, Harnad 1993f, Jeannerod, 1994] of horses nor of the invariants in those sensory projections that the nets have detected and that connect the "horse" symbol to the projections of the objects it refers to. All further symbolic combinations that "horse" enters into (e.g., "zebra" = "striped horse") inherit this grounding. Think of it as the "warping" of similarity space as a consequence of which things no longer look the same (from color sorting [Bornstein 1987], where "warping" is innate, to chicken-sexing [Biederman & Shiffrar 1987; Harnad et al., in prep.], where it is learned) after you have learned to sort and name them in a certain way. All further symbol combinations continue to be constrained by the invariance detectors and the changes in "appearance" that they mediate. So I am still betting on internal symbols, but grounded ones. In my view, robotic constraints play three roles in MM: (1) They ease the burden of trying to second-guess T3 constraints a priori, with a purely symbolic "oracle": Instead of just simulating the robot's world, it makes more sense to let the real world exert its influence directly (Harnad 1993b).
More important than that, (2) the robotic version of the Turing Test, T3, is just the right constraint for the branch of reverse engineering that MM really is. T2 clearly is not (because of Searle's argument and the symbol grounding problem), whereas Searle's preferred candidate, "T4" (total neurobehavioral indistinguishability from ourselves), is overconstraining, because it includes potentially irrelevant constraints. Finally, (3) robotic capacity looks like precisely what one would want to ground symbolic capacity in, given that symbols cannot generate a mind on their own. Despite these considerations in favor of symbol grounding, neither PC nor PNSR can be counted out yet on the path to T3. So far only computationalism and pure AI have fallen by the wayside. If it turns out that no internal symbols at all underlie our symbolic (T2) capacity -- if dynamic states of neural nets alone or sensorimotor mechanisms subserving robotic capacities alone can successfully generate T3 performance capacity without symbols -- then T3 is still the decisive test for the presence of a mind as far as I'm concerned, and I'd be ready to accept the verdict. For even if we should happen to be wrong about such a robot, it seems clear that no one (not even an advocate of T4, or even the Blind Watchmaker who designed us, being no more a mind-reader than we are) can ever hope to be the wiser (Harnad 1982b, 1984b, 1991, 1992b).

Andrews, J., Livingston, K., Harnad, S. & Fischer, U. (in prep.) Learned Categorical Perception in Human Subjects: Implications for Symbol Grounding. Biederman, I. & Shiffrar, M. M. (1987) Sexing day-old chicks: A case study and expert systems analysis of a difficult perceptual-learning task. Journal of Experimental Psychology: Learning, Memory, & Cognition 13: 640 - 645. Bornstein, M. H. (1987) Perceptual Categories in Vision and Audition. In S. Harnad (Ed.) Categorical perception: The groundwork of Cognition. New York: Cambridge University Press Catania, A.C.
& Harnad, S. (eds.) (1988) The Selection of Behavior. The Operant Behaviorism of BF Skinner: Comments and Consequences. New York: Cambridge University Press. Chamberlain, S.C. & Barlow, R.B. (1982) Retinotopic organization of lateral eye input to Limulus brain. Journal of Neurophysiology 48: 505-520. Dietrich, E. (1990) Computationalism. Social Epistemology 4: 135 - 154. Dietrich, E. (1993) The Ubiquity of Computation. Think 2: 27-30. Brooks, R.A. (1993) The Engineering of Physical Grounding. Proceedings of the Fifteenth Annual Meeting of the Cognitive Science Society. NJ: Erlbaum Christiansen, M. & Chater, N. (1992) Connectionism, Learning and Meaning. Connectionism 4: 227 - 252. Christiansen, M.H. & Chater, N. (1993) Symbol Grounding - the Emperor's New Theory of Meaning? Proceedings of the Fifteenth Annual Meeting of the Cognitive Science Society. NJ: Erlbaum Fodor, J. A. (1975) The language of thought New York: Thomas Y. Crowell Fodor, J. A. & Pylyshyn, Z. W. (1988) Connectionism and cognitive architecture: A critical appraisal. Cognition 28: 3 - 71. Hanson & Burr (1990) What connectionist models learn: Learning and Representation in connectionist networks. Behavioral and Brain Sciences 13: 471-518. Harnad, S. (1982a) Neoconstructivism: A unifying theme for the cognitive sciences. In: Language, mind and brain (T. Simon & R. Scholes, eds., Hillsdale NJ: Erlbaum), 1 - 11. Harnad, S. (1982b) Consciousness: An afterthought. Cognition and Brain Theory 5: 29 - 47. Harnad, S. (1984a) What are the scope and limits of radical behaviorist theory? The Behavioral and Brain Sciences 7: 720 -721. Harnad, S. (1984b) Verifying machines' minds. (Review of J. T. Culbertson, Consciousness: Natural and artificial, NY: Libra 1982.) Contemporary Psychology 29: 389 - 391. Harnad, S. (1987) The induction and representation of categories. In: Harnad, S. (ed.) (1987) Categorical Perception: The Groundwork of Cognition. New York: Cambridge University Press. Harnad, S. 
(1989) Minds, Machines and Searle. Journal of Theoretical and Experimental Artificial Intelligence 1: 5-25. Harnad, S. (1990a) The Symbol Grounding Problem. Physica D 42: 335-346. Harnad, S. (1990b) Symbols and Nets: Cooperation vs. Competition. Review of: S. Pinker and J. Mehler (Eds.) (1988) Connections and Symbols Connection Science 2: 257-260. Harnad, S. (1991) Other bodies, Other minds: A machine incarnation of an old philosophical problem. Minds and Machines 1: 43-54. Harnad, S. (1992a) Connecting Object to Symbol in Modeling Cognition. In: A. Clarke and R. Lutz (Eds) Connectionism in Context Springer Verlag. Harnad, S. (1992b) The Turing Test Is Not A Trick: Turing Indistinguishability Is A Scientific Criterion. SIGART Bulletin 3(4) (October) 9 - 10. Harnad, S. (1993a) Grounding Symbols in the Analog World with Neural Nets. Think 2: 12 - 78 (Special Issue on "Connectionism versus Symbolism" D.M.W. Powers & P.A. Flach, eds.). Harnad, S. (1993b) Artificial Life: Synthetic Versus Virtual. Artificial Life III. Proceedings, Santa Fe Institute Studies in the Sciences of Complexity. Volume XVI. Harnad, S. (1993c) The Origin of Words: A Psychophysical Hypothesis In Durham, W & Velichkovsky B (Eds.) Muenster: Nodus Pub. [Presented at Zif Conference on Biological and Cultural Aspects of Language Development. January 20 - 22, 1992 University of Bielefeld] Harnad, S. (1993d) Problems, Problems: The Frame Problem as a Symptom of the Symbol Grounding Problem. PSYCOLOQUY 4(34) frame-problem.11. Harnad, S. (1993e) Symbol Grounding is an Empirical Problem: Neural Nets are Just a Candidate Component. Proceedings of the Fifteenth Annual Meeting of the Cognitive Science Society. NJ: Erlbaum Harnad, S. (1993f) Exorcizing the Ghost of Mental Imagery. Commentary on: JI Glasgow: "The Imagery Debate Revisited." Computational Intelligence (in press) Harnad, S. (1994, in press) Computation Is Just Interpretable Symbol Manipulation: Cognition Isn't. 
Special Issue on "What Is Computation" Minds and Machines Harnad, S., Hanson, S.J. & Lubin, J. (1991) Categorical Perception and the Evolution of Supervised Learning in Neural Nets. In: Working Papers of the AAAI Spring Symposium on Machine Learning of Natural Language and Ontology (DW Powers & L Reeker, Eds.) pp. 65-74. Presented at Symposium on Symbol Grounding: Problems and Practice, Stanford University, March 1991; also reprinted as Document D91-09, Deutsches Forschungszentrum fur Kuenstliche Intelligenz GmbH Kaiserslautern FRG. Harnad, S. Hanson, S.J. & Lubin, J. (1994) Learned Categorical Perception in Neural Nets: Implications for Symbol Grounding. In: V. Honavar & L. Uhr (eds) Symbol Processing and Connectionist Network Models in Artificial Intelligence and Cognitive Modelling: Steps Toward Principled Integration. (in press) Jeannerod, M. (1994) The representing brain: neural correlates of motor intention and imagery. Behavioral and Brain Sciences 17(2) in press. Kentridge, R.W. (1993) Cognition, Chaos and Non-Deterministic Symbolic Computation: The Chinese Room Problem Solved? Think 2: 44-47. Newell, A. (1980) Physical Symbol Systems. Cognitive Science 4: 135 - 83 Pylyshyn, Z. W. (1984) Computation and cognition. Cambridge MA: MIT/Bradford Pylyshyn, Z. W. (Ed.) (1987) The robot's dilemma: The frame problem in artificial intelligence. Norwood NJ: Ablex Searle, J. R. (1980) Minds, brains and programs. Behavioral and Brain Sciences 3: 417-424. Searle, J.R. (1993) The Failures of Computationalism. Think 2: 68-73.
Potassium Iodide and Prussian Blue
March 18, 2011 | Written by JP

No matter where in the world you’re reading this, you’re almost certainly aware of the crisis that’s unfolding in Japan. Among other things, the tragic events of this past week have drawn much attention to two otherwise obscure substances that are used to protect against radioactive fallout. Potassium iodide is the better known of the two. Prussian blue isn’t making much news yet, but that may change as the level of concern about radiation exposure continues to spread. The most important details to note about potassium iodide and Prussian blue are what they can and can’t do. Potential side effects also need to be weighed before using either as a preventive measure without good cause. Here is some data I’ve collected for both radioprotective substances from the most reliable sources I know: Potassium iodide (KI) is a stable form of non-radioactive iodine that protects the thyroid from radioactive iodine exposure. It does so by saturating the thyroid gland and essentially blocking entry of the radioactive form of this essential mineral. According to Dr. Irwin Redlener of Columbia University, potassium iodide is “not a radiation antidote in general”. In fact, KI is primarily beneficial for children and pregnant women, whose thyroids are active and growing and more likely to absorb radioactive iodine. Dr. John Boice of Vanderbilt University goes on to say that in general, “it’s not recommended that adults over the age of 40 take potassium iodide. The benefit is miniscule because our thyroid glands are not that sensitive”. Timing is vitally important when using KI. Taking it a few hours or so prior to radiation exposure is best. However, it still may afford some degree of protection if administered up to three to four hours post exposure.
(1,2) While iodine is contained in some foods (select seaweeds in particular) and in iodized salt, the quantities present aren’t high enough or reliable enough to be used as an alternative to standardized KI pills. As such, dulse and kelp supplements that are commonly available in health food stores should be avoided for this purpose. Finally, although iodine is an essential nutrient and generally considered safe, very high dosages can cause allergic reactions, “intestinal upset, nausea and rashes”, and “abnormalities of thyroid function”. This is why some leading natural health experts, such as Dr. Alan Gaby, have publicly stated that they “would not take a large dose of iodine without any clear evidence of radiation exposure”. (3,4) Note: Appropriate potassium iodide dosages can be found on the Food and Drug Administration’s Bioterrorism and Drug Preparedness page (link).

[Chart: Thyroid Cancer Mostly Affects Younger Populations Exposed to Radiation. Source: HORMONES 2009, 8(3):185-191]

Prussian blue is a vibrant blue dye that is capable of binding radioactive cesium in the intestines, thereby preventing the radioactive material from being re-absorbed by the body. Once cesium is bound to the dye, it can be passed out of the body through normal elimination. Modern research indicates that the administration of Prussian blue shortens the retention or biological half-life of radioactive cesium from roughly 110 days to about 30 days. (5) Prussian blue is only available by prescription. It’s typically given in a capsule dosage of 500 mg, 3 times/day for 30 days. Exact dosages vary based on age, body weight and degree of exposure. As is the case with potassium iodide, Prussian blue also carries the risk of certain adverse effects, including allergic reactions, constipation and intestinal upset. Experts advise that patients inform treating physicians about pregnancy, the use of other drugs and/or any stomach problems prior to beginning a course of Prussian blue.
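The effect of shortening the biological half-life can be put in rough numbers. Assuming simple first-order (exponential) elimination — a standard textbook model, not something stated in the article — the fraction of a single cesium intake still retained after a given time is 0.5 raised to (time / half-life):

```python
def fraction_retained(days, half_life_days):
    """First-order elimination: fraction of a single cesium intake
    still in the body after `days`, given a biological half-life."""
    return 0.5 ** (days / half_life_days)

# Biological half-lives from the article: ~110 days untreated,
# ~30 days with Prussian blue. (Cs-137's roughly 30-year physical
# half-life barely changes these figures over a few months.)
for label, t_half in (("untreated", 110), ("with Prussian blue", 30)):
    left = fraction_retained(90, t_half)
    print(f"{label}: {left:.1%} of the intake left after 90 days")
```

After 90 days, roughly half the intake remains untreated, versus about an eighth with the shortened half-life — which is the whole rationale for the drug.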
No one knows for sure what the coming days will bring for the people living in the areas surrounding Japan’s Fukushima Daiichi nuclear power plant. Another unknown is how real the threat of radiation spread will be for regions far beyond Japanese borders. The only thing we do know for certain is that this is yet another reminder that we all need to be prepared as best we can for any major emergency. The Centers for Disease Control and Prevention have a specific site that provides practical advice about how to prepare for and deal with a radiation emergency. Keeping a supply of potassium iodide and Prussian blue on hand appears to be a reasonable step to take as part of a larger preparedness effort. However, it’s important that we all understand that neither is a radiation panacea and both should only be used as intended. (6)
"to the battery" is also a somewhat nonsensical concept, because there are single conductors within the battery, from the cells to the connectors. So there will be a T- or Y-style split at some point for sure. The main idea is to make sure that: 1) H-bridge is wired to battery using sufficiently thick wiring all the way 2) digital electronics are not on the "direct path" between the battery and the controller Easiest is to make a Y split right at the battery connector, with a thinner cable going to digital electronics, and a thicker cable going to H-bridge. There are other options. For example, you can wire thick cables to the H-bridge, and then use thin cables from H-bridge input to digital electronics. This is slightly less ideal, though. In any case, you probably want to clean up the microcontroller input with a 10-100 uH inductor and a 100-1000 uF capacitor before the actual entry to the MCU.
Med people shun Med Diet
Overweight rising in region

29 July 2008, Rome - Hailed by experts as keeping people slim, healthy and long-lived, the Mediterranean diet has followers all over the world – but is increasingly disregarded around the Mediterranean. According to FAO Senior Economist Josef Schmidhuber, over the past 45 years the famed diet revolving around fresh fruit and vegetables has "decayed into a moribund state" in its home area. With growing affluence the food habits of people in the southern European, North African, and Near East regions, once held up as a model for the rest of the world, have sharply deteriorated, Schmidhuber reports. His findings are contained in a paper presented at a recent workshop organized by the California Mediterranean Consortium of seven US and EU academic institutions on Mediterranean products in the global market.

People on the shores of the Mediterranean have used higher incomes to add a large number of calories from meat and fats to a diet that was traditionally light on animal proteins. What they now eat is "too fat, too salty and too sweet", Schmidhuber reports. In the 40 years to 2002, daily intake in (15-nation) Europe increased from 2960 kcal to 3340 kcal – about 20 percent. But Greece, Italy, Spain, Portugal, Cyprus and Malta, which started out poorer than the northerners, upped their calorie count by 30 percent.

"Higher calorie intake and lower calorie expenditure have made Greece today the EU member country with the highest average Body Mass Index and the highest prevalence of overweight and obesity," says Schmidhuber. "Today, three quarters of the Greek population are overweight or obese." More than half of the Italian, Spanish and Portuguese populations are overweight too. At the same time there has also been a "vast increase" in the overall calories and glycemic load of the diets in the Near East-North Africa region.
All EU countries disregard the WHO-FAO recommendation that lipids should account for no more than 30 percent of total Dietary Energy Supply, but Spain, Greece and Italy are all well over that limit and have become the EU's biggest fat guzzlers. The country which registered the most dramatic increase was Spain, where fat made up just 25 percent of the diet 40 years ago but now accounts for 40 percent.

Schmidhuber attributes the change in eating habits not only to increased income but to factors such as the rise of supermarkets, changes in food distribution systems, working women having less time to cook, and families eating out more, often in fast-food restaurants. At the same time, calorie needs have declined, people exercise less and they have shifted to a much more sedentary lifestyle. On the positive side however, he notes Mediterranean people now consume more fruit and vegetables and more olive oil. But they generally fail to follow the diet which their ancestors devised and which several Mediterranean countries want to be placed on UNESCO's world heritage list.

Media Relations, FAO
(+39) 06 570 53762
(+39) 349 5893 612 (mobile)
Ghosts of the Prairie

DIANA OF THE DUNES
Dunes State Park near Gary, Indiana

Around 1915, the area that is now the Dunes State Park in northern Indiana was mostly uninhabited wilderness. The stories spread around the vicinity of Chesterton, Indiana that fishermen who were along the beach at certain times of the day had been lucky enough to catch a glimpse of a naked woman swimming in the lake. The story spread that a beautiful woman was living as a hermit along the beach and her notoriety grew to a point that many compared her to the ancient Greek goddess Diana...hence the name of this legendary creature.

In truth, her name was Alice Marble Gray and she was the daughter of an influential couple from Chicago. Alice had traveled extensively and was cultured and educated. She had worked in the city as an editorial secretary for a popular magazine, so what made her take up the lonely life of a recluse? Some have claimed that Alice came to the dunes because of a broken love affair but actually she left the city life because her deteriorating eyesight had made her work impossible. She had sought refuge in the rough land that she had enjoyed as a child. Alice moved into an abandoned fisherman's cottage on the beach and lived a life of peace, borrowing books from the library, walking in the woods and of course, swimming naked in the chilly waters of Lake Michigan.

In 1920, Alice met a drifter named Paul Wilson and he moved into the cabin with her. He was an unemployed boat builder with a shaky past but he seemed to make Alice happy and the two of them stayed together until 1922, when tragedy struck. The badly burned and beaten body of a man was found on the beach and police suspected that Wilson had a hand in the murder. He was questioned but eventually let go. He and Alice moved to nearby Michigan City, Indiana, where they made a small living selling handmade furniture. Alice bore her husband two daughters but he treated her terribly, often beating her severely.
In 1925, Alice died in her home, shortly after the birth of her second daughter. The official cause of death was said to be uremic poisoning...complicated by repeated blows to her back and stomach. Wilson disappeared and later turned up in a California prison, serving time for auto theft. The fate of Alice's daughters is unknown.

So ended the life of Alice Gray.... or did it? Legends of the dunes say that Alice still returns to the beach and the wilderness that she loved so much. Over the years, many have claimed that they have seen the ghostly figure of a nude woman running along the sand or disappearing into the water. Perhaps she does still walk here, the trials and pain of her lonely and sad life forgotten, at least for a time, as she vanishes along her beloved beach or disappears into the waters of the lake.

The Dunes State Park is located east of Gary, Indiana, between Miller and Michigan City. Alice is buried in Oak Lawn Cemetery in Gary. Her burial site is lost as she was buried in a common potter's field.

© Copyright 1998 by Troy Taylor. All Rights Reserved.
Timelines are generally considered the lowest form of data visualization, because displaying data chronologically doesn't tend to provide much journalistic value. But The New York Times recently upended that theory with a timeline-cum-scattergraph of driving safety records. This timeline works because it's not actually a timeline: rather than running in a linear fashion, years have been plotted as dots, which are then set in a matrix whose axes measure the number of car crashes and miles driven. The focus of the piece is not on the historical angle of the story, but rather on the relationship between miles traveled and deaths on the road. Hannah Fairfield, the Times's senior graphics editor, said her inspiration for this piece came from reading Steven Levitt and Stephen Dubner's Super Freakonomics. Their schtick is that they don't always take data sets and trend reports at face value. In the book, the authors discuss the Fatality Analysis Reporting System, a database of reports on factors behind road accidents. This discussion sparked Fairfield's interest in looking at previously unexplored trends in driving safety records. She noticed that data about fatality records is almost always examined through the lens of miles driven every year. Thinking freakonomically, Fairfield started to probe further into the numbers. Fairfield said she took figures relating to deaths per mile driven over time and "tore them apart." This meant charting deaths per 100,000 people against miles driven over time and plotting the results as a scattergraph. Each dot represents the annual figure of how many car-related deaths there were in relation to miles driven. The outcome was that, as the American population grew, driving miles increased drastically. The production of this piece came together in what Fairfield described as a process similar to reporting any other story.
The bulk of her time was spent gathering data from various sources, including the National Highway Traffic Safety Administration (NHTSA) database, and then analyzing it all. After charting the numbers in various ways, she discovered the central element to her story: the plateau-drop pattern. That is, the number of road fatalities remained relatively steady during the 1970s and then took a drastic plunge; interrogating the data revealed a number of reasons for this, mainly technological advances and tighter road rules. In order to verify her hypothesis, she spoke to auto safety experts and the NHTSA. The design of this piece is simple and subtle. The topic of driving of course lends itself to movement, so the slightly curved and wonky lines of the graph are in keeping with the thematic tone without ostentatiously incorporating images of cars and roads. The six text blocks add the historical context needed to explain the drastic variations in the data. Despite the simplicity and lack of interactivity in this piece, it feels very animated. The graph snakes around the text, incorporating it into the piece and inviting the reader to read it. The reader looks at it and sees that as time has passed, auto fatalities have decreased while miles driven have increased. Making the annotations clear and readable was a big consideration for Fairfield. "For most people, it's not an intuitive chart, since readers are more used to seeing time on the horizontal axis rather than as dots on a scatterplot, so the design needs to guide readers through it," she said.

Anna Codrea-Rado is a digital media associate at the Tow Center for Digital Journalism at the Columbia University Graduate School of Journalism. Follow her on Twitter @annacod.
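The chart Fairfield built is what visualization practice calls a connected scatterplot: each year becomes a dot positioned by two measures, and the dots are joined in chronological order so the line traces time without time being an axis. A minimal sketch of the data preparation; all numbers below are illustrative placeholders, not NHTSA figures:

```python
# Illustrative sketch of a connected scatterplot's data: each year becomes
# one (x, y) point, and consecutive points are joined in chronological order.
# All numbers here are made-up placeholders, not real NHTSA figures.
records = [
    # (year, miles driven in billions, deaths per 100,000 people)
    (1970, 1100, 25.7),
    (1980, 1500, 22.5),
    (1990, 2100, 17.9),
    (2000, 2750, 14.9),
]

# Sort chronologically so the connecting line traces the passage of time,
# even though time itself is not plotted on either axis.
points = [(miles, deaths, str(year)) for year, miles, deaths in sorted(records)]

# Line segments between consecutive years give the "snaking" path.
segments = list(zip(points, points[1:]))
print(len(points), "dots,", len(segments), "segments")
```

A plotting library then draws the dots, the connecting segments, and a year label beside selected points; the explanatory text blocks Fairfield describes are placed manually around the path.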
On occasion, we have shared with French friends the fact that, in America, a person can move from one state to another and begin a new life with very little documentation. One can marry showing only a driver's license from another state, for example. This little detail just about always prompts the same response: "Incroyable! If you do not check his records, a man could have many wives at the same time."

"Well, it's possible," we say slowly, "But most people do not want to do that." This invariably brings smugly arched eyebrows and an exchange of knowing nods at what simpletons Americans are about human nature.

In order to marry in France, one must present a copy of the birth registration, which will show any previous marriages and divorces in the margin. One must also post banns. Increasingly, the online subscription databases for genealogy in France (which we have discussed in a previous post) are adding searchable collections of banns as well as of marriages. Now, we are seeing family genealogies based on research on those sites giving the date of one of the banns postings as that of a marriage, confusing the banns with the marriage. Thus, an explanation.

The posting of marriage banns (according to French sources, the word's origin is the Germanic bannjan -- a gloriously chantable word which means "proclamation"; according to English sources it comes from the Middle English word bane, meaning "summons") in France has been required since the twenty-fifth session of the Council of Trent in 1563. Banns ensure that the community be notified of a couple's intention to marry, giving anyone who opposes the marriage an opportunity to do so before the event. Originally, they had to be posted in the natal parishes of both persons of the couple. Added to this later was the requirement that they also be posted in the parishes where they lived, if different from where they were born. The procedure was to set the date for the marriage first.
Then, the banns - the proclamation of the intention to marry - were announced at mass in the parish church on the three Sundays before the planned wedding. A notice to the same effect may also have been posted on the church door for the three previous Sundays. After the Revolution, the banns were posted at the town hall, the Mairie. According to the chatterboxes we have interviewed, the sole purpose of banns is to curtail the supposedly indubitable urge of every Frenchman to have a cross-departmental harem.

Lest we think this archaic and no longer applicable, be informed that marriage banns are still required to be posted at the town hall, though only once, not three times. As the couple are permitted to marry only in the town where at least one of them resides, the banns still perform their original function. The marriage can take place no sooner than ten days and no later than one year after the posting.

In genealogical research, banns are useful for finding the acte de mariage. They give the names of the couple and sometimes their parents' names, a date, possibly a residence. However, banns cannot be taken as proof of marriage. There could have been banns; there even could have been a marriage contract signed, and yet, for any number of reasons, the marriage might not have taken place.

©2010 Anne Morddel
1. When Keats rewrites the medieval poem, "La Belle Dame sans Merci", he recreates the tale of fated love. But, most importantly, he frames love as the reason for living, and fuel of life. The only sounds in the poem are made by the lady; she sings "a faery's song," makes "sweet moan," and "sigh'd full sore." Besides these mentions of living sound, we are left with "no birds singing." When Keats uses sight in this poem, it is to the same end. With her wild eyes, "she looked at [the knight] as she did love," and he "nothing else saw all day long" when with her. In this way, the lady comes to embody life itself, because it is only her parts of the poem that are alive; all else is a barren wasteland, the "granary is full, and the harvest done," no action or motion. So, this becomes a tale not of deception and destruction, but rather a tale of life, true life, that can be experienced only when one is in a state of love? (Julianna Sassaman)

2. In the poem "Ode on Melancholy," Keats takes a sinister look at the human condition. The idea that all human pleasures are susceptible to pain, or do inevitably lead to pain, is a disturbing thought. Keats comments on the miserable power of melancholy, especially how it thrives on what is beautiful and desirable and turns it into its opposite.

She dwells with Beauty — Beauty that must die;
And joy, whose hand is ever at his lips
Bidding adieu; and aching Pleasure nigh,
Turning to poison while the bee-mouth sips:
Ay, in the very temple of Delight
Veil'd Melancholy has her sovran shrine,
Though seen of none save him whose strenuous tongue
Can burst Joy's grape against his palate fine;
His soul shall taste the sadness of her might,
And be among her cloudy trophies hung. (ll. 21-30)

In this passage, there seems to be an emphasis on lost hope. There seems to be this idea that true happiness is either ephemeral or unreachable. For example, Keats writes above about "Joy... Bidding adieu" and "Pleasure... Turning to poison."
Keats seems to be saying that happiness is a temptation which people are tragically prone to dream about, an unrealistic illusion. Having also read "La Belle Dame sans Merci: A Ballad," what similarities are evident in both poems? How does the imagery of both poems, and of this scene in particular, augment their themes? What do you think Keats thinks is more powerful - temptation or the suffering that comes from being melancholy? Considering that Keats sees the lurking of evil behind everything good, is there also hope behind what is miserable? (John Rosenblatt)

3. When comparing "Ode on a Grecian Urn" [text of poem] to "Ode on Melancholy", both poems appear to be different sides of the same coin; the first being joy and the latter being depression. "Ah, happy, happy boughs! that cannot shed / your leaves, nor ever bid the spring adieu," (lines 21-22) says Keats in "Ode on a Grecian Urn" and then follows with "But when the melancholy fit shall fall / sudden from heaven like a weeping cloud . . . / Then glut thy sorrow on a morning rose" (lines 11-12, 15) in "Ode on Melancholy." If these odes were intentionally published in this order, Keats would appear to be contradicting himself, but he may be arguing that melancholy is a prerequisite to the joy of life and a necessary component to its full appreciation. If this is in fact true, why does it make sense to place "Ode on a Grecian Urn" in front of "Ode on Melancholy"? (Dan Shindell)

4. The speaker's mind in Mont Blanc is overwhelmed by a moment of natural sublimity, positing that the human mind would be "vacant" if it couldn't extract significance from overwhelming experience. I am interested in the cultural centrality of the figure of the poet, how much of "Mont Blanc" [text of poem] is linked to a political perspective, how much of the language is metaphorical, and the poet's relation to the sociological reality of his time. In this passage

Dizzy Ravine!
and when I gaze on thee
I seem as in a trance sublime and strange
To muse on my own separate phantasy,
My own, my human mind, which passively
Now renders and receives fast influencings (ll. 34-38)

What is Shelley trying to say about his "own separate phantasy"? And is this "mus[ing]" related to the human capacity (or the poets' capacity) to perform a philosophical, artistic function, yet also a political function? And if the language is entirely metaphorical, what is he saying about the poets' relationship to the political uproar of this time? (Molly Rosen)

5. In the first three stanzas of Keats' "La Belle Dame sans Merci: A Ballad" a knight is being addressed by an unknown questioner, and the rest of the poem takes the form of his reply, which is full of Old and Middle English: gloam (meaning twilight, line 41), sedge (a marsh plant resembling coarse grass, line 47), and so on. In lines 41-43 ("I saw . . . found me here"), the knight, having seen ghosts of his dead peers, has a clear moment of shocked awakening, directly after which he encounters his questioner, who speaks in familiar Old English. And yet, in lines 27-28, "And sure in language strange she said — / I love thee true". Is Keats' knight experiencing linguistic evolution firsthand in this encounter with "La Belle Dame sans Merci"? Is this a poem of progress, or of nostalgia? (Geoffrey Litwack)

6. Keats seems to believe that melancholy is in fact necessary and must be experienced in an atmosphere conducive to happiness in order to be truly felt:

Nor let the beetle, nor the death-moth be
Your mournful Psyche, nor the downy owl
A partner in your sorrow's mysteries;
For shade to shade will come too drowsily,
And drown the wakeful anguish of the soul.

Melancholy may be useful to Keats as a poet, but is it a fundamental part of being human, or would we function better if we were happier more often? (Megan Lynch)

7.
Keats, perhaps the most alluring poet we've read as of yet, cradles his reader in a world of erotic mysticism balanced with reflection of painful reality. By creating a narrator who becomes enveloped in the illusive world of sensory experience that is immediately inspired by the delicacies of nature, Keats entrances his reader. By concentrating his poem, "Ode to a Nightingale", on a dream-like diversion from reality, Keats offers a poetry of escape:

Fade far away, dissolve, and quite forget
What thou among the leaves hast never known,
The weariness, the fever, and the fret. (lines 21-23)

Yet as we are lured from the harshness of reality, we are at once being reminded of our humanness. For instance, with the mention of death, Keats explores the pleasantries of experiencing a painless death, while simultaneously reminding us of our mortality. By the end of the poem, when the narrator asks, "Do I wake or sleep?" (line 80), has Keats struck us with a deeper sense of reality or illusion? (Kristen Dodge)

8. Getting back to Wordsworth's "Tintern Abbey": We've discussed the poem in various ways, but what about its quality as a piece of poetry? What about Wordsworth's talent as a poet? I make the contention that "Tintern Abbey" is simply bad poetry. Take for instance the line(s) "If this / Be but a vain belief, yet, oh! how oft". Compare Wordsworth's trite diction with Shelley's, or Keats'. Can we not settle the matter and say that the latter are (in most instances) better poets than Wordsworth? (Wes Hamrick)

9. In "Ode on a Grecian Urn," John Keats is admiring a vase that captures a moment of the following things:
- two lovers chasing each other
- a pastoral piper under a tree playing his instrument
- the procession of priest and townspeople marching down the field.

Obviously, the people engraved on the vase are immobile, lifeless. Yet, as Keats circles around the vase and observes the engravings, he sees a story unfolding and things in motion.
He superimposes not only intense emotions, associated with each character and ambiance, but time. Can we argue then that the current observer of the vase will always experience such flow of events every time he looks at it? Can the following passage be evidence of Keats' fascination with the contrariety of permanence?

Therefore, ye soft pipes, play on...
Who are these coming to the sacrifice?
To what green altar, O mysterious priest,
Lead'st thou that heifer lowing at the skies
And all her silken flanks with garlands drest?

This is not to say that Keats is not acknowledging the rigidity of art. What then can be said about the timelessness of art such as this and its purpose? (Hyun Kim)

10. In "When I have fears that I may cease to be," John Keats reluctantly faces his own mortality. The poem's title (which is also its first line) is a leading clause that is not concluded until the poem's final three lines, and even at that point the central issue does not feel truly resolved. What is Keats' purpose in establishing an opening clause that he only answers with despair? Does Keats intend for his poem's depressing ending to stand as the only conclusion for those who have fears that they may cease to be? (Benjamin McAvay)

11. In "Ode on a Grecian Urn," Keats explores the incessant mystery of the past. Left with souvenirs of past moments, conversations, individuals, etc., Keats is full of questions, intrigued by a silence so full. Shelley, too, in Mont Blanc, addresses sound and the meanings and implications of silence. Like Keats, he realizes that silence is never really quiet, but instead full of secrets and hidden life: "The secret strength of things / Which governs thought...And what were thou, and earth, and stars, and sea / If to the human mind's imaginings / Silence and solitude were vacancy?" (669). "Ode on a Grecian Urn" recognizes the contradiction of silence: "Heard melodies are sweet, but those unheard / Are sweeter; therefore, ye soft pipes, play on" (793).
The past will always be loud because it is mysterious and sealed off:

"And, little town, thy streets for evermore
Will silent be; and not a soul to tell
Why thou art desolate, can e'er return...
Thou, silent form, dost tease us out of thought
As doth eternity: Cold Pastoral!" (793-794).

What is the intrigue of silence? A silent Grecian urn provides a myriad of possibilities and thoughts for Keats — would it be so interesting if its past could be explained? (Maura McKee)

Incorporated in the Victorian Web July 2000
Scientists of research institute MESA+ of Twente University have developed a technology for contactless deposition of liquids at the nanoscale. In doing so, they make use of an electric field. Their technology will lead to new 3D-applications and can be of great value to, for example, cell research, nano-lithography and printable electronics. The findings of the Twente-based Mesoscale Chemical Systems Department have recently been published in the academic journal Applied Physics Letters. In conventional techniques for liquid deposition, pressure is exerted on liquids, or capillary forces are used. This is done with the aid of a so-called AFM (Atomic Force Microscopy) 'dip-pen' probe or a 'nano-fountain pen' probe. These probes have been equipped with a tip which permeates the liquid. A disadvantage of this method is that several elements, such as humidity and liquid or surface properties, can affect the deposition negatively. The contactless deposition method with the AFM nano-fountain pen probe ensures a reliable and quick deposition of liquids on a 50 nanometre scale. This is thanks to the use of an electric field. By applying a voltage, the liquids inside the tip are charged. The difference between this charge and that of the surface causes the liquid to be pulled out of the probe. A relatively low voltage (60 volts) can already be sufficient. As the pulse duration increases, the volume of the liquid deposition will grow too. The research now published was carried out in collaboration with the company SmartTip. This spin-off of the University of Twente develops and produces smart probes with new functionalities. Researcher Joël Geerlings of the Mesoscale Chemical Systems Department expects that many new possible 3D-applications lie ahead with the development of the new deposition method. "Think of a 3D-printer with nanoscale resolution that produces a scaffold (construction) for cell research."
"Other applications are arrays of DNA or proteins, photonic crystals, microfluidic structures, printed electronics and MEMS structures (micro-electromechanical systems) for sensors, for example."

"Electric field controlled nanoscale contactless deposition using a nanofluidic scanning probe." Appl. Phys. Lett. 107, 123109 (2015); dx.doi.org/10.1063/1.4931354
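The voltage/pulse relationship reported in the article — deposition beginning around 60 V, with droplet volume growing as the pulse lengthens — can be caricatured in a few lines. Everything below (the threshold-plus-linear-growth model, the rate constant, the function name) is our illustrative assumption, not a result from the paper:

```python
def deposited_volume(voltage_v: float, pulse_s: float,
                     threshold_v: float = 60.0,
                     rate_per_s: float = 1.0) -> float:
    """Toy model of field-driven deposition: nothing is pulled out of the
    probe below the threshold voltage; above it, deposited volume (in
    arbitrary units) grows with pulse duration. Purely illustrative."""
    if voltage_v < threshold_v:
        return 0.0
    return rate_per_s * pulse_s

# Longer pulses deposit more liquid once the threshold is crossed:
volumes = [deposited_volume(60.0, t) for t in (0.001, 0.01, 0.1)]
```

The real dependence need not be linear; the point is only the qualitative shape the article describes: a voltage threshold, then volume increasing with pulse duration.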
118 words, the repetition is tighter, the sentence structure makes more sense. The frags work better. "We planted another boy today, old enough to husband, not around enough to father. Small mercies there. We've practiced so everybody knew what to do. We knew when to line streets and welcome him home. We knew the correct, repeated, so sorry words to say. Waited in line to say them to his parents and wife. We knew the honor guards' names from before. We knew to line the streets again as he went to rest. We could recite the well-worn words of commendation spoken before the open ground. We knew which casseroles we liked. We knew the precise moment to crack the right joke in the church basement. We were well practiced. Wars make for good practice."
Giving your children the information they need to be safe is one of your most important responsibilities as a parent. It's normal to feel uneasy about a subject that's surrounded by taboos and secrecy. You may also worry that you'll cause your children to be confused and fearful about sexuality or distrustful of all adults. This article, however, can increase your confidence and knowledge about sexual abuse and empower you to begin a discussion with your children today.

Why talking is important

Child sexual abuse is the exploitation or coercion of a child for the sexual gratification of someone else, usually an adult but sometimes an older child. Abusers are in a position of power over their victims. Some experts estimate that up to one in four girls and one in six boys are sexually abused before age 18, and approximately 90 percent of victims know their abuser. Abusers can be family members, friends, teachers, coaches or anyone else who may come into contact with children. Making sure children have accurate information about their bodies, rights, and the rules and strategies to follow when abuse occurs is the most important way parents can protect them from abuse. Without this knowledge, children are more vulnerable to the manipulations abusers often use to lure them into physical contact, and they're more likely to keep abuse secret once it has occurred.

Talking with kids about sexual abuse should be an ongoing process that's as natural as other safety lessons, such as staying on the sidewalk and buckling a seatbelt. Discussions can be casual conversations that take place any time during your normal routines, for example, while driving in the car, while you're playing or in response to a question or comment that gives you an opening. Keep your talks plain and simple without covering too much information at one time. A good way to start is by teaching your children the proper names for parts of the body, including private parts.
Explain that private parts are the parts covered by a swimsuit. You can begin this discussion when you think your child is old enough to understand, usually around age 3. Consider teaching children the correct names for private parts, that is, penis, vagina, breasts and buttocks, so they'll have the right words to tell you if someone does approach or harm them. Or you may have names for the private areas that you use only within your family. Regardless of the terms you use, what's most important is to instill in your children the knowledge that their bodies belong to them and deserve to be respected and protected.

As children learn about their bodies, they need to also learn that they have the right to say no to anyone who wants to touch them in a way that makes them feel uncomfortable, afraid or confused. You can tell them that this is true even when the person is an adult, a relative or someone they like and trust. For children to have confidence that their no has meaning, it's important that all members of your family have the same understanding: a no will always be respected, even when there's no harm intended. The right to say no is reinforced when families have clear and openly-discussed rules for respecting privacy and personal boundaries.

Safe and unsafe touches

When children are a little older, usually around age 5, they can begin to understand the differences between different types of touch. Explain to them that there is good touching, bad touching and secret touching. Tell them that most touching is good and makes them feel good. Bad touching is when someone hurts them by hitting or pinching or touching them in a way that doesn't feel good or doesn't seem right. Make sure your children understand that no one needs to touch their private parts unless it's a special situation like being examined by a doctor. Secret touching occurs when someone touches a child's private parts and wants to keep it a secret.
Explain to your children that secret touching can be confusing, and sometimes it's not easy to know what to do. But you can encourage your children to always trust their feelings about the way someone touches them, no matter who it is. Be sure they understand that secret touching is always wrong - but it's never the child's fault. Tell your children that adults can be wrong sometimes. Children should always say no to secret touching and get away from the person as fast as they can, even when it's someone in a position of authority such as a teacher, coach or clergy member.

Children need to understand how important it is to tell a parent or another trusted adult when someone bothers them. Abused children often keep it secret because they're embarrassed or afraid of upsetting their parents. The abuser may have used threats such as "I'll hurt you if you tell," or "No one will believe you." If the abuse has been going on for a while, they may be so ashamed about keeping secrets that they continue to keep them. If they're very young, or if they've been told that it's just a game, they may not understand that there's something to tell.

By practicing good communication in your family relationships, you can increase the likelihood that your children will disclose abuse if it happens. If you make a habit of talking to your children about their daily activities, listening to their concerns and caring about their feelings, they'll feel safer coming to you if something happens. Make sure your children understand that you may not be able to protect them unless you know what happened. They must tell you any time someone does something to make them feel scared, uncomfortable or confused - even if it's a person that your family knows and likes or looks up to. With younger children, you may need to be more direct and ask them to tell you if anyone touches their private parts.
Above all, be sure your children understand that when they tell you about something that's happened, you will believe, protect and not blame them. Keep in mind that children do not always disclose abuse in a straightforward way. They may choose to tell an adult other than a parent. They may only hint at what happened to see what kind of response they get or pretend it happened to someone else. Managing your own reaction to a disclosure of abuse is important because children may stop talking if their parents respond with strong emotion rather than calm reassurance.

Taking responsibility for personal safety

As children get older and begin spending more time in activities without close adult supervision, parents should teach them how to be responsible for their own personal safety. Talk with your school-age children about safety precautions in a variety of situations where there might be risk - for example, in video arcades, locker rooms and any isolated outdoor play areas. Discuss what can happen in different situations, and agree on safety rules for each one. Then check often to make sure that your children are following the rules.

Since the Internet has become a favorite vehicle for sexual predators looking for victims, it deserves special emphasis when you talk with children about taking responsibility for their own safety. They need to understand that the perception of anonymity when interacting in cyberspace makes it easier to take risks and participate in inappropriate or dangerous exchanges. If your children are frequent computer users, it's important to educate yourself about activities popular among young people such as instant messaging, social networking through sites like Facebook and exchanging photos and video. It's also important to establish firm guidelines and find ways to monitor their activities online. As children approach puberty, they become more aware of and interested in their own sexuality.
For safety's sake, it's important that older children learn about appropriate sexual behavior. When children start asking questions about relationships and sexuality or make observations about sexual content in the media, it's time to talk about your family's standards of sexual conduct and your expectations for your children's sexual behavior with peers.

Throughout the teenage years, continue to stress personal safety and responsible behavior. Because teenagers can be vulnerable to sexual abuse by an older person they feel romantically attracted to, it's also important to discuss dealing with being pressured to have sex. By talking openly about how confusing it can be when someone makes them feel good but doesn't respect their feelings, you can help your teenager make the right decisions in dating situations.

National organizations dedicated to child sexual abuse prevention can provide you with more information, including how to recognize signs of abuse and what to do if you suspect it. Two such organizations are The Child Molestation Research and Prevention Institute and The National Child Traumatic Stress Network.

Adults have a moral obligation to protect children from abuse and to take action whether it involves their child or someone else. Speak up if you see anyone behaving inappropriately toward a child, and report suspected abuse to your local child protective services agency or the installation Family Advocacy Program if you're in a military community. You can also call your state's child abuse reporting hotline or the Childhelp National Child Abuse Hotline at 800-422-4453 (4-A-CHILD).
Tasmania: 35-Million-Year-Old Tree Fossil

Julie Boyd from Global Learning Communities in Tasmania, Australia, mailed me the following newspaper clipping with a note that said, "Quite a bit of information is starting to emerge here which links Australia (and particularly Tasmania) to Gondwanaland, and this fossil is causing great excitement."

FOSSIL SEEN AS GONDWANA CLUE
by Claire Braund

"The discovery of a 35-million-year-old fossil of a monster plant in North-West Tasmania is 'stunning new evidence' of the existence of a Gondwana supercontinent, scientists have claimed.

A team of researchers from the University of Tasmania found the fossilized foliage of the giant conifer, Fitzroya tasmanensis, on the Lea River in the Cradle Mountain area about 18 months ago. Currently the tree, which has a base diameter of up to five metres, grows only in Chile in South America.

Head of the Department of Plant Science in Hobart, Professor Bob Hill, said that until the discovery in Tasmania there had been no reason to predict that the fitzroya had grown anywhere else in the world except South America. Prof. Hill, who is recognized as a world authority in the area of macrofossils, led the team of University of Tasmania researchers that found the fossilised plant. He said that while the find was not totally unexpected, it was a stunning confirmation of the existence of a supercontinent known as Gondwana.

He said the discovery of the fossil showed that once upon a time common forests ranged across Gondwana. These forests could have included the closely related Tasmanian king billy pine, fossilised remains of which were found near those of the fitzroya. Prof. Hill said that the discovery would help researchers establish why one species died out and the other survived when both had similar climatic requirements. He will go to Chile in January to present his findings at the second Southern Connection conference."

June Julian
firstname.lastname@example.org
The Innocence Project, which works to reverse wrongful convictions, including death penalty cases, maintains an impressive list of convictions overturned because of DNA forensic evidence:

The Causes of Wrongful Conviction

As the pace of DNA exonerations has grown across the country in recent years, wrongful convictions have revealed disturbing fissures and trends in our criminal justice system. Together, these cases show us how the criminal justice system is broken – and how urgently it needs to be fixed. We should learn from the system's failures. In each case where DNA has proven innocence beyond doubt, an overlapping array of causes has emerged – from mistakes to misconduct to factors of race and class.

Those exonerated by DNA testing aren't the only people who have been wrongfully convicted in recent decades. For every case that involves DNA, there are thousands that do not. Only a fraction of criminal cases involve biological evidence that can be subjected to DNA testing, and even when such evidence exists, it is often lost or destroyed after a conviction. Since they don't have access to a definitive test like DNA, many wrongfully convicted people have a slim chance of ever proving their innocence.

Here you will find further information about seven of the most common causes of wrongful convictions:

• Unvalidated or Improper Forensic Science
• False Confessions / Admissions
• Informants or Snitches

These factors are not the only causes of wrongful conviction. Each case is unique and many include a combination of the above issues. Review our case profiles to learn how the common causes of wrongful convictions have affected real cases and how these injustices could have been prevented. To stop these wrongful convictions from continuing, we must fix the criminal justice system. Click here to learn about Innocence Commissions, a reform that can help identify and address the fundamental flaws in the criminal justice system that lead to wrongful convictions.
The chart below represents contributing causes confirmed through Innocence Project research. Actual numbers may be higher, and other causes of wrongful convictions include government misconduct and bad lawyering.

And then there is the Texas Moratorium Network, which, along with the Texas Innocence Project, looks at cases where innocent people are wrongly convicted and, in some cases, executed. There is no lack of cases where Justice appears to have gone wrong. Do we really want a man who doesn't acknowledge forensic evidence or the possibility of innocent people being convicted as President? There are a lot more questions here to be asked - and answered; answered more honestly and with more depth and thought than Governor Perry appears to be giving the questions.
All of this science is quite interesting, but even more valuable is the example set by the most successful runners. The best runners in the world, as everyone knows, are the Kenyans. The diet of the typical Kenyan runner is 76 percent carbohydrate. Compare that to the diet of the typical American, which is less than 50 percent carbohydrate. Kenyan runners get the majority of their calories from ugali, a dish made entirely from cornmeal that supplies a whopping 38.5 grams of carbs per half-cup serving. The only runners whose abilities rival the Kenyans' are the Ethiopians. The diet of the typical Ethiopian runner is 78 percent carbohydrate!

This does not mean that you should automatically aim to get more than three-quarters of your daily calories from carbs. The amount of carbohydrate a runner needs in order to handle his or her training is tied to the amount of training he or she does. In my book, The New Rules of Marathon and Half-Marathon Nutrition, you will find a handy table that tells you how much carbohydrate to include in your diet based on how much you train (and your weight). The requirements vary from as little as 3 grams of carbs per kilogram of body weight daily for those who do just a few short runs per week all the way up to 10 g/kg for the heaviest trainers.

If you train very lightly, it is possible that you are already consuming more carbohydrate than you need to optimize your training capacity. But it's more likely that the guidelines in that table will require you to adjust your carbohydrate intake upward, especially during periods of heavy training, such as before a marathon. If you do, I am certain that you will get more out of your training and reach the finish line of your event faster.
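The g/kg guideline above can be turned into simple arithmetic. The sketch below is a rough illustration only: it interpolates linearly between the 3 g/kg and 10 g/kg endpoints mentioned in the text, and the 20-hour training ceiling is an assumption of this sketch, not a figure from the book, whose actual table may scale differently.

```python
def daily_carb_target(weight_kg, training_hours_per_week,
                      min_g_per_kg=3.0, max_g_per_kg=10.0,
                      max_hours=20.0):
    """Estimate daily carbohydrate intake in grams.

    Interpolates linearly between the 3 g/kg (light training) and
    10 g/kg (heaviest training) endpoints cited in the article.
    The 20-hour ceiling is a hypothetical assumption.
    """
    frac = min(training_hours_per_week, max_hours) / max_hours
    g_per_kg = min_g_per_kg + frac * (max_g_per_kg - min_g_per_kg)
    return g_per_kg * weight_kg

# A 70 kg runner training 10 hours per week:
print(round(daily_carb_target(70, 10)))  # 455 g/day under these assumptions
```

For the real numbers, consult the table in the book; the point of the sketch is only that the target scales with both body weight and training volume.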
The events of Chapter 11 help underscore the severe racial intolerance of many of the townspeople, and the extreme ostracizing the Finches undergo in the name of maintaining good conscience. Mrs. Dubose calls all black people "trash" without exception, and tests Jem's patience. Atticus wants the children to understand that courage has to do with the fight for one's personal goals, no matter what the odds are against achieving the goal. Heroism consists of the fight itself, the struggle against fate, circumstance, or any other overpowering force. Mrs. Dubose's goal is to break free from her addiction to morphine. Her struggle against the clock and mortality is easily compared to Atticus's struggle to uphold his own morals despite the hopelessness of his case and the lack of support he has in town. According to Atticus's definition, he and Mrs. Dubose are both brave, even heroic, and he wants the children to follow their example. Even though Mrs. Dubose is a mean and bigoted old woman, she does have good qualities that demand respect. Atticus wants the children to see that though many of the townspeople are ignorant and racist, they also have personal strengths and are not fundamentally bad people. Jem learns some lessons on how to remain impassive even when his father's judgment is questioned and criticized. Jem is usually calmer and quieter than Scout, but his outward calm often disguises as much hurt and anger as Scout feels and expresses. Because he so rarely expresses his rage in verbal or physical fights, he often ends up bottling his feelings up. When these feelings explode, as when he cuts up Mrs. Dubose's flowers, the explosion is much bigger and more destructive than anything Scout would normally do, and he finds himself extremely ashamed afterwards. Part of Scout and Jem's growing up consists of understanding how to manage their feelings of anger.
Scout must learn to calm her responses, whereas Jem may need to learn to find useful ways to express his feelings rather than suppress them.
Could Community Based Accountability Get the Federal Government Out of Our Schools?

Schools today are seeing an unprecedented expansion of federally-driven accountability practices. In addition to annual high stakes standardized tests, more and more students now take interim assessments for use in teacher evaluations, mandated by NCLB waivers and Race to the Top grants. Soon we will have beginning-of-the-year as well as spring testing so we can precisely measure growth. Common Core national standards will soon deliver standards-aligned curriculum and tests to thousands of schools across the nation. All of this is driven by the need to "hold teachers and schools accountable" for results.

The dictionary defines "accountable" as "subject to the obligation to report, explain, or justify something; responsible; answerable." There is a relationship implied here. Whoever is held accountable is obliged to report to someone else, who acts as the judge for the performance. No Child Left Behind has, in effect, empowered the federal government as this judge. Of course teachers and schools should be responsible for doing their work. But is it necessary or wise to have the federal government restructure our entire education system to allow them to exercise this level of supervision?

In the past, I have shared the ideas of former Nebraska Commissioner of Education Doug Christensen, who speaks of the importance of local initiative, and also Yong Zhao, who has written about what he calls "mass localism." Now, Julian Vasquez Heilig, along with a long list of co-authors, has offered a comprehensive framework he calls Community Based Accountability. Dr. Vasquez Heilig works in Texas, which pioneered the use of high stakes accountability and was the model for No Child Left Behind. However, as he points out, the state has seen little real growth in student learning from this approach.
Here is how the Executive Summary describes Community Based Accountability (CBA):

CBA involves a process where superintendents, school boards, school staff, parents, students and community stakeholders create a plan based on set short-term and long-term goals based on their local priorities.

- A CBA strategic plan developed at the local level would serve as an alternative to NCLB's intense focus on a top-down, one-size-fits-all policy. It would enable local communities to focus on the outcomes that really matter in addition to test scores (i.e. career readiness, college readiness, safety).
- This new form of accountability would allow communities to drive a locally based approach that focuses on a set of measures of educational quality for their one-year, five-year, and ten-year goals.
- The state and federal government role would be to calculate baselines, growth, and yearly ratings (Recognized, Low-Performing, etc.) for a set of goals that communities selected in a democratic process.

What would this process look like? According to the executive summary:

A lead agency (school board, non-profit, etc.) convened by local elected officials will fulfill the accountability mandate of the community via a democratic process. The process can include:

- Those leading the community process have standing in the community and are viewed as representative leaders.
- Infrastructure will be developed to convene members of the community to engage in educational discussions.
- Those involved have the ability and knowledge necessary to make decisions on educational issues for the community, or engage experts when necessary.
- There is a commitment to bring in whatever resources are needed to fulfill the community's direction and goals.
- Members of the community are engaged in and feel represented by the lead agency and the community process.

There are a number of things that are very appealing about this model. Each community has its own context and particularities.
It is important that there be a process whereby community members get involved and determine the goals and priorities that make sense for their schools. Schools are an integral part of their communities, and rely on their support and involvement. We need our community leaders, parents and students to share in a sense of responsibility for what happens in our schools. Schools should not bear this responsibility alone. A Community-Based Accountability process could build an understanding of these mutual responsibilities and strengthen our public schools. Our public schools belong to their communities, not to the federal government. They should be accountable to the people they serve, not distant officials. I wrote recently about the increasing level of skepticism regarding the Common Core standards from across the political spectrum. Texans have lived with this longer than anyone, and have been among those speaking out the loudest. Perhaps a community-based approach to accountability might be something people of varying points of view might be able to support. This also relates closely to a set of ideas enunciated recently by California Governor Jerry Brown. Brown states: The laws that are in fashion demand tightly constrained curricula and reams of accountability data. All the better if it requires quiz-bits of information, regurgitated at regular intervals and stored in vast computers. Performance metrics, of course, are invoked like talismans. Distant authorities crack the whip, demanding quantitative measures and a stark, single number to encapsulate the precise achievement level of every child. This year, as you consider new education laws, I ask you to consider the principle of Subsidiarity. Subsidiarity is the idea that a central authority should only perform those tasks which cannot be performed at a more immediate or local level. 
In other words, higher or more remote levels of government, like the state, should render assistance to local school districts, but always respect their primary jurisdiction and the dignity and freedom of teachers and students. Subsidiarity is offended when distant authorities prescribe in minute detail what is taught, how it is taught and how it is to be measured. I would prefer to trust our teachers who are in the classroom each day, doing the real work - lighting fires in young minds. Brown has proposed a new funding system that provides additional funds for high poverty schools, and gives districts more control of their spending. More details can be found here. What do you think? Is the Community Based Accountability model worth exploring? Continue the dialogue with me on Twitter at @AnthonyCody
On January 20th, 2010, about a week after NYC Mayor Bloomberg proposed a controversial salt reduction initiative, evidence was presented in the New England Journal of Medicine that salt reduction truly can save lives. Using mathematical models, the authors were able to make estimates of cardiovascular disease rates based on a population-wide 3 g decrease in salt consumption (1200 mg sodium). By their projections, a 3 g salt reduction would result in 60,000 fewer cases of coronary heart disease, 32,000 fewer strokes, and 54,000 fewer heart attacks each year. This is comparable to the cardiovascular benefit from smoking cessation efforts. These estimates don't even take into account the beneficial effects on other diseases related to salt excess, like osteoporosis, kidney disease, and stomach cancer.

Health care costs were predicted to decrease by $10 billion to $24 billion, making this type of intervention much more cost-effective than medicating people who have hypertension. With health care reform at the forefront of American politics, this study highlights the value of prevention in bringing down costs. Since about 80% of salt in the diet is already in the food when it is purchased, this intervention must occur at a national policy level rather than a personal responsibility level - hopefully, these data will not be ignored by policymakers. A 1200 mg decrease in sodium consumption would represent a 34.3% drop in the sodium consumption of average Americans, somewhat more ambitious than the 25% reduction proposed by Mayor Bloomberg. But based on the above figures, even a 25% reduction is likely to bring cardiovascular benefits.

References:

Bibbins-Domingo K et al. Projected Effect of Dietary Salt Reductions on Future Cardiovascular Disease. NEJM. Published at www.nejm.org January 20, 2010 (10.1056/NEJMoa0907355)

Appel LJ and Anderson CAM. Compelling Evidence for Public Health Action to Reduce Salt Intake. NEJM. Published at www.nejm.org January 20, 2010 (10.1056/NEJMe0910352)
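The salt-to-sodium conversion used in these figures is simple chemistry: sodium makes up roughly 39% of table salt (NaCl) by mass. The quick sanity check below is my own sketch, not part of the study; it only verifies that the article's 3 g salt / 1200 mg sodium and 34.3% figures are internally consistent.

```python
# Sodium fraction of NaCl by standard atomic masses: Na ~22.99, Cl ~35.45
NA, CL = 22.99, 35.45
sodium_fraction = NA / (NA + CL)                       # ~0.393

salt_reduction_g = 3.0
sodium_reduction_mg = salt_reduction_g * 1000 * sodium_fraction
print(round(sodium_reduction_mg))                      # ~1180 mg, i.e. the ~1200 mg in the text

# A 1200 mg cut described as a 34.3% drop implies a baseline intake of:
baseline_mg = 1200 / 0.343
print(round(baseline_mg))                              # ~3499 mg sodium per day
```

That implied baseline of roughly 3,500 mg sodium per day is consistent with commonly cited estimates of average American intake, which is reassuring for the article's arithmetic.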
Oregon’s new solar powered highway reinforces the state’s commitment to sustainable renewable energy. It takes a lot of energy to power Oregon’s state transportation system--about 45 million kilowatt-hours. Developers just broke ground on the renovations to the interchange at Interstate 5 and Interstate 205 in Tualatin to install an 8,000 square-foot solar photovoltaic system.

The solar panels will produce electricity during the day, sending it back to the Portland General Electric grid. At nightfall, PGE will return an equivalent amount of electricity to the highway to keep the area lit. This exchange of solar power will supply about 28% of the kilowatt hours needed to light the interchange.

This sustainable highway project is an all-Oregon effort. All building materials, including the essential solar panels, are provided by Oregon companies, stimulating the local economy. The solar highway is the first of its kind in the United States, but similar highways are already in place in Switzerland and Germany. Oregon’s solar powered highway is set to be completed in December 2008.
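The day-for-night exchange described above is a net-metering arrangement: the grid effectively acts as a battery, banking daytime production and returning an equivalent amount after dark. The sketch below illustrates the accounting with hypothetical daily figures; none of these numbers come from the article except the roughly 28% share cited for the interchange.

```python
def net_metering_balance(generated_kwh, consumed_kwh):
    """Return (banked, drawn, shortfall) for one day of net metering.

    banked:    daytime production credited to the utility
    drawn:     equivalent energy returned for lighting at night
    shortfall: remaining demand, billed normally
    """
    banked = generated_kwh
    drawn = min(consumed_kwh, banked)
    shortfall = consumed_kwh - drawn
    return banked, drawn, shortfall

# Hypothetical: panels bank 280 kWh on a day the lights need 1000 kWh,
# matching the ~28% share the article cites for the interchange.
banked, drawn, shortfall = net_metering_balance(280, 1000)
print(drawn / 1000)  # 0.28 of the lighting load covered by the exchange
```

The design point is that no on-site storage is needed: the utility absorbs the mismatch between when the panels produce and when the highway lights consume.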
Analysis: Form and Meter

Whitman's particular style of writing has come to be known as "free verse," but not everyone agrees with this term. The term "free verse" was popularized by 20th century poets like William Carlos Williams and Allen Ginsberg whom Whitman inspired. The term means "a poem with no regular form or meter." If that's the definition, then "Song of Myself" is free verse. Other critics prefer not to use the term "free verse," arguing that Whitman borrows forms and styles from all over the place. According to this train of thought, labeling Whitman's poetry "free verse" would cover up the vast diversity of styles he draws from. Either way, we don't think it's a huge deal. Technical terms in poetry can be overrated.

Besides, a verse of Whitman's poetry is recognizable from a mile away. He uses tons of repetition, including the repetition of words at the beginning of lines, called "anaphora." His stanzas are frequently long lists, called "catalogues." And his lines are generally longer than those in most other classic poems. As we mentioned, Whitman does not use a regular meter, but his ear for rhythm is probably his greatest poetic strength. At some points he seems to slip into a traditional use of stresses and beats, as in this phrase from Section 1:

Hous-es and rooms are full of per-fumes

More often, though, he uses sharp beats at random, like someone reciting a hypnotic chant from Section 8:

The blab of the pave . . . . the tires and carts and sluff of boot-soles and talk of the prom-en-ad-ers

Gallons of ink have been spilled on Whitman's peculiar sense for rhythm, and your best bet is to explore the poem on your own. Finally, the original edition of the poem was not divided into sections. Whitman simply used stanzas of varying length and changed from topic to topic without warning.
In the 1867 edition of "Song of Myself," he divided the poem into 52 sections, and we use these sections to make it easier to refer to specific parts of this very long poem. These sections often center on a specific topic or vignette (mini-story), but they are somewhat arbitrary. If you have a version with section divisions, we recommend you also try reading one without the divisions. Viewing the poem as an organic and ever-changing whole can be a refreshing and liberating experience.
Incorrect. Perception is the organization, identification, and interpretation, which all require cognitive activity, and you admitted that perception contains no cognitive activity. It's clear by now that your definitions are faulty, to say the least. Stimuli produce a sensation, which produces a perception. Or, to put it another way, perceptions are reactions to sensations, which are reactions to stimuli. But you already said that sensation itself was a reaction. Have I made this clear enough now? Hardly. It should have been clear from context that I was saying that detection and sensation were synonymous, while perception was something else. But detection involves cognition, and perception, as you have already admitted, requires organization and interpretation, all cognitive activity. Given that the last so-called "conflation" you accused me of was due to your misunderstanding, I think it is a fair statement that this one is as well. Perhaps you should stop trying to play word games with my statements in what has so far proven a futile effort to negate them without having to actually rebut them. I am showing you huge holes in your definitions and you keep on in your obstinacy. You don't have a theory. Just come to grips with this. A sensation is not a sense. Sensations are produced by the senses. You said above, "A sensation is the reaction of an organ that has developed to detect certain things." So then a sense is an organ, and a sensation is that which the sense produces. So then the sensation is not in the external object but in the organ. There are only five known senses. So only five organs produce sensations? Could you delineate those for me and tell me which organs do not produce sensations? By the way, organisms which do not have brains (and therefore do not have cognitive ability) can still have perceptions based on sensations. We are not talking about other organisms. You didn't answer the question.
No, we're talking about sensation and perception, which are not limited to humans. Notice the OP: "How do WE account for the existence of numbers?" The operation of other creatures is irrelevant, and seeing that I have already shown a clear distinction between humans and other creatures in that we are the only species with grammar and dictionaries and mathematics, I am chalking this up to your obstinacy. Triangles (and triangularity) are hardly abstract. We can concretely define triangularity whereas we cannot concretely define an abstract idea. In order for a triangle to be drawn you must locate a distinct point, which I have proven impossible because you can show no distinction at all in the material world. Not at all. The problem is that for an abstract idea to be universal, it must be innate. Triangularity is universal but not innate. That was created by men. I don't even know what you mean by that. I know you don't, among many other things said here. This is a nonsensical attempt by you to try to obfuscate things. This is just another way to say what I quoted you saying above. As soon as I take you out into water too deep for you, you blame me for your incapability. The plain and simple fact is that logic has no independent existence. It is dependent on the "chronological physical world", as you put it. So is abstraction; abstract ideas do not exist in some separate realm besides the physical. Abstract ideas are created by minds which have developed enough to conceive of them, and without those minds (and the brains which produce those minds), they could not exist at all. Ad hoc. How could particular experience ever produce a universal in a mind? We are right back to the initial questions that you cannot answer. Given how you just attempted to quote-mine my statement to pretend I meant something other than what I actually did, it is utterly dishonest for you to try to accuse me of "shady behavior".
My full statement was, "Gill may have 'proven' that language was impossible, yet it clearly exists; we are using it to communicate with each other right now. If language was actually impossible, we would not be able to have this argument, therefore it clearly is possible." It should have been evident that I rejected Gill's statement from the get-go. I will repeat: ad hoc. You rejected her not logically, with arguments, but ad hoc: you justify the existence by the results, thus proving the principles by the conclusion. In other words, it's circular logic. No it is not, because an axiom is not a conclusion. Given that there is no such difference, your point is invalid. Abstraction is produced by the physical world, the same as the mind is produced by the brain. Without the physical structure to support it, you cannot have a mind (or abstraction). Ad hoc. You have yet to explain how something physical could produce abstraction. Universal ideas (such as triangles/triangularity) are natural consequences of a universe with spatial dimensions. Ad hoc. I have already shown that you have no definition of space and you cannot individuate anything. You just assert it. A triangle, for example, is meaningless if you cannot have straight lines and angles between those lines that add up to 180 degrees. Everything you just said depends on the existence of distinct points, which you only assert exist ad hoc. Moreover, with a line segment you need a fixed point, and that does not exist. There are no such things as fixed points. And science does not do that. On the contrary: Bertrand Russell said, "All inductive arguments in the last resort reduce themselves to the following form: 'If this is true, that is true: now that is true, therefore this is true.'" This argument is, of course, formally fallacious.
Suppose I were to say: "If bread is a stone and stones are nourishing, then this bread will nourish me; now this bread does nourish me; therefore it is a stone, and stones are nourishing." If I were to advance such an argument, I should certainly be thought foolish, yet it would not be fundamentally different from the argument upon which all scientific laws are based. (The Scientific Outlook by Bertrand Russell, page 51) By the way, this is exactly what you're doing with coherence theory. You're affirming the conclusion. I have already answered this. An axiom is not a conclusion. Scientific laws are nothing more than mathematical expressions to begin with. Thermodynamics, motion, gravity, whatever, they're just mathematical expressions of how those forces work, as seen from the "inside" (that is, we're affected by them too). It's not perfect, but going into science expecting perfection is silly. If the possibilities are infinite, then not only is science not perfect, its theories carry a probability of 1/infinity, which equals zero. It seems that you don't understand what scientific laws are to begin with. Your catalog is of your misunderstandings and obfuscations, as I've shown. As I have shown, the reader may substitute your use of "obfuscation" with "I don't have the ability to follow." This presupposes that abstractions have a separate existence from physicality. Which you have only asserted ad hoc. You must show evidence to support this, otherwise you are using circular logic to justify your position. And, by the way, it is not an assertion. It is based on the fact that you can't assert a conclusion and then validate the premise with that conclusion. From my axiom, not my conclusion, that the Protestant Canonical list is the embodiment of demonstrable human knowledge, I can deduce that there is a God who thinks thoughts. This God created the world and human beings in his image, which is essentially the rational faculty of man (Col 3:10).
I can also deduce that persons can be considered outside of a physical body (2 Cor 12:3). Thus the arche of all knowledge, in the genus of being, are divine ideas within a divine mind, and this divine mind has no physical brain. I speak to this issue in detail here: http://eternalpropositions.wordpress.com/2011/08/14/eighteen-theses-against-behaviorism-by-drake/ As I am not talking about operations (that was your assertion; you never gave any evidence to show that it was the case), that doesn't matter. Your view clearly sees truth as a demonstration of physical objects in the chronological/historical order. That is contrasted with my view of propositional demonstration, which is what you were rejecting when I replied at #91. As it happens to be the conclusion of coherence theory, you will have to excuse my skepticism as to your claim that it's nothing more than an axiom/postulate. Frankly, I don't buy it. The conclusion of coherence theory is "yes, it is coherent" or "no, it is not." My axiom is the Protestant canonical list. Don't confuse them. This has nothing at all to do with what I wrote. Asserting a confusion is not the same thing as explaining it. Notice how I consistently explain your conflations all throughout this dialogue. It appears we have reached a roadblock, as I have a strong inclination that you are not able (at this time) to understand quite a number of issues here, so continuing would be a waste of my time. I hope the best for you.
<urn:uuid:58ea80ce-0150-44f9-bc2d-94bbfc5aef9c>
CC-MAIN-2016-26
http://whywontgodhealamputees.com/forums/index.php?action=karmamessage;u=7762;topic=24348;m=543470;start=0;sort=action
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00024-ip-10-164-35-72.ec2.internal.warc.gz
en
0.966477
1,996
2.59375
3
Authors: Reginald B. Little
The Little Rule and Effect describe the cause of phenomena of physical and chemical transformations on the basis of spin antisymmetry and the consequent magnetism of the most fundamental elements of leptons and quarks, in particular electrons, protons and neutrons, causing orbital motions and mutual revolutionary motions (spinrevorbital) to determine the structure and the dynamics of nucleons, nuclei, atoms, molecules, bulk structures and even stellar structures. By considering the Little Effect in multi-body, confined, pressured, dense, temperate, and physicochemically open systems, new mechanisms and processes will be discovered, and explanations are given for the stability of multi-fermionic systems over a continuum of unstable perturbatory states with settling to stable discontinuum states (in accord with the quantum approximation) to avoid chaos in ways that have not been known or understood. On the basis of the Little Effect, the higher order terms of the Hamiltonian provide Einstein's missing link between quantum mechanics and relativity for a continuum of unstable states.
Comments: 51 Pages. Previously submitted to sciprint.org in 2005
[v1] 2012-12-02 11:57:18
<urn:uuid:8b321cc4-a659-40d2-ac77-8f449dc5157c>
CC-MAIN-2016-26
http://vixra.org/abs/1212.0011
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397213.30/warc/CC-MAIN-20160624154957-00113-ip-10-164-35-72.ec2.internal.warc.gz
en
0.86495
311
2.921875
3
As tough and durable as they are pretty, marigolds (Tagetes spp.) grace gardens with non-stop yellows, golds, oranges, russets and reds from spring until frost. Winter hardy only in U.S. Department of Agriculture plant hardiness zones 9 through 11, marigolds are killed by frost. The plants thrive as annuals throughout North America. While marigolds are undemanding and nearly care-free, judicious fertilizing gives them a little extra energy for the best continuous seasonal flowering performance.

Fertilize container marigolds every four to six weeks with a balanced liquid product for blooming plants throughout the growing season until frost. Do not fertilize perennial marigolds during the winter months.

Feed your garden marigolds a slow-release granular 11-40-6 fertilizer about seven to 10 days after you set them out in early spring. Feed returning perennial marigolds in early spring after the last frost for your region and before new growth emerges. Use about one teaspoon per plant. Sprinkle the product on the soil above the root zone. Water it in thoroughly. Do not overfeed marigolds. Too much fertilizer promotes lush foliage flushes and reduces flowering. Follow the packaging instructions.

Repeat the granular fertilizer application for garden marigolds in June or July. Broadcast about one teaspoon of the material between each plant. Some marigold varieties tend to flower less during the hottest weeks of the summer. A dose of fertilizer will perk them up and encourage them to resume blooming.

Things You Will Need
- Balanced liquid fertilizer for blooming plants
- Slow-release granular 11-40-6 fertilizer

Tip
- Deadhead marigolds as soon as blooms fade to encourage continuous flowering and keep the plants looking tidy.
<urn:uuid:448c53e9-d370-40bc-b5e5-05be5cb8ec96>
CC-MAIN-2016-26
http://homeguides.sfgate.com/fertilize-marigolds-63324.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396106.71/warc/CC-MAIN-20160624154956-00182-ip-10-164-35-72.ec2.internal.warc.gz
en
0.916817
384
2.703125
3
World's Diet Means Bad Things for Bessie and Wilbur We may have Meatless Mondays and a twice-elected presidential ticket that’s gone vegan, but while Americans have reduced their meat consumption in past years, the world is becoming an increasingly carnivorous place. You may have read the news stories about the rising demand for meat in countries such as India and China, where growing middle classes are shifting toward far meatier diets than were common in less prosperous times. Now, a study published in the Proceedings of the National Academy of Sciences attempts to locate humanity’s exact place on the trophic scale, which measures an organism’s ranking on the food chain. “Although trophic levels are among the most basic information collected for animals in ecosystems,” the study reads, “a human trophic level (HTL) has never been defined.” At level 1 on the scale you’ll find self-sustaining organisms like plants; at level 5.5 are true carnivores—predators like polar bears that consume lots of other, smaller mammals. Despite every steak-and-bacon-loving bro you’ve ever heard say, “I’m basically a carnivore,” humans are decidedly not; we had a median trophic level of 2.21 in 2009. That puts us “on par with other omnivores, such as pigs and anchovies, in the global food web,” Hannah Hoag writes for Nature. That number has crept upward, however, rising by 3 percent over the past 50 years. Around the time the incremental increase in meat consumption began, China was coming off three years of famine that killed between 20 and 43 million people. In India, 40 percent of the rural population and half of Indians residing in cities were living in poverty in 1960. More reliable food supplies, decreased poverty, and overall economic improvement in China and India are nothing to lament, but considering that meat production is responsible for more climate-change-causing emissions than the entire transportation sector, the uptick is worrisome. 
Still, the study shows that some regions of the world have shifted away from meat-heavy diets. “Places such as Iceland, Mongolia and Mauritania, where traditional diets are mostly based on meat, fish or dairy, have seen their trophic levels decline as they diversified their daily fare,” Hoag writes. So what’s the overall takeaway, on a personal diet level? At the very least, keep eating like pigs and anchovies, and not like polar bears.
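The arithmetic behind figures like the 2.21 median is standard in ecology: a consumer's trophic level is 1 plus the average trophic level of its food, weighted by each item's share of the diet. A minimal sketch of that formula in Python (the diet fractions below are made up for illustration, not the study's data):

```python
def trophic_level(diet):
    """Trophic level = 1 + sum(fraction_i * level_i) over diet items.

    diet maps item name -> (fraction of diet, trophic level of that item);
    fractions should sum to 1.
    """
    return 1.0 + sum(frac * level for frac, level in diet.values())

# Hypothetical mostly-plant diet: 80% plants (level 1), 20% meat (level 2).
example_diet = {"plants": (0.8, 1.0), "meat": (0.2, 2.0)}
# trophic_level(example_diet) -> 1 + 0.8*1 + 0.2*2 = 2.2,
# right in the omnivore range the study describes.
```

Under this formula, shifting diet share from plants toward meat is exactly what nudges a population's trophic level upward over time.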
<urn:uuid:1a84cb36-7764-4e88-b482-776c3b315493>
CC-MAIN-2016-26
http://www.takepart.com/article/2013/12/03/humans-are-increasingly-carnivorous
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397744.64/warc/CC-MAIN-20160624154957-00068-ip-10-164-35-72.ec2.internal.warc.gz
en
0.959442
540
3.046875
3
In part 1, we looked at the characteristics of online quizzes and explored how they could be used to assist or assess learning. In this instalment, we look at the various question formats and the types of learning for which they are suited.

In an adult learning context, factual information is usually supplemental to the core learning objective and more often than not just for general interest. However, some facts really do need to be known by heart: When was … ? What is … ? Who is … ? If it is essential that the learner can recall the information without prompting, then you have little choice but to ask a question that requires them to type the answer in. If it is only necessary that they are able to recognise the right answer, then various forms of multiple choice will do.

Concepts provide a common language for understanding a subject. Generally the aim is for the user to be able to identify the class or category to which given objects belong, whether these are tangible (like types of computer) or abstract (like schools of thought). The most common way of checking this knowledge is to provide the learner with examples and ask them to place these in the correct categories, as in the examples below:

A process explains how something works as a chain of cause and effect relationships. To check understanding of a process, you can ask questions about causes or about effects, as shown below:

In this instance our aim is for the learner to be able to identify the locations of parts of an object, device, physical space or system. The easiest way to check this knowledge is with a question that has the learner click on a given part, as shown below:

Procedural knowledge is tougher because in many cases what you really want to test is whether the learner can actually carry out the procedure rather than just answer questions about it.
However procedural knowledge is a first step and you can use a variety of questions to check learning: These examples were created in Articulate QuizMaker, although many quiz tools could do a similar job. In the next instalment we look at the principles underlying the writing of quiz questions.
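The distinction between recall, recognition and categorisation above can be made concrete with a small scoring sketch. This is illustrative only: the function names and scoring rules are my own assumptions, not QuizMaker's API or any quiz tool's behaviour.

```python
# Illustrative scoring helpers for the question formats discussed above.
# Names and rules are assumptions for this sketch, not a real quiz tool's API.

def score_free_recall(answer, accepted):
    """Recall without prompting: the learner must type the answer.
    Matching is case-insensitive and ignores surrounding whitespace."""
    return answer.strip().lower() in {a.lower() for a in accepted}

def score_multiple_choice(choice, correct):
    """Recognition: the learner only has to pick the right option."""
    return choice == correct

def score_categorisation(placements, answer_key):
    """Concept check: place each example in its category.
    Returns the fraction of examples placed correctly."""
    return sum(placements.get(item) == cat
               for item, cat in answer_key.items()) / len(answer_key)
```

The point of the sketch is the difference in demand on the learner: free recall requires exact production of the answer, while recognition and categorisation only require selecting among options presented.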
<urn:uuid:7e6d0ec0-b7cf-44e8-84dc-f81511ed70c5>
CC-MAIN-2016-26
http://onlignment.com/2011/08/a-practical-guide-to-creating-quizzes-part-2/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395166.84/warc/CC-MAIN-20160624154955-00174-ip-10-164-35-72.ec2.internal.warc.gz
en
0.955192
436
3.890625
4
Dr. Andrea Warner-Czyz, an assistant professor at the Callier Center for Communication Disorders in the School of Behavioral and Brain Sciences, recently wrote an article in the Journal of the Acoustical Society of America that found infants hear speech differently than adults. Two to three children out of 1,000 are born deaf, but cochlear implants, a technology that improves the odds of these children hearing, are becoming available to them at younger ages. A cochlear implant with more than eight or nine channels does not necessarily improve the hearing of speech in adults. This study is one of the first to examine how this signal degradation affects hearing speech in infants. Normal hearing babies participated in a test during which they heard speech sounds normally or as though the sounds had been processed through a 16- or 32-channel cochlear implant. The infants responded the same way to the 32-channel processed sounds as to the normal speech sounds; however, they could not tell the difference between the sounds processed as though through a 16-channel cochlear implant. The researcher concluded these results suggest that 6-month-old infants need less distortion and more frequency information than older children and adults to discriminate speech. Warner-Czyz recommends clinicians take these developmental differences into account when working with very young cochlear implant recipients. Article: "Vowel discrimination by hearing infants as a function of number of spectral channels" Jana Mueller, a doctoral student and Callier research assistant, and Dr. Christine Dollaghan (pictured), a professor at the Callier Center and the School of Behavioral and Brain Sciences (BBS), are authors of an article in the Journal of Speech, Language, and Hearing Research reviewing the accuracy of assessments for identifying executive function (EF) impairment in adults following acquired brain injury. 
Executive function impairments, which affect crucial skills such as planning, reasoning and judgment, are frequently observed in adults after head injury, but there is little agreement concerning the most accurate measures for diagnosing EF deficits. Electronic databases were searched for studies of executive function assessments in adults with acquired brain injury (ABI) that reported any of three values: likelihood ratios (LR); standardized group mean comparisons (d); or correlations among EF tests (r²). The searches found 1,417 citations; after full texts of 129 articles were reviewed, 34 studies were found to report at least one value of interest. Nineteen positive and negative LRs, 114 d-values, and 104 correlations concerning a wide variety of EF measures were synthesized. Though some point estimates were in the clinically informative range, all confidence intervals extended beyond it. The researchers concluded that strong evidence is lacking concerning diagnostic accuracy and concurrent validity of EF measures for adults with ABI. They recommended more studies aimed at improving the quality of evidence concerning EF tests. Article: "A systematic review of assessments for identifying executive function impairment in adults with acquired brain injury" Dr. Thomas Campbell, executive director of the Callier Center and Sara T. Martineau Professor in Communication Disorders, and Dr. Christine Dollaghan, a professor at Callier, are lead authors of an article examining the effects of traumatic brain injury in the Journal of Speech, Language, and Hearing Research. Using the percentage of consonants correct-revised (PCC-R) metric, the researchers evaluated 56 children injured under the age of 11 over 12 monthly sessions, beginning when the child produced more than 10 words. At each session, odds of normal-range PCC-R were compared in children injured at younger and older ages.
The researchers then calculated correlations between final PCC-R and age at injury, injury mechanism, gender, maternal education, residence, treatment, Glasgow Coma Score and intact brain volume. The PCC-Rs varied within and between children. The odds of normal-range PCC-R were significantly higher for children injured at a later age than for the younger group. Over a 12-month period, severe traumatic brain injury had more adverse effects for children whose ages placed them in the most intensive phase of speech-language development than for children injured later. Article: "Consonant accuracy after severe pediatric traumatic brain injury: a prospective cohort study" Dr. Christine Dollaghan, a professor in the Callier Center and the School of Behavioral and Brain Sciences, is co-author of an article on noun bias in bilingual children in the Journal of Child Language. Most previous research about cross-language variation in noun bias - or the tendency to favor nouns in early language development - came from comparing groups of monolingual children who were acquiring different languages. Because such groups also differed in developmental level and sociodemographic characteristics, it has been difficult to draw strong conclusions about whether noun bias varies in different languages. This new study compared noun bias in bilingual toddlers who were acquiring two languages: Mandarin and English. The percentage of nouns in Mandarin (38%) was significantly lower than the percentage of nouns in English (54%), suggesting that the preference for nouns is not equal across languages. Instead, the frequency of nouns used by parents in a particular language significantly influences young children's word choice. Article: "Language-specific noun bias: evidence from bilingual children" Dr.
Raul Rojas, assistant professor at the Callier Center for Communication Disorders and UT Dallas’ School of Behavioral and Brain Sciences, has contributed an article to the journal Child Development, which examines the language growth of children who speak Spanish and are learning English. Rojas co-authored the study with Dr. Aquiles Iglesias of Temple University. The researchers determined the shape of English language learners’ language growth patterns across 12,248 oral narrative language samples - 6,516 Spanish and 5,732 English. The samples were produced by 1,723 English language learners during the first three years of their formal schooling. Results indicated distinct trajectories of language growth over time for each language, affected at different rates by summer vacation and gender. Rojas and his team also found significant intra- and interindividual differences in initial status and growth rates across both languages. The research findings advance understanding of bilingual language growth and the varying trajectories for different language-learning groups. The study also provides a foundation for supporting the development of theoretical frameworks of bilingualism and the identification of possible language learner subgroups with different growth trajectories, Rojas wrote. Article: " The Language Growth of Spanish-Speaking English Language Learners” Dr. Christine Dollaghan, a professor at the Callier Center for Communication Disorders and UT Dallas’ School of Behavioral and Brain Sciences, has contributed to two recent articles related to evaluating children’s language impairment. She is sole author of a paper that appeared in the October edition of the Journal of Speech, Language, and Hearing Research. For that project, Dollaghan evaluated scores from more than 600 six-year-old children, some of whom had specific language impairment (SLI) and others who had normal language skills. 
She wanted to find out whether children with SLI appear to represent a qualitatively distinct group, or simply the children whose language skills fall at the lower end of the normal curve. The results showed that the children with SLI did not have qualitatively different language skills from their peers, suggesting that treatment approaches for SLI should be tailored to individual children rather than to a diagnostic label. Article: “Taxometric Analyses of Specific Language Impairment in 6-Year-Old Children” Dollaghan also is a co-author of an article in the November edition of Artificial Intelligence in Medicine with colleagues from UT Dallas’ Erik Jonsson School of Engineering and Computer Science, K. Gabani, T. Solorio, Y. Liu, and K. Hassanali. The study explored the use of automated computer-based methods, including natural language processing and machine learning, to identify children with SLI based on 15-minute conversation samples. The automated methods performed well, suggesting that future collaborations between computer science and communication disorders are likely to be useful. Article: “Exploring a corpus-based approach for detecting language impairment in monolingual English-speaking children” Dr. Mandy Maguire has an article slated for publication in the journal Brain and Cognition which provides a clearer understanding of how response inhibition develops in children. Maguire, an assistant professor in the School of Behavioral and Brain Sciences, investigates child development. She is interested in how the inhibitory process differs as tasks become more difficult because inhibition is necessary in many higher order cognitive tasks throughout childhood. For this study, she monitored the behaviors and brain responses of children in two groups—7 to 8 year olds and 10 to 11 year olds—in three fast tasks where children had to press a button 80% of the time and inhibit a button press response 20% of the time. The tasks differed in difficulty. 
By comparing across the three tasks she found that although the two groups had similar reaction times and error rates, the brain responses showed that they were using different strategies. Younger children were focused on when to press a button, but older children were using an adult-like strategy of looking for when not to press the button. "The results are important to our understanding of the developmental changes in inhibition that occur in middle-childhood or the ages of 6 to 11," Maguire said. "This may be of particular interest in studying children with inhibitory deficits such as attention-deficit disorder." Article: "How Semantic Categorization Influences Inhibitory Processing in Middle-childhood: An Event Related Potentials Study" Dr. William Katz, a professor in the School of Behavioral and Brain Sciences, is the lead author in a study on foreign accent syndrome (FAS), a rare disorder characterized by the emergence of a perceived foreign accent following brain damage. In this case study, researchers obtained functional magnetic resonance images (fMRI) of the brain during a speech task for an American English-speaking patient who presented with FAS without a known cause and was thought to sound "Swedish" or "Eastern European." Katz and his team used fMRI during a picture-naming task designed to broadly engage the speech motor network. The results suggested substantial brain reorganization for speech motor control. Testing of more patients who present with similar characteristics will be needed in order to better understand the neural bases of this disorder, both for patients of unknown etiology and for individuals who acquire FAS as the result of stroke or traumatic brain injury. This case study is scheduled to be published in the journal Neurocase. Article: "Neural Bases of the Foreign Accent Syndrome: A Functional Magnetic Resonance Imaging Case Study" Dr. 
Noah Sasson, an assistant professor in the School of Behavioral and Brain Sciences, is the lead author of an article detailing the benefits of comparing autism and schizophrenia for revealing mechanisms of social cognitive impairment. The article, published in the June print edition of the Journal of Neurodevelopmental Disorders, argues that direct comparison of social cognitive impairment can highlight shared and divergent mechanisms underlying pathways to social dysfunction. The process may provide significant clinical benefit by informing the development of tailored treatment efforts. While autism and schizophrenia share a long history of diagnostic confusion because of their overlap in social abnormalities, Dr. Sasson writes that "the goal of direct comparisons is not to conflate once again, but rather to reveal distinctions that illuminate disorder-specific mechanisms and pathways that contribute to social cognitive impairment." Dr. Sasson is currently conducting additional studies at the Callier Center that examine social cognition in adults with autism and adults with schizophrenia. Article: "The benefit of directly comparing autism and schizophrenia for revealing mechanisms of social cognitive impairment" Dr. Christine Dollaghan is the lead author in one of the first meta-analysis studies to examine the accuracy of tests currently being used to diagnose language impairments in the large and growing number of bilingual Spanish-English children in the U.S. The study, currently in press, can be accessed online in the Journal of Speech, Language, and Hearing Research. Dollaghan co-authored the study with UT Dallas graduate student Elizabeth Horner. "Children with language impairments have an increased risk of reading and academic difficulties, so it's important to diagnose them as early as possible," Dr. Dollaghan said.
Dollaghan, a professor in the School of Behavioral and Brain Sciences, and Horner found that evidence on accuracy could be located for 17 measures of language skill, ranging from standardized tests to professional observations to parent questionnaires. Although no measure was found to be definitive for diagnosing language impairments in this population, the majority yielded suggestive results. The study concluded with several suggestions for strengthening future research on diagnostic accuracy. Article: "Bilingual Language Assessment: A Meta-analysis of Diagnostic Accuracy"
<urn:uuid:c85c155a-2746-492c-8487-8a3bf2d013b7>
CC-MAIN-2016-26
https://utdallas.edu/calliercenter/research/in-press.php
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398869.97/warc/CC-MAIN-20160624154958-00040-ip-10-164-35-72.ec2.internal.warc.gz
en
0.947472
2,627
3.40625
3
Nicky Nzioki, Agnes Maganjo, Catherine Kariuki, University of Nairobi, Kenya
The aim of this paper is to review the current accessibility legislation on the built environment in Kenya and document the current situation in the light of persons with disabilities. The paper will briefly explain the various categories of disability. In addressing disability and current legislation, the paper will embark on a detailed description of government policies. In the light of the foregoing information, various legislations will be discussed, each in detail, as they affect the overall design of the built environment. The paper will compile in its conclusion a list of feasible recommendations which the government can consider in order to cater adequately for the disabled person. In itemizing the recommendations, a distinction will be made between those which can be tackled immediately (i.e. the addition of a ramp to buildings) and those which require to be implemented over time or to be included in new building designs. In many parts of the world, the 1970s emerged as the decade of persons with disabilities - when disabled people in different parts of the world started to band together and in a common voice demanded recognition for their existence, their needs and their rights. (Falta 1976) Among others, their agitation was based on familiar human rights principles such as equal opportunities, non-discrimination, integration and normalization. It was argued, and rightly so, that their lives were severely handicapped by social, political, economic and physical barriers in society which not only hampered their full participation in society but also reduced them to objects of pity and welfare recipients, and thus they suffered segregation and debasement. Patricia Falta, for instance, argued that "whatever program received by the handicapped has been in response to activist political pressure by the handicapped themselves rather than a recognition of their valid rights".
She cited unwillingness, fears and disgust "nurtured by traditional unfounded myths and unknowns" in the parts of the governments as the main factors which have hindered them from giving people with disabilities freedom to participate in development, the opportunity to develop their abilities, express their individuality as well as economic and social freedom. (Falta 1976) In very general terms, the needs facing disabled persons which should be addressed can be divided into two main categories: First and foremost is the need to be incorporated into the economic mainstream as full and equal members of society. Thus shifting emphasis from merely the physical strength or limitation to the brains. Secondly, and of equal importance, is the need for governments to recognize and address the needs of disabled people as equally important as those of other members of society and hence form the basis of all planning and development strategies. In response to these unique needs facing people with disabilities, 1981 was designated as the United Nations' Year of the Disabled Persons with the aim of sensitizing and directing the attention of the member states to the plight of disabled people. In 1982, all member states unanimously adopted the World Plan of Action aimed at making the physical environment accessible to all including persons with various types of disabilities. This was in recognition of the fact that the situation of the disabled persons should, as Weiss puts it, "be improved mainly by the adaptation of society and not necessarily through measures related to individuals. It is therefore the built environment that ought to adapt to peoples' possibilities of using it not the other way round". (Weiss 1984) In order to achieve the above objectives, it is important to develop programs and strategies that will eliminate all design barriers that tend to limit the degree of integration and independence of those with disabilities. 
This can, as has been pointed out, be achieved firstly by amending the Building Codes and Regulations to incorporate accessibility in the design and construction of the built environment and, secondly, by adapting existing buildings to the special needs of persons with disabilities. This, if achieved, will go a long way towards affording disabled persons an opportunity to integrate freely with the rest of society. It has been noted by many, and indeed by disabled people themselves, that "any attempts to segregate them into special schools, special housing, special transport, etc. is interpreted as an evidence of over-protection and patronizing of the society and indeed the worst oppression. This is because, to people with disabilities, all these acts are a constant reminder that they are different from the rest of society and this tends to extend handicapism to those already vulnerable". (Thiberg 1984)

Following the 1982 UN Resolutions, many countries, especially in the developed world, joined those which had already enacted access legislation. In countries such as Britain, the United States of America, the Scandinavian countries, and some developing countries such as China and South Africa, efforts were made to create a barrier-free environment in response to the international concern over the access of disabled persons to the built environment, and considerable success has been achieved towards this end.

For instance, Wangfujing Street in Beijing, China, is a very busy shopping street, yet disabled people could not use it despite the fact that most commercial activities were concentrated there; numerous obstructions hampered their access. Seeing this state of affairs, the Municipal Council decided to eliminate those architectural barriers by installing ramps at the entrances of selected commercial and recreational stores, with the surfaces of the ramps made of durable, non-slip materials.
In addition, lowered handrails were mounted on both sides of the ramps and in one of the public toilets. Along the full length of the street, sidewalk curbs were remodelled into ramps, thus creating unobstructed passage for wheelchair and crutch users as well as other pedestrians. Audio instruction boards were installed to help the visually impaired find their way along the street, and Braille signs were put up at bus stops. (Bai Demao 1987)

Another good example of accessibility legislation and effort is the Lesotho Paper on Accessibility, which gives the technical details to be taken into consideration in building a barrier-free environment. This has formed, and will continue to form, the basis for other African countries contemplating access legislation.

Despite success in this regard, there are feelings that a full understanding of disabled persons, and concrete progress towards giving them their full and rightful place, has not been achieved even in these countries. In the African region the problem is even more critical, with few countries having implemented the 1982 UN Resolution on the elimination of design and architectural barriers in the built environment. In Kenya in particular, one would not be exaggerating to say that "the built environment is designed by and for the healthy young persons, not children, the elderly, expectant mothers and disabled". (Biswas 1988)

In all the important legislation dealing with the design and construction of the built environment, there is a direct or implied requirement that the built environment should accord 'a reasonable degree of safety and accessibility' to the public using it. In interpreting the "reasonableness" of the design criteria, however, the yardstick used is that of a non-disabled person, so that safety and accessibility for those with a physical or mental limitation of one kind or another are completely omitted, as will be observed below.
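The Beijing retrofit described above amounts to checking each entrance against a few measurable criteria (ramp gradient, surface finish, handrails). As a purely illustrative sketch of how such criteria could be written into a building code as explicit, checkable rules, consider the following. The function name and the thresholds here (a 1:12 maximum gradient, handrails on both sides) are assumptions borrowed from common international accessibility practice, not taken from any Kenyan statute:

```python
# Illustrative sketch: checking an entrance ramp against hypothetical
# accessibility criteria. The 1:12 gradient limit and other thresholds
# are assumed, not drawn from any existing Kenyan law.

def check_ramp(rise_m, run_m, non_slip, handrails_both_sides):
    """Return a list of compliance problems (an empty list means it passes)."""
    problems = []
    if run_m <= 0:
        problems.append("ramp has no horizontal run")
    elif rise_m / run_m > 1 / 12:  # assumed 1:12 maximum gradient
        problems.append(f"gradient {rise_m}:{run_m} steeper than 1:12")
    if not non_slip:
        problems.append("surface is not non-slip")
    if not handrails_both_sides:
        problems.append("handrails missing on one or both sides")
    return problems

# A 0.5 m rise over a 5 m run is steeper than 1:12, so this entrance fails.
print(check_ramp(rise_m=0.5, run_m=5.0, non_slip=True, handrails_both_sides=False))
```

The point of such a formulation is that compliance becomes an objective pass/fail question at licensing time rather than a matter of interpretation.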
Before any attempt is made to review the existing access legislation in Kenya, it is important to define what we mean by 'disabled' persons and to group them into specific categories, on the premise that their specific needs for access to the built environment depend on the particular kind of disability. This is to avoid lumping persons with disabilities together under 'disabled persons' and thus sacrificing or overlooking their varied individual needs as far as facilitating their mobility is concerned. For the purposes of this paper, disabled persons shall be taken to mean those who, due to some physical, sensory, visual or mental impairment, experience difficulties in using the built environment if a building is not oriented to their needs. Their needs can be put into four main groups.

The first group, those with impaired mobility, includes all persons without legs, or whose legs are not able to support their weight, or whose legs need extra support. This group is further divided into three sub-groups. (John Hunt & Lesley Hoyes 1982) The first sub-group are people confined to bed by the nature of their disability or as a result of a chronic illness. In their case, accessibility to the wider built environment is largely irrelevant; their needs could be said to include a well-heated bedroom which offers a good view from the bed. The second sub-group, wheelchair users, require mobility buildings offering access; adequate space in all rooms, doorways and passages for wheelchair circulation; and handles, switches, windows and work surfaces at wheelchair height. The third sub-group comprises people who do not use wheelchairs and are able to walk only to a limited extent or with difficulty, in some cases only with the aid of another person. These people can comfortably be accommodated in mobility buildings with ramped access and wider doors.
In their research, John Hunt and Lesley Hoyes (1982) also found that age was an important factor in the incidence not only of physical disability but also of other types, such as sensory and mental disabilities. For instance, of the 1,446 disabled persons considered, 73 per cent were aged 60 years and over. As far as these people are concerned, the services provided should be based on the philosophy of enabling them through the provision of a comprehensive support system and the orientation of buildings to their needs. This offers them a chance to retain an active and independent lifestyle, in contrast to the former philosophy of building homes for care and attention, which is not only expensive but denies old persons an opportunity to be independent and active. In their conclusion, Hunt and Hoyes recommended the avoidance of steps; the provision of wider doors and passages; and the location of switches, windows, work surfaces and sanitary facilities at a comfortable height. These measures would benefit disabled people, pregnant women and children, and to a large extent make life easier for those who are still young.

The second group, the hearing impaired, includes persons with varying degrees of difficulty in hearing and communicating. Their needs with regard to the built environment revolve primarily around sound signals such as fire alarms, the sounds of falling objects, etc. Since their disability limits their hearing, very good visual indications should be included and planned for in the design of projects to ensure their safety in buildings. Additionally, some hearing problems are said medically to lead to imbalance of the body, so ramps are preferable to stairs in reducing the danger of falls, and handrails are an added advantage.

The third group, the visually impaired, are limited in mobility by blindness. Their safety and access are greatly improved by the introduction of audio signals in strategic places, such as audio instructions in elevators.
Also, Braille signs should be put up at notice boards, bus stops and all other places deemed necessary to facilitate their access to buildings. Sudden changes in floor level and uneven staircases hamper their mobility and indeed endanger their lives.

The fourth group are those with mental disability, which may be characterized by disorientation in time and space, memory loss, confusion and functional mental illness, among others. Among disabled persons, the mentally disabled suffer the greatest social rejection and segregation, especially in severe cases where the disabled person shouts at people, has poor personal hygiene, and so on. As J. A. Muir Gray notes, the isolation imposed on these people "to say the least is brainwashing and damaging to the victim", thus compounding an otherwise moderate problem. (Gray 1977)

Good, accessible design should therefore be flexible enough to cater for all these varying, and at times conflicting, needs of persons with disabilities. To group all the needs of individuals into one bundle, 'disabled', limits the flexibility of design in meeting the specialized needs of the different categories of disabled people. The challenge to the design team and to legislators is to incorporate as many of these needs as possible into the design at the least cost.

Although it is virtually impossible to increase the stock of buildings quickly (especially housing) because of financial limitations, much can be done to help people with disabilities in phases. This can be achieved through the adaptation of existing buildings to orient them to the needs of persons with disabilities, as in Beijing (see above). Most importantly, countries which have not yet drawn up access legislation can collect and analyse data and enact legislation for future construction.
The issue here is to ensure that all licences issued for the construction of public buildings are conditional on those buildings incorporating mobility standards in their design to meet the needs of persons with disabilities.

Kenya's 1970-74 Development Plan recognized the vital role played by welfare services rendered to the disadvantaged as a prerequisite for greater economic progress. The Plan asserted that although "certain services have no immediate economic implication, their neglect has severe effects on the well-being of the whole society", in recognition of the "truism that economic development cannot be divorced from the social advancement of the society". (1970-74 Development Plan)

In this regard, disabled persons were singled out as a priority group to be given special attention, considering the limited resources at the disposal of the government. This was to be achieved through the introduction of rehabilitation centers where persons with disabilities would be trained in various skills and rehabilitated to fit into the social mainstream. The government recognized that the development of the disability community was an asset to the nation rather than a liability. The same idea has been reflected throughout the subsequent Five-Year Development Plans, where the objective has been to provide welfare services to persons with disabilities.

Without discounting the important role played by welfare services, the complete omission of accessibility to the built environment as a basic need in integrating persons with disabilities into both the economic and social life of the community is glaring.
The factors leading to this state of affairs can only be speculated upon: lack of awareness on the part of government officials; traditional beliefs that, at best, disabled people should be treated with sympathy and pity and accorded welfare services on humanitarian grounds; and a lack of enthusiasm, even a looking down on those with disabilities in the old belief that they are less than human and hence not worth the bother. This, as will be seen below, is reflected in the complete omission of accessibility provisions from the various acts governing the construction industry. Acts such as the Street Act, the Building Code, the Public Health Act and the Factory Act all have some relevance here. However, for the purposes of this paper only the Building Code and the Public Health Act have been reviewed; the others, though important, are completely silent on the issue of accessibility, and to review them would amount to repetition. Nevertheless, in the event of access legislation being enacted in this country, all these acts, and any others related to the accessibility of the individual, should be revised to incorporate accessibility for disabled people.

The Public Health Act is the overriding legal authority regarding local by-laws on any matters that may be construed as affecting the health of the public. The Act does not itself define standards of design and construction, but it requests, and can require, local authorities (Municipal Councils and Urban and Area Councils) to make by-laws defining those requirements (Section 126A). As with all other technical design issues, the Act is completely silent on questions of access to buildings by persons with disabilities.

Currently, the detailed requirements for the erection of buildings in Kenya are contained in the Local Government (Adoptive By-Laws) Building Order 1968 (generally referred to as the Grade I By-Laws) and the Local Government (Adoptive By-Laws) (Grade II) Order 1968.
These two orders are published by the Republic of Kenya in a single volume under the title Building Code and are tantamount to a national building code, although it should be noted that they are adoptive, not mandatory: any municipal council may adopt them. The Code apparently assumes that the degree of safety and access it specifies caters for all persons of reasonably normal mobility; consequently, safety and accessibility for those with limitations of one kind or another are completely omitted. It is no surprise, then, that specifications for the sizes of door openings, corridors, stairs, etc. tend to assume a young person with no disability. Ramps, for instance, are treated very narrowly and not as a focus for accessibility. The fact that the codes are adoptive and not mandatory carries a lot of weight when we come to the enforcement of accessibility legislation.

Problems related to building codes and standards responsive to the safety needs of persons with disabilities can be grouped into five distinct areas: general problems, and problems of information transfer, movement, protection, and search and rescue.

In the category of general problems, there is a lack of data, in a form useful for making code decisions, relating the abilities and limitations associated with specific disabilities to various building types and uses. There is no distinctive database on the actual experience of disabled persons in emergencies. There is limited information on the abilities of disabled individuals in using building safeguards designed for the non-disabled. And there is a tendency in code-making bodies to categorize all types of disability together, impeding efforts to resolve problems related to specific disabilities.
Among the information transfer problems, current modes of notifying occupants of an initial threat to safety are ineffective for individuals with certain disabilities, and similarly the existing means by which occupants locate exits, areas of refuge and other safety features are ineffective for individuals with certain disabilities. Neither are disabled individuals provided with the information needed to evaluate personal risk in terms of their particular disabilities and the safety measures of the buildings they use, nor do current practices provide disabled individuals with a means of obtaining assistance during an emergency.

Among the movement problems, disabled individuals can have difficulty moving away from a threatening situation because they are obstructed by conditions or elements that become barriers because of their specific disabilities. These are conditions and elements not currently addressed in the relevant building code provisions, and they include floor coverings, grates, mats, hardware, illumination, signs, protruding objects and level changes. The length of time it takes a disabled person to move away from a threatening situation is a function of their particular disability, and no current code provisions take this kind of time-and-distance information into account.

People with disabilities often cannot use conventional exit systems. The use of stairs as emergency exits in multi-storey buildings does not satisfy the exit needs of individuals with certain disabilities, while traditional elevator standards preclude the use of elevators in emergencies. Certain configurations and sizes of corridors can create exit problems for disabled persons; this is also a problem for persons without disabilities, since it is important for everyone to be able to grasp a sense of direction immediately when exiting along a corridor in an emergency.
The size of door openings, and opening factors such as hardware, can create exit problems for disabled individuals. Certain disabilities may force individuals to seek safety within the building rather than trying to exit, and current practices may not provide adequate safety for them. In providing areas of refuge from fire and smoke, it is particularly important that the individuals expected to use such areas have confidence in their safety.

Among the search and rescue problems, it has been observed that in many building types emergency service personnel have limited ability to identify the presence and location of disabled individuals in an emergency. Certain conventional rescue techniques (e.g. the use of aerial ladders, or some carrying techniques) can pose problems in rescuing disabled individuals. The location and type of emergency warning systems may also hamper or preclude their use by certain persons with disabilities.

In making general recommendations for building codes and standards covering safety and persons with disabilities, it must be pointed out that no means of escape for disabled people should be either exclusive to them or of a character that would not meet building code standards for the general population. If it is determined that different escape strategies for people with disabilities would benefit both disabled and non-disabled persons, these potential escape strategies must be developed to fulfil all the requirements for acceptable escape. Building codes and standards for disabled people should be cost-sensitive and should not impose an undue burden on society's resources. They should be performance-based and readily amendable to take advantage of technological and other advances in life safety. To be credible, building codes and standards should be based on adequate, reliable data; at the same time, knowledge gaps should not be allowed to unduly impede the development of useful codes and standards.
Such codes and standards should be developed as integral parts of general life safety provisions; there should be no separate safety codes and standards for disabled people. Building codes and standards are best suited to controlling the physical elements of buildings and other aspects of the built environment; educational programs, management practices and the like are more efficiently dealt with by other means.

So far the only literature available in this area in Kenya is the work of a Task Force reporting to the Association of Professional Societies in East Africa (1989). At its 140th meeting, the Council of the Association set up a Task Force to look into ways of enacting changes in the building regulations in order to ensure the accessibility of physically disabled persons to all new buildings, and to buildings undergoing major renovations, to which the public normally has right of access. To achieve its objectives, the Task Force examined and studied at length the Lesotho Paper on Accessibility and the resolutions of the International Year of Disabled Persons.

The Task Force recommended that a new section covering ramps (sizes, construction, location, upkeep) be inserted into the Building Code, as is the case with stairways; that the following public institutions be included in the Sixth Schedule as public buildings: banks, post offices, central and local government buildings, and all other buildings to which the public has access; and, finally, that the final report of its work be sent to the Ministry of Local Government and the Attorney General for effecting the changes. As can be seen from these recommendations, the Task Force concentrated solely on the technical aspects of design, and even then the recommendations are general in nature and group all types of disability under one 'bundle'.
More serious work needs to be carried out to determine the proportion of disabled people in the total population by category, and to determine their access needs accordingly. As Sven Thiberg (1984) puts it, disabled persons have to participate in developing solutions to their problems and in defining the criteria by which those solutions are evaluated. Disabled people are to be considered the experts on their own lives, and as experts they should be included in the design, planning and execution of the built environment.

The immediate need, therefore, is to arouse the enthusiasm of people with disabilities and make them aware of their rights in the built environment. They should be encouraged to organize themselves not just for welfare services but as a pressure group to initiate, execute and implement their own policies. The tendency in Kenya has been for disabled people to organize themselves into societies which form a good forum for charitable aid rather than an active force for more tangible developmental strategies. Perhaps the starting point is for disabled persons to accept themselves, in order to be accepted by society.

Of importance, too, in any attempt to enact access legislation is the need to establish political and social awareness of, and enthusiasm for, it. This, as has been noted elsewhere, tends to be the basis of success or failure in access legislation. (Falta 1976) The creation of political, public and professional awareness of the plight of disabled people is essential to any attempt to change the current order of building design. To do this, there is a need to train and incorporate the following disciplines at the design stage of all buildings: behavioural scientists, doctors, estate managers, social workers and developers. This would involve a review of the current syllabus in schools of many disciplines; the Law Reform Commission would also benefit from the inclusion of some of these disciplines.
Training in architecture and related fields should be geared towards mobility design as a long-term tool in creating a barrier-free environment. Campaigns by community organizations, non-governmental organizations and welfare agencies would go a long way towards sensitizing the public to the needs of persons with disabilities.

Another important factor in creating a barrier-free environment is the need for an enforcement body. In places such as the United States, where access legislation has been in operation for a longer period, enforcement is cited as one of the factors determining success or failure. When the legislation was first introduced, voluntary compliance was found not to work, possibly because of the extra cost involved; consequently, it became imperative to create an enforceable act, violation of which would be punishable. (DeJong & Lifchez 1983)

In Kenya, as has been clearly shown, a wide research gap exists. Nationwide research should be carried out to ascertain the number of persons with disabilities and, more importantly, to translate their needs into the design process. In this way it will be possible to develop comprehensive standards and codes for accessibility legislation. One way of doing this is to attach conditions to development through legislation, constraining developers to devote a certain proportion of their developments to special needs; such constraints could also come through their financiers.

Falta, Patricia, in Housing and People, Vol. 7, No. 2, 1976.

Thiberg, Sven, in Report of the International Expert Seminar Building Concept for the Handicapped, CIB W84 Building Non-Handicapping Environments, Stockholm, April 1984.

Weiss, Hanne, in Report of the International Expert Seminar Building Concept for the Handicapped, CIB W84 Building Non-Handicapping Environments, Stockholm, April 1984.
Bai, Demao, in Report of the 2nd International Expert Seminar on Building Non-Handicapping Environments: Renewal of Inner Cities, CIB W84 Building Non-Handicapping Environments, Prague, October 1987.

Biswas, Ramesh Kumar, in Report of the 3rd International Expert Seminar on Building Non-Handicapping Environments: Accessibility Issues in Developing Countries, CIB W84 Building Non-Handicapping Environments, Tokyo, September 1988.

Hunt, John & Hoyes, Lesley, in The Journal of the Institute of Housing, Vol. 16, No. 4, 1982.

Gray, J. A. Muir, in The Journal of the Institute of Housing, Vol. 13, No. 3, 1977.

Republic of Kenya (1970), National Development Plan 1970-74, Nairobi: Government Printer.

The Public Health Act, Nairobi: Government Printer.

Local Government (Adoptive By-Laws) Building Order, 1968.

Council of the Association of Professional Societies in East Africa (1989), A Task Force Report on Building Regulations for the Physically Handicapped.

DeJong, G. & Lifchez, R., "Physical Disability and Public Policy", Scientific American, June 1983, Vol. 248, No. 6.
This video is definitely strange. It was taken in Tokyo Central Park on the afternoon the magnitude 9.0 earthquake struck northern Japan. What it shows has been described by some as liquefaction. I'm not sure that's what's going on here, but whatever it is, I think most people would find it very unsettling. That doesn't seem to be the case with the people in the park. Be sure to watch past the first minute (and the constantly barking dog), as that's when it gets the most interesting.

Check out this amazing map. It shows the number of foreshocks, the big quake, and aftershocks, as well as their location, date/time, depth, and magnitude. Stick with it: it starts off slowly, but it gets pretty horrifyingly spectacular.

Follow this link to an amazing overlay of before and after Japan tsunami aerial photos. A slide bar allows you to "swipe" the tsunami over the before photo to see the after effects.

The remaining 50 emergency workers were pulled from the Fukushima Daiichi plant tonight for an hour or so due to a spike in radiation levels. (They're back in now. For more on just how much radioactivity nuclear operators can be exposed to, read this NYTimes article.) The disaster is now rated a 6 on the 7-point scale. Three Mile Island was a 5; Chernobyl was a 7. 200,000 people within a 12-mile radius of the power plant have been evacuated. Another 140,000 people within a 20-mile radius have been told to stay inside, and a 19-mile no-fly zone has been imposed over the plant. The only good news tonight seems to be that the winds are blowing out to sea, helping to disperse the radiation away from populated areas. This MSNBC update also includes a good infographic about how much radiation people are generally exposed to. The Washington Post has a good interactive feature that sums up the crisis. More in the morning...
Courtesy NOAA

Some interesting scientific angles on the recent Japanese earthquake and subsequent human disasters: Fukushima Nuclear Accident – a simple and accurate explanation. This post is long, but does a great job of explaining exactly how a modern nuclear reactor works, and how engineers plan for natural disasters.

Courtesy USGS/Cascades Volcano Observatory

The gigantic volcano seething under Yellowstone National Park could be ready to erupt with the force of a thousand Mt. St. Helenses! Large parts of the U.S. could be buried under ash and toxic gas! Or, y'know, not. This story has popped up in a couple of places recently, including National Geographic's website and, more sensationally, the UK's Daily Mail. Shifts in the floor of Yellowstone's caldera indicate that magma may be pooling below the surface, a phenomenon that might be the very earliest stage of an eruption. Then again, it's difficult to predict volcanic eruptions with much accuracy because there's no good way to take measurements of phenomena happening so far below the earth's surface. Incidentally, the contrast in tone between the two stories makes them an interesting case study in science reporting: the Daily Mail plays up the possible risk and horrific consequences of an eruption, while National Geographic is much more matter-of-fact about the remoteness of that possibility. Which do you think makes better reading?

Alright, it's absolutely beautiful outside today. So what's up with this predicted flooding? Remember all that rain the week of September 20th? (We got 2-4" here in the Twin Cities, but areas to the southwest of us got as much as 10".)

Courtesy National Weather Service

It all had to go somewhere, and that somewhere was the Minnesota River. Why does that affect us here in St. Paul? Take a look at another map:

Courtesy NASA (Landsat)

Remember: rivers don't necessarily flow south. The reddish line is the Minnesota River. The blue is the Mississippi.
And that little blip just north of where the two rivers come together is downtown St. Paul. (The yellow ellipse is the area of highest rainfall.) All that rain is flowing right past us. And it should be impressive. The river's at 15.4' this morning (moderate flood stage), and predicted to crest at 18' (major flood stage) on Saturday morning. But the recent spate of lovely weather means that the flooding should pass quickly--today's prediction has the water level back under 17' by Monday morning. St. Paul police have closed all the river roads and parks, and are discouraging people from walking down by the river. But you can get a stellar view of everything from outside the Museum on Kellogg Plaza, or inside the museum from the Mississippi River Gallery on level 5.

It's been a crazy couple of days of rain.

Courtesy National Weather Service

Forecasters say it's mostly over, although we can expect some rainfall through Saturday. But rain upstream swells the rivers downstream, and flood watches and warnings are in effect for much of Minnesota. Here in downtown St. Paul, the river is expected to rise about ten feet over the next week.

"1128 am CDT Fri Sep 24 2010 The Flood Warning continues for the Mississippi River at St Paul.
- At 10:15 am Friday the stage was 6.8 feet.
- Moderate flooding is forecast.
- Flood stage is 14.0 feet.
- Forecast... rise above flood stage by early Wednesday morning and continue to rise to near 16.4 feet by early Friday morning.
- Impact... at 18.0 feet... Warner Road may become impassable due to high water.
- Impact... at 17.5 feet... Harriet Island begins to become submerged.
- Impact... at 17.0 feet... secondary flood walls are deployed at St Paul Airport.
- Impact... at 14.0 feet... portions of the Lilydale park area begin to experience flooding.
- Flood history... this crest compares to a previous crest of 18.4 feet on Mar 24 2010."

Still with me? Then check out Buzz coverage of the March 2010 flood along the Mississippi.
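The flood warning quoted above is essentially a table of stage thresholds and their local impacts. Just for fun, here's a tiny sketch of how a forecast crest can be mapped against those thresholds; the stage values and impact descriptions are taken straight from the quoted warning, but the lookup code itself is purely illustrative:

```python
# Stage thresholds (in feet) and local impacts, taken from the quoted
# NWS warning for the Mississippi River at St. Paul. The lookup itself
# is just an illustration of how such a threshold table works.

IMPACTS = [
    (14.0, "portions of the Lilydale park area begin to flood"),
    (17.0, "secondary flood walls are deployed at St Paul Airport"),
    (17.5, "Harriet Island begins to become submerged"),
    (18.0, "Warner Road may become impassable"),
]

def expected_impacts(crest_ft):
    """Return every impact whose threshold the forecast crest meets or exceeds."""
    return [impact for stage, impact in IMPACTS if crest_ft >= stage]

# The warning forecasts a crest near 16.4 feet -- above flood stage,
# but below the airport, Harriet Island, and Warner Road thresholds.
for impact in expected_impacts(16.4):
    print(impact)
```

At a 16.4' crest only the Lilydale flooding threshold is reached, which matches the "moderate flooding" language in the warning.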
Courtesy Paige Shoemaker

Next time you look at the clouds, shake your fist and yell at those jerks for making our lives difficult. You might look crazy, but somebody needs to tell those fools. While it's relatively easy to model temperature changes over the last century thanks to detailed records, clouds are trickier to understand because we don't have a similar history of cloud observations, and also because they are ornery. So in order to understand how clouds work, scientists are building a body of evidence to model cloud behavior and help show how clouds will impact our weather as well as our climate in the future. I believe they also plan to show those clouds who is the boss of them.

Like a child running loose in a toy store, hurricanes have always been difficult to predict because they can unexpectedly change direction. This confounds plans for evacuation, leading some people to leave areas that are never hit, leading others to stay put and potentially face nasty weather because they don't trust the meteorologist, and leading meteorologists to keep Advil in business. But since the '90s, our ability to predict where hurricanes will make landfall has become twice as accurate. This new prescience is due to the development and use of more accurate models of how clouds work, which is in turn due to a better understanding of cloud dynamics and faster computers. How about that, punk clouds? Intensity, however, remains elusive to model. (Shh, don't let them know we have a weakness!)

"While we pride ourselves that the track forecast is getting better and better, we remain humbled by the uncertainties of the science we don't yet understand," Schott said. "This is not an algebra question where there's only one right answer." Despite being a "forecasting nightmare," Earl ended up hitting about where it was predicted to go. This means that the right people were evacuated to avoid injury and fatality. That's right, stick your tail between your legs, Earl.
Connecting to climate

Short-term events such as hurricanes and other storms are difficult to predict, but climate change is a whole other world of uncertainty--again, thanks to those uncouth clouds. Climate scientists are developing new tools, such as satellite technologies that show how much light different cloud types reflect and models that demonstrate localized cloud processes. These approaches look specifically at certain groups of clouds and their patterns of change to add detail to older models that look at climate over larger scales.

Courtesy Nic McPhee

The problem with the older models is that they have a low resolution that doesn't accurately represent clouds, because the clouds are smaller than the models can show. Think of it like Google Maps--at first, you're looking at the entire planet, or a whole continent--this is similar to older, low-res climate models. The new models are like zooming in on a city--you can see bus stops, restaurants, and highways. But you have to zoom out to see how these small pieces relate to the larger surroundings. In a similar way, the new high-res models are helping to inform older models--this type of work is called multiscale modeling.

Researchers at the Center for Multiscale Modeling of Atmospheric Processes (CMMAP) are developing this exact type of model. You can read about their advances here. This work is important because it brings insight into questions about whether clouds will reflect or trap more sunlight, which can have a big impact on the rate of global warming. It also helps us understand whether geoengineering projects that alter clouds will really have the intended effect. Plus it's just one more way we can pwn clouds.
This week we are studying the markings on the globe. I decided to help the kids make a little model of a globe so they could paint, identify and highlight different markings on the globe. I bought Styrofoam balls at the local craft store. We stuck a wooden stick on one end of the ball to make it easier for the kids to hold and paint it. After the paint dried, my husband drew an outline of the world with Sharpie marker (so thankful he can draw, because I surely can not do it justice).

I drew the lines of latitude and longitude with pencil and had the kids try to trace over them. It was kind of hard for them to do it, so I ended up tracing them. They just stood there and watched me do it as we talked some about the globe and the markings on it. For the Equator and Prime Meridian, I wanted those lines to pop out and be different, so we used Puffy Paint to highlight those lines.

I am planning on putting the models on a string and hanging them on the ceiling in our schoolroom. I love to display the kids' work in the schoolroom so we can decorate their space with their work, plus they go around and start looking at all the stuff on the walls and they start reciting the material they have learned. It is so cool to me to watch them recall the information and be reassured that they are retaining the material, that it is all in their little brains. These are some of the great moments as a homeschool mom!

Here’s a video explaining how we made it:

Have a blessed day! 🙂
Malta is the least physically active country in the world, with nearly 72% of its population not getting recommended levels of exercise. Close behind is Serbia, with more than 68% of its residents reporting a lack of sufficient physical activity. The Maldives has the highest rate of appropriate exercise levels, with only 39% of its population failing to exercise enough. Despite this, Malta has an average life expectancy of about 81 years, while that of the Maldives is about 76 years.

More about exercise:

Just who is determining these statistics? Is there really some organization that feels justified to judge how the entire world should exercise?

A more rational scale would rate countries from Malta at zero to the Maldives at 100, with all other countries falling respectively in between.

If Malta is the least physically active, why is Malta not in the top 10 fattest countries of the world?
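The rescaling proposed in the comments is ordinary min-max normalization. A quick sketch, using the rounded inactivity figures quoted above (the formula and scale direction are my reading of the comment, not part of the original report):

```python
# Min-max rescaling: map each country's physical-inactivity rate onto a
# 0-100 scale, with the worst (Malta, ~72%) at 0 and the best
# (Maldives, ~39%) at 100. Figures are the rounded values quoted above.
inactivity = {"Malta": 72, "Serbia": 68, "Maldives": 39}

lo, hi = min(inactivity.values()), max(inactivity.values())
scores = {c: round(100 * (hi - v) / (hi - lo), 1) for c, v in inactivity.items()}
print(scores)  # Malta -> 0.0, Serbia -> 12.1, Maldives -> 100.0
```

Any other country's rate would slot in between the two endpoints the same way.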
palmetto palm or palmetto (pălmĕtˈō) [Span., = little palm], common name for palm trees of the genera Sabal and Serenoa, ranging from the sandy pinelands of the S United States to Colombia. Sabal palmetto, the common native palm of the Southeastern states, is one of the trees called cabbage palm; it has an erect stem and fan-shaped leaves that are edible when young. Palmetto wood is used for pilings and the leaves for thatch. South Carolina, where the tree is indigenous, is sometimes called the Palmetto State. In cooler climates the palmetto is often grown as a greenhouse ornamental. An extract of the dried ripe fruits of the saw palmetto, Serenoa repens, is used as an herbal remedy for prostate-related urinary conditions in men, though studies have questioned its efficacy. Palmetto palms are classified in the division Magnoliophyta, class Liliopsida, order Arecales, family Palmae. The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved.
The basics: what are watts, volts and amps?

If you are new to solar electricity, or any electricity for that matter, there are likely a whole lot of terms that are causing confusion. Don’t worry, you’re not dumb; the first time most people look at these terms and numbers it goes right over their heads too. It did to me. Let’s start with the basics of electricity here (and the ones you will see most in dealing with solar electricity): watts, volts and amps.

What is a watt?

Dictionary.com defines a watt as: “the SI unit of power, equivalent to one joule per second and equal to the power in a circuit in which a current of one ampere flows across a potential difference of one volt. Abbreviation: W, w.”

All cleared up? Probably not. A really basic definition of a watt is the amount of power provided by a circuit, basically the work performed. A basic incandescent light bulb requires 60w of power to perform its designed work, to illuminate. Make a bit more sense now?

What is a volt?

Dictionary.com defines a volt as: “the SI unit of potential difference and electromotive force, formally defined to be the difference of electric potential between two points of a conductor carrying a constant current of one ampere, when the power dissipated between these points is equal to one watt. Abbreviation: V”

What that is really saying is that volts are a measure of electrical pressure, how much electricity is being pushed through a circuit. Think of it like water pressure in a pipe. The amount of force with which the water is pushed through the pipe, either with a pump or through gravity, is similar to a volt in electricity.

What is an amp?
Dictionary.com defines an amp as: “the base SI unit of electrical current, equivalent to one coulomb per second, formally defined to be the constant current which if maintained in two straight parallel conductors of infinite length, of negligible circular cross section, and placed one meter apart in vacuum, would produce between these conductors a force equal to 2 × 10⁻⁷ newton per meter of length. Abbreviation: A, amp.”

Again, not likely very helpful. Aren’t dictionaries great? Let’s simplify a bit. What you really need to know is that an amp is the amount of electricity that flows past a given point. Again, think of the pipe. Where a volt was the amount of pressure forcing the water through the pipe, an amp is like the amount of water that is able to pass through the pipe.

The basic breakdown goes like this:
- the watt is the power, or the amount of work that can be completed.
- the volt is the amount of force with which the electricity is pushed through a circuit (like water pressure in a pipe).
- the amp is the amount of electricity passing through a circuit at a given point (like the amount of water through a pipe).

Great, now we know the basic terminology, but how is it all connected? And how do we put this newfound knowledge to practical use? The relationship between the three looks like this:

Watts = Volts x Amps
Volts = Watts / Amps
Amps = Watts / Volts

Which means, if we remember our high school math, when we have any two of those units, we can figure out the third. In real life what does that mean? Let’s use a cooking example, because really, who doesn’t love food? The slow cooker in my kitchen runs at 180w on low and 120v. So if I am cooking up a pulled pork roast, on low at 180w using 120v, how many amps must be used? Based on the equations above:

Amps = 180w / 120v
1.5A = 180w / 120v

Keep in mind, this example is based on AC (alternating current), while a solar system will store power in DC (direct current), and the equation doesn’t incorporate time.
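The watts/volts/amps relationship can also be sketched as a few one-line helpers (the function names here are mine, purely for illustration):

```python
# Power relationships: given any two of watts, volts, and amps,
# the third follows directly.

def watts_from(volts, amps):
    return volts * amps

def volts_from(watts, amps):
    return watts / amps

def amps_from(watts, volts):
    return watts / volts

# The slow-cooker example: 180 W on low, plugged into a 120 V circuit.
print(amps_from(180, 120))  # 1.5 (amps)
```

Plug in any two numbers you know and the third comes out, just like the high school algebra above.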
Since I’m pretty sure you’re close to, if not completely, overwhelmed by now, I will save those for another update soon.
In a series of unassuming and penetrating investigations, he asks basic questions, such as: What does it mean for something to occur? What is meant by "going" or by "coming"? Does the eye see? Does fire burn fuel? What is an example of being right? What does it mean to be wrong? Nagarjuna extends an invitation to open-minded and unprejudiced inquiry, and from his reader he asks for nothing more and nothing less than sincere and honest answers. Yet where are our answers? Once we begin to follow Nagarjuna's clear and direct steps, the gateway to the inconceivable emerges. Perhaps unexpectedly. The present work contains Nagarjuna's verses on the Middle Way, accompanied by Mabja Jangchub Tsondru's famed commentary, the Ornament of Reason. Active in the twelfth century, Mabja was among the first Tibetans to rely on the works of the Indian master Candrakirti, and his account of the Middle Way exercised a deep and lasting influence on the development of Madhyamaka philosophy in all four schools of Buddhism in Tibet. Sharp, concise, and yet comprehensive, the Ornament of Reason has been cherished by generations of scholar-practitioners. The late Khunu Lama Tenzin Gyaltsen Rinpoche, a renowned authority on the subject, often referred to this commentary as "the best there is." A visual outline of the commentary has been added that clearly shows the structure of each chapter and makes the arguments easier to follow.
People keep thinking up new things to do with computers, and hardware keeps evolving to let you do things you couldn't do before. It's hard to know where to begin, but let's take watching TV as an example.

In October 1965, if you wanted to watch "I Dream of Jeannie", you had to turn on the right station, at the right time. There were no other options: you couldn't record it or buy it or anything. And if you missed it, well, too bad. Maybe in a few years, they'd syndicate it and (if you were lucky) you could catch the show you missed sometime in 1970. Oh, and in 1965, nobody had a computer except corporations and universities, they were usually at least as big as a small car, and involving a computer in TV playback was purely a sci-fi notion.

By 1996, you could buy "I Dream of Jeannie" episodes on VHS, which you could then play on a VHS player that might have an embedded CPU, and a CRT TV which might also have an embedded CPU, which of course somebody had to program. There were devices you could use to capture the video output from your VHS into your computer, but the consumer-level hardware horsepower back then was so low, you could suck down a huge chunk of a state-of-the-art hard drive with one episode, and then you could only play it back in a little teeny window on your screen.

In 2006, they started releasing "I Dream of Jeannie" episodes on DVD. When you played them back, your DVD player and flat-screen TV both required an embedded CPU, both of which somebody had to program. Or, of course, you could play the DVD on your computer, which somebody had to program to do it. Shortly after the DVD releases, people started ripping the DVDs using DVD-ripping software that somebody had to program.
And then they edited the rips into clips (using non-linear video editing software that somebody had to program), and used their browsers (which somebody had to program) to upload their videos to YouTube (which somebody had to program) for other people to view in video playback browser plug-ins (which somebody had to program).

Now, you don't even have to rip the DVDs! You can use iTunes (which somebody had to program) and go to the iTunes Store (which somebody had to program) and purchase and download 140 different episodes of "I Dream of Jeannie", which you can play back using software on your computer (which somebody had to program).

While I haven't checked this, I strongly suspect you could also take those iTunes versions of "I Dream of Jeannie" and play them on your iPhone or iPod Touch, which, of course... somebody had to program. And I have absolutely no idea where "I Dream of Jeannie" is going to turn up next. Maybe on my wristwatch, or on some kind of wearable fabric, or projected onto the lenses of my glasses. But wherever it turns up, somebody will have to program it.

Office apps? Meh. I'd be perfectly happy still using Word 5.1 from the early 90s. But everything else? I've been programming for 40 years now, and I expect the market for programming work to just keep growing and growing.
Created: Tuesday, 07 May 2013 21:55 - Written by Super User

I don’t recall hearing the phrase "Lame Duck" more than I have recently. It is a reference to the House of Representatives mostly, but when considering the changes in the Senate, it applies to Congress as a whole. I thought it odd that the phrase "Lame Duck" was what was used. Why a duck? Why lame? I found out that it has been around for over 100 years. It came to us through England and was used on the London Stock Exchange. It was a term for a broker who defaulted on his debt, referring to the fact that he wasn’t able to keep up, just like a lame duck wouldn’t be able to keep up with the rest.

Today the term is pretty well confined to a politician who is at the end of the term, especially with a successor already elected. Like a lame duck, they can’t keep up with what is going on, because they are on their way out. Hmmm. Not sure if that lame duck story has any application at all.

So . . . we are in a lame duck session right now. Does that mean that Congress can’t keep up with the runaway inflation? That they can’t support the sinking morals of our society? Does it mean that they aren’t going to be able to pass any more legislation? No more intrusion into our health plans, education, state immigration efforts, plus a myriad of other things I could think of?

My thoughts - if a lame duck session means that they are not going anywhere quick, then, please, please, somebody break the other leg! We are probably better off with a Congress that can’t keep up - maybe we can finally get ahead!

"You can lead a man to Congress, but you can't make him think." - Milton Berle
Chapter 15 of the book Fabian Freeway. In the future as in the past, the continuing leadership of the Socialist movement in the United States resided in America’s Fabian Society, (1) the polite but persistent Intercollegiate Socialist Society, which changed its name but not its nature in 1921. Discarding the Socialist title, that by now had become a liability, it called itself the League for Industrial Democracy—the name under which it survives today. This alias implied no break with the destructive philosophy and goals of international Socialism. It was rather a device for pursuing them more discreetly, at a temporarily reduced speed. Few outsiders connected the term Industrial Democracy with those archetypes of Fabian Socialism, England’s Sidney and Beatrice Webb, who had used it as the title for one of their earliest propaganda books. The slogan adopted by the LID, “Production for use and not for profit,” originated with Belfort Bax, another vintage British Socialist. It was a handy formula for expressing Marxist aims in non-Marxist language. Although most of its members and friends now described themselves publicly as liberals, basically the American society remained the same. As ever, its self-appointed function was to produce the intellectual leaders and to formulate the plans for achieving an eventual Socialist State in America. Like its British model, the LID proposed to operate from the top down and meet the working masses halfway. Voting power and financial support would come from labor, which was to be organized as far as possible into industry-wide, Socialist-led unions. As the Lusk Committee only vaguely surmised, (2) British Socialists, not Russian nor German, had set the pattern for gradual social revolution to be followed in America and other English-speaking countries. The development of an elite, and research for planning and control purposes, were its primary tasks. 
Penetration and permeation of existing institutions, indirect rather than direct action, were its recommended procedures. Owing to the greater expanse and complexity of the United States as compared to England, and to the wide variety of opinions due to the varied national origins of its people, special emphasis had to be placed on the formation of opinion-shaping and policy-directing groups at every level—particularly in the fields of education, political action, economics and foreign relations. While as yet such groups existed only in embryo, and Socialist programs were in public disrepute, sooner or later the opportunity for a breakthrough would come. The way of the turtle was slow but sure. Superficially, some changes in LID operations were made in deference to the times. Adults were now frankly admitted to membership in an organization which they had always dominated. Student chapters, disrupted by the war, had almost disappeared; but until 1928 no direct effort was made to revive them in the name of the Students’ League for Industrial Democracy. For the moment, it seemed more prudent to operate through the new Intercollegiate Liberal League, formed in April, 1921, at a Harvard conference attended by 250 student delegates from assorted colleges.(3) Keynote speakers at this conference included such trusty troupers of the old Intercollegiate Socialist Society as Walter Lippmann, Henry Mussey, Charlotte Perkins Gilman and the Reverend John Haynes Holmes (4)—all billed as liberals rather than Socialists. The objectives of the organization, as stated in the prospectus, were even more carefully understated than those of the former ISS. They were: The cultivation of the open mind; the development of an informed student opinion on social, industrial, political and international questions. 
(5) Due to the reassuring tone of the prospectus and the psychological appeal of the word liberal, three presidents of leading Eastern colleges actually consented to address the organizing conference. (6) In his speech on that occasion, the Reverend Holmes invited students to “identify themselves with the labor world, and there to martyr themselves by preaching the gospel of free souls and love as the rule of life.” Vaguely, he predicted a revolution and added, “If you want to be on the side of fundamental right, you have got to be on the side of labor.” A militant advocate of pacificism during the war, Reverend Holmes had frequently been under surveillance by Federal agents. Intelligence sources reported that his speeches were used as propaganda material by the German Army in its efforts to break down the morale of American troops. Subsequent meetings of the Intercollegiate Liberal League dealt with what British Fabians of the period often referred to as “practical problems of the day.” Speakers were provided through the cooperation of the New Republic, whose literary editor, Robert Morss Lovett, was also president of the LID. Both English and American Fabian Socialists responded to the call. In January, 1923, the Fabian News of London announced: “W. A. Robson has gone to America for about six months, as a member of a small European mission which will lecture at the leading universities under the auspices of the Intercollegiate Liberal Union [sic].” Evidently a touch of Fabian elegance was needed, for the Liberal League’s Socialist slip was already showing. In 1922, that outspoken American Socialist, Upton Sinclair, making a tour of the universities, had delivered several lectures sponsored by the Intercollegiate Liberal League(7)—and very nearly succeeded in exposing its Socialist origin. 
Concerning such incidents, a committee of the American Association of University Professors reported tolerantly: “The Intercollegiate Liberal League suffered from misinterpretation, and somewhat at the hands of ‘heresy hunters.’” (8) In 1922, it merged with the Student Forum and its membership numbered a select 850 on eight college campuses. Like the young people whom it was schooling in duplicity, the parent LID cultivated a liberal look and an air of candid innocence. This pose was rendered more credible by the fact that certain troublesome “cooperators” had voluntarily withdrawn from the ISS. Gone but not forgotten were firebrands like Ella Reeve Bloor, Elizabeth Gurley Flynn, William Z. Foster and Robert Minor, who had been active in the violent IWW-led strikes of other years and who later became top functionaries in the Communist Party. No suspicion of Communist ties could be permitted to cast its shadow upon the League for Industrial Democracy, on which the future of the Socialist movement in America depended. Yet individual members and even ranking officers, acting independently or through subsidiary organizations, continued to display a puzzling solicitude for the well-being of illicit Communists. To an outsider it sometimes looked as if the chief concern of open-minded League members in the nineteen-twenties was to procure the survival of the illegal Communist Party, then calling itself the Workers’ Party, with whose methods they were officially in disagreement. In this connection, it may be pointed out that the role of the renovated LID was from the start a defensive one. After 1917, both public officials and the American public at large regarded Communism very much as Anarchism had been viewed in the eighteen-eighties and nineties. 
Since virtually all members of Communist parties here and abroad were former Socialists, and since a good many avowed Socialists (9) had now one foot in the Communist camp, the average American could hardly be expected to make much distinction between them. A respectable front was urgently needed. Like the Bellamy clubs of a previous era, the LID was called upon, not only to make Socialism acceptable under other names, but to preserve the whole social revolutionary movement in this country from possible extinction. “Left can speak to left”—a principle later voiced by the British Fabian, Ernest Bevin, at Potsdam—was its undeclared but pragmatic rule of action. There is no doubt that radicals of every kind were highly unpopular in the United States after World War I—and no doubt there were good reasons. Information had been received linking a number of left wing publications in this country with the Communist International’s propaganda headquarters in Berlin. As a result, the Department of Justice launched an all-out drive to immobilize centers of seditious propaganda in America. A series of raids was conducted in 1919-20 by order of Attorney General Mitchell Palmer, which led four Harvard Law School professors headed by Felix Frankfurter to file a protest with the Justice Department. (10) Socialist-liberal writers—enjoying themselves hugely, as Walter Lippmann recalls—joined forces to taunt and harass the earnest if unsophisticated officers of the law. When steps were also taken in 1919-20 to close the Rand School of Social Science on grounds that it harbored known Bolsheviks, (11) there was some fear that even the Intercollegiate Socialist Society itself might soon be exposed to summary action. Not only August Claessens, but a whole flock of ISS valued “cooperators” were listed as instructors and lecturers at the Rand School in June, 1919, (12) when the New York State Legislature appointed a committee headed by Senator Clayton R. 
Lusk to investigate radical activities. The Senator’s methods were of a classic simplicity. He issued a search warrant and called for State Troopers to escort the investigators who descended suddenly on the Rand School, impounding records and files. On the basis of evidence so obtained, the Committee took steps to close the school by court injunction and throw it into receivership. With the help of Samuel Untermeyer, a prominent New York attorney whose brother, Louis, taught Modern Poetry at the Rand School, the injunction was lifted and the school’s records were returned. Thereupon the so-called Lusk Laws were passed,(13) requiring all private schools in New York State to be licensed. The purpose was to close the Rand School on grounds that it did not meet the necessary qualifications. Here the hidden source of Socialist power in New York, hinted at by August Claessens, suddenly revealed itself. The attorney for the Rand School, Morris Hillquit, was backed by the mass indignation and voting power of the Amalgamated Clothing Workers and other Socialist-led trade unions. Prudently the Lusk Laws were vetoed in 1920 by that happy warrior, Governor Alfred Emanuel Smith, in what has been described as the most brilliant veto message of his career. The episode is significant because it marked the first step in an unholy alliance between the New York State Democratic organization and the Socialist-led needle trades unions: an alliance that was to put Franklin D. Roosevelt into the Governor’s mansion and eventually into the White House, and bring “democratic Socialists” into the highest councils of Government. Governor Smith’s veto of the Lusk Laws also offered a striking example of the uses of Fabian Socialist permeation in America—the technique recommended so warmly by Beatrice Webb, explained so clearly by Margaret Cole (14) and employed so successfully by British Fabians operating inside the Liberal Party in England.
It is a technique of inducing non-Socialists to do the work and the will of Socialists. No one supposes for a moment that Governor Al Smith was himself a Socialist; nor does anyone imagine he drafted that very brilliant veto message personally. Besides being an astute politician of the Tammany Hall stripe, Smith was a devout Catholic layman. To reach him required not only permeation at first hand, but permeation at second hand as well. In this instance, it may be noted that one of Governor Smith’s counselors on matters involving “social justice” was Father (later Monsignor) John Augustin Ryan of the National Catholic Welfare Council, (15) who in 1915 founded the Department of Social Sciences at the Catholic University of America. In an objective analysis entitled The Economic Thought of John A. Ryan, Dr. Patrick Gearty has revealed that much of Father Ryan’s thinking on social and economic matters was derived from John Atkinson Hobson, the British Fabian Socialist philosopher and avowed rationalist. In 1919, Father Ryan had already unveiled the draft of a postwar “reconstruction” plan, in an address delivered in West Virginia before the conservative Knights of Columbus. The Ryan plan has since been known by the somewhat misleading title of “The Bishops’ Program of Social Reconstruction,” because it was printed over the signatures of four Bishops who formed the National Catholic Welfare Council’s Executive Committee. It was reprinted in 1931, just prior to the election of Franklin Delano Roosevelt as President. An illuminating fact about the plan was that it took special note of “the social reconstruction program of the British Labor Party”—a program written by Sidney Webb and published as Labour and the New Social Order. Father Ryan specifically cited the “four pillars” of the Webb opus. Concerning them, he stated, “This program may properly be described as one of immediate radical reforms, leading to complete socialism …. 
Evidently this outcome cannot be approved by Catholics.”(16) True to Catholic orthodoxy, “complete Socialism” must be rejected; but not the bulk of the ill-begotten Fabian “reform” program. Illogically, Father Ryan praised the means while rejecting the end. Although his views certainly cannot be regarded as typical of the Catholic leaders of his day, he left disciples behind him and founded a school of thought which has since come to be accepted unquestioningly by many otherwise devout Catholic teachers and students of the social sciences. More concretely, Father Ryan defended in speeches and articles the right of the five expelled Socialist Assemblymen to be seated in the New York State Legislature. In 1922, his name appeared on the letterhead of the Labor Defense Council, a joint Socialist-Communist construct, set up to obtain funds for the legal defense of illegal Communists arrested at Bridgman, Michigan, whose attorney of record was Frank P. Walsh. Although controversial Catholic clerics of conservative economic views have occasionally been silenced, somehow John Augustin Ryan contrived to do very much as he pleased. At a later date he was frankly known as the padre of the New Deal; and for services rendered was honored in 1939 with a birthday dinner attended by more than six hundred persons. The guests included Supreme Court Justices Frankfurter, Douglas and Black, Secretary of Labor Frances Perkins, Secretary of the Treasury Henry A. Morgenthau, Jr., plus a liberal assortment of left wing trade union leaders, progressive educators and New Deal congressmen. There is no question that the moral influence of Father Ryan, coupled with considerations of practical politics, led Governor Smith in 1920 to intervene on behalf of the Rand School. In other respects, also, Smith anticipated that tolerance for Socialist programs and personalities which characterized his successor, Franklin D. Roosevelt. 
During Smith’s campaign for the Presidency in 1928, most of his eager supporters scarcely noticed it when he announced “over the radio” that he favored “public ownership of public power.” The Lusk Laws were briefly revived in 1921 under Governor Nathan Miller, but the Rand School continued to operate happily without a license. It even collaborated in opening a summer school at Camp Tamiment vaguely patterned after Fabian Summer Schools in Britain. There New Republic regulars George Soule and Stuart Chase, Mary Austin, Evans Clark and other LID pundits (17) tutored the humbler Rand School rank-and-file in Socialist politics, economics and general culture. With time and patience, the school settled its legal difficulties and has survived to the present day as a teaching, research, publishing and propaganda center of “peaceful” Marxism known as the Tamiment Institute. It has lived to enjoy 40th, 45th, 50th and 55th anniversary dinners, complete with souvenir booklets celebrating old times and old-timers. During its lifetime, it has been regularly favored with visits by leading British Fabians: from Bertrand Russell, John Strachey, M.P. and Norman Angell to Margaret Bondfield, M.P., Margaret Cole and Toni Sender, (18) representative of the International Confederation of Free Trade Unions at the United Nations. While no change in the Rand School’s outlook has ever been recorded, so far has Socialism been rehabilitated that the present Tamiment Institute now wears an aura of respectability in some academic circles. In the same year that the Lusk Laws were revived and every known radical organization in the country seemed to be under fire, the LID chose Robert Morss Lovett, professor of English at the University of Chicago, as its president, a post he was to hold for seventeen years. He was a man of keen intelligence, quiet charm and unfailing courtesy, with a thorough knowledge of nineteenth-century English prose sometimes called the literature of protest.
To paraphrase Henry Adams, Lovett had been educated for the nineteenth century and found himself obliged to live in the twentieth, a situation to which he was never quite reconciled. Born on Christmas Day to thrifty, pious New England parents, he came of pilgrim stock but never referred to it. He had graduated summa cum laude from Harvard in the days when Bellamy-type Socialism, adorned with touches of John Ruskin and William Morris, was attracting young Cambridge intellectuals; and he made connections there that lasted until his death at the age of eighty-four. During the eighteen-nineties, Lovett went to Chicago to assist University President William Rainey Harper in bringing culture and scholarship to the booming Midwest. Soon he became a sort of campus legend by virtue of his wit, audacity, kindly disposition and practically unshakable aplomb. An inveterate diner-out and something of a bon vivant, he was punctual in keeping appointments and punctilious in meeting his commitments, academic or social. Because of a certain engaging simplicity of manner, all his life people were eager to protect him and insisted he was somehow being taken advantage of—though the fact was that he invariably did as he chose, without excuses or explanations. Through his wife, a close friend of Jane Addams and Florence Kelley, Lovett was drawn into the circle of settlement workers, social reformers, pacifists, American Socialists and visiting British Fabians that revolved around Hull House. Due to his own pacifist activities during World War I, he became a scandal to patriots and a hero to Socialists. The event that transformed the rather aloof university professor into a public figure was a mammoth peace meeting in Chicago which ended in a riot. The circumstances under which Lovett happened to preside at that gathering shed some light on his subsequent career. At the last minute, the original chairman of the meeting failed to appear, and other possible substitutes evaporated.
Nobody of prominence could be found willing to take the responsibility for an event almost sure to provoke a public scandal. Obligingly and with a certain amused contempt for the absentees, Lovett agreed to act as chairman, thereby inaugurating a long and tangled career as front man for a legion of left wing organizations and committees. At moments when no one else of established reputation cared to expose himself, Lovett was always available. After the heat was off, others were pleased to take over. In 1919, Lovett was invited to New York to become editor of The Dial, a literary monthly attempting to endow radicalism with a protective facade of culture and to provide an outlet for the talents of young college-trained Socialists then beginning to throng to the great city. Among his youthful staff assistants on The Dial were Lewis Mumford, (19) who has since become something of an authority on civic architecture and city planning, and Vera Brittain, who later married Professor George Catlin, a prime architect of Atlantic Union. In a year or two, Lovett was made literary editor of the New Republic, a position he occupied six months of the year while retaining his chair at the University of Chicago. He was also named to the Pulitzer prize fiction awards committee. These vantage posts not only provided liberal cover for a confirmed Fabian Socialist, but enabled him to promote the new literature of protest, with its emphasis on “debunking” American institutions, that became popular in the nineteen-twenties and thirties. Through S. K. Ratcliffe, the New Republic’s long time London representative, and through that magazine’s opposite number in Britain, the New Statesman, it was easy enough to keep regularly in touch with the fountainhead of Fabian Socialism. 
So many eminent British Fabian authors and educators were busily traveling back and forth across the Atlantic, to share in the wealth of a country whose crassness they deplored, that they passed each other in transit on the high seas. Scarcely a one missed being entertained at the New Republic’s weekly staff luncheons, and Lovett and his associates were helpful in booking many on the lucrative university lecture circuit. As he confided to friends, Lovett longed to visit England; but was blacklisted by the British Foreign Office because he had aided some Hindu revolutionaries, only incidentally financed by German agents, during the war. Thus contacts between the Fabian Society of London and the titular head of its American affiliate necessarily remained indirect. For the time being, perhaps it was better so. Throughout the nineteen-twenties—while the United States was enjoying a giddy whirl of industrial growth and paper profits, and the outwitting of Prohibition agents became a major national pastime—there was always that same small, close-knit core of studious men and women bent on remaking the country according to a more or less veiled Marxist formula. Bitterly disappointed that world war had not produced a world-wide Socialist commonwealth, they still found much to console them in the international picture. The predominance of the Social Democratic Party in Germany; the existence of a somewhat crude but frankly all-Socialist State in Soviet Russia; and the emergence of the Fabian Socialist-controlled Labour Party as the second strongest political party in Britain: these developments gave them hope of being able some day to bring the unwilling United States to heel. True, the Socialist movement in America still seemed a comparatively small affair, foreign to the great majority of average Americans. 
Its appeal was still confined chiefly to social workers, rebel college professors and students, a handful of ambitious lawyers and wealthy ladies, and a few militant Socialist-led unions that were far from representing a majority in the ranks of American labor. The postwar scene, however, was enlivened by the addition of many college-trained young people, cut adrift from family discipline and religious moorings, who found companionship, a faith and ultimately well paid careers within the reorganized Socialist movement. The prestige of British Fabian authors in New York publishing and book review circles helped to open doors for their liberal brethren in the United States. Superficially, the American version of the British Fabian Society almost seemed, as it had in England, to be a species of logrolling literary society. Political power, however, was the prize for which it secretly yearned, insignificant as its efforts in that direction might appear at the moment to be. Socialist intellectuals already aspired to influence the military and foreign policy of the United States and continued to plan quietly for the creation of a Socialist State in America within a world federation of Socialist States. Their postwar aspirations had been foreshadowed in a “Wartime Program” issued early in 1917 by the American Union Against Militarism: a program that in a small way echoed the British Fabian Socialist plan contained in Leonard Woolf’s International Government. The “Wartime Program” stated: “With America’s entry into the war we must redouble our efforts to maintain democratic liberties, to destroy militarism, and to build towards world federation. Therefore, our immediate program is: “To oppose all legislation tending to fasten upon the United States in wartime any permanent military policy based on compulsory military training and service. “To organize legal advice and aid for all men conscientiously opposed to participation in war.
“To demand publication by the Government of all agreements or understandings with other nations. “To demand a clear and definite statement of the terms on which the United States will make peace. “To develop the ideal of internationalism in the minds of the American people to the end that this nation may stand firm for world federation at the end of the war. “To fight for the complete maintenance in wartime of the constitutional right of free speech, free press, peaceable assembly and freedom from unlawful search and seizure. With this end in view the Union has recently established a Civil Liberties Bureau ….” (20) Founders of the organization issuing that statement were described as “a group of well-known liberals.” (21) Closer inspection, however, reveals that virtually every member of its founders’ committee was a long-standing “cooperator” of the Intercollegiate Socialist Society, later the League for Industrial Democracy. (22) When it became evident after the war that the Union’s dream of world federation must be postponed, the LID remained the directive and policy-making body behind a gradual Socialist movement soliciting public support on a variety of pretexts. Its aims were promoted through a handful of closely related organizations, invariably staffed at the executive level by directors and officers of the League. Chief among them were the American Civil Liberties Union ( ACLU ), the Federated Press, and the American Fund for Public Service, also known as the Garland Fund, a self-exhausting trust which helped to forestall deficits in the other organizations and even contributed charitably to the subsistence of masked Communist enterprises. Through such organizations, the Socialist movement maintained discreet contacts with illegal Communist groups in the nineteen-twenties. William Z. Foster, identified then and later as a leader of the Communist Party, was both a director of the Federated Press and a trustee and indirect beneficiary of the Garland Fund.
As late as 1938, four acknowledged Communists served on the national committee of the ACLU. (23) While the LID stood aloof, taking no responsibility for the actions of its subsidiaries, their unity was visibly confirmed by the fact that Robert Morss Lovett held top posts in all four organizations. He was not only president of the LID, but a director of the ACLU and the Federated Press, which served a number of labor papers and left wing publications, both Socialist and Communist. Lovett also sat on the board of trustees of the Garland Fund, and he chaired a host of ephemeral committees. In fact, he appeared in so many capacities at once that he was sometimes compared to the character in W. S. Gilbert’s ballad who claimed to be the cook, captain and mate of the Nancy brig plus a number of other things. Obviously, Lovett could not really have directed all the organizations and committees over which he presided in the twenties and after. The administrative and editorial work of the League was handled by Harry Laidler, aided after 1922 by the former clergyman Norman Thomas in the sphere of Socialist politics and by Paul Blanshard as LID organizer. Paul Blanshard later directed the Federated Press. (24) More recently, he has been identified with an organization known as “Protestants and Other Americans United for the Separation of Church and State,” dedicated to expunging all references to God from public schools and public life in America. He anticipated G. D. H. Cole, the president of the London Fabian Society, who smilingly advocated “the abolition of God”! Though Lovett’s actual duties—aside from his work as an editor, teacher and public speaker—always remained somewhat mysterious, he appears to have acted mainly as a liaison between top-level Socialists and Communists as well as academic and moneyed groups. During the Socialist movement’s period of temporary regression, he was in his glory.
His contacts were numerous, and his personal amiability, combined with discretion, made him acceptable to all. “Let one hand wash the other” and “recoil, the better to spring forward” (Reculer pour mieux sauter) were the private maxims that guided him on his variegated rounds. It was hard to believe that so delightful and considerate a dinner guest, as Felix Frankfurter described him in his autobiography, and so informed and sober a classroom figure could be so dangerous a radical. Yet an old friend, who never shared his political views, still recalls how the normally serene Robert Morss Lovett once remarked with sudden intensity: “I hate the United States! I would be willing to see the whole world blow up, if it would destroy the United States!” His startled companion dismissed the incident as a momentary aberration—and refrained from mentioning to Lovett that his words were much the same as those of Philip Nolan in The Man Without a Country. Most conspicuous of the postwar organizations manned by League for Industrial Democracy members was the American Civil Liberties Union. Like the LID, the ACLU has survived to the present day, acquiring a patina of respectability with the passage of time and the decline of old-fashioned patriotism, for which both bodies cherish an ill-concealed contempt. Formed in January, 1920, the ACLU was a direct outgrowth of the wartime Civil Liberties Bureau, a branch of the American Union Against Militarism. The Bureau assumed “independent” life in 1917 when a young social worker from St. Louis named Roger Baldwin moved to New York to direct the work of its national office. (25) During the war, it furnished advice and legal aid to conscientious objectors, thus gaining the support of some quite reputable Quakers. When it was reorganized on a permanent basis after the war as the ACLU, Roger Baldwin, who had just finished a prison term for draft-dodging, returned as its executive officer.
For all practical purposes, he ran the organization for approximately forty years. While the ACLU was still in the process of formation, Baldwin wrote in an advisory letter: “Do steer away from making it look like a Socialist enterprise. We want also to look like patriots in everything we do. We want to get a good lot of flags, talk a good deal about the Constitution and what our forefathers wanted to make of the country, and to show that we are really the folks that really stand for the spirit of our institutions.” (26) Such deceptive practice was in the classic Fabian tradition—symbolized by the wolf in sheep’s clothing that decorates the Shavian stained-glass window at a Fabian meetinghouse in England. Promptly adopted by Baldwin’s associates, this tactic has succeeded in deluding not a few well-intentioned Americans. The immediate function of the American Civil Liberties Union in 1920 was to combat the postwar flurry of arrests, deportations and court actions against Communists and other seditionists, many of whom were foreign born. Baldwin had previously described such individuals “as representing labor and radical movements for human welfare,” and contended they were being “insidiously attacked by privileged business interests working under the cloak of patriotism.” (27) Twin weapons of the quasi-forensic ACLU were legal aid and a species of propaganda designed to arouse public sympathy for the “victims” of the law—an expedient normally frowned upon by the American bar. If it was Roger Baldwin who defined the propaganda line, another founder of the ACLU,(28) Harvard Law Professor Felix Frankfurter, provided the legalistic approach.
In his protest of 1920 to the Department of Justice; in his argument as amicus curiae before a federal court in Boston, where he assured the right of habeas corpus to criminal aliens awaiting deportation; (29) and earlier, in two reports submitted as counsel for President Wilson’s Mediation Commission, Frankfurter initiated the mischievous practice of invoking the Constitution for the benefit of its avowed enemies. Perhaps more than any other American, Frankfurter helped to establish the fiction that it is somehow unconstitutional and un-American for the United States to take measures to defend itself against individuals or groups pledged to destroy it. His reports on the Preparedness Day bombings and the Bisbee deportations won him a sharp rebuke from that forthright American, former President Theodore Roosevelt, who wrote in a personal letter to Frankfurter: “I have just received your report on the Bisbee deportations …. Your report is as thoroughly misleading a document as could be written on the subject …. “Here again you are engaged in excusing men precisely like the Bolsheviki in Russia, who are murderers and encouragers of murder, who are traitors to their allies, to democracy and to civilization … and whose acts are nevertheless apologized for on grounds, my dear Mr. Frankfurter, substantially like those which you allege. In times of danger nothing is more common and more dangerous to the Republic than for men to avoid condemning the criminals who are really public enemies by making their entire assault on the shortcomings of the good citizens who have been the victims or opponents of the criminals …. It is not the kind of thing I care to see well-meaning men do in this country.”(30) One of the more sensational events in which early leaders of the American Civil Liberties Union took a hand was the case of the “Michigan Syndicalists.” The circumstances leading up to it were peculiar, to say the least.
In August, 1922, a Hungarian agent of the Communist International, one Joseph Pogany, alias Lang, alias John Pepper, arrived illegally in the United States. Having assisted in setting up the short-lived Bela Kun Government in Hungary, he was presumed to be something of a specialist in the bloodier forms of revolutionary behavior. Pogany brought with him detailed instructions for organizing both legal and illegal branches of the new Communist Party USA. Those instructions were to be divulged by him at a secret Communist convention, held at a camp in the woods near Bridgman, Michigan, which was duly raided by the authorities. As a result, seventeen Communists—including William Z. Foster, then editor of the Labor Herald—were arrested and arraigned under Michigan’s anti-syndicalist laws. At his trial in Bridgman, Foster, who later openly headed the Communist Party, testified under oath that he was not a Communist, thereby escaping conviction. Many others attending the conclave had prudently slipped away the night before the raid, leaving a mass of records and documents behind. In sifting this material, it was discovered that several of the delegates were connected with the Rand School of Social Science. Some, like Rose Pastor Stokes and Max Lerner, have since been listed as “cooperators” of the LID. (31) Max Lerner, a bright young intellectual who had been a student leader of the Intercollegiate Socialist Society at Washington University in St. Louis, was among the seventeen persons arrested in or near Bridgman. Like Foster, he claimed to have attended that secret convention in an editorial capacity. What his other motives may have been are not recorded, since from that time forward Lerner appeared to operate strictly within the framework of the Fabian Socialist movement.
For years he continued to write articles for The Nation, The Call and The New Leader, and to lecture on economics at the Rand School, the New School for Social Research and more conventional institutions of learning. He was a lifelong admirer of the self-proclaimed Marxist, Harold Laski, who found Lerner’s political outlook close to his own.(32) When Laski was quoted in 1945 by the Newark Advertiser as condoning bloody revolution, he sued for libel in a London court—and lost the case. It was Max Lerner (together with Harvard Professor Arthur M. Schlesinger, Sr.) who took the initiative in collecting an American “fund” for Laski,(33) to help defray the latter’s court costs of some twelve thousand pounds. More recently, we find an unreconstructed Max Lerner writing a widely circulated column for American newspapers. In an article sent from Switzerland in August, 1963, he deftly exploited the malodorous Stephen Ward pandering case (forced into prominence by the Fabian Harold Wilson, M.P.) as a means of promoting sympathy for Socialism.(34) The pained outcry that the Bridgman case evoked in the twenties from Socialist-liberal writers and publicists was symptomatic of a curious phenomenon never explained by medical science: Wound a Communist, and a Socialist bleeds! A circular letter of April 6, 1923, soliciting funds for the legal defense of the arrested Communists, described them plaintively as a “group of men and women met together peacefully to consider the business of their party organization.” This letter appeared on the stationery of the Labor Defense Council, whose national committee included the names of well-known Communists. It was signed by eight equally well-known members of the LID and/or ACLU. (35) At about the same time, Robert Morss Lovett persuaded the wealthy wife of a University of Chicago professor to post securities valued at $25,000 as bond for the Bridgman defendants.
The securities were subsequently forfeited when several of the accused jumped bail and fled to Moscow. A more enduring cause celebre, in which both Socialist- and Communist-sponsored “defense” organizations battled jointly to reverse the course of justice, was the Sacco-Vanzetti case. Nicola Sacco and Bartolomeo Vanzetti were Italian immigrants of admitted Anarchist views (36) who were arrested in 1920 for the robbery and murder of a paymaster and paymaster’s guard in South Braintree, Massachusetts. Found guilty and sentenced to die, they were finally executed in 1927. Since several million words have already been written about the case in the form of legal briefs, editorials, articles and books, it would be superfluous to review the matter in detail. Some $300,000 was contributed for the legal defense of those “two obscure immigrants about whom nobody cared”—as Arthur M. Schlesinger, Jr. has described them sentimentally in The Age of Roosevelt. Left wing leaders had apparently promised Sacco and Vanzetti they would be saved at any cost, and a mighty effort was made to that end. All the available propaganda stops were pulled out. The whole spectrum of leftist literary lights, from Liberal to Socialist to Communist, was brought into play. Academic Socialism’s foremost figures were enlisted to dignify the campaign, and student organizations were rounded up. Among the legal scholars who helped to prepare documents on the case was Harvard Law Professor Francis B. Sayre, son-in-law of Woodrow Wilson and a relative of the Reverend John Nevin Sayre. (37) The Brandeis family became so emotionally involved in the cause of the two allegedly persecuted immigrants that Justice Brandeis felt obliged to disqualify himself when the question of reviewing the case reached the Supreme Court. For several years the Harvard campus was split down the middle on the issue of Sacco and Vanzetti’s guilt or innocence. Professors Felix Frankfurter and Arthur M. Schlesinger, Sr. 
rallied the innocence-mongers. They were supported by Roscoe Pound, Dean of the Law School and a disciple of Brandeis in the field of sociological law. On the other hand, University President A. Lawrence Lowell urged moderation and suggested that some credence be placed in the good faith and common sense of Massachusetts’ judges and law enforcement officers. So vehemently did Felix Frankfurter denounce his academic superior that it was suggested the little law professor resign. “Why should I resign?” asked Frankfurter, adding insolently, “Let Lowell resign!” When it was all over, the long-suffering President Lowell wrote in mild exasperation to Dean Pound that he thought “one Frankfurter to the Pound should be enough.” Not only The Nation and New Republic, but at least two respected New York dailies, insisted to the end that Sacco and Vanzetti were the blameless victims of a Red scare or public witch hunt. So impassioned and so confusing was the public debate that some Americans today are still under the impression that Sacco and Vanzetti were somehow “framed” or “railroaded” to their death. Only recently a final confirmation of their guilt has come to light. It was contained in a quiet announcement by Francis Russell, a man who has spent the better part of his life seeking to demonstrate Sacco and Vanzetti’s innocence. In the June, 1962, issue of American Heritage, Russell told how he finally traced the long-missing bullets found in the body of the paymaster’s guard, Berardelli, to a police captain, now deceased. Two ballistic experts, using modern techniques, analyzed the bullets and testified they had unquestionably been fired from the .32 caliber pistol which Sacco was carrying at the time of his arrest. Thus Francis Russell was forced to conclude that Sacco wielded the murder weapon and that Vanzetti was at least an accessory. Oddly enough, a similar conclusion based on less objective evidence was made public by Upton Sinclair in 1953.
In a memoir published serially in the Rand School’s quarterly Bulletin of International Socialist Studies, (38) Sinclair quoted Fred A. Moore, an attorney for Sacco and Vanzetti, as saying he believed Sacco to be guilty of the shooting and Vanzetti to have guilty knowledge of it. Sinclair further relates how Robert Minor, a Communist Party official, telephoned him long distance in Boston and begged him not to repeat the attorney’s opinion. “You will ruin the movement! It will be treason!” cried Minor. From that indiscreet telephone call, it is inferred that Sacco and Vanzetti may have robbed and killed to fill the Party’s underground treasury, as Stalin and his Bolshevik comrades are known to have done in Russian Georgia during 1910-11. At any rate, the missing payroll funds, amounting to nearly $16,000, were never recovered. A third man, reported by witnesses to have assisted at the South Braintree crime, vanished coincidentally with the cash. This, however, is not the “legacy” referred to by Professor Arthur M. Schlesinger, Sr., who in 1948 wrote the introduction to an emotion-packed volume perpetuating the martyr legend of “the poor fish-peddler and the good shoemaker.” (39) As of 1962, Schlesinger’s son, Arthur, Jr., was a member of the national committee of the American Civil Liberties Union, which had handled the appeals and coordinated the propaganda in the historic Sacco and Vanzetti case.(40) 1. Forty Years of Education (New York, League for Industrial Democracy, 1945), p. 56. A telegram to the League on its fortieth anniversary from Mandel V. Halushka, a Chicago schoolteacher, read, “Birthday greetings to America’s Fabian Society!” 2. Only two direct references to the Fabian Society occur in the Lusk Report, and the first is misleading: “In England during the ‘80’s the Fabian Society was formed which remains an influential group of intellectual Socialists, but without direct influence on the working man or Parliament.” Revolutionary Radicalism, Vol. I, p. 
53. “We have already called attention to the Fabian Society as an interesting group of intellectual Socialists who engage in a very brilliant campaign of propaganda.” Ibid., p. 145. Obviously, the Lusk Committee underestimated both the current and potential influence of the Society. 3. Depression, Recovery and Higher Education. A Report by a Committee of the American Association of University Professors. Prepared by Malcolm M. Willey, University of Minnesota, (New York, McGraw-Hill, Inc., 1937), p. 317. 5. Italics originally added, now removed. 7. At other colleges and universities Upton Sinclair’s lectures were sponsored by local units of the Cosmopolitan Club–an organization similar in character and inspiration to the Intercollegiate Liberal League. 9. Algernon Lee, author of The Essentials of Marxism, said: “A large proportion in the early nineteen-twenties went Communist, and of these only a few have found their way back.” Quoted in August Claessens’ autobiography, Didn’t We Have Fun? (New York, Rand School Press, 1953), p. 20. 10. Helen Shirley Thomas, Felix Frankfurter: Scholar on the Bench (Baltimore, Johns Hopkins University Press, 1960), p. 19. Distributed in England by the Oxford University Press. 11. Who’s Who in New York for 1918 lists A. A. Heller as a director of the Rand School. Treasurer and general manager of the International Oxygen Company, which had benefitted from wartime contracts, the Russian-born Heller served as commercial attache of the unofficial “Soviet Embassy,” whose chief, Ludwig Martens, left the United States under pressure. 12. In 1919, instructors and lecturers at Rand School included: Max Eastman, Charles Beard, Elmer Rice, Oswald Garrison Villard, John Haynes Holmes, Harry Laidler, Lajpat Rai, Joseph Schlossberg, August Claessens, Harry Dana, Henrietta Epstein, E. A. Goldenweisser, James O’Neal, Eugene Wood, A. Philip Randolph, I. A. Hourwich, Henry Newman, Harvey P. Robinson and Joseph Slavit.
Bulletin of the Rand School, 1918-19. See Appendix II. 13. The year that the Lusk Laws were passed and vetoed by Smith, 1920, the School heard Louis Lochner on Journalism, Gregory Zilboorg on Literature, Leland Olds on American Social History, Frank Tannenbaum on Modern European History, and James P. Warbasse on the Cooperative Movement, Bulletin of the Rand School, 1919-20. See Appendix II. 14. Margaret Cole, The Story of Fabian Socialism (London, Heinemann Educational Books, Ltd., 1961), pp. 84 ff. 15. Renamed in 1923 The National Catholic Welfare Conference. 16. Italics originally added, now removed. 17. The year after the Lusk Laws were repassed in 1921 marked the opening of Camp Tamiment. Evans Clark taught Political Science, William Soskin, Modern Theatre, Mary Austin, American Literature, Otto Beyer, Industrial Problems. Robert Ferrari lectured on Crime, Taraknath Das on the Far East. The roster of lecturers also included Clement Wood, Arthur W. Calhoun, George Soule, Joseph Jablonower, Norman Thomas, Solon DeLeon, Jessie W. Hughan and Stuart Chase. Bulletin of the Rand School, 1920-21. See Appendix II. 18. Toni Sender’s salary was partially paid by the AFL-CIO, an item regularly reported in its annual budget. 19. See Appendix II. 20. David Edison Bunting, Liberty and Learning. With an Introduction by Professor George S. Counts, President, American Federation of Teachers. (Washington, American Council on Public Affairs, 1942), p. 2. 21. Ibid., p. 1. 22. This committee was composed of Lillian D. Wald, of the Henry Street Settlement; Paul U. Kellogg, editor of Survey Graphic; the Reverend John Haynes Holmes; Rabbi Stephen S. Wise; Florence Kelley, president of the Intercollegiate Socialist Society and head of the Consumers League of America; George W. Kirchwey; Crystal Eastman Benedict; L. Hollingsworth Wood, a prominent Quaker attorney; Louis P.
Lochner, afterwards of The New York Times Bureau in Berlin; Alice Lewisohn: Max Eastman; Allen Benson and Elizabeth G. Evans. Ibid. See Appendix II. 23. Ibid., p. 10. See chart of political affiliations of national committee, American Civil Liberties Union. 24. Paul Blanshard was a contributor to the official 1928 Campaign Handbook of the Socialist Party, entitled The Intelligent Voter’s Guide and published by the Socialist National Campaign Committee. Other contributors were: W. E. Woodward, Norman Thomas, Freda Kirchwey, McAllister Coleman, James O’Neal, Harry Elmer Barnes, James H. Maurer, Lewis Gannett, Victor L. Berger, Harry W. Laidler and Louis Waldman. All were officials of the League for Industrial Democracy. See Appendix II. 25. Bunting, op. cit., p. 2. 26. Revolutionary Radicalism, Vol. I, p. 1087. 27. Bunting, op. cit., p. 3. 28. Thomas, op. cit., p. 21. 29. Ibid., p. 19. 30. Roosevelt to Frankfurter, December 19, 1917, The Letters of Theodore Roosevelt, Manuscript Division, Library of Congress, Washington, VIII, 1262. 31. See Appendix II. 32. Kingsley Martin, Harold Laski: A Biographical Memoir (New York, The Viking Press, Inc., 1953), p. 86. “Among the younger men, including, for instance, Max Lerner, he [Laski] found intellectuals whose political outlook was close to his own.” 33. Ibid., p. 168. 34. San Francisco Examiner (August 11, 1963). “We underestimate,” writes Lerner, “how deeply most people need a rebel-victim symbol. There is a lot of free-flowing aggression in all of us, and one of the functions of a cause celebre is to give us a chance to channel some of it. . . . This brings us back to Ward as the rebel against society, and the victim of its power-groups.” 35. 
Signers of this letter were: Freda Kirchwey, editor of The Nation; Norman Thomas, leader of the Socialist Party; The Reverend John Nevin Sayre; Mary Heaton Vorse, contributor to The Nation and the friend and inspirer of Sinclair Lewis; Roger Baldwin, director of American Civil Liberties Union; The Reverend Percy Stickney Grant: The Reverend John Haynes Holmes; Paxton Hibben, director and solicitor of funds for the “Russian Red Cross” in the United States. All are listed by Mina Weisenberg as “cooperators” of the League for Industrial Democracy. See Appendix II. 36. In this connection, it is interesting to note that in June, 1919, the first issue of Freedom–a paper published by the Ferrer group of Anarchists at Stelton, New Jersey–stated editorially: “It may well be asked, ‘Why another paper?’ when the broadly libertarian and revolutionary movement is so ably represented by Socialist publications like the Revolutionary Age, Liberator, Rebel Worker, Workers’ World and many others, and the advanced liberal movement by The Dial, Nation, World Tomorrow and to a lesser degree, the New Republic and Survey. These publications are doing excellent work in their several ways, and with much of that work we find ourselves in hearty agreement.” (Author’s note: One of the founders of the Ferrer School, Leonard D. Abbott, was also a founder of the Intercollegiate Socialist Society. He was associate editor of Freedom. Members of that short-lived paper’s editorial staff were teachers at the Rand School.) 37. The Reverend John Nevin Sayre was a founder of the ACLU and signed the appeal for funds in the Bridgman case. 38. Upton Sinclair, “The Fishpeddler and the Shoemaker,” Bulletin of International Social Studies (Summer, 1953). 39. Cf. Louis G. Joughlin and Edmund M. Morgan, The Legacy of Sacco and Vanzetti. With an introduction by Arthur M. Schlesinger (New York, Harcourt, Brace & Co., 1948). 40. 
Freedom Through Dissent, 42nd Annual Report, July 1, 1961 to June 30, 1962, American Civil Liberties Union, New York, 1962. (List of officers, directors, national committee members, etc. Page not numbered, opposite p. 1.)
WEDNESDAY, Jan. 22, 2014 (HealthDay News) — Higher vitamin D levels are associated with better thinking and mood in people with Parkinson’s disease, a new study suggests. The finding may lead to new ways to delay or prevent the onset of thinking problems and depression in people with the progressive neurodegenerative disease, the researchers said.
Their analysis of nearly 300 Parkinson’s disease patients revealed that higher blood levels of vitamin D — the “sunshine vitamin” — were associated with less severe physical symptoms, better thinking abilities and lower risk of depression. This link was especially strong in patients without dementia, according to the study in the current issue of the Journal of Parkinson’s Disease.
“About 30 percent of persons with [Parkinson’s disease] suffer from cognitive impairment and dementia, and dementia is associated with nursing home placement and shortened life expectancy,” study author Dr. Amie Peterson, of Oregon Health & Science University, said in a journal news release. “We know mild cognitive impairment may predict the future development of dementia,” she added.
Preventing the development of dementia in these patients may potentially improve rates of illness and death related to Parkinson’s disease, Peterson suggested.
However, the study doesn’t show whether low vitamin D dulls thinking or if the opposite is true — that people with more advanced Parkinson’s disease get less sun exposure because of their limited mobility and have lower levels of vitamin D as a result. The study also did not ask if patients were taking vitamin D supplements. While the study showed an association between vitamin D levels and thinking problems, it did not prove a cause-and-effect link.
Vitamin D is produced by the body when skin is exposed to sunlight. It is also found in foods such as fatty fish and in supplements.
Low levels of vitamin D increase the risk of type 2 diabetes, multiple sclerosis, high blood pressure, cancer and infections, the study authors noted in the news release. Parkinson’s disease affects about 1 million Americans and 5 million people worldwide. Its prevalence is expected to double by 2030. The U.S. National Institute of Neurological Disorders and Stroke has more about Parkinson’s disease.
The IVANHOE Game Instructions 1. Select a perspective / personality to take on as a persona, a mask. That is, pretend to be a feminist, a Marxist, an anarchist, a fundamentalist, whatever kind of critic you can imagine. You decide who you will pretend to be: 2. Tell everyone in your group what persona you have selected. Then decide together which poem you would like to use for the game. 3. Select a word from your passage and change it to another word: the change should reflect your perspective (as feminist, Marxist, anarchist, fundamentalist, etc.) Word in the passage: Change it to: 4. Go around in a circle with the members of your group, each taking a turn. Each person will change a word. Choose one from your list that someone else has not already changed when it is your turn. 5. Challenge someone (or be challenged) if the change made does not seem to really reflect the persona, the personality a player is pretending to have. The group will decide whether the challenge is successful or not. If someone successfully challenges your move, you are out of the game. 6. Players drop out of the game if a) they are successfully challenged, or b) they cannot think of any words to change. 7. The last person left in the game wins. WINNER FOR YOUR GROUP:
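The turn-and-elimination procedure in steps 4 through 7 can be sketched as a short simulation. This is an illustrative sketch only: in the real game, word choices come from the players and challenge rulings from a group vote, so both are supplied here as plain functions, and the player names and word lists below are invented.

```python
# Minimal sketch of the game's turn-and-elimination loop (steps 4-7 above).
def play(players, pick_word, challenge_upheld):
    """players: list of names.
    pick_word(player, used) -> an unused word, or None if the player
    cannot think of one (rule 6b).
    challenge_upheld(player, word) -> True if the group rejects the
    change as out of character (rule 6a)."""
    active = list(players)
    used = set()
    while len(active) > 1:
        for p in list(active):          # rule 4: go around the circle
            if len(active) == 1:
                break
            word = pick_word(p, used)
            if word is None:            # rule 6b: out of words to change
                active.remove(p)
                continue
            used.add(word)              # rule 4: word is now taken
            if challenge_upheld(p, word):   # rule 6a: challenged out
                active.remove(p)
    return active[0]                    # rule 7: last player left wins

# Toy run: each persona has a personal word list; no challenge succeeds.
words = {"Ana": ["love", "fate"], "Ben": ["war"], "Cy": ["gold", "king", "sea"]}
winner = play(
    ["Ana", "Ben", "Cy"],
    lambda p, used: next((w for w in words[p] if w not in used), None),
    lambda p, w: False,
)
print(winner)  # → Cy (Ben, then Ana, run out of words first)
```

Swapping in a `challenge_upheld` function that sometimes returns True models eliminations by successful challenge.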
Networking is the process of discovering and making use of connections between people. These network connections can be as informal as talking to your family and friends, or as formal as attending a career event with prospective employers. Networking is useful for gathering information about a certain industry, organization, career path, or skill. Networking is about building relationships with people who can provide information and advice, and may lead to future opportunities. Networking can help you find a job:
- 80% of jobs are found through networking
- 94% of successful job hunters claimed networking made all the difference
- 63.4% of all workers use informal job-finding methods
How to Network
- Identify Your Networks. Making a list of people you know will help you to realize you may already have a strong foundation for your network: work and internships, family and friends, activities and hobbies.
- Expand Your List. Identify others you have met inside and outside MIT, and look for ways to meet new people who may have similar interests or expertise. Events and opportunities at MIT include company presentations, career fairs, the Alumni Association’s ICAN Network, and special MIT events (residence hall, living group, and student group events). Outside MIT, consider professional associations and conferences, local and regional career fairs and events, community groups, and online groups (LinkedIn, Doostang, Facebook, listservs, newsgroups).
- Assess Your Goals. What are you hoping to get out of your networking experience? Understanding your intentions can clarify for you who would be most beneficial to connect with.
- Create Your Elevator Pitch. Develop a 30-second script you can use to introduce yourself to people. Repeat it until you're comfortable because you will need to use it at a moment’s notice. You may want several versions to use depending on the audience. Start by defining the goal of the pitch: are you looking for a job? researching an industry or organization?
building a relationship with a recruiter? This helps identify the information it will be essential to convey. You'll likely want to include your major and year of graduation, as well as your relevant interests and experiences. Mention accomplishments, your top skills and, if possible, anything unique that will help you be remembered. It's ok to credit or compliment the other team members from any group projects you might mention, and it can help to end with a question to engage your listener and start a conversation. Here's an example: “Hi, my name is Robert Robertson. I’m graduating from MIT in June with a Bachelor’s in Computer Science. I’m interested in developing software that helps people live healthier lives and have interned at a couple of startups developing web apps in Ruby. I worked on an iOS app that tracks sleep cycles for a class project and my team took first place in a competition of twenty teams. I’ve been looking at jobs at Fitbit because I really like their fitness apps, and was hoping you could tell me more about your experience working there.”
- Make Contact. An informational interview is a meeting where you ask for information and advice rather than employment. The job seeker gathers information on the field, finds employment leads, and expands his or her professional network. Basically, introduce yourself, ask questions, obtain referrals, and close. This helpful informational interview handout addresses each stage of the informational interview process -- reaching out, preparation, conducting the interview, follow-up, and evaluation.
- Follow Up. Be sure to follow up with an email or letter thanking the person for his/her time. This professional courtesy goes a long way.
- Repeat. There's no limit to the number of individuals you can reach out to and learn from. Creating genuine relationships through networking is a lifelong practice, so master your techniques and go explore.
The ability to search by educational standards is a functionality added to DLESE in July, 2003 that allows you to append additional criteria to your search to better target your needs. Currently the National Science Education Content Standards (NSES) and the National Geography Standards are available. The NSES are hierarchical and allow you to choose grade level, broad topic, and ability, while the National Geography Standards are a list of 18 concepts grouped by topic only. Development in this area is ongoing, with more standards to be added in subsequent releases. The association of a standard with a resource signifies that the content of the resource supports the student learning and attainment of the specific ability noted. This can be through many different mechanisms and resource types, including access to background and text-based material as well as inquiry-based activities. Some standards are general in nature, some more specific. The resource need not address the entire scope of the standard for the association to be made, and some resources may not map to any standards at all. Resources in DLESE are selected and cataloged by community members. Those with experience and familiarity with the standards are encouraged to select those that support the content of the resource they are cataloging. As such, not all resources have had standards assigned at this point in time. If you have experience with educational standards and would like to enhance resources with standards data, please contact us at email@example.com.
In 1980 Syracuse University, in upstate New York, opened the largest on-campus dome facility in the United States. Syracuse, known as the snowiest and wettest large city in the United States, decided after 73 years in an open-air facility called Archbold Stadium on a domed, Teflon-coated, fiberglass inflatable roof at a cost of $26.85 million. Carrier Corporation, inventors of the air conditioner, chipped in $2.75 million for the naming rights, and its large industrial fans were used to inflate the roof. The technology required to move roofs that can weigh millions of pounds was not available at the time; had it been, though, the project would have cost much more. Unfortunately the Carrier Dome roof cannot be retracted on warm and sunny days, which are something to celebrate in upstate New York. Recent large stadiums, also in wet and cold areas, use retractable roofs, and fans love them.
Retractable stadium roofs are moved using a variety of tracks and motors. Technology similar to that used for opening drawbridges and traveling cranes has made a new era in stadium design possible.
Skydome (Rogers Centre), Toronto ($500 million)
The home of the Blue Jays of Major League Baseball (MLB) opened in 1989 and began the era of retractable-roof stadiums. The roof has four large steel panels. One panel is fixed and the other three move. All panels are made of steel trusses and corrugated steel with a weatherproof coating. The entire roof weighs more than 11,000 tons, but the pieces move at a rate of 71 feet per minute.
Chase Field, Phoenix ($364 million)
Home to the Diamondbacks of Major League Baseball, Bank One Ballpark, or “the BOB,” opened in 1998 in downtown Phoenix, Arizona. The retractable roof weighs 9 million pounds, is made of structural steel, and operates on technology used in drawbridges and traveling cranes. A pair of 200 hp motors open and close the roof in about 4 minutes. The roof consists of three moveable trusses. The roof can be opened to maximize sunlight on the playing field’s turf.
Safeco Field, Seattle ($517 million)
In 1999, baseball’s Mariners moved to Safeco from the Kingdome. Safeco has a retractable roof that provides a climate-controlled enclosure. The roof has three major sections moving along a set of tracks and is powered by an electric motor; it takes about 10 minutes to open or close the roof.
Minute Maid Park, Houston ($333 million)
Opened in 2000, Minute Maid Park is home to the Houston Astros and brought open-air baseball to Houston for the first time in 35 years. It is the newest of the retractable-roof facilities and has three panels which open and close in 10 to 20 minutes. Forged steel wheels 35 inches in diameter move the panels, which range from 1,905 to 3,810 metric tons. The roof is opened and closed 160 times a year, traveling a distance of 14.6 miles.
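The figures quoted for Minute Maid Park invite a quick arithmetic cross-check. A minimal sketch, assuming "160 times a year" counts individual open-or-close operations and that the 14.6 miles is the total annual panel travel (both are assumptions, since the article does not define them):

```python
# Cross-checking the Minute Maid Park figures with simple arithmetic.
FEET_PER_MILE = 5280

annual_travel_ft = 14.6 * FEET_PER_MILE            # 77,088 ft per year
operations_per_year = 160
travel_per_operation_ft = annual_travel_ft / operations_per_year
print(round(travel_per_operation_ft, 1))           # → 481.8 ft per operation

# For comparison: at the Rogers Centre's quoted 71 ft/min panel speed,
# a traverse of that length would take roughly:
minutes = travel_per_operation_ft / 71
print(round(minutes, 1))                           # → 6.8 minutes
```

A travel of about 480 feet per operation is plausible for panels crossing a ballpark roof, so the quoted annual mileage and cycle count are at least mutually consistent.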
PHYSICAL GEOGRAPHY, Ninth Edition, uses the combined expertise of four respected geographers to show how Earth's physical geography impacts humans, and how humans impact Earth's physical geography. The text emphasizes three essential themes to demonstrate the major roles for the discipline -- Geography as a Physical Science, Geography as the Spatial Science, and Geography as Environmental Science. With a renewed focus on examining relationships and processes among Earth systems, this text will help you understand how the various systems interrelate and how humans are an integral aspect of geography. Historically the first book to take a conservation approach, the authors continue to emphasize the theme of environmental and human impacts. Rent Physical Geography 9th edition today, or search our site for other textbooks by Robert E. Gabler. Every textbook comes with a 21-day "Any Reason" guarantee. Published by Brooks Cole.
Luther's Mighty Prayer And Prophecy. At one time in the life of Luther, there was a critical moment in the affairs of the Reformation. Bitter persecution prevailed with extraordinary power, and threatened every one. They were the dark days when faith could only cling. There were but few friends to the reformers, and these were of little strength. Their enemies were every where strong, proud, arrogant. But Luther relied on his God, and at this moment, with his favorite hymn in his heart, "_A strong fortress is our God_," he went to the Lord in prayer, and prayed that omnipotence would come to the help of their weakness. Long he wrestled alone with God in his closet, till like Jacob he prevailed. Then he went into the room, where his family had assembled, with joyous heart and shining face, and raising both hands, and lifting his eyes heavenward, exclaimed, "_We have overcome, we have overcome_." This was astonishing, as not the slightest news had yet been heard to give them hope of relief. But immediately after that, the welcome tidings came that _the Emperor, Charles V., had issued his Proclamation of "Religious Toleration in Germany_." In Luther's prayer was fulfilled the remarkable promise of Proverbs 21:1: "_The king's heart is in the hand of the Lord, as the rivers of water; he turneth it whithersoever he will_."
ST. MAURICE, SWITZERLAND -- "I shall show those insolent herdsmen and cheesemakers!" thundered Adolf Hitler in 1940, after Switzerland refused to allow the German Army to pass through its territory to outflank France's Maginot Line forts. Soon after France's defeat, Hitler and Mussolini ordered their general staffs to complete Plan von Menges, the invasion and partition of Switzerland by the combined armies of Germany and Italy.
But the Axis never invaded tiny Switzerland, then a nation of only 5 million. The reason was not, as revisionists claim, that they needed Switzerland for banking. Other neutrals - America, Spain, Turkey, Sweden, Portugal - were also available for finance and trade. Nor was it because the Swiss co-operated with Hitler's Germany, an outrageous myth concocted by American lawyers and politicians seeking to soak the wealthy Swiss. In 1940, when America was still neutral toward Hitler, Swiss fighters shot down 11 intruding Luftwaffe aircraft.
The true reason was Switzerland's fierce national determination to remain free, backed by its top-secret National Redoubt - an immense system of over 100 mighty forts and thousands of casemates and bunkers buried deep in the heart of the Alps.
In July, 1940, as Europe was surrendering or being overrun by invincible German armies, General Henri Guisan convoked all senior officers of Switzerland's citizen army to Rutli Meadow and issued his famous order: "Fight to your last cartridge, then fight with your bayonets. No surrender. Fight to the death." The world's oldest democracy would stand alone against Hitler and Mussolini. The Germans and Italians decided against attacking Switzerland because of the casualties they would have suffered.
Switzerland's 700,000 soldiers were given the grim command to be ready to leave behind their homes, wives and children, then retreat into the mountain fortress system, which had only enough food and shelter for the army. Each high Alpine valley was to become a little Thermopylae; every Alpine fort another Verdun. Working round the clock, in two years Swiss engineers created over 100 powerful artillery and infantry forts dug into granite mountainsides. Switzerland's secret Alpine Redoubt exceeded in size, strength, firepower - and, of course, effectiveness - France's famed Maginot Line, hitherto believed to be the world's mightiest fortress.
Drove right by
At the heart of this huge military complex, whose existence is only now coming to light, lay Dailly, the world's largest and most powerful fort. For four decades, I have driven by Dailly without ever suspecting its existence. Now, as a guest of the Swiss General Staff and the elite Festungwachtkorps (Fortress Guard Corps), I was one of the first non-Swiss allowed to inspect the top-secret fortress.
This Swiss Gibraltar lies some 15 km south of Lake Geneva's eastern end, between Montreux and Martigny, the gateway to the St. Bernard Pass, commanding the Valais, a highly strategic valley formed by the Rhone River, the major land route between Italy and northern Europe. At St. Maurice, the Valais is further constricted by the outthrust of the Dailly massif, a steep, pyramid-shaped mountain spur that juts into the valley, narrowing the defile to under two kilometres in width. Here, in 47 A.D., Roman Emperor Claudius had the first bridge built across the fast-flowing Rhone.
Fortification of Dailly began in 1892. By the early 1940s, Dailly had literally become, as the fort's technical chief, the redoubtable Aspirant Jean-Claude Raboud, told me, "a giant Swiss Gruyere cheese," honeycombed by 60 km of underground galleries (tunnels), with camouflaged gun embrasures, searchlights, troop barracks, magazines, supply depots and headquarters. North and south of Dailly lie numerous other forts: neighbouring Savatan, Scex, Cindey, Petit-Mont, Follateres, and more, a lethal gauntlet of underground strongholds with a staggering 300 km of tunnels and interlocking fire from artillery, mortars, and machineguns.
From outside, the forts are invisible, save for a few nondescript wooden buildings. The camouflaged embrasures for machine guns and artillery - trompe-l'oeil flaps that look like rock - are indistinguishable from more than a few feet away. They suddenly open and pour a withering fire. Turrets are disguised as rustic chalets, sheds or boulders. All guns are pre-registered on their targets and can be fired blind, directed only by voice or electronic commands. The valley is crisscrossed by tank barriers, minefields, and obstacles. The main road and its bridges are mined with special demolition charges. Together, the Valais forts represent the pinnacle of 20th-century military architecture and engineering.
Dailly staggers the mind and body. To reach its entrance at 1,400 metres requires negotiating 29 vertiginous switchbacks etched onto the mountain's steep side. At the fort's narrow summit - known as "The Needle" - you look straight down, a terrifying sheer drop of 1,800 metres to the valley floor. From this aerie, one sees - and the fort's big guns can reach - all the way north to the end of Lake Geneva, the fabled Chateau of Chillon, and Montreux; and south to Martigny and the St. Bernard pass into Italy.
The fortress was designed to accommodate 1,800 soldiers, with enough munitions, food and water to hold out "buttoned up" for six months. Neighbouring Savatan held 1,600 troops. Hewn into virgin granite, and protected by elaborate air filtration systems, Dailly and many other alpine forts were immune to everything except direct hits by nuclear weapons.
Upgraded in the '70s
Fearing a Soviet invasion, the Swiss extensively upgraded their forts until the late 1970s. France similarly upgraded and upgunned some of the Maginot forts during the 1960s.
Dailly's fighting power came from a variety of weapons designed for distant and close-in action: machineguns; 75 mm rapid-fire guns; 105 and 120 mm artillery with a range of 17 km; 81 and 120 mm semi-automatic mortars; 20 mm AA guns; and two turrets with fully automatic 150 mm cannon. These latter are fed by an elaborate production line 50 metres below the surface. Shells and propellant cartridges are loaded onto conveyer belts, mated, fused and then fed up by an ammo hoist system to the automatic cannon, huge, evil machines that can fire a storm of 22 heavy shells per minute to a distance of 25 kilometres.
Watching this production line of death in operation was a remarkable experience. My Swiss escort and friend, Lt. Colonel Marcel Krebbs, rightly described the huge 150 mm guns and their 50-metre-high barbettes as "pharaonic," worthy of an Egyptian pharaoh. So were the fort's power plants, barracks, and magazines. The Swiss spared no expense on these battleships buried in the Alps.
The Cold War's end led Switzerland to sharply reduce its armed forces and decommission many forts. Large forts are being replaced by smaller artillery works armed with 155 mm long-range guns. But much of Dailly and its neighbours are still active, serving as bases for Swiss mountain brigades defending the nation's fortress heartland.
Though I'm a veteran fortress explorer, Dailly left me at times with both vertigo from "The Needle" and claustrophobia after hours of tramping through narrow, dimly lit concreted galleries, or squeezing into a tiny lift that took us up through the rock inside the 150 mm turret. Just looking down the 560-metre deep shaft of the funicular elevator that supplied the garrison made my head spin. After eight hours at titanic Dailly, one of the true wonders of the world, I was overwhelmed, elated and totally exhausted. And I finally understood why Swiss friends used to tell me, "Switzerland isn't a country; it's a fortress that looks like a country."
Plant of the Week Cliff Jamesia (Jamesia americana) By Walter Fertig Jamesia is a genus of just two surviving species of shrubby plants in the Hydrangea family (Hydrangeaceae) that is restricted to the Sierra Nevada and Rocky Mountains from California, Utah, and southern Wyoming to northern Mexico. Paleobotanists have identified fossils dating to the Oligocene (23-33 million years ago) from the Creede fossil beds in southwestern Colorado that apparently belong to an extinct species of Jamesia. Modern Jamesia species can be recognized by their shreddy, gray to reddish-brown bark; large white or pinkish (often hairy), four- or five-petaled flowers; and broadly oval and coarsely toothed opposite leaves. Cliff jamesia (Jamesia americana) is by far the more common and variable of the two species. It differs from its cousin, J. tetrapetala (a narrow endemic of scattered Great Basin mountains of eastern Nevada and western Utah), in having numerous flowers with five petals, instead of solitary blossoms with four petals. Several localized varieties of J. americana are currently recognized, including one that is almost entirely restricted to hanging gardens in Zion National Park and vicinity (the appropriately named var. zionis). The name Jamesia honors Edwin James, a 19th-century medical doctor and botanist who accompanied explorer Stephen Long’s 1820 expedition to the southern Rocky Mountains of Colorado. Along with two companions, James was the first white explorer to reach the summit of Pike’s Peak (previously declared unclimbable by Zebulon Pike and his Native American guides) and the first botanist to explore and collect in alpine tundra habitats in the western United States. During the summer of 1820, James discovered nearly 100 species new to science, including the blue columbine (Aquilegia coerulea), which would later become the state flower of Colorado.
Start things off well by arriving with enough time to find a seat, settle in, and calm your nerves before the exam begins. Bring an extra pen or pencil and a watch. Consider using earplugs and bringing a drink to keep focused and comfortable.
Write down important details immediately. On scratch paper, immediately write down the important details, names, and facts you have memorized. This will free you to pay close attention to the questions and organize your answers conceptually.
Read through the entire test carefully before you start. Reading over the test first will help you become familiar with its layout and locate the easy and difficult questions. Looking over the whole test also serves to remind you of the scope of the material, situating each question in the context of the test as a whole. Make sure to understand each question before you start to answer. A major reason students do poorly on exams is that they do not answer the question that is being asked.
Use time to your advantage. Plan how much time you will spend on each section of the test, and force yourself to follow this schedule. You should spend more time on questions that are the most in-depth and are worth the most points (usually essay questions). Answer questions that you feel confident about first in order to save more time for those that require additional thought. Be sure not to waste time giving elaborate answers to simple, short-answer questions.
Answer every question. If you run out of time, jot down an outline for any thoughts you are not able to develop fully. Even if you do not know how to answer a question, you probably know enough to speculate about an answer. Professors usually give some credit for attempted answers but cannot give credit for questions left blank.
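The advice to budget more time for higher-value questions can be made concrete with a small proportional-allocation sketch. The question names, point values, and 60-minute exam below are invented for illustration.

```python
# Budget exam time in proportion to point value, as the advice suggests.
def time_budget(total_minutes, points):
    """Allocate minutes to each question in proportion to its points."""
    total_points = sum(points.values())
    return {q: total_minutes * p / total_points for q, p in points.items()}

# A hypothetical 60-minute blue book exam worth 100 points:
plan = time_budget(60, {"short answers": 20, "essay 1": 40, "essay 2": 40})
for q, m in plan.items():
    print(f"{q}: {m:.0f} min")   # short answers: 12 min, each essay: 24 min
```

Shaving a few minutes off this plan for a final read-through of your answers is a sensible refinement.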
Definitions for precede (prɪˈsid)

This page provides all possible meanings and translations of the word precede.

predate, precede, forego, forgo, antecede, antedate (verb): be earlier in time; go back further. "Stone tools precede bronze tools"; "Most English adjectives precede the noun they modify"

precede, come before (verb): be the predecessor of. "Bill preceded John in the long line of Susan's husbands"

precede (verb): move ahead (of others) in time or space

precede, preface, premise, introduce (verb): furnish with a preface or introduction. "She always precedes her lectures with a joke"; "He prefaced his lecture with a critical remark about the institution"

To go before, go in front of. To have higher rank than (someone or something else). Origin: Latin praecēdō, from prae- + cēdō.

To go before in order of time; to occur first with relation to anything; to go before in place, rank, or importance; to cause to be preceded; to preface; to introduce (used with "by" or "with" before the instrumental object). Origin: [L. praecedere, praecessum; prae, before + cedere, to go, to be in motion: cf. F. précéder. See Pre- and Cede.]

Chambers 20th Century Dictionary: pre-sēd′, v.t. to go before in time, rank, or importance; v.i. to be before in time or place. [Fr. précéder—L. præcedĕre—præ, before, cedĕre, go.]

British National Corpus: "precede" ranks #877 in frequency among verbs.

The numerical value of precede in Chaldean Numerology is 5; in Pythagorean Numerology it is 2.

Sample Sentences & Example Usage:
"Beautiful thoughts precede a beautiful life."
"If virtue precede us every step will be safe."
"Financial rewards follow accomplishment; they don't precede it."
"If the headache would only precede the intoxication, alcoholism would be a virtue."
"Always remember that striving and struggle precede success, even in the dictionary."
Participating in Research All types of people are needed to volunteer for Alzheimer’s research. People with Alzheimer's disease or MCI, those with a family history of Alzheimer’s, and healthy people with no memory problems and no family history of Alzheimer’s may be able to take part in clinical trials. Participants in clinical trials help scientists learn about the brain in healthy aging and in Alzheimer’s. Results of these trials are used to improve prevention and treatment methods. The Alzheimer’s Disease Education and Referral (ADEAR) Center’s clinical trials finder makes it easy for people to find out about studies that are sponsored by the federal government and private companies, universities, and other organizations. It includes studies testing new ways to detect, treat, delay, and prevent Alzheimer’s disease, other dementias, and MCI. You can search for studies about a certain topic or in a certain geographic area by going to www.nia.nih.gov/alzheimers/clinical-trials. To find out more about Alzheimer’s clinical trials, talk to your health care provider or contact the ADEAR Center at 1-800-438-4380 or email@example.com. Also, visit its website at www.nia.nih.gov/alzheimers/volunteer.
Six Types of Tea

There are six major types of tea in China: green tea, black tea, Oolong tea, white tea, yellow tea, and dark tea, distinguished mainly by different methods of production. Folklore relates each type of tea to certain human characteristics. Thus it is said that green tea, simple and light, stands for the scholasticism of south China; black tea, mild and reserved, is regarded as rather ladylike; Oolong tea, warm and persistent, resembles the perseverance of philosophers; dark tea, with its lingering aftertaste, symbolizes the wisdom of the elderly; and so on and so forth.

China, the homeland of tea, is a leading producer and consumer, and the discovery and usage of tea has a history of four or five thousand years. Tea developed from the earliest fresh-boiled tea, taken as a kind of soup, to later dried-and-preserved teas, and from simple green tea to the full range of six major kinds of tea.

History of Tea Culture

Drinking tea first became popular in the Tang (618-907) and Song (960-1279) Dynasties, and has continued into contemporary times. The flavour of tea, which may be drunk weak or strong, contains both bitter and sweet elements. What is more, with its unique appeal, tea has broken free of its region of origin and has been transported to most parts of the world.

The origin of tea is lost among history and legend. What can be roughly confirmed is that tea originated in the southwest of China. In Yunnan, and elsewhere, there are still some wild tea trees that are over 1,000 years old. It is said that the first person to discover the effects of tea was Shen Nong, the father of agriculture and herbal medicine in China. In ancient times, people knew very little about plants. In order to find out which plants could be eaten safely, Shen Nong tasted various kinds of plants to test them as food or medicine. After he had eaten the plants, Shen Nong observed their reactions in his stomach – he is reputed to have had a "transparent stomach"!
There is a famous legend of "Shen Nong Tasting a Hundred Plants". One day, after walking for a long time, Shen Nong felt tired and thirsty, so he rested under a tree and started a fire to boil water in a pot. Suddenly some leaves fell into the pot from a nearby tree. Shen Nong drank the water and found it not only sweet and tasty, but refreshing as well. He felt less tired, so went on to drink all the water from the pot.

Another version of this tale is a little different and more amazing. It is said that Shen Nong tried 72 different kinds of poisonous plants in a day and lay on the ground, barely alive. At this moment, he noticed several rather fragrant leaves dropping from the tree beside him. Out of curiosity and habit, Shen Nong put the leaves into his mouth and chewed them. After a little while he felt well and energetic again. So he picked more leaves to eat and thus rid his body of all the poison.

Whatever the story, tea interested Shen Nong and drew him to research its characteristics further. The ancient Chinese medical book called the Shen Nong Herbal, which is attributed to him, states that "tea tastes bitter. Drinking it, one can think quicker, sleep less, move more nimbly, and see more clearly". This was the earliest book to record the medicinal effects of tea.

By the Zhou Dynasty (1046-256 BC), the function of tea to refresh the body and clear the mind had gradually replaced its function as medicine. People started drying the leaves to preserve tea. When they made tea, they put the leaves into a pot and made a kind of thick soup. The princes of the Zhou Dynasty were used to this thick soup, but due to its bitterness it did not become widely popular. In the Han Dynasties (206 BC-AD 220), both the collecting and processing of wild tea leaves were improved. Tea became a tasty drink and was very popular amongst the nobility.
In the Wei Period (220-265) and Jin Dynasties (265-420), tea came to be the drink of banquets and lubricated philosophical and metaphysical discussions. Tea's "freshness and purity" came to be preferred to the "violence and intoxication" of wine. The last emperor of the Three-Kingdoms Period (220-280) was Sun Hao (reigned 264-280), who asked his ministers to drink six litres of wine every time he held a banquet. One minister was not good at drinking, so he secretly asked Sun Hao whether he could drink tea instead.

In fact, the relationship between tea and wine has always been subtle. Wine drinking is appropriate for a joyous occasion, while tea drinking is best suited to tranquillity. These two drinks differ in many aspects, but they are also the best partners, because tea can counter the effects of drunkenness. In later times, the opposing aspects of tea and wine were reflected in a dialogue between them in a book called On Tea and Wine.

Thus by the time of the Shu Kingdom (221-263) tea had spread to the lower reaches of the Yangtze River, and by the Eastern and Western Jin Dynasties and the Northern and Southern Dynasties (265-589), rulers advocated drinking tea and eating simple food in order to restrain competition in extravagance amongst the nobility. Buddhism and Taoism also played an indispensable role in the spreading of tea. Buddhists liked tea because it prevented dreariness and languor, while Taoists believed that tea helped people to stay young. At that time tea was stored in brick or cake form: when people wanted to make tea, they ground the cakes into powder and put this, along with other condiments, into hot water.

It is often said that "tea started in the Tang Dynasty and flourished in the Song Dynasty". In the Tang Dynasty a method called "green steaming" was invented, the aim of which was to rid tea leaves of their "grassy" flavour. After steaming, the tea leaves were ground, made into cakes, and then dried and sealed for storage.
Before the Tang, tea was known by many names, one of these being a Chinese character meaning "bitter". It was also in the Tang Dynasty that teahouses in their true sense came into being, and in some big cities there were also tea shops, which stored large amounts of tea leaves and prepared tea for their customers. Poems and articles dedicated to tea also appeared; poets such as Lu Tong and Bai Juyi all wrote about tea. Furthermore, the Tang Dynasty saw the first definitive publication about tea, The Book of Tea, the first of its kind in the world. This book contained a comprehensive summary of all aspects of the culture of tea, including medicinal uses, picking, tea making, cooking, and utensils, and was a complete synthesis of knowledge about tea. Its author, Lu Yu (733-c.804), was consequently dubbed the "Saint of Tea" by later generations. During this period, tea became the most popular commodity in foreign trade, and Japanese Buddhists brought tea leaves back from China to Japan. For the sake of easier transportation, tea leaves were made into bricks, from which convenient pieces could be broken off to prepare tea.

The Song Dynasty was a golden age for tea, and the teahouse played a prominent role. The calligrapher Cai Xiang (1012-1067) wrote Record of Tea, and Emperor Huizong, Zhao Ji (1082-1135), wrote General Remarks on Tea. Then, in the Ming Dynasty (1368-1644), tea culture, which had suffered a setback under Mongol rule, underwent a renaissance, with the familiar dark tea, green tea, and Oolong tea all developed during this time. Zhu Yuanzhang (reigned 1368-1398), the first Ming Emperor, oversaw a change from compressed cake tea to loose-leaf tea, and this tradition has been retained ever since. As their understanding of tea improved, people were no longer content to harvest tea from the wild, but began to plant and cultivate tea trees, while at the same time processing techniques were improving, with different methods producing the six major types of tea.
Nor did people continue to take tea simply as food or medicine. Rather, drinking tea began to take on a spiritual dimension, containing deep cultural meanings. The tea ceremony in China consists not only of the choice of tea, but also of many other elements such as the type of water, the utensils, timing and presentation.

Manners of Tea Drinking

There are also detailed requirements of the drinkers themselves. Meanwhile, with the popularization of tea, people in different regions and of different nationalities developed their own unique customs of taking tea. In Guangdong, for example, people like drinking morning tea; in Fujian they prefer Kongfu tea; Hunan has Lei tea; Sichuan people love "covered-bowl tea"; people of the Bai nationality treat their guests to "Three-Course Tea"; Tibetan people prefer buttered tea; and those from Inner Mongolia like milk tea. These various tea customs constitute the rich and profound Chinese tea culture.

Trade among nations spread tea to all parts of the world. Japanese monks took tea seeds, the techniques of tea making, and tea utensils back to Japan, which led to the appearance of the Japanese tea ceremony. The earliest record of tea in Europe was in the travel notes of an Arab, while Marco Polo mentioned in his notes that a Chinese minister of finance was deposed because he levied excessive taxes on tea. At the end of the sixteenth century, the Dutch brought word to Europe that there was a kind of magic leaf in the east, from which tasty drinks could be made; this was the first time that Europeans had heard of tea. In 1610, the Dutch East India Company became the first to ship tea to Europe, after which the habit of drinking tea took root there. In 1636, tea entered France and two years later it entered Russia, whereas Britain, a nation famous for its tea drinking, did not have tea until 1650.
Counting votes in Karachi, a pro-Musharraf constituency

Controlling Army-led Democracy Through Manipulated Vote

Wajid Shamsul Hasan

August 23: Pakistan's founder Quaid-i-Azam Mohammed Ali Jinnah was a democrat par excellence. If he had known that the ideals that he had lived for, struggled for all his life and fought for would be raped so blatantly, as has been done repeatedly by the military establishment and its Bonapartist generals, he would have thought twice before opting for an independent state.

He did not have, nor did he seek, help from the more than 100,000 Muslim army officers and other men in uniform serving as the most loyal servants in the British imperial armed forces, quite a few of them at the top licking the boots of their Gora (white) higher-ups for promotions. He believed in the power of the ballot over the bullet and hence restricted his struggle for freedom within democratic parameters.

In his first speech to the Legislative Assembly of Pakistan (11 August 1947) he had laid bare categorically his magna carta for the democratic management of the country. In his Pakistan all citizens were to be equal irrespective of their caste, creed or color, and religion was to have nothing to do with the business of the state.

His subsequent emphasis, as long as he lived, was that since it was to be a people's government, responsible to the people and none else but the people, it was the sole prerogative of the masses to change the government in Pakistan and its policies. It also rested within the powers of the people to vote in and vote out a government when it failed to perform in the largest interest of the greatest numbers. He had also warned the civil and military bureaucrats, telling them: "Make the people feel that you are servants and friends," and that they should maintain the "highest standard of honor, integrity, justice and fair play."
It goes to the credit of the people of Pakistan that despite the subversion of democracy by frequent military interventions, they have stood by their commitment to the democratic ideals bequeathed to the nation by the Quaid. However, we have now come to a crucial pass after many constitutional and electoral dislocations, especially following the farce in the name of local bodies elections that was inflicted on us on Thursday, August 18: a stage has been reached for the entire nation and its political leadership to evolve a new strategy to meet the Praetorian challenges.

Away from home, thanks to the number of Pakistani TV channels, we could see with our own eyes the most shameless mockery of the vote. It seemed a continuation of the present regime's policy of militarization of the state, further alienating the masses from the power of the vote and thereby weakening the democratic forces that do not give up challenging its absolute authority.

State-sponsored rigging, fraudulent results and the installation of the military's favorites in government have disheartened the voters to the extent that they feel discouraged from voting, since they have been denied their right to elect their representatives. This is one major reason for the gradual decline in the voting pattern, and the regime feels confident that it can hoodwink international opinion by falsely jacking up the figures of voter turnout.

Pervez Musharraf's Local Government Ordinance of 2001, drafted painstakingly by the best Praetorian brains aided by their civilian experts, had the overall objective not of promoting democracy at the grass-roots level but of controlling it, so that the management of local affairs remains at the mercy and sweet will of the Center. It was designed to convert the real rulers, the people, into serfs, and power sharing in it was so devised that on paper it seemed to be devolution but in fact it meant overwhelming control by Islamabad.
In short, it has been the most deplorable recipe for controlled democracy in the country. It has been rightly alleged that instead of devolving power in three tiers, by moving power down to the provinces and reducing the load of the federal ministries, the President has become the reservoir of all power. General Musharraf has had his cake and has been eating it too. He has used local bodies not to empower the people at the grass-roots level but as an institution for extending a semblance of civil legitimacy to his rule, much in the pattern of General Ayub Khan's "basic democracy" and General Zia's party-less local bodies.

Like Musharraf's, those schemes also had one objective: to further fracture and fragment Pakistani society, so that instead of national cohesion there should be more local biradari (clan) lords, with the sole purpose of reducing and minimizing the power of the collective vote. Especially from General Zia's time to this day, calculated attempts have been made to fragment society into ethnic, feudal and sectarian groups to divide and reduce the democratic power of the people to change a government through their collective vote. This, rather than the empowerment of the people, has been the real reason for holding non-party local body elections.

As usual, the regime's propaganda machinery is busy orchestrating the claim that Thursday's local polls were the most transparent and peaceful ever held, with more than 50 per cent of registered voters turning out. Contrary was the view of the various panels of experts who were invited by the private TV channels to comment on and analyze the daylong proceedings, punctuated by bloody violence and 11 deaths. The day was also a sad commentary on the performance of the Election Commission, which is understandable since it is headed by an Acting Chief Election Commissioner. It did not take cognizance of the pile of complaints lodged at its doors from the day the elections were announced.
In the face of the blatant transfers and postings of officers by the Chief Ministers, and other bandobast (management) measures, it did not have a spine strong enough to take a stand. Rather, the President and his Chief Ministers, who did not feel shy about lobbying openly for their favorite candidates, did so most obtrusively, in gross violation of its code of conduct.

Back to the TV discussions. Some panelists had a point that needs to be answered by the political leaders. They were of the view that since General Zia's time the political parties have been opposing non-party elections, and yet they have been participating in them, knowing well that the very concept of non-party elections is tendentiously undemocratic, especially when it is inherently designed to divide the political power of the masses.

It is time a consensus decision was taken by the ARD and APC to get over their contradiction of demanding party-based elections while surrendering themselves to a party-less contraption designed entirely for the service and perpetuation of the military regime.

One therefore expects that, having had the bitter and nightmarish experience of the first phase of the local bodies elections, the ARD and APC parties will get together to tell Musharraf that enough is enough, that they cannot be a party to his shameless electoral farce. It needs to be noted that he is already under pressure and is no longer in a position to shrug off lightly any united protest by the Opposition parties. It is time they corrected their stand on non-party polls.

The writer is a former Pakistan High Commissioner to the UK.
This should be achievable, but there’s one sector in the U.S. that is increasing its CO2 emissions at a rapid pace—trucking. Currently, trucks move 72% of the tonnage and 70% of the goods’ value nationwide. By 2050, truck travel is expected to increase by 80% nationally and by 50% in California. Given current trends, the Energy Information Administration projects trucks will account for a large and growing share of freight transport energy use (Figure 1) and CO2 emissions through 2040. But this CO2 future does not have to happen—there are a range of measures that can be taken to dramatically cut truck CO2 emissions. One is fuel economy improvements, and this is being tackled via the federal government’s truck fuel economy standards program, with measures nearly set through 2027. That is great news, but it’s not enough—it will probably keep truck CO2 emissions at a fairly constant level rather than reducing them. Another measure is to replace fossil diesel and natural gas with renewable fuels. Low carbon diesel alternatives (e.g. made from waste oils or natural gas captured from landfills and waste water treatment facilities) could make a significant contribution to cutting carbon emissions from trucks. But, competition for these fuels from hard-to-electrify sectors like aviation and limitations on the amount of low carbon renewable feedstocks will constrain their overall impact. Therefore, to go for deep CO2 reductions from trucks, we will very likely also need very low CO2 emission technologies—namely fuel cell and battery electric vehicles, both of which are “ZEVs” – zero emission vehicles. The only CO2 they will emit is from upstream processes to produce the fuel, and these are progressing towards very low CO2 emissions over time. ZEV trucks can also help tackle a related problem—air pollution. These vehicles do not emit any pollutants at the tailpipe, a huge co-benefit, particularly in polluted areas such as around Los Angeles. 
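The point that fuel-economy standards alone will roughly hold truck CO2 flat is simple arithmetic: emissions scale with travel activity times fuel intensity. A back-of-envelope sketch (the 80% travel-growth figure is from the text above; the 45% intensity cut is an assumed illustrative value, not a projection):

```python
def emissions_index(travel_growth, intensity_cut):
    """Future CO2 relative to today: activity growth times remaining
    fuel intensity per mile (1.0 means emissions are unchanged)."""
    return (1.0 + travel_growth) * (1.0 - intensity_cut)

# 80% more truck travel, offset by an assumed 45% cut in fuel burned per mile:
flat = emissions_index(travel_growth=0.80, intensity_cut=0.45)
# 1.8 * 0.55 = 0.99 -- the efficiency gains are almost entirely eaten by growth.
```

This is why deeper reductions require changing the fuel itself, not just the efficiency of burning it.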
But we have a problem: there are almost no ZEV trucks on the nation’s roads at this point. Why not? An obvious reason is that the key technologies (e.g. batteries and hydrogen/fuel cell systems) are new and more expensive. Another issue is the “range problem.” Trucks often need to drive long distances in a day, and battery systems typically do not have sufficient energy density to meet the needs of high-mileage trucking, particularly given their long recharge times. Fuel cell trucks can typically travel farther and refuel much faster (like diesel trucks), but need hydrogen fuel, which is not easily available in many locations, another major challenge. But shoots of grass are emerging in the cracks, as some types of trucks can more easily run on batteries than others. For example urban delivery trucks, large refuse collection trucks, and drayage trucks which operate at ports often have a daily use pattern that can fit with a battery system, and some electric trucks are appearing in these markets. Battery costs for cars have been dropping rapidly, and this also helps to lower battery costs for other vehicle types—so electric truck costs are declining even if very few are being built today. Other types of trucks that “return to base” once or twice per day can operate on hydrogen that is dispensed at that base—they don’t need a widespread refueling infrastructure. A few hydrogen trucks and bus projects are underway around the country. AC Transit, located in Oakland, California, has been operating fuel cell buses for ten years, and the California Air Resources Board (CARB) recently proposed a very large demonstration program for ZEV trucks and buses at California ports and in disadvantaged communities across the state. Another challenge will be “scale-up”—how do we get from a few promising applications and projects to much more widespread use of these technologies? 
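The scale-up question posed above can be made concrete with a toy fleet-turnover model: even once ZEVs dominate new sales, the on-road fleet changes only as old trucks retire. A minimal sketch, in which the 15-year truck lifetime and 20-year sales ramp are illustrative assumptions rather than figures from any study:

```python
def zev_fleet_share(years, lifetime=15, ramp_years=20):
    """Toy stock-turnover model of the truck fleet.

    The ZEV share of new sales ramps linearly from 0% to 100% over
    `ramp_years`; every truck retires after `lifetime` years.
    Returns the ZEV share of the on-road fleet for each year.
    """
    cohorts = [0.0] * lifetime        # ZEV share of each model-year cohort
    shares = []
    for year in range(years):
        sales_share = min(1.0, year / ramp_years)
        cohorts = [sales_share] + cohorts[:-1]   # new cohort in, oldest out
        shares.append(sum(cohorts) / lifetime)
    return shares

shares = zev_fleet_share(35)
# Sales reach 100% ZEV in year 20, yet the fleet is only 65% ZEV that year;
# the last pre-ZEV trucks do not retire until year 34.
```

The lag between sales share and fleet share is why a transition completed by 2050 has to begin soon: every year of delayed sales pushes the full-fleet turnover out by roughly a truck lifetime.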
The needed rate of scale-up is one question; we produced a white paper on this topic in 2015 that shows that a major transition to ZEV trucks needs to begin fairly soon if it is to be completed by 2050. As shown in the figure below, even with a very rapid transition it takes a long time to go from niche markets to dominating the large markets, so each year counts. In our paper (and in the figure to the right) we also show that widespread use of advanced biofuels in conventional trucks could really help, since some types of biofuels do not require changing truck technologies—a big plus. But drop-in diesel replacement biofuels require advanced technologies, and producing high volumes of them from sustainable feedstocks with low greenhouse gas emissions will be a significant challenge.

So, what's to be done? There are in fact a number of things that our local, state and federal governments can do to get moving on a transition to ZEV trucks. In this process, there is an important "virtuous circle" we can benefit from: the more we produce and use these vehicles, the better and cheaper they will become. Governments have a critical role to play in helping the truck manufacturing industry and truck purchasers/operators get onto that circle. This can be done, for example, with price incentives to produce and purchase these technologies, perhaps starting with the applications that make the most sense.

Overall the outlook is bright for moving to very low emissions trucking in the U.S. We have several ways to do it and we are getting some initial experience in some "pioneer" applications. But we have to take up the challenge to move this along faster, and create a sense of urgency that may be lacking today on many fronts. 2050 is just around the corner…

Dr. Lewis Fulton has worked internationally in the field of transport/energy/environment analysis and policy development for over 25 years.
He is Co-Director of the Sustainable Transportation Energy Pathways (STEPS) program within the Institute of Transportation Studies at the University of California, Davis. There he leads a range of research activities around new vehicle technologies and new fuels. He is also a lead author on the recent IPCC 5th Assessment Report, Mitigation ("Climate Change 2014: Mitigation of Climate Change", transport chapter).

Dr. Marshall Miller received his B.S.E. from the University of Michigan and his Ph.D. in physics from the University of Pennsylvania in 1988. After a postdoc at the University of Chicago, he joined the Institute of Transportation Studies at UC Davis. For over 20 years he has worked on advanced fuels and technologies to increase vehicle fuel economy and reduce vehicle criteria pollutants and greenhouse gases. Dr. Miller runs a laboratory on campus where he studies advanced batteries and ultracapacitors for use in electric and hybrid vehicles.

One of the best memories from my time working in the Corn Belt over the past four years was a visit I had with a farmer in South Dakota. I spent the afternoon hours interviewing this farmer in his kitchen where we talked about conservation practices, his experience with extreme weather, and perspectives on climate change. He then took me on a "tour" of his corn and soybean fields to show off his no-till fields, proud as he was of his "residue," or residual plant life from a prior year's cash crop. This farmer had observed that his farming practices had improved the health of his soil and thus improved the productivity and resilience of his farm operation.

Another memory that strikes me is a visit I made to a farmer in Northeast Iowa, who took me on a truck-bound tour on dirt roads around his cropland to point out the erosion problems he has observed on neighboring fields while proudly boasting about the lack of erosion problems on his own fields.
Both of these farmers, along with many others interviewed as part of this project (159 farmers in total), spent a lot of time qualitatively discussing their relationship to their soil resources by noting the color, texture, and function of their soils, often describing improvements in infiltration rates and reduced compaction due to conservation practices that they had implemented on their farms.

My research examines in-depth interviews with farmers across nine Corn Belt states by assessing how farmers respond to weather-related risks and, specifically, how they might alter management practices in response to increased weather variability and projected climate change. Through my conversations with farmers, many of them described an evolving relationship with their soil resources, through a kind of social-ecological feedback, often brought about through changes in their use of conservation practices, such as no-till farming and cover crops.

Outreach efforts with the Natural Resources Conservation Service's (NRCS) Soil Health Initiative, and other climate change outreach efforts targeting agricultural producers, should start with the concept of soil health stewardship. Focusing on the message of managing soil health to mitigate weather-related risks and preserve soil resources for future generations may provide a pragmatic solution for engaging farmers in strategies that have soil building and soil saving at their center.

Focusing on communication, however, may not be enough to incentivize farmers to manage their soil resources differently, particularly in agriculturally intensive regions such as the Corn Belt, where there is a direct cost associated with the use of conservation practices in the context of an already expensive production system.
We need to do a better job of placing value on soil resources that have been retained (e.g., erosion prevention) and enhanced (e.g., improved soil health) so that farmers are better able to implement soil stewardship on their farm for the long-term resilience of their operation. This challenge requires new research to simultaneously ensure profitability while protecting and renewing the environmental systems that support agriculture. Increased funding for the Agriculture and Food Research Initiative, a competitive research program of the USDA, is an important investment in the future of food and will likely benefit farmers and society, and pave the way for a more sustainable agrifood system. My work suggests that starting with the soil, and farmers' relationship to this valued resource, provides a critical pathway that links on-farm productivity with longer-term environmental sustainability.

Gabrielle recently received her PhD in Sociology and Sustainable Agriculture at Iowa State University. She has worked in the U.S. Corn Belt for the past four years conducting and analyzing in-depth interviews with large-scale corn producers as part of a multi-state effort to examine climate change impacts and resilience-building strategies for corn-based cropping systems (www.sustainablecorn.org). The results reported in this blog will be published in a peer-reviewed journal later in 2016. Follow Gabrielle on Twitter @G_Roesch or on Research Gate to read the full piece once it is published.

This new Organic Initiative appears to be the manifestation of the goals of "green consumerism"—the idea that more informed and responsible shoppers can transform the way goods are produced. But in the case of organic wheat, what seems like a straight line from consumer demand to physical reality is not so straightforward. In other words, the votes have been cast, but what happens next is unclear. The economic incentive for farmers to go organic is substantial.
Today, organic wheat sells for almost three times as much per bushel as conventional. But as the chasm between supply and demand indicates, this incentive has not been enough. Ardent Mills hopes it can ramp up the organic wheat supply through long-term contracts for transitional and organic wheat and through greater access to educational resources for making the transition. But there may be deeper reasons for the lag in supply. “Organic farming is a philosophical and emotional commitment.” This is what Jean Hediger told me when I asked her why more wheat farmers weren’t growing organic. Jean has been a dryland organic wheat and millet farmer at her family farm in Nunn, Colorado, for almost 30 years. “The truth of the matter is it’s really hard to be an organic farmer if your heart is conventional, and the other way around.” In part, this is because organic farmers are prohibited from using some cheap and easy solutions to unpredictable and devastating problems. Put yourself in Jean’s shoes: it’s the summer of 2015, weeks away from harvesting the largest wheat crop you have ever grown. Then, in a matter of days, the whole field turns bright orange, the result of a fungal pathogen called wheat stripe rust. Fungicide would be a quick and inexpensive solution for conventional producers, but the organic standards prohibit the use of these synthetic pesticides. It takes a deep commitment to watch a fungus take over hundreds of thousands of dollars’ worth of wheat and not give in to the desire to spray fungicides. But Jean held on, and it paid off. Though she didn’t know it at the time, the rust had arrived too late to cause much damage. The Hedigers transitioned to organic in 1989, before there was a large economic incentive to do so. They did it to keep their son safe from harmful chemicals, and they are truly proud of how they farm.
“Even if you told me conventional wheat was 8 times the price of organic wheat, I still don’t think we would do it.” The Hedigers’ long-term commitment to the organic standards suggests a worrisome prospect for the Organic Initiative. Most of the farmers motivated by reasons other than economics to go organic have already done so. What will farmers who go organic purely in pursuit of economic gain do if organic practices aren’t in their immediate economic interest? Compared to conventional, the organic toolkit for handling pests and pathogens is inconvenient, and it can be extremely difficult to justify a holistic management approach when nature threatens this year’s bottom line. Ardent Mills is connecting prospective organic producers with experienced ones to help prepare them for situations like these, but undoubtedly there will be producers who realize too late the depth of the organic commitment. While meeting organic demand is an important goal, rushing to meet it by reaching only the minimum standards risks losing sight of what “organic” means to many producers and consumers alike. After decades of simplification and industrialization in the organic sector, new calls to ramp up organic production need to be vigilantly matched by calls for environmental sustainability if organic is to remain an ecologically preferable alternative to conventional. Some organic wheat production practices are already falling far short of the organic ideal. There are a hundred ways to farm organically, but I doubt wheat-fallow belongs among them. Wheat-fallow is a crop rotation that involves planting wheat one year, leaving the land bare (fallow) for the next 14 months, and then planting wheat again. According to the USDA organic regulations, farmers are required to implement a crop rotation that maintains or builds soil organic matter, works to control pests, manages and conserves nutrients, and protects against erosion.
Wheat-fallow achieves none of these goals, yet accredited certifiers in states like Wyoming and Colorado are certifying it, citing the semi-arid climate as sufficient reason to permit the practice. For dryland farmers like the Hedigers, lack of rainfall is a barrier to diversifying crop rotations beyond wheat-fallow. But there are enough examples of diversified producers to show that wheat-fallow isn’t a necessity. In the same dry climate where wheat-fallow is an accepted practice, the Hedigers grow a rotation of wheat and millet with a cover crop of yellow sweet clover to provide the soil with a natural source of nutrients. With growing American concern about the realities of climate change and biodiversity loss, more consumers will hopefully look to agriculture to become part of the solution rather than remain, as it is today, a massive part of the problem. Organic agriculture is often touted as a path toward our environmental goals, but for that to work, the commitment to ecological sustainability cannot be an afterthought. Hitching the organic wheat wagon to ongoing movements in soil health or agroecology could help prepare a new wave of organic producers for the difficult road ahead and more closely align the outcomes of the Organic Initiative with the organic ideal. Steven is a Ph.D. student in Soil and Crop Sciences at Colorado State University. He is researching the socioeconomic and political barriers to diversifying crop rotations and assessing the impact of more diverse crop rotations on soil health. You can find him on Twitter @zweigST

Here in the U.S., public opinion has been muted by misinformation and lobbying campaigns directed by the energy industry. The result is little political will for constructive engagement on climate change and climate impacts. It is worth noting that this is National Public Health Week, and on Monday the U.S.
Global Change Research Program released a comprehensive report, The Impacts of Climate Change on Human Health in the United States. Concerted, urgent action is needed if we are to realize humanity’s opportunities for improved health and well-being. Over the past few years I have been joined by dedicated Minnesota doctors and nurses to form a new organization, Health Professionals for a Healthy Climate (HPHC). Our goals are to educate health care professionals and institutions about the public health implications of climate change and to encourage a strong, sustained voice from health care providers to affect public attitudes and actions in this area. As with other public health campaigns, such as smoking cessation or seat belt use, information about climate and pollution can be very effective in motivating change. Last year, HPHC created and presented an accredited continuing medical education course, Climate Change and Public Health: An Interprofessional Review, as part of our efforts to educate health professionals on the connection between climate change and public health. We recently composed a letter to Minnesota legislators in support of Minnesota’s Clean Power Plan (CPP). The letter, co-signed by over 125 doctors and nurses as well as 13 Minnesota health care organizations, urges Minnesota legislators to support our state’s plan to meet the standards of the federal CPP. It has received significant attention, including a recent feature in Midwest Energy News. We need more health care professionals and groups to join our campaign so we can bring even more awareness to this critical issue. HPHC is training climate speakers, producing informational videos, lobbying officials, and writing letters and commentary pieces, among other efforts. If you’re interested in adding your talents, energy, and ideas to our efforts, you can contact us through our website, our Facebook page, or by email at firstname.lastname@example.org.
Health professionals can play a unique and powerful role in educating the public and policy makers about climate change and its impacts and in encouraging them to act. I hope you’ll join us in advocating for a strong Clean Power Plan in Minnesota. Dr. Bruce Snyder is a retired neurologist. He is a UCS Science Network member and works with several environmental organizations, including the Sierra Club and Fresh Energy. He lives in Mendota Heights, Minnesota, and is hoping the squirrels don’t eat his tulips.

Hundreds of scientific studies now show that organic agriculture can produce sufficient yields, be profitable for farmers, protect and improve the environment, and be safer for farm workers. Thirty years ago, there were just a couple of handfuls of studies comparing organic with conventional agriculture. In the last 15 years, the number of such studies has skyrocketed. The review study, “Organic Agriculture in the 21st Century,” is featured as the cover story of the February issue of the journal Nature Plants. It is the first to compare organic and conventional agriculture across the four goals of sustainability identified by the National Academy of Sciences: productivity, economics, environment, and social wellbeing. Critics have long argued that organic agriculture is inefficient, requiring more land to yield the same amount of food. It’s true that organic farming produces lower yields, averaging 10 to 20 percent less than conventional. Proponents contend that the environmental advantages of organic agriculture far outweigh the lower yields, and that increasing research and breeding resources for organic systems would narrow the yield gap. Sometimes excluded from these arguments is the fact that we already produce more than enough food to feed the world’s 7.4 billion people but do not provide adequate access to all individuals. In some cases, organic yields can even be higher than conventional.
For example, under severe drought conditions, which are expected to become more common with climate change in many areas, organic farms can produce yields as good as, if not better than, conventional farms because of the higher water-holding capacity of organically farmed soils. What science does tell us is that mainstream conventional farming systems have provided growing supplies of food and other products, but often at the expense of other sustainability goals. Conventional agriculture may produce more food, but it often comes at a cost to the environment. Biodiversity loss, environmental degradation, and severe impacts on ecosystem services have not only accompanied conventional farming systems but have often extended well beyond their field boundaries. With organic agriculture, environmental costs tend to be lower and the benefits greater. Overall, organic farms tend to store more soil carbon, have better soil quality, and experience less soil erosion than their conventional counterparts. Organic agriculture also creates less soil and water pollution and lower greenhouse gas emissions. And it’s more energy-efficient because it doesn’t rely on synthetic fertilizers or pesticides. Organic agriculture is also associated with greater biodiversity of plants, animals, insects, and microbes, as well as greater genetic diversity. Biodiversity increases the services that nature provides, like pollination, and improves the ability of farming systems to adapt to changing conditions. Despite lower yields, organic agriculture is more profitable for farmers because consumers are willing to pay more. These higher prices, called price premiums, can be justified as a way to compensate farmers for providing ecosystem services and avoiding environmental damage or external costs. Although studies that evaluate social equity and quality of life for farm communities are few, the available evidence suggests that both organic and conventional farming leave room for improvement.
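Returning for a moment to the profitability point above: whether a price premium outweighs a yield gap is simple arithmetic. The sketch below uses purely hypothetical numbers, chosen only to mirror the 10 to 20 percent yield gap and the roughly two- to threefold wheat premium mentioned earlier; they are not drawn from the study.

```python
# Back-of-the-envelope gross-revenue comparison per acre.
# All yields and prices here are assumed for illustration only.

def revenue_per_acre(yield_bu, price_per_bu):
    """Gross revenue per acre in dollars: bushels/acre times $/bushel."""
    return yield_bu * price_per_bu

conventional = revenue_per_acre(40.0, 5.0)   # assume 40 bu/acre at $5/bu
organic_yield = 40.0 * (1 - 0.15)            # 15% yield gap (midpoint of 10-20%)

for premium in (2.0, 3.0):                   # 2x and 3x organic price premiums
    organic = revenue_per_acre(organic_yield, 5.0 * premium)
    print(f"{premium:.0f}x premium: organic ${organic:.0f}/acre "
          f"vs conventional ${conventional:.0f}/acre")
```

Even a 2x premium more than covers a 15 percent yield gap in gross terms under these assumptions; actual profitability, of course, also depends on input, labor, and certification costs, which vary from farm to farm.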
Still, organic farming comes out ahead when it comes to providing jobs for workers and reducing farmworkers’ exposure to pesticides and other chemicals. Many organic certification programs also set wellbeing goals for farmworkers, as well as for animals. Organic agriculture has been able to provide jobs, be profitable, benefit the soil and environment, and support social interactions between farmers and consumers. Yet no single type of farming can feed the world. Rather, what’s needed is a blend of organic and other innovative farming systems, including agroforestry, integrated farming, conservation agriculture, mixed crop/livestock, and still-undiscovered systems. With only 1% of global agricultural land in organic production, organic agriculture could contribute a larger share to feeding the world. Yet significant barriers to farmers adopting organic agriculture hinder its expansion. Such hurdles include existing policies, the costs of transitioning to organic certification, lack of access to labor and markets, and lack of appropriate infrastructure for storing and transporting food. Governments should focus on creating policies that help develop not just organic but also other innovative and more sustainable farming systems. Specifically, agricultural policies should:

For a copy of the study, please email John Reganold. Dr. John Reganold is Regents Professor of Soil Science and Agroecology at Washington State University and has spent more than 30 years bringing a blend of innovative research and teaching on sustainable farming systems into the mainstream of higher education and food production. His research has measured the effects of organic, integrated, and conventional farming systems on productivity, financial performance, environmental quality, and social wellbeing on five continents. His former students are on the front lines of sustainability around the world, bringing food security to sub-Saharan Africa for the U.S.
Agency for International Development, adapting quinoa to the salty soils of Utah, working on agroecology for Pacific Foods in Oregon, and turning wastes into resources in Haiti.

The recent Paris climate agreement has helped thrust the issue of climate change into international awareness. As countries promise to reduce their contributions to global emissions, the environment should be something we all now consider and incorporate into our daily lives. We should be more aware of how our energy choices affect the environment and, in turn, our health. Contradicting this mantra, however, the U.S. continues to promote and extract domestic oil and gas, even when the market is flooded with the product. Why? Because the collective “we” demands it. Americans want cheap energy. We want to be able to heat our homes and cook our meals without breaking the bank. We see sleek ads promoting domestic oil and natural gas as the ultimate solution to ending our reliance on foreign-produced energy resources. What we do not see are the impacts this growth in unconventional drilling has on people and the environment. As a visitor flying into southwestern Pennsylvania, for example, you might see a drilling rig and some equipment on a well pad. You might even see this well flaring from the highway. What you won’t see, unless you live near this industry, are the impacts caused by dense unconventional drilling over a longer timeframe. Part of the issue with unconventional drilling is its magnitude and scope: there are approximately 1.7 million active oil and gas wells in the U.S. Yes, money can be made by some townships, landowners, and local businesses. Yes, there can be benefits for mineral rights owners who sign well-reviewed leases. When the process fails, however, there can also be serious problems: And finally, in October 2015, a massive leak was discovered at a natural gas storage well in California.
At the time of this writing, the leak has not been fixed and is releasing natural gas into the atmosphere at a rate comparable to the daily emissions from operating 4.5 million cars. This situation is just one example of drilling adversely affecting the environment. While often touted as a cleaner fossil fuel option, extracting, storing, and distributing natural gas and other hydrocarbons can unintentionally release methane, the key component of natural gas and a major contributor to climate change. Climate change itself has health implications. Examples include the capacity to: Likely, many more impacts of unconventional drilling go undiscovered.

Uncertainty and inaction

With the known environmental health and climate change threats surrounding unconventional drilling, the U.S. should be actively moving away from fossil fuels toward cleaner energy options. Unfortunately, a lack of transparency, research gaps, demand for fossil fuels, regulatory conflicts, money, and politics make the path winding and convoluted. I cannot offer a quick fix for this complicated issue, but here are a few steps people can take: Between the direct and indirect problems discussed above, large-scale oil and gas operations, including unconventional drilling, will affect everyone in some way. If we continue to demand that such risky extraction techniques fill our energy gaps, environmental health may be the one to suffer. What we know about this broad issue is concerning. What we don’t know is worse. Samantha Rubright, MPH, CPH, serves as The FracTracker Alliance’s Manager of Communications and Partnerships and is finishing her doctorate in environmental health at the University of Pittsburgh Graduate School of Public Health. Learn more about oil and gas impacts at www.fractracker.org. You can follow her on Twitter @SamMaloneMPH

People in the Arctic aren’t the only ones: farmers rely on a predictable climate to feed their livestock and ensure their harvests are profitable.
City infrastructure is carefully built to withstand historical weather extremes. But as the climate changes, it is becoming harder for farmers to earn a consistent living off their land; harder for cities to cope with increased risks of flooding, water shortages, or both; and more difficult for the elderly, young, and poor to survive extreme heat waves and cold snaps. These problems are worrisome in developed nations, where the frequency of extreme precipitation has increased by 37 percent across the U.S. Midwest in the last 55 years and Farmers Insurance briefly sued the city of Chicago for failing to adequately prepare for the impacts of climate change; where two devastating years of crop losses throughout the U.S. Great Plains (the result of natural drought patterns exacerbated by unnaturally record-breaking heat) were followed by a recommendation from the U.S. Government Accountability Office to reduce crop insurance; and where a heatwave in 2003, the risk of which had already doubled due to human-induced climate change, resulted in over 70,000 premature deaths across France and northern Europe. In developing nations, however, which lack the safety net of infrastructure, public services, and insurance, such issues can be orders of magnitude more devastating. Although much of the focus at COP21, and in the days afterward, has been on the emission reductions needed to achieve the given targets, for developing nations the real issue on the table was climate finance: funds to support climate mitigation and adaptation, which until now have been vastly inadequate to alleviate the local effects of climate change while enabling such nations to continue to develop. Equitable financing has always been at the forefront of international climate negotiations. Emissions-heavy countries care more about mitigation and bear a greater historical responsibility. Developing countries contribute relatively little to emissions but are experiencing the brunt of the impacts.
According to a recent report by Oxfam America, the richest 10 percent of people in the world produce half of the world’s carbon emissions, while the poorest half are responsible for just 10 percent of emissions. The two sides of the equation are unequal. The purpose of the financing discussed at COP21 is to redress these differences. But currently much more money goes toward mitigation than toward adaptation. This imbalance becomes particularly troubling when we know that the local effects of climate change are increasing, the gap between rich and poor is widening, and we are locked into a certain amount of warming even if we stopped producing emissions this very second (if only it were that easy!). According to Alden Meyer, Director of Strategy and Policy for the Union of Concerned Scientists, the division between rich and poor is arbitrary, and getting more ambition from countries in equitable ways is difficult. This is one of the reasons equity became one of the “down-to-the-wire” issues in the Paris climate talks: how to set fair emission reduction targets between developed and developing countries, and how to delegate responsibility for financing. Developed nations have been unwilling to establish more aggressive mitigation goals, while developing countries that have not been as responsible historically (remember, just 10 percent) are concerned that equal emissions targets will inhibit opportunities for economic development. New financing mechanisms to help countries mitigate and adapt to climate change were proposed in 2009 to try to resolve some of these differences. The 2009 Copenhagen Accord resulted in a pledge by developed countries of $100 billion per year by 2020 to help developing countries deal with climate change. The following year, during the COP16 meeting in Cancun, the Green Climate Fund (GCF) was established to help countries reach this $100 billion goal.
The GCF receives funds from governments and the private sector, with $10.2 billion in pledges as of October 2015, a sign that the international community is committed to building resilience to climate change. The GCF is very important, but it will not take us to our goal on its own. As Meyer put it, “the GCF is one part of the finance architecture but it is not the lion’s share of funds.” The initial capitalization of the GCF is $10 billion, spread out over a number of years. One important guideline in the GCF is that 50 percent of its financing will go to adaptation. Right now, only about 15 to 20 percent of climate finance goes to adaptation, far from 50 percent (although it is important to note that many mitigation and adaptation strategies share co-benefits, meaning funding adaptation can often contribute to mitigation, and vice versa). Part of the reason is that private sector funds go much more often to mitigation and green energy technology; it is much harder to attract private sector funding for adaptation. Will developed countries meet the commitment they made in Copenhagen in 2009 to mobilize $100 billion from public and private sources in climate finance by 2020? Many countries do not have time to wait and see. That’s why organizations like Oxfam, whose goals are to end poverty, hunger, and injustice, are moving forward and developing innovative programs and initiatives that help build local capacity and increase resilience to such extremes. In 2010, Oxfam launched the R4 Rural Resilience Initiative to help farmers around the world manage their disaster risks and secure their livelihoods. In 2014 alone, R4 provided drought insurance to over 26,000 farmers in Ethiopia and Senegal, helping people learn how to become more resilient. Selas Samson Biru, a 50-year-old farmer in a remote Ethiopian village called Adi Ha, is one of the many farmers who have benefited.
Increased climate variability has led to more uncertainty from year to year, putting her crops and livelihood in jeopardy. Now equipped with a new tool to manage drought risks, Biru receives payouts for crops she has insured. And when her weather insurance became available, Biru joined other farmers to buy an irrigation pump. This investment has provided more abundant, profitable harvests. Farmers in drought-stricken parts of Texas similarly find their livelihoods threatened by the growing unpredictability of climate and weather. West Texas is well accustomed to droughts. Farmers here have traditionally relied on playa lakes, surface depressions that fill with water during certain times of year, but playas can no longer sustain farmers through severe droughts. Instead, farmers are implementing new technologies to manage changing risks and stay afloat. Andy Timmons, a grape producer in West Texas, moved away from farming row crops and started growing grapes to diversify his operations, a decision he made as he began losing confidence in the Farm Bill and federal commitments to agriculture, which he says are “going away.” Even though grapes adapt to extreme weather better than row crops, he has to cultivate his vineyard differently every year, testing new irrigation strategies to continue to increase production. To deal with early spring freezes during dry years, he installed wind machines to circulate air and keep it from stagnating so grapes wouldn’t freeze. Timmons had the resources to adapt, but Biru and many other farmers in developing nations don’t. A successful climate agreement at COP21 and beyond isn’t just about reducing our emissions: it’s about helping people, real people, adapt to the changes we cannot avoid. The $100 billion pledge, including the GCF, is an encouraging start. However, much of the money has only been “announced” and not yet disbursed. Actions at the ground level also seem to be outpacing top-down funding mechanisms.
While countries continue working out ways to reach the $100 billion financing goal, stories like Biru’s and Timmons’s remind us of the importance of ground-up grassroots actions that are already helping to build local knowledge and resilience. Are there examples you can share from your community?

First, the concept of “legally binding” doesn’t really exist between sovereign nations. Some nations will make the commitments legally binding within their own borders, but there won’t be an international police force to penalize noncompliant states. Second, such commitments would have required the U.S. Senate and similar bodies in other nations to formally ratify the accord. And as UCS expert Alden Meyer points out, this is unlikely to occur in the current political climate. And third, for the above reasons, the goal in Paris was not to come away with legally binding commitments but rather for all nations to seriously consider the most they can do and publicly commit to their best effort, with an emphasis on the word publicly. One way to think of it is like a potluck dinner, in reverse. At a potluck dinner, each guest brings food to share. No one person brings a complete meal, but once all the food is assembled, there is supposed to be enough for everyone. In preparation for the potluck, each guest reviews their food supplies (and their favorite recipes) to determine what to bring. In the same way, each country brought its Intended Nationally Determined Contributions (INDCs) to Paris. Before setting their INDCs, countries reviewed their emissions and reduction options to determine how much action they could agree to, and of what type. For example, India plans to install LED streetlights and leapfrog a traditional distributed grid; the EU will cap and trade carbon; and Bhutan is planning to regrow its forests.
In addition to calculating carbon sinks and storage, many developed nations also estimated the amount they could contribute to financing adaptation in developing nations already suffering the impacts of climate change. At a potluck, as at a COP, there is also the challenge of deciding how much food each guest should bring. This is particularly challenging when some guests have large appetites and large resources, while others are near starvation and have little to offer. Once all the INDCs had been assembled and laid on the table last week, it was clear that the reductions were insufficient to achieve the global goal of limiting warming to 2°C or below. For that reason, various countries, such as Canada, stepped up their contributions over the course of the negotiations, and a great deal of discussion has focused on the role of land use in carbon sequestration: could it provide the salad course? There is a catch with voluntary measures, of course. If enough of the “guests” don’t carry through on their promises to bring food, the table will be sparse and the world will collectively miss its goal. At a potluck, there is no formal mechanism for penalizing such guests. No fines or jail time are handed out for failure to participate. There is a price, though: social ostracism. In the same way, even binding limits in international treaties don’t provide much more accountability. If an agreement included a penalty for not meeting one’s targets (payment into a fund, for example, to support nations that would need to spend more on adaptation), how could anyone be forced to pay? Until other nations are willing to implement meaningful sanctions for non-compliance, there’s no big stick to wave. The small stick we’re left with is public shaming. With shaming as the only viable recourse for failure to meet COP targets, why not agree on voluntary commitments and use those to push for greater ones?
Searching through the cupboards one more time to see if there’s a bag of rice or a package of cookies left to bring to the potluck is the equivalent of each nation taking another long look at its economy, its transportation and industry, and its resources. What more is possible with existing technology? Could public and private investments be tapped? What separates the low-hanging fruit from the deeper, more costly reductions? The Conferences of the Parties are designed to publicly highlight each nation’s commitments and capabilities. As a public and very high-profile forum, a COP provides a powerful instrument for facilitating change. One country’s reductions, which may have seemed ample and sizeable back home, may suddenly shrink when displayed on a global stage, side by side with similar efforts from other economies. No one wants to be the person who dragged a single-serving tart of last year’s apples out of the back of the freezer while their neighbors brought a fresh-made pie for twelve. So let’s not be too hasty to claim that an accord without binding limits is a failure. Voluntary reductions may still lead to a successful global potluck if each nation’s contributions are regularly reviewed, sampled, and commented on around the world, as is the plan. Caleb Crow, sustainability engineer and Ph.D. student in political science at Texas Tech University, contributed to this post.

We sat down with Meyer over breakfast to get his unique and invaluable perspective on the evolution of past COPs and how they have shaped the path to Paris. It is evident from the first exchange that Meyer has a wealth of surprising facts and important insights to share. He has witnessed first-hand many key turning points in the history of international climate negotiations during the last 25 years, a long history that gives him unique insight into the battles fought and impediments faced that still reverberate through the climate negotiations at COP21 today.
Few people probably realize that preliminary negotiations took place in 1990 in Westfield, Virginia, the first and only such talks to be held on U.S. soil. These negotiations began right after the Intergovernmental Panel on Climate Change (IPCC) released its first report in 1990. Meyer was there when the UN General Assembly passed a resolution authorizing the negotiations that led to the Rio Framework Convention, which was adopted in New York in 1992 just before the first meeting in Rio de Janeiro, known as the Earth Summit. Meyer describes the Earth Summit as a huge deal in its size and scope. It led to the UNFCCC and the Convention on Biological Diversity. But tension around the idea of legally binding commitments was already thick in the air. At the 1992 Earth Summit, many worked hard to persuade then-President George H. W. Bush that climate change was a real threat. Although the science was taken seriously in that agreement, which called for limits on carbon dioxide levels that would prevent dangerous anthropogenic (human) interference with the climate system, the text of the treaty was so convoluted that it was difficult to say whether the U.S. was committing to any specific measure under the agreement. For example, one section of the text stated a goal of returning emissions to 1990 levels, and another stated that it would demonstrate leadership if industrialized countries could achieve that goal by the year 2000. Keeping the two sections separate, however, prevented the U.S. from being legally obligated to these goals. There was also an understanding with the U.S. Administration that if any future amendment imposed binding emissions obligations on the U.S., the agreement would come back to the Senate for a vote.
In fact, this is the origin of the debate today around the terminology "legally binding." For instance, if an agreement is set to "achieve" a target, this triggers an understanding that the agreement here in Paris would have to go to a vote in the Senate, which, Meyer points out, would not likely get anywhere near 67 votes given the current outlook of Republican senators. Alden Meyer recalls the first official convening of the COP in Berlin in 1995. This meeting represented the first time there was agreement among countries that not enough progress was being made to implement the goals of the framework treaty and that a protocol needed to be negotiated – what became the 1997 Kyoto Protocol. The Kyoto Protocol included binding emissions reductions for developed countries, but not for developing countries like India, China, and Brazil. This rapidly became a source of contention, and the focal point used by opponents. TV ads even ran in the U.S. criticizing the protocol, saying it wouldn't work because it was not global. And while Asian countries were being told not to accept binding agreements because it would hurt their economies, the U.S. public was told that the U.S. shouldn't accept binding commitments because these other countries wouldn't accept them. COP6 in 2000, which took place in The Hague in the Netherlands, was the only COP in history to be suspended. Debates between the U.S. and the EU on rules for crediting carbon sinks or using offsets toward the Kyoto targets resulted in a standstill. The session resumed five months later, but disagreements continued. After the resumed session and before the next COP meeting in 2001, Meyer remembers how the U.S. announced it was going to pull out, as George W. Bush famously declared that the Kyoto Protocol was dead. This announcement angered the rest of the world, and countries rallied to save Kyoto with the Marrakesh Accords in 2001.
At COP15 in Copenhagen in 2009, several countries opposed the Copenhagen Accord, so the outcome there was that they only took “note” of the accord. In response, the U.S. and other countries launched a big campaign to persuade countries to put forward pledges, which was the antecedent of a new process to develop INDCs – Intended Nationally Determined Contributions – ahead of Paris. INDCs are bottom-up, self-differentiated goals among nations. They offer a tangible way for governments to communicate the steps they will take to address climate change. Copenhagen essentially became a stepping-stone in developing a process for setting emissions reductions targets from the ground up. The U.S. has supported the INDC process ever since it was first proposed. But, according to Meyer, INDC commitments are not ambitious enough. To Meyer, a major element of the COP21 agreement will be setting up top-down assessments of how we are doing collectively, and establishing expectations for countries to “up their game” moving forward into the 2020s, 2030s, and beyond. But because the INDC process is bottom-up and self-determined, getting adequate ambition is very difficult. Meyer is hopeful, however. Decentralized renewable energy is becoming more cost-effective and efficient. He spoke to a senior negotiator from India earlier in the week here at COP21, who told him that in India the cost of an LED light bulb had dropped from the equivalent of $5 USD to just over $1 per bulb in the last 17 months. India’s goal is to replace all street lighting in the country with LEDs by the year 2019, which, Meyer says, “is more ambitious than any developed country I know of.” According to Meyer, issues of equity and leadership have always been central at the COP meetings. The division between rich and poor countries remains arbitrary. 
Developing countries were exempted from Kyoto, which meant that rich countries like Qatar and Saudi Arabia, with per capita GDPs much higher than those of poorer industrialized nations like Portugal and Greece, were exempted while those poorer nations were not. Meyer describes this as a shorthand mechanism for distinguishing rich from poor, but not one that has been objectively based. The Climate Action Network is working to establish an equity reference framework that sets up principles, criteria, and indicators to make this process more objective. This will include INDCs, finance, and technology support because, as he explains, "everything's connected." "The notion that development and carbon emissions are inextricably linked is one of the biggest problems in this process. It is not true. And now, it is certainly less true than it ever was," Meyer said. With the scaling up of renewables in Germany, the U.S., and China, prices continue to go down. By 2020, India, China, and others will be able to do much more than they think they can do now. For this reason, Meyer says, it's important not to lock the current INDCs in place. Instead, there must be flexibility in the process. Meyer says one of the most important things they are fighting for here in Paris is simply an opportunity that civil society can use to urge all countries to increase the ambition of their first round of INDCs. "COPs are where you make climate change a priority," Meyer said. While it is true that no international agreement will force countries like the U.S., China, and Brazil to do anything they didn't intend to do in the first place, COP meetings are the moments when the world discusses the problems, when civil society rallies around them, the media covers the issues, and they become political issues. To Meyer, the focus is not so much on the details of countries' INDCs, but rather on formulating the process.
This process is driving change in domestic policy because, for the first time, coordinating and consulting with ministries and stakeholders is changing what countries think they can do to tackle the issues. In other words, examining the issues uncovers opportunities they may not have known were there. But what will it really come down to when COP21 goes down to the wire? Meyer says the biggest issues are review, finance, and transparency. The monitoring, reporting, and verification (MRV) regime for developing countries is crucial. The U.S. cannot accept self-reporting without some type of independent verification. Countries need a flexible review process post-2020 to have all countries increase their INDCs moving forward. This is still being negotiated and, according to Meyer, will be one of the last-minute items to be settled. Adaptation and loss and damage will also have to be a big part of the Paris agreements. But how to get more ambition from countries in an equitable way remains a big question. There is a lot of work still to do here in Paris. Reaching a Paris agreement is crucial, because we don't have time to waste. The rate of emissions reductions we'd need to achieve to get on the 2°C pathway goes up the longer we wait. Meyer believes that, while we may not be able to solve the problem of getting on the 2°C track at Paris, COP21 may be a tipping point and a transformational moment if countries commit to doing more by the end of the decade. At the very least, he believes we can keep the option of the 2°C track open.
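Meyer's point that delay steepens the required cuts can be illustrated with a back-of-the-envelope carbon-budget sketch. The figures below (a 1,000 GtCO2 remaining budget and 40 GtCO2/yr of current emissions) are illustrative assumptions chosen for easy arithmetic, not numbers from the interview:

```python
# Toy carbon-budget arithmetic (assumed figures, for illustration only).
# If emissions stay flat until mitigation begins and then ramp linearly to zero,
# a ramp from rate E over T years emits E*T/2, so T = 2 * remaining_budget / E.

def years_to_zero(budget_gt=1000.0, emissions_gt_per_yr=40.0, delay_yr=0.0):
    """Years available for a linear ramp to zero after delay_yr of inaction."""
    budget_left = budget_gt - emissions_gt_per_yr * delay_yr
    if budget_left <= 0:
        return 0.0  # budget already spent; no gradual pathway remains
    return 2.0 * budget_left / emissions_gt_per_yr

print(years_to_zero(delay_yr=0))   # start now: 50.0 years to reach zero
print(years_to_zero(delay_yr=10))  # wait a decade: only 30.0 years remain
```

Waiting ten years doesn't just shave ten years off the schedule; the delay consumes the budget itself, which is why the required pace of reduction climbs faster than the delay.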
Despite the outcomes of past meetings, some successful, some more disappointing, Meyer remains hopeful that "we will leave Paris with something dynamic and energizing, something that preserves some hope in the system." I think we are all hoping for that!

Despite—or perhaps even in part due to—these pressures, at the beginning of the second week, with an on-time delivery of preliminary draft outcomes, there are positive indications that COP21 may still produce the international treaty the world needs and demands. Despite the recent tragedy and the oft-discussed existential despair of climate scientists, the air here is hopeful. Delegates, scientists, concerned observers, students, and many other visitors to the vibrant Climate Generations Area are full of energy and optimism. There's a lot to do and see here; and while outside we know the impacts of climate change continue unchecked, inside hope pervades. The Climate Generations area is constantly abuzz with people, activity, and events. Multiple exhibit areas showcase the climate action work of businesses and organizations from around the world. Conference sessions offer presentations and panel discussions that span topics from oceans, forests, and agriculture to youth and education, highlighting the many ways in which people are getting involved and taking action. Throughout the day, impromptu music, parades, and people in eye-catching costumes relay topical climate messages. Sustainability is both talked and walked here. The buildings are made of natural materials, and 100% electric cars and hybrid buses in combination with public transportation get people to and from the meeting space. Food areas are lined with recycling and composting bins, and tables are stocked with water carafes and drinking fountains to cut down on the use of water bottles. Pedal-powered charging stations for laptops, phones, and other devices let you cycle as you work…or power a blender to make some organic juice!
Here at the beginning of week two, it is clear the spirit of COP21 remains hopeful. During a somber time in the wake of the Paris terrorist attacks, we are reminded of what the COPs have set out to do since they first began – provide a space for the international community to come together to tackle climate change and limit its impacts, especially to the poorest and most marginalized. COP21 is about the environment, the economy, national security, our health, and our children’s future. Caleb Crow, sustainability engineer and Ph.D. student in political science at Texas Tech University, contributed to this post.
Switching On One-Shot Learning in the Brain
Credit: Sang Wan Lee/Caltech
Most of the time, we learn only gradually, incrementally building connections between actions or events and outcomes. But there are exceptions—every once in a while, something happens and we immediately learn to associate that stimulus with a result. For example, maybe you have had bad service at a store once and sworn that you will never shop there again. This type of one-shot learning is more than handy when it comes to survival—think of an animal quickly learning to avoid a type of poisonous berry. In that case, jumping to the conclusion that the fruit was to blame for a bout of illness might help the animal steer clear of the same danger in the future. On the other hand, quickly drawing connections despite a lack of evidence can also lead to misattributions and superstitions; for example, you might blame a new food you tried for an illness when in fact it was harmless, or you might begin to believe that if you do not eat your usual meal, you will get sick. Scientists have long suspected that one-shot learning involves a different brain system than gradual learning, but could not explain what triggers this rapid learning or how the brain decides which mode to use at any one time. Now Caltech scientists have discovered that uncertainty in terms of the causal relationship—whether an outcome is actually caused by a particular stimulus—is the main factor in determining whether or not rapid learning occurs. They say that the more uncertainty there is about the causal relationship, the more likely it is that one-shot learning will take place. When that uncertainty is high, they suggest, you need to be more focused in order to learn the relationship between stimulus and outcome.
The researchers have also identified a part of the prefrontal cortex—the large brain area located immediately behind the forehead that is associated with complex cognitive activities—that appears to evaluate such causal uncertainty and then activate one-shot learning when needed. The findings, described in the April 28 issue of the journal PLOS Biology, could lead to new approaches for helping people learn more efficiently. The work also suggests that an inability to properly attribute cause and effect might lie at the heart of some psychiatric disorders that involve delusional thinking, such as schizophrenia. "Many have assumed that the novelty of a stimulus would be the main factor driving one-shot learning, but our computational model showed that causal uncertainty was more important," says Sang Wan Lee, a postdoctoral scholar in neuroscience at Caltech and lead author of the new paper. "If you are uncertain, or lack evidence, about whether a particular outcome was caused by a preceding event, you are more likely to quickly associate them together." The researchers used a simple behavioral task paired with brain imaging to determine where in the brain this causal processing takes place. Based on the results, it appears that the ventrolateral prefrontal cortex (VLPFC) is involved in the processing and then couples with the hippocampus to switch on one-shot learning, as needed. Indeed, a switch is an appropriate metaphor, says Shinsuke Shimojo, Caltech's Gertrude Baltimore Professor of Experimental Psychology. Since the hippocampus is known to be involved in so-called episodic memory, in which the brain quickly links a particular context with an event, the researchers hypothesized that this brain region might play a role in one-shot learning. But they were surprised to find that the coupling between the VLPFC and the hippocampus was either all or nothing. "Like a light switch, one-shot learning is either on, or it's off," says Shimojo. 
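The all-or-nothing "switch" the researchers describe can be caricatured with a toy delta-rule update. This is not the paper's actual computational model; the threshold and learning rates below are invented purely to illustrate the idea of a hard switch between incremental and one-shot modes:

```python
def update_association(strength, outcome, causal_uncertainty,
                       switch_threshold=0.7, slow_rate=0.1):
    """Toy delta-rule: high causal uncertainty flips the switch to one-shot
    mode (learning rate 1.0); otherwise the association creeps up gradually."""
    rate = 1.0 if causal_uncertainty > switch_threshold else slow_rate
    return strength + rate * (outcome - strength)

# Low uncertainty: incremental learning; one pairing barely moves the needle.
print(update_association(0.0, 1.0, causal_uncertainty=0.2))  # 0.1
# High uncertainty: one-shot learning; a single pairing sets the association.
print(update_association(0.0, 1.0, causal_uncertainty=0.9))  # 1.0
```

The hard threshold, rather than a smoothly graded learning rate, is what corresponds to the "light switch" coupling between the VLPFC and the hippocampus described above.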
In the behavioral study, 47 participants completed a simple causal-inference task; 20 of those participants completed the study in the Caltech Brain Imaging Center, where their brains were monitored using functional magnetic resonance imaging (fMRI). The task consisted of multiple trials. During each trial, participants were shown a series of five images one at a time on a computer screen. Over the course of the task, some images appeared multiple times, while others appeared only once or twice. After every fifth image, either a positive or negative monetary outcome was displayed. Following a number of trials, participants were asked to rate how strongly they thought each image and outcome were linked. As the task proceeded, participants gradually learned to associate some of the images with particular outcomes. One-shot learning was apparent in cases where participants made an association between an image and an outcome after a single pairing. The researchers hypothesize that the VLPFC acts as a controller mediating the one-shot learning process. They caution, however, that they have not yet proven that the brain region actually controls the process in that way. To prove that, they will need to conduct additional studies that will involve modifying the VLPFC's activity with brain stimulation and seeing how that directly affects behavior. Still, the researchers are intrigued by the fact that the VLPFC is very close to another part of the ventrolateral prefrontal cortex that they previously found to be involved in helping the brain to switch between two other forms of learning—habitual and goal-directed learning, which involve routine behavior and more carefully considered actions, respectively.
"Now we might cautiously speculate that a significant general function of the ventrolateral prefrontal cortex is to act as a leader, telling other parts of the brain involved in different types of behavioral functions when they should get involved and when they should not get involved in controlling our behavior," says coauthor John O'Doherty, professor of psychology and director of the Caltech Brain Imaging Center. The work, "Neural Computations Mediating One-Shot Learning in the Human Brain," was supported by the National Institutes of Health, the Gordon and Betty Moore Foundation, the Japan Science and Technology Agency–CREST, and the Caltech-Tamagawa global Center of Excellence.
Hungry wild polar bear prefers playing with a chained sled dog to eating it. It sounds like a paradox. How could play - defined as "apparently purposeless activity that's fun to do and pleasurable" - be vital for grim survival in such an often random and dangerous world? - And not just play in childhood, but throughout life. - And throughout life not only in humans but in all sorts of animals, including hungry polar bears, chained sled dogs, rats, cats, otters, migratory birds, and, just maybe - yes - ants. All sizes. One research team, led by ethologist Robert Fagen, now professor emeritus at the University of Alaska, spent 15 years sitting in trees in Alaska and Western Canada to observe and document how bears play in the wild. They found that bears who played more often and more successfully throughout adulthood had longer and healthier lives, and thus left more offspring behind. As for insects at play - scientists including famed Harvard myrmecologist Edward O. Wilson and Gordon Burghardt, author of "The Genesis of Animal Play," have described activities including mutual grappling and tussling with mandibles - a sort of rough and tumble play in off hours at the anthill - that looks, in any case, like playful practice at the very least. What might be the connection between play and survival? "Play is one of the brain's best forms of exercise," says Stuart Brown, author of "Play - How It Shapes the Brain, Opens the Imagination and Invigorates the Soul." The exploratory and risk-taking nature of play - including the healthy "rough and tumble play" that can sometimes frighten protective parents - opens the brain to new ideas, Brown told ABC News. Play gives a brain the experience and thus the courage to search outside the box, he says, - to try out new ways of doing things in an unpredictable world that constantly keeps presenting new kinds of menacing problems and obstacles to survival.
"Play keeps minds and brains flexible," says Brown, one of the founders of modern play behavior studies. He started his career as an MD and practicing psychiatrist, and is now the director of the National Institute for Play in Carmel Valley, California. A growing number of scientists and other professional researchers are amassing evidence that, in all kinds of creatures, an innate impulse and ability to play - goof off a little, be curious, or just have what looks to even the most jaundiced human gatherer of data like having some aimless and enjoyable fun - has been favored by evolution down through the eons. It seems to be an attribute that somehow helps animals survive longer, and thus be more likely to pass on their fun-loving genes. And it's not only true of play among the members of the same species. Invitations to play - play signals - work also between some species, and not only between humans and domestic dogs with eager eyes and wagging tails. Take the remarkable case of the hungry polar bear and the vulnerable chained sled dog shown at the beginning of the Natures' Edge video below. As Brown explains, the bear's winter-long play deficit seemed to trump his wild hunger. According to those on the scene, the polar bear even hung around and played with the chained sled dogs for a week, before finally ambling off toward his seal-hunting grounds to break his long winter's fast. The fact that non-verbal invitations to play are clearly understood between some species is an enticing clue for philosophers who are trying to describe animal intelligence and the nature of consciousness itself. There are now countless scientific studies documenting how different kinds of "tame" or (in the wild) "habituated" animals, including even such species as the supposedly vicious and reclusive North American wolverine, can develop rambunctiously playful relationships with humans. 
Such back-and-forth play between humans and all manner of pets usually includes joyful mutual activities such as light wrestling and hide-and-seek - activities that are "apparently purposeless but fun and pleasurable" and that are triggered by a variety of non-verbal "play signals." It suggests to some thinkers that different species, including humans, may share a common mental experience of self-awareness and awareness of others that is both highly intelligent (whatever that may mean) and fully conscious (whatever that may mean). And if you're a pet owner, it may, of course, be something you have felt in your own guts when responding to a play-invite from, say, a super-energetic ferret, or a bouncing mutt who's trying to get you to play a game of chase or tag that the two of you will probably make up as you go along in that creativity of exploration that is, says Brown, one of the essential hallmarks of play. 'Play Deficit' May Also Offer Insight Into Murderers and Even Terrorists The new study of play behavior may even, say researchers, help explain why some people become murderers and terrorists, and others don't. They report that the opposite of play - an unnatural lack of play activity, or even worse, the constant suppression of play by a parent or adult group or political leader - may increase the likelihood of violent behavior. Having a severe "play deficit" at any age is associated in a number of studies with a mild but chronic depression, and in some cases even with mass murder. As Brown recounts in the video below, Charles Whitman, who suddenly seemed to snap and then killed his wife and mother before climbing the tower at the University of Texas in Austin with a rifle and shooting many more people, suffered a severe suppression of natural play behavior and shared a similarly dismal "play history" with a number of other murderers.
Play Keeps Brains Healthy and Socially Adept In addition to exercising mental flexibility and encouraging the ability to seek out a variety of options - in humans and other animals - he says play also produces such life-nurturing rewards as a sense of community, a sense of belonging, plus a variety of mental attributes including empathy and optimism. Stuart Brown says he does some of his best thinking in the tree house study where he did the interview for this short Nature's Edge segment. Take a look: The arrival of the World Wide Web also means that we're now being constantly flooded with new examples from around the globe of play behavior among animals wild and otherwise. Here, for fun (of course) is just one of the latest. It's gone so viral that you may well have seen it - but if not, take a peek at this brief video of the snowboarding crow: Video: Crow "Snowboarding" on Roof apparently in Russia: We invite you to follow our weekly Nature's Edge Notebook on Facebook and on Twitter @BBlakemoreABC This is the sixth in a series on animal intelligence and the science of play behavior : 1- " Hunting With a Most Endangered Hunter - Dateline Botswana" At: http://abcn.ws/rI6Obx (Nature's Edge Notebook #11) 2- "Dogs Use Subway, Cat Takes Bus…" At: http://abcn.ws/zXPpDd (Nature's Edge Notebook #12) 3- "Who Needs Words? Crows? You? Wild Gorillas? Alison Krauss? …" At: http://abcn.ws/wM25PJ (Nature's Edge Notebook #13) 4- "How Would a Prairie Dog Describe You? Just Ask One!" At: http://abcn.ws/yIZ9it (Nature's Edge Notebook #14) 5- "Dolphins Reported Talking Whale in Their Sleep: Freud's 'Royal Road to the unconscious' may have surfaced at a pool in France." At: http://abcn.ws/yXpddT (Nature's Edge Notebook #15)
A Comparative Reference Grammar of Bosnian / Croatian / Serbian Danko Sipka This grammar is laid out as a comprehensive yet user-friendly reference for beginning to intermediate English-speaking learners. The basic grammar text gives rules in the form of decision-trees, tables and figures. Longer lists of exceptions are provided in the appendices. Most sections are divided into a structural description and a contrastive section. Features that contrast with English are elaborated in detail. Along with the sections containing grammatical information in a narrower sense (phonology with prosody, inflectional and lexical morphology, syntax), the grammar contains practical metagrammatical information such as that on the orthography and the use of pragmatic operators, which the learner will need in everyday communication. Main differences between standard and substandard grammatical forms are also provided. Three principal theoretical approaches deployed in the grammar include minimal information grammar, the application of decision theory in modeling language structures, and cognitive linguistics. They were used in the first stage of preparing the text, followed by the application of a user-friendly interface. An overarching goal of this grammar is to equip its users with heuristics that will enable a swift learning process and easy solutions to any problem in the use of the language. 2007, 740 pages, hardbound. Concise Bosnian-English / English-Bosnian Dictionary This is the only dictionary reflecting the daily use of language in modern-day Bosnia and Herzegovina, with phonetic pronunciation guides for both languages. This dictionary can be used by Bosnians, Serbs, and Croats who are learning the English language as well as English-speaking travelers or business people. The author is a native of Sarajevo. 8,500 entries. 331 pages. Paperback. Shipping weight 1 lb.
Dictionary of Serbian, Croatian, and Bosnian New Words Danko Sipka This dictionary of some 5,000 entries attempts to document new terms that have come into general usage in the 1990s, i.e., after the publication of the two most reliable existing sources, Morton Benson 1991 and Milan Drvodelic 1989. The dictionary includes terms from the three major forms of the language, Serbian, Croatian, and Bosnian Muslim. It is intended to serve advanced professional English-speaking Serbian linguists by assisting them in reading recent Serbo-Croatian texts, particularly texts of a technical or specialized character. 2002, 180 pages, hardbound.
Writing in this month's issue of the Australia and New Zealand Journal of Public Health, researchers Tim Lobstein and Mike Daube acknowledge that previous studies have shown a 'J-shaped' curve, indicating that a little alcohol might lower your risk of heart disease while a lot will certainly raise the risk. But they were concerned that the original data came from surveys undertaken over forty years ago, among populations who were much slimmer than they are now. 'We were concerned that the findings from a previous generation may not apply to our modern, fatter population,' said lead author Dr Tim Lobstein. 'So we revisited the data in the classic Framingham Heart Study, and examined the differences between slimmer and fatter men to see how the J-shaped curve held up. It held pretty well for slim men, but not for those with a higher Body Mass Index, above 27.5 kg/m2.' 'In effect, the standard advice about a small amount of alcohol being good for the heart doesn't stack up for overweight men,' he said. 'We will need to check other surveys and see if they show the same pattern, and we will need to check the data for women.' 'We know that apart from heart disease, other causes of disease are made worse by even small amounts of alcohol, including cancer, diabetes and stroke - the major chronic disease killers,' he added. 'For now, the advice has to be that there is no such thing as a beneficial level of consumption, especially if you are overweight.' "Alcohol: No cardio-protective benefit for overweight adults?" is published today in the Australia and New Zealand Journal of Public Health (2012), vol. 36 no. 6, page 582. A pre-publication proof is available here: http://www.iaso.org/site_media/uploads/ANZJPH_2012-6_-_p582_Lobstein_Letter.pdf Posted Mon 10 Dec 2012 12:00 Uploaded by Louisa Ells
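The 27.5 kg/m² cutoff in the study is just body mass index: weight divided by height squared. A minimal sketch (the function names are my own; only the threshold value comes from the letter):

```python
def bmi(weight_kg, height_m):
    """Body mass index in kg/m^2."""
    return weight_kg / height_m ** 2

def j_curve_held(weight_kg, height_m, cutoff=27.5):
    """Per the letter's finding, the J-shaped cardio benefit held for men
    only below this BMI cutoff (an illustrative classifier, not the study's code)."""
    return bmi(weight_kg, height_m) < cutoff

print(round(bmi(80, 1.80), 1))  # 24.7 -- below the cutoff
print(j_curve_held(95, 1.75))   # False: BMI is about 31, above 27.5
```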
Details about The Complete Idiot's Guide to Philosophy, 3rd Edition: Thousands of years of wisdom, in one updated guide. Socrates's admonition that "the unexamined life is not worth living" still resonates with many people, and this guide is a great introduction to that mental exercise. The author skillfully covers the subject both historically and topically and brings the reader all the way up to the present, with insights into 21st-century philosophical thought. • Essential philosophers and philosophies, from ancient times right up to today • New information on such topics as Eastern philosophy, women philosophers, postmodernism, and critical theory • The relevance of philosophy to a variety of other subjects and to today's world Rent The Complete Idiot's Guide to Philosophy, 3rd Edition today, or search our site for other textbooks by Jay Stevenson. Every textbook comes with a 21-day "Any Reason" guarantee. Published by DK Publishing, Inc.
Global warming deniers often suggest that the Intergovernmental Panel on Climate Change Fourth Assessment Report is a political document, and they're partially right — but not in the way that they might think. The report is conservative by nature, relying on studies that were largely published before 2005, and the picture it paints is far rosier than it should be. Over the last five years, study after peer-reviewed study has suggested that the Fourth Assessment Report is already out-of-date, and global warming is barreling along. So it's worthwhile to reconsider the science on this Blog Action Day. Luckily for me, I don't have to do the heavy lifting. Leading experts have made good on a promise to update the climate change science in advance of Copenhagen, and they're telling politicians that humanity is risking "abrupt and irreversible climatic shifts" from the accelerating pace of global warming. Rising global surface and ocean temperatures, surging sea levels, extreme weather events, and the retreat of Arctic sea ice* are all coming harder and faster than research suggested five or 10 years ago. The takeaway message is that politicians had better find a way to work together at the next international climate summit in December — or shortly after — or the results will be devastating. The 36-page document summarizes more than 1,400 studies presented at an emergency climate conference held last March in Copenhagen. The report said that greenhouse gas emissions are growing faster than expected, and evidence accumulates that the planet itself is becoming a factor. Some carbon sinks like the oceans and Canadian boreal forests are diminishing, and many places in the far north show signs of liberating methane into the atmosphere. "Rapid, sustained, and effective mitigation … is required to avoid 'dangerous climate change' regardless of how it is defined," according to the report.
“Temperature rises above 2°C will be difficult for contemporary societies to cope with, and are likely to cause major societal and environmental disruptions through the rest of the century and beyond.” And a business-as-usual approach will take us well beyond the 2°C threshold in less than 50 years. The report suggests that deep emission cuts are essential, and the sooner the better. “Weaker targets for 2020 increase the risk of serious impacts, including the crossing of tipping points” beyond which irreversible natural forces could push temperatures to unthinkable levels. First Nations in North America have a wonderful — and pointed — expression: We don’t inherit the Earth from our ancestors, we borrow it from our children. I hope our leaders are listening. * Among the predictions made, and summarized in this 36-page document: The coming decade will be the warmest ever, and summer Arctic sea ice will largely melt by 2020. Droughts will intensify, and hurricanes will become ever more potent. By 2100, we should expect that sea levels will rise by between 5 and 7 feet; that more than 70 percent of the Amazon rainforest will die (if it isn’t already cut down); that 50 to 70 percent of species will go extinct; that agriculture will fail in California; that the American Southwest will be turned into a permanent dust bowl; and that a few billion people in Asia will be left with no water for life. And that’s but a sampling of dozens of apocalyptic predictions.
http://www.triplepundit.com/2009/10/climate-inaction-is-inexcusable/
EU-UNU Tokyo Global Forum

With approximately one-sixth of humanity living in absolute poverty, the impetus for development has never been more urgent. While many developing countries, such as China, Brazil and India, have recorded impressive gains over the past several years, much of the developing world faces less rosy prospects.

Doing More, Better, and Faster: A Global Partnership for Eradicating Poverty

September's World Summit gave international leaders and others a chance to reappraise the achievements of the UN's Millennium Development Goals after half a decade. While progress has been tangible, attaining the goals by 2015 will require the redoubled efforts of all developed and developing countries working in partnership. Recognizing the centrality of development in and of itself, as well as its importance to global security, human rights, and environmental protection, the EU has played, and continues to play, a leading role. In 2004, the EU provided over one-half of the world's official development assistance, and has further pledged to increase aid over the next decade with the aim of reaching the MDG objective of 0.7% of GNI by 2015. Japan is also a major contributor of ODA, providing $8.9 billion in net disbursements, three times the DAC average. Japan has focused much of its ODA on Asia and the Pacific, where providing economic infrastructure has helped nearly 200 million people out of poverty. The EU-UNU Tokyo Global Forum will provide an opportunity for EU, UN and Japanese policymakers and practitioners to exchange views. The sixth in a series of forums organized by the Delegation of the European Commission and the United Nations University, this exchange will both reflect the close co-operation that exists between these two partners and spark further topics for collaboration, as well as co-operation with Japanese partners.

1. Development and Trade

Trade can play a critical role in development and the eradication of poverty.
Recognizing this important, albeit not direct, link, the EU passed its "Everything But Arms" Initiative, which grants products (excepting arms and munitions) from the 50 least developed countries duty- and quota-free access to the vast European Union market. The UN's Millennium Development Goals (MDGs) have stressed the importance of creating a "Global Partnership for Development" between poor and wealthy nations. The 2005 Report on the MDGs shows that substantive progress has been made in the realm of international trade between the developed and developing worlds, as nearly two-thirds of products produced in the developing world now enter the developed world duty-free. Japan has been active in promoting Asia-Africa trade within the Tokyo International Conference on African Development (TICAD) framework. Japan has also been active in facilitating business exchanges with Africa and promoting product development. How might the EU, the UN and Japan build on what has already been achieved to encourage the developing world to play its part in the ascent out of poverty, by creating an attractive business environment through stable and transparent regulatory, legal, judicial, and institutional reforms? How can the WTO continue to contribute to this movement in the wake of the Hong Kong Ministerial Meeting?

2. Development, Democracy and Human Rights

Numerous issues stand at the intersection of development, democracy, and human rights. In the words of the 2000 UN Human Development Report, "A decent standard of living, adequate nutrition, health care, education, decent work, and protection against calamities are not just development goals - they are also human rights." The EU likewise incorporates human rights provisions into co-operation agreements, such as the Cotonou Agreement with the African, Caribbean, and Pacific states (ACP). Japan's ODA Charter also states that "full attention should be paid to efforts for promoting democratization."
Indeed, embedding development into a human rights framework can help raise living standards across the globe. As dire poverty frequently impedes the realization of human rights, state signatories to international treaties may be called upon to provide for the gradual enrichment of their people: from basic subsistence and universal education, to workers' rights and social development. What mechanisms are available to make sure development reaches all who need it?

3. Development and Security

The UN's In Larger Freedom campaign aims to reaffirm, in UN Secretary-General Kofi Annan's words, "the three great purposes of this Organization: development, security and human rights." Likewise, speaking in the context of the EU's strong commitment to African development, European Commission President Jose Manuel Barroso has noted that "there is no real development without security." In today's interconnected world, security threats can no longer be viewed as purely local phenomena. Development efforts need to focus on stabilizing post-conflict regions and preventing terrorism. How can foreign aid and other forms of assistance be effectively channeled to respond to these threats?

4. Development and Environment

While development and environmental protection may appear to be at loggerheads, the EU and UN have made several attempts to harmonize the two. A recent study by the UN and World Bank demonstrates that linking aid to environmental protection may effectively reduce poverty. Similarly, the EU has consistently promoted development policies that aim to preserve the environment. On many global environmental protection issues, such as climate change, it has received first-rate support from Japan. This panel fleshes out ways in which development can proceed in an environmentally sustainable manner. What kinds of legislation can developing countries enact to ensure environmentally friendly use of their natural resources?
What role can supranational organizations, multinational corporations, and individuals play in ensuring adherence to an environmentally friendly model of growth?
http://archive.unu.edu/p&g/eu/2006/background06.html
Yeast - A Treatise - Section II

Factors Affecting Fermentation

Given the essentially anaerobic environment that exists in dough once the available oxygen is used, one would expect the primary physiological activity of yeast to be that of fermentation. However, the organism also undergoes some growth and cell multiplication during the fermentative process. For example, a test dough with a yeast content of 1.67%, based on flour, and fermented at 80°F (27°C), demonstrates no significant increase in yeast-cell count during the first two hours of fermentation, the actual rise in cell numbers being on the order of 0.003%. The most vigorous yeast growth is observed during the period between the second and fourth hours of fermentation, when the yeast-cell count increases by 26%. Between the fourth and sixth hours, the rate of yeast multiplication declines again to about 9%, based on the original cell count. Other findings indicate that the smaller the original quantity of yeast in the dough, the greater the percentage increase in cell numbers during fermentation, all other conditions being held constant. Thus, a 0.5% yeast addition to a test dough produced an 88% increase in cell count after 6 hr of fermentation, while with a 2% original yeast level the corresponding increase in cell numbers was only 29%. This is not surprising, given that at the lower yeast level the competition for nutrients is far less than at the higher yeast levels. Each yeast cell thus has access, or at least the opportunity for access, to a greater food supply during fermentation. Another study found that yeast growth in a sponge fermented for 4 hr was 56%, with only an additional 1% growth by the end of the proof period. The original yeast level of 2.25% was thus increased to 3.55% in the course of the entire fermentation. In a liquid preferment made with 3% yeast, the cell count increased by only 1% in the preferment, but by 15% in the dough.
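As a rough arithmetic check on the sponge figures above, successive percentage increases compound on the running cell count. A minimal sketch, assuming the reported growth percentages compound (the function name is my own):

```python
def final_yeast_level(initial_pct, growth_pcts):
    """Apply successive percentage increases (each relative to the
    running level) to an initial yeast level, expressed as a percent
    of flour weight."""
    level = initial_pct
    for g in growth_pcts:
        level *= 1 + g / 100
    return level

# Sponge from the text: 2.25% yeast, 56% growth in the sponge and
# ~1% more by the end of proof -> roughly the reported 3.55%.
print(round(final_yeast_level(2.25, [56, 1]), 2))
```

Whether the final 1% compounds on the sponge-stage count or on the original count changes the answer only in the third decimal place, which is why the text can round to 3.55%.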
This reduced growth rate of yeast in liquid ferments accounts for the general practice of using higher original yeast levels in these doughs. Not all work in this area is in agreement with the specific findings described above: Carlin, and later Reed, reported essentially the same observation of no increase in the yeast population over a 4-hr fermentation period. While it is rather difficult to determine the actual number of cells in a dough, it is relatively easy to establish the percentage of yeast cells that have buds. Compressed yeast will normally contain about 2 to 5% budding cells, and this number increases to about 30 to 50% by the end of the sponge fermentation, with no additional increase during dough fermentation. This increase in bud formation by the yeast cells is basically a sign of incipient yeast growth. In the case of straight doughs, there is very little budding of yeast cells during the first three rises, but a substantial increase to about 40% during the proof period. No increase in the number of yeast cells was observed in liquid flour preferments, while budding was found in only about 18% of the cells after 3.5 hr of fermentation. When yeast is first added to the sponge or dough, it is still in a relatively dormant state. A number of studies have shown that yeast requires about 45 min in a favorable environment to attain full adaptation to fermentation, although it begins to evolve carbon dioxide and ethanol in a much shorter time. During this period of adaptation, yeast exhibits a high degree of sensitivity to both favorable and unfavorable environmental influences. Adaptation is somewhat more readily accomplished in sponge-dough than in straight-dough systems. In sponges, in which the critical yeast adaptation takes place, such yeast-inhibitory ingredients as salt and high sugar levels are normally withheld to enhance fermentation.
No such amelioration of the environment for yeast is possible with straight doughs, so that in this system the adaptive stage of fermentation represents a more critical phase. The common practice with flour-containing preferments of withholding the salt and the bulk of the fermentable carbohydrates for an initial period, i.e., until after the yeast has fully adapted, is also intended to provide the yeast with an optimum growth-conducive environment. All other factors being equal, yeast adaptation is perceptibly promoted by a plentiful supply of moisture, e.g., in slack sponges and dilute preferments. Since water serves as the indispensable medium in which the metabolic processes of yeast take place, its relative abundance significantly accelerates the rate at which these processes occur. Stiff sponges and highly concentrated preferments are usually marked by delays in full yeast adaptation. Yeast exhibits a variable preference for different sugars. It readily assimilates four sugars, namely, sucrose (after hydrolysis to glucose and fructose by yeast invertase, or sucrase), glucose, fructose, and maltose (after hydrolysis to glucose by yeast maltase). In yeasted doughs, an increase in maltose occurs during the first stages of fermentation, until the initial supply of glucose and fructose is exhausted, after which the maltose content gradually declines. Studies of the preferential utilization of sugars by yeast are documented in the literature, but are beyond the scope of this discussion. Doughs prepared only from flour, water, yeast and salt will initially contain only about 0.5% of glucose and fructose derived from the flour. This is adequate to start fermentation and to activate the yeast's adaptive maltozymase system that is responsible for maltose fermentation. Fermentation is sustained by the action of the alpha- and beta-amylases of flour, which convert the susceptible damaged starch granules into maltose.
Damaged starch results from milling, and its level is normally much higher in hard wheat flours than in soft wheat flours. Quantitative calculations show that 1 g of yeast will ferment about 0.32 g of glucose per hour during a normal fermentation. When the disappearance of sugars was traced in a liquid preferment-dough system to which 8% of fermentable solids in the form of glucose and maltose had been added, the maltose content was observed to decrease somewhat in the liquid ferment, but then to increase in the dough stage as a result of amylolysis in the dough. The system used up about 3% of the fermentable carbohydrates, with the remaining 5% forming the residual sugars found in the finished bread. Since the second stage of fermentation involves the conversion of maltose into ethanol and carbon dioxide, the behavior of this sugar in the fermentation process is of some significance. This is especially the case since different yeast strains have been shown to vary in their maltase activity. Experimental results have shown that a yeast strain with low maltase activity needed 21 min longer to produce two rises in a dough than did another, high-maltase yeast. Yeast strains also differ in their maltase activity in different doughs. A single yeast strain may also exhibit variable maltase activity under different test conditions. These observations led to the hypothesis that some constituent of flour contributes in some manner to the yeast's ability to ferment maltose. The rate of maltose fermentation by yeast has also been shown to be influenced by pH to a much greater degree than is true of glucose fermentation. Dough fermentation, in addition to generating alcohol and carbon dioxide, also produces small amounts of a fairly large number of organic acids. These include lactic, acetic, succinic, propionic, fumaric, pyruvic, butyric, isobutyric, valeric, isovaleric and caproic acids.
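The 0.32 g-per-hour figure above lends itself to a quick back-of-the-envelope estimate. A minimal sketch; the 20 g / 3 hr inputs are illustrative values, not data from the text:

```python
def glucose_fermented_g(yeast_g, hours, rate_g_per_g_hr=0.32):
    """Estimate glucose consumed, using the treatise's rule of thumb
    that 1 g of yeast ferments about 0.32 g of glucose per hour."""
    return yeast_g * hours * rate_g_per_g_hr

# e.g. 20 g of compressed yeast over a 3-hr fermentation:
print(round(glucose_fermented_g(20, 3), 2))  # -> 19.2 (g of glucose)
```

Such an estimate only bounds demand; actual consumption also depends on how quickly the flour amylases can replenish fermentable sugar, as the surrounding text explains.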
Among these, the most abundant are acetic, propionic, butyric, isobutyric, valeric, isovaleric and caproic acids, with acetic acid the most prevalent by far. The production of acetic acid is much higher in breads made with a poolish or a naturally leavened dough than in those made with a straight dough. Calvel speculates that acetic acid acts as a carrier for bread crumb aroma, sensitizing the taster to the other constituents of the aroma. This effect seems to be directly linked to the amount of acetic acid in the dough. As maturation progresses and fermentation is prolonged, the dough becomes richer in organic acids. This formation of acids is reflected in a time-dependent decrease of pH and an increase in titratable acidity in the fermenting medium. Qualities such as aroma and keeping quality are enhanced as a result of the development of a lower-pH (more acidic) dough. The temperature of the dough is an important factor; Calvel demonstrates this in the graph shown here. (Graph 2) While the progressive pH change in naturally leavened dough is relatively rapid, as can be seen, the change appears to occur more slowly in dough leavened with baker's yeast. The presence of salt in dough often masks acetic acid. When the dough is leavened with an unsalted preferment, the acetic acid or vinegar odor appears a little more rapidly, although it is still hardly perceptible. The results of these evaluations of dough pH are influenced by the leavening method, and differ from one method to another. The pH is ultimately related to the level of residual sugars present in the dough before baking. Thus, a discussion of pH must, by default, include a discussion of residual sugars. These residual sugars are the remainder of those that fed dough fermentation. They fulfill important functions during the baking process.
The level at which they are present plays an important role in the quality of the final loaf of bread. Generally, a below-average pH coincides with a lack of residual sugars, which translates into a deficiency in oven spring, i.e., loaf volume, as well as in crust coloration and crust thickness, aroma, crust taste, crumb flavor, and keeping quality. When the dough is leavened with prefermented dough that has undergone an excess of maturation or fermentation, it is good practice to remedy the lack of residual sugar in advance by adding from 0.1% to 0.2% malt extract during mixing to reestablish the proper sugar balance. Excessive residual sugar may also occur, although this is rarer. If this phenomenon is caused by characteristics inherent in the flour, it is a difficult occurrence to correct. If excessive residual sugars occur as a result of the manner in which the dough was handled, i.e., an abnormally short first fermentation or a lack of proper dough maturation, it is more easily corrected. The presence of an appropriate amount of residual sugars in the dough at the time of baking is extremely important. It ensures an active oven spring, assists in dough development, and helps the loaves to reach a normal volume. Appropriate residual sugar levels contribute to optimal crust color, which in turn, according to Professor Calvel, contributes to the exterior appearance, the aroma and the flavor of the bread. Lactic acid, not mentioned by Calvel but cited as a prevalent acid in white bread by Pyler, also survives at some level in the finished bread. The accumulation of lactic acid in fermenting dough is attributable primarily to the presence of the genus Lactobacillus in both flour and compressed yeast. Of the two acids (lactic and acetic), acetic acid is normally found in smaller quantities. It is also weaker than lactic acid, with a lesser degree of ionization, and its effect upon the pH of the dough is correspondingly smaller.
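The 0.1-0.2% malt-extract correction mentioned above is a baker's percentage, i.e., a fraction of flour weight. A minimal sketch; the 1 kg flour weight is illustrative:

```python
def malt_extract_g(flour_g, baker_pct):
    """Grams of malt extract for a given flour weight, where
    baker_pct is a baker's percentage (percent of flour weight)."""
    return flour_g * baker_pct / 100

# The text's 0.1-0.2% range, for 1 kg of flour:
print(malt_extract_g(1000, 0.1), malt_extract_g(1000, 0.2))  # -> 1.0 2.0
```

At one to two grams per kilogram of flour, a scale with sub-gram resolution is effectively required for this correction.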
In sourdough breads ("San Francisco sourdough"), acetic acid represents about 50% of the total acids found, at five to ten times the level found in white (non-sourdough) breads. The pH of fermenting dough is more strongly affected by the presence of ammonium salts in yeast foods, especially if the ammonia is present as the salt of a strong acid such as hydrochloric or sulfuric acid. Yeast readily assimilates ammonia as a nitrogen source.

Yeast Tolerance to Acidity

Yeast exhibits a considerable tolerance to extremes of pH, being able to maintain an active fermentation in a 5% glucose solution in the pH range of 2.4 to 7.4, but ceasing activity at pH 2.0 or pH 8.0. For optimum results, good practice dictates that the pH of the fermenting medium be maintained within the range of about 4.0 to 6.0. A drop of more than 50% in fermentative activity has been observed at pH 3.5. More gradual declines in yeast activity are encountered at higher pH levels, with measurable effects showing up at pH values over 6.0. The explanation for the yeast's ability to maintain a relatively constant activity over a 100-fold change in hydrogen ion concentration (pH 4 to 6) is found in the fact that the pH of the cell interior of the yeast remains quite constant at about pH 5.8, regardless of relatively wide pH variations in the fermenting medium. The enzymes involved in fermentation thus operate in an optimum pH environment within the yeast cell that is largely unaffected by external changes in pH. A prerequisite to a controlled fermentation is a fully hydrated, homogeneous dough, such as is obtained by correct mixing. The surface appearance of a sponge as fermentation progresses usually provides a reliable indication of the adequacy of its mixing. A properly mixed sponge will exhibit good gas retention that will make it rise and assume a well-rounded top. Retention of fermentation gases allows loaves to develop properly and results in a light, well-risen loaf after baking.
The surface of an undermixed sponge, on the other hand, will remain flat, which is indicative of an incomplete incorporation of the formula ingredients and an uneven fermentation. In straight doughs, mixing plays a much more critical role, as the aim here is to obtain optimal physical dough development. When a correctly mixed sponge or dough is fermented, two sets of forces come into play: gas production and gas retention. Gas production involves primarily the biological functioning of yeast on available fermentable carbohydrates, whereas gas retention is largely a measure of the mechanical and physicochemical modifications of the colloidal structure of the dough during mixing and during the course of fermentation. The baker must control fermentation in such a manner that the forces of gas production and gas retention are in proper balance. Thus, should gas production attain its maximum rate before the dough's gas retention capacity is fully developed, too much gas will be lost for maximum aeration of the dough to be achieved. On the other hand, if the gas retention capacity has peaked before gas production has reached its maximum rate, then again much of the gas is unable to perform its aerating function. Hence, the aim of fermentation control is to have gas production capacity and gas retention capacity coincide both as to rate and time. As Clark (in Pyler) has stated, "When both peaks are reached at the same time there frequently is combined in one loaf the largest volume together with the best grain, texture, crust color, and other loaf characteristics which the flour in question will produce." In the process of developing a bread dough, changes are brought about in the physical properties of the dough. In particular, the dough's ability to retain the carbon dioxide gas that will later be generated by yeast fermentation is improved in the process. This improvement in gas retention ability is particularly important when the dough pieces reach the oven.
In the early stages of baking, before the dough has set, yeast activity is at its greatest level, and large quantities of carbon dioxide gas are being generated and released from solution in the aqueous phase of the dough. The dough is only able to retain the gas formed if a gluten structure with the correct physical properties has been created. The baker must coordinate the timing of the development of the gluten structure with gas production. It does little good, for example, to develop a dough with high carbon dioxide release due to proper fermentation processes, but without the degree of extensibility necessary to provide good gas retention. Can one measure gas retention and gas production? The answer is "Yes - but…" Instrumentation exists that can measure both in the same dough at the same time. It is probably not available to the vast majority of home bakers, and perhaps not even to most commercial bakers. It is the Chopin Rheofermentometer. This is a new instrument that simultaneously measures gas production and gas retention under realistic conditions. A piece of dough is placed in a sealed chamber under a weighted piston. As the dough rises, piston movement is measured to determine the rate of expansion and the dough strength. At the same time, total gas production by yeast is measured along with the amount that escapes from the dough into the chamber. Subtracting the amount released from the total gives the amount retained. All of this is controlled by a microchip that calculates the results and produces a graph depicting "Development of the Dough" and "Gaseous Release". A retention coefficient is calculated by dividing the retained volume by the total volume. (Lallemand.)
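The retention-coefficient arithmetic just described is simple enough to sketch. The millilitre readings below are made-up illustrative values, not Rheofermentometer data from the text:

```python
def retention_coefficient(total_gas_ml, released_gas_ml):
    """Retention coefficient as described above: the volume of gas
    retained in the dough divided by the total volume produced."""
    retained_ml = total_gas_ml - released_gas_ml
    return retained_ml / total_gas_ml

# Hypothetical run: 1500 mL produced, 300 mL escaped into the chamber.
print(retention_coefficient(1500, 300))  # -> 0.8
```

A coefficient near 1.0 would indicate that nearly all of the gas produced stayed in the dough; a low value flags a dough that produces gas it cannot hold.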
Most of the desirable changes resulting from 'optimum' dough development, whatever the breadmaking process, are related to the ability of the dough to retain gas bubbles (air) and permit the uniform expansion of the dough piece under the influence of carbon dioxide gas from yeast fermentation during proof and baking. Gas production refers to the generation of carbon dioxide gas as a natural consequence of yeast fermentation. Provided the yeast cells in the dough remain viable (alive) and sufficient substrate (food) for the yeast is available, gas production will continue, but expansion of the dough can only occur if that carbon dioxide gas is retained in the dough. Not all of the gas generated during processing, proof and baking will be retained within the dough before it finally sets in the oven. What factors affect gas production and retention? The following would seem to be of interest to most home bakers.

High Temperature: This increases gas production and decreases gas retention. Low temperatures give strong doughs that rise slowly, while high temperatures give weak doughs that rise quickly.

Higher Water Absorption: This increases gas production and decreases gas retention. Diluted dissolved solids make yeast more active, but diluting the gluten reduces the strength of the dough.

Sugar: Gas production can be increased with sugar levels of about 5%, but is reduced at higher levels because of osmotic pressure.

Salt: Salt decreases gas production even more than sugar.

Fiber Content: Higher fiber or whole-grain content reduces gas retention and tolerance, because the increased fiber interferes with the gluten structure.

Most flours possessing adequate baking properties pass through a stage in the course of fermentation during which gas production and gas retention are in optimum balance. The time range over which this is true may properly be designated as the flour's fermentation tolerance.
Since fermentation is subject to many influences that affect its course, it is evident that one and the same flour may have rather limited fermentation tolerance under one set of conditions, and good tolerance under a different set of conditions. (See The Flour Treatise.) Sponge doughs generally are set to ferment at temperatures of 74 to 78°F (23 to 26°C), the selected temperature depending on the bread-making environment. It is usually more desirable to work with cool sponges and adequate levels of yeast. With approximately 2% of yeast, fermentation in a properly formulated sponge will normally proceed quite vigorously. Full maturation of the sponge will then be reached within 3 to 4.5 hr. Fermentation involves exothermic reactions that result in a temperature increase in the dough mass. The rise in sponge temperature should not exceed 10°F (6°C) over the entire fermentation. In actual practice, sponge fermentation times may vary from 2.5 to 6 hr and greater. Variations of relatively wide magnitude have only a nominal effect on final bread quality as long as the minimum fermentation time exceeds 3 hr. For determining the optimum length of time required by the sponge to reach proper maturity, the so-called "drop" or "break" represents a useful point of reference. Normally, a sponge will expand to about four to five times its original size and then recede in volume. This decrease in volume, referred to as the drop or break, is quite noticeable and is taken as the point from which the additional fermentation time is calculated. Depending on whether young or old sponges are desired, the drop is taken as representing the completion of 70 or 66% of the total sponge fermentation, respectively, and the sponge is then given the additional fermentation time. Generally, well-matured flours perform better with younger sponges, and in this case the post-drop time is reduced to 30% of the time required to reach the drop.
For example, if a sponge made from a fully matured flour required 3 hr to arrive at the break, it would then be permitted to stand for an additional 54 minutes. The total sponge fermentation time would thus be 3 hr and 54 minutes, or about 4 hr. The fully fermented sponge is then returned to the mixer and mixed into the final dough, which then receives additional fermentation for a relatively short time. The dough will be fully matured when it has developed shortness to a sharp pull and a rather dry feel to the touch. This stage is normally reached after a floor time of 20 to 45 min under average conditions. Warmer ambient temperatures reduce the floor time and may eliminate it altogether, while cooler temperatures tend to lengthen it. Straight doughs are normally set at slightly higher temperatures than are sponges, i.e., within a range of 77 to 79°F (25 to 26°C). The accelerating effect of the higher temperatures is desirable in this case, as straight doughs contain all of the dough ingredients, some of which, such as milk solids and salt, have a retarding effect on yeast action. Dough fermentation, as a rule, proceeds at a somewhat slower rate than does sponge fermentation; hence, straight doughs take longer to reach maturity than do sponges. However, the combined time of the sponge and the sponge-dough fermentations normally exceeds that of straight-dough fermentation alone. Straight doughs differ from sponges not only in their fermentation rates, but also in their handling during fermentation. The general practice is to leave sponges undisturbed until they are ready for the return to the mixer. In contrast to this, straight doughs receive periodic punching or turning, during which a good portion of the generated carbon dioxide gas is expelled, thereby reducing the dough volume.
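The sponge-timing arithmetic in the worked example above can be sketched as follows, assuming, as the example implies, that the post-drop rest is taken as a fraction of the time needed to reach the break:

```python
def sponge_times_min(time_to_break_min, post_drop_fraction=0.30):
    """Post-drop rest and total sponge fermentation time, with the
    rest taken as a fraction of the time needed to reach the break
    (30% for young sponges on well-matured flour, per the text)."""
    rest = time_to_break_min * post_drop_fraction
    return rest, time_to_break_min + rest

# Worked example from the text: break at 3 hr (180 min).
rest, total = sponge_times_min(180)
print(round(rest), round(total))  # -> 54 234  (i.e., ~4 hr total)
```

For "older" sponges the text implies a larger post-drop fraction, which the `post_drop_fraction` parameter accommodates.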
While the actual punching or vigorous kneading of the dough is still practiced in many bakeries, the recommended procedure is to turn and fold the sides of the dough gently but well into the center. Vigorous kneading, when well-matured flours are used, has a tendency to produce bucky doughs that will subsequently create difficulties in makeup. Folding the dough, on the other hand, avoids this problem. Moreover, this method of dough manipulation assures a more uniform fermentation by equalizing the temperature throughout the dough, minimizes a possible retarding effect from excessive carbon dioxide gas accumulation within the dough, introduces atmospheric oxygen with its stimulating effect on yeast activity, and increases the gas-retaining capacity of the dough by promoting the mechanical development of its gluten through the stretching and folding action involved in this process. This last effect appears to be of primary significance. Gas production is not constant during fermentation, but rises at first to its maximum rate and then declines. The increase in dough volume corresponds to gas production during the first hour of fermentation only. Thereafter, there is a marked decline in the rate at which dough volume increases. A dough that is permitted to go through fermentation without folding or punching will lose a considerable amount of carbon dioxide. However, if the dough is turned and folded at the right time, its gas retention properties are improved sufficiently to prevent a significant loss of gas. Under practical conditions, the rate of dough expansion is accelerated again after each punch or fold, and this has led to the conclusion that there is a corresponding increase in the fermentation rate. In fact, the beneficial effects of punching or folding result essentially from the improvement in the dough's gas retention properties.
The correct time at which the dough should first be turned is usually established by the simple expedient of inserting the hand into the dough, withdrawing it quickly, and observing the dough's behavior. If the dough reshapes itself, i.e., shows only a very slight recession or indentation, it is ready to be turned and folded. This point is usually taken as the 60% completion mark of the total fermentation time. The dough is then turned again after one-half this initial time, which thus represents another 30% of the total fermentation time. During the remaining 10%, the dough is sent to the divider. The above procedure is merely indicative of general practice and must be adapted to different conditions. For example, the quality of the flour plays an important role in determining actual fermentation times. Well-matured flours normally require a shorter fermentation and less frequent punching or folding than do so-called "green" or immature flours. The fermentation time may be shortened by the simple expedient of having the first punch represent either two-thirds or even three-fourths of the total fermentation, and omitting the second punch. This procedure will yield "young" doughs. "Old" doughs, on the other hand, are obtained by having the time to the first punch represent a lesser proportion of the total fermentation. The dough will then receive a series of periodic turnings or punches during the remainder of this period. This practice is normally followed with strong flours of high protein content, or with lower-grade flours of longer extraction. Such flours may need four or five punches; there is the risk, however, that this may give rise to bucky doughs. Slight overmixing of the doughs or increasing the absorption somewhat will ameliorate this condition. A slight increase in the dough temperature will also act to accelerate fermentation and reduce the total time.
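The standard 60/30/10 punch schedule described above can be sketched as a small helper. The function and its defaults are illustrative only; in practice the first punch is timed by the hand test, not by the clock:

```python
def punch_schedule(total_ferment_min, first_punch_frac=0.60):
    """Times (in minutes) for the standard punch schedule described above.

    First punch at 60% of the total fermentation time, second punch
    after half that interval again (another 30%), with the remaining
    10% elapsing before the dough goes to the divider.  For "young"
    doughs (first punch at 2/3 or 3/4 of the total) the second punch
    is simply omitted in practice, so this helper models only the
    standard schedule.
    """
    first = total_ferment_min * first_punch_frac
    second = first + total_ferment_min * first_punch_frac / 2
    return first, second, total_ferment_min

first, second, divider = punch_schedule(180)
print(first, second, divider)  # 108.0 162.0 180
```

For a 3 hr fermentation this gives punches at 108 and 162 minutes, leaving 18 minutes before dividing.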
Adjustments in Fermentation Time Optimum fermentation time represents that point at which the effects of interacting factors such as character of flour, yeast level, temperature, formula ingredients, degree of oxidation, etc., are in balance. Once practical experience has established the most suitable procedure for processing a given type of flour, it is generally closely adhered to in the interest of uniformity. Occasions may arise, however, when it becomes necessary to either shorten or extend the established fermentation time. To meet such exigencies, certain rules have evolved concerning changes in yeast quantity and temperature that work reasonably well, but should always be regarded only as temporary expedients. That is to say, any major deviation from an accepted procedure that has yielded good results will usually entail some loss of quality. Hence, while it is possible to shorten or lengthen the fermentation time by certain adjustments in yeast and temperature, the final product will usually not meet optimum quality standards. There is an inverse relation between the amount of yeast and fermentation time. Thus, a reduction in the amount of yeast will result in longer fermentation times, while an increase in the amount of yeast will shorten them. A generally accepted rule is that a 1°F (0.5°C) change in dough temperature will cause a 15-minute variation in straight-dough fermentation time. Hence, a dough that comes out of the mixer 1°F warmer than normal will require about 15 minutes less fermentation under average conditions, and vice versa. Here again, practical considerations impose limits on the extent to which fermentation time may be altered; about 45 min appears to be the maximum when no other changes are involved. Engineered & Changing Yeasts Can yeasts be improved? Most likely, work will continue on this question for as long as there are chemists and geneticists interested in yeasts.
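The two adjustment rules above (the 1°F ↔ 15 min temperature rule with its roughly 45 min practical cap, and the inverse yeast-time relation) can be sketched as follows. The simple proportionality used for the yeast adjustment is an assumption; the text states only that the relation is inverse:

```python
def adjusted_ferment_time(base_time_min, temp_delta_f,
                          min_per_degree=15, max_shift=45):
    """Rule of thumb from the text: each 1°F above normal dough
    temperature shortens straight-dough fermentation by about 15 min
    (and vice versa).  The shift is clamped to the ~45 min practical
    maximum the text mentions.  A sketch, not a substitute for bench
    judgment.
    """
    shift = temp_delta_f * min_per_degree
    shift = max(-max_shift, min(max_shift, shift))
    return base_time_min - shift

def adjusted_yeast(base_yeast_pct, base_time_min, target_time_min):
    """Inverse yeast/time relation, modeled here as a simple
    proportionality (an assumption; the text only says the relation
    is inverse)."""
    return base_yeast_pct * base_time_min / target_time_min

# A dough 2°F warmer than normal out of the mixer:
print(adjusted_ferment_time(240, 2))   # 210 min
# Halving the fermentation time would roughly double the yeast:
print(adjusted_yeast(2.0, 240, 120))   # 4.0 (%)
```

Note how a 5°F error would nominally call for a 75 min shift, which the clamp reduces to the 45 min practical maximum.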
One of the more interesting new research areas in this domain is the work on recombinant-DNA technology as it pertains to the development of newer yeast strains. This work has led to changes in formulation, ingredients, and processing conditions. Some of it has produced new strains of yeast that are more resistant to stress and produce more protein and more carbon dioxide. Among the goals of this work are improvements in shelf life, dough rheology, and flavor (Randez-Gil et al.). Cauvain and Young describe the changes in yeast before and after the 1960s. They found that in their bread-baking processes, the early yeasts peaked too soon, and the resultant oven spring was much less than desired. Later yeasts were able to provide the desired gassing power at the time needed in their baking. For example, at the 2% yeast level, pre-sixties gas production from yeast activity peaked at between 70 and 80 minutes, decreased between 90 and 100 minutes, then increased again until 200 minutes, finally ending at approximately 10 millimoles of carbon dioxide. By contrast, post-sixties yeasts used at the same 2% level provided a smooth increase in gas production along the entire time axis, peaking at 25 millimoles of gas at 140 minutes. This work was done in England and may not seem relevant to bakers in the United States. We do feel, however, that the information it provides is important for maintaining a more complete picture of the effects of yeast - as well as many other ingredients - on baking. Yeast produced for different needs may be single strain, hybrid, or mixed strains with different propagation profiles. According to Lallemand, North American yeast is optimized for a compromise between lean and sweet dough. Compared with bakers in other countries, US and Canadian bakers prefer compressed yeast that is light in color, dry to the touch, and friable (easy to crumble).
Artisan bakers in the US and Canada tend to process doughs at temperatures of 75 to 90°F, as do artisan bakers in Europe. There is widespread availability of compressed and instant active dry yeast in North America, with a trend toward increasing use of instant active dry yeast. Also, as presented by Lallemand, lean dough requires yeast with high maltase enzyme activity, because maltose sugar from flour is the primary energy source. Also important is the enzyme maltose permease, which transports maltose into the yeast cell. Once the maltose is in the yeast cell, the maltase is able to cleave the maltose molecule into two glucose molecules. Straight dough works best with fast yeast that adapts quickly to give good oven spring. Sponge and dough methods work best with slower yeast that retains sufficient activity for the final proof. Fast-strain yeast dosages customarily used for straight dough can be reduced for use in sponge and dough methods. Bibliography & World Wide Web Links Bibliography Calvel, R., Wurtz, R. L. & MacGuire, J. J., "The Taste of Bread", Aspen, MD, 2001 Cauvain, S. P. and Young, L., "Technology of Breadmaking", Blackie Academic & Professional, London, 1998 Corriher, S., "Yeast's Crucial Role in Breadbaking", Fine Cooking, #43, pp. 80-81, 2001 Giorilli, P., Lauri, S., "Il Pane: Un'arte una Tecnologia", Zanichelli, Franco Lucisano Editore, Milano, 1996 McGee, H., "On Food and Cooking", NY, Collins, 1988 Lallemand Baking Update, Vol 1, #3, "Yeast Characteristics" Lallemand Baking Update, Vol 1, #4, "Yeast Dosage" Lallemand Baking Update, Vol 1, #6, "Dry Yeast" Lallemand Baking Update, Vol 1, #9, "Yeast Production" Lallemand Baking Update, Vol 2, #9, "Instant Yeast" Personal Communication, Fleischmann Yeast Company Personal Communication, Lesaffre/Red Star Yeast Company Pyler, E. J., "Baking Science & Technology", Vols. 1 & 2, Sosland Pub., Kansas City, MO, 1988 Randez-Gil, F., Sanz, P., Prieto, J. A.
"Engineering Baker's Yeast: Room for Improvement", Tibtech, June 1999, Vol. 17, http://184.108.40.206/journal/sej/full/t0606-170608.pdf Rosada, Didier, "The Role of Fermentation in the Baking Process", Bread Lines, The Bread Bakers Guild of America, Vol. 6, Issue 2, Spring 1998, pp. 8-9 Schunemann, C. & True, G., "Baking: The Art and Science", Baker Tech, Alberta, Canada, 1984 World Wide Web Links Fleischmann: http://www.breadworld.com Lallemand: http://www.lallemand.com/BakerYeastNA/newsletter.shtm Lesaffre: http://www.lesaffre.com/default_eng.asp Red Star: http://www.redstaryeast.com
With below-average sea temperatures beginning to warm, La Niña peaked in mid-February. It's transitioning toward ENSO-neutral conditions during March through May 2012, according to National Oceanic and Atmospheric Administration (NOAA) officials. "The big debate right now is how much of a lingering La Niña footprint will remain through the spring planting season," says Drew Lerner, who has forecasted crop weather for 30 years as owner of World Weather, Inc. "There's a high probability that La Niña will still be with us through at least the first half of spring, with a footprint into the second half." Traditionally, the presence of La Niña or its footprint means an increased chance for above-average temperatures across the south-central and southeastern U.S., and below-average temperatures in the northwest for the March through April time period. Precipitation will be above average across the northern tier of states (except in the dry north-central U.S.) and in the Ohio and Tennessee valleys. Drier-than-average conditions are more likely across the southern U.S., according to the mid-February NOAA Weather and Crop Bulletin. Lerner says there's a tremendous amount of North America that is already dry – from Canada down across the northern Plains and Upper Midwest; also from the west through the southern Plains into the southeastern U.S. "It's not normal to miss out on precipitation during the winter months like we have, which depletes the moisture reserves in these areas," Lerner says. Not only is there very little snow cover across the Midwest; the snowpack in the Rocky Mountains has yet to reach an average year. "We're nowhere close to the flooding potential we witnessed last year on the Mississippi and Missouri rivers," Lerner says. "We may see some short-term minor flooding possibilities in the Ohio River basin and the Delta due to wetter soils, but we expect the crops to get planted in a fairly timely fashion.
"Currently, it appears we might be dependent on timely rains to produce good crops in the northern Plains and the western Corn Belt. There's no doubt the Upper Midwest will have a drier than normal spring because it is lacking moisture already," Lerner says. At the end of the fourth-warmest and 28th-driest January on record since 1895, NOAA reports 31% of the U.S. is under moderate to extreme drought, and 14% falls into the severely to extremely wet categories. (For details, see State of the Climate: www.ncdc.noaa.gov/sotc/drought/.) "I still have a drier bias in the forecast," Lerner says. "If we can totally lose La Niña with no lingering footprint left, which is a big 'if' right now, then there is potential for timely summer rains. As spring goes along and a high-pressure ridge builds in the central U.S., it should spread rainfall potential in the western Corn Belt." This weaker La Niña will bring some good news to the parched Texas, Oklahoma and the Delta areas in the form of good precipitation, according to Lerner. "These areas will see above-normal precipitation in March and April, which will slow planting, yet still allow the crops to be planted on a timely basis. The hydrologic drought (river/stream flows, reservoir levels, ground water table) will still prevail through the growing season, but the soil moisture will help get the crop off to a good start. The southern Plains will dry out in June and July, but nothing as serious as last year," he adds. Regarding heat, Lerner says any time you have a drier bias with a high-pressure ridge in the crosshairs for late spring, you run the risk of it getting pretty warm. "For the eastern Midwest, from Michigan and Indiana into Ohio and down through the Carolinas, we should see a fairly mild summer – not excessively hot. The farther west you go, the warmer it will be," he adds. We'll see some stress and some problems during the spring, but the odds are against any major problems, Lerner says.
"The really severe wind and hail events from last year won't be repeated this year, because that was the product of a very strong La Niña. We will still have some wild weather, but nothing like 2011." *Field drainage. Poorly drained soils can hinder the establishment of vigorous corn stands by challenging the uniformity of roots and plant development. Improving tile or surface drainage reduces the risks of ponding or soggy soils, denitrification and soil compaction. *Soil erosion control and soil moisture conservation. In areas of rolling hills with high risks of soil erosion and reduced ability to retain soil moisture, Bob Nielsen, Purdue University Extension agronomist, says it is important to minimize water runoff and maximize soil moisture retention. Some techniques include no-till or reduced tillage, strip-cropping, contour farming, terraces and other water control structures, and fall and winter cover crops. *Hybrid selection. "The key challenge is to identify hybrids that not only have good yield potential but that also tolerate a wide range of growing conditions," Nielsen says. "The best way to accomplish this is to evaluate hybrid performance across a lot of locations. University trials are good for this exercise." *Nitrogen management. Because the eastern Corn Belt has poorly drained soils, ample rainfall and the risk of nitrogen (N) loss by either denitrification or leaching, growers need to pay special attention to N management. According to Nielsen, best management practices include avoiding fall applications, avoiding surface application of urea-based fertilizers without incorporation and adopting sidedress N application programs where practical. *Disease management. Warm, humid summer weather conditions in the eastern Corn Belt are ideal for the development of many corn diseases, such as gray leaf spot and Northern corn leaf blight.
The best ways to manage these diseases are selecting hybrids for strong disease-resistance characteristics, avoiding continuous-corn cropping systems, avoiding no-till cropping systems and responsibly using foliar fungicides. Finally, Nielsen says, producers need to "remember it ain't rocket science." "We're talking about a lot of common-sense agronomic principles that work together to minimize the usual crop stresses that occur every year and allow the crop to better tolerate uncontrollable weather stresses," he says. Source: Bob Nielsen, Purdue University Extension agronomist.
The HMAS Goorangai was the first Royal Australian Navy vessel lost in World War II, and the first RAN surface vessel to be lost in wartime. The entire ship's complement, consisting of three officers and twenty-one sailors (twenty-four in total), was killed in the tragedy, with only six bodies recovered, of which five were identified. The Goorangai was built for the NSW Government at the State Dockyard, Newcastle in 1919. It was sold to Cam & Sons in 1926 and refitted as a fishing trawler. At the outbreak of war, the Goorangai was one of thirty-five privately owned vessels requisitioned by the RAN as auxiliary minesweepers. These vessels helped to fill the wide gap in the minesweeper department of the RAN. The largest number of minesweepers requisitioned from any one body was eight during 1939-1943, from Cam & Sons Pty Ltd of Pyrmont, NSW. The Goorangai was one of these vessels. Two of Cam & Sons' vessels were lost during the war, HMAS Goorangai having been lost in collision 20 November 1940, and HMAS Patricia Cam, sunk by enemy aircraft 22 January 1943. On 29 June 1943, the remaining six vessels were purchased by the Commonwealth. These vessels were crewed mainly from the reserve force. In the Goorangai's case, the RAN persuaded sixteen of its fishermen crew to sign up. A number were Scots who had come to Australia on delivery voyages aboard trawlers built in Scottish yards. By the time of the sinking all but two of the crew were Australians. The Scottish skipper of the Goorangai, David McGregor, was one of those signed up by the RAN. He was kept on as skipper and given the rank of Commissioned Warrant Officer. He was lost with the ship.
Following German mine laying operations off Wilson's Promontory and Cape Otway on the nights of 29, 30 and 31 October 1940, and the subsequent losses on the 7th and 8th of November of two ships, the steamer Cambridge at Wilson's Promontory and the freighter City Of Rayville at Cape Otway, the RAN ordered the minesweepers Goorangai, Orana and Durraween into Bass Strait to locate and destroy the minefields. At about 8.30 pm on 20 November HMAS Goorangai (223 tons), whilst steaming from Queenscliff to Portsea, was struck forward of the funnel by HMAT Duntroon (10,346 tons gross), which was leaving for Sydney loaded with troops. The Goorangai was cut in two and sank in less than a minute in the approaches to the South Channel. Wartime security prevented the Duntroon from heaving to or switching on searchlights to look for survivors. However, the Duntroon did lower lifeboats, fire rockets, and sound three blasts on the whistle to alert the residents of Queenscliff. When the lifeboat Queenscliffe reached the scene of the disaster the crew found the minesweeper sunk in about 15 metres of water with only the tops of the masts visible. Despite an extensive search only six bodies were recovered. Blasting operations in January 1941 reduced the remains of the Goorangai to large and small sections of steel plating which protrude from the sandy seabed. A small cylindrical boiler (2m x 3m) is lying on the northern end of the site. Broken machinery and boiler sections are scattered around the site, and occasionally wartime relics such as gas masks can be seen. The remains cover approximately 200 square metres of the seabed, and a considerable length of hull plating stands proud of the sand to a height of about two metres. The remains of the hull have been colonised by a diverse assemblage of colourful encrusting organisms, such as bryozoans, sponges and soft corals.
This in turn provides an ideal habitat for both free swimming and sedentary fauna, including many fish species, cuttlefish, sea horses, nudibranchs and starfish. The abundance and variety of marine life, in association with the shipwreck, makes the HMAS Goorangai a popular destination for sport divers.
(Epidermoid Cyst; Epidermal Inclusion Cyst; Epithelial Cyst; Keratin Cyst) An epidermal cyst is a type of slow-growing lump underneath the skin. This cyst contains soft, cheese-like skin contents. These usually appear on the face, neck, chest, upper back, genitals, or behind the ears. Similar cysts called pilar cysts often occur on the scalp. - Blockage of a hair follicle by skin cells—When an injury to the skin occurs, cells from the surface may block hair follicles located deeper within the skin. - Damage to a hair follicle due to acne - Blockage or defect of the sebaceous gland—This gland is near the hair follicle. It secretes oily material used to lubricate the skin and hair. Normal Skin Anatomy Acne and skin injuries increase your risk of developing an epidermal cyst. An epidermal cyst may cause: - Small, dome-shaped lump beneath the skin - Foul-smelling, cheese-like material draining from the cyst - Redness or tenderness on or around the cyst if it becomes inflamed You will be asked about your symptoms and medical history. A physical exam will be done. In most cases, the diagnosis can be made by looking at it. You may be referred to a dermatologist. This is a doctor who specializes in skin disorders. Some epidermal cysts do not need treatment. If needed, treatment options may include the following: - Surgical excision—The doctor removes the entire cyst, including its contents and cyst wall. - Surgical drainage—This involves cutting open the cyst, and draining the contents. The cyst might come back, though. - Antibiotics—These may be prescribed if the cyst has become infected. There is no way to prevent an epidermal cyst. If any of the cyst wall is left behind after drainage, the cyst may come back. If this happens, your doctor may decide to remove the cyst using surgery. American Academy of Dermatology Family Doctor–American Academy of Family Physicians Canadian Dermatology Association Cysts. DermNet NZ website.
Available at: http://dermnetnz.org/lesions/cysts.html. Updated February 22, 2014. Accessed September 2, 2015. Cysts—epidermoid and pilar. The British Association of Dermatologists website. Available at: http://www.bad.org.uk/for-the-public/patient-information-leaflets/cysts---epidermoid-and-pilar?q=Cysts - epidermoid and pilar. Accessed September 2, 2015. Luba MC, Bangs SA, Mohler AM, Stulberg DL. Common benign skin tumors. Am Fam Physician. 2003;67(4):729-738. Zuber TJ. Minimal excision technique for epidermal (sebaceous) cysts. Am Fam Physician. 2002;65(7):1409-1412. Last reviewed September 2015 by James Cornell, MD
A Fascinating Piece from the New York Times Detailing the Injuries of Female Athletes Michael Sokolove has published what will be a national conversation-starter in the New York Times magazine that will come out this weekend. In "The Uneven Playing Field," Sokolove details at tremendous length the high injury risks girls and women face in playing contact sports. I found the piece compelling, frightening, and reflective of common sense: girls are not built like guys, and thus when they play contact sports with tenacity and abandon, they will often face very serious injury. Here's what Sokolove has found as a recurring trend in women's sport-- "This casualty rate was not due to some random spike in South Florida. It is part of a national trend in the wake of Title IX and the explosion of sports participation among girls and young women. From travel teams up through some of the signature programs in women’s college sports, women are suffering injuries that take them off the field for weeks or seasons at a time, or sometimes forever. Girls and boys diverge in their physical abilities as they enter puberty and move through adolescence. Higher levels of testosterone allow boys to add muscle and, even without much effort on their part, get stronger. In turn, they become less flexible. Girls, as their estrogen levels increase, tend to add fat rather than muscle. They must train rigorously to get significantly stronger. The influence of estrogen makes girls’ ligaments lax, and they outperform boys in tests of overall body flexibility — a performance advantage in many sports, but also an injury risk when not accompanied by sufficient muscle to keep joints in stable, safe positions. Girls tend to run differently than boys — in a less-flexed, more-upright posture — which may put them at greater risk when changing directions and landing from jumps. Because of their wider hips, they are more likely to be knock-kneed — yet another suspected risk factor. 
This divergence between the sexes occurs just at the moment when we increasingly ask more of young athletes, especially if they show talent: play longer, play harder, play faster, play for higher stakes. And we ask this of boys and girls equally — unmindful of physical differences. The pressure to concentrate on a “best” sport before even entering middle school — and to play it year-round — is bad for all kids. They wear down the same muscle groups day after day. They have no time to rejuvenate, let alone get stronger. By playing constantly, they multiply their risks and simply give themselves too many opportunities to get hurt."Here are the rates at which girls seriously (very seriously) injure themselves compared to boys-- "If girls and young women ruptured their A.C.L.’s at just twice the rate of boys and young men, it would be notable. Three times the rate would be astounding. But some researchers believe that in sports that both sexes play, and with similar rules — soccer, basketball, volleyball — female athletes rupture their A.C.L.’s at rates as high as five times that of males. The N.C.A.A.’s Injury Surveillance System tracks injuries suffered by athletes at its member schools, calculating the frequency of certain injuries by the number of occurrences per 1,000 “athletic exposures” — practices and games. The rate for women’s soccer is 0.25 per 1,000, or 1 in 4,000, compared with 0.10 for male soccer players. The rate for women’s basketball is 0.24, more than three times the rate of 0.07 for the men. The A.C.L. injury rate for girls may be higher — perhaps much higher — than it is for college-age women because of a spike that seems to occur as girls hit puberty." Here are the inherent genetic differences between men and women-- “Women tend to be more erect and upright when they land, and they land harder,” he said. 
“They bend less through the knees and hips and the rest of their bodies, and they don’t absorb the impact of the landing in the same way that males do. I don’t want to sound horrible about it, but we can make a woman athlete run and jump more like a man.” Here are the ideals that get in the way of common sense wisdom, not to mention biblical principles-- "The bigger barrier, though, may be political. Advocates for women’s sports have had to keep a laser focus on one thing: making sure they have equal access to high-school and college sports. It’s hard to fight for equal rights while also broadcasting alarm about injuries that might suggest women are too delicate to play certain games or to play them at a high level of intensity. There are parallels in the workplace, where sex differences can easily be perceived as weakness. A woman must have maternity leave. She may ask for a quiet room to nurse her baby or pump breast milk and is the one more likely to press for on-site child care. In high-powered settings like law firms, she may be less likely, over time, to be willing to work 80 hours a week. She does not always conform to the model of the default employee: a man." This article, as is clear, is nothing less than an earthquake in the field of gender studies. It is part of Warrior Girls: Protecting Our Daughters Against the Injury Epidemic in Women’s Sports, which will be published in June. I would encourage all readers to order the book and read it. I am guessing that, while one may not agree with every point made in it, it will offer eloquent testimony to the simple principles of common sense and biblical wisdom. Common sense tells us that, despite what our egalitarian society may tell us on many levels, men and women are intrinsically, inherently, unalterably different. This is not in any way to say that one sex is better than the other. 
The two sexes are equipped for different tasks, and their bodies reflect this reality, whether postmoderns--or anyone else, for that matter--accept it or not. It is readily apparent that women's bodies are not made to withstand the same physical challenges that men's bodies can tolerate.
Methods employed at the Microbial Observatory: The Oceanic Microbial Observatory combines oceanographic microbial ecological methodology with molecular microbiological/ecological expertise to elucidate the linkages between phylogenetic diversity and large-scale biogeochemical processes. Conductivity, Temperature, Depth (CTD) rosette: The CTD profiling package is used to collect discrete water samples at specific depths throughout the water column (0 - 2000 m) of the research site. Each bottle collection is triggered to close individually from the bridge of the R.V. Weatherbird II. Shown on the left, students from the Marine Microbial Ecology Course are collecting samples from the CTD during a class cruise to the Northwestern Sargasso Sea aboard the Weatherbird II in July 2005. Picture provided by C. Carlson. Tangential Flow Filtering (TFF): In using TFF, large volumes of sample can be concentrated by passing them over a filter of specific size. Particulates of the size desired are retained in a loop system by exerting pressure to force excess liquid, along with particles smaller than those of interest, through the tangential flow filter and out of the system. In a circular fashion the retained sample loses volume until the particles (in this case bacteria or viruses) of interest are concentrated enough to be worked with in the lab. Craig Carlson and Bob Morris are shown size-fractionating dissolved organic carbon (DOC) via the TFF system aboard the R.V. Weatherbird II in June 2005. Fluorescence In Situ Hybridization (FISH): FISH is a molecular technique that is used at the Oceanic MO to identify and enumerate specific bacterial groups. FISH works by designing a DNA or RNA probe to detect the presence of the complementary DNA sequence specific to target groups of interest. The complementary RNA, in this case, is in a variable portion of the 16S ribosomal RNA which is present only in a distinct bacterial group.
See SAR11 clade dominates ocean surface bacterioplankton communities (Nature 420:806-810, 2002) for the use of FISH in enumerating SAR11 populations. Terminal Restriction Fragment Length Polymorphism (T-RFLP): Along with bulk nucleic acid hybridization analyses, T-RFLP is used to identify, characterize, and quantify spatial and temporal patterns in marine bacterioplankton communities at the study site. T-RFLP is a method of comparative community analysis based on the restriction endonuclease digestion of fluorescently end-labeled PCR products (in this study, the 16S rRNA gene). The digested products are separated by gel electrophoresis and detected on an automated sequence analyzer. The method provides distinct profiles (fingerprints) dependent on the species composition of the communities of the samples. Figure on left shows non-metric multidimensional scaling (NMS) of relative bacterial 16S rDNA terminal restriction fragments from monthly time-series samples collected at BATS between Feb and Sep (1992 and 2000).
<urn:uuid:09e3671c-1b58-43dc-aa78-8db6af94a1b0>
CC-MAIN-2016-26
https://serc.carleton.edu/microbelife/microbservatories/oligocean/methods.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399117.38/warc/CC-MAIN-20160624154959-00073-ip-10-164-35-72.ec2.internal.warc.gz
en
0.875473
654
2.71875
3
Report identifies best practices in closing early digital divide

Closing the digital divide has been a constant challenge as technology tools and their use become more prevalent in schools. Now, research shows that developing early technology skills can help close the digital divide. Though technology use has expanded in schools, students' at-home technology and internet access isn't necessarily reliable. According to a RAND Corporation report, Using Early Childhood Education to Bridge the Digital Divide, this means that children from families without access to digital technology "have fewer opportunities to learn, explore, and communicate digitally, and fewer chances to develop the workforce skills they will need to succeed in later life." Access to technology in early childhood education will help ensure that students begin to develop tech skills and learn how to use tools that they are likely to encounter once they enter elementary school.
<urn:uuid:41823787-2455-45af-9093-04606c65dcd3>
CC-MAIN-2016-26
http://www.eschoolnews.com/2014/04/18/early-education-technology-783/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396872.10/warc/CC-MAIN-20160624154956-00076-ip-10-164-35-72.ec2.internal.warc.gz
en
0.930378
187
3.625
4
Chapter 1 Flight

• Mafatu recalls a life-changing event that happened to him early in his childhood, an event that has shaped his fear of the sea.
• One day, Mafatu's mother took him out to the barrier reef to look for sea urchins; a storm was coming their way, and the other fishermen turned their canoes around and headed for the safety of dry land.
• Mafatu's mother decided to brave the elements, much to the concern of the other fishermen.
• They warned her against staying out in the water, but she paid them no attention.
• Finally, though, she decided to head back to shore, but found the way blocked by a current.
• The current took hold of their canoe, capsized it, and tossed them into the churning sea, where they remained for the whole of the night.
• They managed to survive the danger of the sharks circling around them...

This section contains 3,136 words (approx. 11 pages at 300 words per page)
<urn:uuid:ed0cb153-5a0e-49d6-9931-8d9585177b88>
CC-MAIN-2016-26
http://www.bookrags.com/lessonplan/call-it-courage/abstracts.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398516.82/warc/CC-MAIN-20160624154958-00026-ip-10-164-35-72.ec2.internal.warc.gz
en
0.982907
218
3.109375
3
The House of Lancaster

The Beaufort Family

The House of Beaufort descended from the illicit union of John of Gaunt, Duke of Lancaster (1340-1399), and his mistress Katherine Swynford; the family played an important role as staunch Lancastrian supporters during the fifteenth-century Wars of the Roses. The surname Beaufort derives from a castle owned by John of Gaunt in Champagne, now Montmorency-Beaufort. The family emblem was the portcullis, which was later adopted by the Tudor dynasty, their descendants. After the death of Gaunt's second wife Constanza of Castille in 1394, Gaunt married Katherine in 1396, making her Duchess of Lancaster. Their children were legitimised by King Richard II and by Papal Bull, but barred from inheriting the throne. John of Gaunt was the third surviving son of King Edward III and Philippa of Hainault. Katherine, who had previously been married to Hugh Swynford, was the daughter of Paon de Roet, a herald and later knight. She had a brother, Walter, and two sisters: Isabel, who became Canoness of the convent of St. Waudru's in Mons, and Philippa, a lady of Queen Philippa's household, who married the poet Geoffrey Chaucer. Katherine outlived John by four years, dying on 10 May 1403.

Prominent members of the Beaufort family include:

John Beaufort, 1st Earl of Somerset (1371-1410)

The eldest son of John of Gaunt and Katherine Swynford, John Beaufort served in the crusade of Louis II, Duke of Bourbon, in North Africa and later served with the Teutonic Knights in Lithuania. After his parents' marriage and his subsequent legitimisation, John was created Earl of Somerset. His cousin King Richard II further created him Marquess of Somerset and Dorset in reward for the part he played in helping Richard free himself from the power of the Lords Appellant.
Two days before his elevation to Marquess, he was married to the King's niece, Margaret Holland, the daughter of Thomas Holland, 2nd Earl of Kent, who was the son of Joan "the Fair Maid of Kent" (granddaughter of Edward I and widow of the Black Prince). The marriage produced six children: Henry, 2nd Earl of Somerset; John, 1st Duke of Somerset; Joan, who became Queen Consort of Scotland; Thomas, Count of Perche, who fought in Henry V's campaigns in France; Edmund, 2nd Duke of Somerset; and Margaret Beaufort, who became Countess of Devon. Following the deposition of King Richard II by John's half-brother Henry Bolingbroke, who in 1399 assumed the throne as King Henry IV, John's further titles were rescinded, though he remained Earl of Somerset. He nevertheless proved a loyal supporter of Henry IV and served in various military positions and on some important diplomatic missions on behalf of the king. He was granted the confiscated estates of the Welsh rebel leader Owain Glyndwr by Henry in 1400.

Henry, Cardinal Beaufort (1374-1447)

The second son of Gaunt and Katherine Swynford, Henry Beaufort was born in Anjou in about 1374 and was brought up for a career in the Church. On 14 July 1398 he was consecrated Bishop of Lincoln. After usurping the throne, his half-brother Henry IV made Henry Lord Chancellor of England in 1403. Henry resigned the chancellorship a year later when he was appointed Bishop of Winchester. He lost the favour of the king by siding with his nephew, Henry, Prince of Wales, against him. When the prince succeeded his father as Henry V, Henry was again appointed Chancellor. On the death of Henry V, Bishop Henry was appointed a member of the Regency Government which ruled during the minority of his young great-nephew Henry VI. Bishop Beaufort was elevated to Cardinal in 1426 by Pope Martin V.
In 1427 Pope Martin further appointed him Papal Legate for Germany, Hungary, and Bohemia, and instructed him to lead the fourth "crusade" against the Hussites in Bohemia. At the Battle of Tachov, fought on 4 August 1427 during the Hussite Wars, the Cardinal's forces were routed. When Joan of Arc was captured by the English toward the end of the Hundred Years War with France, Beaufort presided at her trial, at which she was sentenced to be burned at the stake. Cardinal Beaufort died on 11 April 1447 and was buried in Winchester Cathedral.

Joan Beaufort, Countess of Westmorland (1379-1440)

Joan, the only daughter of John of Gaunt and Katherine Swynford, was probably born at the Swynford manor of Kettlethorpe in Lincolnshire. At the age of twelve, she was married to Robert Ferrers, 5th Baron Boteler of Wem, at Beaufort-en-Vallée in Anjou. The couple produced two daughters before Ferrers died in around 1395. At the age of eighteen, the young widow was married for a second time to Ralph Neville, Earl of Westmorland, who had also been previously married. The marriage produced a brood of 16 children, among whom was Richard Neville, Earl of Salisbury (1400-1460), the eldest son of Joan and Ralph and a leading Yorkist supporter during the Wars of the Roses. Salisbury was killed at the Battle of Wakefield but became the ancestor of Anne Neville, Queen of Richard III, and of Queen Catherine Parr, the sixth wife of Henry VIII. Joan's daughter, Cecily Neville (1415-1495), known as the 'Rose of Raby', married the Yorkist claimant to the throne, Richard Plantagenet, Duke of York (1411-1460), and became the mother of the Yorkist Kings Edward IV and Richard III. Joan's numerous descendants also included Warwick 'the Kingmaker', the Dukes of Norfolk, the Dukes of Buckingham and the Percy Earls of Northumberland. Joan Beaufort died on 13 November 1440 at Howden in Yorkshire.
She chose to be buried next to her mother, Katherine Swynford, rather than her husband, in the sanctuary of Lincoln Cathedral. The tomb was damaged in 1644 by Roundheads during the English Civil War.

John Beaufort, 1st Duke of Somerset (1404-1444)

The second son of John Beaufort, 1st Earl of Somerset, and Margaret Holland, John succeeded his elder brother Henry as Earl of Somerset in 1418. He fought in the French wars of Henry V at the age of seventeen. In 1421 Thomas, Duke of Clarence, Henry V's younger brother, was dispatched to fight against the Dauphin of France in Anjou. Thomas advanced rashly against the French with his vanguard and, being surprised as he crossed a marsh, was killed. Somerset, who had accompanied him, was taken prisoner by the French. After being ransomed by the English, he continued fighting in France under King Henry VI. In 1443 John was created Duke of Somerset and Earl of Kendal, made a Knight of the Garter, and appointed Captain-General of Aquitaine and Normandy. He married Margaret Beauchamp of Bletso, daughter of Sir John Beauchamp, in 1439. The marriage produced one child, Lady Margaret Beaufort (1443-1509), who in turn became the mother of King Henry VII, the founder of the Tudor dynasty. John proved to be a poor military commander, and Richard, Duke of York was preferred to him as regent of France. Somerset returned to England depressed by the situation in 1443 and died the following year; his death was rumoured to be suicide, as he was unable to brook the shame and disgrace of banishment from court.

Joan Beaufort, Queen of Scotland (1404-1445)

The daughter of John Beaufort, 1st Earl of Somerset, and Margaret Holland, Joan was married to James I, King of Scots, on 12 February 1424 at St Mary Overie Church in Southwark. James I of Scotland had been a prisoner in England since the age of 12 and was held captive for 18 years. He met the lovely Joan Beaufort at the English court and is reported to have fallen in love with her.
James was inspired by his love for Joan to write 'The Kingis Quair', an allegorical romantic poem. James was eventually released and returned to Scotland in 1424, accompanied by Joan. The marriage produced eight children, including James II, King of Scots, known as 'James of the Fiery Face' due to a disfiguring red birthmark on his face. On 20 February 1437, conspirators supporting the rival claim to the Scottish throne of Walter, Earl of Atholl, secretly entered the Dominican Friary in Perth where James and his Queen were staying. James attempted an escape by climbing into a sewer that exited onto the tennis court; however, the outlet had been blocked the previous day to prevent the loss of tennis balls, and the king was trapped and killed by his attackers. Joan, though injured in an attempt to defend her husband, managed to escape and fled to the safe refuge of Stirling Castle with her young son, now James II. Joan was appointed Regent of Scotland for her son until her second marriage in 1439 to James Stewart, the Black Knight. She died on 15 July 1445 while under siege at Dunbar Castle, and was buried in the Carthusian Priory at Perth.

Edmund Beaufort, 2nd Duke of Somerset (1406-1455)

The third surviving son of John Beaufort, 1st Earl of Somerset, and Margaret Holland, Edmund married Eleanor Beauchamp, daughter of Richard Beauchamp, 13th Earl of Warwick, sometime between 1431 and 1435. They had ten children: Eleanor Beaufort; Elizabeth Beaufort (c. 1434 - before 1472); Henry Beaufort, 2nd Duke of Somerset (1436-1464); Margaret Beaufort, Countess of Stafford (c. 1437-1474); Edmund Beaufort, 3rd Duke of Somerset (c. 1439 - 4 May 1471); Anne Beaufort (c. 1453 - c. 1496); John, Earl of Dorset (c. 1455 - 4 May 1471); Joan (c. 1447 - 11 August 1518); Thomas (c. 1450 - c. 1463); and Mary Beaufort (c. 1441). Edmund was involved in an intense personal rivalry with his cousin Richard, Duke of York.
Henry VI created him Earl of Dorset in 1442 and Marquess of Dorset the following year. From 1444 to 1449 he served as Lieutenant of France, and in March 1448 he was further honoured by being created Duke of Somerset. Much to York's chagrin, Somerset was appointed to replace him as commander in France in 1448. Fighting broke out in Normandy in August 1449, and Somerset's subsequent military failures left him vulnerable to criticism from York and his allies. He failed utterly to repulse French attacks, and by the summer of 1450 nearly all of Henry V's conquests in northern France were lost. By 1453, all the English possessions in the south of France were also lost. In 1453, at the age of 32, Henry VI began to exhibit signs of serious mental illness. Following a "sudden fright", he entered a trance-like state, reacting to and recognising no one. Catatonic schizophrenia has been suggested as a likely diagnosis. York was named Lord Protector of the Realm and imprisoned Somerset in the Tower of London. The king recovered his senses late in 1454, when York was forced to surrender his office. Richard of York, who possessed a claim to the throne arguably superior to Henry's own (through his descent from Lionel, Duke of Clarence, the second surviving son of Edward III), was determined to remove Somerset by one means or another. In May 1455 he confronted Somerset and the King at the First Battle of St Albans, which marked the beginning of the Wars of the Roses. Somerset was killed in a last wild charge from the house where he had been sheltering. His son and successor, Henry Beaufort, the third Duke, bent on revenge, was never to forgive Warwick and York for the part they played in his father's untimely death.

Henry Beaufort, 3rd Duke of Somerset (1436-1464)

Henry Beaufort, the eldest son of Edmund Beaufort, 2nd Duke of Somerset, and Lady Eleanor Beauchamp, had fought alongside his father at the Lancastrian defeat at the First Battle of St.
Albans in 1455, where he was seriously wounded and his father was slain. He was the principal commander at the Lancastrian victories at the Battle of Wakefield in 1460, in which both York and his brother-in-law and chief ally Salisbury were killed, and at the Second Battle of St Albans in 1461. Following the disastrous Lancastrian defeat at the Battle of Towton in 1461, he fled to seek refuge in Scotland. Somerset garrisoned several Northumberland castles, but after surrendering to the Yorkists at the end of a siege, he indicated willingness to make peace with the Yorkist King Edward. Edward agreed to pardon Somerset and restored his forfeited lands and titles. Soon after, however, he travelled north again and began to raise troops for the Lancastrian cause. He managed to hold out in the north of England until May 1464, when he was defeated at the Battle of Hexham. He was taken prisoner in a barn at the site of what is now known as Dukes House, and beheaded by the Yorkists the same day.

Edmund Beaufort, 4th Duke of Somerset

Edmund Beaufort was the second son of Edmund Beaufort, 2nd Duke of Somerset, and his wife, Eleanor Beauchamp. In 1470, the exiled Lancastrian Queen Margaret of Anjou was reconciled with her erstwhile enemy Warwick the Kingmaker, who had been alienated by Edward IV's policies and his marriage to the commoner Elizabeth Woodville. These unlikely allies united in an attempt to restore her husband Henry VI to the throne. Their alliance was cemented by the marriage of Margaret's son, the Lancastrian Prince of Wales, to Warwick's younger daughter, Anne Neville. Warwick invaded England and briefly reinstated Henry as king before meeting his end in combat with Edward IV at the Battle of Barnet in 1471. Beaufort, along with Queen Margaret and her son Edward, Prince of Wales, had landed in England the day before Barnet was fought, only to learn the disastrous news of Warwick's defeat.
They fled toward Wales to obtain aid from Jasper Tudor, Earl of Pembroke, the half-brother of Henry VI. They were intercepted by a Yorkist army and forced to give battle at Tewkesbury on 4 May 1471. Somerset was given command of the Lancastrian right wing and led a charge against the Yorkist left under William, Lord Hastings, which went unsupported by either the Earl of Devon or Lord Wenlock. This resulted in Somerset's force being driven back and suffering heavy losses. The enraged Somerset galloped back to the Lancastrian lines and rode up to the aged Wenlock, a former Yorkist who commanded the centre, furiously demanding to know why he had failed to provide support. Without waiting for a reply, he split Wenlock's skull with a battleaxe in his rage. Queen Margaret's son, the eighteen-year-old Edward, Prince of Wales, the last legitimate descendant of the House of Lancaster, was killed either in battle or during its aftermath. After the inevitable rout of the Lancastrian army, Somerset and other Lancastrian leaders claimed sanctuary at Tewkesbury Abbey. Two days later, Somerset and the other leaders were dragged out of the Abbey and, after perfunctory trials, were ordered to be put to death by Richard, Duke of Gloucester and the Duke of Norfolk, Constable of England. Edmund Beaufort, along with his younger brother John Beaufort, Marquess of Dorset, and Prince Edward of Lancaster, who had both fallen in the battle, was buried at the Abbey. With the deaths of Edmund and John, the House of Beaufort became extinct in the male line; however, their elder brother, Henry Beaufort, 3rd Duke of Somerset, left an illegitimate son, Charles Somerset, 1st Earl of Worcester (c. 1460 - 15 March 1526), who became the ancestor of the Dukes of Beaufort.
<urn:uuid:72c0e588-17e3-4ab0-b261-844f4a264389>
CC-MAIN-2016-26
http://www.englishmonarchs.co.uk/plantagenet_47.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783402479.21/warc/CC-MAIN-20160624155002-00056-ip-10-164-35-72.ec2.internal.warc.gz
en
0.981529
3,473
3.0625
3
Atlanta - A new report released today documents the explosive growth of smoke-free air laws worldwide and finds that more than 200 million people are now protected by laws that require smoke-free air in workplaces and public places. The report concludes that the momentum behind smoke-free air laws will continue accelerating because a global treaty ratified by 146 nations requires governments to protect the public from secondhand smoke. The report, Global Voices for a Smokefree World, was released today by the Global Smokefree Partnership, a new multi-partner initiative formed to promote effective smoke-free air policies worldwide, to coincide with the World Health Organization's World No Tobacco Day, which focuses attention this year on the importance of smoke-free air. 'Great progress is being made globally in protecting people from the harms of secondhand smoke. Whole countries are going smoke-free one after another, and the laws are proving to be a huge success,' said John R. Seffrin, Ph.D., chief executive officer of the American Cancer Society. 'Countries like Ireland and Uruguay are leading the way, and we're seeing real action now.' The report shows that nine countries have laws that require smoke-free air in workplaces, including all restaurants, bars and pubs: Ireland, Uruguay, New Zealand, Bermuda, Iran, Scotland, Wales and Northern Ireland. England's law takes effect July 1. France, Italy, South Africa and Hong Kong have implemented smoke-free laws covering most workplaces. In the United States, nine states, Puerto Rico and Washington, D.C. have implemented laws covering all workplaces, including restaurants and bars, and three more states have passed laws that will take effect over the next year. Smoke-free air laws outside the United States are getting a boost from the world's first public health treaty, the Framework Convention on Tobacco Control (FCTC). The FCTC is now law in 146 countries, representing more than three quarters of the world's population.
Under the treaty, governments are required to protect people from secondhand smoke in indoor workplaces and public places. The treaty's governing body is expected to adopt guidelines at its meeting in July that will clarify that governments must implement strong smoke-free laws to meet their treaty obligations. The U.S. is one of the few countries that has not ratified the FCTC. The U.S. signed the treaty in 2004, indicating its general support, but the President has not taken the next step of forwarding the FCTC to the Senate for consideration. The American Cancer Society is a strong supporter of the FCTC and encourages swift U.S. ratification. Tobacco use is the world's leading cause of preventable death, claiming the lives of an estimated 4.9 million people each year. This death toll is expected to rise sharply to 10 million deaths a year by 2020, due to the rapid growth in smoking rates in low-income nations. The WHO estimates that more than 650 million people alive today, including 250 million children, will die premature deaths because of tobacco use. In the United States, 440,000 people die each year from tobacco-related illnesses, and nearly one-third of all cancer deaths are attributable to tobacco use.
<urn:uuid:da21d1fc-87ff-4818-b753-a4c9a840688c>
CC-MAIN-2016-26
http://www.bio-medicine.org/medicine-news/Growth-of-Smoke-Free-Air-Laws-Worldwide-3B-More-Than-200-Million-Found--22Fully-Protected-22-21159-1/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396959.83/warc/CC-MAIN-20160624154956-00107-ip-10-164-35-72.ec2.internal.warc.gz
en
0.934287
813
2.546875
3
In medieval academia, quodlibet ("any question whatever") was a day when a professor (typically of theology) would open his class and answer questions on absolutely any topic. The general public was welcome to come in and pose a question - any question, as long as it was framed in a yes-no format - which the professor would be required to answer. Questions ranged from the riddle-culous ("Were there rainbows before the Deluge?") to the sublime ("Is God good if he allows great evils to occur?"). Some theologians refused to do quodlibets, while others loved them. Transcripts of quodlibets by Thomas Aquinas and William of Ockham are still in publication. Later, building on the notion of a free-for-all that could range across many modes and moods, the word came to be applied to a type of musical composition in which varied tunes and fragments were cleverly grafted to each other to create a musicological joke, riddle, or amusing pastiche. J.S. Bach was the first (known) composer to employ the device, in his Goldberg Variations.
<urn:uuid:f4fce61f-54e7-4e6d-ae3c-41bf5521a5b2>
CC-MAIN-2016-26
http://everything2.com/title/Quodlibet
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783394987.40/warc/CC-MAIN-20160624154954-00098-ip-10-164-35-72.ec2.internal.warc.gz
en
0.971994
237
3.53125
4
A polygon is a two-dimensional figure, meaning all of the lines in the figure are contained within one plane. Polygons are classified by the number of angles, which is also the number of sides. One key point to note is that a polygon must have at least three sides. Normally, three- to ten-sided figures are referred to by the names below, while figures with eleven or more sides are called n-gons, where n is the number of sides. Hence a forty-sided polygon is called a 40-gon.

Triangle - a polygon with three angles and sides.
Quadrilateral - a polygon with four angles and sides.
Pentagon - a polygon with five angles and sides.
Hexagon - a polygon with six angles and sides.
Heptagon - a polygon with seven angles and sides.
Octagon - a polygon with eight angles and sides.
Nonagon - a polygon with nine angles and sides.
Decagon - a polygon with ten angles and sides.

Polygons are also classified as convex or concave. A convex polygon has all interior angles less than 180 degrees; thus all triangles are convex. If a polygon has at least one interior angle greater than 180 degrees, it is concave. An easy way to tell if a polygon is concave is to check whether any side can be extended so that it crosses the interior of the polygon. Concave polygons can be divided into several convex polygons by drawing diagonals. Regular polygons are polygons in which all sides and angles are congruent.
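The convex/concave distinction can also be checked computationally: a polygon is convex exactly when every turn between consecutive edges goes the same way. A small Python sketch (the function name and example polygons are our own, for illustration):

```python
def is_convex(vertices):
    """Return True if the polygon with the given vertices is convex.

    vertices: list of (x, y) points in order (clockwise or counter-clockwise).
    A polygon is convex when the cross products of all pairs of consecutive
    edges share the same sign; a sign flip means a reflex (>180 degree) angle.
    """
    n = len(vertices)
    if n < 3:
        raise ValueError("a polygon needs at least three vertices")
    sign = 0
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        x3, y3 = vertices[(i + 2) % n]
        cross = (x2 - x1) * (y3 - y2) - (y2 - y1) * (x3 - x2)
        if cross != 0:
            if sign == 0:
                sign = 1 if cross > 0 else -1
            elif (cross > 0) != (sign > 0):
                return False  # turn reverses direction: concave
    return True

square = [(0, 0), (2, 0), (2, 2), (0, 2)]
arrow = [(0, 0), (4, 0), (2, 1), (4, 2), (0, 2)]  # has one reflex angle
print(is_convex(square), is_convex(arrow))
```

The `arrow` example also illustrates the extended-side test from the text: extending its first edge past (4, 0) crosses the polygon's interior, so it is concave.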
<urn:uuid:8df19479-85f7-407f-884c-3b48f2dd1ca1>
CC-MAIN-2016-26
https://en.wikibooks.org/wiki/Geometry/Polygon
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783402516.86/warc/CC-MAIN-20160624155002-00170-ip-10-164-35-72.ec2.internal.warc.gz
en
0.926529
331
4.0625
4
In his preface to The Renaissance, Walter Pater begins with the assertion that beauty is not definable in the abstract but is instead relative. He states that beauty can, or should, only be defined specifically and personally; the viewer should come to an individual understanding influenced by their particular situation and viewpoint.

To define beauty, not in the most abstract but in the most concrete terms possible, to find not its universal formula, but the formula which expresses most adequately this or that special manifestation of it, is the aim of the true student of aesthetics.

Pater states later that the role of the critic is to reduce art to its elements in order to find the beauty and show it to others.

And the function of the aesthetic critic is to distinguish, to analyse, and separate from its adjuncts, the virtue by which a picture, a landscape, a fair personality in life or in a book, produces this special impression of beauty or pleasure, to indicate what the source of that impression is, and under what conditions it is experienced. His end is reached when he has disengaged that virtue, and noted it, as a chemist notes some natural element, for himself and others.

1. Can someone else hand us their interpretation of beauty, or can we only see the real beauty with our personal understanding?
2. How effective is it for us to see someone else's distillation of art? Is it really beauty if it is not interpreted by us?
3. Does Pater think we can see the true beauty in someone else's distillation of art? If not, why does he bother to give us his if it is not as satisfactory as coming to our own understanding?
4. What techniques does Pater use to successfully (or not) convey his distillation of beauty to his reader? Is the preface one of these techniques?

Last modified May 2003
<urn:uuid:ca0eb22d-db2c-4cce-86f7-877148a4c7b8>
CC-MAIN-2016-26
http://www.victorianweb.org/authors/pater/reynolds6.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397695.90/warc/CC-MAIN-20160624154957-00081-ip-10-164-35-72.ec2.internal.warc.gz
en
0.96602
380
2.75
3
GEO-5 the UN's Most Comprehensive Study of the Global Environment - Now Available in Spanish Thu, May 30, 2013 The United Nations Environment Programme (UNEP) has today launched a Spanish version of its flagship report, the Global Environment Outlook (GEO-5). Sustainability targets can be met in Latin America and the Caribbean with renewed commitment, and by scaling up successful policies Panama / Nairobi, 30 May 2013 - The United Nations Environment Programme (UNEP) has today launched a Spanish version of its flagship report, the Global Environment Outlook (GEO-5). GEO-5 assessed progress towards 90 of the most important environmental goals and objectives, and found that significant progress had only been made in four. These were: elimination in the production and use of substances that deplete the ozone layer, removal of lead from gasoline, increasing access to improved water supplies, and boosting research to reduce pollution of the marine environment. Some progress was made on 40 goals, including the expansion of protected areas and efforts to reduce deforestation. Little or no progress was detected for 24, including climate change, fish stocks, and desertification and drought. Further deterioration was noted for eight goals including the state of the world's coral reefs, while no assessment was made for 14 other goals due to a lack of data. The Latin America and Caribbean region is home to 23 per cent of the world's forests and 31 per cent of its freshwater resources. Yet unsustainable consumption and production patterns - primarily linked to agriculture and raw material extraction - are accelerating environmental degradation. Latin American and Caribbean countries share a number of common environmental challenges, including climate change, biodiversity loss, land management, degradation of coastal and marine zones, urbanization, poverty and inequality.
The region's growing population, already largely urbanized, poses a challenge to provide clean drinking water and sanitation in expanding towns and cities. Further challenges in the region, according to the GEO-5 report, are achieving a solid environmental governance framework, including management of natural capital, public participation, education and a culture of environmental awareness, and bridging the gap between science and policy. GEO-5 highlights a number of examples in Latin America and the Caribbean which are leading the way towards a low-carbon, resource-efficient green economy. The region's protected areas, for example, cover more than 500 million hectares in 4,400 different zones. They are considered to be one of the region's most important policy measures for conserving biological diversity. They are also supporting climate change adaptation and mitigation, and contributing to national GDP when properly managed. In Brazil and Colombia, initiatives to replace conventional transport networks with a bus rapid-transit system have yielded multiple environmental and social benefits, such as reduced carbon emissions, and improved mobility. The Mesoamerican Biological Corridor, established by eight Central American countries, acts as a pathway between large and important wildlife habitats. By promoting greater involvement for local residents, the corridor helps promote a greater sense of human well-being, while ensuring that the biological heritage of the region is protected. Achieving a more sustainable model of development in the region, says GEO-5, requires improved national and regional strategies that can address environmental and economic issues simultaneously. Improved governance, active community participation and a high level of inter-institutional cooperation are also needed. Such efforts are also crucial to address the most serious challenges faced in the region: poverty and inequality.
The GEO-5 Spanish and English versions are available at: http://www.unep.org/geo/geo5.asp

In addition, the following link contains some short policy briefs covering the following areas: (1) agriculture, (2) biodiversity, (3) governance, (4) water and (5) climate change vulnerability:

For more information, please contact:
Alejandro Laguna - Information Officer
United Nations Environment Programme - Regional Office for Latin America and the Caribbean
Borrowing and Regrouping Values in Subtraction

Do you remember the idea of carrying in addition? You took an extra value from one column and moved it to the next column. Borrowing or regrouping in subtraction flips that idea around so that you borrow a value from the next column to the left. Some of you will hear the word regrouping. Borrowing and regrouping are the same idea when you subtract. Sometimes you need a little bit extra in order to do your subtraction, so you use an amount from the column to the left.

In subtraction, you borrow when a digit of the subtrahend (the number being subtracted) is greater than the digit above it in the minuend. 35 - 2 would not need borrowing/regrouping. 32 - 5 would use borrowing/regrouping because you can't subtract 5 from 2 in the ones column. To be honest, we like the term borrow, but you need to say the word your teacher wants to hear.

4 - 2 = 2 (no borrowing/regrouping)
39 - 6 = 33 (no borrowing/regrouping)
32 - 5 = 27 (borrowing/regrouping needed)

When your problem is set up, you borrow from the column on the left. If you are subtracting in the ones column and you need to borrow, look to the tens column. If you are working the tens, borrow from the hundreds. It goes on like that. Look to the left when you need to borrow. Also, you only borrow a "1", and the digit you borrow from decreases by one. In the above example, you are borrowing an extra ten (10) for your subtraction problem.

Here's the breakdown. 32 - 5 = ?
(1) Subtract the ones column. Since you can't subtract 5 from 2, you need to borrow/regroup.
(2) Borrow "1" from the tens column. The "3" becomes a "2" because you took away one group of ten.
(3) Increase the "2" to "12" and try subtracting again. 12 - 5 = 7
(4) Complete the subtraction in the tens column. 2 - 0 = 2
Answer: 32 - 5 = 27

Always Moving to the Left

Let's start with a little reminder.
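The single-borrow walkthrough above can be sketched in code. Here is a minimal Python sketch (the function name and structure are illustrative, not part of the lesson):

```python
def subtract_with_borrow(minuend, subtrahend):
    """Subtract a one-digit number from a two-digit number,
    borrowing/regrouping from the tens column when needed."""
    tens, ones = divmod(minuend, 10)   # split 32 into 3 tens and 2 ones
    if ones < subtrahend:
        tens -= 1                      # the "3" becomes a "2"...
        ones += 10                     # ...and the "2" becomes a "12"
    return tens * 10 + (ones - subtrahend)

print(subtract_with_borrow(32, 5))   # borrowing needed: 12 - 5 = 7, so 27
print(subtract_with_borrow(39, 6))   # no borrowing: 33
```

Checking `ones < subtrahend` is exactly the "can't subtract 5 from 2" test from step (1).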
When you are subtracting, you will always move to the left. Always start with the smallest values. If you have a five-digit number such as 12,345 you will start subtracting values from the ones column first. Then you will move to the tens, hundreds, thousands, and ten thousands columns. If borrowing/regrouping is involved, you take a "1" from the column to the left and use it in the column you are working on. So, if you were subtracting numbers in the tens column and you needed to borrow a "1", you would remove "1" from the hundreds column.

Steps to Solve 922 - 599:
• Start with the ones column: 2 - 9 requires regrouping. Borrow one from the tens column.
• Now 12 - 9 = 3. Write the three in the ones column of your answer.
• Move to the tens column and you have the problem 1 - 9 (it's a 1 because you borrowed before). You need regrouping again, so borrow from the hundreds column.
• Now 11 - 9 = 2. Write the 2 in the tens column of your answer.
• Finally, the hundreds column: 8 - 5 = 3 (it's an 8 because you borrowed before).
We built our difference like this... --3, then -23, then 323

©copyright 2004-2013 Andrew Rader Studios, All rights reserved.
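The same column-by-column procedure works for numbers of any size. Here is a short Python sketch of the general algorithm (the names are illustrative), moving left from the ones column and carrying the borrow along:

```python
def column_subtract(minuend, subtrahend):
    """Column-by-column subtraction with borrowing/regrouping,
    starting from the ones column and moving left."""
    assert minuend >= subtrahend, "minuend must be the larger number"
    top = [int(d) for d in str(minuend)][::-1]      # ones column first
    bottom = [int(d) for d in str(subtrahend)][::-1]
    bottom += [0] * (len(top) - len(bottom))        # pad with leading zeros
    answer, borrow = [], 0
    for t, b in zip(top, bottom):
        t -= borrow                 # pay back a borrow from the last column
        if t < b:
            t += 10                 # borrow "1" from the next column left
            borrow = 1
        else:
            borrow = 0
        answer.append(t - b)
    digits = "".join(str(d) for d in reversed(answer)).lstrip("0")
    return int(digits or "0")

print(column_subtract(922, 599))    # 323, matching the steps above
```

Each pass through the loop is one bullet in the walkthrough: subtract the column, and if the top digit is too small, add ten to it and knock one off the next column to the left.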
[Image: Conrad Gessner's Walrus. 1558. Historia Animalium. http://biodiversitylibrary.org/page/42165842]

Conrad Gessner desired to reconcile ancient knowledge about the animal kingdom with the modern discoveries of the Renaissance. This endeavor spurred him to produce his magnificent Historia Animalium, a work synonymous with the beginning of modern zoology. This five-volume masterpiece covered the subjects of "live-bearing four footed animals" (mammals), "egg-laying quadrupeds" (crocodiles and lizards), birds, fish and sea creatures, and a fifth posthumous volume on snakes and scorpions. Compiling knowledge from Old Testament, Greek, Hebrew and Latin sources, Animalium boasts a rich collection of woodcut illustrations - something uncommon in other contemporary natural history publications. Gessner repurposed images from many famous researchers of his time, including Olaus Magnus, Guillaume Rondelet, Pierre Belon, Ulisse Aldrovandi, and Albrecht Dürer. Their existing images were carved into woodblocks by craftsmen, which were used to "stamp" the reproductions onto designated pages within the text blocks.

Though Gessner intended to produce an authoritative encyclopedia of scientific knowledge about the natural world, his five volumes do include mythical beasts and fancifully-rendered factual creatures. Many of the more exotic of the species he depicted were based on textual or second-hand accounts, explaining the sometimes substantial divergence from reality. Case in point: The Walrus.

[Image: Olaus Magnus' Walrus. 1555. Historia de Gentibus Septentrionalibus.]

Gessner voiced some hesitations about Magnus' representation of the Walrus. Writing that he believed Olaus based many of his creatures on sailors' accounts rather than life studies, Gessner reasoned that "fish don't have feet." Since common wisdom of the day, and even Olaus himself, grouped the Walrus with fish, it was an understandable concern.
Nevertheless, Olaus was a well-respected authority, with a good family lineage and a travel resume that had brought him further north than any of his intellectual European contemporaries. Thus, Gessner included two illustrations in his work, one closely resembling Olaus' beast and another more recognizable as the pinniped we know today.

[Image: Gessner Walrus, resembling Magnus' image. 1558. Historia Animalium. http://biodiversitylibrary.org/page/42165841]

All things considered, though we today may look at Gessner's Walrus and giggle, his was actually a much more accurate representation of the animal than many alternative sources in his time. Despite the factual deviations it may contain, "for an understanding of the history of zoology and a peek at some truly fascinating and five-hundred-year-old illustrations, there is no better historical guide than Conrad Gessner's Historia Animalium." (Ellis, pg. 2). And, come on, Gessner's Walrus is pretty adorable!

Outreach and Communication Manager | BHL

- Ellis, Richard. "The First Animal Book." Natural Histories. Ed. Thomas Baione. New York: Sterling Publishing, 2012. pg. 1-2.
- McKay, John. "The White Elephant of Rucheni." Scientific American Blog. Scientific American. Accessed 25 July 2014. http://blogs.scientificamerican.com/guest-blog/2011/11/22/the-white-elephant-of-rucheni/
- Scott, Michon. "Sea Monsters." Strange Science. Accessed 25 July 2014. http://www.strangescience.net/stsea2.htm
UH Mānoa researching Hawaiʻi's biofuel future

Imagine if fields of tropical grass in Waimānalo, which grows like a weed year-round, could be turned into electricity, jet fuel or even gasoline for automobiles. The University of Hawaiʻi at Mānoa's College of Tropical Agriculture and Human Resources (CTAHR) is in the process of determining if that dream could become a reality. The college is looking into locally produced, renewable energy that would end Hawaiʻi's severe dependence on foreign oil and serve as a model to the world.

"When people ask me is it economically viable at this point, I say we don't have the answers yet," said Professor Andrew Hashimoto of CTAHR. "That's why we do the research."

CTAHR and its project partners have been awarded a four-year, $6 million federal grant for the research. This grant is part of a $41 million investment for 13 projects nationwide. The goal is to spur innovation in bioenergy.

"There is a big emphasis on the sustainability," said Hashimoto. "How much input? What's the impact on carbon by these processes? What's the impact on the environment? What's the impact on the communities that sustain these processes? It's a very comprehensive project."

It starts with the fast-growing tropical grass and a lot of questions. "It's not so much finding the best crop but really which crops do the best in the different environments," said Hashimoto. "And it's not only yield but how much input is required like water, fertilizer, pest control and things like that."

Researchers are also looking into the harvesting, pre-processing and conversion of the grass, or biomass, into fuels like diesel and gasoline. A research reactor operated by the Hawaiʻi Natural Energy Institute (HNEI) converts the biomass into carbon monoxide and hydrogen that can be used to produce electricity or be converted into liquid fuel. It is one of many conversion methods being examined. No stone is being left unturned as UH researchers look into every aspect of biofuel and its future in Hawaiʻi.
“Getting off of petroleum is very important for us,” said Turn. “For energy security, for economic development and to help put some of our agriculture lands back into production.” “It’s really going to come down to the economics and basically, the sustainability,” said Hashimoto. “Are you benefiting the environment and the community as well as economically?” Those are among the questions the UH researchers hope to have answered in the next two to four years.
The Germans felt completely hoodwinked when they got to Versailles. They were. The complete responsibility for the war was laid at their feet. The vindictive and unjust instrument forced upon them was nothing short of evil. The Germans knew full well that it would lead to physical ruin, economic hardship, hopelessness and to the utter destruction of their culture. The groundwork was set for inevitable future conflict.

The German Delegates' Protest Against the Proposed "Peace" Terms: Leader of the German Peace Delegation Count von Brockdorff-Rantzau's Letter to Paris Peace Conference President Georges Clemenceau on the Subject of Peace Terms, May 1919

I have the honour to transmit to you herewith the observations of the German delegation on the draft treaty of peace. We came to Versailles in the expectation of receiving a peace proposal based on the agreed principles. We were firmly resolved to do everything in our power with a view of fulfilling the grave obligations which we had undertaken. We hoped for the peace of justice which had been promised to us. We were aghast when we read in documents the demands made upon us, the victorious violence of our enemies. The more deeply we penetrate into the spirit of this treaty, the more convinced we become of the impossibility of carrying it out. The exactions of this treaty are more than the German people can bear.

With a view to the re-establishment of the Polish State we must renounce indisputably German territory - nearly the whole of the Province of West Prussia, which is preponderantly German; of Pomerania; Danzig, which is German to the core; we must let that ancient Hanse town be transformed into a free State under Polish suzerainty. We must agree that East Prussia shall be amputated from the body of the State, condemned to a lingering death, and robbed of its northern portion, including Memel, which is purely German.
We must renounce Upper Silesia for the benefit of Poland and Czecho-Slovakia, although it has been in close political connection with Germany for more than 750 years, is instinct with German life, and forms the very foundation of industrial life throughout East Germany. Preponderantly German circles (Kreise) must be ceded to Belgium, without sufficient guarantees that the plebiscite, which is only to take place afterward, will be independent. The purely German district of the Saar must be detached from our empire, and the way must be paved for its subsequent annexation to France, although we owe her debts in coal only, not in men. For fifteen years Rhenish territory must be occupied, and after those fifteen years the Allies have power to refuse the restoration of the country; in the interval the Allies can take every measure to sever the economic and moral links with the mother country, and finally to misrepresent the wishes of the indigenous population.

Although the exaction of the cost of the war has been expressly renounced, yet Germany, thus cut in pieces and weakened, must declare herself ready in principle to bear all the war expenses of her enemies, which would exceed many times over the total amount of German State and private assets. Meanwhile her enemies demand, in excess of the agreed conditions, reparation for damage suffered by their civil population, and in this connection Germany must also go bail for her allies. The sum to be paid is to be fixed by our enemies unilaterally, and to admit of subsequent modification and increase. No limit is fixed, save the capacity of the German people for payment, determined not by their standard of life, but solely by their capacity to meet the demands of their enemies by their labour. The German people would thus be condemned to perpetual slave labour.

In spite of the exorbitant demands, the reconstruction of our economic life is at the same time rendered impossible. We must surrender our merchant fleet.
We are to renounce all foreign securities. We are to hand over to our enemies our property in all German enterprises abroad, even in the countries of our allies. Even after the conclusion of peace the enemy States are to have the right of confiscating all German property. No German trader in their countries will be protected from these war measures. We must completely renounce our colonies, and not even German missionaries shall have the right to follow their calling therein. We must thus renounce the realization of all our aims in the spheres of politics, economics, and ideas.

Even in internal affairs we are to give up the right to self-determination. The international Reparation Commission receives dictatorial powers over the whole life of our people in economic and cultural matters. Its authority extends far beyond that which the empire, the German Federal Council, and the Reichstag combined ever possessed within the territory of the empire. This commission has unlimited control over the economic life of the State, of communities, and of individuals. Further, the entire educational and sanitary system depends on it. It can keep the whole German people in mental thraldom. In order to increase the payments due by the thrall, the commission can hamper measures for the social protection of the German worker.

In other spheres also Germany's sovereignty is abolished. Her chief waterways are subjected to international administration; she must construct in her territory such canals and such railways as her enemies wish; she must agree to treaties the contents of which are unknown to her, to be concluded by her enemies with the new States on the east, even when they concern her own functions. The German people are excluded from the League of Nations, to which is entrusted all work of common interest to the world. Thus must a whole people sign the decree for its proscription, nay, its own death sentence.
Germany knows that she must make sacrifices in order to attain peace. Germany knows that she has, by agreement, undertaken to make these sacrifices, and will go in this matter to the utmost limits of her capacity.

1. Germany offers to proceed with her own disarmament in advance of all other peoples, in order to show that she will help to usher in the new era of the peace of justice. She gives up universal compulsory service and reduces her army to 100,000 men, except as regards temporary measures. She even renounces the warships which her enemies are still willing to leave in her hands. She stipulates, however, that she shall be admitted forthwith as a State with equal rights into the League of Nations. She stipulates that a genuine League of Nations shall come into being, embracing all peoples of goodwill, even her enemies of today. The League must be inspired by a feeling of responsibility toward mankind and have at its disposal a power to enforce its will sufficiently strong and trusty to protect the frontiers of its members.

2. In territorial questions Germany takes up her position unreservedly on the ground of the Wilson program. She renounces her sovereign right in Alsace-Lorraine, but wishes a free plebiscite to take place there. She gives up the greater part of the province of Posen, the district incontestably Polish in population, together with the capital. She is prepared to grant to Poland, under international guarantees, free and secure access to the sea by ceding free ports at Danzig, Konigsberg, and Memel, by an agreement regulating the navigation of the Vistula and by special railway conventions. Germany is prepared to insure the supply of coal for the economic needs of France, especially from the Saar region, until such time as the French mines are once more in working order. The preponderantly Danish districts of Schleswig will be given up to Denmark on the basis of a plebiscite.
Germany demands that the right of self-determination shall also be respected where the interests of the Germans in Austria and Bohemia are concerned. She is ready to subject all her colonies to administration by the community of the League of Nations, if she is recognized as its mandatory.

3. Germany is prepared to make payments incumbent on her in accordance with the agreed program of peace up to a maximum sum of 100,000,000,000 gold marks, 20,000,000,000 by May 1, 1926, and the balance (80,000,000,000) in annual payments, without interest. These payments shall in principle be equal to a fixed percentage of the German Imperial and State revenues. The annual payment shall approximate to the former peace budget. For the first ten years the annual payments shall not exceed 1,000,000,000 gold marks a year. The German taxpayer shall not be less heavily burdened than the taxpayer of the most heavily burdened State among those represented on the Reparation Commission. Germany presumes in this connection that she will not have to make any territorial sacrifices beyond those mentioned above and that she will recover her freedom of economic movement at home and abroad.

4. Germany is prepared to devote her entire economic strength to the service of the reconstruction. She wishes to cooperate effectively in the reconstruction of the devastated regions of Belgium and Northern France. To make good the loss in production of the destroyed mines of Northern France, up to 20,000,000 tons of coal will be delivered annually for the first five years, and up to 80,000,000 tons for the next five years. Germany will facilitate further deliveries of coal to France, Belgium, Italy, and Luxemburg. Germany is, moreover, prepared to make considerable deliveries of benzol, coal tar, and sulphate of ammonia, as well as dyestuffs and medicines.

5. Finally, Germany offers to put her entire merchant tonnage into a pool of the world's shipping, to place at the disposal of her enemies a part of her freight space as part payment of reparation and to build for them for a series of years in German yards an amount of tonnage exceeding their demands.

6. In order to replace the river boats destroyed in Belgium and Northern France, Germany offers river craft from her own resources.

7. Germany thinks that she sees an appropriate method for the prompt fulfilment of her obligation to make reparations in conceding participation in coal mines to insure deliveries of coal.

8. Germany, in accordance with the desires of the workers of the whole world, wishes to insure to them free and equal rights. She wishes to insure to them in the Treaty of Peace the right to take their own decisive part in the settlement of social policy and social protection.

9. The German delegation again makes its demand for a neutral inquiry into the responsibility for the war and culpable acts in its conduct. An impartial commission should have the right to investigate on its own responsibility the archives of all the belligerent countries and all the persons who took an important part in the war. Nothing short of confidence that the question of guilt will be examined dispassionately can leave the peoples lately at war with each other in the proper frame of mind for the formation of the League of Nations.

These are only the most important among the proposals which we have to make. As regards other great sacrifices, and also as regards the details, the delegation refers to the accompanying memorandum and the annex thereto. The time allowed us for the preparation of this memorandum was so short that it was impossible to treat all the questions exhaustively. A fruitful and illuminating negotiation could only take place by means of oral discussion. This treaty of peace is to be the greatest achievement of its kind in all history.
There is no precedent for the conduct of such comprehensive negotiations by an exchange of written notes only. The feeling of the peoples who have made such immense sacrifices makes them demand that their fate should be decided by an open, unreserved exchange of ideas on the principle: "Open covenants of peace, openly arrived at, after which there shall be no private international understandings of any kind, but diplomacy shall proceed always frankly and in the public view."

Germany is to put her signature to the treaty laid before her and to carry it out. Even in her need, justice for her is too sacred a thing to allow her to stoop to achieve conditions which she cannot undertake to carry out. Treaties of peace signed by the great powers have, it is true, in the history of the last decades, again and again proclaimed the right of the stronger. But each of these treaties of peace has been a factor in originating and prolonging the world war. Whenever in this war the victor has spoken to the vanquished, at Brest-Litovsk and Bucharest, his words were but the seeds of future discord.

The lofty aims which our adversaries first set before themselves in their conduct of the war, the new era of an assured peace of justice, demand a treaty instinct with a different spirit. Only the cooperation of all nations, a cooperation of hands and spirits, can build up a durable peace. We are under no delusions regarding the strength of the hatred and bitterness which this war has engendered, and yet the forces which are at work for a union of mankind are stronger now than ever they were before. The historic task of the Peace Conference of Versailles is to bring about this union.

Accept, Mr. President, the expression of my distinguished consideration.

German Protest 1919