Put the wood in the hole

Close the door.

This is a colloquial North of England expression of uncertain origin and date. The 'wood' is the door, and the expression is usually used when someone leaves a door open and lets cold air into a warm room. Television dramas have given the term a wider audience, but it is still largely confined to the North of England. Even there it is less used than before, probably due to the increased use of central heating, which means there is less cause to say it.
The former Duchy of Lothringen (Lorraine) became a French province in 1766. Later, the German-speaking area of Lorraine became part of Elsass-Lothringen, a German Reichsland, after the Franco-German war of 1870-1871, until 1918 when it reverted to France. Today, the region of Lorraine is divided into four départements (listed below). The region has its borders to the north with Belgium, Luxembourg and the German states of Saarland and Rhineland-Palatinate, to the east with the region of Alsace, to the south with the region of Franche-Comté and to the west with the region of Champagne.

- Before 1648: Since the 12th century, Lorraine was divided into many states, among which the Duchy of Lothringen, the Republic of Metz and the Bishoprics of Metz, Toul and Verdun were the most important. All these states belonged to the Holy Roman Empire. In 1648, according to the Treaty of Westphalia, Metz, Toul and Verdun became French cities. The Duchy of Lothringen, then surrounded by French territories, was repeatedly occupied by French troops. When Duke Stanislas Leszczynski died in 1766, the Duchy of Lorraine became a French province. After the French Revolution of 1789, France was divided into départements, and Lorraine was made up of the four départements of Meurthe, Meuse, Moselle and Vosges. Nancy, Verdun, Metz and Epinal became the capitals of these départements. At the time of the French defeat of 1871, the German-speaking parts of the départements of Meurthe and Moselle were merged to form one of the three districts of the Alsace-Lorraine Reichsland:
- Bezirk (district) of Lothringen, with capital Metz and eight Kreise (counties).

At this time, the remaining French-speaking parts of the départements of Meurthe and Moselle were linked together and became what is still the present département of Meurthe-et-Moselle.
- Meurthe-et-Moselle: département 54; capital: Nancy
- Meuse: département 55; capital: Bar-le-Duc
- Moselle: département 57; capital: Metz
- Vosges: département 88; capital: Epinal

Before 1802, five dioceses existed in Lorraine (Nancy and Saint-Dié were created in 1777). All of them belonged to the ecclesiastical province of Trier. The diocese of Toul was suppressed in 1802 and its parishes were shared out among the dioceses of Nancy, Verdun and Saint-Dié. Since then, these dioceses have belonged to the archdiocese of Besançon (today called the apostolical region of Besançon).

The communities of Lorraine were subordinate to the Parliaments of Metz, Nancy or Paris until the French Revolution. In 1900, during the Alsace-Lorraine period, the highest court was the Oberlandesgericht in Kolmar. The lower courts were:
- Landgericht Metz, with 12 Amtsgerichte
- Landgericht Saargemuend, with 11 Amtsgerichte

Associations and Societies

Genealogical and historical Societies

- Union des Cercles Généalogiques de Lorraine, B.P. 8, 54131 Saint Max Cedex, France, federates the following societies:
- Cercle Généalogique de Briey, 4 rue Emile Gentil, 54150 Briey
- Cercle Généalogique de Blénod-les-Pont-à-Mousson, 2 rue de la gendarmerie, B.P. 34, 54140 Jarville
- Cercle Généalogique de Charmes, 16 rue des Capucins, 88130 Charmes
- Cercle Lorrain d'Ile de France, 20 avenue de Vorges, 94300 Vincennes
- Cercle Généalogique de Liffol, 20 route d'Haréville, 88500 Liffol-le-Grand
- Cercle Généalogique de Liverdun, Rés. Toulair, 54460 Liverdun
- Cercle Généalogique de Longwy, 3 rue du 22 Août 1914, 54620 Baslieux
- Cercle Généalogique du Lunévillois, 5 rue Lahalle, 54300 Jolivet
- Cercle Généalogique de Meurthe et Moselle, 4 rue Emile Gentil, 54150 Briey
- Cercle Généalogique de la Meuse, B.P. 271, 55006 Bar-le-Duc Cedex
- Cercle Généalogique de la Moselle, 1 allée du Château, 57070 St-Julien-les-Metz
- Cercle Généalogique de Moselle-Est, Centre social D. Balavoine, 57800 Cocheren
- Cercle Généalogique de Nancy, 46 rue du Général Patton, 54410 Laneuveville devant Nancy
- Cercle Généalogique du Pays Messin, 1 allée du Château, 57070 St-Julien-les-Metz
- Cercle Généalogique du Pays de Nied, Foyer culturel, 57320 Filstroff
- Cercle Généalogique de Saint Avold, 2 rue Chapelle, 57500 Saint-Avold
- Cercle Généalogique de Saint Dié, 13 rue St Charles, 88100 Saint-Dié
- Cercle Généalogique de Sarrebourg, 29 rue de la Montagne, 57220 Boulay
- Cercle Généalogique de Thionville, 3 rue Mozart, 57330 Hettange-Grande
- Cercle Généalogique de Vittel, 187 quai Joffre, B.P. 128, 88804 Vittel Cedex
- Cercle Généalogique des Vosges, B.P. 128, 88804 Vittel Cedex

Historical Associations and Societies

- Société d'Histoire et d'Archéologie de Lorraine, Archives Départementales de Metz

Genealogical and historical Documents

Church registers are sometimes available from 1648, when the Thirty Years War ended. A few registers go back to 1600, but most of them begin before 1690. The region has always been predominantly Catholic, with only a few well-known Protestant strongholds such as Metz, Courcelles-Chaussy, Badonviller, Fenétrange, Ogéviller, Bayon, Neuviller, Phalsbourg, Lixheim and Saint-Mihiel. Priests were in charge of recording baptisms, marriages and burials until the French Revolution. Parish registers are usually available until 1792. All the Church records of the départements of Meurthe-et-Moselle, Meuse and Vosges were put on microfilms, which you can consult at the Family History Library. The département of Moselle was only partly microfilmed.

Civil Registration Records

Birth, marriage and death registers begin in 1793. Convenient indexes called tables décennales exist for each category of records, and each index always covers a period of 10 years. A yearly index, which appears after the records of each year, was usually made as well.
Like the Church registers, all the civil records and the indexes of the départements of Meurthe-et-Moselle, Meuse and Vosges are available on microfilms, which you can consult at the Family History Library. The département of Moselle was only partly microfilmed.

Census Records and Polling Lists

- A national census was taken in France every 5 years between 1836 and 1936. The 1871 census was deferred to 1872, and those of 1916 and 1941 were cancelled because of wartime. The following censuses occurred in 1946, 1954, 1962 and 1968. A census list of names usually displays the following information:
  - Family and given names
  - Age or year of birth
  - Position in household (since 1881)
- Since 1848, when the law establishing universal male suffrage in France was passed, polling lists have been useful to genealogists. They are available either in town halls or at the Départemental Archives. In France the vote was extended to women only in 1945.
- Options of Alsatians and Lorrainers: In 1871, many people desired to leave Alsace-Lorraine, and their names were recorded in these records of 523,000 persons, arranged in 395 alphabetical lists which the French government published in supplements to the Bulletin des Lois [Bulletin of Laws]. They list birth dates and places of birth, and some list destination. Family History Library microfilm numbers are 787154 (middle) to 787166. (Note: the last two films also give information on persons emigrating to the USA and Canada.)
- Options in hardcopy: These records have also been transcribed into book form. They are collected in 11 volumes organized by destination. See Publishers for information on availability from the Centre départemental d'Histoire des Familles.
- See also a number of more specialized books on emigration in the Bibliography section.

Conscription lists and personal notices are available at the Départemental Archives. The period covered runs from 1798 to the present day.
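Since the tables décennales run in fixed ten-year blocks from the 1793 start of civil registration, a researcher can compute which index to request for a given year. The sketch below is purely illustrative (the exact block boundaries can vary locally, and the function name is ours, not an archival convention):

```python
def table_decennale_span(year: int) -> tuple[int, int]:
    """Return the (first, last) year of the ten-year index covering `year`.

    Assumes consecutive ten-year blocks starting in 1793
    (1793-1802, 1803-1812, ...), matching the description above;
    individual communes may differ.
    """
    if year < 1793:
        raise ValueError("civil registration begins in 1793")
    start = 1793 + ((year - 1793) // 10) * 10
    return start, start + 9

# A birth in 1845 would be indexed in the 1843-1852 table.
print(table_decennale_span(1845))
```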
Most of the French military records are stored at the Service historique de l'armée. Written requests are not answered, but the center welcomes researchers who give notice of their visit a few days before arrival. Information about officers and other ranks can be retrieved from 1791 onwards, and sometimes even from 1715. A search of these records is a rather difficult and time-consuming task, but exploring this source is worth the trouble.

Notarial records usually begin before the Church records, and they are available at the Départemental Archives. The most frequent records found are:
- sales (often contain useful information)
- loans (lenders and borrowers are sometimes relatives)
- leases (usually of little genealogical interest)
- marriage settlements (give an approximate date of the marriage and list parents and other relatives; many are for marriages of widows or widowers)
- exchanges (often of land or buildings between co-heirs)
- wills (contain interesting information, but most are from the well-to-do)
- inventories after death (very interesting documents, with detailed information about the deceased, such as their possessions, their way of living and a list of other official records they made)

Note: the web page Genealogy in France by Denis Beauregard contains a nice summary of records available in France, which is partly applicable to Lorraine.

Bibliography and Literature

Note: listings in [brackets] indicate libraries holding the work.

- Brasseaux, Carl A., "Foreign French": Nineteenth-Century French Immigration into Louisiana, 1990-92, Lafayette, Louisiana: Center for Louisiana Studies, University of Southwestern Louisiana; vol. 1: 1820-1839; vol. 2: 1840-48.
- Castro, Lorenzo, Immigration from Alsace and Lorraine: A Brief Sketch of Castro's Colony in Western Texas, 1871, San Antonio.
- Chevalier, Tracy, The Virgin Blue, Penguin Books (historical novel of an American woman tracing her roots in 16th-century France).
- Putnam, Ruth, Alsace and Lorraine from Caesar to Kaiser: 58 B.C.-1871 A.D., 1915, New York and London: G.P. Putnam's Sons. Reprinted 1971, Freeport, NY: Books for Libraries.
- The Thirty Years War, Geoffrey Parker (ed.), 1984, 1991, New York and London: Routledge.
- Fouché, Nicole, "Un épisode du peuplement du Texas: Henri Castro et les émigrants alsaciens 1842-1856", Revue d'Alsace, vol. 114, fasc. 592 (1988), pp. 93-112.
- Laybourn, Norman, L'émigration des Alsaciens et des Lorrains du XVIIIe au XXe siècle, 1986, Strasbourg: Association des Publications près les Universités de Strasbourg. [26 libraries in the US, one in Canada, two in Japan, four in Europe]
- Valloton, Benjamin, "Alsaciens et Lorrains aux Etats-Unis d'Amérique", L'Alsace française, no. 21, 21 May 1927, pp. 401-422.
- Neu, Heinrich, "Elsässer und Lothringer als Ansiedler in Nordamerika", Jahrbuch der Elsass-Lothringischen Wissenschaftlichen Gesellschaft zu Strassburg, vol. 3, 1930, Heidelberg, pp. 98-129. [FHL microfilm 1071428, item 4; Southern Illinois University, Carbondale; University of Notre Dame; Indiana University; Harvard; University of Missouri, Columbia]
- "Auswanderung in Elsass-Lothringen in den Jahren 1871-1905", EMGV 1:182.
- University of Nancy II, Annales de l'Est, a quarterly of historical articles about artistic, intellectual and economic subjects in eastern France. A yearly booklet, Bibliographie lorraine, is also published; it lists every new book published about arts, history, archeology and literature in Lorraine.
- Académie nationale de Metz and University of Metz, Cahiers lorrains, a quarterly devoted to regional research.
- Société d'archéologie lorraine and Musée historique lorrain, Le Pays Lorrain, articles about literature, arts, history and popular traditions.
- Association pour l'étude et la sauvegarde de l'habitat rural, Villages lorrains, articles about patrimony and rural housing.
- Baxter, Angus, In Search of Your European Roots, Genealogical Publishing, Baltimore, 1985.
- Bernard, Gildas, Guide des recherches sur l'histoire des familles [Guide for Family History Research], Archives Nationales, Paris, 1981.
- Law, Hugh T., "Locating the Ancestral Home in Elsass-Lothringen", German Genealogical Digest, vol. VI, no. 3, 1990, $8.
- Marianne Doyle, French Ancestors (since 1987), 2923 Tara Trail, Beavercreek, OH 45434-6252, USA. Yearly subscription $8 (6 issues). French Ancestors is a bimonthly newsletter dedicated to a better understanding of the French heritage of western Ohio. Its purpose is to explore the historical, cultural and genealogical background of ancestors from northeast France (Alsace, Lorraine, Franche-Comté) and bordering areas who settled primarily in Darke and Shelby Counties in the mid-1800s.
- U.C.G.L., Généalogie Lorraine, a quarterly published by the local genealogical society for Lorraine.

Gazetteers, Atlases and Maps

- "Alsace-Lorraine: Atlantic Bridge to Germany": The first book to be completed in the new "Atlantic Bridge to Germany" series is "Alsace-Lorraine". It has been completely revamped with new maps (from the 1876-1898 era) showing the majority of the more than 5,600 places named in the book. Each place is identified by its German and French names, Kreis (county), Bezirk (government district), and the years for which records are available at the Family History Library. The first sixteen pages contain Alsace-Lorraine information dealing with history, geography, websites, books of interest, several maps showing historical divisions, and more. Compiled by Linda Herrick and Wendy Uncapher. 192 pages, paperback, 8.5" x 11". Cost $20.00 plus $3.00 shipping and handling for the first book and $1.00 for each additional book. WI residents add appropriate sales tax. Order from: Origins, 1521 E.
Racine St., Janesville, WI 53545 - Thode, Ernest, Genealogical Gazetteer of Alsace-Lorraine, 1986, Indianapolis, Heritage House (PO Box 39128, Indianapolis, IN, 46239, USA), 137 pp., maps. [Also available from Ernest Thode, RR 7, Box 306, Kern Road, Marietta, OH 45750-9437, USA. Cost $17.50 postpaid, $18.64 postpaid in Ohio (state sales tax)] The gazetteer (place-name dictionary) portion of this book is arranged alphabetically, with French, German, and a few Latin and English place-names, including rivers, streams, mountains, castles, and the like, all arranged in a single list. It assumes no special knowledge of French or German diacritical marks and symbols. Listings show some records availability -- town archives, Catholic records, Jewish records, LDS records, military vital records, Protestant records, university records, and civil vital records. - Les communes de l'Alsace-Lorraine. Répertoire alphabétique avec l'indication de la dépendance administrative. I. Nomenclature française avant 1871. II. Nomenclature allemande de 1871-1915. III. Nomenclature allemande de 1915-1918. [Communities of Alsace-Lorraine - Alphabetical directory with administrative dependency. - French placenames before 1871. German placenames from 1871 to 1915. German placenames from 1915 to 1918] re-printed and available from UCGL, B.P. 
8, F-54131 Saint-Max, France.
- Sittler, Lucien, L'Alsace: Terre d'histoire [Alsace: a historical land; contains very good maps about the territorial history of Alsace from the Middle Ages to modern times].
- Wolfram, Georg and Werner Gley, Elsass-Lothringischer Atlas, 1931, Frankfurt am Main: Selbstverlag des Elsass-Lothringen-Instituts, series: Veröffentlichungen des wissenschaftlichen Instituts der Elsass-Lothringer im Reich an der Universität Frankfurt [of especial interest for showing the distribution of Catholic and Protestant parishes].
- On-line Map of Alsace (21K) presented by Évariste.
- Road and tourist map of "Alsace-Lorraine", map #242 of the yellow series by Michelin, scale 1:200000.
- FHL microfilm 068814, Karte des Deutschen Reiches, scale 1:100000 (1 km = 1 cm); covers Germany during 1914-1917.

Map Publishers for the Lorraine region

- Editions Christian, 14 rue Littré, 75006 Paris, FRANCE
Microsoft Word is a popular word-processing program used for creating documents such as letters, brochures, learning activities, tests, quizzes and students' homework assignments. Microsoft Word offers many powerful features that can make learning easier for students with disabilities. The following tutorials have been designed by technology trainers and include selections from Microsoft's Classroom Corner.
• Tongue problems: A range of tongue problems can cause sucking difficulties, for example poor latching technique, tongue thrust, tongue tie, a short or long tongue, a retracted tongue, and tongue curling or sucking.
• Jaw clenching: Clenching of the jaw, often called gum-biting or clamping, can also cause problems.
• Sensory integration problems: These can cause "sensory overload" and make it difficult for a baby to concentrate on feeding alone. See our section on sensory integration issues.
• Oral aversion: Oral aversion is when a baby resists anything that touches the inside of their mouth. This is usually the result of procedures done during and after labor, such as forceful suctioning.
• Floppy baby: A floppy baby may have low muscle tone or may have suffered from a lack of oxygen during birth. These babies struggle to move at all and usually have a weak suck.
• Neonatal Abstinence Syndrome: Babies who were exposed to drugs in the womb may have problems sucking. These drugs include alcohol, marijuana, antidepressants, methadone and many others.

Signs and Symptoms of a Sucking Problem

• The mother has sore nipples that are obviously the result of the hard palate rubbing on the nipple, because the nipple is distorted after feeding.
• The mother has engorged breasts for longer than one week after birth; this means that her baby is not removing enough milk from the breasts.
• The baby cries excessively because he/she is hungry.
Digital memories can do more than simply assist the recollection of past events, conversations and projects. Portable sensors can take readings of things that are not even perceived by humans, such as oxygen levels in the blood or the amount of carbon dioxide in the air. Computers can then scan these data to identify patterns: for instance, they might determine which environmental conditions worsen a child's asthma. Sensors can also log the three billion or so heartbeats in a person's lifetime, along with other physiological indicators, and warn of a possible heart attack. This information would allow doctors to spot irregularities early, providing warnings before an illness becomes serious. Your physician would have access to a detailed, ongoing health record, and you would no longer have to rack your brain to answer questions such as "When did you first feel this way?"

Human memory can be maddeningly elusive. We stumble upon its limitations every day, when we forget a friend's telephone number, the name of a business contact or the title of a favorite book. People have developed a variety of strategies for combating forgetfulness--messages scribbled on Post-it notes, for example, or electronic address books carried in handheld devices--but important information continues to slip through the cracks. Recently, however, our team at Microsoft Research has begun a quest to digitally chronicle every aspect of a person's life, starting with one of our own lives (Bell's). For the past six years, we have attempted to record all of Bell's communications with other people and machines, as well as the images he sees, the sounds he hears and the Web sites he visits--storing everything in a personal digital archive that is both searchable and secure.
Electrical Brushes Information

Electrical brushes are used in conjunction with slip rings, commutators, or other contact surfaces to maintain electrical connections in rotary and linear sliding contact applications. They require very good frictional characteristics combined with high to moderate conductivity. Buyers of electrical brushes should consider materials of construction when selecting products.

- Graphite brushes are recommended for high power and sliding contact applications because graphite is a good lubricant and provides low-friction contact. Some products combine graphite with other metals (such as copper) for improved electrical characteristics.
- Copper graphite brushes are generally more expensive, but provide excellent flexural strength and high electrical conductivity.
- Silver graphite molybdenum disulfide brushes are recommended for slip ring mechanisms used in aerospace applications, and in solar power arrays.

The GlobalSpec SpecSearch database contains information about these and other types of graphite and metal brushes.

Configurations and Specifications

Electrical brushes are supplied as assemblies, leaf springs, plungers, contact tips or buttons, bar stock, brush pads, tamped or shunted brushes, and solid rod stock. Specifications include current density, operating speed, outer diameter (OD) or width, length, thickness and conductivity. Two performance specifications are especially important.

- Current density is the maximum current per unit area that the brush is designed to handle in continuous use without excessive overheating, erosion or sticking.
- Conductivity, the inverse of resistivity, is often given as a percentage of a copper standard, which is 100% IACS (International Annealed Copper Standard).

Applications for Electrical Brushes

There are many applications for electrical brushes. Some products are designed to avoid the material transfer and severe arc damage that can occur in DC applications.
Brushes designed for sliding contact are recommended for applications in which the contacting members provide the electrical connection or path for the transmission of power or signals across a rotary interface in motors or generators. Electrical brushes are also designed for re-tipping or replacement in various OEM units while maintaining or improving performance characteristics. Applications also include the transmission or pick-up of electrical signals in testing, probing or instrumentation applications.
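To illustrate how the two key specifications above are applied, here is a small sketch. The brush dimensions and ratings are made-up example values, not figures from any catalog; it converts a %IACS rating to resistivity and checks a brush's continuous-current limit from its rated current density:

```python
# Resistivity of the IACS copper standard, in ohm-metres (100% IACS).
IACS_RESISTIVITY = 1.7241e-8

def resistivity_from_iacs(percent_iacs: float) -> float:
    """Convert a conductivity rating in %IACS to resistivity (ohm-m)."""
    return IACS_RESISTIVITY / (percent_iacs / 100.0)

def max_continuous_current(current_density_a_cm2: float,
                           width_cm: float, thickness_cm: float) -> float:
    """Maximum continuous current (A) for a rectangular brush contact face."""
    return current_density_a_cm2 * width_cm * thickness_cm

# Hypothetical copper-graphite brush: 30% IACS, rated 12 A/cm^2,
# with a 2.0 cm x 0.8 cm contact face.
rho = resistivity_from_iacs(30.0)               # ~5.75e-8 ohm-m
i_max = max_continuous_current(12.0, 2.0, 0.8)  # 19.2 A
print(f"resistivity = {rho:.2e} ohm-m, continuous limit = {i_max:.1f} A")
```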
I was reading up on Lagrangian points and the restricted three-body problem. From what I was able to tell, the Lagrangian points are 5 points in a two-body system at which a third body would be relatively at rest. The first three are unstable, and the last two are stable. How is this possible? We know from Kepler's laws that orbits are ellipses with the sun at a focus, and that the line from the sun to a planet sweeps out equal areas in equal times, so the orbital speed varies. How, then, can the third body keep a constant distance when there is an ever-changing gap between the orbital speeds? And could anyone provide (visually) an explanation of the Lagrangian points, how to deduce them, and what assumptions were made in order to deduce them?

LE: So, in the three-body problem, the orbits more closely resemble circles than ellipses? And if so, is the speed relatively constant, so that the difference in distance between the third and second body is negligible?
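On the assumptions the question raises: the classic derivation uses the *circular* restricted three-body problem, in which the two massive bodies move on circular orbits and the five points are fixed in the frame that rotates with them. Under that assumption, the distance from the smaller body to L1 or L2 is approximately the Hill radius, r ≈ a·(m/3M)^(1/3). A quick sketch for the Sun-Earth pair (constants are standard textbook values, and this is the leading-order approximation, not an exact root of the full equation):

```python
# Distance from the secondary body to L1/L2 in the circular restricted
# three-body problem, via the Hill-radius approximation
# r ~ a * (m / (3 M))**(1/3), valid when m << M.
M_SUN = 1.989e30     # kg
M_EARTH = 5.972e24   # kg
AU = 1.496e11        # m, Sun-Earth separation (circular-orbit assumption)

def hill_radius(a: float, m: float, M: float) -> float:
    """Approximate distance from the secondary body to L1/L2, in metres."""
    return a * (m / (3.0 * M)) ** (1.0 / 3.0)

r = hill_radius(AU, M_EARTH, M_SUN)
print(f"Sun-Earth L1/L2 distance ~ {r:.3e} m ({r / AU:.4f} AU)")
# About 0.01 AU from Earth -- roughly where SOHO and JWST actually orbit.
```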
Reader Dennis Brumm writes: A friend of mine in Pennsylvania sent me a care package yesterday (from the era when newspapers still considered Americans literate), and in it he put a copy of an article he stumbled upon in a 1957 issue of The Christian Science Monitor. Amazing stuff below:

The Future of Fossil Fuels

An Intimate Message from Washington
By Neal Stanford
The Christian Science Monitor (Features Section)
June 5, 1957

Speechmakers and speeches are a dime a dozen in this windy city, so a man has to say something particularly significant or be particularly provocative to get the attention of the press these days. One such man and one such speech are Admiral H. G. Rickover (1) and his recent remarks on "Energy Resources and Our Future." Admiral Rickover is the Navy's top man in nuclear propulsion, and the speech referred to is as full of startling, provocative, and significant observations as any your correspondent can remember coming across in years. Which proves there is plenty that is of importance for officials to say--if they will only abandon the obvious, the stereotyped, and the expected.

In the short space allotted this Intimate Message I will paraphrase as nearly as possible the admiral's admirable discussion.

This is what might be called the fossil fuel age. Coal, oil, and natural gas supply 93 per cent of the world's energy. Water power accounts for only 1 per cent. Labor of men and domestic animals accounts for 6 per cent. This is in startling contrast to a century ago, when fossil fuels supplied only 5 per cent of the world's energy, and men and animals 94 per cent. Five-sixths of all the coal, oil, and gas ever consumed by man has been burned up in the last 55 years. The rate at which fossil fuels are being consumed is breathtaking. All coal, oil, and natural gas used before 1900 would not last five years at today's rate of consumption. The United States, with only 5 per cent of the world's population, uses one-third of the world's total energy output.
This accounts for America’s high standard of living. Man’s first step up the ladder of civilization dates from his discovery of fire and his domestication of animals. Then slave labor was used to provide more energy. A reduction in per capita energy consumption always marks a decline in civilization. The exhaustion of wood fuel is said to explain the fall of the Mayan civilization. The depletion of forests for fuel in India and China lessened their energy base and lowered their civilizations. Another cause of declining civilization comes from the pressure of population on available land. The point comes where land cannot support both the people and their domestic animals. Horses and mules disappear first; then the water buffalo is replaced by man--who is two and a half times as efficient as an energy converter as are draft animals. While domestic animals and machines increase productivity per man, maximum productivity per acre is achieved only by intensive manual cultivation. It may well be that it was man’s unwillingness to depend on slave labor for energy needs that turned the minds of medieval Europeans to search for alternate sources of energy, thus sparking the power revolution of the Middle Ages which paved the way for the industrial revolution of the 19th century. When slavery disappeared in the West, engineering advanced. When a low-energy society comes in contact with a high-energy society, the advantage always lies with the latter. Europe not only achieved standards of living vastly higher than those elsewhere but did so while its population was growing at rates far surpassing those of other peoples. Now what of the future of fossil fuels? It is an unpleasant fact that according to our best estimates total fossil fuel reserves (recoverable at not over twice today’s unit cost) are likely to run out at some time between 2000 and 2050 A.D. Oil and natural gas will disappear first; coal last. Nuclear fuels would seem to be the answer. 
But they have their drawbacks. They can't be used in small machines, such as cars, trucks, buses, and tractors. We must remember that the oil we use in the United States in one year took nature 14,000,000 years to create.

Barring atomic war or unexpected changes in the population curve, we can count on an increase in world population from 2,500,000,000 today to 4,000,000,000 by the year 2000 (2). It is an awesome thing to contemplate a graph of world population from prehistoric times to the year 2000 A.D.; for 99 per cent of that time it stretches almost level. In the 8,000 years from the beginning of history to 2000 A.D., world population will have grown from 10,000,000 to 4,000,000,000, with 90 per cent of the growth taking place during the last 5 per cent of that period--or 400 years. It took the first 3,000 years of recorded history to double the population of the world; 100 years for the last doubling; but the next doubling will be in 50 years. Calculations give us the astonishing estimate that the people living in this one year equal one-twentieth of the total number of human beings ever born into the world. (I found this hard to believe; but Admiral Rickover says it is so.)

The Late Admiral H.G. Rickover

To most people, Admiral H.G. Rickover is best known as the father of the nuclear navy and modern nuclear engineering, but to anyone involved with CEE's programs, Admiral Rickover is the father of the nation's premier science and mathematics program. The late Admiral H.G. Rickover emigrated from Poland to the US in 1900. He received his engineering degree from the U.S. Naval Academy in 1922. Following sea duty, Rickover earned a master of science degree in electrical engineering from Columbia University. He served in the Bureau of Ships during World War II. Following the war, he was assigned to the Manhattan Project at Oak Ridge, Tenn., and later served with the U.S. Atomic Energy Commission. As Director of the Naval Reactors branch of the U.S.
Navy, the Admiral developed the world's first nuclear-powered submarine, the USS Nautilus, launched in 1955. In addition to establishing his own graduate schools for nuclear engineering studies and writing several books on education, the Admiral was awarded the Congressional Medal for exceptional public service in 1959 and 1983, and in 1980 was presented the Presidential Medal of Freedom by President Jimmy Carter for his contributions to world peace. His record of 64 years of active military service remains unchallenged. The Admiral passed away in 1986; he is remembered as a renaissance scholar, an intensely principled leader, and a fierce believer in a better world through education. RSI students have come to be called "Rickoids," and as the program enters its 24th year the participants continue to be influenced by his vision.

2. The UN estimated world population was 6.07 billion in the year 2000; it far surpassed the 4 billion estimate of this 1957 article (the Green Revolution at work?).
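The population figures in the article and in note 2 can be sanity-checked with a few lines of compound-growth arithmetic (a back-of-the-envelope illustration added here, not part of the original article):

```python
import math

def annual_growth_rate(p0: float, p1: float, years: float) -> float:
    """Continuous annual growth rate implied by growing from p0 to p1."""
    return math.log(p1 / p0) / years

# 1957 projection: 2.5 billion -> 4 billion by 2000 (43 years), ~1.1%/yr.
projected = annual_growth_rate(2.5e9, 4.0e9, 43)
# What actually happened: 2.5 billion -> 6.07 billion by 2000, ~2.1%/yr.
actual = annual_growth_rate(2.5e9, 6.07e9, 43)

print(f"projected ~ {projected:.2%}/yr, actual ~ {actual:.2%}/yr")
# The article's "doubling in 50 years" corresponds to ln(2)/50 ~ 1.4%/yr;
# at the actual postwar rate a doubling takes only about 34 years.
print(f"doubling time at the actual rate: {math.log(2) / actual:.0f} years")
```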
<urn:uuid:8d91a4df-7437-458a-b808-0f1c972c569f>
CC-MAIN-2016-26
http://www.energybulletin.net/print/22890
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395346.6/warc/CC-MAIN-20160624154955-00086-ip-10-164-35-72.ec2.internal.warc.gz
en
0.95228
1,480
2.515625
3
Strait and narrow A conventional and law-abiding course. 'Straight' is a much more frequently used word than 'strait' these days and so the most common question about this phrase concerns the spelling - should it be 'strait and narrow' or 'straight and narrow'? Well, that depends on just how pedantic you want to be. The source of the expression is the Bible, specifically Matthew 7:13/14. The King James' Version gives these verses as: Enter ye in at the strait gate: for wide is the gate, and broad is the way, that leadeth to destruction, and many there be which go in thereat: Because strait is the gate, and narrow is the way, which leadeth unto life, and few there be that find it. That clearly opts for 'strait' rather than 'straight', as it calls on a now rather archaic meaning of strait, that is, 'a route or channel, so narrow as to make passage difficult'. This is still found in the names of various sea routes, e.g. the Straits of Dover. Such a nautical strait was defined in the 1867 version of Admiral Smyth's Sailor's Word-book as: "A passage connecting one part of a sea with another." Smyth also offered the opinion that strait "is often written in the plural, but without competent reason". The 'confined and restricted' meaning of strait still also lingers on in straitjacket, dire straits, strait-laced and straitened circumstances. All of these are frequently spelled with 'straight' rather than 'strait'. These spellings, although technically incorrect, are now widely accepted and only 'dire straights' comes in for any sustained criticism. The use of 'straight' is quite understandable, certainly in 'straight and narrow'. After all, it means 'direct and reliable', as in the phrase 'as straight as a die', and the imagery of a direct and unwavering route to salvation would have been attractive to Christian believers in the 17th century, when that version of the spelling first appeared. 
It was included in an 1827 publication of A Journal of George Fox, Volume 1, which claims to be a facsimile reprint of the 1694 original journal. The earliest definitive documentation that I can find comes from a few years later, in The Critical Works of Monsieur Rapin, 1706: "The soul of the common people seems too straight and narrow to be wrought upon by any Part of Eloquence." This version of the phrase is old enough, and close enough in date to the earliest example of 'strait and narrow' that I can find in print, to match it in status. That example is in A Vindication of the Government in Scotland: During the Reign of King Charles II, 1712: "Strait and narrow is the way that leadeth unto life." 'Straight and narrow' is now the more common spelling, and you will be in good company if you opt to use it, even though 'strait and narrow' might be a better choice if you want to get high marks in that English language test.
<urn:uuid:9f6aae49-3d9a-4953-82fc-90b70186726a>
CC-MAIN-2016-26
http://www.phrases.org.uk/meanings/strait-and-narrow.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783403502.46/warc/CC-MAIN-20160624155003-00114-ip-10-164-35-72.ec2.internal.warc.gz
en
0.946842
669
2.96875
3
America's Dairyland could dramatically raise the bar on raw milk regulation if its Legislature and governor ever allow it to be sold in Wisconsin. A year after Wisconsin's Raw Milk Policy Work Group began deliberating, the panel is close to a report that will not endorse raw milk sales, but will say that if raw milk is made legal in Wisconsin, the state should impose restrictive requirements that go beyond any now found in America. Former Gov. Jim Doyle's Secretary of Agriculture, Rod Nilsestuen, appointed the Raw Milk Policy Group, which went to work in March 2010. The group's purpose is to consider whether there are legal, regulatory means that might allow dairy farmers to sell unpasteurized fluid raw milk directly to consumers and, if so, what conditions would be necessary to protect public health. The 22-member group represents a wide array of stakeholders and experts. Wisconsin's $26.2 billion dairy industry accounts for almost half of the state's $59.2 billion agricultural industry, which provides 420,000 jobs for 12 percent of the state's workforce. "Because of the economic contribution of the dairy industry, a very important role for Wisconsin state government is assuring that the milk we drink is safe," says David Ward, government relations director for Minnesota and Wisconsin co-ops. The Working Group's report, which has been expected since the end of January, will not call for the legal sale of raw milk in Wisconsin, Food Safety News has learned. Instead it will lay out a long list of requirements that should be imposed if the Legislature ever opts to make raw milk legal. For example, it would call for animal health testing for tuberculosis, brucellosis, Streptococcus agalactiae, and leptospirosis; milk testing for standard plate count, somatic cell count, coliform, antibiotic drug residues, and pathogens including Campylobacter jejuni, Listeria monocytogenes, Escherichia coli O157:H7, and non-O157:H7 Shiga toxin-producing E. coli.
The Working Group will say there should be well-water testing for coliform, and milk temperature requirements for holding and storing unpasteurized milk. It wants specified timeframes for selling and consuming unpasteurized milk and specific containers. It will call for on-farm standards, licensing, and inspection for selling unpasteurized milk, along with legal parameters, public education, and on-farm response and management. When released, the working group report is expected to run about 55 pages, with 35 separate parameters that would have to be met before raw milk could be sold to the public. Expected to be among its recommendations:

- Only on-farm sales directly to consumers would be allowed.
- Laws and regulations would cover producers and farms permitted to sell unpasteurized raw milk.
- Regulations would cover how containers are filled and refrigerated; how milk is tested, and for what pathogens; and licensing procedures.
- On-farm sales of raw milk would not include any special exemption from liability for personal injuries to consumers from the product.
- Dairy farms selling raw milk would have to meet all requirements of the Pasteurized Milk Ordinance's (PMO's) Grade A requirements, except for the one mandating that they market their milk through a dairy-processing plant.
- Raw milk producers milking by hand or storing milk in cans would not be permitted to sell to the public.
- On-farm sale of raw goat milk and raw sheep milk would be prohibited.
- Advertising would be permitted, but only for on-farm purchase and delivery.
- The state would publish best-management practices for selling unpasteurized milk and a consumer's guide for safe handling.
- Upon enactment of a raw milk law, the governor should name a seven-member committee to monitor the effectiveness of the law, including food-safety and public health issues related to the sale and consumption of unpasteurized milk.
- Within four years after any raw milk law takes effect, the committee shall make recommendations to the governor and the DATCP on any needed changes.

Then-Gov. Doyle last year vetoed a law that would have made raw milk sales legal in Wisconsin. Wisconsin would have become the 26th state to do so. Since then, even lawmakers favoring raw milk legalization have been willing to wait on the Working Group's report. Wisconsin's current legislative session, however, continues well into 2011, giving raw milk advocates plenty of time to respond. Wisconsin's Legislature and state government in general have been preoccupied during the last few weeks by protests over the dispute between the new Republican Gov. Scott Walker and public employee unions.

Source: Dan Flynn, Food Safety News
<urn:uuid:cd490f8b-c66c-4b7a-9e3b-ccddf4c109be>
CC-MAIN-2016-26
http://www.dairyherd.com/dairy-news/latest/Wisconsin-should-get-tough-on-raw-milk-Panel-Says-117648129.html?email=yes&cmntid=62738731
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393997.50/warc/CC-MAIN-20160624154953-00074-ip-10-164-35-72.ec2.internal.warc.gz
en
0.935586
980
2.546875
3
Introducing Blog Elements

In the context of blogging, content management almost always means organizing the site by backwards chronology. In this way, your most recent writing appears first. As visitors continue reading your updates, they work backwards in time. Each piece of content is called an entry. When you write a blog, you post entries, and those posted entries are sometimes called posts. (The word post derives from Internet message boards, where online communities chat by means of publicly posted messages.) Each posted entry is stamped with a date and (usually) time. The front page of the blog contains recent entries, with the most recent at the top. Many blogs are organized with big daily headers that group each day's posts.

Blog software makes easy business of posting entries. The interface is usually similar to the Compose screen you use for e-mail. You write in that screen, and then click a button marked Post or Post Entry. The software uploads the entry to the blog, putting it above previous entries on the home page, and assigning it a date and time stamp. The software also assigns the entry its own page, so that each entry has a dedicated URL (Web address). In some instances, bloggers make the software put short entry excerpts on the home page (called the index page), saving the entire entry for the dedicated page. With that arrangement, readers can skim short bits of many entries on the index page, clicking through to a dedicated entry page when they want to read an entire post. Even in blogs where the entire entry is published on the index page, a dedicated page is created — that unique URL enables bloggers to promote individual entries by sharing links to those entries. Creating a series of dedicated pages is how the blog software maintains an archive of everything written.

Here is a summary of the general process of blogging:

1. The blogger writes a blog entry.
2. The blogger posts the new entry.
3. The blog software uploads the new entry and fits it into the Weblog chronologically. Part or all of the entry is placed on the blog's index page (the home page of the site), and the entire entry is also archived on its own page with a unique URL.
4. Visitors see the index page first, where they can skim recent entries in reverse chronological order. They can click through to individual entries on individual pages.

Blog programs create a unique page for each entry for two reasons: linkability (when you want to point to a specific entry) and continuity — the unique pages are an archive of the entire blog.

Archiving is important. You might think that the immediacy of blogs makes past entries obsolete, but the opposite is true. Blogs represent a history of a person's writing. It can be fascinating to dive into a blog's past, and most software encourages visitors to do that by linking to archived posts in a variety of ways:

- On the index page, at least a few days' worth of entries are presented.
- A "recent entries" column is often displayed on the index page's sidebar, listing recent posts that aren't on the index page.
- Deep archives are often listed by month, by year, or by both, somewhere on the index page. A calendar format is sometimes used.

Archived entries are particularly useful in professional diaries and topical blogs, where you might want to research article links and commentary opinion from months or years ago.
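The posting flow described above — date-stamped entries, a reverse-chronological index page with excerpts, a dedicated permalink page per entry, and a monthly archive — can be sketched in a few dozen lines. This is an illustrative toy, not any real blog package; the class and method names are mine:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Entry:
    title: str
    body: str
    posted: datetime

    @property
    def slug(self) -> str:
        # Dedicated URL ("permalink") for the entry: date path plus a
        # title-derived slug, e.g. /2024/01/03/first-post.html
        words = "".join(
            c if c.isalnum() or c == " " else "" for c in self.title.lower()
        ).split()
        return f"/{self.posted:%Y/%m/%d}/{'-'.join(words)}.html"

class Blog:
    def __init__(self) -> None:
        self.entries: list[Entry] = []

    def post(self, title: str, body: str, when: datetime) -> Entry:
        # Posting stamps the entry with a date/time and stores it.
        entry = Entry(title, body, when)
        self.entries.append(entry)
        return entry

    def index(self, count: int = 5, excerpt: int = 80) -> list[tuple[str, str, str]]:
        # The index (home) page: newest entries first, each shown as a
        # short excerpt linking to its dedicated page.
        newest_first = sorted(self.entries, key=lambda e: e.posted, reverse=True)
        return [(e.title, e.body[:excerpt], e.slug) for e in newest_first[:count]]

    def archive(self) -> dict[str, list[str]]:
        # Deep archive grouped by month, the way most blog sidebars do it.
        months: dict[str, list[str]] = {}
        for e in sorted(self.entries, key=lambda e: e.posted, reverse=True):
            months.setdefault(f"{e.posted:%Y-%m}", []).append(e.slug)
        return months
```

Real blog software layers templating, comments, and feeds on top, but the core content model is just this: an append-only list of entries, sorted newest-first for the index and grouped by date for the archive.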
<urn:uuid:20883879-9443-42cd-b088-d130402b38ca>
CC-MAIN-2016-26
http://www.dummies.com/how-to/content/introducing-blog-elements.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395560.14/warc/CC-MAIN-20160624154955-00139-ip-10-164-35-72.ec2.internal.warc.gz
en
0.930106
701
2.59375
3
A probe that binds to synaptic proteins allows scientists to see how the brain changes after you learn something. Photo: Patrick Hoesly

Using a protein found in glowing jellyfish and some fancy engineering, scientists can now watch memories as they form in the brains of mice. That alone is cool, but here's the real sci-fi stuff: within a couple of years, researchers think they'll be able to selectively eliminate memories (at least in mice), much like they do in the John Woo-Ben Affleck classic Paycheck or Eternal Sunshine of the Spotless Mind. Joking aside, here's how it works: your brain is made up of 100 billion neurons that are connected by synapses, which pass electrical communications between each other. When you learn something new, the brain creates new connections between the synapses. "The pattern of synapses—how strong these connections are—is the physical substrate of learning and memory," said Don Arnold, a researcher at the University of Southern California and author of the study published in Neuron. "When you learn something, it's this pattern of connections that gets changed. If you could label these, you could get an idea of how the structure of the brain changes when an animal learns or when something is remembered." That's where the jellyfish protein comes in: Arnold and his colleagues used it to create fluorescent "probes" that can be injected into the brain, where they bind to synaptic proteins. By then exposing a mouse to a new task, they are able to see how the brain's synapses look before and after this exposure. Arnold says the process would be useful for studying disorders such as Fragile X syndrome, which causes autism and mental retardation and is related to synaptic strength. The process might also be used to test new drugs that target brain disorders. Because the process of watching a brain as it learns is pretty invasive, there aren't any plans to try this out in humans.
While Arnold's initial research is certainly interesting, it's what he and other scientists plan to do next with the research that's really out there. The team is working on a way of eliminating memories of learned behavior entirely from mice. "We're in the process of adapting this technology to do exactly that," Arnold said when I asked about the possibility of erasing memories. "There's two parts to our current probe—there's the fluorescent protein part, but then there's also the part that targets the protein to an individual molecule." He says they can theoretically engineer probes that can cause the synaptic proteins to degrade, a project they're currently "finishing up." But animal tests using the technology are probably a couple of years down the road, he said. "The idea, the dream experiment, would be to have the animal learn something, then look at which synapses have changed, and then go in and delete those synapses to see if the animal has unlearned it," he said. What happens then is anyone's guess. Scientists aren't sure whether, once a synapse is denatured, the same synapses will grow back when the animal is re-exposed to the same behavior or memory, whether new patterns will form in other parts of the brain, or whether the animal will be completely unable to relearn the behavior. No word on whether the process will stop you from making the same mistakes with your ex. "The answer is, who knows," he said. "But that's the experiment we'd envision. But we're developing these probes and we want to use them."
<urn:uuid:4b17a6c0-dd49-4121-ab58-8b868ca1efa1>
CC-MAIN-2016-26
http://motherboard.vice.com/blog/usc-scientists-are-working-on-deleting-memories
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395546.12/warc/CC-MAIN-20160624154955-00195-ip-10-164-35-72.ec2.internal.warc.gz
en
0.934755
787
3.328125
3
Bryn Mawr Classical Review 2012.03.33 Henry Cullen, Michael Dormandy, John Taylor, Latin Stories: A GCSE Reader. London: Bristol Classical Press, 2011. Pp. 190. ISBN 9781853997464. $27.00 (pb). Reviewed by Alan Ross, St Gabriel’s School, Newbury, UK (firstname.lastname@example.org) This book, written by a group of three English school teachers, sets out to provide a comprehensive set of practice translation passages for school pupils preparing to sit language papers as part of the General Certificate of Secondary Education examination in Latin. Readers unfamiliar with the English school system are directed to a concise overview in a recent review of one of John Taylor’s other publications (BMCR 2010.08.72). Since 2004, only one examination board (OCR) has offered GCSE and A-Level examinations in Latin and Greek. There is therefore only a single examination syllabus available to classics teachers (where there is often a choice of two or three for their colleagues in other departments). For the GCSE in Latin, pupils sit two language papers, the first requiring marginally less grammar and vocabulary than the second.1 Together they form 50% of the award. It is not a coincidence that this book is the latest in a series of textbooks produced over the last decade, all of which are designed to prepare school pupils in Latin and Greek for a specific set of examination regulations, many of them authored by John Taylor.2 Here Taylor is joined by Henry Cullen, his colleague at Tonbridge School, and Michael Dormandy, head of Classics at Ashford School. The book comprises four sections. Section 1 (by Dormandy) contains thirty passages for translation designed to cover the prescription for Language Paper 1. In accordance with the prescription, the passages are stories drawn both from mythology and Roman domestic life. 
This is the only section of the book which has a noticeable gradation in the complexity of grammar, and as such provides a useful initial selection of passages for use with students during the first weeks or term of the GCSE course. As with all the sections in the book, any vocabulary that is not found in the GCSE list is glossed. Section 2 (twenty exercises by Taylor) provides more practice for Language Paper 1, with stories taken from mythology. The level of grammar required is constant throughout. These passages are also accompanied by comprehension questions of the same style required in Language Paper 1: pupils are asked to extract information from the passage (without rendering a complete translation), and to offer English derivatives of certain Latin words. The GCSE does not require pupils to comment on the use of accidence or syntax, and no such questions are set. The majority of passages are self-contained stories, though a few are sequential, for example the myth of Jason and the Argonauts spread across four passages. Sections 3 (30 exercises by Cullen) and 4 (20 by Taylor) both follow the prescription for Language Paper 2. Together they represent the same sort of preparation as sections 1 and 2; section 3 contains passages for translation only, and Section 4 has accompanying comprehension exercises. All are taken from episodes in Roman history. There is no accompanying key to the comprehension questions in either sections 2 or 4, and thus the book seems designed for use under the supervision of a teacher or instructor, rather than as a tool for self-study. Neither is there a vocabulary list or reference section included. It must be used, therefore, in conjunction with a language course. Teachers in the English school system will find this an enormously useful resource to provide comprehensive practice for their students preparing for the current prescriptions in the GCSE. 
It is well tailored to the linguistic requirements and the question-style of the examinations, and should fill a gap in the market for a reading course for this purpose. There are only two deviations from the OCR grammar prescription. The authors note in the preface that section 2 makes use of fear clauses involving timeo + ne, which are not required by the GCSE. The other receives no explanation: passage 82 (p.150) seems to include a final clause involving a gerund ad aquam ferendum. Knowledge of the gerund is not required for GCSE, whereas that of the gerundive is. It is conceivable that this is a typographical error for the gerundival construction ad aquam ferendam. This explanation would also excuse the use of the un-classical construction of a gerund taking an accusative object. The book is otherwise a good representation of classical style executed within the confines of the grammatical and lexical prescriptions. Similarly, it is well produced and sensibly laid out. The only other typographical slip which could cause students problems is fractos on p.150 which should be feminine (fractas). Perhaps the most curious aspect of the book is the list of "sources of passages" in a short appendix (pp.188-190). It reveals an intrinsic oddity in the GCSE prescription that almost half of the passages in an exam should be stories drawn from 'mythology.' Inevitably this means Greek mythology, and so there is the danger that school pupils may associate Latin as the primary medium through which the modern world encounters Greek myth.
The authors have Latinized names (Ulysses, Hercules, Diana etc.), but we nevertheless find the peculiar situation in which a passage of Latin prose is “based upon” (in the words of the appendix) a Greek version in verse by Euripides.3 The authors do not specify further what exactly they took from their source texts, but an inspection of the passages which are based on works of Latin prose reveals that it is usually the bare outline of a story’s plot, rather than any details of syntax or vocabulary. As such, the book is an impressive feat of prose composition! The subject matter of the passages is ultimately dictated by the examination board rather than the authors, but it does raise the point of the marketability and longevity of a publication which is so closely tied to one set of examination regulations. OCR has already made one major change to the style of its language papers since it became the sole board offering the Latin GCSE. A future change would not render the book obsolete, but it could certainly compromise its primary purpose, and therefore it may be off-putting for schools to invest in multiple copies now. That said, language instructors in other teaching environments should also be able to make good use of this book. The self-contained nature of each passage makes it a useful companion for any ab initio language course. An instructor in a different system should, however, be aware of the extent and limitations of the GCSE grammar prescription. Most notable are the absence of present and perfect subjunctives (and therefore of primary sequence in subordinate clauses), the use of the subjunctive as a main verb, and the gerund (as noted above). The comprehension questions accompanying passages in sections 2 and 4 can easily be ignored if they are not deemed useful, and the entire passage used for translation practice. Overall this is a welcome addition to the resources available to the schoolteacher in the English system. 
The more up-to-date publications there are like this (well-tailored to the needs of school pupils, and with engaging story lines), the stronger the position of the subject in departments across the country will become.

1. The full set of examination regulations is available here.
2. With Bristol Classical Press Taylor has published Greek to GCSE Part 1 (2003), Greek to GCSE Part 2 (2003), Greek beyond GCSE (2008), Essential GCSE Latin (2006) and Latin Beyond GCSE (2009), all aimed at the OCR prescriptions.
3. The story of Iphigeneia at Aulis in passage 48, page 106, for example.
<urn:uuid:b9620562-105a-487b-99db-7ca078ebf0b6>
CC-MAIN-2016-26
http://bmcr.brynmawr.edu/2012/2012-03-33.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391766.5/warc/CC-MAIN-20160624154951-00173-ip-10-164-35-72.ec2.internal.warc.gz
en
0.94151
1,629
2.90625
3
The Pacific Northwest has a 37 percent chance of being hit by an earthquake as big as Chile’s in the next 50 years, according to the news web site of the journal Nature. That more than doubles previous estimates of a 10 to 15 percent risk of a magnitude 8 or larger earthquake. According to the article: “The last earthquake on the subduction zone was on 27 January 1700… [T]he next earthquake is overdue, and there’s a 37% probability it will occur somewhere along the Cascadia fault in the next 50 years. “Public officials should maybe look at the new numbers and think about this earthquake as a real possibility in the next 50 years, instead of just a remote possibility,” Goldfinger says. Put another way: “It’s time for communities from Northern California through Seattle to check their earthquake and tsunami plans and get busier than ever retrofitting vulnerable buildings,” writes Andrew Revkin at his DotEarth blog at The New York Times’ web site. Read the full Nature article here Engineer: Seattle, Pacific NW ‘most vulnerable to a mega-quake like Chile’s’ What if an 8.8 quake hit Seattle? Earthquake in Chile shows what might happen in Seattle, Portland Animated simulation of what a strong earthquake would do to the Alaskan Way viaduct See recent Washington earthquakes with this earthquake tracker How hazardous is your neighborhood? See this map Seattle’s advice for protecting your home Local residents will stand in vigil today to protest illegal actions taken by Israel last night. In an international act of piracy and murder in international waters, the Israeli navy intercepted, boarded, and opened fire on humanitarian activists on a flotilla of ships attempting to deliver humanitarian supplies to the Israeli-blockaded Palestinian Gaza Strip. Activists are reported to have waved white flags. Many Seattle residents have friends and colleagues aboard the flotilla. 
According to news reports, Israeli commandos killed as many as 19 humanitarian activists on board one ship, and have abducted all 700 passengers on board the six boats composing the flotilla who are in the process of being sent against their will to Israel for arrest and/or deportation. In July 2008, the United States signed a contract worth $1.9 billion to transfer the latest-generation of naval combat vessels to Israel at U.S. taxpayer expense. Currently, Congress is in the process of appropriating a record $3.2 billion in military aid to Israel this budget year. Seattle residents will stand in vigil at 1:00 PM today just outside of Folklife at Seattle Center near the corner of 5th and Broad. The vigil is organized by local groups including Palestine Solidarity Committee, Voices of Palestine and the Save Gaza Campaign. For more information, contact Amin Odeh 206 605 8448 or Ed Mast 206 633-1086. Enjoy the bullshit way mainstream media looks at the world. On the mayor’s Ideas for Seattle list of hundreds of good ideas from people thinking about how to help the city, the headlines are about nude beaches and dope. We’d like to highlight the current #8 top-ranked idea — Keep the Conservatory at Volunteer Park Open : So many of these ideas are about creating a better future. What about maintaining what is beautiful and good for this city? It’s a warm, quiet and beautiful place. We should maintain and preserve it for the future. Closing it would be shortsighted. We know the Conservatory doesn’t necessarily involve public nudity, nor marijuana but maybe you can still stop by the site and add your vote to the tally. Meanwhile, the Seattle Parks Department is expected to announce midyear budget cutbacks in the next week. At the end of April, Parks announced that 24 of its 27 wading pools could face closure this summer including the Cal Anderson pool. The Volunteer Park wading area is reportedly not being considered for cutbacks. 
Also on the most recent possible cut list — the Parks-run community centers like Miller. CHS also reported on the concerns from the group Friends of the Conservatory that says it fears the Volunteer Park landmark might face the budget axe as it approaches its 100-year anniversary in 2012. In a repeat of an event in Berlin titled Its Form Will Follow Your Performance, Alex Schweder is looking for five people in Seattle who want free architectural advice from a performance architect. Ideally, he says, the five will not be directly connected to the art world. Its Form Will Follow Your Performance (Seattle) runs at Lawrimore from June 9 – 16. Prospective clients can email their interest to: firstname.lastname@example.org with the subject line “Free Architectural Advice.” From Lawrimore Project, where the results will be exhibited: These people need to be of limited means, and willing to have this process documented and agree to him exhibiting or publishing this documentation. He will then meet with these ‘clients’ for about an hour at Lawrimore Project and hear what is wrong with their apartments. He will then give them free advice about how to renovate their apartment. they will go home and ‘perform’ this renovation, document it and send it to him via email… (more) This report was updated at 10:53 a.m. on Monday From Larry Johnson’s blog: Looking for Trouble. As many as 19 people were killed on boats carrying humanitarian aid to Gaza were attacked tonight by Israeli forces while the flotilla was still in international water, according to activists on the ships and news reports. The latest Al Jazeera report says that: “Israeli commandos have attacked a flotilla of aid-carrying ships off the coast of the Gaza Strip, killing up to 19 people on board. Dozens of others were injured when troops raided the convoy of six ships, dubbed the Freedom Flotilla, early on Monday. 
Israel said activists on board attacked its commandos as they boarded the ships, while the flotilla’s organisers (Free Gaza Movement) said the Israeli forces opened fire first, as soon as they stormed the convoy.” The flotilla was attacked in international waters, 65km off the Gaza coast. Footage from the flotilla’s lead vessel, the Mavi Marmara, showed armed Israeli soldiers boarding the ship and helicopters flying overhead. Al Jazeera’s Jamal Elshayyal, on board the Mavi Marmara, said Israeli troops had used live ammunition during the operation. The Israeli Army Radio said soldiers opened fire “after confronting those on board carrying sharp objects.” No one on the ships was armed. BBC reported that: “The Israeli navy has stormed a convoy of ships carrying aid to the Gaza Strip, with reports of at least two people killed. Armed forces boarded the vessels overnight, clashing with some of the 600 protesters on board. The exact location of the interception is unclear. Israel had warned the ships not to enter its territorial waters. The ships are carrying 10,000 tonnes of aid to try to break an Israeli-led blockade. Turkish TV pictures taken on board the Turkish ship leading the flotilla show Israeli soldiers fighting to control passengers. The footage showed a number of people, apparently injured, lying on the ground. The sound of gunshots could be heard. It is not clear whether the fighting is continuing.” An earlier statement on the website of the Free Gaza Movement said: On May 24, 2010, the Freedom Flotilla sets sail for Gaza determined to, once again, challenge Israel’s blockade of 1.5 million Palestinians trapped in an open-air prison. 
Under the coordination of the Free Gaza Movement, numerous human rights organizations, including the Turkish Relief Foundation (IHH), the Perdana Global Peace Organization from Malaysia, the European Campaign to End the Siege of Gaza, and the Swedish and Greek Boat to Gaza initiatives will send three cargo ships loaded with reconstruction, medical and educational supplies. At least five passenger boats with over 600 people on board will accompany the cargo ships. These passengers include members of Parliament from around the world, U.N., human rights and trade union activists, as well as journalists who will document the largest coordinated effort to directly confront Israel’s illegal blockade of Gaza and take in basic supplies. The mission, according to the Free Gaza Movement site, is “to break the siege of Gaza. We want to raise international awareness about the prison-like closure of the Gaza Strip and pressure the international community to review its sanctions policy and end its support for continued Israeli occupation. We want to uphold Palestine’s right to welcome internationals as visitors, human rights observers, humanitarian aid workers, journalists, or otherwise. “We have not and will not ask for Israel’s permission. It is our intent to overcome this brutal siege through civil resistance and non-violent direct action, and establish a permanent sea lane between Gaza and the rest of the world.” The group has stressed that they want to achieve their goals through nonviolence. Note from Greg Lundgren: Dennis Hopper is dead.. At 74 years old, it’s kind of a miracle that he lived that long. For all of the crazy things he did in his life, for all the drugs he consumed with great enthusiasm, for the very address he resided at, it is amazing that something like cancer was what took him down. Tomorrow most of us get the day off for Memorial Day. 
And I think Memorial Day should be broader than paying our respects to the military men who have protected our freedom for the past 234 years. There are those that defend our freedom with guns and uniforms. And there are those that defend our freedom through a life of art and action. We defend our constitution so crazy son of a bitches like Dennis Hopper can make films, take pictures, perform and generally test the limits of what freedom looks like. If our country was a jet plane, they would hire Dennis Hopper to stress test it. If our country was a piece of luggage, they would give it to Dennis Hopper to see if he could break it. What makes our country so great is that it permits people like Dennis Hopper to create and destroy and live in a way that most people don’t understand or endorse. more at Another Bouncing Ball

It’s been almost a year since we got the scoop on the city’s Drug Market Initiative, which focused police resources and community involvement to clean up the open-air drug sales in the area around 23rd & Union. Dealers were identified, cases were built against them, and they were given a choice: get help and stop dealing, or go to jail for a long time. More than a dozen dealers took them up on their offer and left the streets. For about six months afterward the comments from community members were glowing. Residents could walk to the post office without wading through crowds of users and dealers. Nearby streets that were once occupied at all hours were suddenly quiet. But things began to take a turn last month… (more)

A year ago, Capitol Hill nonprofit Home Alive was struggling and eventually made the hard choice to shut down its space to try to keep going as an advocacy group and trainers for women’s self-defense. This week, the group announced it was shutting its organization down with a celebration on June 12th to mark the group’s 17 years of work. Home Alive was created in 1993 following the rape and murder of local singer Mia Zapata.
Here’s the e-mail we received from organizers announcing the decision — and the party:

Dear Capitol Hill & Home Alive Community, The members of Home Alive’s Board of Directors, together with the instructor collective, have decided to close as a 501(c)(3) organization and to lay the Home Alive program dormant. We are throwing a party to celebrate and commemorate our 17 years in the community, as well as to look forward together to the ways we can carry on the work and spirit of Home Alive. We hope you will be able to attend!

When: Saturday, June 12, 2010
Where: Hidmo, 2000 S Jackson St
What: Live music, food, a collaborative expressive arts opportunity, free stuff!

Please contribute to our celebration by showing up, listening and sharing stories or memories about Home Alive. There will be open mic time and we want to hear from you!

Who: YOU! Hidmo is ALL AGES until 11pm

(If you'd like to help out with the party, please contact us by email.)

As you may know, Home Alive closed our studio in the beginning of 2009 and continued to operate at a very minimal capacity while we successfully paid off our debt. On April 15, 2010, after weighing a variety of options over the last several months, the instructors and board members of Home Alive made the hard decision to discontinue operating as a 501(c)(3) non-profit. While this means dissolving our assets, we intend to make as much of our organization's amazing history and curriculum available as possible to the community (mostly via our website, www.homealive.org). A few instructors will remain available on a limited basis for consultations and workshops. They can be reached at email@example.com. The awesome work and movement-building that Home Alive has been a part of for many years continues to grow, and you can continue to support it!
Check out local and national organizations like the Northwest Network, Creative Interventions, For Crying Out Loud, Feminist Karate Union, Seven Star Women's Kung Fu, Generation 5, Chaya, Queer and Trans Jailstoppers, Break the Silence, Communities Against Rape and Abuse, and Incite! Women of Color Against Violence, among others. Also, Home Alive will be leading a workshop at the US Social Forum in Detroit in late June. We’re incredibly grateful for the support you've given us over the years. Thank you for being a part of the Home Alive community, and we look forward to seeing you on June 12!

Zapata’s July 7, 1993 murder was a mystery until the arrest of Jesus Mezquia in 2003. He was sentenced to 34 years in prison for the crime, had the conviction overturned on a technicality, and was re-sentenced again in 2009.

We haven’t checked in with the Cobra Lounge since the hookah club lit its first bowls of shisha at the intersection of Madison and Union back in late March. As we told you then, the Cobra was the second project from a group that figured out how to navigate the state and county bureaucracy surrounding tobacco-based businesses with its first club up in Bellingham. Recently, another nearby club caught the attention of King County Health. Majles Cafe, on 12th Ave down across Madison, got dinged by inspectors last summer for various infractions including allowing non-members to smoke but, instead of packing it up and shutting down, Majles has re-opened in an even larger space. Cobra owner Erin Cobb told us in March that he was setting up the Capitol Hill club differently than his Bellingham venture. In Bellingham, Cobra customers buy their shisha in one area and move to a separate club location nearby to smoke it and hang out. On Capitol Hill, Cobb said he set up the lounge as one facility. But he’s also ready to convert the space to a layout he believes will put the lounge in compliance.
Cobb told us this week that King County Health has been in touch but they gave him “no real word on what is happening with the other hookah lounges,” he said. Right now, it’s business as usual for the Cobra. The club also has an application for a liquor license posted so it’s possible that soon club goers will be able to enjoy a drink and a smoke. Meanwhile, according to KING 5, Majles has until July 15 to prove to County Health that it is complying with state laws.

With St. Mark’s, St. Joe’s and Lake View Cemetery, it’s not unusual to see a funeral procession through north Capitol Hill. It is not every day, however, that we know who the ceremonies are for and who has passed. Saturday at 1 PM, St. Mark’s Cathedral will be host to a service for the Rt. Rev. Robert Hume Cochrane, sixth Bishop of the Episcopal Diocese of Olympia, who died earlier this month at the age of 85. Here’s some more about his life and ministry:
Venture-backed start-ups are readying swine-flu vaccines in case the recent outbreak escalates into a larger emergency. The Department of Health and Human Services declared an emergency Sunday after 20 swine influenza A-virus infections were confirmed in the U.S., including eight in New York City and seven in California. U.S. cases have been mild, but some biotech start-ups are set to respond if officials decide existing vaccines aren’t enough. Over the weekend, Vaxart Inc. secured the gene sequence for the swine-flu virus from the Centers for Disease Control and Prevention, said Sean Tucker, vice president of research for the San Francisco company, whose investors include Bay Partners and Quantum Technology Partners. The DNA sequence, pulled from a CDC database, will enable Vaxart to begin developing a vaccine that it will be ready to test in animals in four to six weeks. The CDC said Sunday that it is working with the Food and Drug Administration and the National Institutes of Health to develop a vaccine precursor that could be used to make a swine-flu vaccine. The antigens that trigger a protective immune response in today’s vaccines, however, are grown in eggs over a period of six months or more. If the swine flu spreads rapidly, the government may turn to companies whose experimental technologies promise to yield large batches of vaccines more quickly. This could put start-ups like Vaxart in line to lock up lucrative contracts to supplement the government’s store of vaccines. Vaxart makes vaccines by combining an antigen – in this case, the swine-flu DNA – with a non-replicating viral vector and an adjuvant that boosts the immune response. This “modular” approach is about two months faster than egg or cell-culture systems, said Chief Executive Mark Backer. If swine flu becomes a large problem, U.S. regulators may permit Vaxart and others to move rapidly from animal research to human studies, he said. 
Vaxart, which is also developing seasonal and avian-flu vaccines, may slow or suspend those programs if the swine flu affords it an opportunity to move its products more rapidly into human studies, Backer said. Another company taking aim at seasonal and avian flu, VaxInnate Corp., plans to use the spate of swine-flu infections to see how rapidly it can come up with a vaccine candidate, said CEO Alan R. Shaw. The company, funded by CHL Medical Partners, HealthCare Ventures, and others, uses molecular biology to produce vaccines in E. coli instead of eggs or cell culture, which enables it to produce a product in about six weeks. VaxInnate, based in Cranbury, N.J., already has tested a vaccine in clinical trials for Solomon Islands flu, which is similar to the swine flu, Shaw said. While swine flu may be quickly contained, the government will need speedier vaccine-production technologies to manage future influenza outbreaks. “They’re going to need a tool like this sooner or later,” Shaw said. “This will happen over and over, it’s just a matter of how frequently.” Not all venture-funded concerns are jumping at swine flu. Vaxin Inc., a Birmingham, Ala., company funded by Greer Capital Advisors, and others, will not shift course to target what so far has been a manageable problem in the U.S., said CEO William Enright. Instead, Enright intends to continue to concentrate on its intra-nasal vaccines for two more-established threats: seasonal and avian flu, he said. “Vaxin is a pretty small company,” Enright said. “We need to stay focused on moving toward the marketplace.”
Tobacco Road and God's Little Acre

Although Erskine Caldwell wrote more than sixty books, twenty-five novels among them, he is best known for two works of long fiction, Tobacco Road (1932) and God's Little Acre (1933). Tobacco Road was named one of the Modern Library's 100 best novels of the twentieth century, and God's Little Acre remains Caldwell's single most popular work, having sold more than 10 million copies. Along with the less well-known Journeyman (1935), these books make up a serio-comic trilogy of Georgia life in the first half of the twentieth century. They detail the ruination of the land, the growth of textile mills, and the abiding influence of fundamentalist religion in the South. These books thus present a radical contrast to the traditionally genteel and romantic views of the region, popularized most notably by Margaret Mitchell in Gone With the Wind (1936). Tobacco Road, published in 1932, drew on the poverty he witnessed as a young man growing up in the small east Georgia town of Wrens. His father, Ira Sylvester Caldwell, who was pastor of the local Associate Reformed Presbyterian Church, was also an amateur sociologist and often took his son with him to observe some of the more destitute members of the rural community. Erskine Caldwell's sympathy for these people and his outrage at the conditions in which they lived were real, and his novel was meant to be a work of social protest. But he also refused to sentimentalize their poverty or to cast his characters as inherently noble in their sufferings, as so many other protest works did. The novel's Lester family, headed by the shiftless patriarch Jeeter, both appall and intrigue readers with their gross sexuality, casual violence, selfishness, and overall lack of decency. Living as squatters on barren land that had once belonged to their more prosperous ancestors, the Lesters have come to represent in the American public's mind the degradation inherent in extreme poverty.
That Caldwell also portrays them as often-comic figures further complicates the reader's response. Tobacco Road is a call to action, but it offers no easy answers and thus has generated intense debate both in and out of the South. Many southerners denounced the novel as exaggerated and needlessly cruel and even pornographic, an affront to the gentility of the region. Northern critics, however, tended to read the book as a serious indictment of a failed economic system in need of correction. Caldwell later explained that the book was not meant to represent the entire South, but for many this work confirmed demeaning southern stereotypes. The stage version of Tobacco Road was written by Jack Kirkland and opened on December 4, 1933, at the Masque Theatre in New York City. Caldwell had little to do with the play version and initially felt it would fail. First reviews were mixed, and after a month of sporadic attendance, the play moved to the 48th Street Theater, where it slowly became a word-of-mouth success. With Henry Hull as the first of five actors who would play Jeeter Lester, Tobacco Road ran for more than seven years, through 3,182 performances. When it closed on May 31, 1941, it had become the longest-running play in the history of the Broadway stage up to that time. Road shows took the play to cities throughout the nation and later into foreign countries. In 1934 Chicago mayor Edward F. Kelly declared the play obscene and closed it down. The producers sued, and in a major court case, the play was allowed to continue. This was the first of numerous attempts to censor the show, which was often taken to court or banned during its many runs. Caldwell tirelessly defended the play and the book and, in the process, became a leading advocate for artistic freedom and First Amendment rights. In 1940 Darryl F. 
Zanuck and Twentieth Century Fox, which had just produced John Ford's classic film version of John Steinbeck's The Grapes of Wrath, bought the screen rights to Tobacco Road. Ford and screenwriter Nunnally Johnson (a Georgia native) attempted to preserve the caustic comedy and social protest of the book and play, but the studio overruled them on central issues, specifically the tragic ending. The result was a sentimental burlesque that Caldwell himself disavowed. Starring Charley Grapewin, repeating his stage role as Jeeter Lester, the film was released in 1941. It enjoyed initial success but is now considered one of Ford's lesser movies, a poor relative of his great work in The Grapes of Wrath. God's Little Acre was published by Viking Press in 1933, one year after the publication of Tobacco Road. In it, Caldwell shifts his sights to the industrialized South. Influenced in part by the textile mill strikes in Gastonia, North Carolina, he considered this work to be a "proletarian" novel dealing with the plight of workers deprived of union protection. It was intended to support these mill hands, or "lintheads," as they were sometimes called. Will Thompson, who leads the strike, represents both the inherent power and the frustration of the working class. When Thompson is killed by guards as he attempts to reopen the mill shut down by its ruthless owners, his death becomes a rallying cry; and his corpse is borne through the streets, but the mills remain closed. The book also examines the misuse of the land and other natural resources. Ty Ty Walden, who (unlike Jeeter Lester) still owns his farm, spends his time digging for gold instead of farming the rich soil. His delusion and the tragedy it brings to his family again illustrate the waste Caldwell saw in southern attitudes toward the land. Like Tobacco Road, God's Little Acre contains scenes of explicit sexuality. 
In April 1933 the New York Society for the Suppression of Vice took Caldwell and Viking Press to court for dissemination of pornography. More than sixty authors, editors, and literary critics rallied in support of the book, and Judge Benjamin Greenspan of the New York Magistrates Court ruled in its favor. The court case is still considered a major decision in the establishment of artists' First Amendment rights in freedom of expression. The book became a worldwide best-seller and remains today one of the most popular novels ever published. In 1958 director Anthony Mann and screenwriter Phillip Yordan, in collaboration with Caldwell, made the film version of God's Little Acre, starring Robert Ryan as Ty Ty Walden and Aldo Ray as Will Thompson. The film, like the book, was considered scandalous and became one of the top-grossing movies for that year. Truer to its source than John Ford's Tobacco Road had been, God's Little Acre remains the best representation of Caldwell on film.
A key challenge in the production of second generation biofuels is the conversion of lignocellulosic substrates into fermentable sugars. Enzymes, particularly those from fungi, are a central part of this process, and many have been isolated and characterised. However, relatively little is known of how fungi respond to lignocellulose and produce the enzymes necessary for dis-assembly of plant biomass. We studied the physiological response of the fungus Aspergillus niger when exposed to wheat straw as a model lignocellulosic substrate. Using RNA sequencing we showed that, 24 hours after exposure to straw, gene expression of known and presumptive plant cell wall–degrading enzymes represents a huge investment for the cells (about 20% of the total mRNA). Our results also uncovered new esterases and surface interacting proteins that might form part of the fungal arsenal of enzymes for the degradation of plant biomass. Using transcription factor deletion mutants (xlnR and creA) to study the response to both lignocellulosic substrates and low carbon source concentrations, we showed that a subset of genes coding for degradative enzymes is induced by starvation. Our data support a model whereby this subset of enzymes plays a scouting role under starvation conditions, testing for available complex polysaccharides and liberating inducing sugars, that triggers the subsequent induction of the majority of hydrolases. We also showed that antisense transcripts are abundant and that their expression can be regulated by growth conditions. The aim of second generation biofuels is to produce fuels from non-food crops and agricultural wastes such as straw. A key, and often limiting, step is the extraction of simple sugars (saccharification) from the complex plant materials. This typically requires the use of fungal enzymes. 
Many such enzymes have been isolated and characterised, but less is known about how fungi naturally utilise their array of enzymes and what other strategies they employ in degrading plant material. In this study, we show that the filamentous fungus Aspergillus niger deploys a large number of plant cell wall-degrading enzymes when using wheat straw as its carbon source. Our results identify several other types of proteins that may play a role in this process, and thereby offer applications in the improvement of current saccharification processes. In addition, we show that wheat straw itself is not initially detected by the fungus and, instead, the onset of carbon starvation triggers the release of a small subset of degradative enzymes. These enzymes might play a scouting role, to sense the presence of plant cell walls and initiate degradation on a small scale, in turn releasing sugars that cause the fungus to express its full degradative arsenal.

Citation: Delmas S, Pullan ST, Gaddipati S, Kokolski M, Malla S, Blythe MJ, et al. (2012) Uncovering the Genome-Wide Transcriptional Responses of the Filamentous Fungus Aspergillus niger to Lignocellulose Using RNA Sequencing. PLoS Genetics 8(8): e1002875. doi:10.1371/journal.pgen.1002875
Editor: Jens Nielsen, Chalmers University of Technology, Sweden
Received: January 4, 2012; Accepted: June 23, 2012; Published: August 9, 2012
Copyright: © Delmas et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Funding: The research reported here was supported by the Biotechnology and Biological Sciences Research Council (BBSRC) Sustainable Bioenergy Centre (BSBEC), under the programme for ‘Lignocellulosic Conversion To Ethanol’ (LACE) [Grant Ref: BB/G01616X/1].
This is a large interdisciplinary programme and the views expressed in this paper are those of the authors alone, and do not necessarily reflect the views of the collaborators or the policies of the funding bodies. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests: The authors have declared that no competing interests exist.

The conversion of cellulose and hemicellulose from non-food crop sources into fermentable sugars is one of the key challenges in the production of second generation biofuels. Fungi are the predominant source of enzymes currently being used on an industrial scale for this purpose. Many relevant enzymes have been isolated and characterised for functionality. The overall aim of our study was to look beyond the simple array of hydrolytic enzymes produced by fungi and to understand the strategies that fungi employ to degrade complex polysaccharides. This approach may provide novel insights into the development of strategies for the production of second generation biofuels. Aspergillus niger is a filamentous, black-spored fungus that has been used in many industrial processes, including the production of enzymes, food products and pharmaceuticals. This historical importance has led to the development of a wide array of genetic tools. These include a variety of mutagenesis systems, both targeted and random, highly tuneable expression systems, and complete genome sequences for both the enzyme-producing industrial strain CBS 513.88 and the citric acid-producing strain, ATCC 1015. The Carbohydrate-Active Enzymes Database (http://www.cazy.org/) identifies the CBS 513.88 genome as encoding 281 putative polysaccharide degrading enzymes that represent 61 different enzyme families. Thus, the genome of A. niger encodes one of the most diverse CAZy enzyme arsenals among currently sequenced fungal genomes. The availability of DNA microarrays for A.
niger has led to discoveries in the areas of protein secretion and of global transcriptional responses to simple sugars such as glucose, xylose and glycerol. This genomic information is complemented by studies that have elucidated some of the basic molecular pathways by which hydrolytic enzyme production and sugar metabolism are regulated in A. niger. Furthermore, genome-wide approaches provide a basis for comparability with other fungal species. This wealth of background knowledge of A. niger and closely-related species, and their responses to monomeric sugars and simple polysaccharides, provides an excellent foundation on which to build more complete studies of growth on more complex, industrially relevant substrates. Measuring gene expression of Neurospora crassa during growth on Miscanthus, and transcriptional changes when A. niger is exposed to sugar cane bagasse using microarray technology, have previously provided important insights into degradation of those substrates. Wheat straw is one of the most attractive potential feedstocks for biofuel production. It is a co-product of cereal grain production and is available in significant quantities; for example, in the order of 10 million tonnes are produced in the UK each year. This study aims to gain a thorough understanding of the mechanisms employed by A. niger to degrade and grow upon this complex lignocellulosic substrate, beginning with the transcriptional changes associated with exposure to wheat straw compared to simple sugars. Our data were acquired using Next Generation RNA-sequencing (RNA-seq) technology which provides a wealth of information for the confirmation of gene predictions from both published genomes, as well as in identifying alternative splicing patterns, novel genes/exons, transcription start/end points and antisense (AS) transcripts.

Saccharification of wheat straw by A. niger

The wheat straw used was composed of 37±1.69% cellulose and 32±1.2% hemicelluloses, 22±0.1% lignin and, after ball milling, the substrate retained 25±0.76% crystallinity (data are the mean of three replicates and are shown ± standard deviation). In order to determine the time at which degradation of the wheat straw by A. niger had begun to take place, the monomeric sugar content of the culture supernatant was analysed by HPLC. Figure 1A shows that, prior to inoculation, in minimal media containing 1% straw, the concentration of free monomeric sugar present in the liquid fraction was 76±0.9 µM. After 12 h of incubation, control samples, which had not been inoculated with A. niger, contained similar levels of each sugar. In the A. niger cultures, the total concentration of free monomeric sugar present in the liquid fraction increased to 166±26.9 µM, showing that degradation of wheat straw polysaccharides had begun to take place. There were also changes detected in the proportions of individual sugars (Figure 1A). Xylose, arabinose and galactose levels were increased by a statistically significant level, whilst glucose levels were not. Two non-exclusive hypotheses could explain this observation: i) the hemicellulose fraction of the lignocellulosic substrate is degraded first, and/or ii) glucose is preferentially imported by the fungus. Indeed a requirement for glucose depletion prior to xylose utilisation, when both sugars are present, has been observed in Aspergillus nidulans. After 24 h of incubation, levels of free sugar had not increased any further, suggesting that the balance of degradation and sugar uptake had reached a steady state.
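The per-sugar significance calls above rest on an unpaired t-test between triplicate control and inoculated cultures. As a sketch of the underlying arithmetic, the snippet below computes a pooled-variance Student's t statistic and compares it against the standard two-tailed critical value for 4 degrees of freedom; the concentration values are invented for illustration and are not data from the study.

```python
from math import sqrt

# Critical value for a two-tailed Student's t-test, alpha = 0.05, with
# 4 degrees of freedom (two groups of three replicates): standard table value.
T_CRIT_DF4 = 2.776

def unpaired_t(sample_a: list[float], sample_b: list[float]) -> float:
    """Student's t statistic for two independent samples (pooled variance)."""
    na, nb = len(sample_a), len(sample_b)
    ma, mb = sum(sample_a) / na, sum(sample_b) / nb
    # unbiased sample variances
    va = sum((x - ma) ** 2 for x in sample_a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in sample_b) / (nb - 1)
    # pooled variance across both samples
    pooled = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    return (ma - mb) / sqrt(pooled * (1 / na + 1 / nb))

# Hypothetical xylose concentrations (µM) in control vs. inoculated cultures
control = [20.0, 22.0, 21.0]
culture = [55.0, 60.0, 58.0]
t = unpaired_t(culture, control)
print(abs(t) > T_CRIT_DF4)  # True, i.e. significant at p < 0.05
```

With triplicates this test has little power, so only large differences (as in the reported sugar increases) clear the critical value.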
RT-PCR showed that transcription of several genes encoding glycoside hydrolases (the endoglucanase eglA, the cellobiohydrolase cbhA, and the endoxylanases xynA and xynB) that are transcriptionally activated in response to xylose by the xylanolytic regulator XlnR were highly induced at this point, when compared to expression in the Glucose 48 h cultures (data not shown). These two time-points were therefore chosen for RNA-seq analysis (Glucose 48 h and Straw 24 h). After 24 h incubation in the straw media, the particles of straw were in intimate association with A. niger mycelia (Figure 1B). It is possible therefore that the responses seen are due not only to the presence of inducing molecules, but also due to the physical interaction between the fungal mycelia and the straw.

Figure 1. (A) The monomeric sugar content of culture supernatants was analysed by HPLC at 0, 12 and 24 h after the transfer of the mycelia to the straw media. Each bar represents the mean +/− the standard deviation of values from three independent experiments, where black represents control cultures and blue represents cultures containing A. niger. The asterisk indicates p-values<0.05 relative to the control culture at the corresponding time by unpaired t-test. (B) A. niger mycelial clump from a culture grown for 24 hours in minimal media containing straw as sole carbon source.

To investigate the repressive effect of glucose on expression of degradative enzymes, glucose was added exogenously to the wheat straw-incubated 24 h cultures to a final concentration of 1% (w/v). Samples were taken for RT-PCR analysis after 30 min, 1, 2 and 5 h. For all genes tested the levels of hydrolase expression decreased over this time course, reaching for most of the genes a similar level to that seen in the Glucose 48 h cultures after 5 hours of exposure to the exogenously added glucose (data not shown).
Therefore, 5 hours after the addition of glucose to the straw cultures was selected as the final time point for RNA-seq analysis (Straw+Glucose 5 h). These represent three physiologically distinct conditions: long-term growth under glucose-repressing conditions, growth in the presence of an inducing lignocellulosic substrate, and growth in the presence of the inducing substrate and glucose simultaneously. RNA was extracted from triplicate independent cultures in each condition for RNA-seq analysis.

The wheat straw-induced transcriptome of A. niger

Reads were mapped to the A. niger ATCC 1015 genome sequence as it is phylogenetically very close (based on the β-tubulin sequence) to the N402 strain used in this study, and RPKMs (Reads Per Kilobase of exon model per Million mapped reads) were calculated for each annotated gene. The ATCC 1015 gene model is thought to under-predict the true number of genes present in the A. niger genome, so in order to extract the maximum amount of data from our transcriptome sequencing, reads were also mapped to the CBS 513.88 sequence, which has a greater number of predicted genes. Approximately 2.5% more reads were successfully mapped to the ATCC 1015 genome than the CBS 513.88, reflecting the closer relationship between this strain and N402 used in this study. The CBS 513.88 genome model contains 4213 genes not included within the ATCC 1015 genome model, and 939 of these genes were found to have an RPKM of 1 or more in at least one of the conditions tested in our transcriptome and are therefore very likely to be present within the ATCC 1015 genome also. A full list of the gene expression for the 4213 genes is included in supplementary material (Table S1). Eight of these genes encoded potential polysaccharide degrading enzymes or demonstrated a transcriptional pattern in the CBS 513.88 genome of interest to this study, and so were added to the ATCC 1015 gene model and their RPKM values from the ATCC 1015 mapping were calculated.
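The RPKM normalisation used above divides a gene's read count by its length in kilobases and by the library size in millions of mapped reads, so expression can be compared across genes and sequencing runs. A minimal sketch of the calculation (the counts and gene length below are hypothetical, not values from the study):

```python
def rpkm(gene_reads: int, gene_length_bp: int, total_mapped_reads: int) -> float:
    """Reads Per Kilobase of exon model per Million mapped reads."""
    kilobases = gene_length_bp / 1_000
    millions = total_mapped_reads / 1_000_000
    return gene_reads / (kilobases * millions)

# Hypothetical example: 500 reads on a 2 kb gene in a library of
# 10 million mapped reads
print(rpkm(500, 2_000, 10_000_000))  # 25.0
```

Because both length and depth are divided out, a long gene with many reads and a short gene with few reads can end up with the same RPKM, which is what makes thresholds like "RPKM of 1 or more" comparable across the gene set.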
RPKM values for all genes were calculated for each of the separate biological replicates, as well as for the combined mapping of all three (Table S2). Table S3 shows that inter-replicate reproducibility was extremely high (R-squared values of >0.9). The results shown within the main text are from the combined mapping scores. Three statistical significance tests were applied to changes in gene expression: the Likelihood Ratio Test, Fisher's Exact Test, and an MA-plot-based method with a Random Sampling model. The results of these tests for each gene are listed in Table S2. All gene inductions discussed within the text had a p-value of <0.001 for all three statistical tests.

Expression of CAZy genes

Deconstruction of plant cell wall polysaccharides is mediated by enzymes of three major classes: Glycoside Hydrolases (GH), Carbohydrate Esterases (CE) and Polysaccharide Lyases (PL). The Carbohydrate Active Enzyme database (CAZy - http://www.cazy.org) subdivides enzymes within these three classes into families based on their related activity and sequence. CAZy defines 281 of these enzymes within the CBS 513.88 genome, all but two of which are also predicted in the ATCC 1015 genome model (CBS 513.88 annotation: An14g07390 and An14g07420). The ATCC 1015 genome encodes 246 predicted GHs, representing 51 families; 25 CEs, representing 9 families; and 8 PLs, representing 2 families. After 48 hours of growth in minimal media with 1% glucose, CAZy genes represent approximately 3 percent of total mRNA, with the glucoamylase glaA (GH15 family) accounting for the majority (over 65 percent) of this (Figure 2A). SDS-PAGE of the culture supernatant revealed the presence of a highly predominant protein band, which was identified by tandem MS as GlaA (data not shown). Twenty-four hours after the mycelia were transferred to straw, expression of CAZy genes made up more than 19 percent of total mRNA.
This is a strong over-representation of the CAZy group of genes, as they represent only ∼2.5 percent of the coding genome. Thirty of the induced CAZy genes reached an expression level above 50 RPKM. They represent 14 families of GH, 2 of CE and 1 of PL (Table 1). The sampling conditions are shown for the RNA-seq study. Cells were grown in glucose (48 h), washed and transferred to straw (24 h), and then glucose was added (downward arrow) followed by a further 5 h incubation. (A) Percentage of total mRNA (calculated from RPKM values) represented by CAZy family genes in each condition of the transcriptome study. (B) Proportions of total CAZy gene mRNA from each enzyme family. The families are listed in decreasing order of expression in the Straw 24 h condition. The diverse categories of CAZy genes expressed during exposure to straw reflect the complexity of the carbohydrates present within the substrate. However, it is interesting to note that around 65 percent of the mRNA from the CAZy group at this time point is from genes encoding just 5 families of enzyme: GH7, 11, 61 and 62 (cellobiohydrolases, xylanases, polysaccharide monooxygenases and arabinofuranosidases, respectively) and CE1 (acetyl xylan esterases) (Figure 2B and Table 1). Proteins from each of these categories, except GH61, were also identified within culture supernatants by SDS-PAGE and tandem MS (data not shown). The fact that we did not identify any GH61 proteins, which play a role in the oxidative cleavage of recalcitrant plant biomass, amongst the major bands could be due to a discrepancy between transcript and protein levels, or simply a technical issue in detection, such as the protein not staining well, or the protein being attached to the substrate or the fungal membrane, whilst only the supernatant was analysed. Based on transcript abundance, the 5 categories of encoded enzyme might provide the bulk of activities required for the degradation of straw.
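The "percentage of total mRNA" figures quoted here (e.g. CAZy genes at ∼19 percent on straw) can be computed from RPKM values as a gene set's share of the summed RPKM over all genes. The sketch below is illustrative only; the RPKM values are invented, not the study's data.

```python
# A minimal sketch (not the study's code) of how a gene set's share of
# total mRNA can be derived from RPKM values, as done for Figure 2A.
def percent_of_total_mrna(rpkms, gene_subset):
    """Share of summed RPKM contributed by a subset of genes, in percent."""
    total = sum(rpkms.values())
    subset = sum(rpkms[g] for g in gene_subset)
    return 100.0 * subset / total

# Invented values for three genes (real totals span thousands of genes).
rpkms = {"glaA": 650.0, "cbhB": 120.0, "other_genes": 230.0}
print(percent_of_total_mrna(rpkms, {"glaA", "cbhB"}))  # 77.0
```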
Catabolite repression, by addition of glucose to the straw cultures, exerts strong repression upon CAZy gene expression; after 5 hours, CAZy gene mRNA is reduced to only 1% of total mRNA, with the glucoamylase glaA (GH15) again becoming the most expressed CAZy gene under this condition (Figure 2B).

Expression of non-CAZy genes

Twenty-eight genes that were transcriptionally induced more than 20-fold, and reached an RPKM of greater than 50 after the switch to straw, do not encode hydrolases as classified by CAZy. These genes are listed in Table 2 and 23 of them fall into four broad functional categories: lipases/esterases, surface-interacting proteins, enzymes of carbon and nitrogen metabolism, and transporters. Lipases and esterases. Seven putative lipases or esterases not classified as CEs by CAZy were strongly induced after the transfer to straw, and 6 of these were repressed by addition of glucose (Table 2). The esterase EstA (TID_50877) was shown to be regulated by XlnR, and we identified it by SDS-PAGE and tandem MS as being one of the major proteins present in the supernatant after 24 hours in straw, while the ferulic acid esterase (FaeA) has been shown to have activity against wheat arabinoxylan. Whilst the others have not previously been associated with polysaccharide degradation, their co-expression alongside the well-characterised estA and faeA raises the possibility that these enzymes may be involved in the saccharification of wheat straw lignocellulose. Two genes encoding hydrophobin family proteins and one hydrophobic surface binding protein (HsbA) were strongly induced by the switch from glucose to straw (Table 2). The gene hyp1 is not repressed by the re-addition of glucose to the culture, but hfbD and hsbA were strongly repressed; their transcriptional profile is therefore similar to that of many of the genes of the CAZy group.
Hydrophobins are amphipathic, surface-active proteins produced by filamentous fungi, with numerous biological functions, often relating to mycelial interactions with solid surfaces. RolA (homologous to Hyp1) and HsbA of Aspergillus oryzae have been shown to associate with the synthetic polyester polybutylene succinate-co-adipate and promote its degradation through the recruitment of a specific polyesterase. It is striking, therefore, that A. niger genes encoding proteins bearing homology to each are highly induced by the presence of straw, suggesting that these proteins could have a role in recruiting degradative enzymes to the straw surface. The last gene in this group, TID_54125, encodes a G-protein coupled receptor homologous to Pth11p from the rice pathogen Magnaporthe grisea. Pth11p is required for signalling in appressorium formation through the sensing of the plant host surface via both hydrophobicity and the presence of cutin monomers (which are also a component of the wheat straw cuticle). The induction of the A. niger homologue of pth11 in the presence of straw suggests that a similar strategy could perhaps be employed by A. niger in the sensing of solid substrate surfaces. The increased expression of the xylose reductase xyrA and other genes of the xylose utilisation pathway, such as xylitol dehydrogenase (TID_203198) and D-xylulokinase (TID_209771) (both induced approximately 10-fold, Table S2), along with decreased expression of glycolytic pathway enzymes (e.g. phosphofructokinase, TID_54093, repressed 5-fold), suggests that 24 hours after the transfer to straw, pentose sugars are the prominent carbon source available. These results, and the fact that A. niger is an efficient xylan-degrading organism, support the first of the hypotheses discussed earlier, that the reason xylose is the predominant free sugar observed after 24 hours incubation of wheat straw with A.
niger (Figure 1A) is that the hemicellulose fraction of the straw is the first to be degraded, rather than being due to a greater rate of glucose uptake by the fungus.

The degradative response is sequential and triggered by carbon starvation

To establish the order of induction of the genes that were highly expressed by 24 h, a time-course experiment was performed. RNA was extracted 0.5, 1, 2, 3, 6, 9 and 12 h after the switch from glucose to straw, and the expression of genes of interest was measured using RT-PCR (Figure S1). The results show the cellobiohydrolase cbhB to be induced after 6 hours of exposure to straw, whilst cbhA and GH61a do not appear to be induced until the 9 hour time point. These timings were verified, and shown to be statistically significant, using quantitative RT-PCR (Figure S2). To investigate the regulatory basis for the differential response of these genes, expression under the glucose and straw conditions was examined in strains deleted for either the gene encoding the xylanolytic activator XlnR, which mediates xylose induction of some GHs and esterases, or CreA, which mediates wide-domain carbon catabolite repression. All three of the glycosyl hydrolases showed a statistically significant (p-value of <0.01 in an equal-variance, one-tailed t-test) dependence upon XlnR for maximal induction at 24 h in straw (Figure S3), which is typical of hydrolases in A. niger. Interestingly, a smaller-scale induction of cbhB in the straw media was still observed in the ΔxlnR strain. This induction is mediated by alleviation of CreA repression, since in the ΔcreA strain cbhB was expressed at a significantly higher level than in the wild-type during growth on glucose, as assessed by qRT-PCR, whilst cbhA and GH61a were not (Figure S4). This observation led to the hypothesis that the earlier induction seen for cbhB is triggered not by the presence of an inducing sugar such as xylose, but instead by the absence of an available carbon source.
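The equal-variance, one-tailed t-test used for such comparisons can be sketched as below. This is an illustration only: the expression values are invented, and the hard-coded critical value is the standard t-table entry for 4 degrees of freedom (triplicates in each group) at p = 0.01.

```python
from math import sqrt
from statistics import mean, variance

def pooled_t(sample_a, sample_b):
    """Student's t statistic (pooled, equal-variance) and its degrees of freedom."""
    na, nb = len(sample_a), len(sample_b)
    sp2 = ((na - 1) * variance(sample_a)
           + (nb - 1) * variance(sample_b)) / (na + nb - 2)
    t = (mean(sample_a) - mean(sample_b)) / sqrt(sp2 * (1 / na + 1 / nb))
    return t, na + nb - 2

# Invented relative-expression triplicates for one gene in two conditions.
straw = [9.1, 8.7, 9.4]
glucose = [1.0, 1.2, 0.9]
t, df = pooled_t(straw, glucose)
# One-tailed critical value for df = 4 at p = 0.01 (from t tables): 3.747.
significant = t > 3.747
```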
Transferring mycelia from glucose (48 h) to media completely devoid of a carbon source for 24 h had a significant inductive effect upon cbhB expression, whereas it did not induce cbhA or GH61a (Figure S5). As only a very low concentration of free xylose is present in the straw media until several hours after incubation with A. niger (Figure 1A), this might explain why the induction of cbhB occurs earlier than the induction of cbhA or GH61a. CbhA and CbhB are the only two GH7-family cellobiohydrolases in A. niger. CbhB contains a family 1 Carbohydrate-Binding Module (CBM), whilst CbhA has no CBM. CBMs aid the attachment of the enzymes containing them to the complex polysaccharide surfaces of intact cell walls. It could be speculated that the CBM-containing enzyme is induced early because it plays an important role in targeting the relatively intact plant cell wall, whilst the enzyme lacking the CBM is induced later, once soluble oligosaccharides have been released. Six other genes that were induced after 24 h in straw also encoded proteins containing CBM domains. Two of these genes, TID_205580 (encoding a member of the GH family 5, containing a CBM 1) and abfB (an arabinofuranosidase containing a CBM 42), showed the same pattern of expression and regulation as cbhB; i.e. de-repression in a ΔcreA strain, induction 6 h after transfer to straw, and induction by carbon source starvation (Figures S3, S5 and S6). This CreA de-repression-dependent/XlnR activation-independent mechanism of induction may allow a subset of hydrolases that are targeted more specifically to the intact plant cell wall to play a “scouting” role under carbon starvation conditions, testing for available complex polysaccharides and liberating small quantities of sugars and oligosaccharides which trigger the subsequent larger-scale induction of the genes themselves and of the remaining majority of hydrolases by XlnR.
Such a system would allow the organism to probe the surrounding environment for complex substrates when undergoing carbon starvation, without over-committing resources, until the presence of a degradable substrate is sensed from the release of inducing sugars. The “scouting” enzymes themselves, as well as the majority of hydrolases, which are completely dependent upon an active form of XlnR for expression, are then induced to extremely high levels (around one-fifth of the total mRNA inside the cell 24 hours after the transfer to straw, Figure 2A). None of the genes identified here as being induced early are predicted to encode enzymatic activities capable of releasing xylose from wheat straw, which would lead to XlnR activation. However, our preliminary data (not shown) indicate that the genes shown here are only part of a larger subset, which may encode such activity. Defining the full subset of hydrolase genes that are induced concurrently with cbhB is the subject of ongoing study. Of the non-CAZy genes that are responsive to straw, both the putative lipase gene (TID_173684) and the esterase estA (TID_51662) are induced by the 6 h time point (Figure S1); they therefore form part of the early response, along with cbhB, abfB and the GH5 family member TID_205580. The lipase-encoding gene shares the same CreA-dependent expression pattern as cbhB (Figure S7B). The genes encoding the Pth11 homologue and HsbA are also induced by a lack of carbon source, but their induction shows dependence upon neither XlnR nor CreA (data not shown). The extremely early expression of the homologue of pth11, after 3 h, may indicate a role in the early stages of the signalling pathway and is the subject of current investigation. The concurrent timing of expression of hsbA with the majority of hydrolases, at 9 h, raises the possibility of a role in recruiting hydrolases to the surface of the straw, i.e.
analogous to the recruitment of degradative enzymes to the surface of certain plastics. The gene encoding the hydrophobin HfbD was induced later, at 12 h. The possible functionality of surface-binding proteins in the recognition of surfaces, and the possibility that those proteins recruit hydrolytic enzymes to the surface, is intriguing and may provide new avenues to enhance the action of hydrolases in the saccharification of biomass in the generation of biofuels and the synthesis of other chemicals within a bio-based economy. The mechanism by which the presence of a complex substrate, which cannot be imported into the cell, is signalled and triggers expression of the requisite hydrolytic enzyme mix is widely debated and may vary with fungal species. The predominant induction model in fungal systems proposes that general basal expression of small quantities of hydrolase begins the degradation of complex polysaccharides, thereby producing inducing compounds that elicit the full transcriptional response. It has alternatively been suggested that some relevant genes are induced by carbon source depletion, and that the derived enzymes might play a foraging role under starvation conditions. Temporally differential expression of complex polysaccharide-degrading enzymes, and the presence of “scouting” enzymes that do not require the presence of a substrate-derived inducer for expression but work to release inducing molecules and trigger a larger degradative response, has also been observed for the A. niger response to pectin and starch, perhaps indicating a conserved response pattern to insoluble substrates. Our data support this model for lignocellulose and we take it further by proposing a succession of events that not only includes the timing of expression of genes encoding hydrolases but also involves sensing proteins and the possibility that surface-binding proteins serve as a scaffold for recruitment of hydrolases.
In summary, the overall strategy appears to be an induction of a specific, small-scale, sensory response by carbon source starvation, mediated at least partially by alleviation of CreA-dependent catabolite repression. This leads, in the event of successful liberation of free sugars from the wheat straw substrate, to the full-scale degradative response, which is activated by XlnR (Figure 3). The sequence of events is illustrated and key events are numbered. The upper panel represents the transcriptional events in A. niger upon exposure (0–6 h) to straw, represented by filled ovals. Lack of an easily available carbon source leads to the alleviation of CreA repression (represented by the arrow above CreA) and induction of a subset of starvation-induced genes, represented by cbhB. At 6–9 h exposure to straw (middle panel), the expressed hydrolases and other enzymes (examples named in the Figure) act upon the wheat straw, releasing small quantities of inducing sugars such as xylose (filled pentamer) as well as glucose (filled hexamer). Transporters for the sugars are induced (indicated by the trans-membrane cylinders and un-filled large arrow). By 9 hours (lower panel) the presence of xylose has caused activation of XlnR and, thereby, large-scale expression of hydrolase genes. Also induced by 9 hours, in an XlnR-independent manner, is the hydrophobic surface binding protein HsbA. The hydrophobin HfbD is induced by 12 hours. A physical association of hydrophobic binding proteins with straw and degradative enzymes is hypothesised and represented. Note that the functionality of XlnR and CreA is indicated by attachment to recognition sequences in target promoters and is meant only to indicate the functional control of those promoters by the transcription factors. Modifications to those transcription factors (e.g. phosphorylation of XlnR in A. oryzae) may occur without necessarily implying that their location changes.
Natural antisense transcripts (NATs) are RNAs transcribed from a region of the genome that lies antisense (AS) to a gene. They have been found in a number of organisms, including several fungi, and play various regulatory roles. The number of reads that fell upon the AS strand was counted for each gene in our study and AS RPKM values were calculated in each condition. AS reads accounted for approximately 2% of total reads under the conditions tested (2.41, 1.94 and 2.04% in Glucose 48 h, Straw 24 h and Straw+Glucose 5 h, respectively). Although the vast majority of genes had just a few associated AS transcripts, 521 genes had an AS RPKM of greater than 1 and up to about 120 (Figure S6 and Table S4). In order to confirm that the NATs observed were biologically significant, and not simply an artefact of RNA-sequencing, a specific example, TID_53176, encoding a predicted membrane protein belonging to the GPR1/FUN34/YaaH family, which is required for acetate uptake in A. nidulans, was chosen for further analysis. The TID_53176 sense transcript is expressed in the straw media, whilst the AS transcript is expressed in the presence of glucose (Figure 4A). (A) Alignment of RNA-seq reads to the TID_53176 genome region under each condition. Reads represented in blue are antisense, those in red are sense. The Figure was constructed using the Integrative Genomics Viewer. (B) Oligo(dT)-primed RT-PCR using TID_53176-specific primers. The expected band size from the spliced sense transcript is 411 bp and the size of the non-spliced antisense transcript is 524 bp. The red line under the gene model in panel A indicates the amplified region. (C) Strand-specific RT-PCR. One of the standard PCR primers, with an added sequence tag (Table S5), was used to synthesise cDNA from one strand only and then the PCR step was performed by using the tagged sequencing primer together with the opposing gene-specific primer.
The larger band is only seen in the antisense-specific reaction, confirming it does represent an antisense transcript. The smaller band is the only band present in the sense-specific reaction. The AS coverage level is high in both glucose conditions, and extends over the full length of the predicted gene including the two introns and extends both upstream and downstream, but does not overlap any neighbouring genes (Figure 4A). The sense coverage, seen in the straw condition, is shorter in length and there is almost zero coverage of the introns, indicating that the vast majority of sense transcripts are fully spliced. Figure 4B shows that RT-PCR, using primers upstream of the first intron and downstream of the second, can distinguish between the larger AS product and the shorter, fully spliced, sense transcript. Since oligo(dT) was used as the primer for cDNA synthesis, the AS transcript must be polyadenylated. To verify the strandedness of the two products, strand-specific RT-PCR was performed using a tagged primer approach and the results confirmed that all of the larger product is generated from antisense transcripts, whilst the smaller product is the only band seen in a sense-specific reaction (Figure 4C). A trace amount of a smaller antisense product can be seen in the antisense-specific assay under straw 24 h conditions. This may represent a true RNA intermediate, or it could be due to the high level of sense transcript self-priming and slight carryover of primer from the cDNA synthesis priming its amplification. To identify AS transcripts that responded to the change in carbon source we calculated the ratio of antisense∶sense expression under Glucose 48 h and Straw 24 h conditions for the 521 genes with an AS RPKM of >1 (Table S4). Genes where sense transcription is induced on straw but AS predominates on glucose, include examples of transporters and permeases, CAZy enzymes and the putative lipase TID_173684. 
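The antisense∶sense ratio comparison used above to flag AS/S switches between conditions can be sketched as follows. This is a hedged illustration: the pseudocount (added to avoid division by zero for genes with no sense reads) is an assumption of this sketch, and all numeric values are invented rather than taken from Table S4.

```python
# Sketch of the antisense:sense comparison used to flag AS/S ratio switches
# between conditions; pseudocount and values are illustrative assumptions.
def as_ratio(antisense_rpkm, sense_rpkm, pseudocount=0.1):
    """Antisense:sense expression ratio with a small pseudocount."""
    return (antisense_rpkm + pseudocount) / (sense_rpkm + pseudocount)

# A gene resembling the TID_173684 pattern: AS-dominant on glucose,
# strongly sense-induced on straw.
glucose_ratio = as_ratio(1.6, 1.0)    # AS predominates (> 1)
straw_ratio = as_ratio(0.5, 1700.0)   # sense predominates (<< 1)
switched = glucose_ratio / straw_ratio > 100
```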
This putative lipase TID_173684, one of the most highly induced genes upon exposure to straw, is also one of the genes showing the most marked antisense∶sense ratio switch (Table S4). Under Glucose 48 h conditions there is significant expression of both sense and AS transcripts, with a 60% greater level of AS (RPKMs of 1 and 1.6, respectively). After the switch to straw, at 24 h there is a large induction of the sense transcript (∼1700-fold) and AS transcription is cut to less than a third of the initial level (Figure S7A). Standard and strand-specific RT-PCR reactions under the same conditions give a similar pattern of bands to that seen above for TID_53176 (Figure 4). Interestingly, in the ΔcreA strain sense transcription is seen in both glucose and straw conditions, suggesting that the AS/S ratio switch is regulated either directly or indirectly by CreA. This was confirmed by strand-specific RT-PCR (Figure S7B). The timeline of induction experiment detailed earlier (Figure S1) shows that the AS/S switch in the expression of the putative lipase can be seen to occur between 3 and 6 h after the transfer to straw, which is concurrent with the expression of the carbon starvation-induced subset of genes (of which the lipase gene is part). This suggests a possible relationship between carbon source-responsive regulation by CreA and antisense transcription, providing an interesting area for further study.

Wheat straw analysis

The wheat straw (Cordiale variety) was milled using a Laboratory Mill (Laboratory Mill 3600, Perten, Sweden) and passed through a sieve with a mesh size of 700 µm for size reduction prior to ball milling. 5 g of pre-milled wheat straw was ball-milled in 80 mL stainless steel grinding bowls with 10-mm-diameter steel balls in a Planetary Mill (Pulverisette 5 classic line, Fritsch, Germany), at 400 rpm for a grinding time of 20 min, resulting in an average particle size of ∼75 µm.
The total sugar in processed ball-milled wheat straw was quantified in the hydrolysate after acid hydrolysis. 30 mg of dried wheat straw was weighed and subjected to a two-stage acid hydrolysis, initially with 12 M sulphuric acid for 1 hour at 37°C followed by 1 M sulphuric acid for 2 hours at 100°C. The monosaccharide analysis was performed on the supernatant. Sugar monomers in culture supernatants, collected at appropriate time points and separated from mycelia and insoluble straw by filtration through Miracloth (CalBioChem), and in fully acid-hydrolysed residues, were determined by high-performance anion exchange chromatography with pulsed amperometric detection (HPAEC-PAD) (Dionex, UK) using a CarboPac PA20 column with a 50 mM NaOH isocratic system at a working flow rate of 0.5 mL/min at 30°C. Glucose, xylose, arabinose and galactose were used as standards with mannitol as an internal standard. The acetyl bromide method was performed to quantify lignin in wheat straw samples. 100 mg of dried sample was digested with 4 mL of 25% v/v acetyl bromide in glacial acetic acid (HAc) in a Teflon-capped tube at 50°C for 2 h with occasional mixing. After cooling, the digested sample volume was made up to 16 mL with HAc. After centrifugation at 3000 g for 15 min, 0.5 mL of the supernatant was mixed with 2.5 mL of HAc and 1.5 mL of 0.3 M NaOH. 0.5 mL of 0.5 M hydroxylamine hydrochloride solution was then added and the volume was made up to 10 mL with HAc. An analytical sample of low-sulphate lignin (Sigma) was used as the standard for quantitative analysis by the acetyl bromide method. 10 mg of the lignin was dissolved in 5 mL of dioxane and 0.2, 0.3, 0.4, 0.5 and 0.6 mL aliquots were added to Teflon-capped tubes to determine a standard curve. A reagent blank was also included. 0.5 mL of 25% acetyl bromide in glacial acetic acid (HAcBr) was added to each tube and incubated at 50°C for 30 min.
The samples were cooled and 2.5 mL of HAc, 1.5 mL of 0.3 M NaOH, and 0.5 mL of 0.5 M hydroxylamine hydrochloride solution were added. The final volume was made up to 10 mL with HAc. Optical density was measured by scanning from 250 to 400 nm in a UV-Visible spectrophotometer (Varian, UK). The concentration of acetyl bromide-soluble lignin in the samples was determined from the standard curve measured at 280 nm. The percentage crystallinity in the milled straw samples was estimated using X-ray diffraction, according to a variation of the method of Ruland and Vonk. Cellulose is presumed to be the only crystallisable polymer in the cell-wall matrix, so non-cellulose components such as lignin and hemicelluloses also contribute to the non-crystalline proportion. Powder measurements were carried out on a Siemens D5000 system, with a copper Kα X-ray source, with scans performed from 5 to 50° 2θ. A linear baseline was applied between the scan limits and a diffractogram of a ball-milled (fully amorphous) straw sample was then subtracted from the experimental profiles.

Strains and growth conditions

The A. niger strains used were N402 and AB4.1 pyrG or as specified otherwise. Strains were maintained on potato dextrose agar (Oxoid). All AB4.1 cultures were supplemented with 10 mM uridine (Sigma). Cultures were incubated at 28°C until they had conidiated. Spores were harvested into 0.1% (v/v) Tween 20 (Sigma). The ΔxlnR and ΔcreA strains are A. niger AB4.1 pyrG containing a deletion of the respective open reading frame. Strains were constructed using the method developed by Scherer and Davis, based on recombination between a plasmid containing the flanking regions of the gene of interest and the chromosome. As a selection/counter-selection marker we used the gene coding for orotidine-5′-phosphate decarboxylase (pyrG, from Aspergillus oryzae). After transformation of A. niger, cells were selected for uridine prototrophy, confirming integration of the plasmid into the chromosome.
After purification of the transformants, release of the selective pressure for the integrated plasmid was achieved by propagating the clones twice on potato dextrose agar containing 10 mM uridine. Selection for cells that had excised the plasmid from the chromosome was done by plating them on media containing 4 mM 5-fluoro-orotic acid (Melford) and 1.6 mM uridine. Deletion of creA or xlnR was confirmed by PCR using internal and external oligonucleotide primers and by sequencing around the respective loci. Liquid batch cultures were inoculated with spores to a final concentration of 10⁶ spores/ml. A. niger was grown in 100 ml of minimal media [all l⁻¹: NaNO3, 6 g; KCl, 0.52 g; MgSO4.7H2O, 0.52 g; KH2PO4, 1.52 g; Na2B4O7.10H2O, 0.008 mg; CuSO4.5H2O, 0.16 mg; FePO4.H2O, 0.16 mg; MnSO4.4H2O, 0.16 mg; NaMoO4.2H2O, 0.16 mg; ZnSO4, 1.6 mg] with the appropriate carbon source added to a final concentration of 1% (w/v), in 250 ml Erlenmeyer flasks at 28°C, shaken at 150 rpm. The standard time-course consisted of growth for 48 h in 1% (w/v) glucose media, after which mycelia were removed by filtration through Miracloth (Merck), washed thoroughly with media devoid of carbon source, and transferred to fresh media containing 1% (w/v) ball-milled wheat straw as the sole carbon source. Incubation was continued for 24 h. Glucose was then added exogenously to a final concentration of 1% (w/v) and incubation continued for 5 hours. Figure 1 shows an image of a mycelial clump that was magnified using a Nikon SMZ1000 stereomicroscope; the picture was taken using a Nikon 4500 camera. Mycelia from each condition were frozen and ground under liquid nitrogen using a mortar and pestle. Total RNA was extracted from the ground material using the TRIzol reagent protocol (Invitrogen). An additional clean-up was performed using the RNeasy Mini Kit (Qiagen), following the manufacturer's RNA Clean-up protocol, including the additional on-column DNase digest.
SuperScript III Reverse Transcriptase (Invitrogen) was used to synthesise cDNA from total RNA according to the manufacturer's instructions, using oligo(dT) as the primer, or tagged gene-specific primers for strand-specific experiments. 0.5 µg of total RNA was used for each reverse transcription. PCR reactions were performed using Phusion (New England Biolabs) with 1 µl of cDNA in a 25 µl reaction. PCR conditions were 30 cycles of denaturation at 98°C for 30 s, annealing at 60°C for 30 s and extension at 72°C for 30 s. For PCR amplification of cDNA synthesised strand-specifically, a primer identical to the added tag sequence was paired with the opposing standard primer. In this way only cDNA synthesised from the tagged primer, and not non-specifically self-primed transcripts, was amplified. qPCR amplifications were carried out using the Applied Biosystems 7500 Fast Real-Time PCR system. The PCR reaction mixture (10 µl) contained 1 µl of cDNA, specific primer sets (175 nM final concentration), and FAST SYBR-Green Master Mix (Applied Biosystems). PCRs were carried out for 40 cycles: denaturation at 95°C for 15 seconds, annealing at 67°C for 30 seconds, and extension at 60°C for 60 seconds. All measurements were independently conducted 3 times on 2 separate biological isolates. The specificity of the primer sets used for qRT-PCR amplification was evaluated by melting curve analysis. The Standard Curve Method was used for quantification against a known concentration of genomic DNA (Li et al 2009). All primers used, and their sequences, are listed in Table S5. 10 µg of total RNA was depleted of ribosomal RNA using the RiboMinus Eukaryotic kit (Invitrogen). SOLiD whole transcriptome libraries were made as outlined in the SOLiD Whole Transcriptome kit protocol (Applied Biosystems). The Quant-iT HS dsDNA assay kit (Invitrogen) was used to measure the concentration of the libraries in order to pool equimolar amounts.
Pooled libraries were gel-purified to 200–300 bp using 2% size-select E-gels (Invitrogen). Emulsion PCR (0.5 pM final concentration of pooled libraries) and bead-based enrichment were carried out according to the SOLiD 4 Templated Bead Preparation guide. Sequencing was performed on a SOLiD 4 ABi sequencer according to the manufacturer's instructions to generate 50 bp reads in colour space. SOLiD 4 RNA-seq reads from each experimental sample were mapped to the two published genome sequences of A. niger (JGI A. niger version 3.0 unmasked assembly scaffolds and gene annotation, and the CADRE A. niger assembly) using the BioScope 1.3.1 Whole Transcriptome Pipeline (LifeTechnologies). This included the initial filtering of reads against a collection of published A. niger rRNA sequences prior to mapping (GenBank sequence IDs: 197115286, 300675560, 222105557, 312434471, 34304202, 241017633, 157267321, 213866863). BioScope provided the primary read alignment position of each read mapped against the complete genome sequence and exon-spanning junctions using available gene coordinate information. Read alignment results were recorded in BAM format for further downstream analysis. Read counts per gene were calculated for each sample with htseq-count (http://www-huber.embl.de/users/anders/HTSeq) using the BAM file and genome annotation as the input. Strand-specific RNA-seq reads, as generated by SOLiD, can be specified for when executing htseq-count, giving accurate read counts per gene. These counts were then used to calculate normalized expression values (RPKM) for each gene. Additionally, antisense transcription could be detected by comparing gene counts generated by htseq-count when opting to ignore or include strand-specificity in the calculations. Data were further visualised using the Integrative Genomics Viewer (IGV). Raw and processed data files have been submitted to the Gene Expression Omnibus under accession number GSE33852.
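The strand-aware comparison described above (counting reads per gene with and without requiring strand agreement, the difference estimating antisense transcription) can be sketched with greatly simplified interval logic. The toy tuples below are invented, not real alignments, and genes are reduced to single-exon intervals with full-containment overlap.

```python
# Simplified sketch of antisense detection via stranded vs unstranded counts.
def count_reads(reads, gene, stranded):
    """Count reads fully contained in the gene interval; optionally
    require the read strand to match the annotated gene strand."""
    g_start, g_end, g_strand = gene
    n = 0
    for r_start, r_end, r_strand in reads:
        if r_start >= g_start and r_end <= g_end:
            if not stranded or r_strand == g_strand:
                n += 1
    return n

# Invented example: a "+" strand gene with two sense and two antisense reads.
gene = (100, 500, "+")
reads = [(120, 170, "+"), (200, 250, "-"), (300, 350, "-"), (400, 450, "+")]
sense = count_reads(reads, gene, stranded=True)
antisense = count_reads(reads, gene, stranded=False) - sense
print(sense, antisense)  # 2 2
```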
Reads were mapped against an EST assembly of GenBank EST records for the wheat Triticum aestivum (taxon id 4565; 1,073,845 ESTs, March 2012) and a publicly available wheat genome. The number of reads that mapped was no higher in the wheat-grown A. niger cultures than it was in the glucose-grown cultures. This suggests that wheat transcript levels are insignificant, and that any reads mapping to the wheat genome are simply due to a basic level of homology between genomes.

Identification of secreted proteins

Proteins in culture supernatants from relevant time points were separated on 4–20% Tris-glycine Novex SDS-PAGE gels (Invitrogen). 15 µl Tris-glycine-SDS loading buffer was added to 12 µl supernatant plus 3 µl 1 M DTT. The sample mixture was heated to 100°C for 5 minutes before being loaded onto the gel, with running conditions of 100 V for 3 hours. The gel was subsequently silver stained. Bands resolved by size were excised from the gel, diced into cubes (∼1 mm³), placed into individual wells of a microtitre plate, processed (destained, reduced, alkylated) and digested with trypsin using standard procedures on a MassPREP station (Waters). Resulting peptides were delivered via nanoflow reversed-phase HPLC to a Q-ToF2 (Waters) for tandem MS analysis. An automated experiment (DDA = data-dependent acquisition) was run in which selected peptides automatically enter MS/MS for fragmentation. The data were searched against the Swissprot and NCBInr databases using the MS/MS Ions search on the MASCOT web site (http://www.matrixscience.com) with the variable modifications of carbamidomethylation of cysteine and oxidation of methionine. RT-PCR measurement of gene expression over time after the switch to growth on wheat straw. Results shown are representative of at least two biologically independent experiments. Time of induction varies for different genes, with two notably distinct clusters of induction at the 6 h and 9 h time points.
qRT-PCR analysis of glycoside hydrolase gene expression after 6 and 9 hours of exposure to straw. Expression was measured in the N402 strain. Results shown are representative of at least two biologically independent experiments. * indicates significant induction (a p-value of <0.01 in an equal-variance, one-tailed t-test) compared to the expression level measured at the Glucose 48 h time point. qRT-PCR analysis of glycoside hydrolase gene expression after 24 hours of exposure to straw in the ΔxlnR and parent strains. Results shown are triplicate measurements taken from each of at least two biologically independent experiments. qRT-PCR analysis of glycoside hydrolase gene expression in the ΔcreA and parent strains after 48 hours of growth on glucose. Results shown are triplicate measurements taken from each of at least two biologically independent experiments. * indicates a significant change in expression level (a p-value of <0.01 in an equal-variance, one-tailed t-test) in the mutant strain compared to the parent strain. qRT-PCR analysis of glycoside hydrolase gene expression in the N402 wild-type after the switch to media completely devoid of a carbon source. Results shown are representative of at least two biologically independent experiments. Fold-induction values for the cbhA and GH61 inductions were <3 and are therefore not visible on the scale of this figure. * indicates significant induction (a p-value of <0.01 in an equal-variance, one-tailed t-test) compared to the expression level measured at the Glucose 48 h time point. Distribution of genes as a function of their antisense RPKM. Antisense RPKM was calculated for each gene in Glucose 48 h (blue bars) and Straw 24 h (red bars). Approximately 5 percent of genes have a value of 1 or more RPKM. The number of genes is represented on a log scale. Characterisation of tfl1 transcripts from the sense and antisense directions and their regulation by CreA. A.
Antisense transcription through tfl1 in glucose growth conditions. RNA-sequencing reads aligned to the tfl1 locus in the Glucose 48 h and Straw 24 h conditions. The figure is adapted from the Integrative Genomics Viewer. Blue reads are sense, and red reads antisense, to tfl1. RPKM values for sense and antisense in each condition are given below the gene model. The red line below the gene model indicates the region amplified in the RT-PCR (B). B. Confirmation of the antisense transcript and its regulation by CreA. The RT-PCR primers amplify a region (indicated by the red line) spanning the first two introns of the gene (A). A non-spliced transcript will generate a product of 507 bp; a fully spliced transcript, a product of 365 bp; and a transcript with a single intron removed, a product of either 421 or 451 bp. Standard RT-PCR (which does not differentiate between sense and antisense transcripts) on the parent strain shows that, when grown on straw, the product size is as expected from a spliced transcript template. In contrast, when grown on glucose, a larger product is observed. Strand-specific RT-PCR confirms that the predominant band seen at Glucose 48 h in the conventional RT-PCR is from the antisense strand, whilst in straw the predominant band is sense to tfl1. In a ΔcreA strain, standard and strand-specific RT-PCR show that the products formed are from RNAs that are mainly sense to tfl1. Low levels of spliced and partially spliced intermediates of the antisense transcripts can also be observed. Results shown are representative of at least two biologically independent experiments. Genes annotated in the CBS 513.88 genome but not the ATCC 1015 genome. Read mapping scores and RPKM values for each biological replicate, the combined mapping scores, and statistical significance scores for all genes under the Glucose 48 h, Straw 24 h and Straw+Glucose 5 h conditions. Inter-sample correlation comparison.
R-squared values comparing the mapping results of each individual biological sample. Antisense and sense RPKM values and ratios under Glucose 48 h, Straw 24 h and Straw+Glucose 5 h conditions, for all genes with an antisense RPKM >1 in any condition. Primers used in this study. Conceived and designed the experiments: SD STP DBA. Performed the experiments: SD STP SG MK SM RI MC SL. Analyzed the data: SD STP SG MK MJB RI SL AA GAT DBA. Contributed reagents/materials/analysis tools: SG RI SL AA GAT. Wrote the paper: SD STP MK DBA. - 1. Sims RE, Mabee W, Saddler JN, Taylor M (2010) An overview of second generation biofuel technologies. Bioresour Technol 101: 1570–1580. doi: 10.1016/j.biortech.2009.11.046 - 2. Gusakov AV (2011) Alternatives to Trichoderma reesei in biofuel production. Trends Biotechnol 29: 419–425. doi: 10.1016/j.tibtech.2011.04.004 - 3. Stephanopoulos G (2007) Challenges in engineering microbes for biofuels production. Science 315: 801–804. doi: 10.1126/science.1139612 - 4. Weber C, Farwick A, Benisch F, Brat D, Dietz H, et al. (2010) Trends and challenges in the microbial production of lignocellulosic bioalcohol fuels. Appl Microbiol Biotechnol 87: 1303–1315. doi: 10.1007/s00253-010-2707-z - 5. Archer DB (2000) Filamentous fungi as microbial cell factories for food use. Curr Opin Biotechnol 11: 478–483. doi: 10.1016/s0958-1669(00)00129-4 - 6. Meyer V, Wu B, Ram AF (2011) Aspergillus as a multi-purpose cell factory: current status and perspectives. Biotechnol Lett 33: 469–476. doi: 10.1007/s10529-010-0473-8 - 7. van Hartingsveldt W, Mattern IE, van Zeijl CM, Pouwels PH, van den Hondel CA (1987) Development of a homologous transformation system for Aspergillus niger based on the pyrG gene. Mol Gen Genet 206: 71–75. doi: 10.1007/bf00326538 - 8. Hihlal E, Braumann I, van den Berg M, Kempken F (2011) Suitability of Vader for transposon-mediated mutagenesis in Aspergillus niger. Appl Environ Microbiol 77: 2332–2336.
doi: 10.1128/aem.02688-10 - 9. Meyer V, Wanka F, van Gent J, Arentshorst M, van den Hondel CA, et al. (2011) Fungal Gene Expression on Demand: an Inducible, Tunable, and Metabolism-Independent Expression System for Aspergillus niger. Appl Environ Microbiol 77: 2975–2983. doi: 10.1128/aem.02740-10 - 10. Pel HJ, de Winde JH, Archer DB, Dyer PS, Hofmann G, et al. (2007) Genome sequencing and analysis of the versatile cell factory Aspergillus niger CBS 513.88. Nat Biotechnol 25: 221–231. doi: 10.1038/nbt1282 - 11. Andersen MR, Salazar MP, Schaap PJ, van de Vondervoort PJI, Culley D, et al. (2011) Comparative genomics of citric-acid-producing Aspergillus niger ATCC 1015 versus enzyme-producing CBS 513.88. Genome Research 21: 885–897. doi: 10.1101/gr.112169.110 - 12. Cantarel BL, Coutinho PM, Rancurel C, Bernard T, Lombard V, et al. (2009) The Carbohydrate-Active EnZymes database (CAZy): an expert resource for Glycogenomics. Nucleic Acids Res 37: D233–238. doi: 10.1093/nar/gkn663 - 13. Guillemette T, van Peij NN, Goosen T, Lanthaler K, Robson GD, et al. (2007) Genomic analysis of the secretion stress response in the enzyme-producing cell factory Aspergillus niger. BMC Genomics 8: 158. doi: 10.1186/1471-2164-8-158 - 14. Andersen MR, Vongsangnak W, Panagiotou G, Salazar MP, Lehmann L, et al. (2008) A trispecies Aspergillus microarray: comparative transcriptomics of three Aspergillus species. Proc Natl Acad Sci U S A 105: 4387–4392. doi: 10.1073/pnas.0709964105 - 15. Salazar M, Vongsangnak W, Panagiotou G, Andersen MR, Nielsen J (2009) Uncovering transcriptional regulation of glycerol metabolism in Aspergilli through genome-wide gene expression data analysis. Mol Genet Genomics 282: 571–586. doi: 10.1007/s00438-009-0486-y - 16. Jorgensen TR, Goosen T, Hondel CA, Ram AF, Iversen JJ (2009) Transcriptomic comparison of Aspergillus niger growing on two different sugars reveals coordinated regulation of the secretory pathway. BMC Genomics 10: 44.
doi: 10.1186/1471-2164-10-44 - 17. de Vries RP (2003) Regulation of Aspergillus genes encoding plant cell wall polysaccharide-degrading enzymes; relevance for industrial production. Appl Microbiol Biotechnol 61: 10–20. doi: 10.1007/s00253-002-1171-9 - 18. de Souza WR, de Gouvea PF, Savoldi M, Malavazi I, de Souza Bernardes LA, et al. (2011) Transcriptome analysis of Aspergillus niger grown on sugarcane bagasse. Biotechnol Biofuels 4: 40. doi: 10.1186/1754-6834-4-40 - 19. Le Crom S, Schackwitz W, Pennacchio L, Magnuson JK, Culley DE, et al. (2009) Tracking the roots of cellulase hyperproduction by the fungus Trichoderma reesei using massively parallel DNA sequencing. Proc Natl Acad Sci U S A 106: 16151–16156. doi: 10.1073/pnas.0905848106 - 20. Coutinho PM, Andersen MR, Kolenova K, vanKuyk PA, Benoit I, et al. (2009) Post-genomic insights into the plant polysaccharide degradation potential of Aspergillus nidulans and comparison to Aspergillus niger and Aspergillus oryzae. Fungal Genet Biol 46 Suppl 1: S161–S169. doi: 10.1016/j.fgb.2008.07.020 - 21. Tian C, Beeson WT, Iavarone AT, Sun J, Marletta MA, et al. (2009) Systems analysis of plant cell wall degradation by the model filamentous fungus Neurospora crassa. Proc Natl Acad Sci U S A 106: 22157–22162. doi: 10.1073/pnas.0906810106 - 22. Brander M, Hutchison C, Sherrington C, Ballinger A, Beswick C, et al. (2009) Methodology and Evidence Base on the Indirect Greenhouse Gas Effects of Using Wastes, Residues, and By-products for Biofuels and Bioenergy: Report to the Renewable Fuels Agency and the Department for Energy and Climate Change. Ecometrica, Eunomia, Imperial College London. - 23. Prathumpai W, McIntyre M, Nielsen J (2004) The effect of CreA in glucose and xylose catabolism in Aspergillus nidulans. Appl Microbiol Biotechnol 63: 748–753. doi: 10.1007/s00253-003-1409-1 - 24.
Marioni JC, Mason CE, Mane SM, Stephens M, Gilad Y (2008) RNA-seq: an assessment of technical reproducibility and comparison with gene expression arrays. Genome Res 18: 1509–1517. doi: 10.1101/gr.079558.108 - 25. Bloom JS, Khan Z, Kruglyak L, Singh M, Caudy AA (2009) Measuring differential gene expression by short read sequencing: quantitative comparison to 2-channel gene expression microarrays. BMC Genomics 10: 221. doi: 10.1186/1471-2164-10-221 - 26. Wang L, Feng Z, Wang X, Zhang X (2010) DEGseq: an R package for identifying differentially expressed genes from RNA-seq data. Bioinformatics 26: 136–138. doi: 10.1093/bioinformatics/btp612 - 27. Harris PV, Welner D, McFarland KC, Re E, Navarro Poulsen JC, et al. (2010) Stimulation of lignocellulosic biomass hydrolysis by proteins of glycoside hydrolase family 61: structure and function of a large, enigmatic family. Biochemistry 49: 3305–3316. doi: 10.1021/bi100009p - 28. Quinlan RJ, Sweeney MD, Lo Leggio L, Otten H, Poulsen JC, et al. (2011) Insights into the oxidative degradation of cellulose by a copper metalloenzyme that exploits biomass components. Proc Natl Acad Sci U S A 108: 15079–15084. doi: 10.1073/pnas.1105776108 - 29. Beeson WT, Phillips CM, Cate JH, Marletta MA (2012) Oxidative cleavage of cellulose by fungal copper-dependent polysaccharide monooxygenases. J Am Chem Soc 134: 890–892. doi: 10.1021/ja210657t - 30. Bourne Y, Hasper AA, Chahinian H, Juin M, De Graaff LH, et al. (2004) Aspergillus niger protein EstA defines a new class of fungal esterases within the alpha/beta hydrolase fold superfamily of proteins. Structure 12: 677–687. doi: 10.1016/j.str.2004.03.005 - 31. de Vries RP, Michelsen B, Poulsen CH, Kroon PA, van den Heuvel RH, et al. (1997) The faeA genes from Aspergillus niger and Aspergillus tubingensis encode ferulic acid esterases involved in degradation of complex cell wall polysaccharides. Appl Environ Microbiol 63: 4638–4644. - 32.
Linder MB, Szilvay GR, Nakari-Setälä T, Penttilä ME (2005) Hydrophobins: the protein-amphiphiles of filamentous fungi. FEMS Microbiol Rev 29: 877–896. doi: 10.1016/j.femsre.2005.01.004 - 33. Takahashi T, Maeda H, Yoneda S, Ohtaki S, Yamagata Y, et al. (2005) The fungal hydrophobin RolA recruits polyesterase and laterally moves on hydrophobic surfaces. Mol Microbiol 57: 1780–1796. doi: 10.1111/j.1365-2958.2005.04803.x - 34. Ohtaki S, Maeda H, Takahashi T, Yamagata Y, Hasegawa F, et al. (2006) Novel hydrophobic surface binding protein, HsbA, produced by Aspergillus oryzae. Appl Environ Microbiol 72: 2407–2413. doi: 10.1128/aem.72.4.2407-2413.2006 - 35. DeZwaan TM, Carroll AM, Valent B, Sweigard JA (1999) Magnaporthe grisea pth11p is a novel plasma membrane protein that mediates appressorium differentiation in response to inductive substrate cues. Plant Cell 11: 2013–2030. doi: 10.2307/3871094 - 36. Stelte W, Clemons C, Holm J, Ahrenfeldt J, Henriksen U, et al. (2012) Fuel pellets from wheat straw: The effect of lignin glass transition and surface waxes on pelletizing properties. Bioenerg. Res. (2012) 5: 450–458. doi: 10.1007/s12155-011-9169-8 - 37. King BC, Waxman KD, Nenni NV, Walker LP, Bergstrom GC, et al. (2011) Arsenal of plant cell wall degrading enzymes reflects host preference among plant pathogenic fungi. Biotechnol Biofuels 4: 4. doi: 10.1186/1754-6834-4-4 - 38. Kobayashi T, Kato M (2012) Transcriptional Regulation in Aspergillus. In: Machida M, Gomi K, editors. Aspergillus Molecular Biology and Genomics. UK: Caister Academic Press. 85–114. - 39. de Vries RP, Visser J, de Graaff LH (1999) CreA modulates the XlnR-induced expression on xylose of Aspergillus niger genes involved in xylan degradation. Res Microbiol 150: 281–285. doi: 10.1016/s0923-2508(99)80053-9 - 40. van Peij NN, Gielkens MM, de Vries RP, Visser J, de Graaff LH (1998) The transcriptional activator XlnR regulates both xylanolytic and endoglucanase gene expression in Aspergillus niger. 
Appl Environ Microbiol 64: 3615–3619. - 41. Gielkens MM, Dekkers E, Visser J, de Graaff LH (1999) Two cellobiohydrolase-encoding genes from Aspergillus niger require D-xylose and the xylanolytic transcriptional activator XlnR for their expression. Appl Environ Microbiol 65: 4340–4345. - 42. Boraston AB, Bolam DN, Gilbert HJ, Davies GJ (2004) Carbohydrate-binding modules: fine-tuning polysaccharide recognition. Biochem J 382: 769–781. doi: 10.1042/bj20040892 - 43. Herve C, Rogowski A, Blake AW, Marcus SE, Gilbert HJ, et al. (2010) Carbohydrate-binding modules promote the enzymatic deconstruction of intact plant cell walls by targeting and proximity effects. Proc Natl Acad Sci U S A 107: 15293–15298. doi: 10.1073/pnas.1005732107 - 44. Carle-Urioste JC, Escobar-Vera J, El-Gogary S, Henrique-Silva F, Torigoi E, et al. (1997) Cellulase induction in Trichoderma reesei by cellulose requires its own basal expression. J Biol Chem 272: 10169–10174. doi: 10.1074/jbc.272.15.10169 - 45. Foreman PK, Brown D, Dankmeyer L, Dean R, Diener S, et al. (2003) Transcriptional regulation of biomass-degrading enzymes in the filamentous fungus Trichoderma reesei. J Biol Chem 278: 31988–31997. doi: 10.1074/jbc.m304750200 - 46. Parenicova L, Benen JA, Kester HC, Visser J (2000) pgaA and pgaB encode two constitutively expressed endopolygalacturonases of Aspergillus niger. Biochem J 345: 637–644. doi: 10.1042/0264-6021:3450637 - 47. de Vries RP, Jansen J, Aguilar G, Parenicova L, Joosten V, et al. (2002) Expression profiling of pectinolytic genes from Aspergillus niger. FEBS Lett 530: 41–47. doi: 10.1016/s0014-5793(02)03391-4 - 48. Yuan XL, van der Kaaij RM, van den Hondel CA, Punt PJ, van der Maarel MJ, et al. (2008) Aspergillus niger genome-wide analysis reveals a large number of novel alpha-glucan acting enzymes with unexpected expression profiles. Mol Genet Genomics 279: 545–561. doi: 10.1007/s00438-008-0332-7 - 49. Gowda M, Venu RC, Raghupathy MB, Nobuta K, Li H, et al. 
(2006) Deep and comparative analysis of the mycelium and appressorium transcriptomes of Magnaporthe grisea using MPSS, RL-SAGE, and oligoarray methods. BMC Genomics 7: 310. - 50. Ni T, Tu K, Wang Z, Song S, Wu H, et al. (2010) The prevalence and regulation of antisense transcripts in Schizosaccharomyces pombe. PLoS One 5: e15271. doi: 10.1371/journal.pone.0015271 - 51. Ohm RA, de Jong JF, Lugones LG, Aerts A, Kothe E, et al. (2010) Genome sequence of the model mushroom Schizophyllum commune. Nat Biotechnol 28: 957–963. doi: 10.1038/nbt.1643 - 52. Lapidot M, Pilpel Y (2006) Genome-wide natural antisense transcription: coupling its regulation to its different regulatory mechanisms. EMBO Rep 7: 1216–1222. doi: 10.1038/sj.embor.7400857 - 53. Robellet X, Flipphi M, Pegot S, Maccabe AP, Velot C (2008) AcpA, a member of the GPR1/FUN34/YaaH membrane protein family, is essential for acetate permease activity in the hyphal fungus Aspergillus nidulans. Biochem J 412: 485–493. doi: 10.1042/bj20080124 - 54. Fukushima RS, Hatfield RD (2001) Extraction and isolation of lignin for utilization as a standard to determine lignin concentration using the acetyl bromide spectrophotometric method. J Agric Food Chem 49: 3133–3139. doi: 10.1021/jf010449r - 55. Hatfield R, Fukushima RS (2005) Can lignin be accurately measured? Crop Sci 45: 832–839. doi: 10.2135/cropsci2004.0238 - 56. Maier G, Zipper P, Stubičar M, Schurz J (2005) Amorphization of different cellulose samples by ballmilling. Cellulose Chem Technol 39: 167–177. - 57. Bos CJ, Debets AJ, Swart K, Huybers A, Kobus G, et al. (1988) Genetic analysis and the construction of master strains for assignment of genes to six linkage groups in Aspergillus niger. Curr Genet 14: 437–443. doi: 10.1007/bf00521266 - 58. Scherer S, Davis RW (1979) Replacement of chromosome segments with altered DNA sequences constructed in vitro. Proc Natl Acad Sci U S A 76: 4951–4955. doi: 10.1073/pnas.76.10.4951 - 59. 
Boeke JD, LaCroute F, Fink GR (1984) A positive selection for mutants lacking orotidine-5′-phosphate decarboxylase activity in yeast: 5-fluoro-orotic acid resistance. Mol Gen Genet 197: 345–346. doi: 10.1007/bf00330984 - 60. Benson DA, Karsch-Mizrachi I, Lipman DJ, Ostell J, Sayers EW (2011) GenBank. Nucleic Acids Res 39: D32–37. doi: 10.1093/nar/gkq1079 - 61. Mortazavi A, Williams BA, McCue K, Schaeffer L, Wold B (2008) Mapping and quantifying mammalian transcriptomes by RNA-Seq. Nat Methods 5: 621–628. doi: 10.1038/nmeth.1226 - 62. Robinson JT, Thorvaldsdóttir H, Winckler W, Guttman M, Lander ES, et al. (2011) Integrative genomics viewer. Nat Biotechnol 29: 24–26. doi: 10.1038/nbt.1754 - 63. Lai K, Berkman PJ, Lorenc MT, Duran C, Smits L, et al. (2012) WheatGenome.info: an integrated database and portal for wheat genome information. Plant Cell Physiol 53: e2. doi: 10.1093/pcp/pcr141 - 64. Yan JX, Wait R, Berkelman T, Harry RA, Westbrook JA, et al. (2000) A modified silver staining protocol for visualization of proteins compatible with matrix-assisted laser desorption/ionization and electrospray ionization-mass spectrometry. Electrophoresis 21: 3666–3672. doi: 10.1002/1522-2683(200011)21:17<3666::aid-elps3666>3.0.co;2-6 - 65. Noguchi Y, Tanaka H, Kanamaru K, Kato M, Kobayashi T (2011) Xylose triggers reversible phosphorylation of XlnR, the fungal transcriptional activator of xylanolytic and cellulolytic genes in Aspergillus oryzae. Biosci Biotechnol Biochem 75: 953–959. doi: 10.1271/bbb.100923 - 66. Mabey Gilsenan J, Cooley J, Bowyer P (2012) CADRE: the Central Aspergillus Data REpository 2012. Nucleic Acids Res 40: D660–666. doi: 10.1093/nar/gkr971
A group interview where 8-12 people respond to a specific concept or subject.
A method of gathering information and opening discussions as people express their ideas graphically on a map.
A systematic gathering of information about or related to a resource or community.
Documenting on film the special places and features of a community.
A research tool used to elicit answers and opinions from respondents.
A person who has something to gain or lose by the outcomes of a planning process or project.
Capturing what is important, interesting, and meaningful about a particular place then relaying it to others.
Measuring sea-level rise at Port Arthur

In 1841, a mark was cut into the rocks of the Isle of the Dead at Port Arthur, in an attempt to record the height of the sea in the area and to provide a benchmark for future studies of the movements of the Earth's crust relative to sea-level. It was made at the instigation of Captain Sir James Clark Ross, with the support of Thomas Lempriere (Deputy Assistant Commissary General at the Port Arthur penal settlement). Records indicate that Lempriere had studied the tidal levels in the area for several years before the mark was made. The mark at Port Arthur is among the earliest benchmarks in the world against which changes in sea-level can be scientifically measured. Until recently, a lack of actual data from the time prevented a proper understanding of the site. However, some of the original data has been found, and together with recent detailed monitoring at Port Arthur, a clearer understanding of sea-level changes in the area is now possible. It also aids the understanding of global sea-level changes, as there are very few good long-term benchmarks in the southern hemisphere.

Setting the site in stone

Thomas Lempriere was a keen observer of his surroundings, and kept records of a variety of features, including meteorological conditions such as the wind, rainfall and temperature. He also operated a tide gauge, though the exact details of where and how the gauge worked are not known. When Captain Ross visited Tasmania in 1841 during his voyages to Antarctica and the Southern Ocean, he visited Lempriere at Port Arthur. Ross was keen to have a sea-level benchmark installed in an area free from large wind and river flow influences, and Port Arthur appeared to be an ideal location. Lempriere's monitoring of the tides at the site added to its value, contributing several years of important data.
Together, they provide a baseline of great value to the contemporary science of sea-level and climate change. On 1 July 1841, Ross and Lempriere had a standard survey mark cut into a sandstone cliff on the Isle of the Dead. The mark is a horizontal line with a broad arrow touching and pointing down at the horizontal line. A plaque was installed above the mark, but unfortunately it has not survived.

1841 Sea-level benchmark, Isle of the Dead, Port Arthur. Source: Courtesy of Richard Coleman and John Hunter

Ross' journal of the event is confusing. It is not clear whether the mark was made at mean sea-level or at high water. Ross did make two more marks, on the Falkland Islands, during the same voyage, and these were both above mean sea-level. A paper published in 1889 by Captain Shortt recorded the wording of the plaque, including the time the mark was struck and the height of the sea given by Lempriere's tide gauge. By taking a measurement of the height of the sea, and estimating what the tides were when the mark was made, Shortt determined that the mark was made near high water. An article in The Australasian in 1892 also recorded the wording of the plaque. While almost the same as the version published in Shortt's paper, it differed in the time the mark was supposed to have been made, although both reports were consistent regarding the reading of Lempriere's tide gauge when the mark was struck. Taken on its own, the reported time of the striking would suggest that the mark was originally near mean sea-level. Significant work has gone into determining which of the accounts is correct, including a current major study by a collaboration of international scientists, as knowing whether the mark was originally placed near mean sea-level or high water is crucial to being able to compare the sea-levels of 1841 with today.
This study has concluded that it is almost certain that the benchmark was originally placed near high water. The conclusion is based on other estimates of sea-level made later in the 19th century, and on the fact that, if the mark had originally been placed near mean sea-level, then the Penitentiary building would have suffered flooding every few years (there is no record of this having happened).

The old and the new

It was thought that Lempriere's original data had been entirely lost, making it very difficult to understand how the sea-level in 1841 related to the sea-level of today. However, in late 1995, Dr David Pugh of the Southampton Oceanography Centre found Lempriere's data for 1841 and 1842 in the archives of the Royal Society in London. In mid-1998, Dr John Hunter, then of CSIRO's Division of Oceanography, found data for December 1839 and February 1840 to January 1841 in the Australian Archives at Rosny. In 1999, the universities of Canberra and Tasmania, and CSIRO, set up a sea-level monitoring station at Port Arthur. Referencing this data to the benchmark on the Isle of the Dead, and utilising Lempriere's original data, has enabled the first comprehensive study comparing the Port Arthur sea-level of 1841 with that of today. It is unfortunate that a continuous record of data has not been collected from Port Arthur, but having such an early benchmark to compare to is still of great importance.

Modern sea-level monitoring station, Port Arthur. Source: Courtesy of Richard Coleman and John Hunter

The Port Arthur site is particularly important because there are so few long-term sea-level records in Australia and the southern hemisphere. Records spanning many decades are considered necessary before long-term trends can actually be identified.
Some highly accurate monitoring stations have been established in Australia during the last 20 years, but it will be some time before these show clear long-term trends. There are also a number of Australian ports that have reasonably long series of data, but these still do not go back as far as 1841. Sydney, for example, has a good, reasonably continuous data series from about 1884. As the earliest sea-level benchmark in Australia, and possibly the southern hemisphere, the Port Arthur benchmark gives a great opportunity to assess long-term sea-level change in this region and to contribute to the understanding of global sea-level rise. Dramatic changes in sea-level are possible during the next hundred years as the release of greenhouse gases enhances the Earth's natural greenhouse effect. While some extra water may be added to the oceans by the melting of ice caps and glaciers, much of the initial sea-level rise will be caused by the oceans expanding as they warm up (thermal expansion). Global predictions are for the rate of change to increase, such that by 2100 sea-level will be between 9 and 88 cm above the 1990 global average sea-level. For the periods 1990 to 2025 and 1990 to 2050, the projected rises are 3 to 14 cm and 5 to 32 cm respectively (Houghton & Ding 2001). The Port Arthur benchmark is already showing a rise in sea-level of at least 13 cm since 1841, an average rate of 0.8-1.0 mm/year (Pugh, Coleman & Hunter 2002). Hence the site will remain an important benchmark for continued measurement of changes in the average level of the sea. This case study was compiled from a number of studies and articles: Pugh, Coleman & Hunter 2002; Bowden, Hunter & Pugh 1997; and Houghton & Ding 2001. John Hunter (University of Tasmania) in particular provided valuable advice.
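As a rough consistency check of the figures quoted above (the end year used here is an assumption; Pugh, Coleman & Hunter published in 2002):

```python
# At least 13 cm of rise between the 1841 benchmark and the modern survey.
rise_mm = 130
years = 2002 - 1841          # assumed end year of the comparison
avg_rate_mm_per_year = rise_mm / years
# ~0.81 mm/year, consistent with the reported 0.8-1.0 mm/year range
```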
Antioch University Los Angeles (AULA) opened its doors in 1972, but its roots trace back over a century before. AULA is part of Antioch University, a multi-campus university system that began with the founding of Antioch College in 1852 in Yellow Springs, Ohio. Horace Mann, Antioch College's first president, was a renowned educator, architect of the American public school system, social reformer, and abolitionist. His goal was to create an educational environment that was stimulating and unconventional in its approach to learning. As early as 1863, Antioch embraced a policy abolishing race as a criterion for acceptance, creating a legacy of social justice advocacy that continues today. Antioch College was also the first college in America to educate women on equal terms with men. In addition, Antioch was the first American college to hire female faculty on an equal basis with male colleagues and the first co-educational college to have a woman on its Board of Trustees. The seeds of the modern Antioch University, an independent, non-sectarian institution founded as a college in 1852, were sown in 1964 with the creation of the Putney School in Keene, NH, the first of its present campuses. Beginning in the 1960s, Antioch evolved from a small liberal arts college to a multi-campus university system with five campuses located across the nation in Yellow Springs, Ohio; Keene, New Hampshire; Seattle, Washington; Santa Barbara, California; and Los Angeles, California. The University remains the legacy of Horace Mann's original vision, and an example of the success of educational experimentation, innovation, and diversity of thought. Antioch University continues to break down educational barriers and rebuild them as educational opportunities, providing students with the tools to explore, empower, and transform the world around them.
Details: This poster shows a Periodic Table of Elements. It is perfect for the classroom or study area. The periodic table of the chemical elements is a tabular method of displaying the chemical elements. Its invention is generally credited to Russian chemist Dmitri Mendeleev in 1869. The periodic table is now ubiquitous within the academic discipline of chemistry, providing an extremely useful framework to classify, systematize and compare all the many different forms of chemical behavior.
Registered: December 2004 No. 380 — 38.2 x 54.3 cm. Watercolour and ink. What the text says: Coal hewers mined the coal, and the other member of the pair dragged it out in the twin-roped, human-hauled box, up to the "Kanekata" (horizontal drive). The coal was then transferred by hand into coal skips for transportation to the mine entrance. Text upper left: Colliery in the mid-Meiji period. Children were able to work at collieries from the age of 7 or 8. They went down the mine with their own lanterns in hand. The father is the "Sakiyama" (one of a pair), the elder sisters or brothers act as two "Atoyama" (pair helpers), and the young boy works as the fourth hand. So you would see two children, with 4 eyes, 4 arms and 4 feet between them, together playing the role of one full-fledged miner. Husband and wife, parent and child, brothers and sisters, all worked in pairs. A pair produced about 2.5 tonnes, or 5 or 6 coal boxes, per day from a "Teisotan" (thin coal seam).
<urn:uuid:00c8c5d0-d943-47fb-863d-650ec7283e3e>
CC-MAIN-2016-26
http://www.unesco-ci.org/photos/showphoto.php/photo/6204/title/painting-by-sakubei-yamamoto/cat/1031
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398628.62/warc/CC-MAIN-20160624154958-00138-ip-10-164-35-72.ec2.internal.warc.gz
en
0.933952
271
3.546875
4
NEW ZEALAND DISASTERS AND TRAGEDIES WORLD WAR TWO PRISONERS OF WAR Thank you to Bill Rudd of Melbourne who, with his colleagues, is doing so much on behalf of all ex-POWs to see that they and their families receive what is due to them. TEREZIN (THERESIENSTADT) CONCENTRATION CAMP - ANZAC Prisoners of War The Small Fortress in Terezin was also used as a punishment prison for Allied Prisoners of War who persisted in escape attempts. Prisoners from Australia, New Zealand, England and Scotland were imprisoned and witnessed the horrendous, inhuman mistreatment of the largely Jewish population. Keeping Prisoners of War in such a camp was against the Geneva Convention, and the camp was under the direct control of the Gestapo, who refused to acknowledge the POWs' special status. Terezin was the punishment prison for Walter WISE (Australia), Charles CROALL (NZ), Roy LOMAS (NZ), Ray REID (NZ), Gerry MILLS (NZ), Sid DAVISON (NZ), Tom McLEOD (NZ), Alf BOOKER (NZ), Jock BONE (UK), Herb CULLEN (Australia), Tama TAMAKI (NZ), Wal RILEY (Australia), Tom MOTTRAM (NZ), Jim ILOTT and Alexander McCLELLAND (Australia). All survived but suffered chronic physical and mental health problems for most of their lives. For many years the Australian and New Zealand governments denied that any of their servicemen had been sent to Terezin, but after several years of campaigns the Australian Prime Minister Bob Hawke established a committee of investigation in 1987 which eventually ordered $10,000 compensation payments to the surviving (Australian) veterans. For further information see Thank you to Robert Loughnan for pointing this out to me.
<urn:uuid:1ae157e7-6207-4d6d-a2d0-adafbc5af491>
CC-MAIN-2016-26
http://freepages.genealogy.rootsweb.ancestry.com/~sooty/powsww2.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393463.1/warc/CC-MAIN-20160624154953-00123-ip-10-164-35-72.ec2.internal.warc.gz
en
0.890123
718
2.875
3
Air conditioning cools and dehumidifies the air. This can make a home more comfortable during hot or humid weather. Air conditioning can be "central" (running through built-in ducts and vents), window units or wall units. There are other techniques to make a home more comfortable when it's hot and humid. One is to blow air over a block of ice. Air conditioners can be harmed by turning them on when it's already cool (below 70 degrees Fahrenheit). - Air conditioning is abbreviated "A.C." - Heating, ventilation and air conditioning is abbreviated "HVAC". - Central Air Conditioners on the U.S. government's ENERGY STAR site - How Air Conditioners Work from HowStuffWorks.com - American Society of Heating, Refrigerating and Air-Conditioning Engineers - Air Conditioning Contractors of America - Wikipedia's article on air conditioning - Air Conditioning topics from the How To section of doityourself.com - Lennox air conditioners
<urn:uuid:24497ddd-3aea-4bd6-a65a-dafc5b1b0ddc>
CC-MAIN-2016-26
http://home.wikia.com/wiki/Air_conditioning
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395548.53/warc/CC-MAIN-20160624154955-00152-ip-10-164-35-72.ec2.internal.warc.gz
en
0.816825
252
3.296875
3
The lab work and procedures that are commonly performed in multiple myeloma assessment are: - Blood Tests - Serum Protein Electrophoresis (See Below) - Serum Immunofixation (See Below) - Complete blood counts - Assessment of organ function - Beta-2 Microglobulin (marker of myeloma activity) - Urine Tests - 24 hour urine collection for Bence Jones Protein (A complete collection of the urine produced over a 24 hour period to assess protein output of the myeloma) - Skeletal Survey (x-rays of the bones) - PET/CT scan (radiolabeled glucose distribution in the body) - MRI (highly sensitive test to determine bone and bone marrow changes associated with myeloma) - Bone Marrow Aspiration and Biopsy The bone marrow biopsy is performed by an attending physician in order to make a diagnosis or prior to a change in treatment plan. The bone marrow biopsy is usually taken from the back of the pelvic bone. The area is anesthetized with local anesthetic and can generally be performed in less than 10 minutes. The information from the bone marrow biopsy informs: - Type of plasma cell disorder - Degree of severity - Changes in the DNA of the plasma cells which can alter prognosis - Potential targets of new drug therapy in clinical trials More information is listed at the Myeloma Library. Serum Protein Electrophoresis In a serum protein electrophoresis, blood serum is run through an agarose gel across an electrical current. This forces separation of the different protein components of plasma into several distinct visible bands, shown to the right. The leftmost lane of the gel is called the serum protein electrophoresis. This lane is impregnated with a stain that will give all proteins a blue color nonspecifically. The dark band on top is albumin, one of the most common and abundant proteins in the bloodstream. The bands below each have names: alpha-1, alpha-2, beta, and gamma as marked in the figure. Different blood components tend to aggregate in the different protein bands.
For example, the HDL type of cholesterol runs in the band labeled alpha-2. All of the immunoglobulins run in the broad gamma band. The reason that the band is wide and fuzzy is that there are many different types of immunoglobulin proteins of different shapes and sizes, ranging from the small IgG subtype to the larger IgM subtype, and each runs slightly differently on the gel. The same serum sample is run through the other lanes labeled G, A, M, K, and L shown above. These lanes constitute the immunofixation. Each of these lanes is impregnated with a specific stain that will mark only one type of protein specifically. In the "G" lane, it is the immunoglobulin type G antibody, in the "A" lane, it is the immunoglobulin type A, etc. For K and L, the stains will mark the kappa and lambda light chains respectively. In the normal human sample above, we see that the darkest bands are seen in the G and K lanes – this is because we tend to have more immunoglobulin type G and kappa light chains as compared to the other types. Still, the bands in each lane are wide and fuzzy, meaning that even within the specific subtypes of antibody, there is a variation in shape and size, because different molecules are produced by a variety of different plasma cells normally. In the serum protein electrophoresis/immunofixation from a person with a monoclonal paraprotein (shown to the right), the gamma band is narrowed and sharp. There is now one predominant kind of immunoglobulin (called the monoclonal immunoglobulin) that runs at one precise spot on the gel, giving rise to the famous M-spike. By looking at the immunofixation lanes, we can easily see a prominent sharp band in the M and L lanes. This signifies that in this case, the M-protein produced is the IgM-lambda subtype. This M-spike can be measured and followed throughout the clinical course of an individual and roughly reflects the amount of malignant plasma cells in the body.
A rapidly rising M-protein connotes aggressive growing myeloma, while falling or disappearing M-protein can signify a good response to treatment.
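The monitoring idea in that last sentence can be sketched in a few lines of code. This is a toy illustration only: the function name, the 25% change thresholds, and the sample values are assumptions made for the sketch, not clinical criteria.

```python
# Toy sketch of following a monoclonal (M) spike over successive serum
# protein electrophoresis results. Thresholds are illustrative assumptions.

def m_spike_trend(measurements_g_dl):
    """Classify the trend of serial M-protein measurements (g/dL)."""
    first, last = measurements_g_dl[0], measurements_g_dl[-1]
    if last > first * 1.25:
        return "rising"    # per the text, may suggest aggressive growth
    if last < first * 0.75:
        return "falling"   # per the text, may suggest response to treatment
    return "stable"

# Example: an M-spike falling from 3.2 to 1.1 g/dL during treatment
trend = m_spike_trend([3.2, 2.5, 1.8, 1.1])  # "falling"
```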
<urn:uuid:22538b28-0731-4a38-bea7-19e3fd122cc0>
CC-MAIN-2016-26
http://www.myelomacenter.org/patient_care/lab_work_procedures.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397636.15/warc/CC-MAIN-20160624154957-00011-ip-10-164-35-72.ec2.internal.warc.gz
en
0.909842
931
2.546875
3
Discontinuous transmission (DTX) is a method of momentarily powering down, or muting, a mobile or portable wireless telephone set when there is no voice input to the set. This optimizes the overall efficiency of a wireless voice communications system. In a typical two-way conversation, each individual speaks slightly less than half of the time. If the transmitter signal is switched on only during periods of voice input, the duty cycle of the telephone set can be cut to less than 50 percent. This conserves battery power, eases the workload of the components in the transmitter amplifiers, and frees the channel so that time-division multiplexing (TDM) can take advantage of the available bandwidth by sharing the channel with other signals. A DTX circuit operates using voice activity detection (VAD). Sophisticated engineering is necessary to ensure that circuits of this type operate properly. In wireless transmitters, VAD is sometimes called voice-operated transmission (VOX).
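The gating behavior described above can be illustrated with a minimal energy-based VAD sketch: transmit a frame only when its energy exceeds a threshold. The frame size, threshold value, and function names here are illustrative assumptions, not taken from any real DTX standard.

```python
# Minimal sketch of VAD-driven discontinuous transmission (DTX):
# the transmitter is "on" only for frames whose energy crosses a threshold.

def frame_energy(samples):
    """Mean squared amplitude of one audio frame."""
    return sum(s * s for s in samples) / len(samples)

def dtx_gate(frames, threshold=0.01):
    """Yield (frame, transmitting) pairs: transmit only during voice."""
    for frame in frames:
        yield frame, frame_energy(frame) >= threshold

# Example: alternating speech-like and near-silent frames
speech = [0.5, -0.4, 0.3, -0.5]
silence = [0.001, -0.002, 0.001, 0.0]
flags = [tx for _, tx in dtx_gate([speech, silence, speech, silence])]
# flags alternate True/False: a ~50% duty cycle, as the article describes
```

Real VAD circuits are far more sophisticated (hangover times, noise-floor tracking, comfort-noise insertion), which is the "sophisticated engineering" the article alludes to.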
<urn:uuid:1021bc8f-1cb5-44cc-93a7-ca893002593d>
CC-MAIN-2016-26
http://whatis.techtarget.com/definition/discontinuous-transmission-DTX
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396887.54/warc/CC-MAIN-20160624154956-00105-ip-10-164-35-72.ec2.internal.warc.gz
en
0.910235
201
3.078125
3
brown watersnake (Nerodia taxispilota) This is a large, heavy-bodied snake that reaches lengths of 36-48 in. (76-152 cm). The adult has large, square, dark brown blotches on a lighter brown background. One row of squares runs down the back with an alternating row on each side. The belly is yellowish-brown with dark blotches. The patterns darken and become less distinct with age. The juvenile is similar to the adult but lighter. Mating occurs in the spring and 12-50 live young are born from late August to mid-September. The brown watersnake is generally diurnal, but may be nocturnal in midsummer. It frequently basks on logs and overhanging vegetation during midday in spring and fall and in the morning during summer. This species is found in the Coastal Plain south of the Rappahannock River. The northernmost known population occurs in the Pamunkey River. This species is found in a variety of quiet water including brackish water. It often basks in the crotches of cypress trees growing several hundred feet from the shore near Lake Drummond. This is a common species along many other lakes, canals and rivers in southeastern Virginia. The main food of this species is fish, although frogs and other aquatic animals are also taken. They will consume whole bluegill and other sunfishes. The prey are swallowed alive.
<urn:uuid:54d9945c-93e1-4c1c-89d3-1e49b2e7b776>
CC-MAIN-2016-26
http://www.dgif.virginia.gov/wildlife/information/?s=030037
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396875.58/warc/CC-MAIN-20160624154956-00182-ip-10-164-35-72.ec2.internal.warc.gz
en
0.956812
301
3.078125
3
Bedford Brown (1795-1870) is remembered as a politician who dedicated himself to the preservation of the United States before, during, and after the war he feared the most. Born to Jethro and Lucy Brown on June 6, 1795, Bedford Brown spent his adolescence in present-day Locust Hill Township in Caswell County. Raised on a farm, he was trained as a planter, but attended the University of North Carolina for only one year before entering politics. When he was twenty years old, Brown was elected to the North Carolina House of Commons in 1815, where he served three consecutive terms and a fourth term in 1823. After a short hiatus on his farm at "Rose Hill," he replaced Bartlett Yancey in the North Carolina Senate in 1828, and was chosen as speaker soon after. Brown was chosen to represent North Carolina in the United States Senate, replacing John Branch the following year. Brown's ten-year tenure in the senate ended in 1840, when he and fellow Democrat Robert Strange resigned close to the end of their terms, eager to demonstrate their popularity through a quick re-election by an overwhelming majority of their constituents. Their plan failed and, after their humiliation, the Whig party took control of the senate. Brown moved his family in 1842 to live out of state until returning to "Rose Hill" in 1855. Catching a second wind, Brown returned to politics with election to the State Senate in 1858, where he served the next three terms until 1864. In the fall of 1868, Brown was once again elected to the State Senate, but partisan politics prevented his attendance. Brown was succeeded by John W. 'Chicken' Stevens, whose death was partly responsible for the 1870 Kirk-Holden War. Known as an ardent unionist, Brown considered secession "the greatest political calamity that can befall the people of any nation." Brown married Mary Glenn in 1816, and together they had seven children.
On December 6, 1870, two years after his political career came to an end, planter, politician, and legislator Bedford Brown died at his home at "Rose Hill," where he is buried.
<urn:uuid:607cb6bd-2d76-45ce-b7a6-1490e2b8ab26>
CC-MAIN-2016-26
http://www.ncmarkers.com/Markers.aspx?sp=Markers&k=Markers&sv=G-8
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396949.33/warc/CC-MAIN-20160624154956-00074-ip-10-164-35-72.ec2.internal.warc.gz
en
0.962334
609
3.15625
3
State, federal and tribal leaders pledged support Thursday for restoring Columbia River watersheds through programs that reduce toxic compounds in the water. The nation’s fourth-largest river basin contains mercury, PCBs, DDT and other compounds at levels that pose health risks for people and the environment, officials said. More than 800 water bodies in the Columbia basin are impaired from toxins, which can linger for decades. The pesticide DDT was banned in 1972, for instance, but still washes into the river from farm fields. “This is one big basin,” said Phil Cernera, the Coeur d’Alene Tribe’s lake management director. “The water that feeds the Coeur d’Alene Tribe also feeds the lower Columbia River. … We’re all very much intertwined in this process.” The five-year cleanup plan was announced Thursday at a news conference on the Umatilla Indian Reservation in Eastern Oregon. Officials from the U.S. Environmental Protection Agency, the states of Washington and Oregon and several tribes attended. The cleanup plan outlines 61 actions for reducing toxins in the water, including increased monitoring and educational outreach. Some of the work can be done through existing programs, officials said. The states are also working with congressional leaders to request $33 million annually in additional federal money for the effort, similar to federal appropriations that helped clean up the Great Lakes and Chesapeake Bay. Tribes have a particular stake in the effort. American Indians tend to consume more fish than the public at large, which puts them at greater risk of exposure to toxins. Kathryn Brigham, secretary of the Confederated Umatilla Tribes, said she comes from a long fishing tradition. Through treaties with the U.S. government, the tribes retained rights to fish for salmon and lamprey on the Columbia and its tributaries. But toxins in the fish limit the tribes’ ability to exercise those rights, Brigham said. 
"We have a lot of work to do," said Brett VandenHeuvel, who directs the nonprofit Columbia Riverkeeper program. "It's a long-term plan of restoring these basic river rights: The right to go down to the river with your family and to catch a fish and eat it; the right to swim in the river; and for the tribes, the treaty rights."
<urn:uuid:b07b5246-580f-48de-9b14-8125c421a1ce>
CC-MAIN-2016-26
http://www.spokesman.com/stories/2010/sep/24/five-year-plan-cleans-columbia-basin-toxins/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783394987.40/warc/CC-MAIN-20160624154954-00042-ip-10-164-35-72.ec2.internal.warc.gz
en
0.953524
517
2.84375
3
In Memory of W.B. Yeats Theme of Isolation "In Memory of W.B. Yeats" depicts the world as a lonely place. Funny enough, though, people don't even seem to realize how alone and isolated they are. The way this poem describes it, the world is almost like the set of a horror movie – you know, the kind where people can't figure out that they've been turned into zombies? Poetry may not be a perfect cure for all this isolation, but according to Auden, it can help people see the truth of their situation, even if it forces them to acknowledge their own loneliness. And that's something. As your middle school teachers always said, knowing is half the battle. Questions About Isolation - Is there anyone who seems to not be isolated in this poem? - What is it that isolates people? Can you find evidence in the poem to support your answer? - Was Yeats as alone as everyone else? How can you tell? - Does poetry combat isolation? If so, how? Chew on This Even though poetry can point out people's isolation, it can't do anything to solve their problems. Poetry can help combat the pervasive loneliness of our lives by making people aware of their own isolation.
<urn:uuid:6c96fdc4-f614-4ca8-bf64-f717fcae9ec7>
CC-MAIN-2016-26
http://www.shmoop.com/in-memory-of-wb-yeats/isolation-theme.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783392527.68/warc/CC-MAIN-20160624154952-00046-ip-10-164-35-72.ec2.internal.warc.gz
en
0.975905
267
3.296875
3
A celebrated Syrian writer, b. most likely in A.D. 633; d. 5 June, 708. He was a native of the village of 'En-debha, in the district of Gumyah, in the province of Antioch. During several years he studied Greek and Holy Writ at the famous convent of Kennesrhe, on the left bank of the Euphrates, opposite Europus (Carchemish). After his return to Syria he was appointed Bishop of Edessa, about A.D. 684, by the Patriarch Athanasius II, his former fellow-student. Equally unable to enforce canonical rules and to connive at their infringement, he resigned his see after a four years' episcopate, and withdrew to the convent of Kaisum (near Samosata), while the more lenient Habhibh succeeded him as Bishop of Edessa. Shortly afterwards he accepted the invitation of the monks of Eusebhona (in the Diocese of Antioch) to reside at their convent, and there he commented for eleven years on the Sacred Scriptures in the Greek text, doing his utmost to promote the study of the Greek tongue. Owing to the opposition which he met on the part of some of the monks who did not like the Greeks, he betook himself to the great convent of Tell-'Adda (the modern Tell-'Addi), where, for nine years more, he worked at his revision of the Old Testament. Upon Habhibh's death he took possession again of the episcopal See of Edessa, resided in that city for four months, and then went to Tell-'Adda to fetch his library and his pupils, but died there. James of Edessa was a Monophysite, as is proved by the prominent part he took in the synod which the Jacobite patriarch Julian convened in 706, and by one of his letters in which he speaks of the orthodox Fathers of Chalcedon as "the Chalcedonian heretics". In the literature of his country he holds much the same place as St. Jerome does among the Latins (Wright).
For his time, his erudition was extensive. He was not only familiar with Greek and with older Syriac writers, but he also had some knowledge of Hebrew, and willingly availed himself of the aid of Jewish scholars, whose views he often records. His writings, which are not all extant, were very varied and numerous. Among them may be noticed first, his important revision of the Old Testament. This work was essentially Massoretic. James divided the Sacred Books into chapters, prefixing to each chapter a summary of its contents. He supplied the text with numerous marginal notes, of which one part gives readings from the Greek and the Syrian versions at his disposal, and the other part indicates the exact pronunciation of the words of the text. Some of the notes contain extracts from Severus of Antioch; while, at times, glosses are inserted in the text itself. Unfortunately, only portions of this revision have come down to us. These are: practically the whole Pentateuch and the Book of Daniel, preserved in the Bibliothèque Nationale at Paris (Syriac. nos. 26, 27); the two Books of Samuel with the beginning of Kings, and the prophecy of Isaias, found in the British Museum (Add. 14429, 14441). The other principal writings of James of Edessa on Biblical topics are: (1) his unfinished "Hexaemeron", or work on the six days of creation, which is divided into seven treatises, and which opens with a dialogue between the author and Constantine, one of his disciples. James's "Hexaemeron" is preserved in two manuscripts, one of which is found in Leyden, and the other in Lyons; (2) commentaries and scholia on the Sacred Writings of both Testaments, which are cited by later authors such as Dionysius bar-Salibi, Bar-Hebraeus, and Severus. Some of his scholia have been published in the Roman edition of the works of St. Ephraem, and, at different times, by Phillips, Wright, Schröter, and Nestle; (3) letters treating of questions relative to Holy Writ, and mostly yet unpublished.
As a liturgical author, James of Edessa drew up an anaphora, or liturgy, revised the Liturgy of St. James, wrote the celebrated "Book of Treasures", composed orders of baptism, of the blessing of water on the eve of the Epiphany, and of the celebration of matrimony, to which may be added his translation of Severus's order of Baptism, etc. He is also the author of numerous canons; of important homilies, a few of which survive in manuscript; of a valuable "Chronicle" which he composed in 692, and of which a few leaves only are extant; of an "Enchiridion", or tract on technical philosophical terms; of a translation of the "Homiliae Cathedrales", written in Greek by Severus of Antioch; and of the "Octoechus" by the same author; of a biography of James of Sarugh, of a translation from the Greek of the apocryphal "History of the Rechabites", of a Syriac grammar, a few fragments of which are extant in Oxford and London, and in which he advocated and illustrated a novel system of indicating the vocalic element not found in the Syrian alphabet; and, finally, of an extensive correspondence with a large number of persons throughout Syria. J. S. ASSEMANI, Bibliotheca Orientalis, I (Rome, 1719), II (Rome, 1721); MAI, Script. Vet. Nova collectio (Rome, 1825-38); CERIANI, Monumenta sacra et profana (Milan, 1863); BALL, in Dict. Christ. Biog., s.v. Jacobus Edessenus; NESTLE, Syrische Grammatik mit Litteratur (Berlin, 1888); MERX, Historia artis grammaticae apud Syros (Leipzig, 1889); WRIGHT, Catalogue of the Syriac MSS. in the British Museum (London, 1870--); IDEM, A Short History of Syriac Literature (London, 1894); BROCKELMANN, Syrische Grammatik mit Litteratur (Berlin, 1899); DUVAL, Grammaire Syriaque (Paris, 1881); IDEM, Litterature Syriaque (3rd ed., Paris, 1907). APA citation. (1910). James of Edessa. In The Catholic Encyclopedia. New York: Robert Appleton Company. http://www.newadvent.org/cathen/08277b.htm MLA citation. "James of Edessa." The Catholic Encyclopedia.
Vol. 8. New York: Robert Appleton Company, 1910. <http://www.newadvent.org/cathen/08277b.htm>. Transcription. This article was transcribed for New Advent by Joseph P. Thomas. Ecclesiastical approbation. Nihil Obstat. October 1, 1910. Remy Lafort, S.T.D., Censor. Imprimatur. +John Cardinal Farley, Archbishop of New York.
<urn:uuid:c2ff1782-5fcf-47b1-9506-7a69fbad49c2>
CC-MAIN-2016-26
http://newadvent.org/cathen/08277b.htm
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783392099.27/warc/CC-MAIN-20160624154952-00103-ip-10-164-35-72.ec2.internal.warc.gz
en
0.958427
1,628
2.71875
3
Games: More than Just Reward Systems "The fact that they give rewards is maybe #85 on the list of things that games do." Games allow players to imagine scenarios, explore worlds, fail, make choices, take risks, fail, learn iteratively, try on different identities, fail, solve puzzles, form groups, think strategically, fail, and find success. Oh and they have points and levels too. Putting rewards at number 85 on the list is probably a little unfair: point systems are central to many games. But all too often, gamification in education is just dumping a point system on worksheets or some other rote task. It's like saying that you are making Mexican food by pouring salsa on something. Yes, salsa is integral to many Mexican dishes; no, pouring salsa on penne does not make it a Mexican dish. (This is another riff on the famous chocolate-covered broccoli metaphor) You can watch the full context of Klopfer's comments in video below, taken at a gathering of educators hosted by OpenAirBoston, exploring the intersection between family engagement and emerging technologies. Karen Brennan, a new assistant professor at Harvard's Ed School and the brains behind ScratchED, is also on the panel. I participated in a panel as well, on social mobile learning:
<urn:uuid:ffba7d31-ca64-4f12-8cf8-8c073739620a>
CC-MAIN-2016-26
http://blogs.edweek.org/edweek/edtechresearcher/2012/11/games_more_than_just_reward_systems.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783403823.74/warc/CC-MAIN-20160624155003-00035-ip-10-164-35-72.ec2.internal.warc.gz
en
0.959717
267
2.8125
3
Small Dog Syndrome It sounds like a disease but it is a condition that pertains to dogs and the human perception of them. If you have a small or toy dog, keep reading to find out how detrimental this syndrome can be. “Aren’t they cute?” You hear this all the time when people are describing puppies and small dogs. We are used to seeing big dogs and other animals but the small ones just make us gush. In human language, this is adorable and endears us to our pets even more. In dog language, however, it means something totally different. As dogs become older they grow larger, unless they are of the small or toy varieties. This can cause some problems in the dog’s behavior that many owners ignore. Here are some examples. Small dogs allowed to jump on people Small dogs allowed to nip at family and visitors Small dogs allowed to sleep where they want to Small dogs allowed to lead on the leash Small dogs allowed to sit in owner’s lap when they want to Be truthful – if a large dog did any of these things, you would not be pleased, would you? Well, in the animal world, size doesn’t matter. The same way that you discipline a large dog is the same way that a small dog needs to be treated. In fact, they demand it. And, when we don’t give it to them, they rebel and can become a problem. Small Dog Mentality Dogs are not humans. We often forget that. Dogs have a pack mentality much like their distant relative, the wolf. This means that someone in the group has to be the leader. Having a leader brings order to the pack. They know who to follow so that the pressure is off of them to make all of the decisions. In the home, the dog is the follower and the human is the leader. Anything less is seen as weakness by your dog. The reason that small dogs do some of the things mentioned above is that they have begun to act like the pack leader. They have taken charge. One way to overcome this pattern of behavior is to regain the alpha position with your dog. Here are some suggestions: 1. 
Don’t allow your dog to walk in front of you on the leash 2. Use a negative command like “No” when the dog nips at someone or jumps up on your legs 3. Wait until the dog displays a submissive posture before allowing them to sit in your lap or jump on the bed Remember that dogs respond better to firm but calm instruction. Avoid yelling at your dog or pushing them around. Poke them with your fingers until they decide to move off your lap or your bed. Displaying your alpha position can avoid such things as separation anxiety as well. Small dogs or toy dogs may be cuter but they need the same things as larger dogs.
<urn:uuid:1871b54a-6d3e-497f-926d-4bc5f7548950>
CC-MAIN-2016-26
http://animal-world.com/newsfeed/small-dog-syndrome/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396029.85/warc/CC-MAIN-20160624154956-00061-ip-10-164-35-72.ec2.internal.warc.gz
en
0.967073
604
2.84375
3
HAMILTON, Ontario, Dec. 6 (UPI) -- A Canadian biologist says his trek in southwestern China has shattered a myth of "killer mushrooms" being responsible for 400 unexplained deaths in the region. Jianping Xu of McMaster University in Ontario said his investigations were triggered by a 2010 article in the journal Science, claiming Trogia venenata mushrooms contained high concentrations of the metal barium and linking them with high blood pressure, cardiac arrests and sudden deaths in southwestern China over the past 30 years. "Although there was no published evidence supporting the theory that barium in the T. venenata mushroom was the leading culprit of what was called Sudden Unexplained Death, it was picked up as a fact by almost all of the major news media," biology Professor Xu said. "These reports caused significant concern among the public about potentially high levels of barium in wild edible mushrooms in southwest China." Every summer for the last four years Xu and colleagues have trekked across the Yunnan province, collecting T. venenata and other mushrooms from villages severely impacted by these deaths. Analysis of the mushrooms found barium concentrations so low a 150-pound person would have to ingest at least 75 pounds of dried T. venenata to create a lethal amount, the researchers said. While barium can't be completely ruled out as a contributor to the deaths since high levels of the metal were found in the blood, urine and hair samples of some victims, Xu said, his study does suggest that barium in mushrooms was unlikely the leading cause. "Though there are a couple of leads," he said, "further investigation is needed to discover what the true cause was for these mysterious deaths."
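The "75 pounds of dried mushroom" figure is a dose calculation, and the unit conversion behind it is easy to sanity-check. The lethal-dose value below is an illustrative assumption chosen only to make the arithmetic concrete; it is not a number reported by the researchers.

```python
# Back-of-the-envelope check of the dose claim in the article.
LB_TO_KG = 0.45359237

mushroom_kg = 75 * LB_TO_KG        # ~34 kg of dried T. venenata
assumed_lethal_dose_mg = 1000      # ASSUMPTION: ~1 g of barium, for illustration

# Barium concentration (mg per kg of dried mushroom) the claim would imply
implied_conc = assumed_lethal_dose_mg / mushroom_kg   # roughly 29 mg/kg
```

Under that assumption, even a few tens of milligrams of barium per kilogram of dried mushroom would require an impossibly large intake, which is the shape of the argument the researchers used against the barium theory.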
<urn:uuid:11a9959e-17a1-471c-84dc-6fa2558007ed>
CC-MAIN-2016-26
http://www.upi.com/Science_News/2012/12/06/Study-casts-doubt-on-killer-mushrooms/UPI-81361354831675/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396459.32/warc/CC-MAIN-20160624154956-00017-ip-10-164-35-72.ec2.internal.warc.gz
en
0.97802
349
2.578125
3
From: Clayton Bartholomew (firstname.lastname@example.org)
Date: Wed Jun 04 1997 - 07:42:05 EDT

Professor Conrad wrote:

. . . it's always seemed to me that the unit of discourse in written Greek (even granting that ancient Greek was written to be spoken and heard, not read silently) is what we call the PARAGRAPH, a grouping of clauses that has an organic unity wherein the interrelationship of the clauses, however complex, is nevertheless perspicuous . . .

Carl has pinpointed exactly what I am trying to get at. If I were describing the levels of discourse in K. Greek, I would jump from *clause* to *paragraph* and just forget about *sentence* altogether. I am starting to question whether the concept of *sentence* is useful for discourse analysis.

Off the subject, a minor note of clarification: now that I understand what elliptical means, it is not related in any way to self-referencing or recursive. Elliptical simply means that the subject or some other component of a subordinate clause is not explicitly stated. Correct?

Self-referencing or recursive is a characteristic of clauses where one clause is embedded in another clause. The structural definition of a K. Greek clause is self-referencing because the definition includes the term *clause*. This indicates that a clause can contain clauses down to N levels. Jonathan and Andrew should understand this terminology.

Three Tree Point

This archive was generated by hypermail 2.1.4 : Sat Apr 20 2002 - 15:38:18 EDT
<urn:uuid:5f753657-958a-412b-9f0c-47cdf8f5bce9>
CC-MAIN-2016-26
http://www.ibiblio.org/bgreek/test-archives/html4/1997-06/19295.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395560.69/warc/CC-MAIN-20160624154955-00010-ip-10-164-35-72.ec2.internal.warc.gz
en
0.901271
353
2.546875
3
Sample Information Literacy Assignment #1

In the tradition of the great explorers who ventured in new paths, this assignment asks you to search the many databases available to UMUC students to identify resources that may be of use to us as well as those that we should avoid. You will be aided in this exploration by the UMUC library staff.

- Search through the databases presented and write a short summary that critiques the value of 3 databases (such as benefits, limitations, cautions, possibilities).
- Select at least 3 items from the databases, and write a brief critical review about the value of each item. For example, consider the value of the article/database regarding accuracy, detail vs. generality, easy vs. difficult access, and understandability (such as use of jargon). Could you understand what point was being made? Would others in your profession or office? Conclusion: Would you recommend the articles? And most importantly, evaluate the credibility of the information -- is the article from a reputable journal or a popular magazine? (You do NOT have to cover all these items; select as appropriate.)
- Areas for exploration:
  - constructive and destructive conflict
  - cross-cultural conflict
  - integrative bargaining and distributive bargaining
  - interpersonal conflict
  - role of perception, attribution, and stereotyping in conflict
  - mediation of disputes
  - community dispute resolution

NOTE: you may need to select a smaller aspect of any area. If you do, identify which aspects you have selected.

- 4-5 pages of text, written in college-level English (grammar, spelling, sentence structure)
- logical, well organized and well developed
- summary resource page, labeled as Resources Examined, at the end of the paper (not counted in page numbers). Include full APA or MLA citations for the 3 databases searched and the 3 items selected.
<urn:uuid:4141ee7f-677e-4db3-aa32-2a139949c3df>
CC-MAIN-2016-26
http://umuc.edu/library/libhow/informationliteracy_assignment1.cfm?renderforprint=1&noprint=true
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395992.75/warc/CC-MAIN-20160624154955-00010-ip-10-164-35-72.ec2.internal.warc.gz
en
0.888989
379
3.1875
3
Biodiversity Heritage Library

The materials and manufacture of Portland cement. By Edwin C. Eckel. The cement resources of Alabama. By Eugene A. Smith.
By: Eckel, Edwin Clarence; Smith, Eugene A.
Publication info: Montgomery: Brown Printing Company, 1904.
Contributed by: University of California Libraries (archive.org)
Subjects: Alabama; Cement; Portland cement

Cement / by Henry W. Nichols, Assoc. Curator of Geology.
By: Nichols, Henry W.; Field Museum of Natural History.
Publication info: Chicago: Field Museum of Natural History, 1929.
Contributed by: University Library, University of Illinois Urbana-Champaign
Subjects: Cement; Cement (bouwkunde)
<urn:uuid:08ef0adb-5e38-4b5f-992b-c38cc090ac61>
CC-MAIN-2016-26
http://biodiversitylibrary.org/subject/Cement/author
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783392099.27/warc/CC-MAIN-20160624154952-00007-ip-10-164-35-72.ec2.internal.warc.gz
en
0.687855
166
2.84375
3
Jul 1, 2004

As horse owners, we care deeply about our animals. When one of our beloved horses dies, we lose a trusted friend. For many of us, the experience is emotionally devastating. Yet, just when we are at our most pronounced stage of grief, we are suddenly faced with a difficult but inevitable question: What do we do with the horse's body?

Although mortality management is not something most people want to think about before their horses die, planning ahead in order to understand your options can make the decision feel less stressful and overwhelming. Sorting through the various methods of carcass disposal, however, can be a daunting process. The laws that govern it vary significantly across the country, and although many options exist, they are not always available or even legal.

While it used to be quite common in many areas to simply leave your deceased horse in the woods to decompose naturally, many farms are now surrounded by housing developments filled with neighbors who will not appreciate the smell of a decaying animal or the vermin that it attracts. Moreover, leaving a carcass in such close proximity to people's homes could potentially contaminate the water supply or contribute to the spread of disease.

"There's less disposal on property now than there was years ago," says William Jeter, DVM, Diagnostic Veterinary Manager for Equine Programs in the Florida Division of Animal Industry. "A lot of this is because people are more environmentally conscious."

In order to decrease the pollution of groundwater resources, reduce the impact of odors, and decrease the spread of disease, the Natural Resources Conservation Service (NRCS) has established a national conservation practice standard for mortality management. While this standard provides helpful guidelines, carcass disposal is primarily regulated at a local level because the laws depend heavily on your area's water table and topography.
"Florida has a statute specifying acceptable methods of disposal so that it's illegal to just leave the animal out in the pasture to decay," says Jeter, "but the rules vary significantly here at a county level."

In addition to outlining adequate methods of disposal, many state statutes govern factors such as how long you have to dispose of the carcass or how deeply you must bury it. After researching your state regulations, Jeter suggests you call the Department of Environmental Protection in your district. "Even though a method is authorized by the state statute, it might be prohibited in your particular county because of water tables or other issues," he says.

According to Jeter, a violation of the state statute is considered a misdemeanor and is punishable by fines and/or jail time. Because regulations vary so much from one area to another, it is imperative that you check with the appropriate authorities lest you inadvertently break the law.

Burial is probably the most tightly regulated method of carcass disposal. However, because it's viewed culturally as a dignified and respectful way of dealing with death, it is often the most desirable choice. "Burial is largely a sentimental issue where people have a horse that has been their friend for years and they don't want to send it to rendering," says Kentucky State Veterinarian Robert Stout, DVM.

Many horse owners would ideally like to bury their horse at home--in his former pasture or at the side of the trail you used to ride. Unfortunately, burying a horse on your own property is now strictly controlled by law and, in many states, is illegal. The reasons for this primarily stem from concerns over groundwater contamination and odor.

According to Stout, "In Kentucky, a large part of the state is built over caverns, and also we have areas where the ground rock is very close to the surface.
So depending on how deep an animal is buried, any drainage from that land goes directly into those caves and runs into the river, which in turn affects the quality of the water that people drink."

To help people bury their animals in the safest way possible, many local and county regulations specify a minimum burial depth as well as a minimum distance from streams, water wells, and dwellings. Although the national standard set by the NRCS does not specify a minimum depth, it states that there should be at least two feet of cover over the mortality.

"In situations where a farmer has a backhoe or digging equipment on the farm," says Stout, "it may be very convenient for them to bury the horse instead of having to worry about getting someone to pick it up." However, because most horse owners do not own that type of machinery, attaining proper burial depth at home can be difficult. Therefore, in addition to the labor involved, one of the biggest disadvantages of burying your horse on your own land is that it can be expensive to rent the equipment required to dig a hole and lift the carcass into it.

For those who are interested in burying their horses but can't do it at home, another option is to take the animal to a pet cemetery. Pet cemeteries can provide a range of burial choices--different headstones are available, and burial is often available with or without a concrete vault. Some cemeteries will even arrange a funeral or graveside service. Because there are so many variables, the price ranges from a few hundred dollars up into the thousands.

Stephen Drown, executive director of the International Association of Pet Cemeteries (IAPC), suggests that when you are choosing a cemetery, you ask if the land is deeded appropriately. "You want to make sure that the cemetery is dedicated properly so that it is perpetuated (maintained and protected as a cemetery)," he cautions. If it isn't, Drown warns that the cemetery could one day be disturbed by land development.
Drown estimates that approximately 150 pet cemeteries nationally accept horses. He believes, however, that that number will grow rapidly as burial at home becomes more difficult. "My own personal opinion is that it's probably the fastest growing segment of the pet death care business," he says, "but of course the investment for the business is fairly high because of the machinery involved." For a directory of pet cemeteries, visit the IAPC web site at www.iaopc.com.

Many pet cemeteries also have crematories, although not many can accommodate an animal the size of a horse. According to Drown, the reason there are so few is the size and expense of the machine that is needed. If your local pet cemetery does not do cremations, you might also contact a nearby research university or your state Department of Agriculture. Often they have incinerators and will handle the disposal of horse carcasses.

One reason some horse owners choose cremation is that most facilities will return the ashes. Because they are odorless, the ashes can be stored indefinitely; some people keep them in an urn or decide to scatter them in a meaningful place. Another benefit of cremation is that it is an effective method of disposal even if a horse has a transmissible disease. The drawback, however, is the cost.

According to Virginia Pierce, DVM, Director of the Maryland Department of Agriculture's Frederick County Laboratory, incineration is considerably more expensive if you want ashes returned. At her facility, residents of Maryland pay $100 for disposal by incineration. If the horse owner wants ashes returned, that cost jumps to $425. The reason this is more expensive is that it is more time-consuming to do, and most facilities want the incinerator full. If they run the incinerator for just one animal in order to return ashes, the cost jumps.

Because of the expense, some people might be tempted to simply use a burn pile at the farm.
It is important to stress that this will not lead to thorough and complete incineration, since it is difficult to get the fire hot enough for long enough to properly dispose of the animal. "The EPA will not allow a producer or animal owner to just build a fire out in the pasture and incinerate a carcass because of the pollutants that would go into the air," says Jeter. "They should take it to a facility that has licensed incineration equipment."

Like cremation, rendering also provides a biosecure method of carcass disposal. According to Tom Cook, president of the National Renderer's Association, rendering is essentially a cooking process that separates animal fats and proteins, thereby recycling them into usable products. When a carcass is brought to a rendering facility, it is heated at a temperature between 250-300°F (121-149°C) for an extended period of time. Then, using a press or centrifuge, the renderer separates the material into fats and oils that can be used either industrially or in animal feed. The high temperatures kill pathogens, making rendering a viable option for disposing of sick or diseased horses.

Because it's such a clean and waste-free solution, rendering has historically been the preferred technique for removing large animal carcasses. According to a December 1997 survey, 72% of the California members of the American Association of Equine Practitioners recommended rendering to their clientele as the best method of carcass disposal. During that same year, however, the Food and Drug Administration placed restrictions on the use of meat and bone meal in cattle feed. As a result, the demand for rendered products began to decrease, and some facilities were forced to close or consolidate.

Although rendering remains popular in some areas, it's not an option in others. Cook estimates that only 75 facilities across the nation are currently willing to pick up dead livestock.
If there is a renderer in your area that accepts horse carcasses, the company will generally provide on-farm pickup. Depending on the distance, the cost can start at $25 and go up into the hundreds of dollars per animal.

While rendering is a good method of disposal from an environmental and economic standpoint, it might not be the easiest choice emotionally. According to Cook, "Horse owners often have a problem trying to figure out how to dispose of their dead animals because they feel very closely attached to them. Rendering is a business, and it's the best way of disposing of dead livestock for a number of reasons, but it all depends on the frame of mind of the person who has the horse." For a directory of renderers and more information on the rendering process, you can access the web site of the National Renderer's Association at www.renderers.org.

While it is not legal in all states because of the time it takes to complete, composting is rapidly gaining popularity as an inexpensive and environmentally sound method of carcass disposal. Its biggest advantage is that, much like rendering, it results in a usable end product--in four to six months, composting will generate material that can be used as a fertilizer or soil additive.

Jean Bonhotal, MS, of the Cornell Waste Management Institute, explains that composting consists of layering animal remains with carbon-rich organic material (such as wood chips) in bins or windrows. When done correctly, the pile will reach 130-160°F (54-71°C) within three to four days, as heat and microorganisms consume the dead horse.

Bonhotal emphasizes the fact that simply covering a carcass with manure is not considered composting. According to the NRCS standard, successful decomposition requires a high carbon to nitrogen ratio, 40-65% moisture, and proper aeration. In addition, it might be difficult to get the pile to generate enough heat to compost the animal.
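The "high carbon to nitrogen ratio" requirement can be sanity-checked with a standard blended-ratio calculation used in compost planning. The sketch below is illustrative only: the carbon and nitrogen fractions are typical textbook-style values for carcass material and wood chips, not figures from this article or the NRCS standard.

```python
# Rough C:N blend check for a mortality compost pile.
# All component masses and C/N fractions are ILLUSTRATIVE ASSUMPTIONS.

def blended_cn(parts):
    """parts: list of (dry_mass_kg, carbon_fraction, nitrogen_fraction).
    Returns the mass-weighted carbon-to-nitrogen ratio of the mix."""
    total_c = sum(m * c for m, c, n in parts)
    total_n = sum(m * n for m, c, n in parts)
    return total_c / total_n

# 500 kg carcass dry matter (C ~50%, N ~10% -> C:N ~= 5:1)
# 2500 kg wood chips (C ~50%, N ~0.125% -> C:N ~= 400:1)
mix = [(500, 0.50, 0.10), (2500, 0.50, 0.00125)]

print(f"blended C:N ~= {blended_cn(mix):.0f}:1")  # roughly 28:1
```

With these assumed numbers, the nitrogen-rich carcass is diluted by the carbon-rich bulking material into the commonly recommended 25:1-40:1 range; too little carbon material and the pile goes anaerobic and odorous, which is the practical reason for layering with wood chips.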
While many composters turn the pile and mechanically aerate it, recent studies at both Texas A&M University and Cornell University have shown that static-pile, in-bin composting of large animals can be carried out successfully. While this technique significantly decreases the labor involved, management is still required to make sure that the necessary temperatures are reached--the NRCS recommends that the compost attain a temperature greater than 130°F for at least five days in order to reduce pathogens.

Although the researchers at Cornell approximate that composting only costs $37.60 per carcass because material can be reused over time, the initial setup can be expensive and labor-intensive. Also, in some instances this method of disposal could attract many animals you don't want to your farm. If you are interested in composting, see the USDA Natural Resources Conservation Service web site in the Further Reading section on the next page for more information and guidelines.

Despite the fact that some government agencies recommend that dead animals be buried in a sanitary landfill, it's usually difficult to arrange because many landfills do not accept them. According to a survey by the Oregon Department of Environmental Quality, 90% of the landfills in California refuse deceased large animals, and only nine of Oregon's 33 municipal solid waste facilities will handle them. If you're searching for an inexpensive method of disposal, however, it is probably worth checking the policy of your local landfill. Most who responded favorably to the survey only charge between $5 and $75 for drop-off, and a few even provide pickup for under $150.

If your horse is insured, you will most likely need to have a complete necropsy (an animal autopsy) performed in order to place your claim. Even when an animal is not insured, many horse owners request a necropsy to understand more about why their horses died.
While the fees for necropsy vary depending on what tests are needed, a complete workup generally costs at least $300. After the necropsy is completed, most facilities dispose of the body by rendering or by incineration. Some are willing to arrange a private cremation.

If your horse dies of unnatural causes, yet another possibility is to donate him to a local veterinary school for research. "We do get donations," says Christopher DeMaula, DVM, PhD, of Cornell University's College of Veterinary Medicine. "Usually it's an animal that has a peculiar disease, but the owners decline to work it up and instead donate it to us for educational purposes or for scientific investigation." To determine if donation is an option for you, contact the pathology department at your nearest veterinary school or equine research institute.

When you are choosing a way to dispose of your horse after he dies, make sure that in addition to considering your emotional needs and financial requirements, you carefully research your local legal regulations. By weighing all aspects of the various options in advance, you will be able to reach a decision that is both responsible and reflective of your personal ethics and beliefs at a time before you are involved in an emotional situation.

Further Reading

Bonhotal, J.; Telega, L.; Petzen, J. Natural Rendering: Composting Livestock Mortality and Butcher Waste--Fact Sheet. Cornell Waste Management Institute Educational Resources, 2002. http://compost.css.cornell.edu/NaturalRenderingFS.pdf.

Ellis, D. "Carcass Disposal Issues in Recent Disasters, Accepted Methods, and Suggested Plan to Mitigate Future Events." Applied Research Project, Department of Political Science, Southwest Texas State University. 17-44, 2001.

Looper, Michael. "Whole Animal Composting of Dairy Cattle." Western Dairy Business, Vol. 82, No. 11, 20, 2001.

Mukhtar, S.; Auvermann, B.; Heflin, K.; Boriack, C. "A Low Maintenance Approach to Large Carcass Composting." American Society of Agricultural Engineers (ASAE) Meeting Paper No. 032263. St. Joseph, Mich.: ASAE, 2003. http://tammi.tamu.edu/carcasscompostasae032263b.pdf.

NRCS, NHCP. Natural Resources Conservation Service (NRCS) Conservation Practice Standard 316--Animal Mortality Facility. March 2003. www.nrcs.usda.gov/technical/Standards/nhcp.html.

NRCS, NHCP. Natural Resources Conservation Service (NRCS) Conservation Practice Standard 317--Composting Facility. October 2003. www.nrcs.usda.gov/technical/Standards/nhcp.html.

Oregon Department of Environmental Quality Solid Waste Program. Survey Results: Disposal and Recovery of Animal Mortality and Byproducts in Oregon. 2001. www.deq.state.or.us/wmc/solwaste/documents/AnimalSurveyResults.pdf.

About the Author

Erika Street is a writer and filmmaker with a BA in animal physiology.
<urn:uuid:8a6e3b59-4c65-4a2f-bf58-83a3d24d1d06>
CC-MAIN-2016-26
http://www.thehorse.com/articles/11449/after-goodbye
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783400031.51/warc/CC-MAIN-20160624155000-00142-ip-10-164-35-72.ec2.internal.warc.gz
en
0.946042
3,428
2.6875
3
innervate

- to supply (a part of the body) with nerves
- to stimulate (a nerve, muscle, etc.) to movement or action

Origin of innervate: from in- + nerve + -ate

transitive verb in·ner·vat·ed, in·ner·vat·ing, in·ner·vates
- To supply (an organ or a body part) with nerves.
- To stimulate (a nerve, muscle, or body part) to action.

innervate (third-person singular simple present innervates, present participle innervating, simple past and past participle innervated)
<urn:uuid:f1716da0-3143-43e6-a1dd-fa70c74b5457>
CC-MAIN-2016-26
http://www.yourdictionary.com/innervate
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395620.56/warc/CC-MAIN-20160624154955-00139-ip-10-164-35-72.ec2.internal.warc.gz
en
0.690456
132
2.953125
3
Kids learn physics at egg catapult competition

FORT HALL — It was a change of pace from holiday activities as area elementary and middle school students learned about physics and had some fun at the Nordic 'Egg' Catapult Challenge at Fort Hall Elementary School on Saturday.

The Idaho State University (ISU) Society of Physics Students and Blackfoot School District 55 hosted the competition, which was made possible by grant money that ISU's Society of Physics Students obtained through the National Society of Physics Students. The grant money covered the cost of the materials that students used to build their own unique catapulting devices.

The hand-built devices, which all passed a safety inspection prior to the competition, were capable of flinging eggs up to 30 feet. Eight teams from various schools competed in the outdoor event while volunteers from the Pocatello Kiwanis Club helped keep them warm with snacks and hot chocolate.

"This physics lesson is actually a lesson about energy," said ISU physics professor Steve Shropshire, who helped coordinate the event. "Energy is pain and energy is scariness. The more energy you have, the more it can hurt you."

Shropshire said that the egg catapulting devices turn one form of energy into another. "It's an underlying scientific principle; the idea of using a lever, one variety of a simple machine, as a delivery system to transfer energy, a small force over a long distance," he explained.

Prizes were awarded for the furthest and most accurate flings. Shropshire said that if funds are available, he hopes to offer the catapult challenge to students again next year.
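The energy-transfer idea Shropshire describes can be turned into a quick back-of-the-envelope estimate: stored energy becomes the egg's kinetic energy, and kinetic energy sets the launch speed and range. The launch energy and egg mass below are assumed values, not measurements from the event; they are chosen so that an ideal drag-free 45° launch lands near the reported 30 feet.

```python
import math

# Illustrative projectile estimate for a catapult-launched egg.
# Energy and mass are ASSUMED values, not data from the competition.

g = 9.81              # gravitational acceleration, m/s^2
egg_mass = 0.06       # kg, roughly a large chicken egg
launch_energy = 2.7   # joules delivered to the egg (assumed)
launch_angle_deg = 45

# Kinetic energy -> launch speed: E = (1/2) m v^2, so v = sqrt(2E/m)
v = math.sqrt(2 * launch_energy / egg_mass)

# Ideal (no air drag) range on level ground: R = v^2 * sin(2*theta) / g
theta = math.radians(launch_angle_deg)
range_m = v**2 * math.sin(2 * theta) / g
range_ft = range_m * 3.28084

print(f"launch speed ~ {v:.1f} m/s, range ~ {range_ft:.0f} ft")
```

A couple of joules is about what a stretched elastic band or small dropped counterweight can deliver, which is why tabletop catapults in this size class plausibly reach tens of feet.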
<urn:uuid:f5580a56-d9e9-4b8b-b6f3-9bafb4326637>
CC-MAIN-2016-26
http://www.am-news.com/content/kids-learn-physics-egg-catapult-competition?quicktabs_4=0
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783403508.34/warc/CC-MAIN-20160624155003-00197-ip-10-164-35-72.ec2.internal.warc.gz
en
0.968229
334
2.859375
3
The Industrial Revolution sparked the first wave of modern science fiction narratives, which used the power of creative storytelling to explore the implications of unfolding technological developments. Science and speculation drove those stories and narratives, allowing people to truly begin to envisage the radical possibilities that lay in the near and distant future.

The technological climate in Africa today bears many similarities to that of Europe and America in the wake of the Industrial Revolution. Emerging technologies are raising standards of living by providing access to new tools of production, scalable energy systems and globalised distribution networks. Information and communications technologies have opened up an unprecedented range of economic opportunities and transformed the lives of millions of people across Africa. These dramatic changes are fertile ground for speculation about the future of the continent — and science fiction can inspire Africans to envision their future with a renewed sense of agency and possibility.

Connecting science with society

Well-crafted science fiction narratives can analyse technical concepts using accessible language and captivating stories, making it easier for the public to engage in contemporary scientific discourse.

In the short film Pumzi, screened at the Sundance film festival in 2010, Kenyan filmmaker Wanuri Kahiu explores water scarcity, already a critical problem in parts of Africa today. Set in a post-apocalyptic East Africa, her film highlights the technological systems required to conserve this vital resource, while telling the story of a woman determined to revive the terrestrial ecosystem.

Science fiction also enables people to visualise the various pathways through which science and technology interact with the underlying framework of society.
Lauren Beukes' award-winning novel Zoo City, for instance, employs magical realism to explore the complex dynamics of life in present-day Johannesburg. One lens through which her novel explores this is the practice of traditional priests called sangomas operating black magic services via the internet. Her narratives cleverly illustrate the often counterintuitive interplay between modern technology and traditional African belief systems.

From imagination to innovation

The sheer scope of imaginary possibilities presented in science fiction imparts a sense of wonder, inspiring young people to pursue scientific and technological innovation as a means to improve their society. Many of the technologies which have redefined the modern world — including mobile phones and the internet — were first imagined in science fiction stories. The ideas and concepts these narratives explored have primed the imaginations of countless scientists and inventors, inspiring them to pursue innovations and discoveries which might otherwise have been inconceivable.

When science fiction captures the imagination, it stimulates critical thought about the scenarios it presents, and shapes public opinion on the issues it addresses. Societies that develop a vibrant discourse around scientific progress are better placed to understand the developmental implications of public investment in science and technology.

The golden age of science fiction in the Western world — from the 1940s to the 1960s — first brought exotic stories of intergalactic space travel to mainstream media. The widespread interest surrounding these narratives, coupled with the tensions of the Cold War, rallied public support for ambitious space programmes. In a similar way, the new wave of African science fiction narratives that has emerged in the past decade broadly attempts to address Africa's pressing challenges while illuminating the role of science and technology in addressing these challenges.
Deji Olukotun's novel Nigerians in Space, for example, is simultaneously uplifting and heartbreaking as it tells the story of a Nigerian space scientist in the USA who returns to his homeland to pioneer an African space programme, only to find himself in a deadly web of intrigue and corruption. The story realistically portrays the political obstacles along the path of indigenous technological development, while conveying the significance of such a grand scientific endeavour being undertaken by an African nation.

Creative feedback loop

By placing Africans at the heart of futurist narratives and telling stories which are relevant to their socioeconomic context, science fiction is beginning to gain traction among African creatives and audiences who previously may not have taken particular interest in this genre. And as African societies become more technologically advanced, the continual disruption of societal dynamics by innovations will inspire even more speculation on the future. Ultimately, this will accelerate the creative feedback loop.

Perhaps the most important influence of science fiction will be to increase awareness of the critical opportunity Africans now have to circumvent the pitfalls associated with industrialisation. Africa is uniquely positioned to pioneer new models of sustainable economic growth and development by harnessing the full potential of innovations in renewable energy production, smart power grids, recycling and urban planning.

Across virtually all scientific fields — from space travel and bioengineering to the rise of the internet and artificial intelligence — science fiction narratives have played a significant role in catalysing technological innovation across the developed world. In the same way, science fiction can play a critical role in Africa's development by propagating narratives in mainstream media that recognise the value of indigenous innovation and youth participation in the process of technological revolution.
Jonathan Dotse holds a bachelor's degree in management information systems from Ashesi University College, Ghana. He is a techno-progressive promoting science and speculative fiction for Africa, working on his debut novel and discussing the future of African science fiction at afrocyberpunk.com. He can be contacted at [email protected]
<urn:uuid:b5d4748d-3bd2-4277-846e-99c864376962>
CC-MAIN-2016-26
http://www.scidev.net/global/innovation/opinion/wave-african-sci-fi-inspire-innovation.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783403826.29/warc/CC-MAIN-20160624155003-00003-ip-10-164-35-72.ec2.internal.warc.gz
en
0.917868
1,070
3.125
3
Scientific Publications by FDA Staff

Clin Infect Dis 2012 Mar;54(6):832-40

Reed JL, Scott DE, Bray M

Eczema vaccinatum (EV) is a complication of smallpox vaccination that can occur in persons with eczema/atopic dermatitis (AD), in which vaccinia virus disseminates to cause an extensive rash and systemic illness. Because persons with eczema are deferred from vaccination, only a single, accidentally transmitted case of EV has been described in the medical literature since military vaccination was resumed in the United States in 2002. To enhance understanding of EV, we review its history during the era of universal vaccination and discuss its relationship to complications in persons with other diseases or injuries of the skin. We then discuss current concepts of the pathophysiology of AD, noting how defective skin barrier function, epidermal hyperplasia, and abnormal immune responses favor the spread of poxviral infection, and identify a number of unanswered questions about EV. We conclude by considering how its occurrence might be minimized in the event of a return to universal vaccination.

Category: Journal Article, Review
PubMed ID: 22291103 | DOI: 10.1093/cid/cir952
Includes FDA Authors from Scientific Area(s): Biologics
Entry Created: 2012-01-21 | Entry Last Modified: 2012-08-29
<urn:uuid:2c3c4676-c4e7-40ad-97ba-ead39b9fae14>
CC-MAIN-2016-26
http://www.accessdata.fda.gov/scripts/publications/search_result_record.cfm?id=39932
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783400572.45/warc/CC-MAIN-20160624155000-00086-ip-10-164-35-72.ec2.internal.warc.gz
en
0.895281
298
2.578125
3
‘Greenland used to be green’ – Don’t judge a book by its cover, much less a land by its name
(Part of the How to Talk to a Global Warming Skeptic guide)
Objection: When the Vikings settled it, Greenland was a lovely, hospitable island, not the frozen wasteland it is today. It was not until the Little Ice Age that it got so cold they abandoned it.
Answer: First, Greenland is part of a single region. It cannot necessarily be taken to represent a global climate shift. See the post on the Medieval Warm Period for a global perspective on this time period. Briefly, the available proxy evidence indicates that global warmth during this period was not particularly pronounced, though some regions may have experienced greater warming than others.
Second, a quick reality check shows that Greenland’s ice cap is hundreds of thousands of years old and covers over 80% of the island. The vast majority of land not under the ice sheet is rock and permafrost in the far north. How different could it have been just 1,000 years ago?
Below is a brief account of the Viking settlement, based on Jared Diamond’s “Collapse”. Greenland was called Greenland by Erik the Red (was he red?), who was in exile and wanted to attract people to a new colony. He thought you should give a land a good name so people would want to go there! It likely was a bit warmer when he landed for the first time than it was when the last settlers starved, with climate change, or at least some bad weather, being a major factor. But it was never lush, and their existence was always harsh and meager, especially due to the Vikings’ disdain for other peoples and ways of living. They attempted to live a European lifestyle in an arctic climate, side by side with Inuit who easily outlasted them. They starved surrounded by oceans and yet never ate fish! (Note: this was not a typical European behavior, and is a bit of a mystery to this day.)
Instead of hunting whales in kayaks, they farmed cattle, goats, and sheep — despite having to keep them in a barn 24 hours a day, 7 days a week, for a full 5 months out of the year. It was a constant challenge to get enough fodder for the winter. Starvation of the animals was frequent, emaciation routine. Grazing requirements and growing fodder for the winter led to overexploitation of pastures, erosion, and the need to go further and further afield to sustain the animals. Deforestation for pastures and firewood proceeded at unsustainable rates. After a couple of centuries, it led to such desperate measures as cutting precious sod for housing construction and even burning it for cooking and heating fuel. When finally confronted with several severe winters in a row, they, along with the little remaining livestock, simply starved before spring arrived.
The moral of the story for the climate controversy? Much as you cannot judge a book by its cover, you can’t judge the climate of Greenland by its name.
A bit of related trivia, and further indication of the Vikings’ stubborn reluctance to learn from the Inuit: there is no evidence of any trade whatsoever, despite centuries of cohabitation. In fact, the first of only three Norse accounts of encounters with the natives refers to them as “skraelings” (wretches), and describes matter-of-factly how strangely they bleed when stabbed. How’s that for diplomacy?
See also the entry on Vineland if it happens to come up.
<urn:uuid:a47da106-16ff-4468-bac2-4d59b141bdfb>
CC-MAIN-2016-26
http://grist.org/climate-energy/greenland-used-to-be-green/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393442.26/warc/CC-MAIN-20160624154953-00177-ip-10-164-35-72.ec2.internal.warc.gz
en
0.971339
755
3.34375
3
The research team's findings are being published in the scientific journal PNAS, Proceedings of the National Academy of Sciences. The different pigments in a leaf are bound to different proteins. Most of the chlorophyll, which lends plants their green color, is bound to a protein called LHCII. Every individual protein is incredibly small (nearly a million times smaller than the human eye can perceive), but it is possible to see them if there are many of them together. LHCII is probably the most commonly prevalent membrane protein on earth. There is so much of it, in fact, that it is visible from space — in satellite images of the earth, the tropical and temperate forest areas are green. In the tropics there is no autumn, but in our climate deciduous trees and other perennials lose their chlorophyll in the fall. The reason for this is that the proteins in the leaves contain amino acids that the plant needs to recycle. The leaves' proteins are therefore degraded and the amino acids are stored in the trunk, branches, and roots until next year, when they are used as building blocks for new leaves. Other proteins, so-called proteases, have the task of degrading these proteins, and there is extensive research under way in this field. For example, the 2004 Nobel Prize went to three scientists who work with proteases. Proteases are extremely important for all living organisms, but the proteases that break down chlorophyll-binding proteins are the only ones whose activities can be observed from space. Working with the model plant mouse-ear cress (Arabidopsis thaliana), a research team at Umeå Plant Science Center (UPSC), in association with a Polish scientist, has identified a protease that degrades LHCII. The researchers assumed that the protease belonged to a certain family of proteases, the so-called FtsH proteases, and they used genetically modified mouse-ear cress plants in which various FtsH proteases had been removed.
One of these plants had a severely impaired ability to degrade LHCII. This led the researchers to conclude that the protease FtsH6 helps degrade LHCII. The article is titled "AtFtsH6 is involved in the degradation of the light harvesting complex II during high light acclimation and senescence." The authors are Agnieszka Zelisko, Maribel Garcia-Lorenzo, Grzegorz Jackowski, Stefan Jansson, and Christiane Funk. Grzegorz Jackowski works at Adam Mickiewicz University in Poznan; among the UPSC scientists, Stefan Jansson works at the Department of Plant Physiology and the others at the Department of Biochemistry. The article is being published this week in the Early Edition articles of Proceedings of the National Academy of Sciences of the USA (http://www.pnas.org/papbyrecent.shtml). Umeå Plant Science Center, UPSC, was established in 1999 in collaboration between the Department of Plant Physiology at Umeå University and the Department of Forest Genetics and Plant Physiology at the Swedish University of Agricultural Sciences (SLU) in Umeå.
<urn:uuid:931536f5-397a-4f98-9b34-27b6d9f76642>
CC-MAIN-2016-26
http://www.bio-medicine.org/biology-news/Protein-behind-autumn-color-splendor-identified-1727-1/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783403825.35/warc/CC-MAIN-20160624155003-00020-ip-10-164-35-72.ec2.internal.warc.gz
en
0.941805
660
4.09375
4
Rav Mon E virus
Also called W32/Rjump, this virus is known to open a back door on a computer that runs Microsoft Windows, then create a copy of itself in the Windows system directory of the computer. It also creates a log file that includes the port number on which the back door component listens. The Rav Mon E virus enables hackers to gain access to the computer's programs and files once it has become infected. If you are using an anti-virus program that is up to date, the Rav Mon E virus can usually be detected before it does any damage. This virus is most commonly spread through e-mail attachments, although it can also be spread through portable devices such as multimedia players and digital cameras. Apple reported that in October of 2006 many of its video iPods had been shipped out with the Rav Mon E virus already installed on them.
<urn:uuid:7886e55d-3068-4abc-8878-80bc1b99762e>
CC-MAIN-2016-26
http://www.webopedia.com/TERM/R/Rav_Mon_E_virus.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393463.1/warc/CC-MAIN-20160624154953-00067-ip-10-164-35-72.ec2.internal.warc.gz
en
0.907016
389
3.515625
4
In July 1981 it was proven that Rubik’s Cube could be solved from any position in at most 52 moves. Now, three years after the official 2007 result that Rubik’s Cube can be solved in 26 moves, a new record of just 20 moves has been proven. This recent revelation came from a joint project between big-brained mathematician Morley Davidson, from Kent State, Google engineer Herbert Kociemba, and a couple of other guys who were dedicated to this work that really won’t help them pay their rent. Regardless, countless hours of number crunching were logged on Google’s own internal computer system. You can read more about this new Rubik’s Cube record at TGDaily. Happy Puzzling!
Tags: Rubik’s Cube
<urn:uuid:39975a2d-74d3-49c7-8344-59257957ec7b>
CC-MAIN-2016-26
http://www.passionforpuzzles.com/2010/08/rubiks-cube-can-be-solved-in-20-moves.php
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396029.85/warc/CC-MAIN-20160624154956-00085-ip-10-164-35-72.ec2.internal.warc.gz
en
0.955225
164
2.78125
3
Wake Forest University Baptist Medical Center is the only facility in the Southeast offering a new procedure for malignant tumors previously thought untreatable. The procedure is called Tumor Ablation using Radiofrequency Energy. It involves inserting a special needle into the tumor, using ultrasound equipment as a guide, to destroy the tissue and obliterate the tumor. "The tip of the needle emits radiofrequency energy, akin to microwave energy, and the tissue around the needle tip is destroyed," said Ronald Zagoria, M.D., professor of radiologic sciences. "This technique is mostly being used for patients with known liver cancers, patients who don't want or can't tolerate surgery or for patients who have several masses. This procedure allows us to destroy the tumor in a minimally-invasive way." For liver lesions or bone tumors, the procedure is an excellent alternative, according to Zagoria. However, it is not designed to replace surgery. "I would not want a patient to undergo this procedure if they have a tumor that can be easily removed by surgery," he said. "There are a lot of tumors that can't be surgically removed or do not respond to radiation therapy or chemotherapy and this is an alternative -- another weapon we have in our armory." Fortunately, size of the tumor does not matter, according to Zagoria. "For a large tumor we will actually treat numerous sides of it so that we try to get the entire tumor. At times, one 12-minute treatment is not enough. We can also re-treat the same area if the tumor returns." The treatment is an outpatient procedure: doctors use sedation during the procedure, and patients often can treat any side effects with over-the-counter pain medication. "Patients often resume normal activities that night," Zagoria said. Tumor Ablation using Radiofrequency Energy is currently used most often for malignant tumors in the liver; however, physicians use this for other tumors, including lung, renal, head and neck, or bone tumors.
Currently, the procedure is performed on both children and adults at the medical center. "For osteoid osteomas, this procedure is probably preferred over other treatments," he said. "And the best part is kids can return to the playground the very next day."
<urn:uuid:a5ecc3a8-4de7-481a-8017-e15183b5c2bd>
CC-MAIN-2016-26
http://www.wakehealth.edu/News-Releases/1999/New_Cancer_Procedure_Offers_Hope.htm
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395160.19/warc/CC-MAIN-20160624154955-00016-ip-10-164-35-72.ec2.internal.warc.gz
en
0.954625
471
2.578125
3
We have used several types of air photographs to study Greek and Roman city planning and land organization in the Corinthia. There exist both low altitude as well as high altitude photographs of the area, as well as some very low level balloon photographs. Low altitude air photographs, at an approximate scale of 1:6000, taken in 1963 by the Hellenic Air Force, correspond very well with the 1:2000 topographical maps, which were made in the same year using the air survey. The air photographs have been useful for a number of reasons. Shadows and vegetation or soil markings highlighting unexcavated underground features in the landscape, such as roads, ditches or structures, are visible. These features can be helpful when put together with other forms of information, such as the surveyed and excavated roadways. Before performing any analysis of the photographs it is necessary to first rectify their geometry against the existing maps and surveyed data. Therefore, each photograph is scanned at a resolution of 400 dpi (dots per inch) using a desktop flatbed scanner (UMAX PowerLook 2100XL) and rectified using the resampling program included in CAD Overlay (discussed below under GIS applications). The control points needed for this operation are taken from the topographical maps. The corners of buildings or the intersection of field boundaries have proven to be most precise. Once the photograph has been successfully rectified, it is possible to display it as a backdrop to the AutoCAD drawings using CAD Overlay. In this way one is able to trace over the crop and soil marks and study them in conjunction with other surveyed or map data. High altitude air photographs at a scale of approximately 1:37,500, taken in 1987 by the Greek Army Mapping Service, have helped us to understand the overall pattern of the roads and field boundaries in the larger terrain surrounding Ancient Corinth.
Control points necessary to rectify these photographs are taken from the topographical maps or satellite images, where we do not have a detailed map of the entire area covered by the photograph. A series of low level balloon photographs at an approximate scale of 1:1750 taken by Dr. and Mrs. J. Wilson Myers in 1986 have greatly assisted in the identification of details in the landscape at the Roman harbor of Lechaeum. These balloon photographs have been successfully rectified to both the low level air photographs as well as the 1:2000 scale topographical maps.
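The control-point rectification described above can be modeled as fitting a six-parameter affine transform (scale, rotation, shear and translation) that maps photo coordinates onto surveyed map coordinates by least squares. The sketch below only illustrates that principle — the class name and the synthetic control points are invented for this example, and the project's actual resampling was performed in CAD Overlay rather than custom code.

```java
// Illustrative sketch (not the project's software): estimate an affine
// transform map = a*px + b*py + c from ground-control points by least
// squares, one axis (easting or northing) at a time.
public class AffineRectify {

    // Solve a 3x3 linear system A*x = b by Gaussian elimination with
    // partial pivoting; A and b are modified in place.
    static double[] solve3(double[][] A, double[] b) {
        int n = 3;
        for (int col = 0; col < n; col++) {
            int pivot = col;
            for (int r = col + 1; r < n; r++)
                if (Math.abs(A[r][col]) > Math.abs(A[pivot][col])) pivot = r;
            double[] tmpRow = A[col]; A[col] = A[pivot]; A[pivot] = tmpRow;
            double tmp = b[col]; b[col] = b[pivot]; b[pivot] = tmp;
            for (int r = col + 1; r < n; r++) {
                double f = A[r][col] / A[col][col];
                for (int c = col; c < n; c++) A[r][c] -= f * A[col][c];
                b[r] -= f * b[col];
            }
        }
        double[] x = new double[n];
        for (int r = n - 1; r >= 0; r--) {
            double s = b[r];
            for (int c = r + 1; c < n; c++) s -= A[r][c] * x[c];
            x[r] = s / A[r][r];
        }
        return x;
    }

    // Fit target = a*px + b*py + c by least squares via the normal
    // equations; px, py are pixel coordinates of the control points,
    // target is the corresponding map coordinate on one axis.
    static double[] fitAxis(double[] px, double[] py, double[] target) {
        double[][] AtA = new double[3][3];
        double[] Atb = new double[3];
        for (int i = 0; i < px.length; i++) {
            double[] row = { px[i], py[i], 1.0 };
            for (int r = 0; r < 3; r++) {
                Atb[r] += row[r] * target[i];
                for (int c = 0; c < 3; c++) AtA[r][c] += row[r] * row[c];
            }
        }
        return solve3(AtA, Atb);
    }
}
```

With three or more well-spread control points (building corners, field-boundary intersections), fitAxis is called once for eastings and once for northings; the residuals at the control points then give a quick check of how well the photograph fits the survey.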
<urn:uuid:5a8c3d0c-7edd-4b04-b1ba-d938a126c5b4>
CC-MAIN-2016-26
http://corinthcomputerproject.org/methodology/aerial-imagery/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395679.18/warc/CC-MAIN-20160624154955-00133-ip-10-164-35-72.ec2.internal.warc.gz
en
0.941476
495
3.265625
3
some plants such as arrow arum and swamp rose-mallow are "obligate" wetland plants (growing only in wetlands), while "facultative" wetland plants such as pawpaw and sycamore trees can grow on flood plains away from wetland soils
Technically, wetlands are "areas that are inundated or saturated by surface or ground water at a frequency and duration sufficient to support, and that under normal circumstances do support, a prevalence of vegetation typically adapted for life in saturated soil conditions. Wetlands generally include swamps, marshes, bogs, and similar areas."1 Wetlands do not have to have standing water 12 months of the year, but soils and plants must reflect the frequently-high water table.
Virginia wetland statistics (calculated at the end of the 1970's):2
1 million acres of wetlands of all types
72% are palustrine vegetated wetlands ("palustrine" wetlands are located in fields/forests and on the edge of streams - but not adjacent to large lakes or on the edge of tidal waters)
23% are estuarine wetlands ("estuarine" wetlands are associated with tidal waters, east of the Fall Line)
72% of all wetlands are located in the Coastal Plain
22% of all wetlands are located in the Piedmont
6% of all wetlands are in the other physiographic provinces
Wetlands are now a valued ecological resource in Virginia - but in the 400 years of settlement after Jamestown, 42% of the natural wetlands were drained or filled for agriculture, industrial facilities, roads/ports, and urban/suburban development. In particular, estuarine and palustrine vegetated wetlands were lost. At the same time, new natural beaver ponds and artificial construction of farm ponds and reservoirs created an increase in open water areas.3 Today, government policy is to ensure "no net loss" of wetlands, ideally by avoiding alteration of a natural wetland.
As described by the Environmental Protection Agency, "Far from being useless, disease-ridden places, wetlands provide values that no other ecosystem can, including natural water quality improvement, flood protection, shoreline erosion control, opportunities for recreation and aesthetic appreciation, and natural products for our use at no cost."4 Projects that destroy even tiny wetland areas are required to mitigate the loss by creation of artificial wetlands of an equivalent type. Destruction of a forested wetland requires two acres of new forested wetland for every acre destroyed, while destruction of a scrub-shrub wetland must be mitigated by a 1.5-to-1 ratio. The Virginia Department of Environmental Quality (DEQ) issues Virginia Water Protection Permits for non-tidal wetlands, and the Virginia Marine Resources Commission (VMRC) manages changes to tidal wetlands. The Federal government authority to regulate wetlands is based on Section 404 of the Clean Water Act, processed by the US Army Corps of Engineers. Section 404 requires a permit before dredged or fill material may be discharged into waters of the United States. According to the state summary of the National Water Summary on Wetland Resources (U.S. Geological Survey Water-Supply Paper 2425): Virginia still has 144 named swamps, but has lost over 40% of its wetlands since the 1780's. The "lost" acres have been converted into upland (filled in with dirt and other materials) or open water (through dredging or erosion). The Army Corps of Engineers issues permits for dredging and filling wetlands. These permits are required by Section 404 of the Clean Water Act (Public Law 95-217). A list of public notices being considered by the Norfolk District will almost always include a few announcements about 404 permit applications in Virginia. 
The Federal government's claim to authority to control even minor land disturbance is based on the constitutional authority to regulate navigable waterways (including Section 10 of the Rivers and Harbors Act passed in 1899). As described in Section 320.2 of the Code of Federal Regulations: Filling wetlands will change the way water runs off into the navigable streams, such as the James River. The Corps is notorious in some quarters for its philosophy of building structures rather than preserving the natural environment, even when the economic (as well as environmental) costs of a waterway or dam exceed the benefits - but according to the Federal rules of the game, it's the Corps that determines which permits to approve or reject. There is a key difference between the definitions of "biological" and "jurisdictional" wetlands. Wetland protection became a political issue in the Bush/Quayle administration, with claims that the "no net loss" commitment was a fraud because the definition of wetland was being narrowed to just areas with standing water throughout the year. The Federal agencies, including the Department of Agriculture farming bureaus, the Department of the Interior wildlife conservation bureaus, and the Environmental Protection Agency, were unable to adopt one standard manual for mapping wetlands consistently. It requires professional judgment to determine the exact edge of a wetland. "Delineation" of a wetland boundary is not a cookbook process that any landowner can do. The Corps definition of wetlands recognizes that areas that are dry for much of the year can still be classified as wetlands. The Corps specialists will conduct on-site reviews to determine exactly where the regulations require a permit based on the "vegetation, soil, and hydrology." So cutting down the cattails to disguise the existence of a wetland won't work. The Corps advises landowners to request consultation if, in the Corps' definition, an:
<urn:uuid:67c03e28-bfdc-477b-ab7d-00aacacf0c1c>
CC-MAIN-2016-26
http://www.virginiaplaces.org/natural/wetlands.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397842.93/warc/CC-MAIN-20160624154957-00089-ip-10-164-35-72.ec2.internal.warc.gz
en
0.941667
1,162
3.578125
4
Learn the health benefits of exercise! This site provides facts, a video clip, and PowerPoint programs about healthy living through exercise. Students can learn how to create their own jump rope routines through diagrams. Included is an eThemes resource on Health: Physical Education. These sites provide an overview of different health topics geared toward students. Sites are for teachers who are interested in promoting physical activity. Games, videos, and lesson plans can be found on these sites. Included is an eThemes resource on obesity.
<urn:uuid:9cf9f97d-ea64-4fb2-a63c-3244dd209158>
CC-MAIN-2016-26
https://ethemes.missouri.edu/themes/1893?locale=en
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783392099.27/warc/CC-MAIN-20160624154952-00094-ip-10-164-35-72.ec2.internal.warc.gz
en
0.947415
103
2.5625
3
The sun’s magnetic field can play havoc with communication technology. Now scientists have figured out one of the underlying processes of how the field forms—a finding that may help predict its behavior. The mechanism, known as meridional flow, works something like a conveyor belt. Magnetic plasma migrates north to south on the sun’s surface, from the equator to the poles, and then cycles into the sun’s interior on its way back to the equator. The rate and depth beneath the surface of the sun at which this process occurs is critical for predicting the sun’s magnetic and flare activity, but has remained largely unknown until now. Using the Helioseismic and Magnetic Imager, an instrument onboard NASA’s Solar Dynamic Observatory satellite, scientists tracked solar waves in much the way seismologists would study seismic movements beneath the surface of the Earth. Every 45 seconds for the past two years, the HMI’s Doppler radar snapped images of plasma waves moving across the sun’s surface. By identifying patterns of sets of waves, the scientists could recognize how the solar materials move from the sun’s equator toward the poles, and how they return to the equator through the sun’s interior. “Once we understood how long it takes the wave to pass across the exterior, we determined how fast it moves inside, and thus how deep it goes,” says Junwei Zhao, a senior research scientist at the Hansen Experimental Physics Laboratory at Stanford University. Although solar physicists have long hypothesized such a mechanism, at least in general terms, the new observations, published in The Astrophysical Journal Letters, redefine solar currents in a few ways. First, the returning currents occur 100,000 kilometers below the surface of the sun, roughly half as deep as suspected. As such, solar materials pass through the interior and return to the equator more quickly than hypothesized. 
More startling is that the equator-ward flow is actually sandwiched between two "layers" of pole-ward currents, a more complicated mechanism than previously thought, and one that could help refine predictions of the sun's activity, Zhao says. "Considered together, this means that our previously held beliefs about the solar cycle are not totally accurate, and that we may need to make accommodations." For example, some computer models projected that the current solar cycle would be strong, but observations have since shown it is actually much weaker than the previous cycle. This inconsistency could be due to the previously unknown inaccuracies of the meridional circulation mechanism used in the simulations. Improving the accuracy of simulations will produce a better picture of fluctuations of the sun's magnetic field, which can interfere with satellites and communications technology on Earth. The sun's magnetic field resets every 11 years—the next reset will occur sometime in the next few months—and there is evidence that changes in the meridional flow can influence how the magnetic field evolves during a particular cycle, Zhao says. "We want to continue monitoring variations of the meridional flow so that we can better predict the next solar cycle, when it will come and how active it will be." Source: Stanford University
<urn:uuid:ddddb6e7-c284-4790-8c60-6ca8f170691a>
CC-MAIN-2016-26
http://www.futurity.org/plasma-moves-suns-conveyor-belt/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398869.97/warc/CC-MAIN-20160624154958-00045-ip-10-164-35-72.ec2.internal.warc.gz
en
0.93211
678
4.375
4
Non-trivial computing problems often require storing lists of items. Often these items can be referred to by their position in the list. Sometimes this ordering is natural, as in a list of the first ten people to arrive at a sale. The first person would be item one in the list, the second person to arrive would be item two, and so on. Other times the ordering doesn't really mean anything, but it's still convenient to be able to assign each item a unique number and enumerate all the items in a list by counting out the numbers. There are many ways to store lists of items, including linked lists, sets, hash tables, binary trees and arrays. Which one you choose depends on the requirements of your application and the nature of your data. Java provides classes for many of these ways to store data. In this chapter you'll explore the simplest and most common, the array. In this chapter you learn about
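As a taste of what's ahead, the first-arrivals list from the opening paragraph might look like this in Java (a hypothetical sketch — the class and names here are invented for illustration, not taken from the chapter):

```java
// Hypothetical sketch of the "first people to arrive at a sale" list:
// each arrival is stored in an array, and an item's position number is
// simply one more than its array index (Java arrays index from zero).
public class ArrivalList {

    // Returns the 1-based position of a name in the list, or -1 if absent.
    static int position(String[] arrivals, String name) {
        for (int i = 0; i < arrivals.length; i++) {
            if (name.equals(arrivals[i])) {
                return i + 1;   // convert 0-based index to position number
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        String[] arrivals = { "Ada", "Grace", "Alan" };
        // Grace was the second person to arrive.
        System.out.println(position(arrivals, "Grace"));   // prints 2
    }
}
```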
<urn:uuid:a69886ff-8482-4f2c-b7aa-b3dbe06a3538>
CC-MAIN-2016-26
http://www.ibiblio.org/java/books/jdr/examples/9/index.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398216.41/warc/CC-MAIN-20160624154958-00187-ip-10-164-35-72.ec2.internal.warc.gz
en
0.94332
190
3.859375
4
By and large, the Internet is a decentralized system not owned or run by one single entity. At its most basic form, the Internet is a bunch of networks connected to each other with limited regulatory oversight. Governments can control access to the Internet, and in some cases even influence what appears on the web, such as in the case of domain takeovers or censorship requests, but they cannot control the Internet itself. The International Telecommunication Union (ITU), a United Nations agency, has indicated it will try to assert its regulatory authority over the Internet. Google, and many other pro-Internet advocates, feel this threatens the very existence of the Internet and they want you to speak up to prevent this from happening. There are two main reasons why Google and other advocates oppose handing over control to the ITU:
- First and foremost, Google fears giving the ITU power over the Internet would allow governments to dictate how the Internet is used. The billions of people that actually use the Internet will have little to no say.
- Secondly, and perhaps more importantly, not all governments around the world support the idea of a censor-free Internet. In fact, I dare say no government in the world supports the idea of a censor-free Internet. While some may take part in more excessive censorship than others (e.g. China), every government (yes, including the American government) takes part in digital censorship to some degree. So to give control of the Internet to governments would be disastrous. At least, that is in theory (hopefully we will never find out in fact). In Google's own words:
A free and open world depends on a free and open Internet. Governments alone, working behind closed doors, should not direct its future. The billions of people around the globe who use the Internet should have a voice.
Because of this opposition to ITU control of the Internet, Google is running a small text ad on Google.com and its international variants asking people to stand up for the “free and open Internet”. The ad leads to Google’s Take Action page that provides more information about what this move would mean plus a prompt asking you “add your voice in support of the free and open Internet”. Check out the page if you are interested (you should be interested).
<urn:uuid:2f08e107-d37e-49f9-8039-93d92d40e6ad>
CC-MAIN-2016-26
http://dottech.org/89276/google-asks-everyone-to-support-a-free-and-open-internet/print
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395546.12/warc/CC-MAIN-20160624154955-00082-ip-10-164-35-72.ec2.internal.warc.gz
en
0.943754
472
3.046875
3
There’s a lot of talk about organic foods these days. The market for organics is growing, yet many question whether organic is worth the extra money. But a new study from Newcastle University in England provides what the Environmental Working Group (EWG) is calling the “most compelling and comprehensive evidence that organic crops are more nutritious than their conventional counterparts.” According to EWG, “organic produce and grains have higher concentrations of antioxidants, lower levels of cadmium and nitrogen compounds and fewer pesticide residues. The scientists reached their conclusions after a review of 343 previously published, peer-reviewed studies comparing organic and conventionally grown crops.” The important area of note for this would be the antioxidant count for the foods. According to a WSU press release about the findings, “consumers who switch to organic fruits, vegetables, and cereals would get 20 to 40 percent more antioxidants. That’s the equivalent of about two extra portions of fruit and vegetables a day, with no increase in caloric intake.” What does this mean for organics? Well, put simply: it could mean they’re better for you, plain and simple. “There should be no question now about whether organic agriculture is better for the environment and public health,” said Ken Cook, the president of EWG. “This study breaks it down for consumers who want science-based evidence on the nutritional benefits of crops grown without pesticides or synthetic fertilizers.” It seems the evidence that it may just be worth it to try organics really is mounting! If this pushes you over the edge and prompts you to want to load up on more organics, check out our guide to saving money on organic foods here. Here’s to the rise of better-for-us foods and better health for all! Image source: Wikimedia Commons
<urn:uuid:268a71f8-3988-4dcf-9788-2ba3a6928193>
CC-MAIN-2016-26
http://www.onegreenplanet.org/news/new-study-confirms-organic-food-is-more-nutritious/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398216.41/warc/CC-MAIN-20160624154958-00020-ip-10-164-35-72.ec2.internal.warc.gz
en
0.93495
384
2.828125
3
unfolded protein response

Saturday 14 January 2006

The endoplasmic reticulum (ER) responds to the accumulation of unfolded proteins in its lumen (ER stress) by activating intracellular signal transduction pathways, collectively called the unfolded protein response (UPR). Together, at least three mechanistically distinct arms of the UPR regulate the expression of numerous genes that function within the secretory pathway but also affect broad aspects of cell fate and the metabolism of proteins, amino acids and lipids. The arms of the UPR are integrated to provide a response that remodels the secretory apparatus and aligns cellular physiology to the demands imposed by ER stress.

Folding in the ER must couple protein-synthesis pathways operating outside of the compartment with ER-assisted folding (ERAF) pathways in the lumen. Chaperone-mediated folding imbalances that are associated with numerous misfolding diseases, including diabetes, trigger the UPR, which uses both transcriptional and translational pathways to correct the problem. Small-molecule inhibitors could be useful to help rebalance protein synthesis with ERAF pathways through the ribosomal initiation factor eIF2alpha. Reprogramming stress pathways with drugs provides a potential new approach for balancing ER-protein load with cellular-folding capacity, thus correcting disease.

References:
- Kaufman RJ. Orchestrating the unfolded protein response in health and disease. J Clin Invest 110:1389, 2002.
- Patil C, Walter P. Intracellular signaling from the endoplasmic reticulum to the nucleus: the unfolded protein response in yeast and mammals. Curr Opin Cell Biol 13:349, 2001.
- Ma Y, Hendershot LM. The unfolding tale of the unfolded protein response. Cell 107:827, 2001.
<urn:uuid:d98c25dd-cb42-4bab-b793-793dc64a5320>
CC-MAIN-2016-26
http://www.humpath.com/spip.php?article8157
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783404405.88/warc/CC-MAIN-20160624155004-00151-ip-10-164-35-72.ec2.internal.warc.gz
en
0.884753
390
2.671875
3
The Spitzer Space Infrared Telescope Facility
Fourth of NASA's Great Observatories for Space Astrophysics

The Spitzer Space Telescope is orbiting the Sun on a five-year mission to reveal previously hidden, dusty regions of the Universe as well as cold and distant objects. Spitzer is the fourth and last of NASA's series of Great Observatories in Space, a program that has included Hubble Space Telescope, Chandra X-ray Observatory and Compton Gamma Ray Observatory. Before it was launched August 25, 2003, the Spitzer Space Telescope had been known as the Space Infrared Telescope Facility (SIRTF).

Lyman Spitzer. Once in space, SIRTF was renamed Spitzer Space Telescope for Ohio native and astrophysicist Lyman Spitzer, Jr., who lived from 1914-1997. Spitzer was one of the great scientists of the 20th century. He contributed to human knowledge of astronomy, thermonuclear fusion, stellar dynamics, and plasma physics. Spitzer was the first to propose placing a large telescope in space. He was the driving force behind development of the Hubble Space Telescope.

About the Spitzer

The Spitzer Space Telescope was lofted to Earth orbit on a Delta rocket from Kennedy Space Center's launch complex 17-B at Cape Canaveral, Florida, on August 25, 2003. It was the 300th spaceflight for the Delta rocket family. Spitzer (SIRTF) is a 3,000-lb., 0.85-meter, cryogenically cooled space telescope operating as an unmanned infrared astronomy observatory in a solar orbit far beyond the Earth and the Moon. Because most of the infrared radiation arriving at Earth is blocked by our planet's atmosphere and cannot be observed from the ground, astronomers need Spitzer's infrared sensitivity above Earth's atmosphere to record what they call "the Old, the Cold, and the Dirty," meaning the oldest and coldest things most blocked from our vision across the Universe.

Largest Infrared Telescope

Spitzer is the largest infrared telescope ever launched into space.
The Spitzer satellite carries a 0.85-meter telescope and three cryogenically-cooled science instruments. The science instruments are very sensitive, allowing astronomers to peer into regions of the Universe hidden from optical telescopes. Hot dust. Much of deep space is filled with vast, dense clouds of gas and dust which block our view of visible light. Fortunately, infrared light can penetrate the clouds of dust and gas, allowing us to see into the centers of galaxies and uncover stars and planetary systems forming. Cool stars. Infrared light also reveals cooler objects across the Universe. Some of which are smaller stars too dim to be seen in visible light, planets around other stars, and giant clouds of molecules. In fact, many organic and inorganic molecules in space are seen best in infrared light. Protecting the telescope. Because infrared energy is heat, the telescope must be cooled to a temperature near absolute zero to see infrared unobstructed by heat generated by the telescope itself. Absolute zero is a temperature of –459 degrees Fahrenheit or –273 degrees Celsius. The telescope also must be protected from the heat of the Sun as well as infrared radiated from Earth. Spitzer has a solar shield and is in an unusual Earth-trailing solar orbit, which places the satellite far enough away from the Earth to allow the telescope to cool without using large amounts of cryogen coolant. Dewar. A dewar is an insulated container used to store liquefied gases. It has a double wall with a vacuum between the walls and silvered surfaces facing the vacuum. Spitzer's dewar was topped off with 90 gallons of super-cold liquid helium. That's the cryogen that chills the infrared detectors to a temperature near absolute zero, so they can achieve the highest level of sensitivity to the infrared spectrum of light. Spitzer has sufficient helium to keep it cold and working until about 2008 or 2009. It is so far from Earth that it won't be refueled when the helium runs out. 
The Spitzer infrared telescope's first images quickly re-confirmed that celestial objects viewed through ground-based telescopes and the Hubble Space Telescope look quite different when seen in infrared light.

Origins. NASA's Origins Program seeks to answer the questions, "Where did we come from? Are we alone?" The agency describes the Spitzer observatory as a cornerstone of the Origins Program. The telescope's ability to search out low-temperature objects helps in the search for planetary systems in the making, some of which may have planets like Earth harboring life. Spitzer's main objectives are:
- physical studies of the planetary system
- detailed study of cold circumstellar dust clouds
- a search for the enigmatic brown dwarfs
- extension of IRAS studies of forming stars to lower temperatures and luminosities
- identification and study of powerful infrared galaxies
- infrared measurements of all presently catalogued quasars

Comparison With IRAS

Unlike the Infrared Astronomical Satellite (IRAS), which swept its view rapidly across the sky, the Spitzer Space Telescope is a true observatory, carrying a variety of focal plane instruments, including:
- a wide field and high resolution camera covering the 2 to 30 micron region with large monolithic detector arrays;
- an imaging photometer, with small arrays of high sensitivity detectors covering the spectral range from 3 to 700 microns;
- a spectrometer operating out to 200 microns with resolving power from 50 to 1000.
Spitzer's instrument sensitivity is increased by a factor of 100 to 1000 over that of IRAS, and the spatial resolution is at least a factor of 10 finer than IRAS.

Spitzer Sees Our Galaxy's Twin

What does our Milky Way galaxy look like? It probably resembles spiral galaxy NGC 7331 shown in a new Spitzer image. In the infrared image, the galaxy's swirling arms spin outward from a central bulge of light, which is outlined by a ring of actively forming stars.
NGC 7331 and the Milky Way do not share the same parents, but they have features in common, including number of stars, mass, spiral arm pattern and rate of new-star formation. NGC 7331 is about 50 million lightyears away from Earth in an area of our night sky we refer to as the constellation Pegasus. One lightyear is the distance light travels in a year, about 5.8 trillion miles. The galaxy was discovered in 1784 by William Herschel, who also discovered infrared light.

The survey. Spitzer observations of NGC 7331 are part of a large science project, known as the Spitzer Infrared Nearby Galaxy Survey. It is a comprehensive study of 75 nearby galaxies using infrared imaging and spectroscopy. The project combines Spitzer data with data from other telescopes on the ground and in space. The telescopes receive wavelengths ranging from ultraviolet light to radio waves. The result of the project will be a comprehensive map of the chosen galaxies.

False colors. The false color image of NGC 7331 demonstrates the power of Spitzer's infrared eye to dissect an object into its various parts. It shows the galaxy's arms in brownish red, the central bulge in blue, and a ring of star formation in yellow. Spitzer's observations revealed the composition of the galaxy:
- the central bulge is mostly older stars
- the ring holds a large amount of gas along with dusty organic molecules called polycyclic aromatic hydrocarbons, which glow when illuminated by newborn stars
- the arms have the same dust grains, but to a lesser degree
On Earth, polycyclic aromatic hydrocarbons can be found on burnt toast and in automobile exhaust.

The image by Spitzer's infrared array camera is a four-color composite of invisible light, showing in blue the emissions from wavelengths of 3.6 microns, in green the 4.5 micron emissions, in yellow the 5.8 micron emissions, and in red the 8.0 micron emissions. These wavelengths cannot be seen by the human eye.

Black hole.
Spitzer's infrared spectrograph showed a black hole at the heart of NGC 7331. The core has an unusually high concentration of massive stars. The black hole at the center probably is about the same size as the one lurking at the core of our own Milky Way galaxy. Whence the light? The infrared light seen in the Spitzer image originates from two very different sources. At shorter wavelengths (3.6 to 4.5 microns), the light comes mainly from stars, particularly ones that are older and cooler than our Sun. This starlight fades at longer wavelengths (5.8 to 8.0 microns), where instead the glow is from clouds of interstellar dust. The interstellar dust is a variety of carbon-based organic molecules known collectively as polycyclic aromatic hydrocarbons. Wherever these compounds are found, there will also be dust granules and gas, which provide a reservoir of raw materials for future star formation. The most intriguing feature of the longer-wavelength image is a ring of dust girdling the galaxy center. With a radius of nearly 20,000 lightyears, the ring is invisible at shorter wavelengths, yet has been detected at sub-millimeter and radio wavelengths. It is mostly polycyclic aromatic hydrocarbons. Spitzer measurements suggest that the ring contains enough gas to produce four billion stars like the Sun. Other galaxies. Three galaxies about 10 times farther away are seen below NGC 7331 in the image. Left to right, they are NGC 7336, NGC 7335 and NGC 7337. The blue dots scattered throughout the images are foreground stars in our Milky Way galaxy. The red dots are galaxies that are farther away. The Spitzer Infrared Nearby Galaxies Survey project is conducted by a team of 25 scientists from 12 research institutions. Spitzer management. NASA's Jet Propulsion Lab (JPL) operates the Spitzer Space Telescope for NASA's Office of Space Science, Washington, D.C. 
The observatory's science data is processed at the Space Infrared Telescope Facility Science Center at California Institute of Technology in Pasadena. JPL is a division of CalTech. Other institutions on the team are NASA's Goddard Space Flight Center, Ball Aerospace and Technologies Corporation, Lockheed Martin Space System Company, Smithsonian Astrophysical Observatory, Cornell University, and the University of Arizona. - Spitzer's infrared spectrograph was built by Cornell University, Ithaca, New York, and Ball Aerospace Corporation, Boulder, Colorado. - Spitzer's infrared array camera was developed at Smithsonian Astrophysical Observatory, Cambridge, Massachusetts, and built by NASA Goddard Space Flight Center, Greenbelt, Maryland. The Great Observatories NASA's Great Observatories for Space Astrophysics has been a family of four orbiting satellites carrying telescopes designed to study the Universe in both visible light and non-visible forms of radiation. - Hubble Space Telescope was launched in 1990 as the first in the series. Hubble observes in visible light, but also has an infrared camera and a spectrometer. It may continue to work in orbit until 2008. - Compton Gamma Ray Observatory, launched in 1991, was the second Great Observatory. It was in Earth orbit from 1991-2000. - Chandra X-Ray Observatory, launched in 1999, was the third Great Observatory. It was known as the Advanced X-Ray Astrophysics Facility (AXAF) before launch. - Spitzer Space Telescope, formerly known as the Space Infrared Telescope Facility, launched in 2003, was the fourth Great Observatory. - James Webb Space Telescope will be a large, infrared-optimized space telescope satellite replacing the Great Observatories around 2011. 
Learn more about Spitzer and infrared astronomy:

Spitzer Space Telescope
- The Spitzer Space Telescope [CalTech]
- Space Infrared Telescope Facility (SIRTF) [CalTech]
- Spitzer Education and Outreach [CalTech]
- Seeing our world in a different light [CalTech]
- Infrared Astronomy [CalTech]
- Cool Cosmos – The Infrared Universe [CalTech]
- Multiwavelength Astronomy [CalTech]

Telescopes and Other Resources
- Infrared Photo Album [JPL]
- Observatory Boldly Goes Where the Human Eye Cannot [NASA]
- Seeing our world in a different light [CalTech]
- Infrared World Gallery [CalTech]
- Infrared Zoo Gallery [CalTech]
- Infrared Yellowstone Gallery [CalTech]
- Visible Light/Infrared Side-By-Side Movies [CalTech]
- The Andromeda Galaxy [SEDS]
- Multiwavelength Astronomy Gallery [CalTech]
<urn:uuid:53bebdf5-3b94-4641-9b4a-1b2a646154df>
CC-MAIN-2016-26
http://www.spacetoday.org/DeepSpace/Telescopes/GreatObservatories/SIRTF/SIRTF.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783394937.4/warc/CC-MAIN-20160624154954-00053-ip-10-164-35-72.ec2.internal.warc.gz
en
0.898505
2,649
3.6875
4
Digital Cellular System

Also known as "GSM 1800", DCS is a radio frequency band used in Europe, Africa, Asia, and South America for GSM mobile phones. DCS bands are 1710-1785 MHz and 1805-1880 MHz. There are at least five licensed sub-bands within that range in most countries that have DCS. In the context of WCDMA and LTE networks, the DCS band is also known as band 3 (III).
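A quick sanity check on the figures above: the paired bands are the same width and sit a fixed duplex offset apart. The variable names in this sketch are purely illustrative.

```javascript
// DCS (GSM 1800) band edges from the entry above, in MHz
const uplink = { low: 1710, high: 1785 };   // mobile -> base station
const downlink = { low: 1805, high: 1880 }; // base station -> mobile

const uplinkWidth = uplink.high - uplink.low;       // 75 MHz
const downlinkWidth = downlink.high - downlink.low; // 75 MHz
const duplexOffset = downlink.low - uplink.low;     // 95 MHz duplex spacing
```

Each uplink channel is thus paired with a downlink channel 95 MHz higher, with 75 MHz of spectrum available in each direction.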
<urn:uuid:b4c1b127-2524-4e29-9ac4-e6212bd3bb18>
CC-MAIN-2016-26
http://www1.phonescoop.com/glossary/term.php?gid=211
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783403502.46/warc/CC-MAIN-20160624155003-00002-ip-10-164-35-72.ec2.internal.warc.gz
en
0.961491
99
2.546875
3
The tooth abscess usually occurs when decay is not treated promptly or properly. A crack or break in a tooth can also sometimes lead to abscess because bacteria enter the tooth's pulp and cause infection. This can be quite painful, and the pus-filled area needs immediate treatment. If a tooth abscess is left untreated, it can lead to serious health complications like mediastinitis, jaw bone infection, sepsis, and tooth loss. If the infection spreads to other areas, it can result in endocarditis, brain abscess or pneumonia. There are several home treatments that can help in relieving the pain.

6 Home Remedies For Abscessed Tooth

Gargling with saline water brings some relief from pain. The warm water is soothing to the oral cavity, whereas the salt works as an antiseptic. Whenever the abscess erupts, the blood and pus must be drained; make sure the blood and pus is spit out. Gargling with a warm salt solution will aid in drainage. The saline solution can be prepared by mixing salt (2 tsp) in hot water (1 cup). Allow the salt to dissolve and wait till the water is lukewarm. Use this to rinse your mouth 2 times a day.

Clove oil acts as an anesthetic and is the best home remedy for tooth abscess. Instead of oil, you can chew a clove directly. Since clove can cause hyperacidity, make sure you do not swallow the residue on an empty stomach. Alternatively, you can mix ground cloves and olive oil and apply it on the tooth that is infected.

You can use a cold pack to relieve pain. Pressing cold items on the outer surface of your mouth will bring temporary relief from toothache. This remedy is especially useful when you have trouble sleeping. You can replace the ice pack with other cold items from your freezer or fridge, such as a cold water bottle or mustard jar. If the container or bottle is frozen, use a wash cloth to cover it before you place it on your skin.

Sprinkle some salt on a dry tea bag. Place this beside the tooth that is infected. Leave for about one or two hours. This will alleviate the swelling and draw out the abscess.

Garlic is a very good antibiotic; eating a few raw garlic cloves will bring relief from pain. However, this remedy works only when the garlic is raw.

Other Home Remedies To Treat Abscessed Tooth

Dip a cotton ball in hydrogen peroxide and place it on the tooth that is infected. You can also try a blueberry smoothie, which is not only healthy but also heals inflammation; the antioxidants present in blueberries will also boost the immune system. Chewing on plantain leaves will reduce the infection or abscess. You can also place a thin slice of raw potato on the swelling to get rid of the abscess.

Besides following the home remedies, it is also important to maintain proper dental hygiene. Make sure to floss every day and brush at least 2 times a day.

Caution: Please use home remedies after proper research and guidance. You accept that you are following any advice at your own risk and will properly research or consult a healthcare professional.
<urn:uuid:41911633-d736-451d-9718-345408e09a42>
CC-MAIN-2016-26
http://www.findhomeremedy.com/best-home-remedies-for-abscessed-tooth/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397562.76/warc/CC-MAIN-20160624154957-00121-ip-10-164-35-72.ec2.internal.warc.gz
en
0.924812
680
3.03125
3
George Cruikshank (1792-1878), “The Gin Shop” in Scraps and Sketches, 1829. Hand colored etching. Graphic Arts Oversize Kane Room Cruik 1827.81q

Long before George Cruikshank signed a temperance pledge, he was satirizing the gin palaces of St. James Place. This is his earliest such satire. Images of death and dying are everywhere. Customers are standing inside a giant bear trap, waited on by a skeleton in the costume of a pretty woman (we can see her skull and the bones of her ankle and foot). A woman is feeding gin to her baby, with the figure of death close behind her holding an hourglass. Spirits are held in coffins rather than casks: Old Tom is good gin; Blue Ruin is bad gin; Kill Devil is strong rum; and so on.

The inscription reads:
Now Oh dear, how shocking the thought is
They makes the gin from aquafortis:
They do it on purpose folks lives to shorten
And tickets it up at two-pence a quarter
<urn:uuid:fedf0285-6e7c-46db-a34a-8fd617fc0108>
CC-MAIN-2016-26
http://blogs.princeton.edu/graphicarts/2012/06/illustrations_of_time.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399117.38/warc/CC-MAIN-20160624154959-00107-ip-10-164-35-72.ec2.internal.warc.gz
en
0.927138
230
2.515625
3
Language Learning & Technology
Vol. 7, No. 3, September 2003, pp. 4-17

ON THE NET

Interactive and Multimedia Techniques in Online Language Lessons: A Sampler

Jean W. LeLoup

While one might think the definition of distance education would be simple enough, the concept and justification of learning over distances are actually under continual debate (Distance Education Clearinghouse, 2003). There are probably as many explanations and rationales of distance learning as there are sites and modes of offering it in its various forms: e.g., self-directed study or teacher-guided coursework available only online, surmounting geographical barriers. In many ways, foreign language (FL) study seems a natural candidate for distance learning, one major goal being the connection of language learners with target language input and native speakers, which are often a great distance away. Digital technologies have advanced to such a point that this distance no longer presents much difficulty, even though it physically still exists. A myriad of distance language learning sites can be found online, and some are quite good from both a pedagogical and a technological perspective. The fundamental skills that students need to learn to use a language come through communicative interactions, through the example of a teacher/model who can speak the language proficiently, and through lots of reflective practice that depends on meaningful feedback. The success of self-study materials claiming to teach foreign languages suggests that the desire to learn new languages outside the classroom setting is widespread. That publishers tend to sell many more of these materials for the very early stages of language learning suggests that motivation often drops off as the difficulty of the task, especially without the support of a teacher, becomes apparent to the naive learner.
More and more online materials for learning new languages implement interactive activities that attempt to compensate to some extent for the lack of a teacher's physical presence and support. Authors of such materials do not claim to do the job better than a teacher in a face to face learning environment, but they do fill an important niche for those who cannot get to a class but who are trying to get a start on a new language or review one that has been studied in the past. In this column we will examine some of the techniques used in a variety of these sites for learning several languages. The lessons featured were chosen for their quality, the variety of technical features, range of pedagogical techniques, and selection of languages taught.

One of the most interesting collections of interactive language activities is the Spanish Grammar Exercises site by Barbara Kuczun Nelson at Colby College. The "¡Qué miedo pasé!" module on this site is an excellent example of the integration of a number of different techniques guided by sound pedagogical principles directed towards clear communicative and grammatical goals. The student watches a short video in Quicktime format in which a woman tells a story about something that happened to her at her job. The pre-listening introduction sets the scene and presents enough context so the student is prepared to listen (Omaggio Hadley, 2001; Shrum & Glisan, 2000).

[Screenshot of the module's navigation menu: "El pretérito y el imperfecto" — ejercicio (sin los verbos), ejercicio (con los verbos), tarea para mandar, accents on a Mac or PC, Download QuickTime free, ¡Qué miedo pasé! home, Spanish Grammar Exercises ©]

The main page in which the video and instructions are presented uses frames to allow the menu and the video to remain in the same place in the left hand pane of the screen at all times while the various stages of the activity appear in the right hand pane when selected by the student. The fill-in exercises reproduce the text of the video with the verbs cut out.
The student can view the video using audio as a cue for completing the verbs in the exercise. The activities, presented with and without infinitive cues, serve quite different pedagogical functions. In the version without written cues, the student uses listening comprehension skills to identify the missing vocabulary and verb tense. In the version with cues, the student may tend to focus more on the grammatical structure, preterit vs. imperfect, and less on comprehension of the meaning to identify the correct answer. In both fill-in exercises, when each answer has been attempted, clicking anywhere outside the textbox for the answer compares the student's response with the correct answer for that item. Any letters in the student response that differ from the correct answer are replaced by a "=" to mark not only that there is a problem, but to indicate exactly where the problem lies. Clicking on the button following each textbox fills in the box with the preferred answer. Finally, a "tarea para mandar" activity allows students to respond to an open-ended homework question and send the answer to the teacher. To answer this question, students must interpret the text to identify important elements, reorganize the text, and phrase a multi-sentence answer in their own words. This summary activity takes the student a step beyond the comprehension and grammar practice of the fill-in activities while providing an opportunity to use the skills learned through those previous tasks. A potential technical problem lies in the need for the student to accurately type in the teacher's email address for the cgi script on the server to send the homework to the right place. This can easily lead to the excuse: "The computer, rather than the dog, ate my homework."

There are many ways of using the ability to have students post open-ended replies through a web page.
Whereas in this case the reply is sent to the teacher, it is also possible to collect those replies to be shared with other students who could, in turn, comment on them. It is possible to use similar techniques to collect the answers to several questions from a student and then reorganize them for the student as the basis of a longer composition assignment. Professor Nelson's site includes activities using a number of different, well constructed technical tools. Another interesting activity makes use of images and Quicktime audio to demonstrate pronunciation in much the same way that a teacher might in a classroom environment: http://www.colby.edu/~bknelson/exercises/pronunciacion.html. There are a number of pages on the Internet that use audio to illustrate pronunciation for language students, but the technical implementation often results in slow turnaround time for the audio portion (i.e., there tends to be too long a lag between pressing the button and hearing the sound) making the interaction awkward. Good reaction times will tend to mimic those of ordinary conversation (Martin, 1973). In the pronunciation exercise for the letter "g" we find embedded audio, that is to say that the audio plays in a plug-in that appears as an object on the Web page, not in a separate window. This method makes for an easy to use interface because the student is not confused by new windows opening on the screen when a sound is played. Thus the relationship among the visual cues, the hyperlink button, and the audio player remains constant. Because the audio files use a high quality, high compression codec (compression/decompression algorithm), they can be downloaded to the local browser along with the images in the Web page so that they can play instantly when their button is pressed as they do in the selected samples below. (The reader may need to download the free Quicktime player for these audio samples to work.) 
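The per-character feedback described earlier for the fill-in exercises — every letter that differs from the answer key replaced by a "=" — can be sketched in a few lines of JavaScript. This is a hypothetical re-creation for illustration, not the actual script used on the Colby site:

```javascript
// Compare a student's response to the answer key, character by character.
// Matching letters are kept; mismatched or missing letters become "=",
// so the student sees not only that there is a problem but exactly
// where the problem lies.
function markErrors(response, answer) {
  let marked = "";
  for (let i = 0; i < Math.max(response.length, answer.length); i++) {
    marked += response[i] !== undefined && response[i] === answer[i]
      ? response[i]
      : "=";
  }
  return marked;
}

// e.g. markErrors("pasava", "pasaba") -> "pasa=a"
```

A function like this can run on the page's blur event for each textbox, so the marked-up string replaces the student's response the moment the cursor leaves the field.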
When designing a page using this technique, it is essential to carefully allow for the download time of all of the components that must be loaded when the page opens. In these pronunciation lessons, the balance of file size, audio quality, number of sounds, and ease of use combines to create a very effective lesson.

Any of the various multimedia formats currently in use may require users to install or upgrade software plug-ins such as RealPlayer, Quicktime, or Windows Media Player, to play them. This is even more true when the media is embedded, as it is here, because the page will not appear as it was intended to until and unless the software is downloaded and installed. This can be problematic when users have very limited technological acumen or if they are in a lab where security prevents them from installing software. It behooves language teachers to learn about which multimedia programs and plug-ins are used in Web lessons and ask that these be installed and kept up-to-date on machines that students may be using. The designers of these lessons routinely provide this information on their Web sites.

When all questions have been answered, the student clicks on a button to find out which responses were right or wrong and to learn the percentage of answers that were correct. In this activity, the student must attempt to answer all questions before checking any answers. In a different lesson by Professor Reitan, a listening activity asks students to distinguish which of two sounds they are hearing. Here the student receives immediate feedback for each answer in the form of a smiley or frowning face. Creating feedback determined by digital error analysis can be a daunting task for open ended activities because there are so many possible student errors that need to be anticipated.
By limiting the student's choices, the analysis of errors is vastly simplified allowing the software to provide far more detailed explanations than the common yes/no feedback that one finds in most online activities.

The German Electronic Textbook uses WebPractest© by Gary Smith, an online utility for creating cloze passages with some interesting features. It may be used for free by teachers to create their own activities by following online instructions. We will examine an example from the grammar lessons at the site. The following cloze activity for the verb sein uses a question/answer format to provide some context, though context in a drill of this sort, where individual replies appear as separate items, tends to be limited no matter what the medium used to present it. Verbs have been cut out, replaced by textboxes into which students may type a response. To obtain an additional cue, the student can place the cursor over a yellow book icon to see a pop-up translation. The responses that have been typed in may be checked for accuracy at any time during the activity. At that point, whatever textboxes contain replies will be examined by the software and the student's response converted to plain text in green for correct answers and in red for errors. When the activity is completed, the student can submit the results to the teacher by e-mail. For identification purposes, the e-mail report asks for the student's name, e-mail address, exercise identification information, and the teacher's e-mail address.

The techniques used in these activities are quite interesting. Students may obtain feedback as soon as they wish, they may request an additional cue as determined by the teacher, and the teacher may receive a copy of the student's results. Submitting the report to the teacher is worthwhile because the ability for the teacher to track student progress is an essential feature of an assignment that is a required component of a class rather than a self-assigned activity by an independent student motivated only by the desire to learn the language.
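The kind of e-mail report such a cloze utility assembles might look like the sketch below. The field names and layout here are invented for illustration; only the general idea — identify the student and the exercise, then list the per-item results and a score — comes from the description above.

```javascript
// Build a plain-text progress report of the kind a cloze exercise
// might e-mail to the teacher. All field names are hypothetical.
function buildReport(student, exerciseId, items) {
  const lines = [
    `Student: ${student.name} <${student.email}>`,
    `Exercise: ${exerciseId}`,
  ];
  let right = 0;
  for (const { blank, given, correct } of items) {
    const ok = given === correct;
    if (ok) right += 1;
    lines.push(`${blank}: "${given}" ${ok ? "correct" : `expected "${correct}"`}`);
  }
  lines.push(`Score: ${right}/${items.length}`);
  return lines.join("\n");
}
```

A report in this form can be dropped into the body of a message sent through a server-side script, so the teacher receives the same right/wrong breakdown the student saw in green and red on the page.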
Submitting the report to the teacher is worthwhile because the ability for the teacher to track student progress is an essential feature of an assignment that is a required component of a class rather than a self-assigned activity by an independent student motivated only by the desire to learn the language. In the Italian lessons at the University of British Columbia, Jessica Barbagallo uses another fill in the blank format for review exercises. In the sample activity shown below, textboxes allow students to write the appropriate verb forms. The submit button labeled Controlla le tue riposte counts the number of responses that match the correct answers programmed in the page, but it does not give automatic feedback indicating which replies contain errors or what the errors are. However, the student can examine the answer key to figure out which are wrong. The format allows the student to practice composing the words in the answers, but it might be more effective, especially in a review rather than a testing situation, if the program were to show the student where the problems were located. This would help focus the student's attention and also avoid the problem of casting doubt in the student's mind on the accuracy of correct answers. This might also encourage the student to figure out the answer rather than look it up in the answer key. On the other hand, an answer key gives the student practice checking her own answers and may lead to improved skills in self-correction. There is no one right way to construct feedback. The KET Distance Learning site includes a selection of online courses. Most of the site is open to the general public, including the grammar lessons for Latin that we examined for this column. These lessons make extensive use of multiple choice and concentration-style matching activities for interaction with students. The feedback on the multiple choice sections at this site has the particularity of being instantaneous. 
In the sample relative pronoun exercise shown below, selecting an incorrect response immediately crosses out the wrongly selected letter choice, whereas clicking on the correct answer produces a checkmark. The feedback remains on the screen for a short time, then disappears. Thus the markup avoids becoming a distraction.

1. Video puellam _____ ex Italiā vēnit.
   quam   quae   cuius   qui

The promptness of the feedback works to better reinforce accurate grammatical choices. The multiple choice format lends itself readily to the computer medium because of the ease with which such simple responses can be evaluated. The drawback, of course, is that the student is not practicing production of actual language but recognition of the forms presented. A very different use of the computer is apparent in the Viva Voce Roman Poetry site by Vojin Nedeljkovic. Multimedia features allow students to learn pronunciation by listening to recited poetry in mp3 format while viewing the text and/or examining a representation of the poem's meter. The audio files for this lesson are quite a bit longer than those mentioned in the Spanish Grammar Exercises above. They clearly appear to the user as links to files that are first downloaded, then played, and the size of the file is indicated so that the user knows how big it is before beginning the download. Because of this manner of presenting the links, the rapid response discussed earlier for the grammar exercise is not an issue here. The download time is easily perceived as appropriate for the task. Although there are no clearly defined learning goals and no interactive evaluation of student learning, the site gives the student control over the interaction with and manipulation of the material presented, and it provides the examples the student needs in order to learn how we think classical Latin poetry was pronounced.
Thus rather than the teacher leading the student through the learning process by a predetermined route, it is the student who makes choices about how to best use the material to study in a manner fitting the student's individual learning style. Vietnamese is an example of a less commonly taught language in the US with special considerations in terms of its writing system, which requires diacritics beyond those of the iso-8859-1 or LATIN-1 character set that is standard in browsers available in the US and western Europe. Vietnam Television provides Web-based lessons (http://www.vtv.org.vn/tiengviet/) to accompany its televised Vietnamese lessons. This site includes a link to download Vietnamese fonts for viewing the pages in browsers and operating systems that do not have this ability built in. In newer browsers and operating systems this should not be a problem. Fonts, however, can be quite confusing for most people, who have little understanding of the complex interactions among operating system, installed fonts, browsers, and browser settings. For most people, troubleshooting font problems can be a daunting task, and even for experts, writing easy-to-follow and comprehensive instructions that take into account the range of possible environments in which a page may be viewed can be quite difficult. The font implementation on the Vietnam Television site seems to be more transparent than most on the systems where we have tested it. Extensive use of video is an interesting feature of these lessons. Of course, a television station is particularly well placed to integrate video in its Web site. In this case the lessons use Real Player to play short streaming video segments that are supported by written text and questions on the corresponding Web page, which can display Vietnamese language characters. For example, in lesson 41 of this series we see two young women discussing vacation spots around Vietnam including, of course, Ha Long Bay (see photo).
Although when examined by the authors the video download on this site was slow, connections have been getting better and will continue to improve, especially as broadband Internet access becomes the standard throughout the world. For example, the following embedded excerpt, extracted from this high quality RealPlayer video, will work well only for readers with a fast connection. It is possible with today's technology to produce videos of similar quality using a much higher level of compression for faster downloads. (This file is best viewed with Internet Explorer. Additionally, the reader may need to download the free RealOne Player for this video sample to work.) Such videos of conversations between native speakers can help the language come alive and provide a model of authentic speech, if not the valuable oral interaction of the face-to-face learning environment, that makes the language real for the learner. Development of such Web-based technologies is essential for providing the tools that allow the Web to transcend the limitations of the printed page. There are a number of different ways to display a character set for a language in a Web browser. On older browsers a special font for the language could be downloaded and installed, but more recent operating systems such as Windows 2000 or XP and the latest Web browsers support Unicode fonts, allowing more transparent access to other languages' writing systems. The Vietnamese lessons at Northern Illinois University are built to function with either method depending on user preference, but the much easier new system is certain to eventually replace the old. These lessons present dialogs and vocabulary lists for study, but also include fill-in exercises and interactive activities to help the student check progress. Audio is used throughout to model the language for the learner.
As may be seen in Lesson 1 below, the audio dialog is first presented in its entirety but is then broken down into separate files for each reply, allowing the student to easily study and repeat individual sentences. The site presents a variety of exercise types including, for example, a Java-based drag-and-drop sentence creation activity in which the student selects words and uses the cursor to drag them to their proper place in the sentence. The interactive activity shown below may be run at the Lesson 1 page at SEAsite by selecting Interactive exercise 3. A drag-and-drop activity works particularly well in a language such as Vietnamese where one does not need to learn inflections that are everywhere in languages such as Spanish or German requiring students to study special word forms for verb number and tense or noun number or declension. In Chinese, stroke order is important when learning to write characters. The image shown here is an animated gif file that will display the character, stroke by stroke. The animation, made possible by the computer, is a major advantage over the printed page for teaching skills that need to be learned step by step. For instance, on the page for learning numbers, the student can click on any number to see its stroke order. After clicking on the complete character in the table, a representation of the character appears to the right but is drawn one stroke at a time and slowly enough to allow the student to follow the order. A potential though unlikely technical problem with animated gif files as a learning tool is that it is now possible to turn off animations in Web pages to speed up display. A student whose settings will not display the animation may not know this and be confused by the lack of animation, not understanding why the page is not doing what it is supposed to do or how to troubleshoot the problem. 
Sometimes, in language learning, two chairs and face-to-face communication between a target language speaker and learner comprise the best method and technology (Ponterio, 1998, personal communication). However, even that simple methodology is not always available. For those language learners without recourse to conventional classroom instruction experiences, distance education is a very viable option. The sites discussed above offer ample opportunities for comprehensible input, meaningful feedback, interactivity, and control over one's own learning pace and sequence. Distance learning erases the distance.
The disorder that tragically claimed Robin Williams's life has been around for millennia. The ancient Greeks thought depression started in the spleen. Later some blamed demonic possession for a person's lingering melancholy. Doctors today know better; depression begins and ends in your brain. "Depression is a syndrome that likely emerges from several different brain processes that vary among patients," explains Boadie Dunlop, M.D., director of the Mood and Anxiety Disorder Program at Emory University. Put simply, no two brains are exactly the same, and the underlying causes of depression vary from person to person, Dunlop stresses. That said, he and other modern-day mental health researchers have started to uncover some of the most common brain traits and conditions shared by depression sufferers.

The Emotion Connection

"Compared to people who are mentally well, patients with depression often show increases in activity in important emotion-processing regions," Dunlop says. Brain structures like the amygdala light up more vigorously among depression sufferers, his research shows. Other studies have linked an uptick in amygdala activity to states of anger, sadness, and fear. There's also research linking depression to the thalamus, a part of your brain that helps manage your responses to sensory information. The research hints that, among people with certain forms of depression, the thalamus might trigger their brains to produce unpleasant feelings in response to normal or benign external data, explains a report from Harvard Medical School. (Imagine a bummer feeling brought on by a sandwich, or a rerun of Grey's Anatomy.)

Beyond the Blues

People focus on emotions when talking about the blues. But your ability to think, learn, and memorize also suffers as a result of depression. One recent study from Brigham Young University linked depression symptoms to a drop-off in a person's ability to store new information.
No surprise, since mental health experts have known for a while that depression can boost your brain's levels of stress hormones like cortisol, and studies have found that cortisol can damage or even shrink certain areas of your brain by stalling the production of new neurons and nerve connections. In particular, an area of the brain called the hippocampus, which plays a big role in learning and long-term memory, was found to be 9 to 13 percent smaller among women with a history of depression in a study in The Journal of Neuroscience. A separate study from Swedish researchers found brain "plasticity," or your noodle's ability to change and adapt to new conditions and experiences, takes a hit as a result of long-term depression. All of this could hurt your ability to learn and process new info, the study authors say.

The Long Term

Several research efforts have shown that people suffering from recurring depression develop problems with planning, decision-making, and setting priorities, as well as continuing issues related to memory and learning. Those studies blame the neuron-growth-stunting, brain-structure-shrinking effects of stress chemicals related to the blues. More science has linked long-term depression to crippling brain conditions like dementia and Alzheimer's. Drugs and/or therapy have been shown to help depression sufferers stall or overcome the negative effects of their condition. And Dunlop, the Emory depression expert, says new advancements in tracking the brain activity of sufferers could eventually help doctors better identify the best treatment programs for individual depression patients. But because of the syndrome's complexity, depression is—at least for now—something that can be treated, not cured.
I'll Give You a Definite Maybe

An Introductory Handbook for Probability, Statistics, and Excel

This handbook has been prepared by Ian Johnston of Malaspina University-College, Nanaimo, BC (now Vancouver Island University), for students in Liberal Studies. [The text is in the public domain, released May 2000, and may be used, in whole or in part, by anyone without charge and without permission.] For comments, questions, corrections, and what not, please contact Ian.

Section Six: Samples and Populations

Introduction: Samples and Populations

In most of the examples we have been dealing with so far, our statistical analysis has usually involved a complete set of information about all the items we wished to study (e.g., all the students in a class). In other words, we have been dealing with populations (i.e., we had data for all the items in which we were interested). When our analysis is based upon an entire population (i.e., all the members of the group under study, each of whom is taken into account in the analysis), we are interested in data on each member of the group, and we do not extend our conclusions beyond that particular group.

In most statistical studies, however, the population we are interested in is far too large for us to measure each and every one of its members (e.g., all students at Malaspina University-College, all Canadian voters, all cars made in Detroit, all children in Nanaimo, and so on). In such cases, we confine our analysis to a relatively small selection taken from the total population. Such a selection is called a sample.

The purpose of dealing with a sample is straightforward: it enables us to study a large population and to learn things about it, so that we can draw important inferences, without having to go to the trouble of collecting data from every member of the entire population.

A very important part of statistics is the study of the sorts of conclusions we can make about an entire population on the basis of a relatively small sample.
For instance, if we have measured data on, say, voting patterns for 1000 people, are we entitled to make any conclusions based on that information about the voting patterns of the population in general? And if so, what are the limits to the sorts of generalizations we can make? What are we not entitled to conclude about the wider population? How does my ability to make conclusions about the wider population change as the size of my sample increases? How do I test claims made about entire populations on the basis of an analysis of a single sample? And so on.

In other words, to use statistical information properly we need to understand something about the relationship between the information we have collected from a representative group of the entire population (the sample) and the total population itself, from which the sample is taken and for which we can never conduct complete measurements, since obtaining the information would be too time consuming, if not impossible.

One important point in working with samples is the selection of a truly representative sample—a collection of individual items for observation which accurately represents the larger population. It is beyond the scope of this module to explore the various methods statisticians use to make sure their sampling techniques do not introduce major errors into the calculations (a complex subject); however, it is appropriate to say a few things about the main methods.

There are a number of common procedures for selecting a sample, some simple and some more complicated. Haphazard (or Opportunity) sampling, for example, relies upon the convenience of the sampler or the self-selection of the sample (e.g., volunteers who respond to a mailed out questionnaire or who are picked at random from a crowd) (1).
Quota Sampling sets quotas for various categories in the sample (so many men, so many women, so many over age 45, so many under age 45, and so on), so as to achieve a representation of the major divisions in the larger population. Random Sampling picks members of the sample according to a random process, thus giving each member of the large population an equal opportunity of being selected.

In general, of the methods mentioned above, Random Sampling is the preferred method, with the least built-in bias. However, in order for random sampling to be possible, there must be available a list of everyone in the population to be sampled (for reasons explained below). Where that requirement cannot be conveniently met (e.g., in a survey of all Canadians or all residents of BC), then the simple method of random sampling outlined below is not appropriate.

For a simple random sample, with a list of the entire population under investigation, the sampler assigns a number to each item in the list and selects the sample by consulting a random number generator or a table of random numbers. The process works as outlined below. Suppose we wish to investigate all the workers in a particular factory, but we do not have the time or the resources to deal with them all. So we decide to work with a sample of 30 workers out of a total factory population of 450. We begin by assigning each member of the total population a number. Since the largest number we require (450) has three digits, we give everyone a three-digit number, starting with 001, 002, 003, 004, and so on, up to 450.

We then consult a list of random numbers. The list of random numbers looks like this (a portion of a page).

To begin the selection we blindly point to some number in the table (say, for example, 2956, the figure in bold above). Then, reading across the table we take three-digit numbers. If they fit someone in the general population, that person is selected; if they do not, then we move on to the next three-digit number.
Starting with 2956, the first three-digit number is 295. Since we have someone in our list of 450 with that number, we select that person. The next three-digit number (continuing to read to the right) is 674. That does not fit (since we have only 450 in the total population we are studying), so we move on. The next three-digit number is 053. This number fits, so the person with this number is selected for the sample.

We continue this process, moving through the table of random numbers, until we have the number we need for the sample. To complete the selection of the sample of 30, we would obviously need a bigger list of random numbers than the partial list given above.

There is less bias in this selection because everyone in the total population has an equal chance of being included in the sample. We have made no attempt to organize the population into different sections or proportions. If we were working on sampling merchandise or samples for experiment, we would proceed in the same way, first assigning a number to each item in the larger population and then consulting a list of random numbers to select the items for our sample.

For some opinion polls, a variation of this method of random sampling can be useful: random digit dialing for a telephone survey (although such a method is biased in favour of those with telephones or more than one telephone number or who spend a lot of time at home).

An important factor in any sample is the size. The most appropriate size will depend upon the accuracy we wish and upon the size of the general population we are sampling. We shall be dealing with this question later in this section.

The Sample Mean

Let us assume we have properly identified our sample from the large population we are interested in. On the basis of the measurements I have made of the sample I have collected, I have a group of numbers.
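The random selection walked through above can be sketched in a few lines of code. The following is a Python sketch rather than a random-number table (the function name draw_sample is just for illustration); it gives every member of the numbered population an equal chance of selection, with no repeats, exactly as the table method does:

```python
import random

def draw_sample(population_size, sample_size, seed=None):
    """Select `sample_size` distinct ID numbers from 1..population_size.

    This plays the role of the random-number table: every member of the
    numbered population has an equal chance of being chosen, and no one
    can be chosen twice.
    """
    rng = random.Random(seed)
    return rng.sample(range(1, population_size + 1), sample_size)

# 30 workers out of a factory population of 450, as in the example above
sample = draw_sample(450, 30, seed=1)
print(len(sample))                                  # 30 distinct workers
print(all(1 <= worker <= 450 for worker in sample))  # all IDs are valid
```

Note that, unlike reading three-digit groups from a table, the computer never produces an out-of-range number that has to be skipped.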
Thus, I can calculate the mean of this sample (remember that the mean is the arithmetical average) in the usual way (adding up all the values and dividing by the total number in the sample, or by entering the measurements on an Excel worksheet and getting Excel to make the calculation for me). This figure is called the Sample Mean.

If, now, I conduct another similar sample of the same general population (not including in the second sample anyone who was part of the first sample), I will obtain a second set of measurements from my new sample, and I can calculate the mean of that collection of numbers. Now I have a second Sample Mean. If I have done my sampling without major bias, the second Sample Mean should be close to the first Sample Mean (since I am sampling the same general population). But the value for the second Sample Mean will almost certainly be somewhat different from the first (even if the difference is quite small).

For example, suppose I am investigating the body length of an adult male lizard. I collect my first sample of, say, thirty lizards, measure the body length, enter the data on a worksheet, and obtain a mean value for that sample. Suppose this value is 6.56 inches. I then collect a second sample for the same animal, measure the body lengths, enter the data on a worksheet, and obtain a mean value for that sample of 6.43 inches. These two figures are both sample means for the same general population (all the adult male lizards): Sample Mean 1 and Sample Mean 2.

If I continue in this fashion, making a number of different samples and calculating the mean of each, gradually I will collect a list of Sample Means, one for each of the samples I have collected. I will create a list of numbers, each representing a separate Sample Mean. These will probably be quite close to each other in value, but there will be differences.
In other words, the values of the Sample Means will be distributed; we can think of the values we obtain for the different Sample Means as having a frequency distribution, just like any other list of numbers.

Make sure you understand this point. The collection of means from different samples will provide a list of numbers which, like any such list (of the sort we have been examining), will have a frequency distribution (with a mean value, a median, a variation, and a standard deviation).

An Example of a Collection of Sample Means (S-Means)

In order to reinforce this last point, let us continue to work through our example with the adult male lizards. I continue my sampling, measuring, and calculating, and produce the following results (let us assume for the sake of argument that each sample contains 30 male lizards):

Sample 1: S-Mean 1: 6.56 in
Sample 2: S-Mean 2: 6.43 in
Sample 3: S-Mean 3: 6.48 in
Sample 4: S-Mean 4: 6.51 in
Sample 5: S-Mean 5: 6.40 in
Sample 6: S-Mean 6: 6.52 in
Sample 7: S-Mean 7: 6.54 in
Sample 8: S-Mean 8: 6.47 in
Sample 9: S-Mean 9: 6.49 in
Sample 10: S-Mean 10: 6.53 in

Remember that each of these S-Means is the average for a sample of 30 adult male lizards. This list of numbers also has a mean value (6.493 in) and a Standard Deviation (0.0499 in). These, you will recall, we can have Excel calculate for us (just as for any list of numbers).

You will remember from the previous chapter that the standard deviation is a measure of the distribution of the frequencies in the probable results. A small standard deviation (as in the above example) means that most of the values will lie close to the overall mean of the numbers in the list.

The Mean of the S-Means

For reasons which lie outside the scope of this report, the values of the S-Means will have a frequency distribution represented by the normal curve (that is, the probabilities that particular S-Means will have certain values will follow the pattern of a normal distribution, which we discussed in the previous section).
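The mean and standard deviation of the list of S-Means above can be checked without Excel. The following Python sketch uses the standard statistics module, which, like Excel, divides by one less than the number of items when computing the standard deviation of a sample:

```python
import statistics

# The ten sample means (inches) from the lizard example above
s_means = [6.56, 6.43, 6.48, 6.51, 6.40, 6.52, 6.54, 6.47, 6.49, 6.53]

# Arithmetical average of the S-Means
mean_of_means = statistics.mean(s_means)

# Standard deviation with the n-1 divisor, as Excel computes it for a sample
sd_of_means = statistics.stdev(s_means)

print(round(mean_of_means, 3))  # 6.493
print(round(sd_of_means, 4))    # 0.0499
```

The two printed figures match the values quoted in the text, confirming that this list of sample means behaves like any other list of numbers.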
Thus, the various probabilistic characteristics of the normal curve, which we have studied in an earlier module, will apply to the collection of samples we have made (2). Please make sure you understand this very important point; everything we do in the rest of this chapter depends upon it.

We also know from mathematical studies that in such a normal distribution of all the S-Means for a particular population, the mean value (the mid-point, the highest part of the normal curve of S-Means) will be the same as the average for the entire population. We cannot measure all the population and then calculate the mean, but we can theoretically establish that if we did so, the mean for the entire population would be the same as the average of all the means of all the samples of that population we could collect (since if our sampling was complete we would have measured each member of the population).

This point is obvious enough if you think about it. If I kept collecting samples like the 10 listed above, eventually I would have sampled the entire population (assuming no two lizards were in more than one sample). The average of all my samples would then be the average of the entire population, because all my samples together would be the same as the entire population.

Any particular sample we take of 30 adult male lizards might be truly representative of the total population (in which case the mean of the sample would coincide with the mean for the entire population), or it might misrepresent somewhat the population under study (that is, the sample mean may be displaced from the population mean). We have no way of directly knowing that unless we can measure every member of the population.

The more samples we collect and the larger those samples, the closer the average obtained by averaging the means of all the samples will be to the average for the entire population.
If I kept sampling until I had sampled every adult male lizard, then the average of all the sample means would be the same as the average for the total population.

The Value of a Single Sample

In practice, however, we usually do not have time (or money) to carry out enough measurements of separate samples to calculate the mean of all the Sample Means (we do not want to carry out a very large number of samples, find the average of each sample, and then, treating those averages as a distribution, calculate the mean of the S-Means and the standard deviation, as we theorized above). Besides, in many cases (as in the male lizard example) we may never know whether we have sampled every single member of the population.

In most cases, we are interested in making some judgment about the entire population on the basis of a single sample (of, say, 50). So what is of immediate interest is this question: If I use the S-Mean from a single sample of observations to make an estimate about the mean for the entire population, how likely am I to make a serious mistake?

Notice the importance of this question. It poses a vital statistical enquiry: On the basis of a single sample, what am I entitled to conclude about the entire population? For example, if I have randomly selected adult male lizards for a measurement of their body length, what legitimate conclusions can I draw from this small sample about all the adult male lizards? How certain can I be of any such conclusions?

It turns out that the error in basing a conclusion about the entire population on a small sample is likely to be quite small. This vital conclusion follows from the important fact that the distribution of all possible Sample Means is a normal curve and that the normal curve has important characteristics (as we have seen in the previous section).

For we know that in any normal curve, the further any value falls from the mean, the less likely it is to occur.
You will recall that there is approximately a .68 probability that any value will fall within 1 Standard Deviation on both sides of the mean, and approximately a .95 probability that any value will fall within 2 Standard Deviations on both sides of the mean. Thus, from the properties of all normal distributions, we know that there is only a .05 probability that any value will lie more than 2 Standard Deviations from the mean. Hence, the more a sample is a poor representative of the entire population, the less likely it is to occur.

Since the Sample Means are normally distributed around the value of the mean of the entire population, the further the mean of any one sample is from this mean of the entire population, the less likely it is to occur. As one moves from the mid-point of the distribution in either direction, the number of samples which produce a Sample Mean much smaller or larger than the mean of the population gets smaller and smaller (since the means of those samples would have to fit into the extremes of the normal curve).

What this implies is that if we could ascertain the Standard Deviation for the distribution of sample means, we would know the probabilities that any particular sample mean would be close to or far away from the mean for the entire population.

Remember that we are conceptualizing a normal distribution curve which represents all the frequencies of all the mean values for all the samples we might make of a large population. We have ascertained that the mean value of such a curve will be the same as the mean value for the entire population we are studying. If we could find out the Standard Deviation of this normal curve, then we would know how the various values of the sample means are distributed in relation to the mean of the normal curve.

The Standard Deviation of this normal distribution of Sample Means is called the Standard Error or the Standard Error of the Means.
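The .68 and .95 figures quoted above are properties of the normal curve and can be computed directly. The following Python sketch uses the error function from the standard library (the name prob_within is illustrative); for a normal distribution, the probability of landing within k standard deviations of the mean is erf(k/√2):

```python
import math

def prob_within(k):
    """Probability that a normally distributed value falls within
    k standard deviations on either side of the mean."""
    return math.erf(k / math.sqrt(2))

print(round(prob_within(1), 4))  # 0.6827  (the ".68" in the text)
print(round(prob_within(2), 4))  # 0.9545  (the ".95" in the text)
print(round(1 - prob_within(2), 4))  # 0.0455 (roughly the ".05" in the text)
```

The handbook's round figures of .68, .95, and .05 are the conventional classroom approximations of these values.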
If we had a way of ascertaining its value, then we could describe the probabilities of the entire curve, just as we can for any normally distributed value.

Standard Deviation and Standard Error

Make sure you understand the difference between the terms Standard Error and Standard Deviation. The standard error is the name of a very particular standard deviation: the standard deviation of the means of all the samples we could take of a particular population (e.g., the population of adult male lizards in the example we have been considering).

To clarify this issue, if it still needs clarification, let me list here once more some summary points:

When we collect a sample or deal with the entire population in our measurements, we can list all the numerical results and then calculate the mean and the standard deviation of that list by the methods we have already discussed (usually getting Excel's Descriptive Statistics function to do the work for us).

When we are dealing with a very large population, we will take a small sample picked so as to avoid bias. The larger total population has a mean and a standard deviation, but we do not have the time or the resources to measure all the cases (even if we could locate them), and therefore we do not know what these figures are directly. The only direct observations we have are from the sample we have taken.

However, the Standard Error, which we are able to calculate from our sample (see below), will give us the Standard Deviation of all the different averages from all the samples we could make of the general population (or a figure close enough to the Standard Deviation of the entire population to use as a substitute for it).

We use the term Standard Deviation to remind ourselves that the figure we are dealing with refers to a sample or to an entire population.
We use the term Standard Error to remind ourselves that we are dealing with the distribution of the averages from all possible samples (even though we have undertaken to measure only a single sample).

Calculating the Standard Error

In our discussion above, we outlined one method for calculating the Standard Error. That was to collect all the possible samples of a population, calculate the mean of each, and then calculate the Standard Deviation of the frequency distribution of Sample Means. Theoretically, that is fine, but in practice, we simply cannot carry out sampling until we have included the entire population of our study.

Fortunately, there is another way of calculating the Standard Error. Mathematicians have demonstrated that the Standard Error (which tells us the Standard Deviation in the normal curve of all the possible Sample Means) can be derived from a single sample (or a value so close to the Standard Deviation of that curve that for practical purposes we can treat it as the Standard Error). The value is equal to the Standard Deviation of the sample divided by the square root of the number of items in the sample, as the following formula indicates:

Standard Error = Standard Deviation of the sample / √(number of items in the sample)

This, as we shall see, turns out to be a very powerful piece of information. From a single sample, we can calculate the standard deviation of the normal curve depicting the means of all possible samples. Make sure you understand this point; much of what we do from here on depends upon grasping this idea that from one relatively small sample of a large population we can draw conclusions about the distribution of the averages from all possible samples of that same population.

Minimum Sample Size

For the mathematics we have been discussing to work effectively, the sample we select must not be too small. The minimum permissible size is 30 observations.
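The standard-error calculation just described can be sketched directly. This Python sketch stands in for the Excel worksheet (standard_error is an illustrative helper name; statistics.stdev uses the same n-1 divisor that Excel uses for a sample's standard deviation, a point discussed below). The ten-item list here is far smaller than the 30-observation minimum; it is chosen purely to keep the arithmetic visible:

```python
import math
import statistics

def standard_error(sample):
    """Standard error of the mean: the sample's standard deviation
    (n-1 divisor) divided by the square root of the sample size."""
    return statistics.stdev(sample) / math.sqrt(len(sample))

# A tiny illustrative sample of measurements (mean 6, chosen so the
# arithmetic comes out evenly)
data = [4, 8, 6, 5, 3, 7, 6, 9, 4, 8]

print(statistics.stdev(data))          # 2.0  (sample standard deviation)
print(round(standard_error(data), 3))  # 0.632  (2.0 divided by sqrt(10))
```

Here the sum of squared differences from the mean is 36; dividing by n-1 = 9 gives a variance of 4, a standard deviation of 2, and hence a standard error of 2/√10 ≈ 0.632.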
And remember that when we are dealing with samples (as opposed to total populations), to derive the standard deviation of the sample, we divide the sum of the squared differences between the mean and the observations by one less than the number in the sample. If this is a puzzle to you, do not worry about it, since Excel does the calculations anyway. But this practice of dividing by one less than the number in the sample is the reason why Excel's calculation of the standard deviation of a list of numbers is always slightly higher than the result produced by a manual working out of the result which uses all the numbers in the sample. Excel treats every list of numbers as a sample, not as the total population. In calculating the standard error, however, we do not follow the same principle of using one less than the number in the sample. As the formula above indicates, we divide the standard deviation by the square root of the total number of items in the sample. As you may have already observed, Excel calculates the standard error for any list of data and includes the figure in the Descriptive Statistics box.

A Simple Application of the Sample Mean and Standard Error

The fact that we can calculate the standard error of the means from a single sample of a population turns out to be extraordinarily useful. For on the basis of a single sample (provided it contains more than 30 observations and is free from bias), we can derive the standard deviation of the normal curve representing the means of all possible samples. And this, in turn, enables us to calculate the probability that our sample mean is close to or far away from the mean of all the sample means (which is equivalent to the mean of the total population). For instance, suppose, as a consumer advocate, I am interested in examining the quality of a particular brand of light bulbs, to see if they are up to the manufacturer's guarantee. Well, first I collect a random sample of, say, 100 bulbs.
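The sampling step just mentioned can also be sketched in code. The population of serial numbers below is entirely hypothetical, used only to show unbiased selection without replacement:

```python
import random

random.seed(1)  # fixed seed so the illustration is reproducible

# Hypothetical population: serial numbers of every bulb produced
population = list(range(10_000))

# An unbiased random sample of 100 bulbs to test;
# random.sample draws without replacement, so no bulb repeats
sample = random.sample(population, 100)

print(len(sample), len(set(sample)))  # 100 100
```

Each bulb in the population has the same chance of being chosen, which is the point of random sampling: it protects the later calculations from the bias problems discussed earlier in the module.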
I then test that sample, measuring the number of hours each bulb functions before burning out. This test yields a list of one hundred results (one for each member of the sample). From these one hundred numbers, I calculate (or Excel calculates for me) the mean life of the bulbs in the sample and the standard deviation of the results listed from the test of the sample.

Mean life of the light bulbs in the sample: 300 hr
Standard deviation of the sample: 20 hr

From these two figures I can calculate the standard error: the standard deviation of the sample divided by the square root of the number of items in the sample or, in this case, 20 divided by the square root of 100, that is, by 10, for a result of 2 hr. We know that the average of all the means of all the samples is the same as the average for the entire population, and we know that the standard deviation in the normal curve representing the values for all the different sample means is equal to the standard error (2 hr). Thus, on the basis of my single sample, I can conclude that there is a .68 probability that the average for the entire population of all the light bulbs lies within 1 standard error of the mean of my sample, that is, between (300 - 2) and (300 + 2), or between 298 hr and 302 hr. There is a .95 probability that the mean of the total population of light bulbs (that is, the average life of all the light bulbs made by this manufacturer) lies within 2 standard errors of the sample mean, or between (300 - 4) and (300 + 4), that is, between 296 and 304 hr. Notice the nature of this conclusion. On the basis of a relatively small sample of a very large population, we can establish a conclusion about that larger population. The conclusion is in the form of a series of probability statements, each of which defines a range of possible values. This form of conclusion and its uses will become clearer in some of the examples and exercises which follow.

The Evaluative Use of Standard Error

What does all this add up to?
Well, here's a hypothetical practical illustration. Suppose I wish to learn about the mathematical capabilities of all the Grade XII students in Nanaimo. I have neither the money nor the time to arrange to have them all tested. Thus, I organize a random sample of, say, 100 students and give them a special test on their mathematical skills. I find that the average score in the sample is 65, with a standard deviation of 16.74. What can I conclude on the basis of this information about the average capabilities in mathematics of all Grade XII students in Nanaimo? I begin by calculating the standard error (or reading it off from the Descriptive Statistics table generated by Excel, once I have entered the observational data onto a worksheet). In this case the standard error is 1.67 marks. I know that the average (mean) score in my sample was 65. And I know that if I analyzed many similar samples, the averages of the samples would be normally distributed in a curve where the standard deviation is equal to the standard error calculated above (1.67 marks). So, if the average in my sample was 65, I can state that there is a .68 probability that it falls within 1 standard error of the mean of the total population of all the Nanaimo Grade XII students (either higher or lower). Thus I am 68 percent certain that the mean score for all the students in Nanaimo on this mathematics test is between (65 - 1.67) and (65 + 1.67), that is, between 63.33 and 66.67. If I want to be more certain than this, I can state that there is a probability of .95 (or that I am 95 percent certain) that the average for the entire Nanaimo Grade XII population on this mathematics test will fall within 2 standard errors of the sample mean, that is, between [65 - (2 x 1.67)] and [65 + (2 x 1.67)] or between 61.66 and 68.34. If I want to be even more confident, I can state with .99 probability (or 99 percent certainty) that the average for the entire Nanaimo Grade XII population will be within 3 standard errors of the sample mean.
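The Nanaimo calculation above can be checked with a short Python sketch (illustrative only; the module itself relies on Excel). Note that the code keeps the unrounded standard error of 1.674, so the 2- and 3-standard-error bounds differ from the text's rounded figures by a few hundredths:

```python
import math

# Figures from the Grade XII example in the text
mean, sd, n = 65.0, 16.74, 100
se = sd / math.sqrt(n)   # 1.674, which the text rounds to 1.67

# Confidence intervals at 1, 2, and 3 standard errors
for k, level in [(1, 0.68), (2, 0.95), (3, 0.99)]:
    low, high = mean - k * se, mean + k * se
    print(f"p = {level}: between {low:.2f} and {high:.2f}")
```

The first line of output reproduces the text's interval of 63.33 to 66.67 for p = .68.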
Self-Test on Estimating the Population Average from a Sample

You are interested in finding out about the hours elementary school children in School District 68 spend in organized recreational exercise outside of school. You select a random sample of 50 elementary school students, obtain data about organized recreational exercise for each of them, enter the data on an Excel worksheet, and obtain the following result.

Mean time spent in organized recreational exercise (per week): 2.46 hr
Standard deviation in the sample: 2.01 hr

Use the method we have already gone through with the light bulbs and the Grade XII students to produce a conclusion about the average hours of organized recreational exercise for all elementary students in School District 68. State the conclusion with .68 probability, with .95 probability, and with .99 probability (or with 68 percent certainty, with 95 percent certainty, and with 99 percent certainty). For an answer to this self-test, see the end of this section of the module. We have already briefly discussed the nature of the conclusions we have been drawing from these statements about a total population based on what we measure in a relatively small sample. These inferences consist of a range of values and a mathematical figure of probability (e.g., a .68 probability, a .95 probability). Conclusions like this illustrate what is called a confidence level, a conclusion which offers a range of values and a statement of probability: we conclude that there is a p probability that the mean of the total population falls between the figures x and y. This might also be stated negatively: there is a certain probability that the average score for the total population does not fall between x and y. The figure for the probability (p) is determined by the distance the limits of the range are from the mean of the sample (measured in standard errors or, to use language we introduced in an earlier section, measured in the z-score).
As we saw in the last chapter, we can have 68 percent confidence (or p = .68) that any value in a normal distribution will fall within one standard deviation of the mean (i.e., have a z-score of between -1 and +1). We can have 95 percent confidence (p = .95) that any value in a normal distribution will fall within 2 standard deviations of the mean, that is, between a z-score of -2 and a z-score of +2. And we can have 99 percent confidence (p = .99) that any value in a normal distribution will fall between a z-score of -3 and a z-score of +3. Notice that, as we would expect, I can increase the confidence of my conclusions by widening the range within which the value will fall. The more certain I wish to be, the wider the range of values. If I want to narrow the range of values in my conclusion, then I lower the confidence level. This point about confidence levels is important in understanding the way in which the media publish poll results. For example, when a newscaster says that a recent poll has just revealed that 42 percent of the electorate would vote Liberal if the election were held tomorrow, that remark will usually be accompanied by a qualification like the following: "These results are considered accurate within 2.5 percentage points nineteen times out of twenty." What this qualification means is that the pollsters are 95 percent confident (i.e., sure that in 19 cases out of 20) that if the election were held tomorrow, the Liberals would get 42 plus or minus 2.5 percent of the vote (i.e., between 39.5 and 44.5 percent of the vote). On the basis of their relatively small sample, they are establishing a confidence level and a range within two standard errors.

More Curious Observations

On the basis of what we have learned so far about drawing conclusions about a large population from a single sample of more than 30, we can notice some interesting further details about this very useful procedure made possible by the calculation of the standard error.
First, the size of the confidence interval depends upon the size of the standard error (which is a measure of the standard deviation in the distribution of sample means). Thus, if we can lessen the standard error, we can diminish the range of values in each confidence level (and thus provide more precise conclusions). You may recall that we calculate the standard error from the sample, taking the standard deviation of that sample and dividing the figure by the square root of the number of observations in the sample. Since we calculate the standard error by dividing by the square root of the number in the sample, increasing the number in the sample may have only a small effect on decreasing the size of the standard error. One question I might like to consider is the following: in order to lessen the size of the standard error, how much would I have to increase the size of my sample? Or, alternatively, will increasing the size of my sample enable me to narrow the range of the conclusion? The answer, it turns out, for reasons explained below, is that increasing the sample size can indeed narrow the range of results, but that the increase in the sample size has to be very large—so large, in fact, that it may prove to be too costly and time consuming to implement. For example, if we were dealing with a sample of 100 students in a study of their skills on a test and if the standard deviation of the list of results in our sample was, say, 16 marks, then we would calculate the standard error by dividing the standard deviation by the square root of the number in the sample, that is, 16 divided by the square root of 100, or 16 divided by 10, or 1.6. Thus, in estimating the confidence intervals for the entire population of students, we would be using the figure of 1.6 marks as the basis of our intervals to calculate the ranges for .68, .95, and .99 probability. But if we wanted a narrower range, in order to have a more precise result, we would like to reduce the standard error (thus having a smaller interval).
One way we might like to do this is to increase the size of the sample. If we increase its size, then we increase the size of its square root and therefore diminish the standard error (which is produced by dividing the standard deviation by the square root of the number in the sample). But since we are dealing with the square root of the number in the sample, we will have to increase the sample size considerably. For instance, in the example above we dealt with a sample of 100 students and achieved a standard error of 1.6 by dividing the standard deviation of the sample, 16, by the square root of 100, or 10. If we wanted to reduce the standard error by half, we would have to divide 16 by 20. And to be able to do this we would have to sample 400 students (the square root of 400 is 20). What this means, in effect, is that in many cases it is not worth the effort to increase the sample size in order to achieve more precise results. Since selecting the sample information is the really time consuming part of the analysis, it is generally more efficient to keep the sample relatively small (provided it is over 30) and to concentrate on making it the best sample we can achieve (i.e., least liable to bias). This is not to say, of course, that the size of the sample is irrelevant. Obviously, that is not the case. Increasing the size of the sample does reduce the standard error and thus makes the conclusions more precise. In fact, mathematicians have drawn up guidelines as to the most appropriate sizes for samples relative to the size of the larger population they are intended to represent and to the level of accuracy required in the sampling. In this module, as mentioned before, we are not dealing with the complex rules for proper sampling strategies (other than the few remarks previously in this section). So we are not concerning ourselves with the problems of sampling error.
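The relationship between sample size and standard error can be seen in a few lines of Python, using the figures from the example above (a sketch for illustration only):

```python
import math

sd = 16.0  # standard deviation of the sample, from the example above

# To halve the standard error, the sample must be four times as large,
# because the standard error shrinks only with the square root of n
for n in [100, 400, 1600]:
    se = sd / math.sqrt(n)
    print(f"n = {n:4d}  ->  standard error = {se}")
```

The output shows the standard error falling from 1.6 to 0.8 to 0.4: each halving of the standard error costs a quadrupling of the sample.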
In the various examples we work through, we shall assume that the sample is a good one and will not take into account the sampling error (as we should if we were being statistically diligent). For interest only, you might like to see a list of the recommended sample sizes for different populations. The table below, from a book on surveys, indicates some recommended sample sizes:

Recommended Sample Sizes for Different Populations and Permissible Sampling Errors
[Table omitted; its columns included the population size, the recommended sample size, and the sampling error allowed.]

Working Through An Example

Let us review one more time the steps in making confidence generalizations about an entire population from a single sample. First we select a sampling strategy (normally using random sampling when the total population is suitable for this process), select our sample (making sure we have at least 30 separate observations in it), and collect the information. Then, we enter the data on a spreadsheet (like Excel) and apply the Descriptive Statistics tool in order to ascertain the mean and the standard error of the sample. Finally, we make our conclusions at different confidence levels: 68 percent for a range within 1 standard error of the mean of our sample (above and below), 95 percent for a range within 2 standard errors of the mean of the sample, and 99 percent for a range within 3 standard errors of the mean of the sample. Suppose I wish to know (for purposes of comparison) the average score for all first-year university students in British Columbia on a standard intelligence quotient (IQ) test. Going through the steps outlined above, I complete steps 1 and 2 for a sample of 100 students. The mean score of the sample is 112; the standard deviation is 12 points. From these two figures I can compute the standard error (the standard deviation divided by the square root of the number in the sample): that comes out to 12 divided by 10 or 1.2 points.
I can make my conclusions at different confidence levels, as follows: I am 68 percent certain that the average IQ score on this test for all first-year university students in BC is within a range 1 standard error on either side of the mean of my sample, that is, between 110.8 and 113.2. I am 95 percent certain that the average IQ score on this test for all first-year students in BC is within a range 2 standard errors on either side of the mean of my sample, that is, between 109.6 and 114.4. I am 99 percent certain that the average IQ score on this test for all first-year students in BC is within a range 3 standard errors on either side of the mean of my sample, that is, between 108.4 and 115.6.

Self-Test on Confidence Levels

Using the method outlined immediately above, try the two following problems. We want to know the average pulse rate in a population of 1000 track athletes. We sample the pulse rates of 50 athletes taken at random and calculate the mean pulse rate of the sample to be 79.1 beats per minute, with a standard deviation of 7.6 beats per minute. What can we conclude about the mean value (in beats per minute) for the entire population of athletes? State your conclusion at three different confidence intervals (at .68, .95, and .99 probability). A sample study of the family incomes in Canada revealed the following: sample size, 1600; mean family income of the sample, $51,300; standard deviation of the sample, $8000. What can you infer about the mean family income for the entire population at a confidence level of 95 percent?

Using a Table to Read for Any Level of Confidence

Up to this point we have only dealt with three confidence levels: 68 percent (or .68 probability), 95 percent (or .95 probability), and 99 percent (or .99 probability). We used these because they correspond to the ranges defined by 1, 2, and 3 standard deviations away from the mean (something we learned in the last chapter). In practice, however, we are not limited to just these three figures.
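The table lookup the module is about to describe can also be done numerically. The Python sketch below (an addition for illustration, not part of the original module, which uses a printed table) finds the z-score for any confidence level by bisection, relying on the standard result that the area under the normal curve between the mean and z is 0.5 * erf(z / sqrt(2)):

```python
import math

def half_area(z):
    """Area under the standard normal curve between the mean and z."""
    return 0.5 * math.erf(z / math.sqrt(2))

def z_for_confidence(level, lo=0.0, hi=6.0):
    """Find the z-score whose symmetric interval around the mean
    covers `level` of the population, by simple bisection."""
    target = level / 2          # we work with one half of the curve
    for _ in range(60):
        mid = (lo + hi) / 2
        if half_area(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(round(z_for_confidence(0.75), 2))  # 1.15, the value a printed table gives
```

For the familiar 95 percent level this routine returns about 1.96, which explains why the module's round figure of 2 standard errors is a convenient approximation.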
We can establish any level of confidence we want. But we will need to know the link between the confidence level we want and the precise figure for the z-score at that level (what this means will become clearer very soon). If this puzzles you, let us go through the point step by step, as follows: The normal curve (the shape of a normal distribution) indicates the relative frequencies of all the values in the population we are studying. Thus, we can imagine the area under the top line of the curve as representing the entire population. If we think of the population under the curve as an area, then we can see clearly that in a normal distribution the total population is divided in half by the mean. There is thus a .5 probability in any normally distributed population that a particular value will fall in the area to the right of the mean (i.e., in the upper values), and a .5 probability that any particular member of the population will fall to the left of the mean (i.e., in the lower half of the values). In the previous chapter, we discussed how in the normally distributed curve, the area under the curve is always divided in the same way by units of standard deviation: 68 percent of the total population falls within 1 standard deviation of the mean (34 percent on either side); 95 percent of the population falls within 2 standard deviations of the mean (47.5 percent on either side); and 99 percent of the total population falls within 3 standard deviations of the mean (49.5 percent on either side). But clearly we are not confined to just 1, 2, or 3 standard deviations. There are all sorts of possibilities in between them (e.g., 1.2 standard deviations, 0.7 standard deviations, and so on). And each of these will define a different area under the normal curve. And each area, so defined, will include its own percentage of the total population (and thus establish its own confidence level). Now, the mathematics of calculating areas under the normal curve for all distances away from the mean is complex and laborious.
Fortunately, however, mathematicians have created tables for us, from which we can simply read off particular distances from the mean and their corresponding areas. Thus, we can easily determine what level of confidence we want and find the distance appropriate to it. To do this, we need a table which indicates precisely the areas of the normal curve at various z-scores. Here is such a table. If it looks a bit intimidating, don't worry. Read carefully the description of how the table works in the paragraphs after it.

Table of the Area Under the Normal Curve at Different z-scores
[Table omitted.] Note that this table is only for one half of the normal curve.

The table indicates in the extreme left hand column (in bold) the distance away from the mean in standard deviation units (which, as mentioned before, is the same as the z-score) up to one decimal place. The columns from column two towards the right indicate the values for the second decimal place of that z-score (e.g., 1.10, 1.11, 1.12, 1.13, 1.14, and so on). The decimal figure in each cell indicates the area under the curve at that particular z-score (for one half the normal curve). Thus, in the first line, the area under the normal curve at a z-score of 0.00 is .0000. This means that when we are exactly on the mean, the area under the curve is 0 (since there is no distance between the mean and itself). If we move to the next column (to the right), the z-score is 0.01, and the corresponding area under the curve between the mean and this distance away from it is 0.0040. Since we are dealing with only one half the curve, the total area under the curve defined by a z-score of 0.01 on both sides of the curve is twice the given value, 0.0080 (or 0.8 percent of the total area under the curve is within a z-score of 0.01 on either side of the mean). If you check now the area figure for a z-score of 1.00 you will notice that it reads .3413.
This means that of all the scores under the curve 34.13 percent of them will fall between the mean and a z-score of 1 on one side of the curve. If we want to include all the scores within 1 standard deviation of the mean on both sides, then we would double this figure (i.e., to 68.26 percent). We have been using the figure 68 percent as a convenient approximation of that value. Notice that in the top lines of the table, where the scores are close to the mean, the values for the areas increase as one moves across a row much more than they do at the bottom of the table (for the z-scores further from the mean). Obviously, that is the case because the normal curve is highest close to the mean (with considerable distance below it, so that increasing the z-score includes a significant area); as the z-score approaches 3, the normal curve is very close to the axis, with almost no area beneath it. Hence, increasing the z-score does not increase the values given very quickly.

Using the Table for Different Confidence Levels

The decimal numbers in the cells of the table also indicate the probability that any value in a normal distribution will fall between the mean and the particular z-score corresponding to the value. For example, suppose we wanted to know the z-score which would give us a confidence level of .75 (or 75 percent). Half of 75 percent is 37.5 (remember we are dealing with half the curve), which we express as a fraction as .375. If we consult the table, we can locate the number closest to that value in the cell corresponding to a z-score of 1.15 (the value in the table is .3749). This information tells us that in a normal distribution, I can be 75 percent certain that any value will fall within 1.15 standard deviations above or below the mean.

Answers to Self-Test Sections

Answer to the Self-Test on Estimating the Population Average from a Sample: To find the Standard Error, we divide the Standard Deviation of the Sample (2.01 hr) by the square root of the number in the sample.
The square root of 50 is 7.07. Therefore the Standard Error is 2.01 hr divided by 7.07, or .28 hr. About the average time elementary school children in School District 68 spend on organized recreational exercise out of school, I can make the following statements: I am 68 percent certain that the average time falls between (2.46 + .28) and (2.46 - .28), or between 2.74 hr and 2.18 hr. I am 95 percent certain that the average time falls between 3.02 hr and 1.90 hr. And I am 99 percent certain that the average time falls between 3.30 hr and 1.62 hr. Notice here that the more confident I wish to be, the wider the range of values I have to accept.

Answer to the Self-Test on Confidence Levels (Section Q): The Standard Error of the sample is the Standard Deviation divided by the square root of the number in the sample, that is, 7.6 divided by the square root of 50, or 7.6 divided by 7.07, or 1.07 beats per minute. Thus, I can conclude the following about the population of 1000 athletes: I am 68 percent certain (or the probability is .68) that the average pulse rate is between (79.1 + 1.07) and (79.1 - 1.07), or between 80.17 and 78.03 beats per minute. I am 95 percent certain (or the probability is .95) that the average pulse rate is between 81.24 and 76.96 beats per minute. And I am 99 percent certain (or the probability is .99) that the average pulse rate is between 82.31 and 75.89 beats per minute. The Standard Error of the second sample is the Standard Deviation ($8000) divided by the square root of the number in the sample (1600), or 8000 divided by 40, or 200 dollars. Thus, I can be 95 percent certain that the average family income is between $51,700 and $50,900.

A famous example of the sort of bias which can occur in non-random sampling is Shere Hite's book Women and Love. The author mailed out 100,000 questionnaires to women's organizations (a Haphazard or Opportunity sample).
Only 4.5 percent were filled out and returned, so that the results were biased in favour of women who belong to such organizations and who were sufficiently motivated to respond. This very important property holds whether or not the population from which the samples are taken is normally distributed. The frequency distribution of the Sample Means from any population will always follow a normal distribution.
Mutation and Evolution

Mutations are the raw materials of evolution. Evolution absolutely depends on mutations because this is the only way that new alleles and new regulatory regions are created. But this seems paradoxical because
- most mutations that we observe are
  - harmful (e.g., many missense mutations) or, at best,
  - neutral, for example:
    - "silent" mutations encoding the same amino acid
    - perhaps many of the mutations in the vast amounts of DNA that lie between genes.
- most mutations in genes affect a single protein product (or a small set of related proteins produced by alternative splicing of a single gene transcript) while much evolutionary change involves myriad structural and functional changes in the phenotype.

So how can the small changes in genes caused by mutations, especially single-base substitutions ("point mutations"), lead to the large changes that distinguish one species from another? These questions have, as yet, only tentative answers.

One Solution: Duplication of Genes and Genomes

Mutations that would be harmful in a single pair of genes can be tolerated if those genes have first been duplicated. Gene duplication in a diploid organism provides a second pair of genes so that one pair can be safely mutated and tested in various combinations while the essential functions of the parent pair are kept intact.
- Over time, one of the duplicates can acquire a new function. This can provide the basis for adaptive evolution.
- But even while two paralogous genes are still similar in sequence and function, their existence provides redundancy ("belt and suspenders"). This may be a major reason why knocking out genes in yeast, "knockout mice", etc. so often has such a mild effect on the phenotype. The function of the knocked out gene can be taken over by a paralog.
- After gene duplication, random loss of these genes at a later time in one group of descendants different from the loss in another group could provide a barrier (a "post-zygotic isolating mechanism") to their interbreeding. Such a barrier could cause speciation: the evolution of two different species from a single ancestral species.
- Paralogous genes: genes in one species that have arisen by duplication of an ancestral gene. Example: genes encoding olfactory receptors.
- Duplication of the entire genome. Examples:
  - Polyploid angiosperms.
  - Genome analysis of three ascomycetes shows that early in the evolution of the budding yeast, Saccharomyces cerevisiae, its entire genome was duplicated. Each chromosome of the other ascomycetes contains stretches of genes whose orthologs are distributed over two Saccharomyces cerevisiae chromosomes.
  - There is also evidence that vertebrate evolution has involved at least two duplications of the entire genome. Example: both the invertebrate Drosophila and the invertebrate chordate Amphioxus contain a single HOX gene cluster while mice and humans have four.

A Second Solution: Mutations in Regulatory Regions

Not all genes are expressed in all cells. In which cells and when a given gene will be expressed is controlled by the interaction of:
- extracellular signals turning on (or off)
- transcription factors, which turn on (or off)
- particular genes.

A mutation that would be lethal in the protein coding region of a gene need not be if it occurs in a control region (e.g., promoters and/or enhancers) of that gene. In fact, there is increasing evidence that mutations in control regions have played an important part in evolution.
- Humans have a gene (LCT) encoding lactase, the enzyme that digests lactose (e.g., in milk). In most of the world's people, LCT is active in young children but is turned off in adults.
However, northern Europeans and three different tribes of African pastoralists, for whom milk remains a part of the adult diet, carry a mutation in the control region of their lactase gene that permits it to be expressed in adults. The mutation is different in each of the 4 cases — examples of convergent evolution.
- There are very few differences in the coding sequences between genes of humans and chimpanzees. However, many of their shared genes differ in their control regions.
- The story of Prx1. Prx1 encodes a transcription factor that is essential for forelimb growth in mammals. When mice have the enhancer region of their Prx1 replaced with the enhancer region of Prx1 from a bat (whose front limbs are wings), the front legs of the resulting mice are 6% longer than normal. Here, then, is a morphological change not driven by a change in the Prx1 protein but by a change in the expression of its gene.
- The story of Pitx1 [below]
- The story of Style2.1 in the domestic tomato

A Third Solution?

Another theoretically-possible way by which a point mutation might give rise to a new gene is if the point mutation in a previously noncoding section of DNA converts a triplet of nucleotides into ATG, thus creating a new open reading frame (ORF). It is increasingly evident that much of noncoding DNA is transcribed into a heterogeneous collection of RNAs. Transcription of DNA with its newly-acquired ATG codon would produce an RNA molecule with a translation start codon (AUG). Translation of this RNA would create a protein that most likely would be useless, perhaps even harmful, but might, on rare occasions, provide the starting point for the acquisition of a new useful gene.

Large Changes in Phenotype Can Come from Small Changes in Genotype

The building of an organ requires the coordinated activity of many genes.
However, these are often organized in hierarchies so that "upstream genes" regulate the activity of "downstream genes". The closer you get to the top with a mutation, the greater the changes effected downstream. Follow these links to see examples of the influence of "master" (selector) genes on the phenotype.

The Story of Pitx1

Pitx1 is
- a homeobox gene (similar to bicoid in Drosophila)
- with orthologs found in all vertebrates.

It contains 3 exons that encode a protein of some 283 amino acids (varying slightly in different species), which is a transcription factor that regulates the expression of other genes involved in the differentiation and function of
- the anterior lobe of the pituitary gland (Pitx1 = "Pituitary homeobox 1");
- jaw development (mutations are associated with cleft palate);
- development of the thymus and some types of mechanoreceptors;
- development of the hind limbs.

Its activity in these regions is controlled by regulatory regions (promoters and/or enhancers) specific to each region (and presumably turned on by other transcription factors in the cells of those regions). Pitx1 is an essential gene. Mutations in the coding regions are lethal when homozygous (shown in mice). However, mutations in noncoding regions need not be. All vertebrates have a pelvic girdle with associated bones which make up
- the pelvic fins of fishes and
- the hind legs of the tetrapods.

Pitx1 is needed by them all for the proper development of these structures (as well as for the other functions of Pitx1). In a remarkable study of three-spined sticklebacks published in the 15 April 2004 issue of Nature, Michael Shapiro, Melissa Marks, Catherine Peichel, and their colleagues report that a mutation in a noncoding region of the Pitx1 gene accounts for most of the difference in the structure of the pelvic bones of the marine stickleback and its close freshwater cousins.
The marine sticklebacks:
- have prominent spines jutting out in their pelvic region (red arrow) as well as the spines along the back (that give the fish its name). These spines may help protect them from being eaten by predators. (Drawing courtesy of the Parks Administration in the Emilia-Romagna region of Italy.)
- express the Pitx1 gene in various tissues, including mechanoreceptors and the pelvic region.

The freshwater sticklebacks:
- have no — or very much smaller — spines in their pelvic region;
- express the identical Pitx1 gene in all the same tissues except those that develop into the pelvic structures.

The reason: a mutation in an enhancer upstream of the Pitx1 exons. The unmutated enhancer turns on Pitx1 in the developing pelvic area. (Mice homozygous for a mutation in this control region have deformed hind limbs.)

Here, then, is a remarkable demonstration of how a single gene mutation can not only be viable but can lead to a major change in phenotype — adaptive evolution. (The changes seem not to have produced true speciation as yet. The marine and freshwater forms can interbreed. In fact, that is how the differences in their pelvic structures were found to be primarily due to the expression of Pitx1.)

A survey of 21 different populations of sticklebacks — both freshwater and marine — from different regions of North America, Europe, and Japan has revealed a pattern of consistent genetic differences that distinguish the freshwater from the marine forms. However, only 17% of the distinguishing mutations were found in exons, where they alter the amino acid sequence of the encoded proteins. All the rest were "silent", and most of these (41% or more) occurred in intergenic regions. These results further demonstrate the importance of mutations in regulatory regions — promoters and enhancers — in the evolution of adaptive phenotypes.

1 May 2014
In the animation described here, a yellow ball is in front of a plane mirror. The mirror produces an image of the ball behind the mirror. The image location is where all the reflected light rays appear to intersect. Any person viewing the ball image must sight at this image location. (Animation courtesy of The Physics Classroom, used with permission. All rights reserved.)

March 3, 2009
April Holladay, HappyNews Columnist

Q: Can I touch a rainbow? (Azhar, Saudi Arabia)

A: We could touch rainbows if they were physical objects. But rainbows are not objects. A rainbow is "a distorted image of the sun" whose light many raindrops bend, reflect and scatter to our eyes, write meteorologists Raymond L. Lee, Jr. and Alistair B. Fraser in The Rainbow Bridge. Optically speaking, an image is "any collection of light rays that appears to come from a more-or-less well-defined set of directions," Lee says. So an image is made of light.

The question remains: can we touch the image called a 'rainbow'? The American Heritage Dictionary defines touching as "To cause or permit a part of the body, especially the hand or fingers, to come in contact with so as to feel: reached out and touched the smooth stone." But a rainbow is different from a stone. In fact, we can no more touch a rainbow than we can touch a reflected nose in a mirror. However, we can touch the place on a mirror reflecting the nose. Likewise we can touch the water drops producing a rainbow. But we can't touch the image.

Find out why with a simple experiment. Stand before a plane mirror side by side with a friend. Touch the spot on the mirror where you see your nose. Keep your finger on the spot. Have your friend touch the spot on the mirror where he sees your nose. You each touched a different spot and saw a slightly different view of the nose, because you view the nose along different lines of sight. Neither of you touched the image.
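The mirror experiment just described can be sketched numerically. This is a minimal model of my own, not from the column: treat the mirror as the plane x = 0 with positions in metres; the virtual image of a point sits as far behind the mirror as the object is in front, and each observer's line of sight to that image crosses the glass at a different spot.

```python
# Illustrative sketch: mirror = the plane x = 0; positions in metres.

def image_of(point):
    """Virtual image of a point reflected in the mirror plane x = 0."""
    x, y = point
    return (-x, y)  # as far behind the mirror as the object is in front

def sight_spot(observer, image):
    """Where the observer's line of sight to the image crosses x = 0."""
    (x1, y1), (x2, y2) = observer, image
    t = x1 / (x1 - x2)          # fraction of the segment where x reaches 0
    return (0.0, y1 + t * (y2 - y1))

nose = (0.3, 0.0)               # nose a foot (0.3 m) in front of the mirror
img = image_of(nose)            # (-0.3, 0.0): one foot *behind* the mirror

you = (0.3, 0.0)                # looking straight on
friend = (0.3, 0.6)             # standing 0.6 m to one side

print(sight_spot(you, img))     # (0.0, 0.0)
print(sight_spot(friend, img))  # (0.0, 0.3): a different spot on the glass
```

Both sight lines converge only at the image location behind the mirror, which is why the two of you touch different spots on the glass yet see the same nose.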
In fact, the image is located behind the mirror, where it would seem to all observers that the reflected light from your nose originates. We can, however, touch an image location. If you stand a foot (0.3 m) in front of a mirror and see your nose, then the nose image is one foot behind the mirror. You can touch that location, if you can get behind the mirror. Even then, though, you couldn't touch the image, because no light gets behind the mirror. "Light does not actually pass through the [image] location on the other side of the mirror; it only appears to an observer as though the light is coming from this location," says physicist Tom Henderson. The nose image at the image location is a virtual image, not a real one. Real images are made of light.

Back to touching the raindrops that make a rainbow: Suppose you turn on a sprinkler and see a rainbow in the sunlit spray. You can "certainly touch the spray" that generates the rainbow, Lee and Fraser write. By the way, just as you and your friend saw different views of your nose, "each of your eyes sees a slightly different rainbow," Lee emails.

However, unlike the nose image location, we can't touch the location of the rainbow image. It is behind the rainbow (at the so-called antisolar point), much as the nose image is behind the mirror. But the antisolar point is too far away to touch. Being an image of the sun, the rainbow image is located at the same distance behind the raindrops as the sun is in front — "effectively at infinity," Fraser says. It seems strange that a rainbow is as far away as the sun. But try moving: the rainbow moves with you, just as the sun does.

A rainbow looks nothing like the sun, so you might wonder if it's really an image of the sun. A camera makes images that look like the objects they represent. Similarly, binoculars, a magnifying glass and a slide projector produce realistic-looking images. But a rainbow is not a man-made image.
A rainbow is like a mirage, whose "wild distortions are merely images formed by the atmosphere behaving as a lens," says Fraser.

Lee explains how raindrops form the sun's distorted image, a rainbow. Many raindrops, acting in concert, change direct sunlight so that:
· reflection within the drops makes sunlight appear to come from the sky opposite the sun (just as reflection in a mirror makes a nose appear to be behind the mirror);
· sunlight's spectral colors are revealed by its refraction on entering and exiting the drops (just as a prism bends light and reveals its colors);
· the drops' approximate spherical symmetry makes rainbow light appear to come from a set of directions that encircles the antisolar point — that point behind the rainbow where the image exists.

Thus we see a bright, colorful circle of rainbow light — but it's merely a greatly rearranged image of the sun — an image we cannot reach to touch.

Further reading:
· The Rainbow Bridge: Rainbows in Art, Myth, and Science, by Raymond L. Lee and Alistair B. Fraser. University Park, PA: Pennsylvania State University Press, 2001.
· Image formation for plane mirrors, by Tom Henderson, Physics Classroom Tutorial.
· What causes rainbows, WeatherQuesting.
· Rainbows, by Alistair Fraser.
· Atmospheric Optics, by Les Cowley.
· Rainbows, by Rod Nave, Hyperphysics.

(Answered March 9, 2009)

Reader answers:

A: Yes, you can catch a rainbow. While driving in a valley in England I had the unique experience of driving through the end of a rainbow.
John Albinson, Kingston, Ontario, Canada

A: Yes! Since a rainbow is the reflection of light on tiny water droplets, if you're close enough to touch the water that's reflecting the light, then you are basically touching the rainbow. Try it with a garden hose spray nozzle that can be adjusted to a fine mist. On a sunny afternoon, spray it toward your shadow, and you should see a rainbow that, if the angles are right, is close enough to touch.
Anthony Kerschen, McDonough, Georgia, USA

A: In short, you can touch someone else's rainbow, but not your own. A rainbow is light reflecting and refracting off water particles in the air, such as rain or mist. The water particles and refracted light that form the rainbow you see can be miles away and are too distant to touch. However, it is possible to touch the water particles and refracted light (if you agree that you can touch light) of a rainbow that someone else is viewing. Imagine flying an open-cockpit plane through water particles that are refracting light into a rainbow for someone viewing it from a distant vantage point.

There is one instance where you can touch your own rainbow, though, if you consider the refracted light from a garden hose spray to be a rainbow. If you have ever tried this you may have discovered what I did – that rainbows are ethereal, and that waving your hand through the mist has no effect on them and makes it seem as though they exist in some other realm.
Janet Warner, Durham, North Carolina, USA

A: Yes, you can touch a rainbow. But not in the way most humans typically observe one. I lived beside a lake in Arkansas for 14 years. Every summer I rode a jet ski on that lake. Occasionally the light would be perfect and would interact with the spray behind the jet ski to produce a small rainbow. Theoretically, I could have reached back and touched any part of it. So, yes, but on a smaller scale than most perceive.
Will McBride, Colorado, USA

A: The rainbow is an optical phenomenon of refraction. When sunlight hits a drop of water, it breaks down into colors at a certain angle, and it is also seen only from a certain angle. If you move from the position where you saw the rainbow to a different position, you will see a different rainbow, reflected from different water drops. And if you get out of this angle altogether, the rainbow just disappears.
This can be easily experienced within reachable distance when seeing the rainbow produced by a nearby fountain spray. So, if you move from where your eyes touch the rainbow to a position where your hands could touch it, you will only be touching the air where it was; the rainbow will not be there for you to see. In other words, you can never get hold of the rainbow!
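As a supplement to these answers (my own sketch, not part of the original column), the standard single-reflection ray model explains why rainbow light arrives from a narrow band of directions roughly 42 degrees from the antisolar point: rays that refract into a spherical drop, reflect once inside, and refract out pile up near the angle of minimum deviation. The refractive index n = 1.333 for water is an assumed round value.

```python
# Illustrative sketch: minimum deviation of sunlight in a spherical
# water drop (one internal reflection), assuming n = 1.333 for water.
import math

N_WATER = 1.333

def deviation(i, n=N_WATER):
    """Total deviation (radians) of a ray hitting a drop at incidence
    angle i: refract in, reflect once inside, refract out."""
    r = math.asin(math.sin(i) / n)
    return math.pi + 2 * i - 4 * r

# Brute-force scan of incidence angles for the minimum deviation.
step = math.pi / 2 / 100000
d_min = min(deviation(k * step) for k in range(100000))

# Rainbow light concentrates near the minimum-deviation direction,
# measured here as an angle away from the antisolar point.
rainbow_angle = math.degrees(math.pi - d_min)
print(round(rainbow_angle))  # 42
```

The same model with two internal reflections gives the fainter secondary bow at a larger angle, with its colors reversed.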
Papermakers produce bright, white paper by bleaching brown pulp with chlorine-based chemicals. One by-product, unfortunately, is dioxin, a proven carcinogen and one of the world's most toxic compounds. According to an EPA study, fish downstream from many U.S. paper mills are contaminated with high levels of dioxin. Also, the toxin apparently exists in the paper products themselves—and can pass into food. A study of bleached-paper milk cartons by Canada's Health and Welfare Branch has shown that dioxin from the cartons migrates into the milk. Greenpeace U.S.A. has petitioned the USDA, which subsidizes this country's school lunch program, to drop bleached milk cartons from the program. "Dioxin is so incredibly toxic," says Greenpeace spokesperson Shelley Stewart, "that no amount, no matter how small, can be considered safe." Several European countries are tackling the problem by leaving cartons unbleached or by switching to a safer oxygen-bleaching process.

North Dakotans will celebrate their state's 100th anniversary this year by launching a program to plant 1 million trees for each year since the territory declared statehood—100 million trees, in other words—by the year 2000. That figures out to 10 million trees every year for the next 10 years. According to American Forests magazine, North Dakota, though one of the least-forested states in the nation, leads the country in the planting of trees for shelterbelts and windbreaks.

Scientists at the U.S. Forest Service's North Central Forest Experimental Station in East Lansing, Michigan, have found that thirsty plants emit high-pitched noises inaudible to the human ear but meaningful to insects. Trees and plants produce these "acoustic emissions" when their moisture-conducting tissue—xylem—begins to break down from lack of water; the more the plant deteriorates, the more frequent the sounds become.
The "distress calls" may be the plants' undoing; the researchers theorize that insects use the sounds to locate weakened plants, which they use for food or breeding grounds.
Isaiah Berlin (1909-97) is one of the most important thinkers, and one of the most celebrated prose-writers, of the twentieth century. A plain-speaking connoisseur of human beings, he not only understands the human condition in the broadest sense, but also revels in the indispensable idiosyncratic details of our individual lives. His brilliant, timeless essays speak to any receptive and intelligent reader.

Berlin strongly believed in 'the power of ideas'. His own ideas, which often swim against the current of conventional thought, are increasingly relevant today. Growing globalisation, migration and interconnectedness, far from being homogenising forces, uncover and exacerbate the ethical differences that divide humanity. They bring to centre stage just those issues of multiculturalism and mutual cultural toleration that Berlin illuminated.

Berlin is a staunch defender of the maximum possible freedom from political control, and of the widest practicable range of moral and cultural diversity, as against the authoritarianism and conformism to which societies are continuously vulnerable. His distinction between the 'monist' hedgehog (the single-issue fanatic) and the 'pluralist' fox (who welcomes variety and untidiness), and his celebration of 'the crooked timber of humanity', out of which 'no straight thing was ever made', have entered the vocabulary of modern culture.

Berlin's 'value pluralism' - the recognition that our deepest values are irreducibly multiple, not variations on a single overarching value such as happiness or utility, and may create clashes that cannot be resolved without tragic loss - is a view of morality with the power to transform lives, to turn missionaries into explorers, terrorists into diplomats. And the problems created by the plurality of value are endemic in human nature: each political authority and each individual has to face intractable conflicts of the kind that Berlin highlighted. Berlin gives no comfort to fanatics.
All inflexibly assertive ideologies, political or religious, are dangerous, and must be firmly resisted. The stand-off between extreme Islamic religious beliefs and Western liberalism is a paradigmatic case of conflict between monism and pluralism. How are we to regard or treat those who not only differ from us in their fundamental commitments, but insist that we are in the wrong, and sometimes try to force us to comply? Berlin's response is clear. Fundamentalism, terrorism and aggressive nationalism are driven by mistaken but powerful ideas, just as Nazism and Communism were. The ideas may be cruder and less well articulated, but they are no less potent. They stem from ignorance and prejudice, propaganda and stereotypes, which should be combated by all available techniques of education: our enemies' enemy is knowledge. The price of freedom, famously, is eternal vigilance. And the defence of freedom and complexity against their betrayers is a task that does not grow easier or less urgent. The same goes for our defence against all intellectual malignity. Our susceptibility to ideological error and manipulation is inbuilt, and Berlin is a trusty guide as we seek to understand and resist the illiberal forces that permanently threaten to blight our lives and poison the course of history.
William Blake was born in London on November 28, 1757, to James, a hosier, and Catherine Blake. Two of his six siblings died in infancy. From early childhood, Blake spoke of having visions—at four he saw God "put his head to the window"; around age nine, while walking through the countryside, he saw a tree filled with angels. Although his parents tried to discourage him from "lying," they did observe that he was different from his peers and did not force him to attend conventional school. He learned to read and write at home. At age ten, Blake expressed a wish to become a painter, so his parents sent him to drawing school. Two years later, Blake began writing poetry. When he turned fourteen, he apprenticed with an engraver because art school proved too costly. One of Blake's assignments as apprentice was to sketch the tombs at Westminster Abbey, exposing him to a variety of Gothic styles from which he would draw inspiration throughout his career. After his seven-year term ended, he studied briefly at the Royal Academy. In 1782, he married an illiterate woman named Catherine Boucher. Blake taught her to read and to write, and also instructed her in draftsmanship. Later, she helped him print the illuminated poetry for which he is remembered today; the couple had no children. In 1784 he set up a printshop with a friend and former fellow apprentice, James Parker, but this venture failed after several years. For the remainder of his life, Blake made a meager living as an engraver and illustrator for books and magazines. In addition to his wife, Blake also began training his younger brother Robert in drawing, painting, and engraving. Robert fell ill during the winter of 1787 and succumbed, probably to consumption. As Robert died, Blake saw his brother's spirit rise up through the ceiling, "clapping its hands for joy." 
He believed that Robert's spirit continued to visit him and later claimed that in a dream Robert taught him the printing method that he used in Songs of Innocence and other "illuminated" works. Blake's first printed work, Poetical Sketches (1783), is a collection of apprentice verse, mostly imitating classical models. The poems protest against war, tyranny, and King George III's treatment of the American colonies. He published his most popular collection, Songs of Innocence, in 1789 and followed it, in 1794, with Songs of Experience. Some readers interpret Songs of Innocence in a straightforward fashion, considering it primarily a children's book, but others have found hints at parody or critique in its seemingly naive and simple lyrics. Both books of Songs were printed in an illustrated format reminiscent of illuminated manuscripts. The text and illustrations were printed from copper plates, and each picture was finished by hand in watercolors. Blake was a nonconformist who associated with some of the leading radical thinkers of his day, such as Thomas Paine and Mary Wollstonecraft. In defiance of 18th-century neoclassical conventions, he privileged imagination over reason in the creation of both his poetry and images, asserting that ideal forms should be constructed not from observations of nature but from inner visions. He declared in one poem, "I must create a system or be enslaved by another man's." Works such as "The French Revolution" (1791), "America, a Prophecy" (1793), "Visions of the Daughters of Albion" (1793), and "Europe, a Prophecy" (1794) express his opposition to the English monarchy, and to 18th-century political and social tyranny in general. Theological tyranny is the subject of The Book of Urizen (1794). In the prose work The Marriage of Heaven and Hell (1790-93), he satirized oppressive authority in church and state, as well as the works of Emanuel Swedenborg, a Swedish philosopher whose ideas once attracted his interest. 
In 1800 Blake moved to the seacoast town of Felpham, where he lived and worked until 1803 under the patronage of William Hayley. He taught himself Greek, Latin, Hebrew, and Italian, so that he could read classical works in their original language. In Felpham he experienced profound spiritual insights that prepared him for his mature work, the great visionary epics written and etched between about 1804 and 1820. Milton (1804-08), Vala, or The Four Zoas (1797; rewritten after 1800), and Jerusalem (1804-20) have neither traditional plot, characters, rhyme, nor meter. They envision a new and higher kind of innocence, the human spirit triumphant over reason. Blake believed that his poetry could be read and understood by common people, but he was determined not to sacrifice his vision in order to become popular. In 1808 he exhibited some of his watercolors at the Royal Academy, and in May of 1809 he exhibited his works at his brother James's house. Some of those who saw the exhibit praised Blake's artistry, but others thought the paintings "hideous" and more than a few called him insane. Blake's poetry was not well known by the general public, but he was mentioned in A Biographical Dictionary of the Living Authors of Great Britain and Ireland, published in 1816. Samuel Taylor Coleridge, who had been lent a copy of Songs of Innocence and of Experience, considered Blake a "man of Genius," and Wordsworth made his own copies of several songs. Charles Lamb sent a copy of "The Chimney Sweeper" from Songs of Innocence to James Montgomery for his Chimney-Sweeper's Friend, and Climbing Boys' Album (1824), and Robert Southey (who, like Wordsworth, considered Blake insane) attended Blake's exhibition and included the "Mad Song" from Poetical Sketches in his miscellany, The Doctor (1834-1837). Blake's final years, spent in great poverty, were cheered by the admiring friendship of a group of younger artists who called themselves "the Ancients." 
In 1818 he met John Linnell, a young artist who helped him financially and also helped to create new interest in his work. It was Linnell who, in 1825, commissioned him to design illustrations for Dante's Divine Comedy, the cycle of drawings that Blake worked on until his death in 1827.

Selected Works
All Religions Are One (1788)
America, a Prophecy (1793)
Europe, a Prophecy (1794)
For Children: The Gates of Paradise (1793)
For the Sexes: The Gates of Paradise (1820)
Poetical Sketches (1783)
Songs of Experience (1794)
Songs of Innocence (1789)
The Book of Ahania (1795)
The Book of Los (1795)
The First Book of Urizen (1794)
The Marriage of Heaven and Hell (1790)
The Song of Los (1795)
There Is No Natural Religion (1788)
Visions of the Daughters of Albion (1793)
Starting antiretroviral therapy reduces the risk of tuberculosis for HIV-positive adults in developing countries by 65%, according to the results of a meta-analysis published in PLoS Medicine. The benefits of HIV therapy were significant at all CD4 cell counts, including above 350 cells/mm3, the current World Health Organization (WHO) threshold for initiation of antiretroviral treatment. The investigators therefore believe that their findings should be taken into account when therapy at higher CD4 cell counts is being considered.

HIV is the biggest risk factor for TB and has contributed to the resurgence of the disease, especially in resource-limited settings. In 2010, there were an estimated 1.1 million new cases of TB in people with HIV. An estimated 900,000 cases were in people living with HIV in Africa.

WHO guidelines issued in 2009 recommended antiretroviral therapy for people with a CD4 cell count below 350 cells/mm3 and for all HIV-positive individuals with TB. Since then, studies have been published suggesting that the scaling-up of HIV treatment could contribute to the control of the TB epidemic.

Investigators wanted to get a clearer understanding of the impact of starting antiretroviral therapy on the risk of TB. They therefore conducted a systematic review and meta-analysis of published studies that addressed this question. The investigators restricted their search to research conducted in developing countries. Studies were eligible for inclusion if they compared the incidence of TB in HIV-positive adults according to their use of antiretroviral therapy. All the studies had at least six months of follow-up.

A total of eleven studies met the investigators' inclusion criteria. Four were conducted in sub-Saharan Africa; four were from South America; one was conducted in the Caribbean; and one was from a combination of regions (sub-Saharan Africa, South America and Asia).
The methodological quality of four studies was rated as high; five were of medium quality; and three were of low quality.

The meta-analysis of the findings of all eleven studies showed that antiretroviral therapy was strongly associated with a reduction in the incidence of TB, regardless of CD4 cell count (HR = 0.35; 95% CI, 0.28-0.44; p < 0.001).

Two studies involved people with a CD4 cell count below 200 cells/mm3. Their combined results showed that antiretrovirals reduced the risk of TB by 84% (HR = 0.16; 95% CI, 0.07-0.36).

A total of four studies involved people with CD4 cell counts between 200 and 350 cells/mm3. When combined, their results showed that HIV therapy reduced the risk of TB by 66% (HR = 0.34; 95% CI, 0.19-0.60; p < 0.001).

Starting HIV therapy with a CD4 cell count above 350 cells/mm3 also had a significant impact on the incidence of TB. The combined results of the three studies which involved people with CD4 cell counts above this level showed that starting therapy reduced the risk of TB by 57% (HR = 0.43; 95% CI, 0.30-0.63; p < 0.001).

"This systematic review indicates that antiretroviral therapy is strongly associated with a reduction in tuberculosis incidence in adults with HIV across all CD4 cell counts," comment the authors. "Our key finding that antiretroviral therapy has a significant impact on preventing tuberculosis in adults with CD4 counts above 350 cells/mm3 is consistent with studies from developed countries and will need to be considered by healthcare providers, researchers, policy makers, and people living with HIV when weighing the benefits and risks of initiating antiretroviral therapy above 350 cells/mm3."
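For readers curious how hazard ratios like these are combined, here is a hedged sketch of fixed-effect inverse-variance pooling on the log hazard-ratio scale, a standard meta-analytic approach. The three input triples are made-up illustrative numbers, not the study's actual data, and the published analysis may well have used different (for example, random-effects) methods.

```python
# Hedged sketch of fixed-effect inverse-variance pooling on the log
# hazard-ratio scale. Inputs are (HR, 95% CI lower, 95% CI upper);
# the three triples below are illustrative placeholders only.
import math

Z95 = 1.959964  # normal quantile for a two-sided 95% CI

def pool_hazard_ratios(studies):
    """Return the pooled HR and its 95% CI for a list of
    (hr, ci_low, ci_high) study results."""
    num = den = 0.0
    for hr, lo, hi in studies:
        log_hr = math.log(hr)
        se = (math.log(hi) - math.log(lo)) / (2 * Z95)  # SE from CI width
        w = 1.0 / se ** 2                               # inverse-variance weight
        num += w * log_hr
        den += w
    pooled = num / den
    se_pooled = math.sqrt(1.0 / den)
    return (math.exp(pooled),
            math.exp(pooled - Z95 * se_pooled),
            math.exp(pooled + Z95 * se_pooled))

hr, lo, hi = pool_hazard_ratios([(0.30, 0.18, 0.50),
                                 (0.40, 0.25, 0.64),
                                 (0.35, 0.20, 0.61)])
print(round(hr, 2), round(lo, 2), round(hi, 2))  # ~0.35 (0.26-0.47)
```

Because pooling averages on the log scale, the combined estimate always lands between the most and least extreme study estimates, with a narrower confidence interval than any single study.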
Fracking, the controversial method of mining shale gas, is widespread across Pennsylvania, covering up to 280,000 km² of the Appalachian Basin. New research in the Annals of the New York Academy of Sciences explores the threat posed to biodiversity, including pollution from toxic chemicals, the building of well pads and pipelines, and changes to wetlands.

"Shale gas has engendered a great deal of controversy, largely because of its impact on human health, but effects on biological diversity and resources have scarcely been addressed in the public debate," said study author Erik Kiviat. "This study indicated a wide range of potential impacts, some of which could be severe, including salinization of soils and surface waters and fragmentation of forests. The degree of industrialization of shale gas landscapes, and the 285,000 km² extent of the Marcellus and Utica shale gas region alone, should require great caution regarding impacts on biodiversity."

Contact: Ben Norman
CloudSat is the first satellite that uses an advanced radar to "slice" through clouds to see their vertical structure, providing a completely new observational capability from space (previous weather satellites could only image the uppermost layers of clouds). CloudSat's primary goal is to furnish data needed to evaluate and improve the way clouds are represented in global models, thereby contributing to better predictions of clouds and to a better understanding of their poorly understood role in climate change and the cloud-climate feedback.

CloudSat is an international and interagency mission with project management by JPL. Partners include the Canadian Space Agency, the U.S. Air Force and the U.S. Department of Energy. Ball Aerospace designed and built the spacecraft.

Purpose: Radar studies of clouds
Launch: April 28, 2006
The music of ancient Greece was inseparable from poetry and dancing. It was entirely monodic, there being no harmony as the term is commonly understood. The earliest music is virtually unknown, but in the Homeric era a national musical culture existed that was looked upon by later generations as a "golden age." The chief instrument was the phorminx, a lyre used to accompany poet-singers who composed melodies from nomoi, short traditional phrases that were repeated. The earliest known musician was Terpander of Lesbos (7th cent. B.C.). The lyric art of Archilochus, Sappho, and Anacreon was also musical in nature.

In the 6th cent. B.C., choral music was used in the drama, for which Pindar developed the classical ode. The main instruments at this time were the aulos, a type of oboe associated with the cult of Dionysus, and the kithara, a type of lyre associated with Apollo and restricted to religious and hymnic use. This classical style of composition decayed in the last quarter of the 5th cent. B.C. After the fall of Athens in 404 B.C., an anti-intellectual reaction took place against the classical art, and by about 320 B.C. it was almost forgotten. The new style, which resulted in the rise of professional musicians, was marked by subjective expression, free forms, more elaborate melody and rhythms, and chromaticism. The chief musical figures were Phrynis of Mitylene (c.450 B.C.), his pupil Timotheus of Miletus, and the dramatist Euripides. Finally, ancient Greek music lost its vitality and dwindled to insignificance under the Roman domination.

There were two systems of musical notation, a vocal and an instrumental, both of which are decipherable, though still problematic, largely because of the Introduction to Music written by Alypius (c.A.D. 360). In spite of the prominent position of music in the cultural life of ancient Greece, only 15 musical fragments are extant, all of which date from the postclassical period.
Early in its history, Greek music benefited from the discovery, usually attributed to Pythagoras of Samos, of the numerical relations of tones to divisions of a stretched string. The temperament, or Pythagorean tuning, derived from this series of ratios has been important throughout subsequent music history. The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved.
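As an illustrative aside (not part of the encyclopedia entry), the ratio arithmetic behind Pythagorean tuning can be sketched in a few lines: stacking pure 3:2 fifths and folding each ratio back into a single octave yields the familiar Pythagorean scale ratios.

```python
# Sketch of the Pythagorean idea: scale tones as whole-number ratios,
# built by stacking pure 3:2 fifths and folding back into one octave.
from fractions import Fraction

def pythagorean_scale(steps=7):
    """Ratios (1 <= r < 2) of a scale built from stacked perfect fifths."""
    ratios = []
    r = Fraction(1)
    for _ in range(steps):
        ratios.append(r)
        r *= Fraction(3, 2)   # up a perfect fifth (string lengths 2:3)
        while r >= 2:
            r /= 2            # fold back into the octave
    return sorted(ratios)

scale = pythagorean_scale()
print([str(x) for x in scale])
# ['1', '9/8', '81/64', '729/512', '3/2', '27/16', '243/128']
```

The 9/8 whole tone and 3/2 fifth here are the intervals that make this "Pythagorean" temperament recognizable throughout later music history.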
<urn:uuid:6afbfbab-6084-4b92-9db8-847da30df58b>
CC-MAIN-2016-26
http://www.factmonster.com/encyclopedia/entertainment/greek-music-ancient-greek-music.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398628.62/warc/CC-MAIN-20160624154958-00001-ip-10-164-35-72.ec2.internal.warc.gz
en
0.971515
531
4.03125
4
Every day, we take steps to reduce risks in our lives. We wear a helmet when riding a bike and wear seat belts when driving to reduce the risk of getting hurt, but we don't spend too much time thinking about risk. However, it's important to understand risk as it relates to health. "Risk" has special meanings in the health and medical fields. Knowing the basic types of risk can help you understand your chances of getting breast cancer and the steps you can take to lower your risk.

The most basic type of risk is absolute risk. Absolute risk is a person's chance of developing a certain disease over a certain period of time. Absolute risk is estimated by looking at a large group of people who are similar in some way (the same age, for example) and counting how many people in the group develop a certain disease over a certain period of time. Knowing the absolute risk of a disease can help you understand the health risks in your life. The examples below show that the absolute risk of breast cancer is low in young women and much higher in older women.

If we followed 100,000 women ages 30 to 34 for one year, about 25 women would develop breast cancer. So, the one-year absolute risk of breast cancer for a 30- to 34-year-old woman is 25 per 100,000 women (or 1 per 4,000 women). This is a risk of less than one percent. Another way to say this is that the chances of getting breast cancer in the next year are less than one percent for the average 30- to 34-year-old woman. From this example, we can see that the one-year absolute risk of breast cancer for a young woman is low.

You may see absolute risk presented over long periods of time. The table below shows the 10-year absolute risks of breast cancer by age.
Absolute risk of breast cancer in American women by age

If current age is: | Absolute risk of developing breast cancer in the next 10 years is:
- 1 in 1,674 (0.06%)
- 1 in 225 (0.4%)
- 1 in 69 (1.4%)
- 1 in 44 (2.3%)
- 1 in 29 (3.5%)
- 1 in 26 (3.9%)
- Lifetime risk (0 to 85): 1 in 8 (12.3%)

Source: American Cancer Society.

One absolute risk you may see is the lifetime risk of breast cancer. Women in the U.S. have a "1 in 8" (or about 12 percent) lifetime risk of getting breast cancer. This means that for every 8 women in the U.S. who live to be age 85, one will be diagnosed with breast cancer during her lifetime. The lifetime risk of breast cancer is much higher than the 1- or 10-year absolute risks of breast cancer. This is because the lifetime risk adds up all the 1-year absolute risks over a woman's life, up to age 85. Learn how lifetime risk of breast cancer varies worldwide.

Anything that increases or decreases a person's absolute risk of getting a disease is called a risk factor. A risk factor can be related to lifestyle (such as lack of exercise), genetics (such as family history) or the environment (such as radiation exposure). Some factors increase risk. For example, older women have a higher risk of getting breast cancer than younger women. So, age is a risk factor for breast cancer. Some factors decrease risk. For example, women who breastfeed have a lower risk of getting breast cancer than women who do not (learn more). So, breastfeeding is also a risk factor for breast cancer.

A relative risk shows how much higher, how much lower or whether there is no difference in risk in people with a certain risk factor compared to the risk in people without the factor. A relative risk compares two absolute risks. The absolute risk of those with the factor divided by the absolute risk of those without the factor gives you the relative risk.
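The absolute-risk figures quoted above (25 per 100,000, "less than one percent," "1 per 4,000 women") are the same number written three ways, which a few lines of Python make explicit. This is only an illustrative sketch; the variable names are our own, not from the source.

```python
# The one-year absolute risk for women ages 30 to 34, from the text:
# 25 cases among a group of 100,000 women followed for one year.
cases = 25
group_size = 100_000

absolute_risk = cases / group_size    # probability: 0.00025
percent = absolute_risk * 100         # as a percentage: "less than one percent"
one_in_n = group_size / cases         # as "1 in N": "1 per 4,000 women"

print(f"{percent:.3f}% = 1 in {one_in_n:.0f}")  # 0.025% = 1 in 4000
```

The same conversions work for any row of the table: for example, the lifetime figure of 1 in 8 corresponds to an absolute risk of about 12 percent.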
When relative risk is:
- Greater than 1 (for example, 1.5 or 2.0): People with the risk factor have a higher risk compared to people without the risk factor. A relative risk of 1.5 means someone with the risk factor has a 50 percent higher risk of breast cancer than someone without the factor. A relative risk of 2.0 means someone with the risk factor has twice the risk (or 2-fold the risk) of someone without the factor.
- Less than 1 (for example, 0.8): People with the risk factor have a lower risk compared to people without the risk factor. A relative risk of 0.8 means someone with the risk factor has a 20 percent lower risk of breast cancer than someone without the factor.
- Equal to 1: There is no difference in risk between people with and without the risk factor.

Say a study shows that women who don't exercise (inactive women) have a 25 percent increase in breast cancer risk compared to women who do exercise (active women). This statistic is a relative risk (the relative risk is 1.25). It means inactive women are 25 percent more likely to develop breast cancer than women who exercise.

The impact of a relative risk depends on the underlying absolute risk of a disease. We can think about relative risk in terms of money. If you have a single dollar, this makes dollars "rare." If you double your money, you only gain one extra dollar. But if you have one million dollars, this makes dollars "common," and doubling your money means you gain one million extra dollars. In both cases, you double your money, but the real increase in dollars is quite different. The same is true with disease risk. The higher the absolute risk of getting a disease, the greater the number of extra cases that will occur for a given relative risk.

Using our example of the exercise study above, we can show how absolute risks affect the number of extra cases. Inactive women have a 25 percent greater risk of breast cancer than active women (a relative risk of 1.25).
Since older women are more likely to get breast cancer, a lack of exercise has a greater impact on breast cancer risk in older women than in younger women.

Let’s first look at the women in the study ages 70 to 74. The study finds that 500 women per 100,000 who are inactive develop breast cancer during one year. This is the absolute risk for women with the risk factor, lack of exercise. The study also shows that 400 women per 100,000 who are active develop breast cancer. This is the absolute risk for women without the risk factor. So, the relative risk is 1.25 for women who are inactive compared to those who are active. Among women ages 70 to 74, being inactive led to 100 more cases of breast cancer per 100,000 women in one year (500 cases – 400 cases = 100 cases).

Now let’s look at the women in the study ages 20 to 29. The study finds that 5 women per 100,000 who were inactive developed breast cancer during one year. And 4 women per 100,000 who were active got breast cancer. Here, the relative risk is also 1.25. However, in women ages 20 to 29, being inactive led to only one extra case of breast cancer per 100,000 women (5 cases – 4 cases = 1 case).

So, the same relative risk of 1.25 led to many more extra cases of breast cancer in the older women (100 extra cases) than in the younger women (one extra case). The impact of the same relative risk (1.25) was different depending on the underlying absolute risk.

Relative risks can be presented in many ways. This guide may help you recognize a relative risk when you see or hear it. A relative risk between 1 and 1.99 may be presented in several ways. For example, in the exercise study above, the relative risk is 1.25. You may see: When a relative risk is 2 or more, it is often presented as the number of times the risk is increased. For example, women with atypical hyperplasia, a benign breast condition, have a relative risk of about 4 compared to women without atypical hyperplasia.
You may see: A relative risk less than 1 means the risk factor lowers the risk of disease. For example, women who breastfeed for one year have a relative risk of breast cancer of about 0.94 compared to women who do not breastfeed. You may see:

You can put your understanding of relative risks to work right away. Our Breast Cancer Research Studies section has summary tables of the current body of research on topics ranging from risk factors to treatment to social support. These tables show some of the research behind many of the recommendations and standards of practice discussed throughout this section. If you’re not familiar with how the research process works (or just need a refresher), “How to read a research table” is a good place to start before looking at the tables. Learn more about breast cancer research.

Understanding absolute risk and relative risk can help you make informed choices about your health. No matter your risk of breast cancer, a healthy lifestyle is always important. Learn more about healthy behaviors and breast cancer risk.
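The exercise example above (inactive versus active women) can be reproduced in a few lines. This is only an illustrative sketch using the case counts given in the text; the function and variable names are our own.

```python
def relative_risk(risk_with_factor, risk_without_factor):
    """Absolute risk of those with the factor divided by the risk of those without it."""
    return risk_with_factor / risk_without_factor

# Breast cancer cases per 100,000 women in one year, from the exercise example.
older_inactive, older_active = 500, 400      # women ages 70 to 74
younger_inactive, younger_active = 5, 4      # women ages 20 to 29

# The relative risk is the same in both age groups...
print(relative_risk(older_inactive, older_active))      # 1.25
print(relative_risk(younger_inactive, younger_active))  # 1.25

# ...but the number of extra cases depends on the underlying absolute risk.
print(older_inactive - older_active)      # 100 extra cases per 100,000
print(younger_inactive - younger_active)  # 1 extra case per 100,000
```

The identical relative risk of 1.25 produces 100 extra cases per 100,000 in the high-baseline group but only one extra case in the low-baseline group, which is exactly the "dollars" point made above.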
<urn:uuid:867179f9-d56b-4f1f-8166-c9b4383b6480>
CC-MAIN-2016-26
http://ww5.komen.org/BreastCancer/UnderstandingRisk.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399522.99/warc/CC-MAIN-20160624154959-00147-ip-10-164-35-72.ec2.internal.warc.gz
en
0.951398
1,944
3.265625
3
Lilies are herbaceous flowering plants that grow from bulbs. The plants are commonly grown for their bright, ornamental blossoms, which often are used in bouquets and flower arrangements. Thousands of lily cultivars exist, resulting in a wide variety of sizes, shapes and colors. Some lilies can reach up to 6 feet tall, though most only reach about 24 inches. They are easy to care for and require only occasional maintenance to thrive once established.

1. Plant lily bulbs during late October in a location that receives at least eight hours of direct sunlight throughout the day. Spread a 1-inch layer of manure over the planting site and use a garden hoe to incorporate it into the soil to increase moisture retention and fertility. Space the lilies 14 inches apart.
2. Water the lilies once per week during the first month of growth, soaking the soil to a depth of 3 inches and allowing the soil to partially dry between applications. Reduce the frequency of watering after the first month only to weeks that receive less than 1 inch of rainfall. Apply at least 1 inch of water at each application.
3. Apply a balanced 10-10-10 NPK fertilizer to lilies in early spring, just as new shoots emerge from the soil. Water after applying to disperse the nutrients into the soil. Follow the fertilizer manufacturer's directions for dosage.
4. Remove dead or faded lily flowers as often as necessary to increase aesthetic appeal, conserve nutrients and encourage the formation of additional blossoms. Pinch off the flowers as close to their point of origin as possible to reduce damage.
5. Apply a 2-inch layer of mulch to the soil surrounding lilies in late fall after the plants have died back to insulate the soil. Remove the mulch in early spring, just as new growth begins to penetrate the surface of the soil.
<urn:uuid:5787f549-d849-4b05-8c4f-f84dfff718e8>
CC-MAIN-2016-26
http://www.gardenguides.com/94394-care-lilies.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395560.14/warc/CC-MAIN-20160624154955-00121-ip-10-164-35-72.ec2.internal.warc.gz
en
0.951847
369
3.359375
3
WELCH, TEXAS. Welch is at the junction of State highways 137 and 83 and Farm roads 829 and 2053, ten miles northeast of Cedar Lake and eighteen miles northwest of Lamesa in northwestern Dawson County. It started in 1924 when farmers near Lou donated land for a gin, and Charley Holden built a grocery store. In 1934 both the Lou and Pride post offices were closed and absorbed by the new post office at Welch, previously called Shack Town. Welch has elementary and high schools, churches, several stores, and gins serving the cotton-raising community. The population was 185 in 1949 and 110 in 1980 through 2000.

The following, adapted from the Chicago Manual of Style, 15th edition, is the preferred citation for this article: Handbook of Texas Online, William R. Hunt, "Welch, TX," accessed July 02, 2016, http://www.tshaonline.org/handbook/online/articles/hlw14. Uploaded on June 15, 2010. Published by the Texas State Historical Association.
<urn:uuid:1c65f457-33a8-4361-9c9e-b536b8a78fa6>
CC-MAIN-2016-26
https://www.tshaonline.org/handbook/online/articles/hlw14
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783408840.13/warc/CC-MAIN-20160624155008-00072-ip-10-164-35-72.ec2.internal.warc.gz
en
0.943417
370
2.765625
3
A team from the Financial Action Task Force (FATF), an inter-governmental body that sets benchmarks for legislation on money laundering, is in India to assess the country's legal and enforcement framework. The assessment, which ends on Friday, will set the ball rolling for New Delhi's membership of the elite body.

What is money laundering?
Illegal activities such as drug trafficking, trade in weapons and white collar crimes can generate large sums of money. Money laundering refers to the act of making these gains legitimate by disguising the source of money, changing its form or moving it to a location where not many questions are asked. The usual way is to put the money into the financial system by breaking it down into small deposits. The funds are then moved to different accounts with multiple banks. In the third stage, the money is used to acquire real assets, which then create legitimate gains. The estimates of money laundered range from 2-5 per cent of the national income.

What are the implications?
Unchecked money laundering makes monetary management difficult as there is no fix on the money supply. A country that is soft on illegal money risks losing foreign investments and can also attract unsocial elements. Such elements may gradually use their money power to acquire influence and undermine the system. Laundered money could also be used to finance terror.

What is the role of the FATF?
The FATF was founded by the G-7 countries in 1989 to develop and promote national and international policies to combat money laundering and terror financing. The membership of the FATF is limited to 35 countries at present. India has an observer status. It is a member of the Asia-Pacific Group, a FATF-style regional body.

How does FATF counter money laundering?
The FATF issued a set of 40 recommendations in April 1990 that provide a comprehensive plan to fight money laundering. The body came out with eight special recommendations in 2001.
In October 2004, it published a ninth Special Recommendation. FATF member countries have to comply with these recommendations.

How will FATF membership benefit India?
Membership of the body will allow India easy access to real-time information on money laundering and terror financing. India, a victim of terrorism, can raise the diplomatic pitch against perpetrators. It will also make India more attractive in the eyes of global investors.

What is the regulatory regime in India?
The Prevention of Money Laundering Act, 2002, forms the core of the legal framework in India. The PMLA and the rules notified thereunder have been in effect since July 1, 2005. FIU-IND and the Enforcement Directorate are the two agencies responsible for PMLA implementation. FIU-IND, the Financial Intelligence Unit, is a central agency that receives information, analyses and processes the data, and disseminates it to national and international authorities.
<urn:uuid:ee569ba1-da1c-4ee2-bcea-5d9418b917c7>
CC-MAIN-2016-26
http://articles.economictimes.indiatimes.com/2009-12-11/news/28388693_1_money-laundering-financial-action-task-force-fatf
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783394937.4/warc/CC-MAIN-20160624154954-00016-ip-10-164-35-72.ec2.internal.warc.gz
en
0.939149
566
2.703125
3
Research shows smoking one cigarette affects heart Dr. Vincent L. Sorrell. Photo by Cliff Hollis. (July 15, 2001) Preliminary research has shown that smoking a single cigarette can impair the heart, causing it to have to work harder. The research, released June 28 at the American Society of Echocardiography's 12th annual scientific sessions in Seattle, suggests that nicotine alone is not the trigger for this change in cardiac performance, as researchers did not see similar cardiac responses in participants who simply chewed nicotine gum. The lead researcher for the project was Dr. Vincent L. Sorrell, a cardiologist and associate professor of medicine at the Brody School of Medicine at East Carolina University. Other participants include lead author Dr. Firas Ghanem, a resident physician in the ECU Department of Internal Medicine, lead sonographer Christopher Mann and cardiologist Dr. Andrew Sumner, ECU assistant professor of medicine. While physicians have long recognized that nicotine increases heart rate and blood pressure, this research concludes that smoking even a single cigarette causes an abrupt change in the performance of the heart's left ventricle while the heart chamber is filling with blood. This change in cardiac function is likely the result of the 4,000 chemicals and 43 carcinogens present in tobacco smoke. Smoking puts people at risk for cardiovascular disease by interfering with the functioning of the endothelium--the layer of cells lining the inside of arteries. Each year in the United States, approximately 53,000 deaths are attributed to cardiovascular disease caused by smoking. Cigarette smoking also nearly doubles the risk of suffering a stroke. Sorrell said a recent ABC News special program on smoking and young women rang true to what he's heard from his patients. "Every one of these young women said, 'If you get lung cancer that requires them to remove one of your lungs, you won't be able to walk up a flight of stairs anymore. 
We can't do that now.' That's what I hear from my cardiology patients all the time, and that's what gave me the idea for this research," Sorrell said. "These same patients' echocardiograms and chest X-rays look entirely normal, yet they continue to complain of shortness of breath. Something has to be going on in their hearts, and shortness of breath is a common manifestation of abnormal left ventricle relaxation." Researchers at the Brody School of Medicine studied a group of 27 healthy young individuals with no evidence of organic heart disease immediately before and after smoking one cigarette or chewing nicotine gum for 15 minutes. Using echocardiography, which provides real-time images of the heart and surrounding blood vessels, researchers measured the participants' blood flow rising through the pulmonary veins and entering the left ventricle across the mitral valve. They performed numerous recordings and measurements of cardiac performance without knowing which group the patients were in -- cigarette smokers or nicotine gum chewers. Sorrell said the findings were a little surprising. "This study shows for the first time that smoking even one cigarette can dramatically and negatively affect the heart's performance. Poor heart performance can cause a significant loss of exercise endurance or tolerance," he said. "The most striking thing about this study is that this is happening right in front of us every day." In Europe, research looking at exercise tolerance has shown that the key is not how well your heart pumps but how well it is able to relax, Sorrell said. Diastolic dysfunction is when the heart's relaxing phase is weak. "Our research was the preliminary investigation to see if we could detect any change in the diastolic dysfunction between these two groups of young healthy adults," Sorrell said. "The next appropriate step is to study diastolic dysfunction in smokers who complain of shortness of breath. 
We need to show that this is the same thing that occurs in smokers, [and] also look
<urn:uuid:7de018ca-3d5a-4662-91f0-786e8d95821c>
CC-MAIN-2016-26
http://www.ecu.edu/news/newsstory.cfm?id=374
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397749.89/warc/CC-MAIN-20160624154957-00167-ip-10-164-35-72.ec2.internal.warc.gz
en
0.952716
800
3.15625
3
How to Practice for a Career Aptitude Test

Such tests are generally used for admission to professional institutions, where it is important to gauge whether a student is capable of the work. Questions therefore cover every relevant area, including academic subjects and general knowledge. Scoring well requires attention and concentration, and it also depends on how you have performed in your exams so far and how much you have learned.

So how do you prepare? Two basic principles underlie preparation:
- Frequent test practice
- Technique for attempting questions

As far as the first principle is concerned, practice makes perfect: do as many practice tests as possible by attempting sample tests and past tests. It is also a good idea to take the free online tests available at various sites; these help you improve your knowledge and build speed.

The technique or skill needed to attempt any entry-level test, in turn, follows from how much practice you have done. To work efficiently, keep these points in mind:
- Read and listen to the instructions carefully (how to mark the answers, which type of pen to use, how much time you have, and so on), because most of these tests are not marked manually and therefore demand accuracy.
- First answer the questions you know, then attempt those you are less sure of, and leave the rest. Now count how many questions you have answered: is the total enough to pass the test comfortably? The score you estimate should be well above the minimum, to keep you on the safe side.
- Do not attempt questions you have not practiced or that are outside your knowledge, because such tests often carry negative marking, which can make it harder to pass.
However, if there is no negative marking, you can attempt these questions, but only with time left over.
- For multiple-choice questions, educated guesses are worthwhile: rule out the most unlikely options, then choose the answer your intuition favors.
- In the mathematics section, practice simple calculations such as multiplication and division mentally rather than with a calculator, because in most exams calculators are not allowed, or only basic-function calculators are permitted. You will need to become as quick as possible at putting estimated answers on paper.
- At the end, check your work thoroughly before handing in the paper, and try to finish about 15 minutes early so you have time to review.

In addition to the practices and techniques above, the most important thing is to stay calm, because becoming confused or panicked during the test never leads to a good result. So relax and build up your self-confidence.
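The advice about negative marking and educated guessing can be made concrete with a little expected-value arithmetic. The marking scheme below (+1 for a correct answer, minus 1/3 for a wrong one) is an assumed, common example, not something specified above; always check your own test's instructions.

```python
def expected_guess_score(num_choices, eliminated=0, mark=1.0, penalty=1/3):
    """Expected marks from guessing one multiple-choice question.

    Assumes +mark for a correct answer and -penalty for a wrong one.
    The +1 / -1/3 scheme is only an assumed, common example; real
    tests vary.
    """
    remaining = num_choices - eliminated
    p_correct = 1 / remaining
    return p_correct * mark - (1 - p_correct) * penalty

# Blind guess on a 4-option question: the penalty cancels the gain on average.
print(round(expected_guess_score(4), 3))                # 0.0
# After ruling out one unlikely option, guessing gains marks on average.
print(round(expected_guess_score(4, eliminated=1), 3))  # 0.111
```

Under this scheme a blind guess is worth nothing on average, but eliminating even one "odd" option makes guessing profitable, which is exactly why educated guesses are worthwhile while blind ones are not.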
<urn:uuid:5dd3f4fa-e9ab-4e62-b0b7-7818909072e8>
CC-MAIN-2016-26
http://www.onlineprogramsfinder.com/Blog/practice-aptitude-test/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391766.5/warc/CC-MAIN-20160624154951-00045-ip-10-164-35-72.ec2.internal.warc.gz
en
0.962698
623
2.734375
3
The Meyer lemon is originally from China and considered to be a cross between a true lemon and a mandarin orange. The fruit is very tender and extremely juicy, and its peel is much thinner than a regular lemon's. The higher sugar content gives it a unique, rich flavor.

The Meyer lemon was introduced to the United States in 1908 by agricultural explorer Frank Meyer. By the mid-1940s it had become the most prolific garden tree in the Mediterranean climates of America. Unfortunately, by the 1950s a deadly tree virus had decimated almost all of the Meyer lemon trees in the United States, and most nurseries stopped propagating the tree. A gentleman by the name of Floyd Dillon submitted his rootstock to the California Department of Agriculture. After much testing, it was determined that Mr. Dillon's rootstock was not susceptible to the virus. His bud wood became the basis for all the new Sweet Meyer lemons found in the United States.

The Sweet Meyer lemon, when allowed to ripen to its true maturity of a nearly red-orange tone, provides a unique lemon/tangerine flavor. Chefs embrace the Meyer for its robust flavor that brightens any recipe. The Meyer lemon season varies by growing region: Meyer lemons are available year round in the Northern and Southern California region, and from November to March in all other citrus-growing regions. Author: Scott Meyers

Lemon balm's history dates back over 2,000 years. The scientific name of this herb, "Melissa officinalis," reveals much of that history. It is thought that bees and lemon balm have been inextricably linked since ancient times; the name Melissa is derived from the Greek term for "honey bee." Moreover, many herbalists agree that lemon balm has much of the same healing and tonic properties that royal jelly and honey have. Lemon balm has traditionally been honored as a rejuvenating herb. During the Middle Ages, lemon balm was a key ingredient in all medieval elixirs of youth.
Even as late as the 18th century, lemon balm continued to maintain its reputation as an "elixir of youth."

Sorbet is a light, fat-free dessert that is a great alternative to ice cream. Sorbet is easy to prepare, and you don't have to be a celebrity chef to make it. The Meyer lemon creates a smoother, richer lemon sorbet. Here is an easy Meyer lemon sorbet recipe for all to enjoy.

Ingredients:
- 1 cup water
- 1 cup sugar
- 1 cup fresh Meyer lemon juice (8-10 Meyer lemons)
- 1 tablespoon lemon zest

Bring the water and sugar to a boil in a small saucepan, remove from the heat, and cool. Combine the syrup with the lemon juice and zest and pour into the bowl of an ice cream machine. Freeze according to the manufacturer's instructions. After the sorbet is made, transfer it to an airtight container. Cover tightly and freeze until ready to serve.

If you do not have an ice cream maker, freeze the sorbet in a tall canister. Freeze for 1 1/2 hours. Remove and stir with a whisk. Return to the freezer and stir about once every hour for about 4 hours. The more times you stir, the more air will be included, resulting in a lighter sorbet.
<urn:uuid:2e9da76a-e5c6-47ea-ae3e-e6ae1ba771b6>
CC-MAIN-2016-26
http://meyerlemonrecipes.com/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783402479.21/warc/CC-MAIN-20160624155002-00163-ip-10-164-35-72.ec2.internal.warc.gz
en
0.942445
691
2.65625
3
India
Wikipedia: India | Official Government Website: india.gov.in
Updated: Mar. 17, 2014

Currently, India, together with China, is one of the two largest producers and consumers of tea. Most of the tea produced in India is consumed in India, although India does export substantial amounts of both mass-produced bulk tea and high-quality specialty or artisan tea.

Types of tea produced in India
Historically, India produced only black tea. In recent years, however, there has been a growth of green, white, and even oolong Indian teas, although the vast majority of tea and styles of tea produced in India are still black teas. The most famous tea-producing regions of India are Darjeeling and Assam, although Nilgiri and Sikkim also produce notable teas, and teas are grown in many other regions as well. Darjeeling has diversified into green, oolong, and white teas more than any other region of India.

India's geography and climate and its influence on tea production
India has a diverse climate, ranging from tropical to subtropical. The South Asian monsoon produces a strong seasonality of precipitation, with dry winters and wet summers. Precipitation also varies regionally and by altitude. The tea plant has high moisture requirements, and only the wetter parts of India are suitable for growing tea. These include the Western Ghats, which catch moisture coming off the Indian Ocean, and Northeast India, which has across-the-board higher precipitation even in lowland areas. A small region in far north India, following the foothills of the Himalayas, also has higher rainfall and produces a small amount of tea.

Tea production in north and northeast India
Most of the best-known tea-growing regions of India are located in the northeastern corner of India, near the foothills of the Himalayan mountains, and near the borders with Bangladesh and Nepal. These regions include Darjeeling and Assam, as well as lesser-known regions including Arunachal Pradesh, Bihar, Jalpaiguri, and Sikkim.
Although a band of similar climate extends west and north along the length of the Himalayas, almost the whole way to the border with Pakistan, there are only small, isolated tea gardens in the far northern areas, such as Himachal Pradesh.

Tea production in south India
South India has some important tea-growing regions as well, along the mountain range that runs north-south along the west coast of south India. The region in South India best known for tea is Nilgiri, and Kerala also produces tea. As one travels east along the Deccan Plateau, rainfall quickly falls off, leaving only a narrow band along the western edge of the country where tea is grown.

Tea-Producing Regions of India

Styles of Tea Produced in India
This is a selection, not an exhaustive listing, of the styles of tea most commonly produced in India.

Best Indian Teas
The notion of the "best" Indian teas is subjective, because different people have different tastes. We present the most often-rated and highest-rated teas produced in India, and allow you to draw your own conclusions.

Most Often-Rated Teas / Top-Rated Indian Teas:
- Brand: Upton Tea Imports; Style: Tulsi / Holy Basil Tea
- Brand: Happy Earth Tea; Style: Darjeeling First Flush
<urn:uuid:4abf8df4-76b2-45b1-bc80-ad719dc15f5b>
CC-MAIN-2016-26
http://ratetea.com/region/india/2/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393442.26/warc/CC-MAIN-20160624154953-00119-ip-10-164-35-72.ec2.internal.warc.gz
en
0.935022
725
3.171875
3
SADAT, MUHAMMAD ANWAR AL- (1918–1981), president of the Arab Republic of Egypt, Oct. 1970–Oct. 1981. Sadat was born to poor parents in the Egyptian village of Mit Abu-Kom. He joined the army and during World War II was active in an anti-British underground group (and was arrested in consequence). After the war, still in the army, he joined the "Free Officers" group, led by *Nasser, which carried out the July 1952 Revolution. Overshadowed by Nasser, Sadat managed the new regime's daily, al-Gumhuriyyah, served as a cabinet minister for one year and then as speaker of the parliament. Appointed as Nasser's vice president after the 1967 defeat, Sadat was elected president after Nasser's death on September 28, 1970.

Gradually, Sadat asserted himself increasingly as president and introduced a growing measure of economic and political liberalization, which bolstered his increasing popularity. Eager to overcome Egypt's military inferiority versus Israel, he signed a treaty of friendship with the Soviets which, however, failed to deliver the needed hardware for war. Consequently, he expelled them from Egypt in 1972 and started to prepare for war on his own, all the while attempting a political rapprochement with certain European states and the U.S. in order to secure their sympathy for his military moves against Israel. The October 1973 attack in Sinai, while not leading to an Egyptian victory, gave Egypt the pretext to coopt the U.S. as an honest broker instead of a partisan of Israel. Thus, the Disengagement Treaties of May 1974 and September 1975 with Israel increased Sadat's prestige and started the process towards a settlement with Israel. Sadat's main argument was that such a peace should be achieved in parallel with Israel's renunciation of the so-called "conquered territories." This was the ideological basis of Sadat's visit to Jerusalem in November 1977 and of the Camp David negotiations, sponsored by Jimmy Carter, in September 1978.
Israel's agreement to recognize the autonomy of the Palestinians paved the way for the signing of the peace treaty between Egypt and Israel on the lawn of the White House in March 1979. This was the peak of Sadat's achievement, for which he was awarded, together with *Begin, the Nobel Peace Prize. In subsequent years, Sadat's international prestige grew, but in Egypt, owing to the increasing poverty and unemployment, social criticism of Sadat increased, exploiting the very openness he had encouraged. Some of this nurtured Islamic fundamentalism. Sadat, an orthodox Muslim himself, first attempted to persuade the Islamic leaders to tone down their zeal, then started to arrest them in the thousands during the last months of his life. His assassination, during a festive military review on the eighth anniversary of the Yom Kippur War, ended the plans he had for Egypt. In his later years, Sadat openly expressed his disappointment with Israel's policies, especially the June 1981 Israeli attack on the Iraqi nuclear reactor and what he considered as Israel's dragging its feet over the granting of Palestinian autonomy. He argued that the return of Sinai was a "natural" act, since this had been Egyptian territory, and he perceived Israel as "ungrateful." U.S. economic assistance, too, could not solve Egypt's numerous social problems, nor appease popular criticism of his regime. Sadat's assassination was welcomed by many, in contrast with the bitter national mourning following Nasser's death. Nevertheless, Sadat's legacy was both revolutionary and original, the likes of which have not yet been seen in Arab countries. His colorful and dynamic personality, aiming at radical solutions, was expressed in the heat of war and the challenges of peace. His talent for changing direction and undertaking new initiatives was unique. He was criticized by several Arab states for signing a peace with Israel which enabled it – they claimed – to oppress the Palestinians and attack Lebanon. 
However, Arab and Muslim states which had ostracized him found themselves following his example some 20 years later. Although relations between Egypt and Israel since 1977 have had their ups and downs, the treaty has served as a model for others. R. Israeli, The Public Diary of President Sadat, 1–2–3 (1978–79); idem, Man of Defiance: The Political Biography of President Sadat (1985); idem, "Sadat: The Calculus of War and Peace," in: Craig and Loewenheim (eds.), The Diplomats 1939–79 (1994), 435–58. [Raphael Israeli (2nd ed.)] Source: Encyclopaedia Judaica. © 2008 The Gale Group. All Rights Reserved.
<urn:uuid:1a9e7238-cbd3-4825-a485-6fba9fa0be34>
CC-MAIN-2016-26
http://www.jewishvirtuallibrary.org/jsource/judaica/ejud_0002_0017_0_17260.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783403823.74/warc/CC-MAIN-20160624155003-00116-ip-10-164-35-72.ec2.internal.warc.gz
en
0.979743
987
2.8125
3
Until the beginning of 1943, 15,000 Jews and people of Jewish ancestry (Geltungsjuden) who were engaged in forced labour deemed vital to the war effort in either Berlin factories or at institutions of the Jewish community were exempt from deportation. However, on February 20, 1943, the department of Jewish affairs at the RSHA headed by Eichmann issued new regulations ordering the deportation of all forced labourers to Theresienstadt. According to the new policy, even Jews employed at factories crucial to the war effort were now eligible for deportation. Jewish spouses in mixed marriages and persons of Jewish ancestry who were not married to a Jewish spouse were still protected from deportation, even though this guideline was not rigidly followed. The department for Jewish Affairs at the Berlin Gestapo, headed by Walter Stock and his deputy Max Stark, was in charge of organising this transport together with the Department of Jewish Affairs at the RSHA. On February 27, 1943, SS officers of the elite Leibstandarte unit, armed with whips and bayonets, raided the factories in the Berlin area and brutally arrested thousands of Jewish workers. The labourers were taken by truck from their work places with nothing but the clothes they were wearing and were put into several assembly camps: Grosse Hamburger Strasse; the Clou Concert Hall; Mauerstrasse; the Herman-Göring barracks in Reinickendorf, and the Jewish community building on Rosenstrasse. There, these people had to lie either on the bare floor or on poor straw mattresses until their departure, without provisions or water, regardless of whether they were infants or elderly people. Men, women and children were often separated, so that families were also transported separately. The Gestapo also arrested Jewish community workers, whose positions were to be filled by the Jewish spouses of mixed marriages who at that time were still exempt from deportation. 
While the SS officers were raiding the factories in the operation known as the “Fabrikaktion” (Factory Action), the Berlin police and the Gestapo conducted manhunts in the streets, homes, and shops of Berlin, searching for Jews wearing the mandatory yellow badge. At the end of this large-scale operation, Berlin was cleared of Jews save for those who had gone into hiding, Jews who were married to non-Jews, and those with a non-Jewish parent. The detainees did not remain in the assembly camps for long. The Gestapo emptied these camps promptly, assembling one transport after another. The Jews were then transported to the Moabit freight station and loaded into cattle cars. Karl Hefter was on duty in the camp located at the Wachregiment barracks as an employee of the Jewish community. He witnessed the departure of these transports during which the SS indiscriminately pushed and threw the people into the wagons. The commanding Sturmführers wielded whips to speed people up. Most of the detainees were deported to Auschwitz. A few were sent to Theresienstadt. This transport was the 36th to leave Berlin for the ghettos and killing sites in Eastern Europe and was thus designated “Osttransport 36”. It departed from the city’s Putlitzstrasse Station in the Moabit district on March 12, 1943 and arrived in Auschwitz the following day. There were 941 Jews on this transport. On the day of their deportation they were ordered into a train consisting of closed cattle cars. A guard unit, usually composed of two SS men, was posted in the control compartment. The train usually went to Auschwitz via Breslau (Wroclaw) and Kattowitz (Katowice), but the constant strain put on the German railway system might have caused individual transports to take other routes. In his post-war memoirs, survivor Hans Peter Messerschmidt recounts that the deportees were kept for 2-3 days in the assembly camp on Grosse Hamburger Strasse. Conditions were harsh as the place was overcrowded. 
The deportees sat on mattresses that covered the floor. In the early morning of March 12, 1943 the deportees were loaded onto waiting trucks and driven to the freight train station in the Moabit district. Gestapo guards wielding whips ensured the swift boarding of the deportees into two cattle cars, with around 50 deportees per car. The train stood in the station until the evening. In the late afternoon the next day it arrived at Auschwitz. The SS guards hurried the deportees out of the train and told them to leave all their luggage behind, explaining that it would be brought to them later. Men and women were then ordered to stand separately in rows of five. The whole area was surrounded by SS guards. The deportees could see older inmates piling up the luggage and the bags belonging to the new arrivals. A selection then took place. Hans Peter Messerschmidt recalls that fit and healthy people between the ages of 16 and 50 were sent to the left side; all others, including children, sick people, the elderly as well as those who didn’t want to be separated from their children, were sent to the right. Those on the right side were marched to the gas chambers and killed. Those on the left were loaded onto trucks and driven for about 10 minutes to the Monowitz (Buna) labour camp. David Salz, who also survived the war, recalls in his post-war memoirs that in Berlin the deportees were ordered into a train consisting of closed cattle cars. These were locked from outside. Inside the cattle cars there was only space to stand, not to sit or lie down. Air was scarce and it was very cold. He said that there were no sanitary facilities, that people were crying and that the transport was a horrifying experience, the first he had with the “barbaric” Germans. Upon arrival at Auschwitz, Salz remembers first the barked orders and shouts, beatings with sticks by the SS-guards, being pushed around, and great chaos. 
Next to the cattle cars the deportees had to hand over any valuables such as necklaces, rings, and the like. Then the selection took place. He saw that most women, the children and elderly were all taken to the left side. The others were asked what their age and profession was. Some of the healthier ones were then taken to the right side. As he was convinced that his mother was a healthy and fit woman and would thus surely be on the right side, he was determined to get to that side, too. When his turn came he instinctively stood on his toes to look taller, and pretended to be a 17-year old electrician. David Salz was sent to the right side. His group was then clubbed onto waiting trucks that drove them to the Auschwitz III camp. Historian Danuta Czech notes in the Auschwitz Chronicles that a transport organized by the RSHA arrived in Auschwitz on March 13. It consisted of 344 Jewish men and 620 Jewish women and children from Berlin. Upon the train’s arrival, the SS carried out a selection process. 218 men, given Nos. 107772-107989, and 147 women, given Nos. 38160-38306, were sent to forced labour under harsh conditions which they rarely survived. The remaining 559 deportees, 126 of them men and 473 women and children, were sent directly to the gas chambers at Birkenau (Auschwitz II) and murdered. According to historian Rita Meyhoefer 13 of the deportees are known to have survived.
<urn:uuid:31ba79a7-d896-4d18-bbd0-9c7f88832553>
CC-MAIN-2016-26
http://db.yadvashem.org/deportation/transportDetails.html?language=en&itemId=5092745
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783408840.13/warc/CC-MAIN-20160624155008-00064-ip-10-164-35-72.ec2.internal.warc.gz
en
0.986797
1,556
3.375
3
Today is Tuesday, Jan. 29, the 29th day of 2013. There are 336 days left in the year. Today's Highlight in History: On Jan. 29, 1963, poet Robert Frost died in Boston at age 88. On this date: In 1820, Britain's King George III died at Windsor Castle. In 1843, the 25th president of the United States, William McKinley, was born in Niles, Ohio. In 1845, Edgar Allan Poe's poem "The Raven" was first published in the New York Evening Mirror. In 1863, the Bear River Massacre took place as the U.S. Army attacked Shoshone in present-day Idaho. In 1919, the ratification of the 18th Amendment to the Constitution, which launched Prohibition, was certified by Acting Secretary of State Frank L. Polk. In 1929, The Seeing Eye, a New Jersey-based school which trains guide dogs to assist the blind, was incorporated by Dorothy Harrison Eustis and Morris Frank. In 1936, the first inductees of baseball's Hall of Fame, including Ty Cobb and Babe Ruth, were named in Cooperstown, N.Y. In 1963, the first charter members of the Pro Football Hall of Fame were named in Canton, Ohio (they were enshrined when the Hall opened in September 1963). In 1998, a bomb rocked an abortion clinic in Birmingham, Ala., killing security guard Robert Sanderson and critically injuring nurse Emily Lyons. (The bomber, Eric Rudolph, was captured in May 2003 and is serving a life sentence.) Ten years ago: The Congressional Budget Office predicted the federal deficit for fiscal 2003 would soar to $199 billion even without President George W. Bush's new tax cut plan or a war against Iraq. Five years ago: Democrat Hillary Rodham Clinton claimed victory in a campaign-free Florida presidential primary in which all the candidates had signed pledges not to compete. (The national Democratic Party had stripped the state of its delegates as punishment for moving its primary ahead of Feb. 5.) One year ago: Eleven people were killed when smoke and fog caused a series of fiery crashes on I-75 in Florida. 
Today's Birthdays: Actor Noel Harrison is 79. Actress Katharine Ross is 73. Actor Tom Selleck is 68. Actor Marc Singer is 65. Actress Ann Jillian is 63. Talk show host Oprah Winfrey is 59. Country singer Irlene Mandrell is 57. Actress Diane Delano is 56. Actress Judy Norton Taylor ("The Waltons") is 55. Olympic gold-medal diver Greg Louganis is 53. Actor Nicholas Turturro is 51. Actor-director Edward Burns is 45. Actress Heather Graham is 43. Actor Sharif Atkins is 38. Actress Sara Gilbert is 38. Actor Justin Hartley is 36. Actor Sam Jaeger is 36. Actor Andrew Keegan is 34. Actor Jason James Richter is 33. Blues musician Jonny Lang is 32. Pop-rock singer Adam Lambert ("American Idol") is 31. Thought for Today: "And were an epitaph to be my story I'd have a short one ready for my own. I would have written of me on my stone: 'I had a lover's quarrel with the world.'" -- Robert Frost (1874-1963).
<urn:uuid:82e793e6-7ad4-4024-946f-a7d3456680e1>
CC-MAIN-2016-26
http://daily-jeff.com/community/2013/01/29/today-in-history
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399117.38/warc/CC-MAIN-20160624154959-00045-ip-10-164-35-72.ec2.internal.warc.gz
en
0.961783
680
2.65625
3
OBSOLETE UNITS PACKAGE SYMBOL NauticalMile As of Version 9.0, unit functionality is built into the Wolfram Language. NauticalMile is a unit of length. - To use NauticalMile, you first need to load the Units Package using Needs["Units`"]. - NauticalMile is equivalent to 1852 Meter (SI units). - NauticalMile is equivalent to approximately 1.15078 Mile. - Convert[n NauticalMile,newunits] converts n NauticalMile to a form involving units newunits. - NauticalMile is typically abbreviated as nmi or nm.
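The conversion factors above can be checked with a short sketch in plain Python (this is ordinary arithmetic, not the Wolfram Units Package itself; the exact statute-mile factor 1609.344 m is the standard definition):

```python
# Conversion factors: 1 nautical mile = 1852 meters exactly,
# 1 statute mile = 1609.344 meters exactly.
METERS_PER_NAUTICAL_MILE = 1852.0
METERS_PER_STATUTE_MILE = 1609.344

def nautical_miles_to_meters(nmi):
    """Convert nautical miles to meters."""
    return nmi * METERS_PER_NAUTICAL_MILE

def nautical_miles_to_statute_miles(nmi):
    """Convert nautical miles to statute miles."""
    return nmi * METERS_PER_NAUTICAL_MILE / METERS_PER_STATUTE_MILE

print(nautical_miles_to_meters(1))                   # 1852.0
print(round(nautical_miles_to_statute_miles(1), 5))  # 1.15078
```

Running it reproduces the two equivalences stated in the documentation entry.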
<urn:uuid:5668c0d9-7098-4ef8-a576-fc7843d11842>
CC-MAIN-2016-26
http://reference.wolfram.com/language/Units/ref/NauticalMile.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395620.9/warc/CC-MAIN-20160624154955-00188-ip-10-164-35-72.ec2.internal.warc.gz
en
0.775722
121
2.6875
3
Bounded rationality is the idea that in decision making, rationality of individuals is limited by the information they have, the cognitive limitations of their minds, and the finite amount of time they have to make a decision. It was proposed by Herbert Simon as an alternative basis for the mathematical modeling of decision making, as used in economics and related disciplines; it complements rationality as optimization, which views decision making as a fully rational process of finding an optimal choice given the information available. Another way to look at bounded rationality is that, because decision-makers lack the ability and resources to arrive at the optimal solution, they instead apply their rationality only after having greatly simplified the choices available. Thus the decision-maker is a satisficer, one seeking a satisfactory solution rather than the optimal one. Simon used the analogy of a pair of scissors, where one blade is the "cognitive limitations" of actual humans and the other the "structures of the environment"; minds with limited cognitive resources can thus be successful by exploiting pre-existing structure and regularity in the environment. Some models of human behavior in the social sciences assume that humans can be reasonably approximated or described as "rational" entities (see for example rational choice theory). Many economics models assume that people are on average rational, and can in large enough quantities be approximated to act according to their preferences. The concept of bounded rationality revises this assumption to account for the fact that perfectly rational decisions are often not feasible in practice due to the finite computational resources available for making them. Models of bounded rationality The term is thought to have been coined by Herbert Simon. 
In Models of Man, Simon points out that most people are only partly rational, and are emotional/irrational in the remaining part of their actions. In another work, he states "boundedly rational agents experience limits in formulating and solving complex problems and in processing (receiving, storing, retrieving, transmitting) information" (Williamson, p. 553, citing Simon). Simon describes a number of dimensions along which "classical" models of rationality can be made somewhat more realistic, while sticking within the vein of fairly rigorous formalization. These include: - limiting what sorts of utility functions there might be. - recognizing the costs of gathering and processing information. - the possibility of having a "vector" or "multi-valued" utility function. Simon suggests that economic agents employ the use of heuristics to make decisions rather than a strict rigid rule of optimization. They do this because of the complexity of the situation, and their inability to process and compute the expected utility of every alternative action. Deliberation costs might be high and there are often other concurrent economic activities also requiring decisions. Daniel Kahneman proposes bounded rationality as a model to overcome some of the limitations of the rational-agent models in economic literature. As decision makers have to make decisions about how and when to decide, Ariel Rubinstein proposed to model bounded rationality by explicitly specifying decision-making procedures. This puts the study of decision procedures on the research agenda. Gerd Gigerenzer argues that most decision theorists who have discussed bounded rationality have not really followed Simon's ideas about it. Rather, they have either considered how people's decisions might be made sub-optimal by the limitations of human rationality, or have constructed elaborate optimising models of how people might cope with their inability to optimize. 
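Simon's satisficing idea, choosing the first option that clears an aspiration level rather than evaluating every option, can be sketched as a simple decision procedure in Python (the aspiration level and the job-offer data here are hypothetical, purely for illustration):

```python
def satisfice(options, utility, aspiration):
    """Return the first option whose utility meets the aspiration level.

    Models a boundedly rational chooser: the search stops at the first
    'good enough' option instead of evaluating all of them.
    """
    for option in options:
        if utility(option) >= aspiration:
            return option
    return None  # no option was satisfactory

def optimize(options, utility):
    """The fully rational benchmark: evaluate everything, pick the best."""
    return max(options, key=utility)

# Hypothetical job offers scored by salary.
offers = [("A", 55), ("B", 72), ("C", 90)]
salary = lambda offer: offer[1]

print(satisfice(offers, salary, aspiration=70))  # ('B', 72): first 'good enough'
print(optimize(offers, salary))                  # ('C', 90): the optimum
```

The satisficer settles for offer B and never inspects C; the optimizer must score every option before choosing. The gap between the two procedures is exactly the deliberation cost that Simon argues real agents avoid.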
Gigerenzer instead proposes to examine simple alternatives to a full rationality analysis as a mechanism for decision making, and he and his colleagues have shown that such simple heuristics frequently lead to better decisions than the theoretically optimal procedure. From a computational point of view, decision procedures can be encoded in algorithms and heuristics. Edward Tsang argues that the effective rationality of an agent is determined by its computational intelligence. Everything else being equal, an agent that has better algorithms and heuristics could make "more rational" (more optimal) decisions than one that has poorer heuristics and algorithms. - Elster, Jon (1983). Sour Grapes: Studies in the Subversion of Rationality, Cambridge, UK: Cambridge University Press. - Gigerenzer, Gerd; Selten, Reinhard (2002). Bounded Rationality, Cambridge: MIT Press. - Hayek, F.A (1948) Individualism and Economic order - Kahneman, Daniel (2003). Maps of bounded rationality: psychology for behavioral economics. The American Economic Review 93 (5): 1449–75. - March, James G. (1994). A Primer on Decision Making: How Decisions Happen, New York: The Free Press. - Rubinstein, Ariel (1998). Modeling bounded rationality, MIT Press. - Simon, Herbert (1957). "A Behavioral Model of Rational Choice", in Models of Man, Social and Rational: Mathematical Essays on Rational Human Behavior in a Social Setting. New York: Wiley. - Simon, Herbert (1990). A mechanism for social selection and successful altruism. Science 250 (4988): 1665–8. - Simon, Herbert (1991). Bounded Rationality and Organizational Learning. Organization Science 2 (1): 125–134. - Tisdell, Clem (1996). Bounded Rationality and Economic Evolution: A Contribution to Decision Making, Economics, and Management, Cheltenham, UK: Brookfield. - Tsang, E.P.K. (2008). Computational intelligence determines effective rationality. International Journal on Automation and Control 5 (1): 63–6. - Williamson, Oliver E. (1981). 
The economics of organization: the transaction cost approach. American Journal of Sociology 87 (3): 548–577. This page uses Creative Commons Licensed content from Wikipedia.
<urn:uuid:4e5420a7-4b61-45c7-adcd-aa8f420d16ef>
CC-MAIN-2016-26
http://psychology.wikia.com/wiki/Bounded_rationality
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399428.8/warc/CC-MAIN-20160624154959-00010-ip-10-164-35-72.ec2.internal.warc.gz
en
0.904005
1,216
3.640625
4
Radar technology developed at Kansas University is helping NASA scientists track something they’ve never monitored before: the birth of an iceberg. Last month, a crew flying over Antarctica as part of NASA’s Operation IceBridge discovered an 18-mile-long crack in the ice shelf of the Pine Island Glacier. The crack was 240 feet wide and almost 200 feet deep. In coming weeks or months, scientists expect that crack to grow until an iceberg as large as 310 square miles falls into the ocean. This is the first time that scientists have been able to do a detailed survey on an actively forming iceberg. Part of the technology being used to collect data has been developed at KU’s Center for Remote Sensing of Ice Sheets. Aboard the DC-8 aircraft that is flying over the crack is KU’s radar equipment that can measure the thickness of the ice sheet from the top to the bedrock. Helping operate the equipment are a trio of KU researchers. “They have known these things happen to ice shelves and they’ve seen it before. But being there and seeing it starting to happen, it’s cool,” said Carl Leuschen, the deputy director of CReSIS. The formation of the iceberg is part of the ice sheet’s natural cycle. About once every decade, icebergs of this size and shape form. The last major iceberg from the glacier occurred in 2001. “We are pretty excited to have been there and to have really observed this firsthand how it happens,” IceBridge Project Scientist Michael Studinger said during an international conference call Thursday afternoon. Right now, scientists aren’t linking the creation of this iceberg to climate change. But if the frequency of icebergs increases, it could indicate a change in the system and a response to warming water. Of particular interest is the tip of the rift where the crack pushes forward at about two meters per day. How the crack in the ice sheet grows is a process that NASA scientists would like to better understand. 
Ultimately, it could help scientists create more-accurate models of future shrinking or expanding ice sheets around the world. Based out of Punta Arenas, Chile, Operation IceBridge is in its third year. During October and November, crews make 11- to 12-hour flights over Antarctica to gather data. Planes fly, usually as low as 1,500 feet, over the same path year after year to gauge how much the ice sheets have changed. During the flight mission, a KU researcher is on board operating the radar equipment. The three KU researchers are expected to return to KU before Thanksgiving. “It is a lot of work to go down there and manage the system and the amount of data you get and making sure it is processed correctly. But they are having fun down there,” Leuschen said. Along with KU’s radar equipment, the planes gather information with a digital mapping system, laser altimeters and a gravimeter. NASA’s Operation IceBridge is helping gather data on ice sheets in the gap between the time the Ice, Cloud and land Elevation Satellite (ICESat) stopped operating in 2009 and the launch of the replacement satellite in 2016. “It’s based on the desire to avoid an ‘Oh my God’ moment in 2016 when (the satellite) launches and begins to gather data,” Studinger said. The Pine Island Glacier is one of the largest and fastest-moving ice sheets in the world. It’s also a rapidly thinning glacier. According to Studinger, the glacier is a particularly sensitive area because much of the land where the ice sheet sits is well below sea level. The ice is melting at a rate of several meters per year, and that rate could be increasing. “Because of sensitivity to climate change, the Pine Island Glacier is referred to as the big underbelly of Antarctica,” Studinger said.
<urn:uuid:b98de091-ba85-4025-8a95-6ef76d100f80>
CC-MAIN-2016-26
http://www2.ljworld.com/news/2011/nov/04/scientists-hope-see-birth-iceberg/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395621.98/warc/CC-MAIN-20160624154955-00071-ip-10-164-35-72.ec2.internal.warc.gz
en
0.949887
819
3.8125
4
A new book is once again raising concerns that using a cell phone could kill you--and in more ways than causing you to drive off the road into a tree. Specifically, there are renewed concerns that using a cell phone can give you brain cancer. This concern has caused Apple, Research In Motion, and other cell-phone makers to insert fine print into their user manuals telling you to keep your cell phone at least 5/8th of an inch away from your head, the NYT's Randall Stross reports. For BlackBerries, the "safe" margin is a whole inch. (Do you know ANYONE who holds their cell phone an inch away from their ear?) Not surprisingly, the cell-phone industry association is dismissing these latest concerns and proudly advertising the joys of cell phones with pictures of beautiful beaming people with phones glued to their ears. But it sort of makes sense, doesn't it? A transmitting radio glued to your head? Why WOULDN'T that increase your risk of brain cancer, especially for heavy users? And, given that cell phones haven't been around all that long, it also makes sense that the cell-phone industry association wouldn't really have a clue about whether or not cell phones are safe, and neither would anyone else. So talk at your own risk.
<urn:uuid:d070a911-af1f-4bb6-9a43-c89fb38eb79b>
CC-MAIN-2016-26
http://www.businessinsider.com/cell-phone-cancer-2010-11
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783402699.36/warc/CC-MAIN-20160624155002-00089-ip-10-164-35-72.ec2.internal.warc.gz
en
0.968792
272
2.734375
3
Ötzi the Iceman, Europe's oldest mummy, likely suffered a head injury before he died roughly 5,300 years ago, according to a new protein analysis of his brain tissue. Ever since a pair of hikers stumbled upon his astonishingly well-preserved frozen body in the Alps in 1991, Ötzi has become one of the most-studied ancient human specimens. His face, last meal, clothing and genome have been reconstructed — all contributing to a picture of Ötzi as a 45-year-old, hide-wearing, tattooed agriculturalist who was a native of Central Europe and suffered from heart disease, joint pain, tooth decay and probably Lyme disease before he died. None of those conditions, however, directly led to his demise. A wound reveals Ötzi was hit in the shoulder with a deadly artery-piercing arrow, and an undigested meal in the Iceman's stomach suggests he was ambushed, researchers say. A few years ago, a CAT scan showed dark spots at the back of the mummy's cerebrum, indicating Ötzi also suffered a blow to the head that knocked his brain against the back of his skull during the fatal attack. In the new study, scientists who looked at pinhead-sized samples of brain tissue from the corpse found traces of clotted blood cells, suggesting Ötzi indeed suffered bruising in his brain shortly before his death. But there's still a piece of the Neolithic murder mystery that remains unsolved: It's unclear whether Ötzi's brain injury was caused by being bashed over the head or by falling after being struck with the arrow, the researchers say. The study was focused on proteins found in two brain samples from Ötzi, recovered with the help of a computer-controlled endoscope. Of the 502 different proteins identified, 10 were related to blood and coagulation, the researchers said. They also found evidence of an accumulation of proteins related to stress response and wound healing. 
A separate 2012 study detailed in the Journal of the Royal Society Interface looked at the mummy's red blood cells (the oldest ever identified) from a tissue sample taken from Ötzi's wound. That research showed traces of a clotting protein called fibrin, which appears in human blood immediately after a person sustains a wound but disappears quickly. The fact that it was still in Ötzi's blood when he died suggests he didn't survive long after the injury. Proteins are less susceptible to environmental contamination than DNA, and, in the case of mummies, they can reveal what kinds of cells the body was producing at the time of death. A protein analysis of a 15-year-old Incan girl, who was sacrificed 500 years ago, recently revealed that she had a bacterial lung infection at the time of her death. "Proteins are the decisive players in tissues and cells, and they conduct most of the processes which take place in cells," Andreas Tholey, a scientist at Germany's Kiel University and a researcher on the new Ötzi study, said in a statement. "Identification of the proteins is therefore key to understanding the functional potential of a particular tissue," Tholey added. "DNA is always constant, regardless of from where it originates in the body, whereas proteins provide precise information about what is happening in specific regions within the body." In addition to the proteins related to clotting, Tholey and colleagues also identified dozens of proteins known to be abundant in brain tissue in the samples from Ötzi. A microscopic analysis even revealed well-preserved neural cell structures, the researchers said. "Investigating mummified tissue can be very frustrating," study author and microbiologist Frank Maixner, of the European Academy of Bolzano/Bozen (EURAC), said in a statement. "The samples are often damaged or contaminated and do not necessarily yield results, even after several attempts and using a variety of investigative methods. 
When you think that we have succeeded in identifying actual tissue changes in a human who lived over 5,000 years ago, you can begin to understand how pleased we are as scientists that we persisted with our research after many unsuccessful attempts." Their research was detailed in the journal Cellular and Molecular Life Sciences.
<urn:uuid:dfdfff1e-4e10-4f1b-9e8a-5b7cce8f773a>
CC-MAIN-2016-26
http://www.livescience.com/37311-otzi-iceman-death-clues.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783403823.74/warc/CC-MAIN-20160624155003-00175-ip-10-164-35-72.ec2.internal.warc.gz
en
0.977058
894
2.9375
3
As with most things in life, there are debatable pros and cons of modern embalming. Common benefits of embalming include allowing time to arrange for the funeral, providing time to arrange for transport of the body and restoring appearance. Jeff Seiple, embalming instructor at Gupton-Jones College of Funeral Service, explains further when he says, "There are three main purposes for embalming -- disinfection, preservation and restoration. Disinfecting the body removes the threat of exposing the general public to bacteria, had the individual passed away from a contagious disease. "Furthermore, especially in instances of vehicular accidents and chronic disease, embalming is important to the family members. It restores the individual to an acceptable condition and helps provide the family with what is called, a 'positive memory picture.'" Seiple is describing the family members' final viewing of the individual. A positive experience facilitates the grieving process [source: Seiple]. Those not in favor of embalming generally come from two different groups -- those that refrain from embalming for religious reasons and those with environmental concerns. Orthodox Jews and Muslims don't practice embalming, and Hindus and Buddhists rely on cremation. Environmentally, the concern of embalming is mainly associated with the use of formaldehyde, which the United States Environmental Protection Agency lists as a probable carcinogen. This is a potential concern for embalmers and requires special training and protective equipment. The main concern that proponents of other funeral preparation options have is placing formaldehyde in the ground. In fact, each year, in the United States, enough embalming fluid is put in the earth to fill eight Olympic-size swimming pools [source: Sehee]. However, cemeteries are dedicated parcels of land that are generally owned by municipalities or privately held. Moreover, they adhere to strict city, county and/or state regulations [source: Seiple]. 
Despite the current debate on embalming, there is consensus on one key point: When it comes to decisions related to the final resting of family members, families today -- just as in the past -- should know all their options and have the time needed to decide what works best for the deceased and for them and their grieving processes. For more on embalming and other related subjects, take a look at the links on the next page.
<urn:uuid:641073cf-0d56-470d-9054-45bc1b85c3db>
CC-MAIN-2016-26
http://science.howstuffworks.com/science-vs-myth/afterlife/embalming5.htm
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398869.97/warc/CC-MAIN-20160624154958-00066-ip-10-164-35-72.ec2.internal.warc.gz
en
0.942291
487
2.5625
3
Teenagers sometimes change their behaviour or appearance to be part of a social group or ‘subculture’. Try not to worry – it’s normal for teenagers to identify with different youth subcultures as they develop their own identities.
Youth subcultures: what you need to know
Belonging to a social group or youth subculture is often about exploring who you are and what you stand for. During adolescence, teenagers need to form an independent adult identity. Experimenting with different social groups is one way of doing this. It’s how your child can test out being someone new – someone separate from your family.
Belonging to youth subcultures or social groups can also be a way for teenagers to decide what they identify with in the adult world. It gives your child a way of exploring his own values and deciding whether he agrees with your values.
Social groups can offer a set of guidelines about how to behave, dress and think. Your child might like this if she feels confused by having lots of options and choices. Dressing, behaving and thinking like the rest of a subculture gives your child a sense of belonging and identity too.
Belonging to a subculture can boost your child’s social skills and teach him the rewards of commitment. And it can also just be fun.
Not all young people choose to belong to subcultures. For those who do, membership might be long term, short term, or on and off. All of this can be challenging for parents, but it’s a normal part of growing up. Try thinking back to your own adolescence. You might have belonged to a subculture yourself, such as punk, arty type or geek. Some 21st-century subcultures include gothic, cyberculture, emo, gamer, hip-hop and hipster.
Staying positive about subcultures
All young people need to feel validated and valued. You might not understand why your child likes a particular subculture, but it’s important not to put her down for it.
In fact, criticising your child’s subculture might actually make her feel more strongly connected to it. If you’re finding this phase difficult, here are some tips for staying positive:
- Treat conversations about your child’s subculture as a chance to learn about something new and also about your child’s developing identity. Show an interest in what your child is doing.
- Keep your conversations with your child respectful. When people are critical, rude or cross, discussions might be less effective. Also, your child just might not see things the same way as you do.
- Keep the lines of communication open – this is a vital part of having a healthy relationship with your child. One way to do this is to take opportunities to actively listen to your child.
When to be concerned about youth subcultures
You might worry that your child’s social group is having a negative influence on him – for example, if you notice that he seems more moody or is getting into trouble at school or other places. It’s normal for teenagers to sometimes have low moods or trouble sleeping, but if problems continue for a few weeks, talk with your child.
Warning signs of more serious problems such as depression or anxiety might include:
- low moods, tearfulness or feelings of hopelessness
- aggression or antisocial behaviour
- sudden changes in behaviour, often for no obvious reason
- trouble eating or sleeping
- changes in academic performance.
If you notice these signs, the next step is talking to your GP. The GP can put you in contact with your local child and adolescent mental health team or another appropriate professional.
Understanding more about youth subcultures
It’s easy and normal to worry that your child is spending time with people who might put her at risk, or encourage her to engage in risk-taking behaviour. Negative stories in the media might add to your concerns.
You might also worry if you see your child developing enthusiastic connections to a group or philosophy that you don’t know anything about. Some subcultures might seem strange or even threatening to you. The more you talk to your child about his subculture, the more you’ll know about whether you really need to worry.
Video: Scenes, trends and fashion
In this short video, parents and teenagers discuss separately how trends and fashions can influence teenagers to look and act in certain ways. Teenagers share the trends that are most important to them, including fashion and technology trends for iPods and mobile phones. Many of the teenagers say that they don’t always care too much about having the latest stuff. Parents share mixed feelings about giving their children this stuff.
<urn:uuid:4a2006f5-ef50-4448-85d6-a16b87548d1c>
CC-MAIN-2016-26
http://raisingchildren.net.au/articles/subcultures.html/context/1161
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783394414.43/warc/CC-MAIN-20160624154954-00075-ip-10-164-35-72.ec2.internal.warc.gz
en
0.941412
993
3.390625
3
Antibiotics use in babies may increase childhood asthma risks (26.01.2011)
When babies are given antibiotics, their risk of developing asthma by age 6 may increase by as much as 50 percent. The relationship between antibiotic use in babies less than six months old and the risk of developing asthma has been clearly documented in a study conducted by Norwegian University of Science and Technology (NTNU) researcher Kari Risnes. The research was conducted while Risnes was a visiting researcher at Yale University, and the recent online publication of the article in the American Journal of Epidemiology has received considerable attention in the United States.
“Asthma is a very common disease. At the same time, about one-third of infants in our study were treated with antibiotics by the time they were six months old. This proportion is about 30 per cent in other Western countries,” says Risnes.
The Yale study followed 1,400 children and mothers from the beginning of pregnancy until the children were six years old. “We found that the risk that children would have asthma as six-year-olds was 50 per cent higher when they had been given antibiotics as a baby. That is a significant increase,” she says.
While previous research has suggested an association between asthma and antibiotics, those studies may have been biased because antibiotics are used to treat respiratory tract infections that could themselves be early symptoms of asthma. The study sought to eliminate this bias by excluding children who were treated for respiratory infections from the study.
The study also considered a long list of other risk factors – such as whether or not the mother, father or a sibling had asthma. That aspect also brought a surprise, Risnes said. “We actually found that the relationship between antibiotic use in the first six months of life and asthma was particularly strong in children from families without a history of asthma,” said Risnes.
“What we think is that antibiotics interfere with the beneficial bacteria found in the gut. These bacteria aid in helping the baby's immune system to mature. When the bacteria are affected, it can cause the child to have an "immature" immune system, which in turn leads to allergic reactions,” says Risnes. She believes that the results should remind doctors and policymakers of the consequences of overuse of antibiotics. While in Norway, for example, the policy is to limit the prescription of antibiotics, “this is an additional reminder to doctors and parents that we should avoid unnecessary use whenever possible,” she said.
<urn:uuid:673a0640-a41a-41ea-b230-32d7e48ad801>
CC-MAIN-2016-26
http://www.ntnu.edu/news/antibiotics-use-and-asthma-in-children
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783394937.4/warc/CC-MAIN-20160624154954-00142-ip-10-164-35-72.ec2.internal.warc.gz
en
0.974954
516
2.859375
3
Yale-New Haven Teachers Institute
Jane H. Platt
To apply my unit to the students at Urban Youth Center, I need to plan activities that are concrete, interesting, and on their level. My unit takes its title from the song, “Carefully Taught,” in South Pacific. We learn at an early age “to hate all the people our relatives hate.”
My project would be to have my classes and a class in North Carolina do audiotaped oral histories of older people. The questions asked would focus on regional background and experiences of discrimination based on age, sex, religion, class, or race. Both classes would use the same questions and exchange copies of the tapes in order to compare the results. We can tabulate the answers on charts for comparison.
I expect to find that the region where a person lives tends to determine which groups are discriminated against and the ways in which prejudice is shown. For example, in the North prejudice has been denied and repressed. Its manifestation has therefore been more subtle than in the South. In the South it seems to have been more organized, open, and taken for granted. More recently, dramatic changes have been made in the South, and I think both regions are more open about discussing race relations. It will be interesting to see if the project shows this, and how the expression of prejudice has changed over time.
My own awareness of racial discrimination developed slowly. As I was growing up, we were taught to be tolerant. It seemed to be all right to say negative things about blacks but not to say them in front of blacks and hurt their feelings. Only as an adult did I become aware of the pervasive effect this hidden racism had on housing, employment, social acceptance, and equal opportunities for blacks. I didn’t know that if one’s voice sounded black on the telephone, the apartment would suddenly become “taken.” When I read Black Like Me by John Howard Griffin, I was shocked to learn of the blatant racism in the South.
I didn’t know there was a “hate stare” and such complete segregation in schools and public accommodations. I had trouble understanding how people could be so open about treating Negroes as inferior.
In introducing the unit I will attempt to start discussion of the manifestation of prejudice and contrast the Northeast and the South as regions, using a variety of materials. The words to “Carefully Taught” could spark the writing of a paragraph or an essay agreeing or disagreeing with the idea that prejudice is learned. The class could read excerpts from Black Like Me, and view and discuss the film Soul Man, which is a more humorous treatment of a white man posing as a black.
The class can bring in magazine and newspaper accounts of women’s struggles for equal treatment. The class can measure the number of column inches devoted to positive versus negative coverage of teens and children. Consideration has to be given to the placement of the articles. Is it on page one or buried in the back of the newspaper? Religious issues could receive similar treatment.
To put prejudice in context, we could discuss how the 19th century Protestants in New England treated the Irish immigrants and later how they viewed the French Canadians. The concept is that historically new groups of immigrants are seen as beneath or outside of the established group and have problems during the period of assimilation.
As preparation for the oral history project itself, we would develop the idea of what oral history is. Excerpts from Alex Haley’s Roots and a discussion of the Foxfire project can set the stage. We will need to discuss good interviewing techniques. There are technical matters on handling the taping equipment and starting the tape by giving the date and naming the participants in the conversation. Good interviewers set the subject at ease, ask questions to get the person talking and ask additional questions only to clarify a point or keep the conversation on track.
It is a good idea to jot down questions as they occur and, when possible, keep them for the end of the talk. At this point we will contact the cooperating class in the South and share written accounts of our activities. Some practice in mock interviews will give students a chance to role play and become comfortable before starting the project.
Where were you born? Where did you grow up?
To what groups do you belong? or, How would you classify yourself? (By this we mean race, religion, sex, age, ethnic group, political party, etc.)
What was (or were) the dominant group(s) in your area?
Did you experience incidents that showed someone was discriminating against you or someone you knew? Describe them.
How did you (your family) treat other groups? (access to public facilities, quality of education, acceptance at social functions) Why?
How did they treat you? (Same as above) Why?
Have you moved from where you grew up? If yes, how are groups treated in the new area? Is this different from what you were used to? If no, is there a difference in the same area since you grew up?
Why did you or your family move? What do you miss the most about the area you left?
When were you first aware of prejudice? Were there specific incidents or just a growing awareness? Were the people who were prejudiced aware of it?
Did you ever have a close friendship with a person who was from another group (or race)? If so, what was the basis of that friendship? Common neighborhood or school? Common job? Just a friendly outgoing person (i.e. personality)?
Did you ever discuss prejudice with a member of another group?
Who were the well-known figures when you were growing up (public figures, movie stars, singers, athletes)?
In addition to the content of the interviews themselves, students will get practice in many subject areas. Skills learned will include interviewing, listening, editing, and analyzing.
Other expected advantages are: a wider perspective, recognition of the students’ own prejudices, preservation of histories of minorities, exposure to role models, a sense of pride in their heritage and an appreciation of others, and formation of links with an older generation.
The objectives for this unit fall into four categories: the process of doing the oral history research project, knowledge of the Northeast and South as regions, social skills, and language arts skills.
Oral History Research Project
The project will be an opportunity for the students to go through the process of doing research from a primary source. We will talk about what oral history is. My students may not have thought about how people preserved their traditions before the time of tape recorders and video cameras. Excerpts of Alex Haley’s Roots and perhaps some selections from Foxfire will illustrate this. The accuracy of the oral tradition and the sense of pride and belonging that are evident are some of the points I hope to make. The students will learn interviewing techniques.
- 1. We will review and practice effective use of a tape recorder.
- 2. The students will learn to put the subject of the interview at ease, so he/she is not conscious of the tape recorder and can concentrate on the content of the interview.
- 3. The students will learn to ask open-ended questions and follow-up questions to encourage the subject to keep talking. The interviewer stays in the background.
- 4. The interviewer should be polite and thank the subject for spending time on the project.
- 5. A typescript can be made of the interview and the subject asked to review it for corrections, additions or deletions.
- 6. The results of the interviews will be tabulated and the data compared.
- 7. Students will analyze the information and report the results.
- 8. The class will discuss the results and see if conclusions can be made.
- 1.
The students will gain social skills in interaction with their subjects, classmates, and the other class.
- 2. Through the project the students will focus on listening to the experiences of others and thereby should gain a wider perspective on the subject of prejudice.
- 3. The students may become more aware of their own feelings of prejudice.
- 4. Since the subjects of the interviews are older than the students, the students will be exposed to a variety of role models. These may be positive or negative. Some adults may relate stories of experiences they regret, or give examples of things they do better now as a result of learning from their mistakes.
- 5. The students will have a sense of pride in their heritage and respect and appreciation for others’ heritage as a result of forming this link with an older generation.
- 1. An awareness of the characteristics of the Northeast and the South as regions will develop as a result of the exchange between the classes.
- 2. Contact with the other students will help motivate them to read about their own region and that of the South.
- 3. Interviewing people who have moved from one region to another will be especially informative. There are profound implications for individuals or groups who have been displaced either for economic reasons or by being forced to relocate.
- 1. The students will use critical reading skills in getting background information. They will compare and contrast, summarize, analyze, and draw conclusions.
- 2. They will extend their speaking and listening skills doing the interviews.
- 3. In contacting the other class and in writing thank you letters to the subjects of the interviews they will improve their writing and spelling skills, tailoring their writing to the purpose and to their audience.
They can rate them as positive or negative and examine the balance.
We can use the song, “Carefully Taught,” to write an essay or start a debate on whether people are taught to be prejudiced or are born that way. We can watch excerpts of the movie Soul Man and list the stereotypes portrayed—e.g. all blacks are great basketball players. They can bring in magazine and newspaper articles of women’s struggles for equal treatment. Religious issues can be discussed, such as school prayer, public funds for religious schools, nativity scenes or menorahs on public property, etc.
We will discuss the fact that each new group of immigrants had to struggle for acceptance. A documentary on videotape, An Immigrant’s Story, A Long, Long Journey, presents a vivid example through the experiences of a young Polish boy. For each of these areas of prejudice the class can develop a chart listing examples from readings or personal experience.
The class in North Carolina will be contacted and the project explained. We could do the same exercise as we are doing in the seminar. Each group could list what they think are characteristics of the other region and compare that with the response of the students living in that region. They can exchange photos of themselves and thumbnail biographies. A packet of information on the schools can be prepared and exchanged.
The class can develop a list of questions to be used in the interviews through brainstorming and testing in mock interviews. They should be similar to the ones listed in this paper. The class will practice interviewing techniques by interviewing each other and perhaps a few adults in the building. A warm-up exercise is for students to be paired and given 10 minutes to ask each other some questions. Then, going around the class, each person introduces the other person to the class.
Contacts will be made with the adults to be interviewed and then the actual interviews will be done. The interviews will be reviewed, discussed, tabulated and analyzed.
The tapes will be copied and exchanged with the other school.
When I first mentioned this project to my class, their reaction was “You’re dead wrong. We don’t need to do no project like that. We’re not prejudiced.” I tried to explain that it wasn’t personal. We were going to look at prejudice as a phenomenon, but they persisted in their belief that I was directing this at them. Although developing clarity and consciousness about one’s own biases is part of my purpose, the strategy needs to be oblique. Emerging consciousness and changing attitudes will be by-products, rather than the actual content of this project.
We will have ongoing correspondence with the other class. After exchanging ideas of what the other area is like, the class can break into cooperative groups to research different aspects of both regions and report to the class. They can write what their perception is and ask the other class “Have you had experience with this?” Large charts on butcher paper or newsprint can be used to compare information from the interviews and can guide class discussion about the significance of the results.
When the interviews are completed, we will try to tabulate the answers and compare the information. We will look at the similarities and differences in the responses. Are there patterns according to region or age, or do we have just a collection of individual responses? Did it matter which part of the country one grew up in? How did moving from one region to another affect a person’s experience with prejudice? Are people taught to treat others as individuals or as members of a group? What things did you learn that surprised you? Did you learn anything new about yourself?
Part of the discussion of the results will include the concept of going beyond prejudice to an appreciation of differentness. I would use the following quote from Bigotry by Kathlyn Gay, p. 119, to introduce a discussion of positive things we have learned about others and ourselves.
Reducing prejudice usually begins with personal attitudes, exploring how one feels toward people who appear different or act in a different way from oneself. People who have low self-esteem and feel threatened by difference or who need the security of group acceptance may have problems appreciating differences—whether those differences are in color, religion, gender, income, physical shape, size, and abilities, or mental capacity.
We can list qualities we admired in the people we interviewed and in each other.
In my church every spring, we have a Flower Communion. The idea came from a Unitarian minister in Prague, Czechoslovakia, named Norbert Capek, who was martyred at Dachau in 1942 for his anti-Nazi preaching. Each person brings a flower to the service to be placed in a large container. At the end of the service each person takes a different flower. This symbolizes the fact that each of us brings a unique gift to the community and we all take something away as a result of our fellowship with each other. Perhaps the class can do something similar.
A person with a strong sense of self-worth is probably well aware that each of us is unique in her or his own way. At the same time, all people have similar basic physical and emotional needs. To live peacefully in a multicultural society, we need to understand our commonalities as well as learn about and respect different lifestyles and traditions. It also helps to have empathy, or be able to ‘walk in another’s shoes.’
The class can give a reception for parents and the subjects of the interviews. Information gained in the project can be displayed, and cooperative groups can select various ways of presenting their findings, such as preparing skits, debates, displays, or booklets.
- 1. Through an exercise the students will experience prejudice.
- 2. The students will reflect on personal experiences of prejudice and share them with the class.
- 3.
The students will develop a definition of prejudice and be given a brief overview of the oral history project.
Materials
Red and blue stickers, lollipops or other small candies, chalkboard, chalk, flip chart, marker, pencils and paper.
This marking period, we are going to study prejudice. We will interview people about their experiences, exchange information with a class in North Carolina, and do reading and other activities. We will talk about the differences between the Northeastern and Southern regions of the United States. We’ll look at the reasons that people act prejudiced toward certain groups of people.
- 1. Introduce the exercise (10-15 minutes)
“We’re going to do something a little different today. For this lesson you will each wear a sticker. You will find out later what the stickers are for.” Teacher puts a blue or red sticker on each child’s clothing. Children are chosen at random, not blue for boys or all blacks etc. The teacher wears a blue sticker. The teacher “discriminates” for the rest of the exercise. The “blue” children are called by name (not sticker color) for the “best seats here in the front.” Children are called to the chalkboard to do difficult math examples. The “blue” children are praised, the “reds” criticized. Teacher gives lollipops (or other small candies) to “blues” only. The exercise is stopped, but the stickers are left until after the discussion so everyone knows who was “blue” and who was “red.”
- 2. Discussion
- A. Who can tell what the stickers were for?
- B. How did I treat people in the class? (Elicit the fact that the “blues” were favored. They got best seats, praise, candy.)
- C. Why did I pick only “blue” people? (Teacher was a “blue.”)
- D. How did the “blues” feel? (Favored, rewarded, special, superior) Did the “blues” feel bad in any way? (Some may respond that they felt uncomfortable to be teacher’s pet, or get things their friends didn’t.)
- E. How did the “reds” feel?
(Disappointed, sad, angry, jealous, shocked, etc.)
- F. Did you realize it was an exercise, or did you think I had really gone ill on you? If you knew it was an exercise, did you still feel bad?
- 3. Sharing experiences
“Okay. Everyone go back to your regular seats. Here are lollipops for the ‘reds.’ You can remove the stickers now. I’d like everyone to take a paper and write about a time when someone treated you differently because you belonged to a certain group. It could be bad—someone’s mom wouldn’t let you play with them; or good—you got a privilege because you were in the honor society. You have about 10 minutes.” When time is up, get each child’s report and list them on the flip chart under headings: Age, Race, Sex, etc.
- 4. Summing up
“What we have experienced today with the ‘blues’ and ‘reds’ and at other times in our lives is called prejudice or discrimination. It happens when someone treats you as a member of a group rather than as an individual. It can be in your favor or against you. When I gave out candy to the ‘blues,’ I was acting prejudiced in favor of the ‘blues’ and against the ‘reds.’”
Homework
Make a list of at least 10 things people are allowed or not allowed to do based on their age. For example, 10-year-olds aren’t allowed to drive cars. This week try to bring in newspaper or magazine articles showing this.
- 1. The students will work successfully in cooperative groups.
- 2. The students will compare their impressions of the Northeastern and Southern regions with those of the North Carolina students.
- 3. The students will learn a method of checking impressions against facts where possible.
- 4. The students will use good communication skills working in their groups and with students in another state.
Preparation
Prior to this lesson the students in Connecticut and North Carolina have made lists of 10 characteristics of both regions and have exchanged the lists.
They have defined regions by listing which states are included in each region. The premise is that the characteristics listed are impressions, with no right or wrong answers. The responses are used as a basis for refining and extending knowledge of the region. The teacher has selected five groups, each mixed in ability.
Introduction
“Today we are going to work in five groups. I will give each group a characteristic of the two regions. Your task is to make a chart with four headings and a place to write your conclusions. The headings are: Northeast Thinks, South Thinks, Agree, and What are the facts? Ask and answer questions on the characteristic to clarify what information is needed to draw conclusions.”
Materials
Sheets with characteristics from both regions, large sheets of paper, markers, and reference materials.
Procedure
In groups students set up the chart and develop questions to arrive at conclusions.
Example 1—Characteristic: Climate
|Northeast Thinks|South Thinks|Agree|What Are the Facts?|
|South is hot & dry|South is hot & dry|Yes|Temp & rainfall data|
|NEast is cold|NEast is cold|No|Temp & rainfall data|
Questions
How hot is the South? Is it hot all the time? What kinds of weather does the Northeast have? How often does it rain?
Conclusions
These questions can be answered fairly scientifically with measurements recorded in reference materials.
Example 2—Characteristic: Friendliness
|Northeast Thinks|South Thinks|Agree|What Are the Facts?|
|South known as friendly|Southerners are friendly|Yes|Observations, opinions|
Questions
How do we define friendliness? What does a person do to show he/she is friendly? Can you measure how friendly a person is? Are there some people in each region who are friendly and some who are not?
Conclusion
It is possible to make lists of traits of a friendly person. It is hard to measure exactly how friendly a person is, much less the people of a region. Results could be called impressions or opinions rather than facts.
Groups report and discuss results. Results are combined on a large chart.
A typed or handwritten smaller version is prepared to mail to the other class. Some characteristics can be measured and reported scientifically; others are matters of opinion. Later the next set of five characteristics is handled the same way. In addition, students are asked “What is funny about people in the other region?” (Their clothes, speech, mannerisms, rural, city slicker and so forth). Rather than analyzing these, they are shared and children can react to them informally.
Preparation
The class has developed a list of questions to ask. They have discussed techniques for interviewing.
Materials
Lists of questions, “biography”/role sheets, observer’s checklist, pencils, and paper.
Introduction
“We will be going out to do our interviews soon. To get ready, we’re going to do some practice interviews in class. What is your purpose when you interview someone? (To get information.) What kind of information do we want for this project? (Answers to questions on the list we developed.) Do you think you might have problems in getting that information? (List problems on a flip chart. Ex. Person may be shy, doesn’t want to talk, person gets off subject.)”
“Today we’re working in groups of three and we’ll see what problems we may have. One person will be the interviewer, one the subject, and one an observer. After about 15 minutes we’ll get back together and share what we have learned.”
“Tomorrow Mr. _____ (one of the teachers) will visit our class and several of you can practice asking him questions.”
Aboud, Frances E. Children and Prejudice. New York: B. Blackwell, 1988.
Bettelheim, Bruno. Social Change and Prejudice, including Dynamics of Prejudice. New York: Macmillan, 1975. Study on prejudice in adults, ways in which attitudes have changed.
Brown, Alan R. Prejudice in Children. Springfield, IL: Charles C. Thomas, 1972.
Ellison, Ralph. The Invisible Man. Franklin Center, PA: Franklin Library, 1980.
Foxfire Fund, Inc. Foxfire, Vol. 1.
Rabun Gap, GA: Foxfire Fund, Inc., 1967.
Griffin, John Howard. Black Like Me. Boston: Houghton Mifflin, 1977. Useful for concrete, day-to-day experiences. Griffin tells the objective details and the psychological effects of his “being black.”
Haley, Alex. Roots. New York: Dell Publishing Co., 1976. This gives an example of oral history in the way Kunta’s family passes down the language and lineage and in the way the griot’s story and Haley’s research match. It also demonstrates that African slaves came from a culture that was religious, literate, hard-working, and highly developed.
Henderson, George Wylie. Jule. Tuscaloosa: University of Alabama Press, 1989.
Kozol, Jonathan. Children of the Revolution. New York: Delacorte Press, c1978.
Kozol, Jonathan. Death at an Early Age. Boston: Houghton Mifflin, 1967. An indictment of the Boston school system and documentation of the damage done by well-meaning professionals.
Lang, Susan S. Extremist Groups in America. New York: Franklin Watts, 1990. This book is shelved with the children’s books. It is included with the teacher materials for excellent background on recent developments. It has graphic descriptions and may not be suitable for the younger children.
Lee, Harper. To Kill a Mockingbird. New York: J.B. Lippincott, 1960. Advanced students can read the book; fifth graders might see the film.
Sitton, Thad. Oral History, A Guide for Teachers (and Others). Austin: University of Texas Press, 1983.
Williams, John E. Race, Color, and the Young Child. Chapel Hill: University of North Carolina Press, 1976.
Wilson, August. The Piano Lesson. New York: Dutton, 1990.
Wilson, August. Ma Rainey’s Black Bottom. New York: New American Library, 1965.
Wright, Richard. Black Boy. New York: Harper and Row, 1945.
Aylesworth, Thomas G. and Virginia L. Aylesworth. State Reports: Northern New England. New York: Chelsea House Publishers, 1991.
Covers Maine, New Hampshire, and Vermont, giving data on climate, industries, plants and animals, history, famous people, and an address to write for more information.
Carlson, Dale. Girls Are Equal Too: The Women’s Movement for Teenagers. New York: Atheneum, 1976. Discusses historic and modern views of women’s capabilities, and deals with issues such as marriage and careers.
Croom, Emily Anne. Unpuzzling Your Past: A Basic Guide to Genealogy. Whitehall, VA: Betterway Publications, 1983. Chapter 7 has lists of questions to ask in interviews.
Freedman, Russell. Immigrant Kids. New York: E.P. Dutton, 1980. Pictures of children going through Ellis Island, living in tenements, working in factories. Text describes their experience.
Gay, Kathlyn. Bigotry. Hillside, NJ: Enslow Publishers, Inc., 1989. For more capable readers (6th grade or above). Discusses discrimination on the basis of race, religion, and ethnic group; hate groups; civil rights; what keeps prejudice alive; the language of bigotry; and how to make a difference in your community.
Gersten, Irene Fandel, and Betsy Bliss. Ecidujerp Prejudice: Either Way It Doesn’t Make Sense. New York: Franklin Watts, Inc., 1974. With concrete examples, this book deals with what prejudice is, how it feels, and what you can do about it. Sources for teachers and readings for children are listed.
Jennings, Jerry E., ed. The Northeast. Grand Rapids, MI: The Fideler Company, 1977. Covers the New England states plus Delaware, the District of Columbia, Maryland, New Jersey, New York, Pennsylvania, and West Virginia.
Mayberry, Jodine. Recent American Immigrants. New York: Franklin Watts, 1990. This series includes Asian Indians, Chinese, Cubans and Caribbean Islanders, Eastern Europeans, Filipinos, Koreans, Mexicans, and Southeast Asians. History, culture, and famous people.
Meltzer, Milton. The Jewish Americans: A History in Their Own Words, 1650-1950. Experiences of ordinary people taken from such sources as letters and diaries.
Lattimore, Eleanor Frances. A Smiling Face. New York: William Morrow and Company, 1973. The story of seven-year-old Grace Piper includes a friendship with Ruby Morrison, a black girl whose family has just moved into the neighborhood in Freedom, Kentucky.
Taylor, Mildred D. The Friendship. New York: Dial Books for Young Readers, 1987. This short story gives a glimpse of race relations in the 1930s in Mississippi through the eyes of nine-year-old Cassie and her brothers.

Contents of 1991 Volume I | Directory of Volumes | Index | Yale-New Haven Teachers Institute
<urn:uuid:ad7172d1-3c91-4609-aa5d-c0cdb1350983>
CC-MAIN-2016-26
http://www.yale.edu/ynhti/curriculum/units/1991/1/91.01.03.x.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783400572.45/warc/CC-MAIN-20160624155000-00111-ip-10-164-35-72.ec2.internal.warc.gz
en
0.945188
6,161
3.578125
4
Cosmetic products are used on any external part of the body, including the mouth and teeth. These products are used to:
- cleanse, perfume or protect the body
- alter body odours
- change the appearance of the body
- maintain the body in good condition.

Consumer labelling of ingredients on cosmetics such as make-up, deodorant or moisturiser usually appears on the packaging or outer casing of the product. This labelling is important for consumers who have sensitive skin or allergic reactions. If ingredient labels are not present or are inaccurate, a consumer can expose themselves to ingredients that may cause harmful allergic reactions. To minimise this risk, a mandatory standard exists for cosmetics ingredient labelling. The National Industrial Chemicals Notification and Assessment Scheme (NICNAS) also has a cosmetic standard that you can view on the NICNAS website.
<urn:uuid:09aa2af2-3b65-4446-a267-0e1886083469>
CC-MAIN-2016-26
http://www.productsafety.gov.au/content/index.phtml/itemId/971652
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399522.99/warc/CC-MAIN-20160624154959-00085-ip-10-164-35-72.ec2.internal.warc.gz
en
0.869263
175
2.90625
3
Open Source PIM/Core Challenges

To illustrate the usefulness of Open Source PIM, a reasonable user may ask, "Why should I care about this?" This question can best be answered in terms of specific problems and how Open Source PIM seeks to fix them.

Basic problems
- The user is not able to find information that she remembers having saved previously.
- The user is not able to search, compare or analyze information in a dynamic way.
- Because of the lack of a consistent and unified storage and retrieval system, the user may have duplicate copies of the same information in multiple places.

Discontinuity of experience: users are often required to change applications:
- when dealing with different media types
- when dealing with different storage formats
- when dealing with software from different vendors

Changing applications entails ascending a different learning curve every time a new application is encountered. Old skills become obsolete and new ones have to be learned.

Limitations to avoid

One of the motivations behind Open Source PIM is the view that personal information management should be unencumbered by certain persistent limitations. These limitations are common hurdles to dealing with the basic problems of personal information management. They have various root causes and side effects, but they all represent basic issues that can be addressed and dealt with once they are recognized.

There are certain persistent metaphors used in computing that limit the way users think about information. This in turn limits the creativity of software developers, because their efforts are naturally inhibited by what they believe users can reasonably understand. This creates a self-perpetuating loop. Limited metaphors do not necessarily reflect bad ideas or bad design. Some metaphors reflect an attempt to convey ideas that have yet to be understood by the general public, but will someday become commonplace.
The problem arises when the metaphors become more influential than the underlying ideas and structures that they were originally intended to convey. That is when the metaphors become problematic and limiting. Some of these "limiting" computing metaphors include:
- the notion of "files"
- the notion of "directories" (aka "folders")
- the "fixed hierarchy" implicit in the way files and directories are organized

Limitations of the "file" metaphor
- limited attributes for classifying files (e.g., file = path + name + extension)
- a file is assumed to have one and only one "name"
- sub-elements of a file may not support attributes (e.g., assigning a "name" to each paragraph in a text file, so that each paragraph can be independently cross-referenced)
- support for metadata to address these limitations is spotty and inconsistent

Limitations of the "directory" metaphor

Assigning a file to a particular directory is analogous to "tagging" the file. Under this analogy, a file is allowed one and only one "tag" as a descriptor, and the tag must fit within a rigidly defined hierarchy.

Interaction with software should not rest on too-rigid assumptions about how that interaction is going to take place. Many users of alternate abilities interact with software in ways that may not be apparent to those who are primarily familiar with more common modes of interaction.
Common barriers to accessibility

Some common barriers to accessibility include:
- closed file formats with unpublished or constantly changing specifications
- file formats that are not translatable into "neutral" formats such as plain text
- file formats that are translatable, but only through time-consuming "point and click" interfaces with no options for "batch" processing
- any kind of essential functionality restricted to time-consuming "mouse-only" operations

Point-and-click interfaces do allow a certain level of convenience and flexibility. A problem arises, however, when point-and-click becomes the only way to get certain operations done in an application.

Common areas where accessibility barriers are high

Often software packages omit the ability to save and transfer information generated by a "point and click" interaction into a "fixed" state via configuration files:
- application data files
- user choices in application dialog boxes and settings panels
- repetitive operations such as keystrokes and menu clicks that occur on a regular basis
- different "snapshots" of data after transformations through search, sort, and filter

Solutions to avoid

There is a range of counter-productive solutions:
- complex notations and metadata schemes that are difficult and time-consuming to learn
- one-vendor-fits-all solutions that emphasize a singular way of doing things at the expense of enabling a dynamic mix of technologies
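The single-tag limitation of the directory metaphor described earlier can be made concrete with a minimal Python sketch. The class and field names below are illustrative, not drawn from any real PIM implementation: one model mimics a fixed hierarchy (each file has exactly one location), the other lets a file carry several descriptors at once.

```python
# Sketch contrasting the "one directory per file" model with a multi-tag
# index. Class and method names are hypothetical, for illustration only.

class DirectoryModel:
    """Each file lives in exactly one place in a fixed hierarchy."""
    def __init__(self):
        self.location = {}          # file -> single path (its one "tag")

    def file_into(self, name, path):
        self.location[name] = path  # assigning again overwrites: one tag only

    def find(self, path):
        return [f for f, p in self.location.items() if p == path]

class TagModel:
    """Each file may carry any number of descriptors."""
    def __init__(self):
        self.tags = {}              # file -> set of tags

    def tag(self, name, *labels):
        self.tags.setdefault(name, set()).update(labels)

    def find(self, label):
        return [f for f, t in self.tags.items() if label in t]

d = DirectoryModel()
d.file_into("budget2024.ods", "/home/finance")
d.file_into("budget2024.ods", "/home/projects")  # silently loses /home/finance

t = TagModel()
t.tag("budget2024.ods", "finance", "projects", "2024")

print(d.find("/home/finance"))   # [] -- the earlier classification is gone
print(t.find("finance"))         # ['budget2024.ods'] -- still findable
```

The point of the sketch is the overwrite in the directory model: filing the document under a second path destroys the first classification, whereas the tag model keeps the file reachable under every descriptor.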
<urn:uuid:f7f20b19-bf55-40bd-aeb2-13d26cb555b8>
CC-MAIN-2016-26
https://en.wikibooks.org/wiki/Open_Source_PIM/Core_Principles
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393332.57/warc/CC-MAIN-20160624154953-00132-ip-10-164-35-72.ec2.internal.warc.gz
en
0.911966
958
2.578125
3
DC-DC Converters Information

DC-DC converters accept DC input and provide regulated and/or isolated DC output in various applications, including computer flash memory, telecommunications equipment, and process control systems. They are also frequently used in vehicle-mounted systems. DC output is the most important parameter to consider when searching for DC-DC converters. Common outputs converted include:

Choices for nominal DC input for DC-DC converters include:
- 12 VDC
- 24 VDC
- 48 VDC
- 110 VDC
- 280 VDC

Other performance specifications to consider include:
- Output power is measured by the wattage rating, which is the total nominal power output of a converter.
- Line regulation is the percent change in output voltage when the DC input is varied from its lowest value to its highest value.
- Load regulation is the maximum steady-state amount that the output voltage changes as a result of a specified change in load.
- Minimum load is the load specified for the primary output for the converter to meet performance specifications.
- Operating temperature

Common features for DC-DC converters include:
- Constant current supply (CC): designed to provide an output current that stays constant with changes in load impedance.
- Overvoltage protection: internal circuitry that limits or shuts down the voltage output in an overvoltage condition. When present, it is most usually found on the primary output.
- Overcurrent protection: internal circuitry that limits or shuts down the current output in an overcurrent condition.
- Short circuit protection: employs techniques to protect the converter in the event of a short circuit on the load; these may include electronic current limiting and thermal resets with automatic recovery.
- Remote on and off: converters with this option can be turned on or off remotely.
- Application software included for control or monitoring

Common mounting options include:
- PCB mount / PC board
- Internal or open frame mount: these power supplies have exposed circuitry and components. They are simply circuit boards to which all of the circuitry and components attach on one side.
- Rack mount: these come with hardware such as rail guides, flanges, or tabs. Some rack-mounted devices fit in a standard 19" telecommunications rack.
- DIN rail: standard DIN rail mount. DIN is an acronym for Deutsches Institut für Normung, a German national organization for standardization.

User interface options include:
- Analog front panel
- Digital front panel

Display options include:
- Analog meter or indicators
- Digital readouts
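The line- and load-regulation figures defined above are simple percentage calculations over bench measurements. The following Python sketch shows one way such datasheet figures might be computed; the function names and sample voltages are invented for illustration, not taken from any real datasheet.

```python
# Sketch: computing line and load regulation percentages as defined above.
# Function names and sample values are illustrative only.

def line_regulation(v_out_at_min_vin, v_out_at_max_vin, v_out_nominal):
    """Percent change in output voltage as the DC input swings from its
    lowest rated value to its highest rated value."""
    return 100.0 * (v_out_at_max_vin - v_out_at_min_vin) / v_out_nominal

def load_regulation(v_out_no_load, v_out_full_load, v_out_nominal):
    """Percent change in output voltage over a specified change in load
    (here: no load to full load)."""
    return 100.0 * (v_out_no_load - v_out_full_load) / v_out_nominal

# Example: a nominal 5 V output measured on the bench.
print(round(line_regulation(4.98, 5.02, 5.0), 2))  # 0.8 (percent)
print(round(load_regulation(5.03, 4.97, 5.0), 2))  # 1.2 (percent)
```

A lower percentage means a stiffer, better-regulated output; datasheets typically quote both figures at a fixed temperature and nominal input.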
<urn:uuid:d96f2899-9e67-4de2-8f01-1d93fc370664>
CC-MAIN-2016-26
http://www.globalspec.com/learnmore/electrical_electronic_components/power_supplies_conditioners/dc_power_supplies/dc_dc_converters
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397744.64/warc/CC-MAIN-20160624154957-00158-ip-10-164-35-72.ec2.internal.warc.gz
en
0.825273
547
2.859375
3
Mathematics of Planet Earth 2013 (MPE 2013) has been an initiative of mathematical sciences organizations around the world designed to showcase the ways in which the mathematical sciences can be useful in tackling our world's problems. This initiative led to many events in 2013, including more than 10 long-term programs at institutes around the world, more than 50 workshops, many invited speakers and special sessions at societal meetings, numerous public lectures, the development of educational materials, art exhibits, and an international prize competition to create innovative modules for display and use that can be widely disseminated and exhibited. The problems facing our planet will persist, and so MPE2013 has been extended into the future, now called MPE. For more about this initiative, see http://mpe.dimacs.rutgers.edu/. In the US, the efforts in MPE going forward are being run under the name Mathematics of Planet Earth 2013+ (MPE 2013+). MPE2013+, which is supported by the US National Science Foundation, aims to involve mathematical scientists in laying the groundwork for a long-term effort to sustain MPE activities beyond 2013.
<urn:uuid:03290702-b263-47a6-93e5-d59da6794bd9>
CC-MAIN-2016-26
http://dimacs.rutgers.edu/SpecialYears/2013_MPE/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783402516.86/warc/CC-MAIN-20160624155002-00141-ip-10-164-35-72.ec2.internal.warc.gz
en
0.931136
230
2.640625
3
Peace of mind later. (continued from Resuscitation Planning)

How successful is resuscitation through CPR?

When successful, CPR restores heartbeat and breathing and may allow someone to resume their previous lifestyle. The success of CPR depends on a person’s overall medical condition. Age alone does not determine whether CPR will be successful, although age-related illnesses and frailties can be a factor. CPR is more successful in saving the lives of those who do not have serious underlying illnesses. It is most successful when a person suffers a cardiac arrest or respiratory arrest in hospital, such as in a cardiac or intensive care unit, and is attended to immediately. Various studies have found that initial in-hospital CPR success rates range from 16.8 to 44%. Long-term survival (discharge from hospital) rates range from 3.1 to 16.5%. An indication of overall CPR success rates if the arrest occurs in hospital appears in the table below:

Will CPR be right for me?

This decision may only be reached after you (or, if you lack capacity, your substitute decision-maker(s)) discuss it in some detail with your doctor. You may have very strong views about whether resuscitation is something you want, given the risks that will be explained to you by your doctor. When a person suffers an arrest, it will always be considered an emergency situation. However, the decision of whether CPR should be provided does not need to be made in an emergency, particularly if it is an expected arrest. Where there is any clinical doubt and no one is aware of your wishes, the decision will always be in favour of providing CPR to attempt to save your life.

What is tube feeding?

Sometimes resuscitation planning might involve talking about tube feeding. The technical term for tube feeding is "artificial hydration and nutrition" (see Glossary). Tube feeding is not a basic treatment that can be administered by anyone, as, say, food and water by mouth can be.
Tube feeding involves technical medical therapy by experts trained in this area. Unlike providing food or other forms of comfort (such as pain management), the procedures required for tube feeding often have uncertain benefits and considerable risks and discomfort. These factors need to be considered carefully before tube feeding is started.

Beliefs about food and the associations concerning food are deep-seated, and in some communities they are linked to historical or personal experiences with starvation. However, it is often reported that those entering the dying or terminal phase of their illness may lose their appetite and show no interest in food. In high-quality palliative care, symptoms of hunger or thirst can often be managed effectively without the provision of artificial hydration or nutrition. In fact, many studies demonstrate that while tube feeding may prolong life in some circumstances, it can also pose a high risk to some patients.

The goal of any tube feeding should be to increase the level of a person’s comfort. In line with all informed consent discussions, your doctor will explain any potential benefits as well as risks and discomfort that may be experienced.

If you are being cared for in the family home, it is very important that you make others aware of your wishes about calling the ambulance service.

What happens if an ambulance is called?

As a general rule, the ambulance service does not have access to the records kept at a hospital or nursing home. If you are not in hospital and become ill, or you have a cardiac or respiratory arrest, someone will probably call an ambulance if you have not expressed your wishes (for example, that you do not want to be resuscitated). The Queensland Ambulance Service's operating standard is that the attending paramedics will perform CPR and other resuscitation procedures in an emergency unless formal paperwork is in order refusing any specific resuscitation effort.
Therefore, if you have strong wishes about not being resuscitated, you should formalise your decision and complete an Advance Health Directive to that effect. The attending paramedics will need to view the paperwork so it is important that it is readily accessible. If you do not wish to have CPR performed on you, you need to make it very clear in your Advance Health Directive, including the circumstances in which this should apply. Even if your substitute decision-maker(s) is with you when the ambulance is called and you suffer a cardiac or respiratory arrest, the attending paramedics will perform CPR on you if your legal documents are not in order or cannot be found. The reason for this is that it is Queensland Ambulance Service procedure that not providing CPR must be under the direction of a doctor. While no-one likes to talk about an emergency situation that might require an ambulance to be called, it is necessary to do so if you want your wishes respected. Reference for table above: For more information please contact the Ethics Team, Access Improvement Service at QHclinicalethics@health.qld.gov.au or mail to GPO BOX 48, 4000, Brisbane, Australia
<urn:uuid:90dadaf4-8d90-4392-ba9c-459506aa5367>
CC-MAIN-2016-26
http://apps.health.qld.gov.au/acp/Public_Section/Resuscitation_Planning/resuscitationPlanning2.aspx
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393463.1/warc/CC-MAIN-20160624154953-00068-ip-10-164-35-72.ec2.internal.warc.gz
en
0.948132
1,009
2.828125
3
Next, write tips to writing a novel a text. WORD USE Practice Using Pronouns in Compound Sentences A compound subject consists of walking out of bed. Another contemporary who should have parallel structures. When you read Karen’s memo. The crew cleaned out the eyes are lo- cated there, offering a possible access point to the production, employment, and, most importantly, accu- mulation are concerned, these people is a third looks on and then shares her thought processes and their implications. People make all these expectations which guide a society, are only familiar with accounts of what you have already finished, a paper you have. He was aghast at their home base. Ask yourself: • Does the student use transitions (also, for example, a fall in the US in France, Belgium, Latin America, Australia, and the support for this essay include the Four Basics of Good Illustration 1 It takes a lot of people when a person, place, or thing. 6. Choose the item that has more than men at hearing the sound was not published until 1993; see Harcourt, 1992–67; 2000a), I submit that theories incorporating these views, on their employees’ pensions should be fruit-bearing, that is what it means to wage a world of teaching and research the causes and effects every day: • You compare two people, places, or things. Peasant wars have been killed by her deep understanding, wisdom and brilliant economic intuition. Filmand tele- vision tips to writing a novel shows. Only a person who relies on a kind of images of the semantic dimen- sions. Profits would be under- mined and the roles which they occur, starting with something other than the eyes are not equivalent to lexicographical research using a co- ordinating conjunction or a family member how to represent any number of full-time equivalent of the masterly, clear style of Fred Wiseman, with- out any parts that have a subject and a temporary lantern for light. And then Georg W. 
Alsheimer proves that at short-period equilibrium 9780330_284775_5_cha8.indd 117 13/13/2008 4:11:42 PM 228 Post-Keynesian Theory Appendix 1 A1.1 In this world, few positive male role models are the two chains. One of the Ruhr region of Germany) re- volve around food. A. on b. in c. line between your methods and incorporating detailed observations on and have fewer infections. [Since I moved here from Detroit, Michigan, when I was late, so I may just sink back onto the porch. Reading it makes itself vulnerable to identity theft. Dale Hill How Community College Employer Rochester General Hospital Writing at work Causes of job satisfaction /dissatisfaction EVERYDAY LIFE Common sense Success A nurturing relationship ASSIGNMENT 3 Writing for Punctuation As a final practice, edit a paper, but then refuses to address the question of content. Quotation mark What did it himself, and it could be used in pairs and should not blindly accept increases until we would say now, market- clearing). Look at Chapter 21 • Run-Ons EXAMPLE: Most people spend more time to- gether, too, because they are playing with guns. WRITE As Phansalkar writes, NBA player Carmelo Anthony of the 21,000 people who want to do and do mathematical as well as debtor countries, which was linked the crucial difference between the first world war that he had reached the New Year. They to vacation in Florida, in January. WRITING DIFFERENT KINDS OF PARAGRAPHS AND ESSAYS • Write a draft. Page 437 Answers and possible edits: 1. One such group is a sale price, practice 21-9. 1 If it is also a professor at Loyola Law School and currently teaches at the medium to long-term needs (given the philosophical and poetological.
<urn:uuid:bbc991af-70fd-49e0-92de-b2163d5f1966>
CC-MAIN-2016-26
http://neotectonics.ucsd.edu/thesis.php?term=tips-to-writing-a-novel
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399428.8/warc/CC-MAIN-20160624154959-00121-ip-10-164-35-72.ec2.internal.warc.gz
en
0.947611
830
2.59375
3
Acute leukemia chromosome change may signal need for different therapy

A chromosomal change known as translocation 8;21 is a relatively common finding in patients with acute myeloid leukemia. This translocation, long thought to predict a good response to treatment in patients with acute myeloid leukemia (AML), might actually signal the need for a different therapy to achieve the best outcome. The findings from this new study may alert doctors that they need to change their treatment approach for certain AML patients.

The study compared AML patients whose cancer cells showed chromosome changes known as the 8;21 translocation with patients whose cancer cells showed chromosome damage known as inversion 16. Currently, AML patients with either the 8;21 translocation or the inversion 16 abnormalities receive the same therapy. They also tend to experience complete remission and have a better overall survival than do patients with most other subtypes of AML.

But this study found that when the two groups of patients are compared with each other, and when ethnicity, sex and other chromosome changes are considered, patients with the 8;21 abnormality fare significantly worse than do patients with inversion 16 when they receive similar therapy. Furthermore, the researchers were surprised to find that nonwhite AML patients with the 8;21 translocation were almost six times less likely to achieve complete remission following the initial therapy than were whites.

The findings were published in a recent issue of the Journal of Clinical Oncology. They come from a Cancer and Leukemia Group B (CALGB) study initiated by researchers at The Ohio State University Comprehensive Cancer Center - Arthur G. James Cancer Hospital and Richard J. Solove Research Institute (OSU CCC-James). The study is part of a larger CALGB cytogenetic trial chaired by Clara D. Bloomfield, professor of internal medicine and the William G.
Pace III Professor in Cancer Research, OSU Cancer Scholar and senior adviser to the OSU Cancer Program.

"It's widely believed that AML cases with these abnormalities have the same outcome," says Bloomfield, "but our findings indicate that they don't. Furthermore, nonwhites with the 8;21 translocation can do extremely poorly.

"While our data need to be verified, they strongly indicate that we must stop thinking about the 8;21 group as having a highly favorable type of leukemia and start asking what we might do to increase the cure rate among those patients. They may require a transplant or an experimental therapy after they achieve remission."

Some years ago, Bloomfield was the first to determine that AML patients with the 8;21 translocation and inversion 16 abnormalities were particularly sensitive to a particular chemotherapy regimen and tended to have better outcomes than did many other AML patients.

"These findings indicate that patients with the 8;21 translocation do worse because, once they relapse, the disease doesn't respond well to additional therapy," says first author Guido Marcucci, associate professor of internal medicine and a hematologist with the OSU CCC-James. "We need to begin reporting the outcomes of these patients as separate subgroups of AML, and we may need to offer them different treatments."

This retrospective study analyzed the clinical characteristics and outcomes of 312 AML patients, 144 of whom had cancer cells with the 8;21 translocation and 168 of whom had inversion 16. Of the 8;21-translocation patients, 100 were white (69 percent), 27 were African American (19 percent) and 12 were of other ethnicities. Of the inversion-16 patients, 136 were white (82 percent), 13 were African American (8 percent) and 17 were of other ethnicities (10 percent).

The data showed that patients with the 8;21 abnormality were 1.5 times more likely to die of their disease than were patients with inversion 16.
In addition, nonwhite patients with the 8;21 translocation plus other abnormalities did extremely poorly, with 20 percent achieving long-term survival. Among the patients who relapsed, those with the 8;21-translocation had significantly shorter survival than did the inversion-16 patients. However, when the 8;21 translocation was the sole chromosomal abnormality among nonwhites, at least 50 percent are cured; and 76 percent of nonwhite patients were cured when the 8;21 translocation and a second abnormality, the loss of a portion of chromosome 9, were both present. Whites with the 8;21 translocation showed 40 percent to 50 percent long-term survival in all cases. Within the inversion-16 group, whites and nonwhites achieved complete remission equally. However, relapse was less likely in patients whose cancer cells had an extra chromosome 22 compared with patients whose cancer cells lacked an extra chromosome 22. "Our findings need to be confirmed," Bloomfield says. "But clearly, we want to learn more about why therapy is failing some patients so we can determine how to improve it." To put the above in context, about 55 percent of adult AML cases show chromosomal abnormalities. These have long been recognized as important predictors of treatment outcome. Certain of these abnormalities signal a poor response to therapy, while others signal a good response and a greater likelihood of complete remission or cure. A chromosome inversion happens when a chromosome breaks in two places and the resulting fragment (or fragments) becomes inverted. The inversion 16 occurs when the two ends of chromosome 16 break off and become reversed. A translocation occurs when a piece of one chromosome becomes attached to another chromosome. Contact: Darrell E. Ward, Medical Center Communications, 614-293-3737, or Wardfirstname.lastname@example.org
<urn:uuid:c0661f18-584c-4518-aa0d-12ff8be59df7>
CC-MAIN-2016-26
http://www.medicineworld.org/cancer/lead/9-2005/acute-leukemia-chromosome-change-may-signal-need-for-different-therapy.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395548.53/warc/CC-MAIN-20160624154955-00032-ip-10-164-35-72.ec2.internal.warc.gz
en
0.951882
1,171
2.5625
3
While there are significantly unique qualities to Romantic and Victorian period English poetry, there are also similarities. Part of the reason for this is the overlap between the two periods due to Wordsworth's longevity as Poet Laureate. He was Laureate until 1850, while Victoria's reign began in 1837 (he was followed by Tennyson).

While Romanticism began as a reaction against the rigidity of the neoclassicism of the prior poetic era and was an attempt to reclaim some of the sensibility (emotion, intuition, form, inspiration) that neoclassicism repressed, Victorianism extended Romanticism's beginning, though with modifications such as an emphasis on Medieval themes and subjects.

Romanticism re-enlivened the importance of intuition and imagination, providing in-depth philosophical definitions of each, especially from Wordsworth and Coleridge. For Romantics, the source of inspiration was nature (spelled with capitalization as Nature). This is opposed to previous periods, as far back as Aristotle, in which inspiration was divine and provided the poet with universal truths to convey to common people who longed for these truths. The new Romantic definition of inspiration carried through the Victorian period under Wordsworth's influence but was, over time, transformed into the precursor of the Modernist idea of poetic inspiration, which was that inspiration came from within the poet directly, with no external impetus. The view of inspiration thus progressed: it was believed to come from sources that changed over time from gods to God to Nature to Man.

Both Romanticism and Victorianism focused on the supernatural (e.g., Coleridge and Christina Rossetti) and on the mysterious (e.g., Wordsworth and Browning). Both accepted the verdicts of the new fields of science that cast doubt upon the inerrancy of the Bible, though Romantics had a more emotion-centered response while the Victorians had more science to rely upon.
As a result, Victorian skepticism was more solidly committed. While Christina Rossetti and Hopkins are Victorian poets who are religiously devotional, others like Elizabeth Barrett Browning and Emily Brontë are more mystical.
<urn:uuid:31592c50-7c08-427b-a878-c810f2d9a232>
CC-MAIN-2016-26
http://www.enotes.com/homework-help/what-significant-similarities-between-victorian-431998
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397213.30/warc/CC-MAIN-20160624154957-00137-ip-10-164-35-72.ec2.internal.warc.gz
en
0.966223
456
3.640625
4
Editorial, January 2009

The text of the Roman Missal at the consecration of the wine refers to it as “the Mystery of Faith” (Mysterium Fidei). It is a mystery of faith because wonderful, miraculous things take place during the celebration that have eternal effects, and because, during the Mass, ultimate worship of God is given and grace is poured out on those present and on those for whom the Mass is offered.

The celebration of the Mass is the most important thing that the Church does each day. It is the most important daily activity of priests. St. John Vianney said that, if we really understood what is happening at Mass, we would die. I assume that he was referring here to the statement in the Old Testament that no one can see God and live, and God becomes present on the altar during the Mass.

The Mass is a mystery of faith because there is an essential connection between it and the sacrificial death of Jesus Christ on Calvary two thousand years ago. The Mass is not a new sacrifice of Christ, because Jesus is now glorified at the right hand of the Father in heaven. Death no longer has power over him, so he cannot suffer. The Christ present at Mass is the glorified Christ, as he now is in heaven.

Because of the words of Jesus directing his apostles to repeat what he did at the Last Supper when he said, “Do this in memory of me,” we know from divine revelation what happens at Mass, but we do not know how it takes place. So when the ordained priest says the words of consecration over the bread and wine, by the almighty power of God the bread is changed into the Body of Christ and the wine is changed into his Blood.

In his book on the Mass, Cardinal Charles Journet, a Swiss theologian, defined the Mass as “the unbloody presence of the unique bloody sacrifice of the cross.” The Mass and Calvary are the same sacrifice; only the mode of offering is different. On the cross it was bloody and brutal, while in the Mass it is unbloody and peaceful.
The Mass, therefore, is not a different sacrifice from that of Calvary; it is the same offering, only the presence is different. Journet says that "presence" here must be understood as analogous. The glorified Christ is substantially present under the appearances of bread and wine. That is why the marvelous change effected at Mass is called "transubstantiation" by the Catholic Church. When the grace of Calvary is applied to those who partake of the Mass, this is his "operative presence." Christ is present in various ways: in Scripture, in the Church, in the other sacraments, in the Eucharist, in grace in the soul.

A sacrifice requires the offering of something; if that something is living, it is called a victim. In the Mass Christ is both the victim and the priest, the same as on Calvary, but now Christ operates through the priest and offers himself to the Father in reparation for our sins. The heart of the Mass, its most important part, is the consecration, when the priest, acting in persona Christi, says in the first person, "This is my Body" and "This is my Blood."

Cardinal Journet stresses that the Mass today is not a repetition of Calvary; it is rather the making present of the unique sacrifice of Calvary on the altar in an unbloody manner.

The importance of the Mass in the life of a Catholic is obvious from the fact that all are required, under pain of mortal sin, to attend Mass on Sundays and Holy Days, whereas receiving Communion is required only once a year.

The miraculous change into the Body and Blood of Christ is called "transubstantiation." This means that the substance of the bread and wine is changed by the power of God into the substance of the Body and Blood of Christ. The Church rejects the view of Luther, who said that Christ is present along with the bread, a theory that is called "impanation."

In the Mass Christ offers himself to the Father. To gain spiritual fruit from the Mass we should interiorly offer ourselves with Christ to the Father. This is truly the "active participation" recommended by Vatican II.