Radiologic technologists perform imaging tests such as X-rays, computed tomography (CT) scans, and magnetic resonance imaging (MRI). They are also called radiographers. They work under the direction of a radiologist, who interprets the images to diagnose illness. Training programs in radiography lead to a certificate, an associate degree, or a bachelor's degree. State licensing requirements vary, and radiologic technologists may also be registered through the American Registry of Radiologic Technologists. eMedicineHealth Medical Reference from Healthwise. © 1995-2014 Healthwise, Incorporated.
The Federal Elections of 1957 and 1958

The activities in this program encourage students to think critically about messages they see in the media. Students will evaluate and critique images, ask questions, and confer with their classmates about what the images mean. A follow-up activity encourages students to create a visual identity for themselves and participate in a mock election campaign. Consult the online primary documents and website text as resources.

The Crown and Canada

This activity is designed to introduce students to the complex relationship between Canada and the British Monarchy, and to conceptualize the relationship between Commonwealth nations and the Crown. The focus of the activity is a simulated Commonwealth Heads of Government Meeting (similar to a model United Nations activity), where students must balance their own national interests with the pressures exerted by other nations as represented by their classmates. Students will also have the opportunity to learn about some of the key issues in British relations in which the Diefenbaker government was involved. Consult the online primary documents and website text as resources.

The Canadian Bill of Rights

This activity is designed to familiarize students with the evolution of human rights in Canada. Students will examine and discuss various primary documents provided by the Diefenbaker Canada Centre. These documents cover the first stages of development of the Bill of Rights, what Diefenbaker's government accomplished, and some of the negative reactions to the Bill. This activity is best done in conjunction with the human rights timeline activity, also provided by the Diefenbaker Canada Centre upon request. Consult the online primary documents and website text as resources.

Enfranchisement of Canada's Aboriginal Peoples

This activity provides students with varied viewpoints on the issue of extending the franchise to the First Nations. Students will have the opportunity to examine documents and materials related to the position of the Canadian government, the First Nations, and international organizations like the United Nations. Consult the online primary documents and website text as resources.

Appointment of Ellen Fairclough

This activity covers broad areas of learning, including Canadian history, government, and human rights. Students will have the opportunity to discuss Fairclough's plans, goals, and intentions throughout her political career. This activity also encourages students to examine the struggle for women's rights in Canada and the role of women in Canadian government today. Consult the online primary documents and website text as resources.

Canada's Role During the Cuban Missile Crisis

This activity is designed to help students gain a thorough understanding of the events that constituted the Cuban Missile Crisis. Prime Minister Diefenbaker hesitated when President Kennedy told him to raise Canada's military status to DEFCON 3, drawing a variety of responses from the public. Students will have the opportunity to examine letters addressed to the Prime Minister that express both support for and criticism of his decision. This activity also provides an ideal forum in which students can discuss current events related to the nuclear issue. Consult the online primary documents and website text as resources.

The Nuclear Question in Canada

This activity will introduce students to the workings of the Cabinet and its Ministers, as well as the historical debate over nuclear weapons in Canada. A follow-up activity encourages students to analyze Canada's changing stance on nuclear armament and develop personal responses to the issue. Consult the online primary documents and website text as resources.
The Great Depression was a terrible time for Minnesota and the rest of the nation. One of the New Deal programs intended to get people back to work was the Works Progress Administration (WPA). The WPA was one of the Roosevelt Administration's most successful projects, creating jobs in everything from road construction to food programs to literacy education. WPA programs focusing on the arts produced some of the best examples of federal support. In addition to programs that produced amazing works of visual art, the Federal Writers' Project was designed to encourage written work and support writers through the tough times. Among its most well-known products is the state guides series. Other works created by the Writers' Project focused on history, society, and the land. Some examples are on display in the Library cases. This exhibit will be on view whenever the Library is open, and is part of the Soul of a People: Writing America's Story project, organized by the Friends of the Saint Paul Public Library. For more information about other programs in this series, please go to:
From Our 2010 Archives Processed Meat Linked to Heart, Diabetes Risks Study Compares Diabetes and Heart Risks of Processed and Unprocessed Meat Reviewed By Laura J. Martin, MD But the study, published in Circulation, shows no such link for unprocessed red meat. Eating one serving a day of processed meat -- or the equivalent of a single hot dog or two slices of salami -- was associated with a 42% increased risk for heart disease and a 19% increased risk for diabetes in the study, conducted by researchers from the Harvard School of Public Health. Eating unprocessed beef, pork, or lamb was not linked to a higher risk for heart disease and diabetes. The study is the largest research review ever to attempt to tease out the health impact of eating processed vs. unprocessed red meat. The finding that all red meats are not equal when it comes to heart and metabolic disease risk has important implications for public health, says study researcher Renata Micha, PhD. But that doesn't mean it's OK to eat steak for dinner every night if you cut way back on bacon at breakfast and hot dogs or deli meats at lunch. "People should limit their consumption of processed meats," Micha says. "Eating up to one serving a week would not be associated with much risk. And this study should not be taken as license to eat unlimited amounts of unprocessed red meat." Hot Dogs and Heart Risk Micha and colleagues included 20 studies involving more than 1.2 million people in their analysis. For the purposes of the study, red meat was defined as any unprocessed beef, lamb, or pork food. Processed meat was defined as any meat preserved by smoking, curing, or salting, or any meat containing chemical preservatives such as nitrates. Even after taking into account established risk factors for heart disease and diabetes, eating processed meat was associated with an increased risk for both. 
Processed and unprocessed meats contained similar amounts of fat and cholesterol, but processed meats contained, on average, about four times more sodium and 50% more nitrate preservatives than unprocessed meats, the researchers note. Salt consumption is strongly linked to high blood pressure, which is a major risk factor for heart disease, according to the American Heart Association (AHA). "The major difference in heavily processed and less processed meat is sodium and chemical preservatives," AHA spokesman Robert Eckel, MD, tells WebMD. "We have tended to blame the saturated fat in red meat for heart disease, but this study suggests it may not be that simple." The study was funded by the Bill & Melinda Gates Foundation/World Health Organization Global Burden of Disease initiative along with the National Institutes of Health and the Searle Scholars Program. Cancer Risk Not Studied Micha says it is clear that future research on red meat and health should separate processed and unprocessed meats. The role of processed vs. unprocessed red meat in other diseases, such as cancer, also remains to be determined. Eating red meat and processed meat have been implicated in colorectal cancer, for example. But like the heart studies, most of this research has considered the two types of meat together. Eckel says more research is needed to better understand the separate impact of processed and minimally processed red meat consumption on health. He is a professor of medicine at the University of Colorado, Denver. "This study is certainly interesting, but the findings are hypothesis generating," he says. "They are not definitive." SOURCES: Micha, R. Circulation, May 17, 2010; online edition.
HISTORY OF LETTERMAN GENERAL HOSPITAL Student Nurses at Letterman The word education brings before most of us a vision of the days we spent sitting at a desk with some weary faced teacher before us strenuously trying to drill into our none-too-receptive heads the knowledge contained in many books, but in the education of a student nurse, while the information obtained from books is necessary, it plays only about one-fourth of the leading part—the eye must be educated to observe accurately, the brain educated to think quickly, and the hand educated to serve skillfully. In the Army School the student nurse has very wonderful opportunities for the broadest training—she starts with a good foundation, for she must be twenty-one years of age and High School graduate. After entering the school she receives instruction in Anatomy and Physiology, Bacteriology, Chemistry, Nutrition and Cookery, Drugs and Solutions, Materia Medica, History of Nursing, Nursing Principles and Methods, Bandaging, Pathology, Psychology, Ethics, Massage, Nursing in Medical Diseases, Surgical Diseases, Communicable Diseases, and in Diseases of Infants and Children, also special Orthopedic, Gynecological, and Obstetrical Nursing, and Operating Room Technique. These subjects are taught by the doctors in the service and they are trained specialists, many of them having been lecturers in other hospitals before entering the Army. The practical nursing is taught by the graduate nurses, and they represent nearly every training school in the country, hence the student has the great advantage of seeing varied methods. The Army School is a new institution, therefore there are no old traditions to live down; the student here has the advantage of being a builder of tradition herself. It was founded by those who have been managing training schools for years and therefore have selected the best points. 
The student is taught the necessity of coordinating her theoretical knowledge with her work on the wards, of associating symptoms with certain diseases, of noting the action of drugs and the cause or condition warranting their use. There is another branch of education in which she is thoroughly instructed, and that is her own physical education. If you will observe carefully, you will note that nearly every student has the rosy cheeks, clear eyes, and springy gait that denotes good health; her power to resist disease has been increased during her training, and though many of them will deny it, the scales will tell that nearly every girl has added several pounds to her weight. The criterion of every system of education as of every bullet is the result, and as the bullets of our soldiers finally resulted in peace, so we hope the system of education in our Army Training School will result in splendid nurses and women who will carry to the world the light from the Florence Nightingale lamp which they wear on their collars. The Duties and Opportunities of a Student The duties for the making of a good nurse are under three heads. To the visitor, the student nurse’s duty is apparently stroking the fevered brow and smoothing a pillow, but in reality her first duty is the welfare and comfort of the patient. This is obtained mainly through absolute cleanliness. A campaign against dirt is therefore instituted, and the student nurse scrubs both patient and bed to be ready for “Inspection” on Saturday morning. She should also create an atmosphere of cheer in order to speed up recovery. The doctor’s orders should be accurately and promptly carried out, and cooperation strived for. The second duty is to gain a high standard and good reputation for the school. This school is in its infancy and these nurses are pioneers. The way of the pioneer is always filled with difficulties, she has the path to clear and the firm foundation to build for those who follow. 
Therefore the obligation is doubly binding. The third duty is to herself: she must regard her health, appearance, mental development, conduct, and retention of a good disposition. In order to achieve her ideal she must be a good woman plus a good nurse. It is a great opportunity to be in one of the biggest institutions of the age for the education of nurses. She is receiving her training under specialized doctors who have come in for the war-time emergency. The capable graduates pass on their splendid methods and helpful experiences, which is another advantage. When the course is completed, Miss Goodrich, Dean of the Army School, hopes to have the students go to Washington, D. C., to receive their diplomas from the Surgeon General. Probably there they will take the Florence Nightingale pledge, which is: “I solemnly pledge myself before God and in the presence of this assemblage to pass my life in purity and to practice my profession faithfully. I will abstain from whatever is deleterious and mischievous, and will not take or knowingly administer any harmful drug. I will do all in my power to elevate the standard of my profession and will hold in confidence all personal matters committed to my keeping, and all family affairs coming to my knowledge in the practice of my calling. With loyalty will I endeavor to aid the physician in his work and devote myself to the welfare of those committed to my care.” Transcribed by Sharon Walford Yost. Source: ”The History of Letterman General Hospital, Page 30. Published by the Listening Post, Presidio, San Francisco, Cal. 1919. © 2010 Sharon Walford Yost.
Precision and accuracy are terms for the quality of measurements. They have different meanings when used for scientific measurements.

Precision: how nearly repeated measurements of the same quantity agree with each other. Example: if repeated measurements of the same quantity agree with each other, the measurements have a high degree of precision.

Accuracy: how closely a measurement or multiple measurements agree with a true value (a standard or control).

In any scientific measurement, we strive for both precision and accuracy. Precision depends on the fineness of calibration of the measuring device. Example: a stopwatch versus a wristwatch. A stopwatch measures time more precisely than a wristwatch. Why?
- The smallest difference that can be read on the stopwatch is about 0.01 seconds.
- The smallest difference that can be read on the wristwatch is 0.5 seconds.

Errors in measurements can be caused by:

The correct use of significant figures is essential in reporting any scientific measurements carried out in the lab. It is important to note that all digits obtained by measurement are significant. It is also important to know that the last digit to the right is an estimate.

Instructions for multiplying measurements:

Area = (length)(width)
Area = 20.14 in. × 9.45 in.
Area = 190.323 square inches

Question: Are we justified in reporting this six-digit number? No.
- Our measurements of length and width contained only 4 and 3 significant figures, respectively.
- Since the least precise figure in our multiplication contained only 3 significant figures, our answer must have only 3 significant figures.
- The correctly reported value of the area is 190 square inches.

Note: Zeros used only to show where a decimal point belongs are not significant.
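The three-significant-figure rule in the area example can be checked programmatically. The sketch below is illustrative only; the helper `sig_round` is not part of the lesson, just a common way to round to n significant figures using the base-10 logarithm to locate the leading digit.

```python
import math

def sig_round(x, n):
    """Round x to n significant figures (hypothetical helper, not from the lesson)."""
    if x == 0:
        return 0.0
    exp = math.floor(math.log10(abs(x)))  # decimal position of the leading digit
    return round(x, n - 1 - exp)          # shift rounding so n digits survive

# Area example: 20.14 in. (4 sig figs) x 9.45 in. (3 sig figs)
raw_area = 20.14 * 9.45        # 190.323 square inches -- too many digits
area = sig_round(raw_area, 3)  # least precise factor has only 3 sig figs
print(area)                    # 190.0
```

Note the design choice: rounding is shifted relative to the exponent of the leading digit, so the same helper works for large values (190.323 → 190) and small ones (0.004567 → 0.0046 at 2 significant figures).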
Rules to ensure that your answers always contain the correct number of significant figures: How to round off properly:

A number in scientific notation has the general form: N × 10^exponent, where N is a number between 1 and 10 and the exponent is a whole number that is the power to which the base 10 is raised.

1,000,000 = 10^6
6 = the exponent
10 = the base
1,000,000 = 1 × 10^6

This is written by moving the decimal point six (6) places to the left and using the exponent 6. 1 million = 1,000,000 = 1 × 10^6

Types of Exponents

How to switch from scientific notation to ordinary figures:

You will need a scientific calculator for calculations involving numbers in scientific notation. The following keys must be used: EE, EXP, or EEX (on the calculator).

Example: To enter the number 5.74 × 10^4
- First enter 5.74
- Press the EE, EXP, or EEX key
- Press 4
- The calculator will display: 5.74 × 10^04
- Then proceed with any calculations with this number on the calculator.

Negative numbers can be entered into the calculator by following the steps below:

Try these for practice:
Round off to 2 significant digits
Positive Scientific Notation
Negative Scientific Notation
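The decimal-point-shifting rule for scientific notation can be sketched in code. `to_scientific` below is a hypothetical helper, not part of the lesson; Python's built-in `e` format specifier performs the same mantissa/exponent conversion.

```python
import math

def to_scientific(x):
    """Split x into (mantissa, exponent) with 1 <= |mantissa| < 10."""
    exp = math.floor(math.log10(abs(x)))  # places to shift the decimal point
    return x / 10**exp, exp

m, e = to_scientific(1_000_000)
print(f"{m} x 10^{e}")   # 1.0 x 10^6

# Python's built-in exponent format does the same conversion:
print(f"{57400:e}")      # 5.740000e+04, i.e., 5.74 x 10^4
```

The built-in format is the practical route in real programs; the explicit helper just makes the "move the decimal point and count the shifts" rule visible.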
Choosing a major is one of the most important decisions a college student must make. Love for the subject matter and natural aptitude are important considerations. Of equal and more practical significance is the knowledge that satisfying professional opportunities exist for successful graduates in the field. You must also be confident that your university and departmental education will prepare you to be competitive in the job market and successful in your career. The Physics Program at UH The department offers the following undergraduate degrees: the Bachelor of Science (B.S.) in physics, the B.S. in physics with a geophysics specialization, the B.S. in computational physics (pending) and the Bachelor of Arts (B.A.) in physics. Potential Physics Majors Physicists attempt to understand natural laws using in-depth analysis of simple systems. Studying physics requires insight about those features of a problem that are most significant, and possession of the experimental, analytical, or numerical skills to solve these problems. Physicists make significant contributions in many fields. In the last century, physicists have received several Nobel prizes in chemistry, physiology or medicine, and economics. Students considering majoring in physics should have strong mathematics backgrounds. Those who have a natural curiosity for investigating how things work would enjoy physics as a major. Potential physics majors should also have an interest in other natural sciences such as chemistry. Many employers find physics majors attractive because they possess strong mathematics backgrounds and computation skills, and have experience with instrumentation and measurement. There has never been a better time to earn a degree in physics. The demand for trained physicists is strong and there is a national shortage of physicists. In addition to exciting career opportunities, highly competitive salaries and comparatively low unemployment rates reward today’s successful physics majors.
UH graduates with the B.S. in physics are prepared to enter physics graduate programs leading to the master’s or Ph.D. degrees in physics. They are also prepared for careers in many diverse areas requiring a physics degree, such as within the aerospace industry at NASA and the high-tech materials and electronics industries. Students with training in physics also find employment in fields as diverse as commodities or stock brokerage, medicine, and the energy industry. Graduates with a B.A. in physics generally are prepared to teach physics at the high school level, once they have obtained the appropriate teacher certifications. They are also prepared to enter graduate programs in business administration or law. The department welcomes inquiries from interested students. For more information about the department’s undergraduate programs, please call the undergraduate faculty advisor at (713) 743-3588 or e-mail email@example.com.
Immunotherapy through the skin might have applications beyond food allergies, with research getting underway for treatment of Crohn's disease. "It's definitely a possibility for a new form of treatment," said David Dunkin, MD, a pediatric gastroenterologist studying the patch for Crohn's disease at Mount Sinai Hospital in New York City. The research is still in its infancy -- only in mouse-model studies compared with the phase II research underway at his center using a skin patch for peanut allergy immunotherapy -- but the concept is exciting for this condition without a cure, he suggested. "If there is some way to get those regulatory T cells to the gut and proliferate there, it could block inflammation nonspecifically," he told MedPage Today. While the exact cause of the inflammatory bowel disease remains unclear, aberrant immune responses are a prime suspect. Retraining the immune system, rather than restraining it as with the immune-suppressing treatments often used in Crohn's disease, could circumvent the small but serious risks of those drugs, including infection and cancer. "The thought is to try to utilize the patient's own immune system to correct the defect," Dunkin explained. Prior trials have shown some success in easing Crohn's disease activity with IV infusions of the regulatory T cells that act as a "self-check" in the immune system to prevent over-reactions, such as in autoimmunity, and down-regulate unwanted immune reactions. Regulatory T cells can also be generated through the skin, allergy research has shown. In food allergy immunotherapy, safety has been the main appeal of the epicutaneous route, because giving small oral doses of the allergy trigger leads to adverse events in most patients. While most of those reactions are mild, 5% to 10% are significant and anaphylaxis is a risk. Approximately 12% of children in trials have required epinephrine at some point during oral food allergy immunotherapy. 
Low-dose skin patch immunotherapy had excellent safety, albeit limited efficacy, in one pilot study in peanut allergy. Mount Sinai researchers are now leading a large study of higher-dose patch immunotherapy for peanut allergy. In Crohn's disease, the bigger draw of the patch is that it has the potential to be more effective at inducing immune tolerance than oral administration, Dunkin pointed out. "Patients with Crohn's disease have a defect in forming tolerance to a protein via the oral route, and probably a defect in forming T regulatory cells through the oral route," he explained. "Going through the skin would circumvent that." "You could probably do the same thing through the lungs or through the nasal mucosa," he added, "but [the skin route] is not invasive." In Crohn's, there is no specific protein that is known as a critical target for induction of tolerance, as there is in food allergy. The patch is just aimed at broadly amping up regulatory T-cell production, currently using ovalbumin (egg white protein). "Exposure on the skin to this will generate T regulatory cells specific to this protein," Dunkin told MedPage Today. "Then subsequent oral exposure to the protein would get those T regulatory cells to migrate to the gut and proliferate. If you have enough of these T-regs, then they can suppress inflammation nonspecifically." The goal is to eventually move to using Keyhole Limpet Hemocyanin (KLH), "a protein that humans do not regularly see, thus if they were to develop an adverse reaction to it, they would never be exposed to it again and there would be no potential for issues later," he explained. Another challenge is uncertainty as to how well the mouse-model results will translate to humans, added Dunkin, who predicted a slow path to the clinic. "We're just establishing efficacy in mouse models, and it looks like it's going to help, although we don't have enough data yet," he said. 
Nevertheless, "there's a good chance, especially if we understand the mechanism and that mechanism is applicable to humans, there's a good chance it could work," he said. Dunkin disclosed no relevant relationships with industry, although the research on patch immunotherapy being done at his institution is in collaboration with DBV Technologies, which is developing the Viaskin patch.
June is Post-Traumatic Stress Disorder Awareness Month. Although PTSD has been brought to the nation’s attention by the staggering number of war veterans who return home with it, Post-Traumatic Stress Disorder doesn’t just occur in veterans. An estimated 7.7 million Americans have PTSD. One in 10 women will develop PTSD in her lifetime, and 50% of those with Post-Traumatic Stress Disorder don’t seek treatment. What is Post-Traumatic Stress Disorder? Post-Traumatic Stress Disorder (PTSD) develops after a terrifying event that involved physical harm or the threat of physical harm. PTSD can develop when something harmful happens to the individual, when harm comes to a loved one, or when the individual witnesses a harmful event. When you’re in danger, it’s natural to feel afraid. When you’re afraid, your fear triggers your “fight-or-flight” response, which is a healthy reaction to protect you from harm. In PTSD, this reaction is altered or damaged. Those with Post-Traumatic Stress Disorder may feel stressed or afraid when they’re no longer in danger. Post-Traumatic Stress Disorder was first brought to the attention of the public in relation to war veterans, but PTSD can occur from a variety of traumatic events like: - Being kidnapped/held hostage - Child abuse - Car accidents - Train wrecks - Plane crashes - Natural Disasters What are the signs and symptoms of Post-Traumatic Stress Disorder? PTSD can cause a variety of symptoms. These symptoms can affect the mind, body, and emotions. 
Symptoms of Post-Traumatic Stress Disorder can include: - Recurrent and unwanted distressing memories of the event - Reliving the traumatic event (flashbacks) - Upsetting dreams about the event - Severe emotional distress or physical reaction - Negative feeling about yourself or other people - Inability to experience positive emotions - Feeling emotionally numb - Lack of interest in activities you once enjoyed - Hopelessness about the future - Memory problems, including not remembering the traumatic event - Difficulty maintaining close relationships - Trying to avoid thinking or talking about the event - Avoiding places, activities or people that remind you of the event - Irritability, angry outbursts, or aggressive behavior - Always being on guard for danger - Overwhelming guilt or shame - Self-destructive behavior - Trouble concentrating - Trouble sleeping - Being easily startled or frightened Children and teens may react differently to Post-Traumatic Stress than adults do. In very young children, these symptoms can include: - Bedwetting, after they’ve already learned how to use the toilet - Forgetting how or being unable to talk - Acting out the scary event during playtime - Being unusually clingy with a parent or other adult When should you seek help for Post-Traumatic Stress Disorder? Not every person who has been traumatized develops PTSD. Symptoms of Post-Traumatic Stress Disorder usually occur within three months of the traumatic incident, but symptoms of PTSD can also occur years afterward. When those symptoms last for more than one month, it is usually considered PTSD. If you have disturbing thoughts and feelings about an event for more than a month, especially if those thoughts are severe, or if you feel you are having trouble getting your life under control after a traumatic event, it’s important to seek the attention of a medical professional. 
How to protect yourself if you have Post-Traumatic Stress Disorder Wearing a medical ID is an important step to protecting yourself when you have PTSD. In the event you are unsure of your surroundings or are unable to speak for yourself, your medical ID can advocate for you when first responders arrive. Attending counseling sessions is a great way to explore your feelings and thoughts about a traumatic situation in a safe environment. Working through your feelings with a medical professional can help you cope with how you feel when you have PTSD.
Near-double-digit unemployment among Latinos persists more than four years after the U.S. economic collapse, but a new report suggests that so-called "green jobs" could help reduce joblessness. According to a report from the National Council of La Raza, the nation's largest Latino-advocacy organization, Hispanic workers stand to benefit significantly from the growing clean-energy sector. "It is in the interest of the country to align the fastest growing segment of our labor market with some of the fastest growing sectors of the economy," Catherine Singley, a senior policy analyst with NCLR, said during a news call. The report identifies five metropolitan areas as diverse as Knoxville, Tenn., and Los Angeles where Latinos might benefit from the green jobs sector. Some of the areas, such as McAllen, Texas, already have sizeable Latino populations. Others, such as Little Rock, Ark., have small but growing Hispanic populations. All, according to the report, have bustling green economies. Latinos are supportive of the idea of green jobs, and clean-energy jobs typically pay more than traditional sectors that rely heavily on Latino labor, such as construction and hospitality. The majority of the green jobs in the five metropolitan areas studied require less than a bachelor's degree. Almost nine in 10 Latinos lack a college degree, according to a Brookings Institution report. About 15 percent of the country's workforce is Latino, and Hispanics are on track to make up a full third of all U.S. workers by 2050, many of whom could potentially fill clean-energy jobs. In 2010, the Los Angeles area alone had nearly 90,000 clean-energy jobs, according to the NCLR report. But there are some challenges to increasing the number of Latinos in the green jobs sector. Many Latinos aren't sure how to go about finding green jobs. And while some already have the skills needed to get into the job market, they require specific training for jobs such as solar-panel installation. 
Latinos are also disproportionately disadvantaged when it comes to adequate transportation to work, and access to social networks that can result in jobs. The report urges companies and nonprofit organizations to focus on training Latino workers for green jobs and for state and local governments to focus on adult education programs. "I think it's safe to say," Singley said, "this is more than just a passing fad."
<urn:uuid:b54f8611-08f5-46e2-bf53-9935a22a2326>
CC-MAIN-2016-26
http://seenandheard@abcnews.go.com/ABC_Univision/Politics/latinos-boost-clean-energy-sector/story?id=18420913
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396455.95/warc/CC-MAIN-20160624154956-00125-ip-10-164-35-72.ec2.internal.warc.gz
en
0.960366
487
2.9375
3
Preparing for the new normal in the wake of Sandy
Hurricane Sandy, one of the largest storms on record and packing more destructive power than Hurricane Katrina, could very well be a sign of things to come. You can call it climate change instead of global warming and argue that the effects aren't due to the hand of man, but there's no denying the impact: the planet is getting warmer, ocean levels are rising and extreme weather events are becoming more common. Coastal New York and New Jersey learned that the hard way last week. Despite a litany of warnings over the years that Lower Manhattan and the barrier islands were vulnerable to storm surge, it was business as usual until the borrowed time finally ran out. The ocean overran berms, subway tunnels flooded and electrical infrastructure once thought to be safe ended up under 5 feet of water. "Anyone who says there is not a change in weather patterns is denying reality," New York Gov. Andrew Cuomo told reporters Oct. 30 as he inspected water damage at the World Trade Center. "We have old infrastructure, we have old systems. That is not a good combination, and that is one of the lessons I will take from this personally." The vulnerability of the infrastructure hit home for New York-based SecureWatch 24 on the morning after Sandy came ashore. The company had moved its critical systems to a facility in Texas before the storm, but it still had semi-critical servers at a co-location site in downtown Manhattan. That proved to be a problem when much of the island was inundated and the power failed, said Gene Dellaglio, chief technology officer for SW24. "They have generators on the 17th floor of this building, diesel generators," Dellaglio said last week as he traced a time line of the storm. "The pumps that supply the diesel to the 17th floor are in the basement, which is now flooded. Manhattan is flooded. The pumps shut down.
By the time we get down there, people are carrying 5-gallon spackle buckets up 17 flights of stairs from a diesel tank downstairs to get the [generators] running. It’s a bucket brigade. I said we’ve got to get out of here.” Within an hour, SW24 had moved the servers and had them up and running at its new Fusion Centre in Moonachie, N.J., which also served as a command post for emergency responders and local officials displaced by Sandy. While the company was happy to help and was grateful that it had weathered the storm, Dellaglio said it was easy to see that a threshold had been crossed. “I did 12 years in the NYPD. … I saw the blackout in 2004, I saw Sept. 11 up close and personal, but I’ve never seen [an emergency] as expansive as this, with everything from the gas to the stores to the [shortage of] food,” he said. “And I think there is a lot to be learned here too in the bigger picture about critical infrastructure. How do you put pumps in the basement for diesel when the generators are on the 17th floor? They evacuated Bellevue Hospital for the same reason.” It’s something that hasn’t gotten enough attention in New York, which relies on an intricate network below ground to drive just about everything above it. But with the region facing what Cuomo calls a “new reality” of extreme weather events, it might be time to rethink the game plan.
<urn:uuid:e63114c9-c52e-4c87-92c1-dc8c9a062d04>
CC-MAIN-2016-26
http://securitysystemsnews.com/blog/preparing-new-normal-wake-sandy
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397749.89/warc/CC-MAIN-20160624154957-00117-ip-10-164-35-72.ec2.internal.warc.gz
en
0.971353
738
2.5625
3
The scientific term for the study of body structure is anatomy. The word anatomy comes from a Latin term meaning "cutting," because a fundamental way to learn about the human body is to cut it apart, or dissect it. Physiology is the term for the study of how the body functions, and is based on a Latin term meaning "nature." Anatomy and physiology are closely related; that is, form and function are intertwined. The stomach, for example, has a pouch-like shape because it stores food during digestion. The cells in the lining of the stomach are tightly packed to prevent strong digestive juices from harming underlying tissue. Anything that upsets the normal structure or working of the body is considered a disease and is studied as the science of pathology.
Levels of Organization
All living things are organized from very simple levels to more complex levels (Fig. 1). Living matter is derived from simple chemicals. These chemicals are formed into the complex substances that make up living cells, the basic units of all life. Specialized groups of cells form tissues, and tissues may function together as organs. Organs working together for the same general purpose make up the body systems. All of the systems work together to maintain the body as a whole organism.
Figure 1 Levels of organization. The organ shown is the stomach, which is part of the digestive system.
<urn:uuid:ac05f623-d512-4862-859a-1bcf47e0aaa5>
CC-MAIN-2016-26
http://encyclopedia.lubopitko-bg.com/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397213.30/warc/CC-MAIN-20160624154957-00004-ip-10-164-35-72.ec2.internal.warc.gz
en
0.956656
273
3.984375
4
The parish church of Saint Andrew stands on the seafront and has always been a focal point of the town. The original altar piece comprised a tableau of cherubs, flanked by two angels mounted on pedestals. The altar piece was later moved to Westminster Abbey, where it was placed behind the High Altar. It remained there until 1820, when the Bishop of Rochester, who was also the vicar of Burnham, acquired it and used fragments to decorate the Chancel of Saint Andrews. The sculptures are now dispersed over various parts of the interior of the building, including the nave windows and behind the altar. Near the church stands the house named Tregunter. This house stands on the site of an old farmhouse, which was owned by the Roper family. The sons of farmer Roper fought at the Battle of Sedgemoor, and were deported to America by Judge Jeffries. The house was bought by John Gunter, a chef to King George the Third, in 1760, and he lived there for 60 years. The house that stands now was rebuilt in 1826, and the original cellars of the house were reported to be connected by secret passages to the Church and the Old Vicarage for smuggling purposes, but if indeed this was so, they have long been blocked by sand.
<urn:uuid:8afa051f-cf27-424c-8eff-8349b5248ea4>
CC-MAIN-2016-26
http://www.burnham-on-sea.com/standrews.shtml
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395620.9/warc/CC-MAIN-20160624154955-00068-ip-10-164-35-72.ec2.internal.warc.gz
en
0.977746
282
2.6875
3
August 23, 2012
What To Do If A Fire Breaks Out On The AVE
Researchers at the University of Cantabria have used computer models to analyze the best way to evacuate the Spanish High Speed Train, AVE, in the case of fire. The involvement of the crew in organizing the fast transfer of passengers, completing the process before the train comes to a halt and collective collaboration to assist those with reduced mobility are just some of the strategies to be followed. "In the event of fire on an AVE, two stages should be defined: the first is pre-evacuation in which passengers are transferred from one coach to another while the train is still in motion and the second involves evacuation once the train has stopped," as explained to SINC by Daniel Alvear, member of GIDAI Fire Safety Group of the University of Cantabria. The team of researchers have analyzed the best evacuation strategies on a high speed train, which has more inertia and fewer stops than others, using modelling tools and computer simulation. As Alvear says, "in this way it is possible to overcome drill exercise limitations which are costly, unrealistic and have a limited number of possible scenarios." Even so, input data supplied to the computer come from a real drill carried out in 2009 by 218 people inside the Guadarrama tunnel between Madrid and Segovia. The study enjoyed the participation of the Spanish Railway Network, RENFE, and has been published in the 'Fire Safety Journal'. The results show that the pre-evacuation stage is "crucial" and the best strategy is to gather all passengers together before the train comes to a halt. Using a type of software, it was possible to determine the optimum and maximum number of coaches that can be evacuated from this type of AVE along with the time required to do so. Two key aspects were identified in this process. One is to keep those at the front of the evacuation line from impeding the movement of those behind. 
The solution involves one member of the crew hurrying up those at the front while another tells the others not to stop for their belongings. As a result, the aisle is not obstructed. The second aspect refers to the need to evacuate those with reduced mobility. This is complex due to the AVE's narrow aisles that make the passage of wheelchairs difficult and the fact that the number of crew members is limited. Therefore, the involvement of all passengers is recommended to assist disabled people.
A question of minutes
Once the train has come to a halt, evacuation of the passengers can begin, taking into account the number of exits. If evacuation takes place onto the platform of the nearest station, exit availability will depend on the pre-evacuation strategy used. For example, if the train is estimated to stop in less than 10 minutes, the results indicate that the coach on fire should ideally be evacuated along with both adjacent coaches either side. When stopping time is calculated at more than 10 minutes, it is recommended that the maximum possible number of coaches is evacuated during pre-evacuation. This increases evacuation time by 27% compared to the first method, but passengers leave the train in safer conditions. The driver and the railway traffic control center are the ones who estimate how long it will take for the AVE to reach the nearest safe area, like a platform. The international standards stipulate that the train should arrive at a safe area some 15 minutes after the fire is detected. In many cases this is difficult and the train has to quickly stop in the middle of the line to avoid further damage. The researchers have also considered the possibility of analyzing what would happen if passengers had to alight directly onto the track ballast (the layer of gravel on which the track lies) using the emergency stairs. The data show evacuation should be controlled by stages, giving priority to those individuals closest to the fire. 
This allows for quick, congestion-free movement for those closest to the fire and minimizes their exposure to the harmful effects of the flames.
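The 10-minute decision rule reported above lends itself to a short sketch. This is our own illustration, not the researchers' software: the threshold and the two strategies come from the article, while the coach numbering, the function name, and the reading of "both adjacent coaches" as one coach on each side are assumptions.

```python
# Sketch of the pre-evacuation rule described in the article. Coaches are
# numbered 0..n_coaches-1; the 10-minute threshold is the one reported by
# the study, everything else here is an illustrative assumption.

def coaches_to_pre_evacuate(n_coaches, fire_coach, minutes_to_stop):
    """Return the list of coaches to clear before the train halts."""
    if minutes_to_stop < 10:
        # Short stop ahead: clear only the burning coach and its neighbours.
        first = max(0, fire_coach - 1)
        last = min(n_coaches - 1, fire_coach + 1)
        return list(range(first, last + 1))
    # Longer stopping time: pre-evacuate the maximum number of coaches,
    # which costs roughly 27% more time but leaves passengers in safer
    # conditions once the train stops.
    return list(range(n_coaches))

print(coaches_to_pre_evacuate(8, 3, minutes_to_stop=6))   # [2, 3, 4]
print(coaches_to_pre_evacuate(8, 3, minutes_to_stop=15))  # [0, 1, 2, 3, 4, 5, 6, 7]
```

Per the article, the stopping-time estimate that selects between the two branches would come from the driver and the railway traffic control center.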
<urn:uuid:7212ad54-40e4-4829-8dd2-82465e367737>
CC-MAIN-2016-26
http://www.redorbit.com/news/science/1112680318/what-to-do-if-a-fire-breaks-out-on-the-ave/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393332.57/warc/CC-MAIN-20160624154953-00111-ip-10-164-35-72.ec2.internal.warc.gz
en
0.953639
825
2.96875
3
Two months ago, when the Centers for Disease Control and Prevention released revised estimates of foodborne illness in the United States, plenty of people tried to put a positive spin on the long-awaited analysis. Even though CDC experts cautioned against comparing the old estimates with the new to discern a trend, some saw the new estimate of 48 million episodes of foodborne illnesses per year, in contrast with the 1999 estimate of 76 million, as a dramatic drop — a sign the food supply was becoming safer. Not so fast, say the publishers of the New England Journal of Medicine. In a perspective published Wednesday in the Journal, Michael Osterholm, director of the University of Minnesota Center for Infectious Disease Research and Policy and a preeminent authority on food safety, channels the ghost of Paul Harvey with “Foodborne Disease in 2011 – The Rest of the Story.” Food safety improvements made in the late 1990s are still having a positive effect, Osterholm writes, “but we’ve made little additional progress in the past decade.” “Although the media and some food producers, processors, wholesalers, and retailers may conclude that the recent CDC estimates offer evidence of major improvements in food safety since 1999, data from active population-based surveillance offer a more nuanced and neutral picture.” Osterholm says data collected by the Foodborne Disease Active Surveillance Network (FoodNet)– the 10 states that track lab-confirmed infections–are a better “measuring stick of the incidence of foodborne disease across geographic areas and over time” than the CDC estimates. Based on the FoodNet data, Osterholm says, “even with improvements made during the past decade, the burden of foodborne disease persists” and only the infection rates from Shigella and E. coli O157:H7 have declined significantly. Additionally, the increase in disease caused by non-O157 toxic E. 
coli suggests "that surveillance for O157 is no longer sufficient to determine the effect" of foodborne E. coli infections, he notes. The same edition of the Journal also discusses previously unrecognized sources of foodborne disease that have caused nationwide outbreaks – such as contaminated jalapeno peppers. That outbreak, with 1,500 illnesses and two fatalities, was initially thought to be caused by tomatoes. Outbreaks caused by contaminated fresh produce are difficult to track to their source, Osterholm points out, because like the peppers, produce from a single farm can be distributed widely and yet the food is quickly gone–consumed because it is perishable. But will the new FDA Food Safety Modernization Act make the food supply safer? Osterholm calls the FDA's expanded authority and other provisions "long overdue," but adds that without adequate funding "requiring the FDA to carry out the law's required activities will be like trying to get blood out of a rock." Unless Congress appropriates enough money to implement the new law, Osterholm predicts that "in the end, food safety in the United States cannot be expected to improve in more than an incremental manner."
© Food Safety News
<urn:uuid:0fe3409f-e8c3-4cea-a1f6-f09b5594c3c7>
CC-MAIN-2016-26
http://www.foodsafetynews.com/2011/02/is-food-really-safer-the-rest-of-the-story/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399428.8/warc/CC-MAIN-20160624154959-00084-ip-10-164-35-72.ec2.internal.warc.gz
en
0.924401
658
3.046875
3
January 2009, Vol. 21, No. 1
Effective Emergency Communications in Natural and Human-Generated Disasters
Thomas D. Rockaway, David M. Simpson, and Joshua Rivard
Recent events, such as Hurricane Katrina, the World Trade Center attack, and the 2003 power outage in the U.S. Northeast, demonstrate the need for improved systems of communicating emergency information to the public and other agencies. During these events, it was evident that many of the communication messages were poorly constructed, confusing, or erroneous. To “prevent ineffective, fear-driven, and potentially damaging public responses to a serious crisis” — a goal established in Communicating in a Crisis: Risk Communication Guidelines for Public Officials, a primer published by the U.S. Department of Health and Human Services — it is essential to have sound and thoughtful communications. To help facilitate effective communication, the Water Environment Research Foundation (WERF; Alexandria, Va.) funded a project (No. 03-CTS-5SCO) to create a toolkit to help water and wastewater utilities communicate more effectively with the public and other agencies during a crisis. This toolkit includes a Communications Preparedness Guide to assist with emergency communications planning; decision-making guides and researched sample messages; and a software tool to help organize and maintain emergency communication information. This software tool, the Emergency Communications Information Management System (eCIMs), provides utility professionals with an unprecedented ability to easily build, manage, and share customized emergency communications plans. The eCIMs tool is designed to be expandable to include any anticipated communication event and can be customized to accommodate the needs of large and small utilities in both the water and wastewater sectors.
Project Objectives and Focus
Prior to a crisis event, many water and wastewater utilities devote much time and effort to developing emergency response plans. 
While these documents typically contain sections on communications, many simply list contact information. Therefore, many emergency messages are conceived and created in the chaotic and hectic atmosphere associated with an emergency. Messages created under these conditions are much less likely to provide information in a clear and understandable format. Because providing reliable and concise emergency communications to the public is critically important, these messages should be well-considered, designed in advance, and organized. Emergency management is a broad issue that encompasses not only public communications but policies, internal practices, and the protection of critical infrastructure. This project, however, focused only on the specific subset issue of emergency communications. The project addressed the need for water and wastewater utilities to prepare for adverse events and communicate effectively in emergency situations. To accomplish these goals, the project was organized according to the following objectives: - Determine the best emergency communications practices through literature reviews, national surveys, and case study interviews. The project team focused on optimal processes and systems for situational analysis, message creation, and information dissemination. - Evaluate emergency communication messages. - Develop a Communications Preparedness Guide consisting of a step-by-step process for utilities to use in developing their emergency communications plans. - Create an eCIMs software toolkit that combines the Communications Preparedness Guide, decision-making guides, and researched sample messages for three likely events and a software tool to help organize and maintain emergency communication information. 
Communications Preparedness Guide and eCIMs
Researchers used the results of the literature review, national survey, case study interviews, focus groups, and message evaluation process to create the Communications Preparedness Guide and the eCIMs software. Together, these tools assist the utility professional in preparing to communicate effectively under crisis conditions. For a water resource utility to manage emergency events effectively, communication structure and assignments must be in place before a crisis strikes. The guide, which is based on best management practices, helps users build, manage, and share customizable emergency communication plans. The researchers developed the guide based on the communication sections of selected emergency operating plans, surveys, case study results, and a literature review. The guide details strategies utility companies can use to manage emergency communications with the public effectively and includes relevant examples. The eCIMs toolkit software provides water and wastewater utilities with a guide to assist with the effective management of emergency communications and related information. The purpose of eCIMs is to supplement existing emergency operation plans and provide a convenient method for storing and managing emergency communications information. For maximum utility, a downloadable print version of a utility’s collected information is made available upon completion of the eCIMs toolkit. The print version is useful to companies whose power has failed or in instances where more advanced technology is not available or accessible to manage the communications information.
Use of the Planning Tools
The intent of the Communications Preparedness Guide and eCIMs software is to provide a toolkit that assists utility personnel with managing their emergency communication messages. 
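The storage-and-export idea described here (messages drafted in advance, keyed by event type, with a plain-text fallback for when power or networks fail) can be illustrated with a short sketch. This is not the actual eCIMs software or its API; the event names, template wording, and function names below are invented for illustration only.

```python
# Illustrative sketch of the pattern the article describes: compose
# emergency messages in advance, key them by event type, and keep a
# printable plain-text export for when advanced technology is unavailable.
# All event names and template wording are invented placeholders, not
# content from the eCIMs toolkit.

EVENT_TEMPLATES = {
    "water_main_break": "A water main break is affecting service in {area}.",
    "boil_water_advisory": "A boil-water advisory is in effect for {area}.",
    "sewer_overflow": "Avoid contact with standing water in {area}.",
}

def draft_message(event, **details):
    """Fill a pre-approved template instead of writing from scratch mid-crisis."""
    return EVENT_TEMPLATES[event].format(**details)

def export_plain_text(path):
    """Write every template to a printable file, mirroring the print fallback."""
    with open(path, "w") as f:
        for event, text in sorted(EVENT_TEMPLATES.items()):
            f.write(f"[{event}] {text}\n")

print(draft_message("water_main_break", area="the Elm Street district"))
# A water main break is affecting service in the Elm Street district.
```

Keeping the templates in one place is what makes both goals possible: messages are well-considered in advance, and the whole set can be exported for use when power has failed.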
Once completed by the utility, the Communications Preparedness Guide can be a supplement to an existing emergency operation plan and provide an effective way to anticipate, store, and maintain pertinent information. The eCIMs software can be loaded onto a personal computer for customizing. It is hoped that many utility professionals will use these two tools and develop a common foundation for communications and streamline the manner in which they share their experiences. The report detailing the guide and software toolkit is available through the WERF Web site at www.werf.org. The report and tools are free to WERF subscribers and priced at $175 for nonsubscribers. Thomas D. Rockaway is an assistant professor and director of the Center for Infrastructure Research, David M. Simpson is the director of the Center for Hazards Research and Policy Development, and Joshua Rivard is a research coordinator at the Center for Infrastructure Research at the University of Louisville (Ky.). ©2009 Water Environment Federation. All rights reserved.
<urn:uuid:2924765c-98d6-4a35-850e-ec8d58e3fa8d>
CC-MAIN-2016-26
http://www.wef.org/publications/page_wet.aspx?id=2410&page=ca&section=Safety%20Corner
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395679.92/warc/CC-MAIN-20160624154955-00117-ip-10-164-35-72.ec2.internal.warc.gz
en
0.912424
1,189
2.703125
3
Dror Bar-Natan: Classes: 2002-03: Math 157 - Analysis I (68)
Solve the following 5 problems. Each is worth 20 points, although they may have unequal difficulty. Write your answers in the space below the problems and on the front sides of the extra pages; use the back of the pages for scratch paper. Only work appearing on the front side of pages will be graded. Write your name and student number on each page. If you need more paper please ask the tutors. You have an hour and 50 minutes. Allowed Material: Any calculating device that is not capable of displaying text.
Problem 3. Sketch, to the best of your understanding, the graph of the function
Problem 4. Write the definition of and give examples to show that the following definitions of do not agree with the standard one:
Problem 5. Suppose that is continuous at 0 and and that for all . Show that is continuous at 0.
The generation of this document was assisted by LaTeX2HTML.
<urn:uuid:cd485237-8eab-4d9a-8cf8-cca1f1cf7b71>
CC-MAIN-2016-26
http://www.math.toronto.edu/drorbn/classes/0203/157AnalysisI/TermExam1/Exam.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397795.31/warc/CC-MAIN-20160624154957-00111-ip-10-164-35-72.ec2.internal.warc.gz
en
0.916745
229
3.25
3
Turkish Children’s Games. Even today Turkey is predominantly rural. Though the intrusion of both indoor and outdoor foreign games such as cards and football has taken away much of the popularity that traditional games and sports once enjoyed, and other games have been lost as a result of the weakening of local cultures, throughout Turkish villages one can still see both adults and children amusing themselves in their leisure with identical games, and these traditional games vary greatly from village to village. Since peasants and children are among the most obstinate conservators of traditional usage, the study of their legends, anecdotal material, certain fragmentary meanings and actions, and the rich game vocabulary still extant can help to reveal the connection of many games with primitive forms of ritual and their original functions. One of the most important aspects of folk games is vocal or verbal expression. Another important dimension to be considered is the occasion and functions of the games.
<urn:uuid:67fa9469-894e-49d1-9288-764e80382d27>
CC-MAIN-2016-26
http://www.pearltrees.com/siya.traore/misc/id3417222
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397213.30/warc/CC-MAIN-20160624154957-00032-ip-10-164-35-72.ec2.internal.warc.gz
en
0.957623
232
3.40625
3
Can US forestry mop up all US CO2 emissions?
larryc at teleport.com
Thu Dec 3 02:35:06 EST 1998
In article <7417jv$mt4$1 at nnrp1.dejanews.com>, dwheeler at teleport.com
> You are using as a model US forests which have been around for a while. But,
> as research done at OSU has shown, higher CO2 concentrations actually
> encourage trees to grow 2-3 times their normal rate, thus showing that trees
> _can_ remove considerably more CO2 than previously believed.
> Thus rapid-growing trees such as Red alder (Alnus rubra), hybrid cottonwoods,
> and the new hybrid willows can grow up to 13 feet per year for at least 15
> years. This produces trees which sequester CO2 _much_ more rapidly than
That is wonderful, but when the world production of fossil fuels releases more carbon than is contained in the active biosphere of the earth every decade, a few faster growing trees isn't going to do much. The only way to stop the inexorable buildup of atmospheric CO2 is to quit burning
> Also, remember that fossil coal deposits probably came from extensive
> forests, based on the numerous plant fossils found therein. I suspect, but
> cannot prove, that in years long gone, the earth had considerably higher CO2
> concentrations. This would make forest fires unable to function, since they
> must have at least 18-20% atmospheric oxygen to burn. Increased atmospheric
> CO2 would effectively decrease the oxygen percentage, thus stimulating forests
> world-wide while increasing the world-wide temperature.
> What is known for a fact, is that during the Permian and Pennsylvanian huge
> quantities of coal beds were established, probably on any land then above
> sea-level. Such forests as established the coal beds could not exist in the
> presence of wide-spread forest fires.
Most coal beds are the remains of peat beds. Sometimes the peat beds supported forests above them, sometimes not. 
The important thing is the presence of an acid, wet anaerobic zone where plant matter can sink and be preserved. I understand that even in the UK where peat beds were common, drainage projects have led to the rapid reduction of peat layers. Volcanoes have been steadily ejecting carbon dioxide and methane for hundreds of millions of years, which is where many of our carbon and carbonate deposits got their raw materials. It's doubtful that all that carbon was ever in the biosphere at once. And in the Carboniferous, the oxygen content of the earth's atmosphere was quite a bit higher than it is currently. We know this because it supported huge insects that would choke to death in our own atmosphere. Insects breathe through gas diffusion and spiracles, which is a very inefficient means of respiration. A 20 cm dragon fly just could not
More information about the Ag-forst
<urn:uuid:60c3cdb4-7af3-4b83-8a82-a5ddd5f36ff0>
CC-MAIN-2016-26
http://www.bio.net/bionet/mm/ag-forst/1998-December/012204.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783394605.61/warc/CC-MAIN-20160624154954-00117-ip-10-164-35-72.ec2.internal.warc.gz
en
0.947965
650
3.234375
3
'Trust me, I'm an android'
Here's a truly bad omen if the robot uprising ever does occur: a study has found that people will follow robots in a simulated emergency - despite knowing that the machine is taking them the wrong way. People asked to find their way out of a burning building overwhelmingly elected to follow an "emergency guide robot" along a previously unknown route instead of taking the exit they had entered through. This suggests that people will place an inordinate amount of trust in machines, irrespective of what common sense might indicate and despite the risk of a robot malfunctioning. Researchers at the Georgia Institute of Technology, in Atlanta, say their study shows that people see robots as authority figures that can be trusted in an emergency. Participants in the study were first led by the brightly coloured robot, controlled by a hidden researcher, into a room, often with an unusual detour or breakdown along the way. When the building was filled with artificial smoke and fire alarms were set off, they would open the door to find the robot in the hall. Despite the robot's previous erratic behaviour, the vast majority followed the robot down a hallway to the back of the building instead of taking clearly marked exits. In some cases the participants would follow the robot down a darkened hall blocked with furniture, even attempting to squeeze past the obstacle. "We absolutely didn't expect this," said Paul Robinette, who led the study. The pressure of the situation - which participants were led to believe was a real emergency - might have made people more likely to trust the robot, the researchers suggested. Other studies, in low-pressure situations, have shown that people are more likely to be sceptical about robots that have previously been shown to be erratic. 
"People seem to believe that these robotic systems know more about the world than they really do, and that they would never make mistakes or have any kind of fault," said Alan Wagner, a senior research engineer at Georgia Tech.
<urn:uuid:eb5f1f45-7afb-44b1-a1af-aa5f2552bc03>
CC-MAIN-2016-26
http://www.timeslive.co.za/thetimes/2016/03/03/Trust-me-Im-an-android
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397744.64/warc/CC-MAIN-20160624154957-00078-ip-10-164-35-72.ec2.internal.warc.gz
en
0.976939
406
2.640625
3
What is the Latitude and Longitude of Bahrain?
Latitude of Bahrain is: 26 and Longitude of Bahrain is: 50.55
Located on the shores of the Persian Gulf, Bahrain is a cluster of 33 islands, of which Bahrain Island is the largest. Spread over an area of 750 square kilometers and home to some beautiful desert landscapes, more than 90 percent of Bahrain is covered with deserts, which is a reason for the occasional dust storms that inhabitants experience. An estimated population of 1,234,600 people resides in the country. Bahrain is one of the fastest growing economies of the world. However, unemployment among the youth is an area of major concern. Bahrain's major exports are oil and pearls, which are produced in large quantities in this country. Though it predominantly receives tourists from the Arab world, people from other parts of the world are also showing interest in Bahrain due to its rich cultural heritage as well as its modernization. The architecture in Bahrain has a lot of historical significance. Whilst the average temperature in summer hovers around 35°C, it can shoot up to more than 45°C in the months of June and July. Winters, however, are a lot more pleasant, with temperatures around 15°C. In winter, Bahrain receives most of its rainfall. If you are looking for a date with history, Bahrain might just be the right place for you.
View Latitude and Longitude of Bahrain in other units.
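The "other units" mentioned above are typically degrees, minutes, and seconds. As a quick illustration (our own sketch, not part of the original page), the decimal coordinates given for Bahrain convert like this:

```python
# Convert the decimal-degree coordinates quoted above (latitude 26,
# longitude 50.55) into degrees/minutes/seconds. The figures come from
# the text; the helper itself is generic for non-negative coordinates.

def to_dms(decimal_degrees):
    """Return (degrees, minutes, seconds) for a non-negative decimal-degree value."""
    total_seconds = round(decimal_degrees * 3600, 2)  # round away float drift
    degrees = int(total_seconds // 3600)
    minutes = int((total_seconds % 3600) // 60)
    seconds = round(total_seconds % 60, 2)
    return degrees, minutes, seconds

print(to_dms(26.0))   # (26, 0, 0.0)  -> 26 deg 0' 0" N
print(to_dms(50.55))  # (50, 33, 0.0) -> 50 deg 33' 0" E
```

Rounding the total seconds first avoids the classic floating-point pitfall where 50.55 decomposes into 50 degrees, 32 minutes, 60 seconds.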
Sendhil Mullainathan, in a recent article in the New York Times, 'Get Some Sleep, and Wake Up the G.D.P.,' espouses the importance of sleep and its large effect on a person's economic output. While most studies emphasize the influence of inadequate sleep on a person's physical and medical state, they leave out the "biggest potential impact," which is on the G.D.P. "Most of today's workers rely on their mental and social skills," Mullainathan writes. "And if those workers don't get enough sleep, their lethargy, crankiness and poor decision-making will hurt the economy in assorted and significant ways." Mullainathan goes on to link an individual's sleep deprivation and unproductiveness with the broader economy, asking: if national productivity has fallen, why can't better sleep be the answer? According to The Daily Beast's David K. Randall in an August 2012 article, sleep deprivation makes people feel and act as if they were drunk, thereby hindering their ability to be productive. "Sleep also looks to bolster the brain's ability to handle taxing mental loads," Randall writes. Randall also cites a study at the City University of New York that confirms this. "The amount of time each subject slept determined how well he or she performed on the task," the report states. "Subjects who were able to reach deeper stages of sleep demonstrated a better command of flexible thinking, a vital cognitive skill that allows us to apply old facts and information to new situations." A study in Occupational and Environmental Medicine published in 2000 also shows that sleep deprivation impairs cognitive performance as much as or more than a blood alcohol level of 0.05 percent. The study shows that the subjects who experienced sleep loss demonstrated slower reaction times and weaker memories. Similarly, Business Insider's Tony Schwartz says in a March 2013 article that sleep deprivation harms productivity more than malnourishment.
Citing a Harvard Business Review study, Schwartz notes that sleep is the first thing most people sacrifice, but that this choice severely impairs "health, mood, our cognitive capacity and productivity." "Many of the effects we suffer are invisible," writes Schwartz. "Insufficient sleep, for example, deeply impairs our ability to consolidate and stabilize learning that occurs during the waking day. In other words, it wreaks havoc on our memory." It is fairly easy to see that in order to live productive lives - lives that support not only one's personal finances but also the broader economy - sleep isn't just important, it's vital.
Erik Raymond is experienced in national and international politics. He relocated from the Middle East, where he was working on his second novel. He produces content for DeseretNews.com.
Quadruple DNA Helix Seen in Human Cells, University of Cambridge Study 1/22/2013 7:17:00 AM In 1953, Cambridge researchers Watson and Crick published a paper describing the interweaving 'double helix' DNA structure -- the chemical code for all life. Now, in the year of that scientific landmark's 60th anniversary, Cambridge researchers have published a paper proving that four-stranded 'quadruple helix' DNA structures -- known as G-quadruplexes -- also exist within the human genome. They form in regions of DNA that are rich in the building block guanine, usually abbreviated to 'G'.
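Guanine-rich regions like those described can be flagged computationally. A commonly used heuristic motif for putative quadruplex-forming sequences is four runs of three or more G's ("G-tracts") separated by short loops of 1-7 bases. The regex sketch below illustrates that heuristic only; it is not the detection method used in the Cambridge study:

```python
import re

# Heuristic motif: four G-tracts of >= 3 guanines, separated by loops
# of 1-7 arbitrary DNA bases. Illustrative only, not the study's method.
G4_MOTIF = re.compile(r"G{3,}[ACGT]{1,7}G{3,}[ACGT]{1,7}G{3,}[ACGT]{1,7}G{3,}")

def find_g4_candidates(dna):
    """Return (start_index, matched_sequence) pairs for candidate G4 motifs."""
    return [(m.start(), m.group()) for m in G4_MOTIF.finditer(dna.upper())]

# A guanine-rich fragment resembling a human telomeric repeat:
seq = "ttaGGGttaGGGttaGGGttaGGG"
print(find_g4_candidates(seq))  # -> [(3, 'GGGTTAGGGTTAGGGTTAGGG')]
```

Scanning a genome with patterns like this yields candidate regions only; whether a candidate actually folds into a quadruplex must be confirmed experimentally.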
Open source culture (OSC) is a term that derives from open source software and the open source movement. Open source software is software with its source code made freely available; end-users have various degrees of rights to modify and redistribute the software, as well as the right to use the software for commercial purposes. "Open source" as applied to culture defines a culture in which fixations are made generally available. Participants in such a culture are able to modify those products and redistribute them back into the community.
Open source culture and intellectual property law
OSC is a culture that supports and promotes the sharing of culture. As more domains of contemporary life are affected by technologies of cultural reproduction, the possible domain of OSC expands. For some, OSC describes a balance between freedom and exclusive rights (and the surrounding ethos of proprietary culture). They maintain that the experience of culture is influenced by exclusive rights as implemented in copyright law. Artists, programmers, and other authors are understood as having limited ownership over their creations. Current laws are instrumental in maintaining a creator's economic and moral rights for a limited time, while allowing for exceptions in certain cases pertaining to "fair use". Legally, copyright attaches to an expression as a "fixation", and licensing becomes the legal way of using copyrighted works.
Open source culture versus free culture
The idea of an "open source" culture runs parallel to "Free Culture," but is substantively different. Free Culture is a term derived from the free software movement, and in contrast to that vision of culture, proponents of OSC maintain that some intellectual property law needs to exist to protect cultural producers. Yet they propose a more nuanced position than corporations have traditionally sought.
Instead of seeing intellectual property law as an expression of instrumental rules intended to uphold either natural rights or desirable outcomes, an argument for OSC takes into account diverse goods (as in "the Good life") and ends.
Open source culture and technology
One way of achieving the goal of making the fixations of cultural work generally available is to make maximal use of technology and digital media. As predicted by Moore's law, the cost of digital media and storage plummeted in the late 20th century. Consequently, the marginal cost of digitally duplicating anything capable of being transmitted via digital media dropped to near zero. Combined with an explosive growth in personal computer and technology ownership, the result is an increase in the general population's access to digital media. This phenomenon facilitated growth in open source culture because it allowed for rapid and inexpensive duplication and distribution of culture. Where access to the majority of culture produced prior to the advent of digital media was limited by other constraints of proprietary and potentially "open" mediums, digital media is the latest technology with the potential to increase access to cultural products. Artists and users who choose to distribute their work digitally face none of the physical limitations that traditional cultural producers have typically faced. Accordingly, the audience of an open source culture faces little physical cost in acquiring digital media.
Open source culture and networking
Essentially born out of a desire for increased general access to digital media, the internet is open source culture's most valuable asset. It is questionable whether the goals of an open source culture could be achieved without it. The global network not only fosters an environment where culture can be generally accessible, but also allows for easy and inexpensive redistribution of culture back into various communities.
Some reasons for this are as follows. First, the internet allows even greater access to inexpensive digital media and storage. Instead of users being limited to their own facilities and resources, they are granted access to a vast network of facilities and resources, some for free. Sites such as Archive.org offer up free web space for anyone willing to license their work under the Creative Commons license. The resulting cultural product is then available to download for free (generally accessible) to anyone with an internet connection. Second, users are granted unprecedented access to each other. Older analog technologies such as the telephone or television have limitations on the kind of interaction users can have. In the case of television there is little, if any interaction between users participating on the network. And in the case of the telephone, users rarely interact with any more than a couple of their known peers. On the internet, however, users have the potential to access and meet millions of their peers. This aspect of the internet facilitates the modification of culture as users are able to collaborate and communicate with each other across international and cultural boundaries. The speed in which digital media travels on the internet in turn facilitates the redistribution of culture. Through various technologies such as peer-to-peer networks and blogs, cultural producers can take advantage of vast social networks in order to distribute their products. As opposed to traditional media distribution, redistributing digital media on the internet can be virtually costless. Technologies such as BitTorrent and Gnutella take advantage of various characteristics of the internet protocol (TCP/IP) in an attempt to totally decentralize file distribution. For more on these technologies, see "Examples: Communication and Personal Expression" below. 
There has not been extensive economic study of open source models of works such as books, photographs, paintings, sculpture, music, technical drawings, computer software, movies, and maps. However, papers do exist, ranging from general approaches to open source to more specific ones. Some analysis has been done specifically on copyright and appropriation art. The basic economic approach is first to understand why a certain product might be considered suitable for open source, and then to explore possible economic impacts. Additionally, economic analysis serves to understand what exactly makes a possible open source work different from any other work. According to GNU, it is important to discern between open source goods and other goods such as free software. Free software allows consumers to receive the software either via payment or for free. The consumer may then alter the software and resell it if he or she so desires. There are no stipulations of free price in the free software model; the only thing free is the freedom to do with the software what you wish. On the other hand, according to GNU, open source implies a free price. However, there does appear to be some confusion with the open source label, and often it does not imply free price. Most economists would agree that open source candidates have a public goods aspect to them. In general, this suggests that the original work involves a great deal of time, money, and effort. However, the cost of reproducing the work is very low, so that additional users may be added at zero or near-zero cost - this is referred to as the marginal cost of a product. At this point, it is necessary to consider copyright. The idea of copyright for works of authorship is to protect the incentive of making these original works.
Copyright restriction then creates access costs for consumers who value the original more than the cost of making an additional copy, but less than its price. Thus, they pay an access cost equal to this difference. Access costs also pose problems for authors who wish to create something based on another work yet are not willing to pay the copyright holder for the rights to the copyrighted work. The second type of cost incurred under a copyright system is the cost of administering and enforcing the copyright. The idea of open source is then to eliminate the access costs of the consumer and the creator by reducing the restrictions of copyright. This leads to the creation of additional works, which build upon previous work and add to greater social benefit. Additionally, some proponents argue that open source relieves society of the administration and enforcement costs of copyright. Organizations such as Creative Commons have websites where individuals can file for alternative "licenses", or levels of restriction, for their works. These self-made protections free the general society of the costs of policing copyright infringement. Thus, on several fronts, there is an efficiency argument to be made on behalf of open source goods. Others argue that society loses through open source goods. Because there is a loss in monetary incentive for the creation of new goods, some argue that new products will not be created. This argument seems to apply particularly to business models where extensive research and development is done, e.g. pharmaceuticals. However, others argue that visual art and other works of authorship should be free. These proponents of extensive open source ideals argue that there should be no monetary incentive for artists.
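The access-cost idea can be made concrete with a small numerical sketch. All numbers below are hypothetical, chosen only to illustrate the definition: a consumer who values a work above the marginal cost of a copy but below the copyrighted price is priced out, and the foregone surplus is the access cost:

```python
# Toy model of access costs under copyright (hypothetical numbers).
marginal_cost = 1.0                   # near-zero cost of a digital copy
price = 10.0                          # price under copyright
valuations = [0.5, 3.0, 7.0, 12.0]    # hypothetical consumer valuations

# Consumers who value the work above cost but below price are priced out.
priced_out = [v for v in valuations if marginal_cost < v < price]

# Their foregone surplus (valuation minus marginal cost) is the access cost.
access_cost = sum(v - marginal_cost for v in priced_out)

print(priced_out)   # -> [3.0, 7.0]
print(access_cost)  # -> 8.0, i.e. (3 - 1) + (7 - 1)
```

In this toy example only the consumer valuing the work at 12.0 buys it at the copyrighted price; removing the price restriction would let the two priced-out consumers capture the 8.0 units of surplus.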
Eric Raymond and other founders of the open source movement have sometimes publicly tried to put the brakes on speculation about applications outside of software, arguing that strong arguments for software openness should not be weakened by overreaching into areas where the story is less compelling. The broader impacts of the open source movement, and the extent of its role in the development of new information sharing procedures, remain to be seen. Main article: appropriation art Since the early years of the 20th century, the idea of ownership and 'openness' in the visual arts has been influenced by processes of appropriation. To appropriate something is to take possession of it. In the visual arts the term appropriation is often used in a general way to refer to the use made of borrowed elements in the creation of new work. These borrowed elements might include images, forms or styles from art history or popular culture, or materials and techniques from non-art contexts. Since the 1980s the term has also been used more specifically to describe the process of quoting the work of another artist to create a new work. The new work may or may not alter the original. Because the very nature of appropriation art involves the borrowing, modification, and/or use of existing art, and the redistribution of the new art into the community, this draws appropriation art into a discussion of legal issues, especially that of fair use. Main article: sampling (music) In music, sampling is the act of taking a portion of one sound recording and reusing it in a new recording. The portion can vary in length from as little as one note or beat to an entire recording. The sampled recording can be used as an instrument or element of the new song. This is typically done with a sampler, which can be a piece of hardware or a computer program on a digital computer.
Sampling is also possible with loops of magnetic tape on a reel-to-reel tape machine, or can be done live with two turntables. Because the very nature of sampling involves musicians borrowing, modifying, and using existing recordings, and redistributing them back into the community, this draws sampling into a discussion of legal issues, especially that of fair use. For examples of sampling and the intersection with legal issues see: For record labels involved in open source culture: Within the academic community, there is discussion about expanding what could be called the "intellectual commons" (analogous to the creative commons). Proponents of this view have hailed the OpenCourseWare project at MIT, Thacker's article on "Open Source DNA", the "Open Source Cultural Database", openwebschool, and Wikipedia as examples of applying open source outside the realm of computer software. Science, industry and manufacturing The principle of sharing predates the open source movement; for example, the free sharing of information has been institutionalized in the scientific enterprise since at least the 19th century. Open source principles have always been part of the scientific community. The sociologist Robert K. Merton described the four basic elements of the community - universalism (an international perspective), communism (sharing information), disinterestedness (removing one's personal views from the scientific inquiry) and organized skepticism (requirements of proof and review) - that accurately describe the scientific community today. These principles are, in part, complemented by US law's focus on protecting expression and method but not the ideas themselves. There is also a tradition of publishing research results to the scientific community instead of keeping all such knowledge proprietary. One of the recent initiatives in scientific publishing has been open access - the idea that research should be published in such a way that it is free and available to the public.
There are currently many open access journals where the information is available for free online; however, most journals do charge a fee (either to users or to libraries for access). The Budapest Open Access Initiative is an international effort with the goal of making all research articles available for free on the internet. The National Institutes of Health has recently proposed a policy on "Enhanced Public Access to NIH Research Information." This policy would provide a free, searchable resource of NIH-funded results to the public and to other international repositories six months after initial publication. The NIH's move is an important one because there is a significant amount of public funding in scientific research. Many of the questions have yet to be answered - the balancing of profit vs. public access, and ensuring that desirable standards and incentives do not diminish with a shift to open access. Open source principles can also be applied to technical areas other than computer software, such as digital communication protocols and data storage formats (for instance the Indian development simputer). Communication and personal expression Weblogs, or blogs, are another significant platform for open source culture. Blogs consist of periodic, reverse-chronologically ordered posts, using a technology that makes webpages easily updatable with no understanding of design, code, or file transfer required. While corporations, political campaigns and other formal institutions have begun using these tools to distribute information, many blogs are used by individuals for personal expression, political organizing, and socializing. Some, such as LiveJournal, utilize open source software that is open to the public and can be modified by users to fit their own tastes.
Whether the code is open or not, this format represents a nimble tool for people to borrow and re-present culture; whereas traditional websites made the illegal reproduction of culture difficult to regulate, the mutability of blogs makes "open sourcing" even more uncontrollable since it allows a larger portion of the population to replicate material more quickly in the public sphere. Messageboards are another platform for open source culture. Messageboards (also known as discussion boards or forums) are places online where people with similar interests can congregate and post messages for the community to read and respond to. Messageboards sometimes have moderators who enforce community standards of etiquette, such as banning users who are spammers. Other common board features are private messages (where users can send messages to one another) as well as chat (a way to have a real-time conversation online) and image uploading. Some messageboards use phpBB, which is a free open source package. Where blogs are more about individual expression and tend to revolve around their authors, messageboards are about creating a conversation among their users where information can be shared freely and quickly. Messageboards are a way to remove intermediaries from everyday life - for instance, instead of relying on commercials and other forms of advertising, one can ask other users for frank reviews of a product, movie or CD. By removing the cultural middlemen, messageboards help speed the flow of information and exchange of ideas. Religious ideology and practice Main article: open source religion A number of organized attempts to develop open source religions have sprung up in recent years. An open source religion would dramatically decentralize and democratize control over both belief systems and actual practice.
Open source religion is conceptualized as being similar to Wikipedia, with a limited focus on developing specific systems of spiritual notions, creation cosmologies, meanings inherent (or lack thereof) in human life, etc. These attempts are still in their infancy and no phenomena corresponding to the early explosion of interest in open source software or Wikipedia have yet emerged. Government and public policy Rampant piracy of movies, music and computer programs in less developed nations has outraged intellectual property owners in the industrialized world. As local governments come under pressure from institutions such as the World Trade Organization and the International Intellectual Property Alliance, some have turned to open source software as an affordable, legal alternative to both pirated material and expensive computer products from Microsoft, Apple and the like. The government of Pakistan, for example, established a Technology Resource Mobilization Unit in 2002 to enable groups of professionals to exchange views and coordinate activities in their sectors and to educate users about free software alternatives. GNU/Linux is an appealing option for poor countries with little revenue available for public investment; Pakistan is employing open source software in public schools and colleges, and hopes to run all government services on Linux eventually. There may also be broader, geopolitical implications. The spread of open source culture affords some leverage for these countries when companies from the developed world bid for government contracts (since a low-cost option exists), while furnishing an alternative path to development for countries like India and Pakistan that have many citizens skilled in computer applications but cannot afford technological investment at "First World" prices.
In this sense, open source culture refers to a new situation in which the power relationships rooted in traditional intellectual property are disrupted not just illegally (e.g. file-sharing or piracy) but also officially and openly, as free software can remove the developed world's control of various technologies. Whether this course of development is viable remains to be seen. The Ministry of Defense in Singapore began switching its computers from Microsoft to open-source software in 2004, while South Korea, China and Japan agreed to cooperate in creating new Linux-based programs. The makers of proprietary software in developed nations have followed these trends and discouraged the use of free software. Microsoft has argued that Linux is not actually a free and original system but, rather, a violator of more than 228 patents, and the SCO Group Inc. has also charged that Linux is based on its Unix operating system. Legal action could follow if nations continue to implement open source software instead of privately owned products.
- Open access
- Open content
- Open Source Culture: A Seminar on Intellectual Property, Technology, and the Arts at Columbia University
- Open Source Culture Mediography: An annotated list of books, web sites, and other resources related to open source culture
- Open source government
- Open source record label
- Public Library of Science
- OSI homepage
- OSI's history of the open source movement
- Stallman's criticism of the open source movement
- MIT's OpenCourseWare project
- UK Parliament report on Open Source (PDF)
- Thacker on "Open Source DNA"
- McCormick on the Open Source Cultural Database
- "Lessons from Open Source", by Jan Shafer
- Why OSS/FS? Look at the Numbers by David A. Wheeler
- How to Evaluate OSS/FS Programs by David A. Wheeler
- "Copyright, Borrowed Images and Appropriation Art: An Economic Approach" by Prof. William Landes
- GNU Project
- "Why Art Should Be Free" by Jon Ippolito
- Directory of Open Access Journals
- Budapest Open Access Initiative
- Open source project hostings
- Apache Software Foundation, focused on servers, infrastructures, as well as development tools
- BerliOS Developer
- IBM developerWorks: Open Source
- Java.net, for projects using Java technology
- Mozilla Foundation, for internet clients and development infrastructures
- mozdev.org, for Mozilla-related projects
- Open Bioinformatics Foundation, for bioinformatics-related projects
- Savannah.GNU, for GNU software
- Savannah.NonGNU, for free software that runs on free operating systems
- SourceForge.net, free hosting of open source software projects
- SunSource.net, projects sponsored by Sun Microsystems
- Tigris.org, focused on tools for collaborative software development
- Open Culture Movement Discussion
- Video Resources
This page uses Creative Commons Licensed content from Wikipedia (view authors).
The United States still has a lot of work to do in regard to addressing the prevalence of domestic violence. In fact, an in-depth story from the Arizona Republic has pointed to the fact that in the last several years, the number of deaths from domestic violence has stayed fairly consistent in Arizona. While this means there hasn’t really been an increase in deaths, there certainly hasn’t been a decrease either. Fortunately, researchers are seeking more information about domestic violence and specifically about domestic violence that ends in death. Not surprisingly, much of the research has a mental health aspect. For example, the article mentioned how substance abuse, depression and estrangement are just some of many risk factors that could increase a battered woman’s chance of eventually being killed by her partner. Later, the article explained that generally before a battered woman’s life ends at the hands of her partner, there are warning signs. For example, the partner usually engages in a specific kind of abusive behavior called “intimate partner terrorism” or “coercive control.” “Coercive control is almost exclusively the domain of men,” according to the article. “It is long-term and tyrannical abuse that includes, often in addition to physical violence, attacks on a woman’s self-worth, degrading remarks and obsessive monitoring of her whereabouts and her contact with other people.” The abuser often has mental health issues like depression or substance abuse, and struggles with obsessive and possessive behavior. In some cases, abusers cope with massive self-shame by severely abusing or killing their partners. Mental health experts have more insight into how domestic violence can impact mental health, and what issues sometimes predispose people to being in relationships that involve domestic violence. 
Nerina Garcia-Arcement, a licensed clinical psychologist and a clinical assistant professor at the NYU School of Medicine, said in an email that there is a gradual process that leads from "normal" relationships to relationships involving domestic violence. "Women don't enter violent relationships where they are being hit from day one," Garcia-Arcement said. "They date men that pay attention to them, are possessive and slowly begin to limit their behavior and social interactions (i.e., the woman can't talk to friends or family as much or at all, or she can't wear certain things). Often this controlling behavior is couched as 'loving them.'"
Duane Park, located at Hudson and Duane Streets in Manhattan, was the first public space acquired by the City specifically for use as a public park. This park and the adjacent street take the name of James Duane (1733-1797), New York's first mayor after the Revolutionary War. Born in New York City and admitted to the bar in 1754, Duane went on to serve as New York attorney general in 1767 and in the Continental Congress from 1774 to 1784. Despite having initial reservations about his country's independence, he later supported the Declaration of Independence and helped to draft the Articles of Confederation and the first New York State Constitution. He was a member of the New York State Senate, the first mayor of New York City (1784-1789), and he served as a U.S. District judge in New York. The park is the last remnant of the greensward of the Annetje Jans farm, granted in 1636 by Governor Wouter Van Twiller to Roeloff and Annetje Jans. After the death of Roeloff Jans, his widow married the Reverend Everardus Bogardus, second minister of the Dutch Church of New Amsterdam, and the farm became known as the Dominie's Bouwery (minister's farm). The farm was sold in 1670 to the English Governor, Sir Francis Lovelace, but was later confiscated by the Duke of York and deeded in 1705 to Trinity Church. In 1797 the triangle was purchased from Trinity Church for five dollars in order to create a public park. Originally an open commons, the park was later enclosed by an iron fence. By 1870 it had been enlarged and enclosed with trees, lawn, and shrubs following a design by Parks Chief Engineer M.A. Kellogg and Chief Gardener I.A. Pilat, who were also responsible for designs for City Hall and Washington Square Parks. The 1870 design featured bluestone curbing, iron fencing on a granite base, grading and planting of the enclosed green space, and twelve new street trees.
The sidewalk along the south side of the park was widened from two to ten feet, and the sidewalk along Hudson was narrowed from sixteen to ten feet. Around this time many of the nearby buildings were erected, and as part of a citywide effort to improve public access to enclosed parklands, Parks Superintendent Samuel Parsons, Jr., and landscape architect Calvert Vaux introduced a new plan for the space in 1887 that added paths while retaining the plantings. Parsons in particular wrote extensively about designing public squares like the one at Duane Street. "I do not know why it is that city squares are generally treated as mere open spaces of greensward with shade trees dotted over them," Parsons opined in his 1892 article "The Evolution of a City Square." "Poverty of designing ability, probably, and lack of knowledge of what might be done to beautify such places will entirely suffice to account for this baldness of treatment." Noting that ideally parks "should be of the most liberal character - ten, twenty, fifty acres," Parsons acknowledged that there were "often vacant places, triangles, and irregular spaces, not suited for building lots, that seem to be left unoccupied, perforce, as we might say." The Duane Street site, in what Parsons called one of Manhattan's "crowded and dusty" neighborhoods, was one such example. Integrating "definite artistic principles" and taking into account the foreboding native soil in this part of Manhattan, the Vaux-Parsons plan for Duane Park featured paths curving in from each surrounding street. "At Duane Street a diagonal walk has been introduced swelling out to a considerable width at one point between the three entrances," Parsons explained. "Beyond this there are only three small bits of green grass on either side, a few shrubs along the fence and a small flower-bed, but even this is a boon to the crowded neighborhood." In 1940 a design by Chief Consultant Landscape Architect Gilmore D. Clarke and Parks Landscape Architect Janet Patt gave the park a formal Beaux-Arts-style look, reducing the planted area and adding a central flagpole. The design was typical of Works Progress Administration projects, and featured a geometric style. In 1999, a plan by landscape architect Signe Nielsen, sponsored by the Friends of Duane Park, replaced much of the paved area with planting to evoke the 1887 design. Tablets detailing the park's history and design were installed, and the flagpole's base was reinscribed.
The right of sanctuary was based on the inviolability attached to things sacred, and not, as some have held, on the example set by the Hebrew cities of refuge. It was recognized under the Code of Theodosius (399) and later by that of Justinian. Papal sanction was first given to it by Leo I, about 460, though the first Council of Orange had dealt with the matter in 441. The earliest mention of sanctuary in England was in a code of laws promulgated by King Ethelbert in 600.

The right of asylum was originally confined to the church itself, but in course of time its limits were extended to the precincts, and sometimes even to a larger area. Thus, at Beverley and Hexham, the boundaries of sanctuary extended throughout a radius of a mile from the church, the limits being marked by "sanctuary crosses", some of which still remain. In Norman times there were two kinds of sanctuary in England, one belonging to every church by prescription and the other by special royal charter. The latter was considered to afford a much safer asylum and was enjoyed by at least twenty-two churches, including Battle, Beverley, Colchester, Durham, Hexham, Norwich, Ripon, Wells, Winchester, Westminster, and York.

A fugitive convicted of felony and taking the benefit of sanctuary was afforded protection for thirty to forty days, after which, subject to certain severe conditions, he had to "abjure the realm", that is, leave the kingdom within a specified time and take an oath not to return without the king's leave. Violation of the protection of sanctuary was punishable by excommunication. In some cases there was a stone seat within the church, called the "frith-stool", on which it is said the seeker of sanctuary had to sit in order to establish his claim to protection. In others, and more commonly, there was a large ring or knocker on the church door, the holding of which gave the right of asylum. Examples of these may be seen at Durham cathedral, St. Gregory's, Norwich, and elsewhere.
The ecclesiastical right of sanctuary ceased in England at the Reformation, but was after that date allowed to certain non-ecclesiastical precincts, which afforded shelter chiefly to debtors. The houses of ambassadors were also sometimes quasi-sanctuaries. Whitefriars, London (also called Alsatia), was the last place of sanctuary used in England, but it was abolished by Act of Parliament in 1697. In other European countries the right of sanctuary ceased towards the end of the eighteenth century.
Copyright © Catholic Encyclopedia. Robert Appleton Company, New York, NY, 1907-1912.
SAMJ, S. Afr. Med. J. Vol. 104, No. 1, Cape Town, Jan. 2014 (on-line version ISSN 2078-5135)

CONTINUING MEDICAL EDUCATION

MB BCh, FCPsych (SA); OPD, Outreach and Medium Term Services, Lentegeur Hospital, Division of Public Mental Health, Department of Psychiatry and Mental Health, University of Cape Town, South Africa

De-institutionalisation refers to the depopulation of large psychiatric institutions, which was an important component of mental healthcare policy beginning in Europe and the USA in the middle to late 20th century.

Growth of the psychiatric institution

Interestingly, there seems to have been little need for institutional care in Europe before the 18th century. The first purpose-built psychiatric institution in the UK, The Priory of Saint Mary of Bethlehem (later known as Bethlem or 'Bedlam'), was founded in 1247, but in 1700 was still the only public asylum in Britain, with 100 inmates. The next two centuries, however, saw a radical change, with the development of institutional confinement as the principal way of dealing with individuals who were deemed to be mentally ill, and by 1900 the total number of inmates in psychiatric institutions in the UK exceeded 100 000.

Initially, these places offered little more than confinement. However, in time, more humane treatment began to arise, and it has been asserted that the institutions were the birthplace of psychiatry. This has implications for how psychiatry is viewed to this day, with a powerful and negative association of mental illness with removal to the 'loony bin'. With little effective treatment (and therefore few discharges) and rapid population growth, the demand for institutions began to exceed available funding by the early to mid-1900s. Conditions deteriorated and the institutions became notorious as places of overcrowding, neglect and abuse.
By the end of the 20th century, particularly in the USA and Europe, the number of inpatients in psychiatric hospitals had been radically reduced - in the USA the population of state and county psychiatric hospitals fell from 553 979 in 1954 to 61 722 in 1996, and 120 hospitals were closed. The reasons for this precipitous change in the practice of psychiatry are complex, but a number of factors have been cited:
- the advent of effective antipsychotic medication
- the growing wave of public antipathy towards psychiatric institutions as the abuses and poor conditions became more widely known
- the growth of mental healthcare user/survivor groups and the development of disability activism
- an assumption that community-based care would be more humane
- a variety of political arguments that span the spectrum - from concerns about the human rights of the mentally ill to financial imperatives driven by growing costs and the perception that community care would be cheaper.

Undoubtedly, the most positive outcome of de-institutionalisation was the disappearance of the huge asylums of old and with them the potential for human rights abuses. This coincided with an expanded psychopharmacological armamentarium, a widened scope of practice outside asylums, and the diversification of care. Delinked from the negative association with the institutions, psychiatry has steadily gained recognition as a medical discipline, and psychiatric treatment has become socially more acceptable. It is clear, however, that the consequences for those suffering from severe mental illness were not entirely positive, as the enthusiasm for cost-cutting hospital closures has not been matched by the development of alternatives to hospitalisation.
The most obvious negative consequence has been the emergence of large populations of homeless people with severe mental illnesses, and the increase in the number of mentally ill persons in prisons, or those housed in poorly regulated smaller facilities outside the healthcare system. In particular, it has been argued that those who were meant to benefit most from the closure of the old institutions, the indigent severely mentally ill, have fared worst as a result of the new reforms. The reasons why this has happened have become clearer in retrospect:
- What has emerged is that the successful placement of a person living with a chronic mental illness in a community setting requires substantial effort and resources which, when properly assessed, do not translate into any substantial financial saving over a long-term hospital admission.
- Co-morbid substance dependence has emerged as a major problem that complicates rehabilitation.
- Social spending has generally been reduced, with fewer funds available for social support.
- The emergence of structural unemployment has made vocational rehabilitation extremely difficult.
- In some areas community resistance has emerged as a significant factor.
- Urbanisation and smaller families have also reduced social support.

Perhaps, more than anything else, we have learned the true meaning of the biopsychosocial approach to mental illness from the experience of de-institutionalisation. It has become clear that real recovery requires more than just attending to the biological needs of an individual, such as medication, food and shelter. It demands that if people with chronic mental illness are to do more than just survive, attention must be paid to their individual circumstances, needs and hopes. Additionally, it demands that we see care in a social context, attending to the wide range of social factors that affect a person with mental illness, such as stigmatisation and various forms of structural discrimination.

1. Porter R.
Madmen: A Social History of Madhouses, Mad-Doctors and Lunatics. Stroud, Gloucestershire: Tempus, 2006.
2. Geller JL. The last half century of psychiatric services as reflected in 'Psychiatric Services'. Psychiatr Serv 2000;51(1):41-67.
3. Stroman D. The Disability Rights Movement: From Deinstitutionalization to Self-determination. Lanham, MD: University Press of America, 2003.
4. Scott J. Homelessness and mental illness. Br J Psychiatry 1993;162:314-324.
5. Bloom JD. The incarceration revolution: The abandonment of the seriously mentally ill to our jails and prisons. Journal of Law, Medicine and Ethics 2010;38(4):727-734. [http://dx.doi.org/10.1111/j.1748-720X.2010.00526.x]
6. Grove B. Reform of mental health care in Europe. Br J Psychiatry 1994;165:431-433.
7. Okin RL. The future of state mental health programs for chronic psychiatric patients in the community. Am J Psychiatry 1978;135:1355-1358.
The city of Oxford, the county town of Oxfordshire, is situated in the south-east of England. The city covers an area of around forty-six square kilometres and is home to 134,248 people. The River Thames passes through the city but is known by the name Isis for a distance of sixteen miles.

The University of Oxford is situated in Oxford. Established in the year 1249, it is the oldest university in the English-speaking world. The buildings in the city reflect the architectural styles of every period since the time the Saxons came to the city. The poet Matthew Arnold gave Oxford the name "the city of dreaming spires", a name that reflects the harmonious architecture of the university buildings. The Carfax Tower is considered to be the centre of the city of Oxford.

The climate of Oxford is classified as maritime temperate. Precipitation is uniform all year round and is influenced by the Atlantic Ocean. The highest temperature on record, ninety-six degrees Fahrenheit, was reached during the European heat wave of 2003; the lowest, two degrees Fahrenheit, was recorded in January 1982. As the global climate changes, temperatures in Oxford are increasing, and winter precipitation is rising while summer precipitation is decreasing.

The major sectors of the economy in Oxford are car making and brewing. Cars have been made in Oxford for a long time; the BMW MINI is produced there today.
edHelper's suggested reading level: grades 6 to 8
Flesch-Kincaid grade level:
Vocabulary words: treaty, mining, better, gunpoint, posts, banks, court, goods, council, trade, mainly, vote, lower, cause, peace, constitution
Topics: North Carolina, South Carolina, Native Americans

Cherokee
By Mary Lynn Bushong

1 The Cherokee lived in the Southeastern part of America. Most of them lived in North Carolina, South Carolina, Tennessee, and Georgia. They built homes and towns. They farmed the land and grew crops. They also hunted and fished for meat. The Cherokee were also good warriors. Many of the men took great pride in how handsome they were.

2 The Cherokee lived over a large area. They built homes made of wood posts with branches woven in between and covered with mud. Sometimes they were dug out inside so the floor was lower than the ground. In the top was a hole for smoke to leave.

3 There were 30-60 homes in each village. In the middle of the village was a larger building. It was called the council house; that was where the people would meet. Villages were about one day's walk apart.

4 Outside the village were the fields. This is where food was grown. The people called corn, beans, and squash the "three sisters." This was because they were all grown together.

5 The Cherokee language was like the one spoken by the Iroquois. The culture was not the same, though. They were more like other southern tribes in how they did things.

6 The Cherokee were great farmers, but they did not raise animals for food. If they wanted to eat meat, they had to hunt.

7 These Native Americans also warred with other tribes like the Creek, Tuscarora, Chickasaw, and Shawnee. Natives living in villages on river banks made canoes. These canoes could hold twenty men. They could get to other villages quickly if they needed to.

8 When white men came, the Cherokee were friendly. The first white men they met were the Spanish.
They only wanted gold but saw that the Cherokee wore mainly silver and copper. Even so, they did find some gold and did some mining.

9 When the English made colonies on the edge of Cherokee land, they made a treaty for trade. The Cherokee traded furs, and sometimes other Indian people, for guns. These captives were prisoners of war from raids, and they became slaves. The Indian slave trade went on to cause more problems, especially in South Carolina.

Paragraphs 10 to 16 are included in the complete story with questions.

Copyright © 2009 edHelper
Giving fluids rapidly through a drip into a vein (fluid resuscitation) as an emergency treatment for African children suffering from shock caused by severe infections does not save lives, according to a major clinical trial funded by the Medical Research Council. The ground-breaking research showed that giving children fluids slowly, to replace the needs of a sick child who cannot drink, rather than as rapid fluid resuscitation, is safer and more effective in aiding recovery. These findings challenge current World Health Organization (WHO) guidelines on how best to provide fluids to children in Africa with fever and shock caused by malaria, sepsis and other infections.

The trial, known as FEAST (Fluid Expansion As Supportive Therapy), involved over 3,000 children at six hospitals across Tanzania, Uganda and Kenya. It examined the effectiveness of a long-standing treatment used across the world called fluid resuscitation. This treatment involves giving seriously ill children large volumes, or boluses, of intravenous fluids quickly through a drip in their first hour at hospital to try to reverse the deadly effects of shock.

The children on the trial were divided randomly into three equally sized groups. Two groups were given emergency boluses of either albumin or saline in the first hour of arriving in hospital. After the first hour, the children were then given fluids slowly, to replace the amounts a sick child should drink. The third group were given fluids slowly from the first hour of admission but no additional bolus treatment.

The trial results showed that children given fluids more slowly did better, with a 48-hour survival rate of 92.7%, compared with 89.4% for those children given boluses. Compared with giving children fluids slowly, fluid resuscitation caused three additional deaths for every hundred children treated. The trial was stopped early because the independent committee overseeing safety saw that giving boluses was unsafe.
However, all children who took part in FEAST had a better chance of survival than is normally the case in Africa, in part due to extra training given to hospital staff in providing emergency treatments, such as oxygen and medicines for malaria and other infections.

Professor Kathryn Maitland, the Chief Investigator for FEAST, of Imperial College London and the KEMRI-Wellcome Trust Program, said: "This is the first time anywhere in the world that fluid resuscitation has been evaluated for safety and effectiveness in such a large trial, even though it has been standard treatment for the last two decades in the United States, Europe and Australasia. The FEAST trial was set up with the hope that fluid resuscitation would help the many African children with malaria and septicemia. Around one in ten children in Africa admitted to hospital with these deadly infections are in a state of shock. Although there are effective medicines for these illnesses, too often children arrive in hospital already very sick, with many children dying within hours of admission. Large-scale clinical trials of this nature, carried out to the highest levels, are crucial if we are to find new ways to keep children alive when they come into hospital. Disappointingly, across all parts of the trial we found that rapid fluid resuscitation had no benefit; our only conclusion is that boluses are harmful when used for shock in the illnesses we studied."

Professor Sarah Kiguli, Chief Principal Investigator in Uganda, said: "The results have surprised me, particularly as I had seen some children getting better after being given large volumes of fluids. But more importantly the results went against the recommendations of the WHO and the normal practice in wealthy countries, and this surprised me greatly. Finding this out before we started to encourage the use of fluid resuscitation in children with severe infections and shock across Africa was incredibly important. It will save many lives in future."
The study authors agree that further research is needed in countries where fluid resuscitation is already standard practice, although the results in Africa may not be directly applicable to wealthy countries. One reason for this is that sophisticated life support equipment is available in wealthier countries, alongside fluid resuscitation, as part of a package of care.

Professor Diana Gibb from the Medical Research Council Clinical Trials Unit said: "The treatment may not carry the same risks in wealthy countries because children are healthier, and in particular have few problems of underlying long-standing malnutrition or anaemia. However, the clear findings from the FEAST trial do question the use of boluses for severe infections even in wealthy countries, and more research is needed."

The researchers have stressed the need to continue to use fluid resuscitation to treat diarrhoea and other conditions like burns and trauma, where children lose fluids. For these conditions, where fluid resuscitation will continue to be a vital life-saving treatment, they advise that current WHO recommendations should stay the same. Children with severe malnutrition were not included in the trial, as fluids are not recommended as part of their treatment.

The paper "Mortality after Fluid Bolus in African Children with Severe Infections" will be published online by the New England Journal of Medicine at 2000 BST on 26 May 2011.
The most important place to share and reflect on this message is in our families. While it's always a challenge to use a statement like Faithful Citizenship—so obviously written to an adult audience—within a family context, it's worth the challenge! Civic responsibility starts with the adults in the family.

Some Do's and Don'ts

Do show your children that you are concerned about the issues and questions raised in the statement. Express your opinions or beliefs about these issues, and share questions you have about issues or candidates. Look for opportunities to state where you stand on a certain issue or why you favor a certain candidate.

Don't push your children to adopt your stance or to support your candidate. Don't preach or try to convert them.

Do ask for their opinions, questions, or concerns. Be genuine with your interest, and really listen to whatever they have to say.

Don't worry if they don't agree with your position or even with all the positions expressed in Faithful Citizenship. (Most of the issues addressed in the statement are very complex, even for adults.) The most important thing is that your children are aware and concerned and that they are thinking about the issues in moral terms.

Do show that you truly respect different points of view on the issues or candidates—that good people can disagree on specific matters without rancor.

Do get involved yourself. If you believe strongly in an issue or candidate—and hopefully you do—take an active role. It's a cliche, but actions do speak much louder than words, especially to our children.

Do look for activities that your children or your whole family could get involved in with you (e.g., pro-life marches, environmental cleanup projects, the design of posters for a campaign, canvassing or leafleting for a candidate, attendance at rallies, letter writing to elected officials).

Don't coerce or shame them into involvement, but invite and encourage it, leaving them free to participate or not.
Of course, promising a favorite treat to children at the end of an activity is an excellent means of encouragement! Social action and ice cream just seem to go together.

Do vote and let your children know that you see voting as a priority. Bring your children with you to the polls. Watch the election returns together and discuss their implications.

Raising Family Awareness

Using Faithful Citizenship with your family involves thinking creatively, planning interesting family activities, and taking advantage of opportunities that present themselves. Here are some suggestions:

- Use TV as a resource: Look for shows that in some way address one of the issues mentioned in the statement. An example may be a news show or a documentary; it might also be a sitcom that is treating some current political or social topic. The key is to check out the show ahead of time and then to watch the show together. It's often effective if you just "wander in" and sit down while your children are viewing it. Or it may be necessary to decide ahead of time that you will watch a specific show together. However you do it, the most important thing is to talk about the show's topic. As mentioned above, share your thoughts and listen to their thoughts without being judgmental. Sometimes the only talking you can do is at the TV, but that's okay. They'll hear it.

- Question, question, question: The bishops' statement lists "Goals for the Campaign." Rephrase these goals as questions so that young people can relate to them. The following are examples: "I wonder how much money the person who sews the clothes we buy earns, or how much the farmer who grows the food we eat receives of the price we pay?" "Why are some people poor when so many people are rich?" "I wonder where we would go for health care if we didn't have insurance?" If the questions lead to further discussion, you or your children may need to do a little research.
- Look at billboards and television advertisements for various candidates, and critique the advertisements as a family. Do the candidates address any of the issues mentioned in the statement? How well?

- Pick out a few short excerpts from the statement, rephrase them for children, and post them on your refrigerator. Here are some possibilities: "The answer to violence is not more violence." "Every child should have the opportunity to be born and to feel welcomed." "Make the needs of the poor a priority." "Safe and affordable housing should be available for all." Try to find candidates or elected officials who support these positions by their policies and actions.

- As a dinner prayer in the days leading up to election day (usually the first Tuesday in November), read one of the scriptural passages referenced in the statement.

- Contact your library to get good children's books that deal with the issues. Green Street Park and Drop by Drop are excellent examples.

- Have a family night on "citizenship": Choose one or two issues from the statement that are of particular interest to your family. For example, if you have an aging relative in a nursing home, you may want to pick health care or Medicare reform as your issue to discuss. If you know someone who has been a victim of crime, you might focus on handgun legislation. Make a list of how this issue does or could affect your family. Develop a family statement that summarizes your view on the issue. Write this statement in a letter you send to one of the candidates, inviting their comments. End the evening with "patriotic sundaes": vanilla ice cream topped with strawberry and blueberry syrup or with the berries, if available.

- Identify some heroes—people who have taken a stand on these issues—whom your family could learn more about.
Blessed Theresa of Calcutta, Blessed Oscar Romero, Dorothy Day, Cesar Chavez, and Gandhi are some well-known examples of heroes, but you can probably find a number of local heroes as well. Your public library is a great resource, as is your diocesan social action office, peace and justice office, or pro-life office.

- With older children, reflect and act on The Call to Family, Community and Participation by using the CST 101 video on this topic.
The dictionary defines a hero as “a person noted for feats of courage or nobility of purpose, especially one who has risked or sacrificed his or her life.” A “personal hero” is someone you or I hold in especially high esteem. For me, Dr. Martin Luther King, Jr., is both a national and personal hero. I have no illusions that he was a flawless man. I simply have the conviction that his virtues far outweighed his faults and that this nation is a better place because of him. When I read his speeches and weigh them in the context of his times, and consider his ability and courage to pursue his aggressive, but nonviolent, humanitarian principles, despite enormous pressures from those who thought he was going too far, as well as those who thought he wasn’t going far enough, I conclude that he was an extraordinarily inspirational leader with uncommon vision and strength. Dr. King didn’t simply talk about his dreams; he went to the battle lines time and time again to fight for them. Before he was finally murdered at the age of 39, his home had been bombed and he knew he put his life at risk continuously to advocate social justice, human dignity, and an end to racism and bigotry. We have not yet fully reached Dr. King’s Promised Land, where all people will be judged by the content of their character, but we are certainly closer to it because of him. This is Michael Josephson reminding you that character counts.
“The world is flat.” So declared New York Times columnist Thomas Friedman in 2005. And before the world was flat, it was round, and before that it was flat. And the picture plane was flat too. It gained the illusion of depth with the development of linear perspective, which, coincidentally or not, transpired roughly around the same time as the confirmation that the world is indeed round, proved by the explorations of Ferdinand Magellan.

The painter David Simons recently remarked to me that he saw a parallel between pre-perspectival painting (the world as flat) and comic books. That gave me pause, because I was raised on Superman and Spiderman comics, Archie and Veronica, and later, R. Crumb. I loved those comics, and I have always loved the period of painting before perspective really took hold, mostly iconic religious pictures of the 12th and 13th centuries.

What these two rather disparate practices have in common is that both rely on a clear distillation of ideas down to an essence that can reach any audience — one in which the story line and the characters are made perfectly clear. This was an important method of conveying the religious narrative to a largely illiterate audience in the Middle Ages, in much the same way as it was important to the first animated cartoons in silent film.

If you look at artist Carl Ostendarp’s Facebook page, it is a parade of cartoon characters from various periods. In a video from the series 13 Artists in the Studio, a 20-years-younger Ostendarp speaks of his admiration for painters whose work does not demand connoisseurship of the audience. He sees cartooning as based on this principle, and he believes there has always been cross-fertilization between high and low culture. In the 19th and early 20th centuries, much of art history was seen through the lens of connoisseurship, a certain kind of privileged knowing; when I was younger, courses were offered in “art appreciation” as an avenue of edification.
What Ostendarp believes in is the possibility of being a serious painter in a world where everyone in the audience is an entitled viewer. Walking into Elizabeth Dee gallery, where Ostendarp currently has a show on view, my first thought is to decipher the meaning of the two letters, a C and an O, floating toward the bottom edge of a field of color in each and every painting in the room. For a moment I think company, but there is no period. Then care of, but there is no forward slash. Then I step further into the gallery and a blue painting, half hidden in the office, catches my eye. “CARL,” it says. Of course, how long was it going to take me? Carl Ostendarp. Although always interested in surface, Carl Ostendarp seemed, early in his career, more interested in topography — a funny mix of the world as flat and the world as round. For topography brings to mind the surface of the earth and thus its spherical character, yet that same attention to surface is a serious painterly concern and one executed on the flat surface of the canvas. Ostendarp built up his surfaces, sometimes with foam, confusing some into thinking that he was a sculptor. In the same YouTube video mentioned above, Ostendarp says: “All the paintings that I admired, like the Pollock paintings, the Newman paintings — the thing that was most impressive to me was their sense of physicality.” The focus on physicality that emerged in his early work is clear in the current exhibition, titled BLANKS. It’s evident in the scale of each painting and the arrangement of the works on the wall, and in the relationship of one painting to the next. It’s clear, walking around the gallery rooms, that the scale and placement of each work is not accidental but carefully thought through. The show is both a collection of paintings and a calculated installation that takes everything in the gallery environment into account. 
At Pace Gallery in London earlier this year, Ostendarp created a mural upon which other artworks were hung, including one of his own. The palette of the mural was a kindred spirit to that of the paintings in the first room at Elizabeth Dee. At Pace the mural became a parietal layer, like the lining of the inside of a body cavity. (I must confess that I saw this show in reproductions only.) Even the ceiling was aglow with reflective color. It was a room, but it’s easy to anthropomorphize. Fantastic Voyage springs to mind, that sci-fi thriller, or was it comedy, produced in 1966, in which a miniature submarine is launched into the body of a Russian scientist, maneuvering its way through syrupy viscera and abstracted organs. The history of painting is in part a history of engagement with the human form. In the installation at Pace Gallery, the body was there by inference. The mural functioned not just as another artwork or as a ground; it suggested an interior cavity on the one hand, and on the other created self-awareness of the body through the experience of all-enveloping color. Everything is alive or reborn to our attention because the color heightens viewers’ self-consciousness at the moment when they come into contact with the artworks on the wall. The achievement of Ostendarp’s mural was to contextualize each artwork as newly present. It is always difficult to imagine the surprise or astonishment at a new work when we already know it, know how it has entered the art historical context. The mural sought to regift us with that experience of newness. This, it seems to me, is both an act of generosity toward other artists and one of deep respect for the endeavor of art making. It raises some prickly questions, too: Did the other artists consent to having their images shown this way, surrounded by a pink glowing halo of color? What was the relationship of the mural as painting to the paintings hanging on top of it? 
In the show at Elizabeth Dee, we are presented with paintings that are fields of color. A salmony pink in the first room, a glorious yellow and a saturated orange in the second. And on these fields of color are our two playful icons, two letters or characters, subject to a strange gravity, rolling downhill, levitating themselves, bouncing up and down like the ball in a sing-along. We can anthropomorphize these letters as easily as we can anything else. They are signs, and they are part of the alphabet. The press release asserts that they are “effectively ‘ruining’ the monochrome field.” I don’t think “ruin” is the word I would choose. The letters seem more like a way to playfully cajole us into a reevaluation of the monochrome field. Ostendarp has a special interest in the history of American painting from 1965 to 1975. But 1965 was not just the starting point for artistic strategies that included performance and expanded the range of interdisciplinary practices; it was also the inauguration of “The Great Society,” it saw the firebombing of the home of Malcolm X, it was the year when anti–Vietnam War protesters began to be arrested in the streets. It was a time of fundamental redefinition. And the juxtaposition of the historical paired with a keen interest in the experience of the present, either in the act of creating a ground for other artworks or in the act of disrupting the monochrome field, seems to be a strategy of redefinition. Or perhaps Ostendarp’s initials are simply an advertisement for him — a personal ad for an artist seeking egalitarian souls. Carl Ostendarp: BLANKS continues at Elizabeth Dee (545 W 20th Street, Chelsea, Manhattan) through September 6.
<urn:uuid:9e982952-0d68-4ace-94f4-bbea04770d64>
CC-MAIN-2016-26
http://hyperallergic.com/141492/a-painter-finds-depth-in-flatness/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398628.62/warc/CC-MAIN-20160624154958-00118-ip-10-164-35-72.ec2.internal.warc.gz
en
0.97244
1,666
2.734375
3
Definition of Like Fractions

Fractions with the same denominator are called like fractions.

More About Like Fractions
- Fractions whose denominators are not the same are called unlike fractions.
- To add or subtract like fractions, we simply add or subtract the numerators, then write the result over the common denominator.

Video Examples: Adding and Subtracting Like Fractions

Example of Like Fractions

3/7 and 5/7 are like fractions, as they have the same denominator, 7.

Solved Example on Like Fractions

Ques: Choose the group of like fractions from the following:
I. 3/13, 6/13, 10/13
II. 7/12, 5/10, 4/11
III. 1/9, 1/7, 1/10
IV. 2/9, 4/5, 1/9

Choices:
A. I only
B. IV only
C. III only
D. II, III only

Correct Answer: A

Step 1: Like fractions have the same denominators.
Step 2: All three fractions in I have the same denominator, 13. So, the fractions in I are like fractions.
Step 3: The three fractions in II have different denominators (12, 10, and 11). So, the fractions in II are unlike fractions.
Step 4: The three fractions in III have different denominators (9, 7, and 10). So, the fractions in III are unlike fractions.
Step 5: The denominators of the three fractions in IV are 9, 5, and 9. Two of the fractions (2/9 and 1/9) have the same denominator, but the third does not, so the fractions in IV are unlike fractions.
Step 6: So, the fractions in group I are like fractions.
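The addition rule above (sum the numerators and keep the common denominator) can be checked with a short script using Python's standard fractions module. The helper function is only an illustration of the rule, not part of the original lesson; note that Fraction automatically reduces, so 5/10 is stored as 1/2:

```python
from fractions import Fraction

def add_like_fractions(fracs):
    """Add like fractions by summing numerators over the common denominator.

    Raises ValueError if the fractions do not all share one denominator.
    """
    denominators = {f.denominator for f in fracs}
    if len(denominators) != 1:
        raise ValueError("not like fractions: denominators differ")
    denominator = denominators.pop()
    numerator_sum = sum(f.numerator for f in fracs)
    return Fraction(numerator_sum, denominator)

# The like fractions from group I: 3/13, 6/13, 10/13
total = add_like_fractions([Fraction(3, 13), Fraction(6, 13), Fraction(10, 13)])
print(total)  # 19/13
```

Passing an unlike group such as 7/12, 5/10, 4/11 raises ValueError, matching Step 3 above.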
<urn:uuid:ed77d492-cc65-4a54-9178-0aaff37b3b21>
CC-MAIN-2016-26
http://www.icoachmath.com/math_dictionary/like_fractions.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395166.84/warc/CC-MAIN-20160624154955-00145-ip-10-164-35-72.ec2.internal.warc.gz
en
0.861864
427
3.796875
4
You can easily create a homeschool report card using basic computer software and the grades from your child's homeschool courses. Why would you want to add the task of creating a personalized homeschool report card for each of your children to an already busy and sometimes hectic schedule? Does your state require that you complete one as part of your student record keeping? Or, how about the umbrella school you registered with? Well, for whatever reason, I can share a few tips about how to develop such a document. Basically, you will want to include information that evaluates each child academically, spiritually or morally, and add some comment about their performance or attempts during the recording period. You will want to include the core subjects of English, math, history, science, and Bible, if applicable. You can include such sub-topics as handwriting, reading, and phonics under the English heading or list them separately. Include standardized test scores on the form if you have had your child tested or administered a standardized test at home. Include a grading key that explains how letter grades were determined. For example, a typical grading key looks like this:
A – 90-100
B – 80-89
C – 70-79
D – 65-69
F – Failure
Letter grades are more commonly used, especially in the higher grades. Sometimes, (S) Satisfactory, (U) Unsatisfactory, and (N) Needs Improvement grades will also be used. Character traits can also be graded with letter grades or an S, N, or U grade. Some of the common areas to record would be things such as: You can include anything that you want to emphasize concerning your child/ren. What exactly are some of the benefits of having a report card for your child? Once you have created one based on the template I supply or one that you develop, you can use it to: One suggestion I would make is that your child should never find out about any dissatisfaction you have with his/her work performance or ethics on the report card.
Any negative issues should have already been discussed with the child and perhaps your spouse, as well. One of the many benefits of homeschooling is that we are able to address potential problems quickly and diligently work to resolve them. I encourage you to address any school work or behavior challenges early on so that the report card can reflect a much improved performance rather than failures. Also, be tactful and not abrasive when commenting. Words are very powerful, especially those that come from an authority like a parent who is also the teacher. Always strive to motivate and encourage your child and never be destructive or critical with your words. Look here for a sample report card template. I hope this helps to give you ideas about what you can do to create one of your own.
<urn:uuid:751c1f62-bc14-400c-907a-96e907d1095b>
CC-MAIN-2016-26
http://www.allabouthomeschoolcurriculum.com/homeschool-report-card.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396538.42/warc/CC-MAIN-20160624154956-00162-ip-10-164-35-72.ec2.internal.warc.gz
en
0.96346
585
2.984375
3
Canada’s Prime Minister, Stephen Harper, has announced that the Canadian government will provide up to C$1.5 billion (US$1.43 billion) in the form of incentives over nine years to producers of renewable alternatives to gasoline and diesel fuel. The program, titled Canada’s ecoENERGY for Biofuels, is intended to dramatically boost the country’s production of biofuels. Last December, the Canadian government announced a new regulation requiring a 5% average renewable content in gasoline by 2010. At that time, the government also signaled its intention to develop a similar requirement of 2% renewable content for diesel and heating oil by 2012. Close to three billion liters of renewable fuels will be needed annually to meet the requirements of the new regulations. “With the transportation sector accounting for more than a quarter of Canada’s greenhouse gas output, increasing the renewable content in our fuel is going to put a real dent in emissions,” said Prime Minister Harper.
<urn:uuid:87ef6e30-adb8-4580-b04c-2f5a3904b89b>
CC-MAIN-2016-26
http://fleetowner.com/management/news/canada-offer-biofuel-incentives
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397428.37/warc/CC-MAIN-20160624154957-00079-ip-10-164-35-72.ec2.internal.warc.gz
en
0.939563
207
2.703125
3
TORONTO -- Exploratory testing is superior to scripted testing, resulting in better tests and better testers, noted tester Cem Kaner told attendees at the Conference of the Association for Software Testing (CAST). Kaner, a software engineering professor at the Florida Institute of Technology, advocated that testers use checklists rather than scripts in his keynote speech, "The Value of Checklists and the Danger of Scripts: What Legal Training Suggests for Testers." Kaner is famous for his groundbreaking work in software testing, but many in IT may be unaware that Kaner holds a doctorate in psychology and a law degree. Both of these backgrounds informed Kaner's checklist work. As a psychology student, Kaner observed that brain-damaged rats followed a "script" to avoid a shock, whereas normal rats followed a "checklist." The normal rats were able to adapt their behavior to the situation and complete a few steps to find a safe area. The brain-damaged rats were unable to adapt and followed the exact same behavior regardless of their circumstances. As such, they were incapable of simply running to safety. From this experiment, Kaner concluded that "following scripts is the 'best practice' available for brain-damaged rats." Kaner also sought to dispel many of the misconceptions he sees surrounding exploratory testing and scripted testing. Exploratory testing is not limited to manual testing, and explorers "can use any tool they want," he said. Similarly, exploratory testers may create whatever documents they wish. Exploratory testing is not limited to black box testing nor is it limited to test execution, Kaner explained. The harmful aspects of scripted testing The myths cloaking scripted testing are numerous and insidious. "Myths support the notion that we need complex scripts," Kaner told the audience. One of these myths is that complex scripts promote learning.
"Following scripted instruction is a great way of leaving you within the confines of the script," he said. Junior testers who are given complex scripts as "training wheels" are doomed to remain junior testers because they are not cognitively engaged with the testing process, Kaner argued. In addition to being poor teachers, scripted tests do not properly test entry conditions, said Kaner. He listed a large number of factors that might affect a test that a script would not specify, including programs running, versions of those programs, exact amount of free memory available, and even the temperature of the processor being used. The creation of bias may be scripted testing's worst offense; it subverts the purpose of testing, Kaner said. Testers following scripts will look for certain results and ignore others. "People constantly change what they see based on expectations," he said, adding that scripted testing "sets up expectations." This leads to confirmation bias, whereby testers interpret results based on what they expect or want. Even worse, scripted testing can encourage "inattentional" blindness, so that testers do not notice failures even as they are observing them. Testers are "looking for events in a certain time, and something else happens...and it never gets to [their] consciousness," explained Kaner. (For a quick and dramatic example of inattentional blindness, Kaner referred attendees to this short video.) Checklists as cognitive aid As a prosecutor in California, Kaner used checklists to greatly increase reckless driving convictions. "We had to prove that this was beyond really bad driving," he said. Kaner had to examine what makes the driving so bad. He utilized checklists to determine recklessness, including points such as ice on the road, number of people present, traffic levels and so forth. Kaner posited that checklists could be a wonderful resource for testers. Checklists are "not just about the questions you're going to ask. 
They're much more about 'why do you want to know this?'" he said. Testers have the ability to prioritize checklist items -- an option not available in scripts. Kaner stressed how checklists keep users cognitively engaged with their work. "Checklists demand that you think about whether you need to bother with this and why or why not," he said. Checklists and exploratory testing Exploratory testing requires cognitive engagement from the tester throughout the entire test process, Kaner told the audience. This is in stark contrast to scripted testing, in which cognitive work is limited to test design, he explained. In exploratory testing and checklist-driven work, cognitive effort is connected "to real-time activity where you actually apply it, in front of a judge, a client," said Kaner. Exploratory testers use checklists in the manner they choose. The checklist is a tool; one the tester controls.
<urn:uuid:0d6fecda-6dc0-4744-afed-f1abfa8a179d>
CC-MAIN-2016-26
http://searchsoftwarequality.techtarget.com/news/1323432/Kaner-Exploratory-testing-better-than-scripted-testing
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397865.91/warc/CC-MAIN-20160624154957-00072-ip-10-164-35-72.ec2.internal.warc.gz
en
0.955743
977
2.5625
3
R. William Jones

The always-dapper R. William Jones was a staunch advocate of international basketball and worldwide competition. In 1932, he teamed with Dr. Elmer Berry to create the International Amateur Basketball Federation (FIBA), the organization that governs all international basketball competition. From 1932 through 1976, as the first general secretary of the FIBA, his efforts helped spread basketball to over 130 nations through games and clinics. Organizing all international basketball tournaments since 1936, including the Olympic Games, Jones determined the eligibility of every international team in both Eastern and Western hemispheres. Dr. Jones, whose honorary doctorate came from Springfield College, was the first international person to be inducted into the Hall of Fame.
<urn:uuid:92aa99d6-4c00-4a57-a675-54be73a46c97>
CC-MAIN-2016-26
http://www.hoophall.com/hall-of-famers/r-william-jones-enshrined-1964rome.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783392099.27/warc/CC-MAIN-20160624154952-00042-ip-10-164-35-72.ec2.internal.warc.gz
en
0.950445
144
2.53125
3
Many of the historic properties the Alabama Historical Commission assists with are significant because of their architecture. These may be Greek Revival mansions or log cabins, old barns or an early industrial plant, a building 150 years old or one built in the 1950s. While the survey and registration section maintains information about structures listed in the Alabama and National Registers, or included in county or local surveys, architectural history services has information on early Alabama architecture and architectural trends, as well as architects and builders. Research on early Alabama architecture is constantly ongoing in the architectural history services.

Frequently Asked Questions

What kind of information is available on Alabama
The AHC's architectural archives includes a growing collection of documentary photographic images, measured floor plans, and elevation drawings of architecturally significant structures. Through ongoing fieldwork, the staff makes special efforts to document significant endangered buildings before
From time to time, the staff also conducts "thematic" studies that take a broad look at a particular type of architecture, such as historic houses of worship. An extensive electronic database on early Alabama architects and builders is also being developed. The Alabama Department of Archives & History holds drawings and sketches, 1842-2008, compiled by Robert Gamble. There is limited access to this collection. Please make an appointment with the Archives' reference desk to view this material. Click here for the finding aid.

Is this information available to scholars and the
The AHC currently lacks the staff and facilities to make this information generally available. However, these records may be viewed by special appointment.

What about general questions on Alabama architecture, such as early styles and trends, folk architecture, and the like?
These questions can be addressed by phone, e-mail, or regular mail.
For more information contact Lee Anne Wofford at (334) 230-2659 or
<urn:uuid:be758fc8-6dc4-4064-8893-e50655f55fff>
CC-MAIN-2016-26
http://preserveala.org/archhistoryprogram.aspx?sm=f_i
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397111.67/warc/CC-MAIN-20160624154957-00190-ip-10-164-35-72.ec2.internal.warc.gz
en
0.89879
421
2.796875
3
Most people might assume that technology first developed in 1928 would be obsolete by now. But from air conditioned buildings to sliced bread, many inventions of that era are still essential to our lives today. That includes the exercise stress test, which is still the most widely used medical test for coronary artery disease. Doctor Martha Gulati MD, of The Ohio State University's Wexner Medical Center, co-authored a paper in the journal Current Problems in Cardiology, touting the benefits of exercise stress tests, particularly in this age of high-tech medicine. Today, using things like nuclear heart scans, MRIs and CT imaging, doctors can see inside the body like never before, peering deep into the heart with remarkable clarity. "Even though they've been around for nearly a century, they can not only tell us if you currently have heart disease, but can also predict your risk for it in the future," said Martha Gulati. "By today's standards these tests may seem low-tech, but they can be highly effective and very efficient in diagnosing heart problems." Just because those tests are available, doesn't always make them the right choice. "In my practice I use a lot of advanced imaging when it's appropriate," said Dr. Gulati, "but I think we need to get away from just doing the most expensive test because we can." Doctor Gulati says while high-tech imaging can be effective, it is expensive and often involves radiation, which, in some cases, can lead to other health complications. "We need to be doing the right test for our patients, and when the guidelines are strictly followed, for almost all patients who can exercise, the right test, initially, will be the exercise stress test." During these tests, doctors simply attach small electrodes to your body, which monitor your heart during exercise, charting everything from beats per minute to blood pressure, and can even measure things like capacity, blood flow and recovery times. 
The test is non-invasive and can be administered in doctor's offices. "We sometimes get caught up in the latest technology in our society, and often what gets ignored is the simple stuff," said Dr. Gulati. It was that simple test that saved the life of 73-year-old Barbara Current. "I had this anxious feeling in my chest," she said, "it wasn't anything big, but something I'd never felt before." Having already survived one heart attack, she immediately went to see Dr. Gulati. To their surprise, tests involving high-tech imaging showed that Barbara's heart was normal. But the exercise stress test told a different story. "We found very significant disease," said Dr. Gulati. "In fact, it required having a stent placed in one coronary artery immediately and, subsequently, required another one placed just recently." Given the subtlety of her symptoms, Barbara was surprised at the extent of her disease. "I really didn't think it had anything to do with my heart," she said, "I'm so very fortunate I took that stress test." An Update on Exercise Stress Testing, Current Problems in Cardiology, Volume 37, Issue 5, May 2012. Online: www.cpcardiology.com/article/S0146-2806(11)00258-1/abstract
<urn:uuid:b325dcfa-1225-4b8d-8c14-1b99bcff0a6d>
CC-MAIN-2016-26
http://medicalxpress.com/news/2012-06-doctors-century-old-heart.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396949.33/warc/CC-MAIN-20160624154956-00177-ip-10-164-35-72.ec2.internal.warc.gz
en
0.97293
702
2.75
3
Electronic control in mobile equipment can consist of the following:

Inputs: These inputs can be defined as the user interface, and can consist of joysticks, potentiometers, operator panels, or other input devices.

Feedback inputs: These inputs can be defined as the machine interface, and can consist of pressure transducers, temperature sensors, flow sensors, velocity sensors or RPM sensors. When feedback inputs are used the system is described as “closed loop.”

Controller or ECU: This is the brains of the electronic control. It processes the inputs and converts them into a defined output to the hydraulic system. The controller also can have the ability to receive feedback inputs from machine sensors and attenuate its outputs accordingly. The controller can be factory-programmed or have the ability to be user-programmable to meet the specific needs of the application.

Outputs can be on/off voltage signals or proportional PWM signals to control the hydraulic valving. The controller can have the ability to engage in two-way communications with a bus system (for example: communication between the ECU and a display, or output signal to an input device).

ECU — ELECTRONIC CONTROL UNIT

ECUs were developed to replace the older “sequential relay circuits” that were used for machine control. The ECU works by measuring its inputs and, depending on their state, switching its outputs On or Off. The user enters setup instructions, usually via software, that will produce the desired results. Because many of the controller’s functions are user-programmable, an ECU has the versatility to be field-modified for changing applications or conditions. Some ECUs have the capability to convert analog inputs, process them digitally, and produce analog outputs. ECUs that do not have built-in converters require separate analog-to-digital converters to convert the input.

An analog signal is an AC or DC voltage or current, or resistive signal that varies smoothly and continuously.
In an analog system, a physical variable is represented by a proportional voltage that varies in correspondence with the physical variable. Electronic circuits that process analog signals are called linear circuits. An example of an analog device is a traditional-style clock that has hour and minute sweep hands that rotate around the dial. As an input, an analog signal can provide infinite resolution due to its wide frequency range.

Digital signals vary in discrete (discontinuous) values to represent information for input. A digital signal is normally in the form of a series of pulses that rapidly change from one distinct, fixed voltage level to another. An example of a digital device is a clock that displays the time in actual numerals that change in one-numeral increments. As an input, the resolution is dependent on the number of bits of information processed or available to the controller. In digital systems, physical variables are represented by numerical values using the binary (base 2) number system.

ELECTRONIC CONTROL PLATFORMS

Some of the different forms that mobile electronics can take are described in the chart shown on this page. There is an increasing complexity and cost as the controllers move from single-function analog controls to complete complex digital control systems. Digital vehicle controllers offer a high level of sophistication by executing a programmed sequence of functions that constantly control all motion parameters.

Electronic Control Technology Platforms for the Mobile Equipment Market:
Vehicle Control Systems (most complex, highest cost and value)
Vehicle Control Subsystems (higher complexity, cost and value)
Digital Control Products (intermediate complexity, cost and value)
Digital Signal Processor
Closed Loop Flow Controller
Fan Drive (with Sensor Input)
Analog Control Products (least complex, lowest cost and value)
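To make the closed-loop path described above concrete (operator command in, sensor feedback in, proportional PWM duty cycle out), here is a minimal simulation sketch. The function name, gain values, and first-order plant model are hypothetical illustrations only, not any particular ECU's firmware:

```python
def pwm_duty_cycle(command_rpm, measured_rpm, gain=0.0005, base_duty=0.5):
    """Proportional closed-loop control: convert the error between the
    commanded and measured value into a PWM duty cycle in [0.0, 1.0].

    A PWM output drives a proportional hydraulic valve harder or softer
    by varying the fraction of time the output signal is on.
    """
    error = command_rpm - measured_rpm   # feedback input vs. operator command
    duty = base_duty + gain * error      # proportional correction
    return max(0.0, min(1.0, duty))     # clamp to the valid PWM range

# Simulated control loop: a crude plant that responds sluggishly to the valve.
measured = 0.0
for _ in range(200):
    duty = pwm_duty_cycle(command_rpm=1800, measured_rpm=measured)
    measured += (duty * 3600 - measured) * 0.1  # first-order lag model
print(round(measured))  # 1800
```

On real hardware this loop would run on the ECU at a fixed rate, with the measured value read from an RPM sensor through an analog-to-digital converter.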
<urn:uuid:b3ab390f-7e0b-4258-a427-e97706ef42b5>
CC-MAIN-2016-26
http://hydraforce.com/Electro/ElecCont_html/3-420-1_Elec-Hyd_Intro/3-420-1_Elec-Hyd_Intro.htm
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391766.5/warc/CC-MAIN-20160624154951-00065-ip-10-164-35-72.ec2.internal.warc.gz
en
0.869573
768
3.546875
4
A baby playing with bubbles. - The definition of a bubble is a thin liquid that forms into a ball around air or gas, a tiny ball of air or gas in a liquid, or something in this shape. - An example of a bubble is a thin ball of soap; a soap bubble. - An example of a bubble is the carbonation in a soda. - Bubble means to make or form into thin balls of liquid or foam, or to make a boiling sound. - An example of bubble is for boiling water to start forming little balls on the surface of the water. - An example of bubble is making a popping sound such as the sound made by a pot of tomato sauce that is boiling on the stove. - a very thin film of liquid forming a ball around air or gas: soap bubbles - a tiny ball of air or gas in a liquid or solid, as in carbonated water, glass, etc. - anything shaped like a bubble, sphere, or hemisphere, as a plastic or glass dome - anything that is ephemeral or insubstantial - any idea, scheme, etc. that seems plausible at first but quickly shows itself to be worthless or misleading - a condition or period of extreme overvaluation, as in the market for stocks or real estate, resulting from wildly speculative buying - the act, process, or sound of bubbling Origin of bubble: Middle English bobel, of echoic origin, originally, as in Middle Dutch bubbel - to make bubbles; rise in bubbles; boil; foam; effervesce - to make a boiling or gurgling sound Origin of bubble: ME bobelen - to form bubbles in; make bubble - ⌂ Informal to cause (a baby) to burp - to overflow, as boiling liquid - to be unrestrained in expressing one's enthusiasm, zest, etc. on the bubble⌂ - a. A thin, usually spherical or hemispherical film of liquid filled with air or gas: a soap bubble. b. A globular body of air or gas formed within a liquid: air bubbles rising to the surface. c. A pocket formed in a solid by air or gas that is trapped, as during cooling or hardening. - The sound made by the forming and bursting of bubbles.
- Something insubstantial, groundless, or ephemeral, especially a fantastic or impracticable idea or belief: didn't want to burst the new volunteers' bubble. - Something light or effervescent: “Macon—though terribly distressed—had to fight down a bubble of laughter” (Anne Tyler). - a. A usually transparent glass or plastic dome. b. A protective, often isolating envelope or cover: “The Secret Service will talk of tightening protection, but no President wants to live in a bubble” (Anthony Lewis). - a. A usually oval outline, as on a ballot or a standardized test form, intended to be filled in using a pencil or pen. b. A rounded or irregularly shaped outline containing the words that a character in a cartoon is represented to be saying. - Economics An increase in the price of a commodity, investment, or market that is not warranted by economic fundamentals and is usually caused by ongoing investment or speculation in the expectation that the price will increase further. intransitive verb bub·bled, bub·bling, bub·bles - To form or give off bubbles: soup bubbling on the stove. - To move or flow with a gurgling sound: a brook bubbling along its course. - a. To rise to the surface: gas bubbled up through the swamp water. b. To become active or intense enough to come into prominence: “Since then, the revolution has bubbled up again in many forms” (Jonathan Schell). - To display irrepressible activity or emotion: The kids were bubbling over with excitement. Origin of bubble: From Middle English bubelen, to bubble.
- 1749, Henry Fielding, Tom Jones, Folio Society 1979, p. 15: “For no woman, sure, will plead the passion of love for an excuse. This would be to own herself the mere tool and bubble of the man.”
- (figuratively) The emotional and/or physical atmosphere in which the subject is immersed; circumstances, ambience.
- (Cockney rhyming slang) a Greek (also: bubble and squeak)
- A small, hollow, floating bead or globe, formerly used for testing the strength of spirits.
- The globule of air in the spirit tube of a level.
- Anything lacking firmness or solidity; a cheat or fraud; an empty project.
(third-person singular simple present bubbles, present participle bubbling, simple past and past participle bubbled) Partly imitative, also influenced by burble.

bubble - Investment & Finance Definition
Markets that rise significantly above what rational expectations would dictate. Recently, the stock market run-up in the late 1990s that ended in 2000 is cited as an example of a stock market bubble. Historically there have been many bubbles, such as the South Sea Bubble and the Dutch Tulip Bubble. See also Tulipmania and South Sea Company Bubble.
http://www.yourdictionary.com/bubble
Written Culture in a Colonial Context. Africa and the Americas, 1500-1900

Citation: Boston, Brill, 2012, African History (Brill Academic Publishers), 2

The circulation of manuscripts and books between different continents played a key role in the process of the first globalization from the 16th century onwards. This book explores the extent to which the control over the materiality of writing has shaped the numerous and complex processes of cultural exchange during the early modern period. Delmas and Penn bring together two fields of research in this collection of 15 papers: the history of written culture and the history of European colonial expansion in Africa and the Americas between the 16th and 19th centuries. Topics include rock art, scripts, and proto-scripts in Africa; the introduction of alphabets to Mexican tlahcuilos (scribes) of the 16th century; representations of geographical knowledge of Ethiopia in the 16th and 17th centuries; literary-historical polemic in colonial Cape Town circa 1880-1910; Mapuche-Tehuelche Spanish writing and Argentinian-Chilean expansion during the 19th century; literacy and land practices at the Bay of Natal colony; black history and the Afro-Cuban codex of José Antonio Aponte; ministers of religion and written culture at the Cape of Good Hope in the 18th century; and occurrences and eclipses of the myth of Ulysses in Latin American culture.

Table of Contents:
- Foreword: Writing at Sea / Isabel Hofmeyr
- Introduction: the written word and the world / Adrien Delmas
- Rock art, scripts and proto-scripts in Africa: the Libyco-Berber example / Jean-Loïc Le Quellec
- From pictures to letters: the early steps in the Mexican tlahcuilo's alphabetisation process during the 16th century / Patrick Johansson
- Edmond R. Smith's writing lesson: archive and representation in 19th century Araucanía / André Menard
- Missionary knowledge in context: geographical knowledge of Ethiopia in dialogue during the 16th and 17th centuries / Hervé Pennec
- From travelling to history: an outline of the VOC writing system during the 17th century / Adrien Delmas
- Towards an archaeology of globalisation: readings and writings of Tommaso Campanella on a theological-political empire between the Old and the New worlds (16th-17th centuries) / Fabián Javier Ludueña Romandini
- Charlevoix and the American savage: the 18th-century traveller as moralist / David J. Culpin
- Written culture and the Cape Khoikhoi: from travel writing to Kolb's 'full description' / Nigel Penn
- Nothing new under the sun: anatomy of a literary-historical polemic in colonial Cape Town circa 1800-1910 / Peter Merrington
- Mapuche-Tehuelche Spanish writing and Argentinian-Chilean expansion during the 19th century / Julio Esteban Vezub
- To my dear minister: official letters of African Wesleyan evangelists in the late 19th-century Transvaal / Lize Kriel
- Literacy and land at the Bay of Natal: documents and practices across spaces and social economics / Mastin Prinsloo
- The 'painting' of Black history: the Afro-Cuban codex of José Antonio Aponte (Havana, Cuba, 1812) / Jorge Pavez Ojeda
- On not spreading the Word: ministers of religion and written culture at the Cape of Good Hope in the 18th century / Gerald Groenewald
- Occurrences and eclipses of the myth of Ulysses in Latin American culture / José Emilio Burucúa

Previously published: UCT Press, 2011
http://cadmus.eui.eu/handle/1814/23526
The slope of the yield curve (the difference between yields on ten-year and three-month US Treasuries) is often looked to as an indicator of the future direction of the US economy. When the curve is steep, the environment is conducive to economic growth; when the curve flattens, or turns negative, you can expect the economy to slow down, or even contract. The chart below shows the change in the yield curve since the start of 2009. At a current level of 336 basis points, the curve is currently near the high end of its recent range. Given the fact that the curve steepened to near record levels during the credit crisis, the current level also ranks in the 95th percentile of all readings going back to 1962. Some have argued that the yield curve is no longer relevant as a reliable indicator, given that near record steepness has been accompanied by only a slow and gradual recovery. That argument may certainly have some merit; however, we would caution that back in 2006, when the yield curve last inverted, there was also a widespread view that it was no longer relevant because the S&P 500 kept rallying. While it took some time back then for the curve to make its presence felt, when it finally did, the economy and the markets cratered. Along those same lines, we would caution that even as the economy continues to recover and grow, investors who ignore the yield curve do so at their own peril.
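The two calculations in the paragraph above (the ten-year/three-month spread in basis points, and its percentile rank against history) can be sketched in a few lines of Python. This is an illustrative sketch only: the yields and the spread history below are hypothetical placeholders, not the article's actual data.

```python
def curve_spread_bps(ten_year_yield, three_month_yield):
    """Yield-curve steepness in basis points (1 bp = 0.01 percentage point).

    Yields are given in percent, e.g. 3.61 means 3.61%.
    """
    return (ten_year_yield - three_month_yield) * 100

def percentile_rank(history, value):
    """Share of historical readings at or below `value`, as a percentile."""
    at_or_below = sum(1 for spread in history if spread <= value)
    return 100.0 * at_or_below / len(history)

# Hypothetical yields: 10-year at 3.61%, 3-month at 0.25%.
spread = curve_spread_bps(3.61, 0.25)   # about 336 bps

# Hypothetical stand-in for a monthly history of spreads since 1962 (in bps).
history = [-50, 20, 80, 150, 210, 260, 300, 320, 330, 340]
rank = percentile_rank(history, spread)  # -> 90.0: above 9 of the 10 readings
```

With real data one would feed in the full monthly series since 1962; `scipy.stats.percentileofscore` implements the same ranking, with options for handling ties.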
http://seekingalpha.com/article/260476-yield-curve-remains-near-record-highs
Spirit at the Solstice
by Martin E. Marty

Martin E. Marty recently wrote Modern American Religion (Vol. 2): The Noise of Conflict. His article is excerpted from his book A Cry of Absence, to be published in January by Harper & Row. Reprinted by permission. This article appeared in the Christian Century, December 22-29, 1982, p. 1307. Copyright by the Christian Century Foundation and used by permission. Current articles and subscription information can be found at www.christiancentury.org. This material was prepared for Religion Online by Ted & Winnie Brock.

A wintry heart: a “being toward death.”

Winter, the season when a pole of the earth slants farthest from the sun, finds the shadows longest at noon. Somewhere around December 21 in the northern hemisphere, or June 21 in the southern, the tilt is the greatest. The long chill begins. Before that time, in the moderate climates, some snow will have fallen to follow the dropping of leaves, but the illusions of postponement can be extended. The slant, the tilt, the time of solstice means the end of illusion: winter has arrived. Winter is more than dormancy; it is a dying. Poets who have springtime in view make much of the way the earth needs annual sleep. Animals hibernate. Trees rest to draw strength for a new bursting of buds. Nature is quiet, but will become vital again. Poets are also beings who are aware of standing on a horizon that opens toward death. The leaves that have dropped from the living trees will never return. Though at first the smell of their decay was sweet, it turned acrid. In the dryness of late autumn, they disintegrated. The twigs are gone. No poetry or wishing will return them. Though some animals will revive from winter's sleep, others have gone to their caves as to their graves. They disappear, and when spring comes the wanderer ponders: What happened to all the carcasses? Where are the dry bones?
A wintry sort of spirituality does not literally trace the cycles of the seasons and is not a weather report or an observation on the climate. This spirituality treats winter as a metaphor or an image of the heart and soul. The wintry image, because it represents more than dormancy -- death -- forces an urgent theme on the spiritual seeker. The search for a piety does not permit evasion of the central issue of life: its “being toward death.” Every Yes hereafter has to be made in the face of “ceasing to be,” as the world ordinarily knows being. The wintry sort of spirituality, says Karl Rahner, promotes solidarity with those whose horizon excludes God. For those who are serious about this excluding, the mystery of death is the great determiner of their distance. Awareness of a probable “ceasing to be” and inability to say Yes in the face of it combine to lead them away from any desire to reckon with God. Whoever says God has chosen to imply goodness and power. If there is goodness and power, why is there death or the pain and suffering usually associated with it? Did God cause the death? If so, where is goodness and love? Did God have the power to abolish death and not use the power? That version advances nothing, for where, even then, are goodness and love? The third option seems hardly more attractive. It holds that God may be powerless to work life in the face of natural death. Why bother with a God too weak to create and sustain what matters most to every person? The questions will not be suppressed. Committed Christians who fuse horizons with those of the godless know they cannot evade the questions. Attempts to smuggle the reverent unbeliever into the kingdom by calling her an “anonymous Christian,” as Karl Rahner would, meet with opposition. 
The resisters say, in effect, “Don’t baptize me with terminology where you cannot reach me with water.” A classic illustration of this was when some French Dominicans came to admire the writings and character of the novelist Albert Camus. They wanted to find ways to include him anonymously in their camp but found him drawing back. Finally he addressed them, inviting dialogue. They should remain Catholic and he agnostic, he said, staying in their separate camps. If Camus could make sense of a God who permitted babies to die, he could find the Christian scheme attractive. Since he could not, their faith remained fundamentally unattractive, whatever lesser benefits it might bring. Not often do believers have the opportunity to engage those beyond their own horizon that includes God, the way those Dominicans encountered Camus. In recent generations the nonbelieving community has tended serenely to ignore the claims of faith. When they see evidence in the media of a commercialized and trivial entertainment business in the name of faith, the creatively godless shrug their shoulders. They have better things to do than to pay heed to such voices and options. These mean no more to them than would popular astrology to most Christians. Entertaining Christianity, though it employs a language that includes death, is in a hurry to pass over the Good Friday story in order to reach Easter. Such a diversion does not allow the reality of death to loom. Summery-spirited Christians know that they will have to die along with everyone else. The resurrection of Jesus Christ from the dead, since it took the sting out of death, in their eyes. removes some of the seriousness of death. Mentions of rewards for the graced life after death are frequent, but seldom is there a walking through the stages that lead to death. Since much summery Christianity concentrates on healing, it must bring up the fact of disease. 
Here again, however, the theme can be trivialized, because in the popular books or on the television programs, one comes to meet only with the success stories. People throw their crutches away; the cancer cells miraculously disperse; life is prolonged. Well and good, think those of the wintry sort, but prolonging does not do away with it. Death remains. The leaves fall, decay and disappear. The heart knows, and demands a listening to its confusion. Believers who find themselves at home with the wintry sort have been so voiceless that those who exclude God from their horizon are hardly aware of those who hold to faith. When the believing community begins to produce realists on the subject of death, people who take it seriously on their own terms, their act occasions surprise. Death up close is not a present reality for most of the young in a modern society. The architecture of housing cooperates: there is no upstairs room for grandma in the ranch home, no corner for grandfather in the apartment or condominium. If a member of the senior generation dies, the grandchild is at a distance. After death, the body of the elder is usually disguised by the cosmetic aspects of the funeral parlor art. Only professionals are physically near the dying: the cleric, the medical staff, the funeral parlor director. Even they find safeguards against having to make death personal: they can pull screens and use sanitation and monitoring devices to keep the dirtiness of death distant. Busy calendars and schedules remove the opportunities for reflection, and there is even new terminology to shroud the realistic language about death. Once upon a time people died in the community, but now they die alone. All this has a bearing on death for the dying one, but it also changes the nature of the surviving community. An author writes lines like these as if with a kind of automatic impulse. He fears that they will seem lifeless because they have been repeated so often. 
Yet he must move one step further and risk more repetition by entering into the present record the observation that death is supposed to have become a taboo subject. A yawn comes easily when one hears one more time that in the 19th century sex was taboo and death was a popular subject, whereas in the 20th everyone can talk about sex, and almost no one brings up death. Recently there has been some change in this context, and books on death have sometimes even become best-sellers. Seminars on the subject attract wide notice. Still, Christian parish leaders report that however voguish the topic may be, it does not attract audiences or readerships that have any staying power. Even where the talk about death and dying comes with ease, a certain taming of the subject occurs. Many a therapist can rattle off the “stages of death and dying” as outlined by Dr. Elisabeth Kübler-Ross or her competitors or successors. To put a name on stages, however, does something to make them more domestic and safe. Such naming is a part of therapy; why criticize the healing impulse? What a culture gains in therapy it may lose in its grasp of soul. Death has its own power. It also resists being graded, located between strata of life or seen following neat stages. Death comes with the brutal crash of steel as autos meet on freeways, through the quiet slit of steel with a razor at the wrist, with the thieving suddenness of a coronary occlusion, with the silent stealth of infant crib death, with the adding-machine efficiency of genocide, or with the plotlessness of invisible mass killings in Kampuchea. The naming of stages does little to prepare a person for the many ways that the last enemy uses to attack. John Dunne found from reading spiritual biographies that somewhere around age 29 or 30, profound people pass to a new stage of awareness. They appropriate the horizon that death creates. 
People who have watched television death for entertainment or who have read objectively about dying begin seeing subtle changes in their own bearing. They have begun their new move toward autumn. Doors close, options narrow. By now they have been fated to follow one vocation instead of another. They have vowed to spend life with this mate and not that one. As they mature, they hear the commentators describe athletes or mathematicians of their age as being “over the hill.” Through the centuries, cultures and the dictates of the body have worked similar effects on diverse people. They move from observing death at a distance to reckoning with its possibility close at hand. Some of the deep religious experiences of conversion occur at about that time. The journals of spirituality, especially those of a wintry sort, start to be written at that age. If Dunne is reckoning correctly, the age when people take death seriously as a personal reality, a point that today occurs before midlife, was near the end of the average life span for people in the ancient world. If the Psalms are to serve as a text that discloses a creative way of being in the face of death, it is important for a reader to remember how close death was to everyone in the original context of the book. No actuaries kept statistics on the people of the era. Through complex means of calculation there can now be educated guesses. The young person of today can expect to reach Psalm 90's minimum of “threescore years and ten.” When the Psalms were written, few could look ahead so far. In ancient Greece and Rome the average span was believed to be just over 20 years, in medieval England about 33. Two centuries ago it had risen only to 36. In the Psalms, all views of death had to reflect its closeness. The earthy naturalness of Old Testament stories reveals the nearness of death through battles, floods, accidents, miraculous disasters, or any number of other causes.
Death was both a part of nature taken for granted and a punishment for evil, the result of God's activity. To make sense of the psalmic attitudes toward death, it is important to set a larger Old Testament context. The writers of the Psalms confronted death but saw through it to life because in death they saw God. This notion seems startling because the Psalms have so little to say in general and nothing concrete to say about a positive mode of being in afterlife. Sheol, the abode of the dead, was attractive neither to visit nor to take up residence in. Sheol allowed for no visitors, and none ever returned from it. This shadowy world was, and in retrospect is, a horror without mitigation. The language of winter is too serene for Sheol's miry, murky landscape. And yet, one must say, despite the language of “ceasing to be” or going to Sheol, there is a Yes in the face of such language in the depth of Hebrew piety. One angle of vision comes from subsequent styles of Judaism. Two lines from eastern European 18th-century Hasidic Judaism reflect the long afterglow of this vision. Rabbi Zalman, one of the great successors to Hasidism's founder, Baal Shem Tov, was said to have interrupted his prayers to say of the Lord: “I do not want your paradise. I do not want your coming world. I want you, and you only.” This was in the spirit of his predecessor, who said, “If I love God, what need have I of a coming world?” Such language is not likely to satisfy moderns who wish a more open future. It is an important first word for those who have only utilitarian views of God. In the world of the practical, God is loved for the sake of one's self, for the self's purposes, and for the yield of this relation in the reward of eternal life. The ancient Hebrew loved God for the sake of a long life in which to enjoy creation, but she also was to love the Lord for the Lord's sake.
The Christian tradition in its vital years picked up something of this sense of the love of God and of trust in the divine ways wherever they lead. From the tradition of Bernard of Clairvaux in the Middle Ages there survives the story of a woman seen in a vision. She was carrying a pitcher and a torch. Why these? With the pitcher she would quench the fires of hell, and with the torch she would burn the pleasures of heaven. After these were gone, people would be able to love God for God's sake. Here, as so often in Hebrew thought, a regard for the intrinsic character of God and of divine trustworthiness shines through. A believer shifts away from a bartering concept in which one loves God for the sake of a transaction. Now there is a relation in which the trusting one is simply reposed in the divine will. The journey through the season after solstice in the heart will take on purpose and become bearable. The Hebrew Scriptures from page one prepare the seeker for such an embodiment of God. Genesis asserts that God antedates the beginnings. The Scriptures have room for two different creation stories on the first two pages and for others in the Wisdom literature, in Job 38-39, and elsewhere. No one can collate these stories in order to deduce a scientific account of how the world began; the texts have a different purpose. While some religions have no interest in beginnings, faith within the Hebrew Scriptures insists that the Creator is the Lord of all that follows -- including death. Wonder over the universe of nature is a derived wonder. Such awe exists not for its own sake but because God is the agent. The world and the holy as such, the seasons of weather and the heart as such, are not of intrinsic but of derived value. Everything depends on trust in the Creator. God's world included human mortality. According to the Eden stories, death came as a punishment for disobedience. More than disobedience was involved, nevertheless.
Death was the marking line drawn between the divine and the human, between created people and the Creator God. Adam and Eve chose to strive to be immortal and to have knowledge. These both belonged only to God, who would have given creatures one but not both of them. People became mortal, but they had knowledge. This knowledge is what inspired the Hebrew drama and reflects itself in the Psalms. People now do not live to be immortal, but they have knowledge. Death, therefore, though a punishment, is also a simple fact that defines the creature over against the living God. The human now knows something of “good and evil,” life and death. Eden meant ignorant immortality. After Eden comes informed mortality. The Psalms frequently remember that death is a punishment for disobedience, but more often they are matter-of-fact, and the punishment idea is lessened. Responsibility for living replaces the consciousness of punishment. Humans are not to be beasts, nor to live like beasts, for they are still in dumbness, in ignorance. The knowledge of death, for all the grimness of realism it introduces to life, is what gives daily and yearly existence meaning. Humans no longer have immortality, but they have history, memory and hope. Remembering is the root of trust, hoping is the center of faith. Although Sheol is a threat with whose horrendousness we moderns cannot cope, death itself in the Psalms is not mere enemy. God does not act like an Oriental potentate who enjoys humans, like puppies, tumbling before his throne until it suits a divine whim to have them killed. Death, indeed, is not the result of a whim but is to define what is human. Astonishingly, then, in this concept, death is not simply evil any more than winter is evil in the passing of the year. Death is not a reality designed to call humans to refuse the enjoyment of living. Death, the definer, gives meaning to life and history. It is an instrument that helps provide meaning for daily existence. 
In the rabbinic versions of immortality, God finally evens accounts for the righteous. Among other things, they can resume relations that they had known on earth. The Scriptures are never very clear, however, about such reunions. Jacob does not expect to meet his missing and presumed-dead son, Joseph. Sheol is a zone of darkness and chaos in which such a meeting would be meaningless, indeed impossible. Sheol is thus no match for Greek or later rabbinic immortality or Christian resurrection. The Hebrew Scriptures have no language of bliss after life. They give no voice to a hope for a creation that is reflected back into the old world, thanks to the values of life after death. Sheol follows life but does not serve as an afterlife. It has been said that Sheol is better seen as an afterdeath. Sheol never makes its appearance for comfort, to inspire a different mode of living now. Afterdeath is a winter from which no spring emerges, after which no summer invites. Because the Hebrews of the type who wrote the Psalms concentrated on the trustworthiness of God and not on the gift of afterlife, they did not ask for more living after death. They pleaded with God as the giver of life to endow the meaning of their seasons with value. Seldom do they stand in the divine marketplace and bargain for a life to come, though some do haggle for more years. God as the Lord of life matters more than their ego and their survival. Even to record such conclusions of biblical scholars may contribute to the wintry chill that reaches many corners of the texts. God simply keeps asking creatures questions that admit of no easy answer. These impel them into full and busy lives in the light of a divine purpose whose extent remains finally unknowable. Unknowability does not in the end mean silence. God, who is personal, addresses humans and expects response. The drama of daily living results from that conversation. Later Judaism could not leave things so wintry.
The Talmud quotes a rabbi: “The end purpose of everything our Mishna has described is the life of the world to come.” The motive of the rabbis was less to elaborate on the sketchy traces of belief in afterlife within the Bible than to make sense of their faith in the justice of God. God, they thought, had to do more than the Scriptures revealed, in order to even out the injustices of this world. Such rabbinic notions introduce a springtime. The Psalms leave those who pray them with a winter radiated by awareness of the divine power over both death and life. The rabbinic teaching on immortality that tempered the psalmic faith began to offer an escape from death by an escape after death. Jewish messianism, because it proposed a purpose to future history, also qualified the old faith of psalmists in the trustworthiness of a God who made each day meaningful on its own terms. After centuries of rabbinic teaching on immortality, Jewish faith in messianism, and Christian witness to the resurrection, it is hard to pull the screen back down to cloud the future as the Psalms did. Moderns reflect on the past as if all people in it faced death with equanimity because they believed in recompense in a life to come. That belief appeared in the interval between the era of the Psalms and our own. Faith in a life to come has by now disappeared from the consciousness of many. They live under the confining canopy of their unguided years. Novelist John Updike (in the New Yorker of January 11, 1982, p. 95) refers to this interval “when death was assumed to be a gateway to the afterlife and therefore not qualitatively different from the other adventures and rites of passage that befall a soul. . . . Most men until modern times prepared for and enacted their own dying” with a sense of calm, even matter-of-factness. Somehow Updike implies an affirmation in writing that at best says, “Blackness is not all.” Blackness of winter night dominates modern literature and consciousness.
Lacking both faith in an afterlife and trust in a Lord of life, those who exclude God from their temporal horizon are left then only with the pain, never with value or meaning. Updike cites an example from a diarylike novel by Lars Gustafsson, The Death of a Beekeeper, to suggest the measure of pain. This journal records the last days of a Swedish beekeeper who, as he is dying of cancer in 1974, isolates himself in his beloved cottage. The novel is at home with wintriness:

It was gray, pleasant February weather, fairly cold and hence not too damp, and the whole landscape looked like a pencil sketch. I don't know why I like it so much. It is pretty barren and yet I never get tired of moving about in it.

Such prose could be translated back into the language of Psalms, which also allow for at-homeness in a bleak landscape. The diary, however, records the winter night, the pain without relief. One can only enter the novel by reading into it a prolongation of the brief stabs and piercings that almost everyone has had momentarily in, say, the dentist's chair. Can we find a God worthy of trust on this horizon?

What I have experienced today during the late night and in the early hours of the morning, I simply could not have considered possible. It was absolutely foreign, white hot and totally overpowering. I am trying to breathe very slowly, but as long as it continues, even this breathing, which at least in some very abstract fashion is supposed to help me distinguish between the physical pain and the panic, is an almost overpowering exertion. . . .

The reader imagines herself in such a winter night, without promise of relief, without a responsive God to break the silence:

This white hot pain, naturally, is basically nothing but a precise measure of the forces which hold this body together. It is a precise measure of the force which has made my existence possible. Death and life are actually MONSTROUS things.

Death and life become monstrous because dying is monstrous.
Death is no longer the divider from God that defines humanness, life, and thus the good. A sufferer is left with mere breathing to divide and define pain and panic. A summery faith of the exuberant sort moves rapidly past such pages. Self-help philosophies address other aspects of life. They not only fall silent in the face of such pain but refuse to hear the cries of pain uttered. On such terms, sunny styles of religion cannot serve as a basis for any solidarity of experience with those whose horizon excludes God. On that horizon, nevertheless, is a faithful reporting of the human condition.
http://www.religion-online.org/showarticle.asp?title=1360
Oaxaca - Land of the Zapotecs and Mixtecs

Oaxaca de Juarez, the capital of the state of Oaxaca, in southern Mexico, lies at an altitude of 1,534 m (5,034 ft) in an area of gold and silver mines. It has a predominantly ZAPOTEC population of 212,943 (1990). Oaxaca's industries produce textiles--including handmade serapes--as well as pottery, gold and silver jewelry, and leather goods. An important source of income is tourism. Famous pre-Columbian ruins are located nearby: 42 km (26 mi) to the southeast are the MIXTEC ruins at MITLA, and 5 km (3 mi) to the west is the complex of MONTE ALBAN, a center of ancient Zapotec culture. Among the colonial buildings in the city proper, the most notable are a 17th-century cathedral and the church of Santo Domingo, begun about 1575. The birthplace of the Mexican presidents Porfirio Diaz and Benito Juarez, Oaxaca is the seat of the Benito Juarez University of Oaxaca (f. 1827; university status, 1955). Founded in 1486 as an Aztec garrison, Oaxaca was taken by the Spanish in 1522. The city was captured by Mexican revolutionaries in 1812.
<urn:uuid:24a1ae89-c8c7-48be-b995-e3f06a36a70c>
CC-MAIN-2016-26
http://www.indians.org/welker/zapotec.htm
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783400572.45/warc/CC-MAIN-20160624155000-00021-ip-10-164-35-72.ec2.internal.warc.gz
en
0.912755
336
2.9375
3
Kulling, Monica Going Up! Elisha Otis’s Trip to the Top, illustrated by David Parkins. Tundra Books, 2013. $18. Content: G. PICTURE BOOK. NON-FICTION. From his time as a boy on the farm, Elisha Otis loved watching things go up and down, including the hay hoist, which used woven ropes that would often snap. Otis hauled goods and ran a gristmill, but it wasn’t until he moved to Albany, New York, that his talent for designing machinery made itself known. His first design for a hoisting platform was for moving machine parts, but the idea of moving people was not far behind. It wasn’t until the New York World’s Fair that Otis was able to demonstrate that his invention would be safe for people as well as inanimate objects. Now people could build buildings higher than six floors. The turn-of-the-century drawings are just right for the true story out of the mid-1800s. I am not sure modern children will understand why the safety brake was such an important invention, because elevators are so ubiquitous in their world, but it is a great look at the beginnings of the industrial age. EL – OPTIONAL. Cindy, Library Teacher
<urn:uuid:5731eb0c-308e-449e-9e78-5b72c27d4707>
CC-MAIN-2016-26
http://kissthebook.blogspot.com/2013/07/going-up-elisha-otiss-trip-to-top.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783392069.78/warc/CC-MAIN-20160624154952-00045-ip-10-164-35-72.ec2.internal.warc.gz
en
0.975073
278
3.234375
3
- Principle of Credulity - in the absence of any reason to disbelieve it, one should accept what appears to be true (e.g., if one sees someone walking on water, one should believe that it is occurring) - Principle of Testimony - in the absence of any reason to disbelieve them, one should accept that eye-witnesses or believers are telling the truth when they testify about religious experiences. As you might imagine, Swinburne is really unhappy with the New Atheists because they ignore all his sophisticated apologetics and simply ask for evidence of God. That's not playing fair. They probably haven't read any of his books. There ought to be a rule for people who claim to have sophisticated arguments for the existence of god(s). They should have to describe at least one of them. [Hat Tip: Uncommon Descent]
<urn:uuid:dd0a2f40-dd09-4fa0-9ca2-8d51dd7e83ad>
CC-MAIN-2016-26
http://sandwalk.blogspot.com/2012/10/breaking-news-new-atheists-arent-very.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395039.24/warc/CC-MAIN-20160624154955-00061-ip-10-164-35-72.ec2.internal.warc.gz
en
0.961791
181
2.875
3
The War in the Air - Summary of the Air War When Archduke Ferdinand was assassinated on the 28th of June 1914, it was just over a decade since the Wright brothers' first twelve-second flight at Kitty Hawk. In the intervening years advances in range and reliability proved that the airplane was a viable, if still somewhat exotic, means of transport. In 1909 Bleriot made the first flight across the English Channel. In 1913 Roland Garros made the first cross-Mediterranean flight, from the south of France to Tunisia. There was also, in this period, some initial understanding of the military implications of the airplane. After Bleriot's flight H. G. Wells was to write, prophetically, that "…this is no longer, from a military point of view, an inaccessible island." In 1911 the Italians, at war with Turkey in Libya, became the first to make military use of the airplane, dropping grenades from a German-built monoplane. In 1912 they also dropped bombs from an airship. When war broke out the number of aircraft on all sides and all fronts was very small. France, for example, had fewer than 140 aircraft at the start of the war. By the end of the war she fielded 4,500 aircraft, more than any other protagonist. While this may seem an impressive increase, it does not give a true indication of the number of aircraft involved. During the war France produced no less than 68,000 aircraft. 52,000 of them were lost in battle, a horrendous loss rate of 77%. The period between 1914 and 1918 saw not only tremendous production, but also tremendous development in aircraft technology. A typical British aircraft at the outbreak of the war was the general purpose BE2c, with a top speed of 116 km/h (72 mph). Powered by a 90 hp engine, it could remain aloft for over three hours. By the end of the war aircraft were designed for specific tasks. Built for speed and manoeuvrability, the SE5a fighter of 1917 was powered by a 200 hp engine and had a top speed of 222 km/h (138 mph).
Britain's most famous bomber, the Handley-Page O/400, could carry a bomb load of 900 kg (2,000 lb) at a top speed of 156 km/h (97 mph) for flights lasting eight hours. It was powered by two 360 hp engines. In 1914 it was important that aircraft be easy to fly, as the amount of training that pilots received was minimal, to say the least. Louis Strange, an innovative pilot from the opening stages of the war, was an early graduate of the RFC (Royal Flying Corps) flight school. He began flying combat missions having completed only three and a half hours of actual flying time. For this reason aircraft were designed for stability. By the end of the war stability had given way to manoeuvrability. The famous Sopwith Camel was a difficult aircraft to fly, but supremely agile. Not only did aircraft become faster, more manoeuvrable and more powerful, but a number of technologies that were common at the start of the war had almost disappeared by the end of it. Many of the aircraft in 1914 were of "pusher" layout. This is the same configuration that the Wright brothers used, where the propeller faced backwards and pushed the aircraft forward. The alternative layout, where the propeller faces forwards and pulls the aircraft, was called a "tractor" design. It provided better performance, but in 1914 visibility was deemed more important than speed. World War One marked the end of pusher aircraft. Another technology that scarcely survived the war was the rotary engine. In this type of engine the pistons were arranged in a circle around the crankshaft. When the engine ran, the crankshaft itself remained stationary while the pistons rotated around it. The propeller was fixed to the pistons and so rotated with them. Rotary engines were air-cooled, and thus very light. They provided an excellent power-to-weight ratio, but they could not provide the same power that the heavier in-line water-cooled engines could.
Although they remained in use throughout the war, by 1918 Sopwith was the last major manufacturer still using them. The rapid pace of technological innovation was matched by a rapid change in the uses to which aircraft were put. If in 1914 there were few generals who viewed aircraft as anything more than a tool for observation and reconnaissance (and many of them had great reservations even about that use), by the end of the war both sides were integrating aircraft as a key part of their planned strategies. While the plane did not play the decisive role that it was to play in later conflicts, the First World War proved its capabilities. It was during this period that the key tasks that aircraft could perform were discovered, experimented with, and refined: observation and reconnaissance, tactical and strategic bombing, ground attack, and naval warfare. With the growing importance and influence of aircraft came the need to control the air, and thus the fighter was born. Article contributed by Ari Unikoski Did you know? A "dogfight" signified air combat at close quarters.
<urn:uuid:9a8ccf4b-2291-4f85-b937-20a0208c26d4>
CC-MAIN-2016-26
http://www.firstworldwar.com/airwar/summary.htm
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398869.97/warc/CC-MAIN-20160624154958-00180-ip-10-164-35-72.ec2.internal.warc.gz
en
0.981851
1,059
3.25
3
The NY Times recently ran an article about “proxemics,” a term that was introduced by anthropologist Edward T. Hall in 1963 to describe the phenomenon that social distance between people can reliably be correlated with physical distance. Hall identified four types of human-to-human distance:
• intimate distance for embracing, touching or whispering (15-45 cm, 6-18 inches)
• personal distance for interactions among good friends (45-120 cm, 1.5-4 feet)
• social distance for interactions among acquaintances (1.2-3.5 m, 4-12 ft)
• public distance used for public speaking (over 3.5 m, 12 ft)
Proxemics have been the subject of a host of studies by environmental psychologists, and recently the field has extended to virtual realities: Game developers now look at proxemics, and a study that observed the avatars of participants in Second Life found that some of the avatars’ physical behavior was in keeping with studies about how humans protect their personal space. Obviously, personal space matters and maintaining the integrity of one’s comfort zone is a basic human instinct. Although the right amount of personal space may vary from culture to culture, the urge to protect our public privacy against the elbow, spit, and cell phone chatter of others is universal. “If you videotape people at a library table, it’s very clear what seat somebody will take,” The Times cites a proxemics researcher, “one of the corner seats will go first, followed by the chair diagonally opposite because that is farthest away.” The article also refers to a TripAdvisor survey from April in which travelers indicated that if they had to pay for certain amenities, they would rather have larger seats and more legroom than massages and premium food (a current advertisement for Eos Airlines, which flies between New York and London, is promoting the fact that it offers passengers “21 square feet of personal space”).
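Hall's four zones form a simple ordered classification, so they are easy to express in code. The sketch below is purely illustrative, using the metric thresholds from the list above (0.45 m, 1.2 m, 3.5 m) as zone boundaries:

```python
# Classify an interpersonal distance (in metres) into one of
# Edward T. Hall's four proxemic zones, using the thresholds above.
def proxemic_zone(distance_m):
    if distance_m < 0.45:
        return "intimate"
    elif distance_m < 1.2:
        return "personal"
    elif distance_m < 3.5:
        return "social"
    else:
        return "public"

print(proxemic_zone(0.3))   # intimate
print(proxemic_zone(2.0))   # social
```

A game or simulation could use such a function to decide, for example, when one avatar has intruded on another's personal space.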
Paco Underhill, the author of “Why We Buy: The Science of Shopping,” contends that most consumers walk away from whatever they are looking at in a store if another person inadvertently brushes against their backside, disturbing their personal space. Furthermore, researchers explain the success of the iPod by pointing to people's need to create exclusive private comfort zones in public space. This sounds plausible, and in fact, I wonder what percentage of music player, game console, PDA, and cell phone sales can be attributed to consumers’ quest for overcoming unwelcome intimacy. As people typically avoid eye-contact in elevators, subway trains, and in other forced pseudo-intimate social situations, they find devices desirable that distract them from paying “social attention capital.” This is especially true for highly stressful situations such as waiting in public, when the whole room seems to stare at you, pitying you for being alone and having no reason to be there in the first place. In fact, I sometimes play with my BlackBerry although I don’t expect any e-mail, and I write meaningless text messages on my cell phone – just to demarcate my comfort zone and appear busy while waiting. A friend of mine once told me that eating alone in a restaurant was initially so humiliating that she took it on as a trial of courage before it eventually became the proud badge for a stronger public self. As the population increases and cities become denser (the world population has doubled in the past 40 years and the US population tripled over the course of the Twentieth Century), understanding proxemics is becoming more and more critical not only to developers and urban planners but also to product and interaction designers. Urban planners balance public space between over-crowding and sociability; architects and interior designers such as Gensler orchestrate private and public spaces in designing office space for knowledge workers.
Product and interaction designers have to take into account the most intimate personal space of consumers. As they design communication devices with social meaning, they ought to measure the comfort zone these devices are expected to create. Reclusion, however, is not the only requirement: Personal communication devices need to be bi-functional, allowing users to connect and disconnect with the outside world at their discretion – sometimes we want to hide, sometimes we want to seek. In the realm of furniture, the Ball Chair by Eero Aarnio is a perfect example of how such ambiguity can be designed. It provides a “room within a room” that protects the user from outside noises and creates a private space for relaxing or having a phone call. But because the chair rotates around its own axis on its base, the user can vary the view to the outside and is thus not completely excluded from the world beyond. In the world of mobile consumer electronics, the desired “don’t stand so close to me” effect will ultimately conflict with devices that are becoming increasingly invisible, such as the iPod Shuffle, wearable electronics, or entire personal area networks (PANs) that are woven into users’ clothing. They may be more convenient to use but will no longer demarcate users’ comfort zones. In fact, their effect is paradoxical: the more intimate the interfaces, the smaller the personal space they create.
<urn:uuid:a18c1f53-2918-458d-a643-412715d7b7af>
CC-MAIN-2016-26
http://iplot.typepad.com/iplot/2006/11/proxemics_desig.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395546.12/warc/CC-MAIN-20160624154955-00021-ip-10-164-35-72.ec2.internal.warc.gz
en
0.954558
1,085
2.71875
3
Global rates of ocean thermal energy conversion (OTEC) are assessed with a high-resolution (1 deg × 1 deg) ocean general circulation model (OGCM). In numerically intensive simulations, the OTEC process is represented by a pair of sinks and a source of specified strengths placed at selected water depths across the oceanic region favorable for OTEC. Results broadly confirm earlier estimates obtained with a coarse (4 deg × 4 deg) OGCM, but with the greater resolution and more elaborate description of key physical oceanic mechanisms in the present case, the massive deployment of OTEC systems appears to affect the global environment to a relatively greater extent. The maximum global OTEC power production drops to 14 TW, or about half of previously estimated levels, but it would be achieved with only one-third as many OTEC systems. Environmental effects at maximum OTEC power production are generally similar in both sets of simulations. The oceanic surface layer would cool down in tropical OTEC regions with a compensating warming trend elsewhere. Some heat would penetrate the ocean interior until the environment reaches a new steady state. A significant boost of the oceanic thermohaline circulation (THC) would occur. Although all simulations with given OTEC flow singularities were run for 1000 years to ensure stabilization of the system, convergence to a new equilibrium was generally achieved much faster, i.e., roughly within a century. With more limited OTEC scenarios, a global OTEC power production of the order of 7 TW could still be achieved without much effect on ocean temperatures.
<urn:uuid:f28f75bc-1d07-4523-a69a-b8eec433b09b>
CC-MAIN-2016-26
http://energyresources.asmedigitalcollection.asme.org/article.aspx?articleid=1818709
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396875.58/warc/CC-MAIN-20160624154956-00200-ip-10-164-35-72.ec2.internal.warc.gz
en
0.925246
310
2.8125
3
LONDON Busy, congested, stressful. This is how the world’s first subway system was depicted by London newspapers in 1863. It’s a situation that would be familiar to nail-biting passengers of the present as the Tube turned 150 years old Wednesday. “The constant cry, as the trains arrived, of ‘no room,’ appeared to have a very depressing effect upon those assembled,” The Guardian newspaper reported on the public opening of London’s Metropolitan Line on Jan. 10, 1863. The first stretch of rail had opened the day before, on Jan. 9. The line — the first part of what is now an extensive London transport network that has shaped the British capital and its suburbs — ran 120 trains each way during the day, carrying up to 40,000 excited passengers. Extra steam locomotives and cars were called in to handle the crowds. Architectural historian David Lawrence said the rapid expansion of the subway network — better known in London as the Tube — had a major impact on the city’s design. The Tube helped lure people away from the inner city into new areas where new housing was being built near the stations.
<urn:uuid:7cab8cee-8089-47d4-9ebc-083faf1fda8d>
CC-MAIN-2016-26
http://www.arkansasonline.com/news/2013/jan/09/worlds-first-subway-marks-150-years-operation/?f=news
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397428.37/warc/CC-MAIN-20160624154957-00159-ip-10-164-35-72.ec2.internal.warc.gz
en
0.979332
248
3.71875
4
Program Notes: Duck Soup (1933) Political anarchists have long been caricatured as gaunt, bearded men in overcoats tossing bowling ball-shaped bombs. The Marx Brothers were anarchists, too, but instead of bombs their weapons were clever wordplay, slapstick physicality, and madcap characters. Rather than blowing up buildings they blew up the pretensions of modern “civilized” life. They poked their fingers in the eyes of respectability, decency, and what passed for sanity. Even today their crazed mania seems pretty “out there.” One can only imagine how it felt to Depression-era audiences. And yet the Marx Brothers were huge, huge stars. They got their start as a family musical act, working their way up through the ranks of Jewish theater and the vaudeville circuits before starring in Broadway musicals in the 1920s. Along the way each brother developed a comic alter ego. Julius painted on a black moustache and donned glasses with bushy eyebrows to become the insult-lobbing Groucho. Leonard became Chico with a hammy Italian accent, a Tyrolean hat, and a curly dark wig. He did musical tricks on the piano. (By the way, his name was pronounced Chick-O, in recognition of his womanizing ways.) The craziest of the three was Adolph’s Harpo, who never spoke on stage or in front of the camera. With his crushed top hat, explosion of blond ringlets, billowing trench coat and arsenal of squeezebulb-honking horns, he was like a big bratty child, albeit one who liked to leer at pretty girls. Like Chico, he had a musical speciality – Harpo played the harp. Zeppo (Herbert) was the fourth brother, the one nobody remembers. In the brothers’ early films he played the straight man – which is ironic since in private life he was acknowledged to be the funniest of the siblings. He was a brilliant engineer and inventor who became a millionaire after he left the act. There was a fifth brother, Gummo (Milton), who abandoned show biz back in the clan’s vaudeville days.
He had a successful career as a theatrical agent. The Marxes became hugely popular with their early films: The Cocoanuts (1929), Animal Crackers (1930), and Monkey Business (1931). Their 1932 comedy Horse Feathers was Paramount’s biggest hit of the year, so there was much anticipation of what they would deliver in their next feature. But behind the scenes, things were rocky. Financially, Paramount was on the ropes and the brothers hadn’t been paid all they were owed for their earlier successes. Their next film – Duck Soup – would fulfill their five-picture deal with the studio, and they were ready to jump ship. Given all that, the Marxes might have kissed off the project and turned in an inferior product. Instead, Duck Soup is their masterpiece, acclaimed not only for its brilliant comic routines but for its satire of nationalism, jingoism, and saber-rattling. (Of course, the brothers denied there was any significant political subtext to their work. “What significance?” Groucho protested. “We were just four Jews trying to get a laugh.”) Groucho plays the cigar-wagging Rufus T. Firefly, who becomes dictator of the European country of Freedonia. The ambassador of neighboring Sylvania (Louis Calhern) hires a couple of inept jokers (Chico and Harpo) to spy on Firefly in preparation for an invasion of the country. Also prominent in the cast is Groucho’s perennial foil, opera singer Margaret Dumont. She plays the wealthy Mrs. Teasdale, who uses her money and influence to put Firefly in power. Some of Groucho’s most memorable lines are directed at this human battleship: “Married! I can see you right now in the kitchen, bending over a hot stove. But I can’t see the stove!” There’s no plot to speak of here...mostly the setup provides a framework for some of the Marx Brothers’ most memorable routines. For example, there’s the famous mirror sequence. Attempting to burglarize Groucho’s home, Harpo disguises himself as Groucho in nightshirt, bedcap, glasses and moustache.
When the real Groucho shows up, Harpo pretends that a doorway is actually a full-length mirror. Whatever Groucho does, Harpo matches his every move. This goes on for several hysterical minutes as Groucho tries to catch his “reflection” in a mistake; then Chico blunders onto the scene, also disguised as Groucho in sleepwear. At this point you can’t tell which one is the real Groucho. (Years later, Harpo would recreate the bit for an episode of I Love Lucy.) Another classic exchange finds Chico and Harpo posing as sidewalk vendors and making life miserable for their competition, a lemonade seller played by veteran straight man Edgar Kennedy, who had worked with Chaplin and Mack Sennett. Both routines had been perfected by the brothers over years of live performance and brilliantly adapted for the film medium. But Duck Soup also had some fiercely original material, especially the siege of the farmhouse that climaxes the film. Firefly and his compatriots are under fire from advancing Sylvanian troops. In an absolutely chaotic sequence, Groucho tries to organize his fighters – except that in each shot he’s wearing a different outfit. He appears as a Civil War general (both federal and rebel), as a Boy Scout in shorts, as a coonskin-capped frontiersman...it’s insane. While it is today regarded as their best movie, Duck Soup was considered a disappointment because it was only the year’s sixth-highest grossing film. Not only was it the Marx Brothers’ last film for Paramount, it was the last to feature Zeppo. They still had other big hits ahead of them – like A Day at the Races and A Night at the Opera – but the brothers also had to deal with studio chiefs who insisted that their chaotic approach be watered down with real plots and especially tangential love stories involving insipid young co-stars.
The late Roger Ebert nicely summed up the importance of these anarchistic entertainers: “Although they were not taken as seriously, they were as surrealist as Dalí, as shocking as Stravinsky, as verbally outrageous as Gertrude Stein, as alienated as Kafka. Because they worked the genres of slapstick and screwball, they did not get the same kind of attention, but their effect on the popular mind was probably more influential.” The series Make ‘Em Laugh features films voted the best American comedies of all time by the American Film Institute. Saturdays at 1:30 p.m.: Admission to these films is free. About the Author Robert W. Butler is a lifelong Kansas City area resident, a graduate of Shawnee Mission East High School and the William Allen White School of Journalism at the University of Kansas. For several decades he was the movie editor of the Kansas City Star; he now writes a movie-themed blog at butlerscinemascene.com. He joined the Library's Public Affairs team in 2012.
<urn:uuid:397fea58-b64f-49af-a561-8b4c0d37143e>
CC-MAIN-2016-26
http://www.kclibrary.org/blog/film-blog/program-notes-duck-soup-1933
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783392527.68/warc/CC-MAIN-20160624154952-00178-ip-10-164-35-72.ec2.internal.warc.gz
en
0.973548
1,586
2.515625
3
Three-quarter-inch CDX plywood was used to sheathe the roof, and a self-adhering membrane applied over the sheathing provides a water- and air-tight seal around every nail or screw used to attach the roofing material to the sheathing. A layer of 2-inch-thick open-cell Icynene insulation applied to the underside of the roof sheathing helps complete the air seal. Two-by-fours span the underside of the I-joists, to which drywall was attached, and the remaining 13½-inch space was filled with blown-in cellulose for an R-value of 62. The home’s passive solar design, plus superior insulation and airtightness, reduces the reliance on mechanical heating systems. Heat from passive solar gain and a Fujitsu ductless heat pump is distributed via airflow through a heat recovery ventilator (HRV). The HRV passes stale indoor air through a heat exchanger, which transfers about 75% of the heat to fresh, incoming air. Although an airtight home is highly energy efficient, an HRV is necessary to ensure good indoor air quality. The 200-cubic-foot-per-minute HRV eliminates the need for bathroom vents, since the HRV also eliminates excess moisture from the home with the rest of the indoor air. To maximize the home’s efficiency and minimize its energy use, energy-efficient appliances—like an Electrolux electric induction range, a Bosch Axxis front-loading clothes washer, and LED track lighting—are used. Chris and Leigh Ann also prewired their home for a future PV system. There has been a plethora of green building standards over the past two decades, and rightfully so—buildings are a major energy user and contributor to global climate change. In the United States, 76% of all electricity is used for heating, cooling, appliances, and lighting. Among standards set by the U.S.
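The "transfers about 75% of the heat" figure is the HRV core's sensible effectiveness, and it maps directly onto the standard supply-air temperature formula. The sketch below is illustrative only; the temperatures are assumed example values, not measurements from this house:

```python
# Sensible heat recovery: temperature of the fresh supply air leaving
# an HRV core, given its sensible effectiveness.
#   T_supply = T_outdoor + effectiveness * (T_indoor - T_outdoor)
def hrv_supply_temp(t_outdoor, t_indoor, effectiveness=0.75):
    return t_outdoor + effectiveness * (t_indoor - t_outdoor)

# Example: 0 degC outdoors, 20 degC indoors, 75%-effective core
print(hrv_supply_temp(0.0, 20.0))  # 15.0
```

In other words, on a freezing day a 75%-effective core delivers incoming air already warmed most of the way to room temperature, which is why the heating system's remaining load is so small.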
Green Building Council’s Leadership in Energy and Environmental Design (LEED) certification, Living Building Challenge, and the 2,000-Watt Society, Passivhaus standards stand apart in their simplicity and adaptability to individual climates to achieve the same energy-use goals regardless of location. This adaptability is similar to how today’s building codes function—and Passivhaus standards are probably the easiest of the green standards to transfer to our existing code-enforcement mechanism. Unlike LEED certification (and other holistic strategies, like the Living Building Challenge), Passivhaus focuses only on energy use. Renewable energy or sustainable material use is not brought into the equation, as it is for LEED and other certification programs. This single-minded purpose generally results in a 90% reduction in typical heating and cooling use and a 70% reduction in overall energy use compared to homes built according to today’s conventional standards. Even compared to the standard LEED-certified building, the overall reduction is still about 30%. It’s hard to say that we should have just one green building standard or another, because of so many new and innovative building materials, the acceptance of grid-tied PV systems, and a broader understanding of how a building functions as part of the living landscape. Fortunately, the PHIUS understands that trying to become established in opposition to the existing green building standards is counterproductive—they are trying to make it simpler to integrate Passivhaus certification with the existing Home Energy Rating System (HERS) Index. HERS rates a home or building’s energy efficiency—a typical resale home scores 130 on the index; a conventional new home usually scores 100. 
A negative score implies that a home produces more energy than it consumes—a concept that may be met with some skepticism by green building professionals, since this number does not account for a home’s embodied energy—energy used to extract materials, produce products, and transport them to the building site. While the Seniors’ home does not have a HERS Index score, its blower door test resulted in 0.51 ACH at 50 pascals of pressure, or about one air exchange every two hours. For comparison, an Energy Star home based on the EPA guidelines will have a typical value of 3.5 ACH at 50 pascals—that’s one-seventh as air-tight as the Seniors’ home. Since HERS raters are now common in most parts of the country, this may help make the Passivhaus certification accessible to all potential builders.
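The blower-door figures quoted above (0.51 vs. 3.5 ACH at 50 pascals) convert between airflow and air changes via the house's interior volume. A minimal sketch of that conversion; the 20,000 ft³ volume is an assumed example, not the actual volume of the Seniors' home:

```python
# ACH50 = air changes per hour at 50 Pa of depressurization.
# CFM50 is the blower-door flow in cubic feet per minute;
# multiply by 60 to get cubic feet per hour, then divide by volume.
def ach50(cfm50, volume_ft3):
    return cfm50 * 60.0 / volume_ft3

# Illustrative: a 20,000 ft^3 house leaking 170 CFM at 50 Pa
print(round(ach50(170, 20000), 2))  # 0.51
```

The same formula run backwards shows why the comparison is so stark: at 3.5 ACH50, the same hypothetical house would be leaking about 1,170 CFM.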
<urn:uuid:0b870323-1c73-44ea-80a2-eadc00b8a1b8>
CC-MAIN-2016-26
http://www.homepower.com/articles/home-efficiency/design-construction/passivhaus-chapel-hill/page/0/1
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399117.38/warc/CC-MAIN-20160624154959-00125-ip-10-164-35-72.ec2.internal.warc.gz
en
0.930709
935
2.53125
3
READER'S STORY: What was the reason for the environmental department people, looking through binoculars, up the river, from the Granville Bridge this arvo? Is there another crocodile sighting? Thanks for your query, Chezza. The Chronicle contacted the government after receiving your information. A spokeswoman for the Department of Environment and Heritage Protection said: EHP officers conducted one of their regular crocodile surveys of the Mary River on Thursday but their efforts were focused between Beaver Rock and Saltwater Creek, some 10 km downstream from the Granville Bridge. These river surveys are undertaken both during the day and also as spotlight surveys at night. The aims of the surveys are to monitor the crocodile's movements within the river system, to monitor if any further crocodiles enter the area and to provide information that may assist with the capture program. In line with the Government's policy, the department is continuing with efforts to catch this crocodile with traps set on the Mary River. Officers located the crocodile during Thursday's survey in the Mary River in the vicinity of Saltwater Creek. Our surveys indicate the crocodile moves throughout the entire length of the Mary River and people are urged to exercise crocwise behaviour wherever they are on the river.
- Obey croc warning signs
- Don't swim or let domestic pets swim in waters where crocs may live
- Be aware that crocodiles also swim in the ocean
- Stand back from the water when fishing or cast netting
- Never provoke, harass or feed crocs
- Never leave food, fish scraps or bait near the water, a camp site or boat ramp
- Never interfere with or fish or boat near crocodile traps
- Always supervise children
If people see a crocodile in the Mary River, the department encourages them to report the sighting immediately by phoning 1300 130 372.
<urn:uuid:2692446c-d8ec-40d4-a3b1-bb433c4d7ccd>
CC-MAIN-2016-26
http://www.frasercoastchronicle.com.au/news/crocs/2295264/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397562.76/warc/CC-MAIN-20160624154957-00030-ip-10-164-35-72.ec2.internal.warc.gz
en
0.94265
400
2.703125
3
Remote Sensing 2013, 5(3), 1258-1273; doi:10.3390/rs5031258
Abstract: Projected changes in the frequency and severity of droughts as a result of an increase in greenhouse gases have a significant impact on the role of vegetation in regulating the global carbon cycle. The drought effect on vegetation Gross Primary Production (GPP) is usually modeled as a function of Vapor Pressure Deficit (VPD) and/or soil moisture. Climate projections suggest a strong likelihood of an increasing trend in VPD, while regional changes in precipitation are less certain. This difference in projections between VPD and precipitation can cause considerable discrepancies in the predictions of vegetation behavior depending on how ecosystem models represent the drought effect. In this study, we scrutinized the model responses to drought using the 30-year record of the Global Inventory Modeling and Mapping Studies (GIMMS) 3g Normalized Difference Vegetation Index (NDVI) dataset. A diagnostic ecosystem model, Terrestrial Observation and Prediction System (TOPS), was used to estimate global GPP from 1982 to 2009 under nine different experimental simulations. The control run of global GPP increased until 2000, but stayed constant after 2000. Among the simulations with a single climate constraint (temperature, VPD, rainfall and solar radiation), only the VPD-driven simulation showed a decrease in the 2000s, while the other scenarios simulated an increase in GPP. The diverging responses in the 2000s can be attributed to the difference in the representation of the impact of water stress on vegetation in models, i.e., using VPD and/or precipitation. The spatial map of the trend in simulated GPP using GIMMS 3g data is more consistent with the GPP driven by soil moisture than with the GPP driven by VPD, confirming the need for a soil moisture constraint in modeling global GPP.
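For reference, the NDVI underlying the GIMMS 3g record is the standard normalized ratio of near-infrared and red reflectance. A minimal sketch; the reflectance values in the example are illustrative, not GIMMS data:

```python
# NDVI = (NIR - Red) / (NIR + Red), bounded in [-1, 1].
# Dense green vegetation reflects strongly in the near-infrared
# and absorbs red light, so healthy canopies score high.
def ndvi(nir, red):
    return (nir - red) / (nir + red)

# Illustrative reflectances for a vegetated pixel
print(round(ndvi(0.45, 0.10), 3))  # 0.636
```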
1. Introduction

Estimation of global vegetation Gross Primary Production (GPP) and Net Primary Production (NPP) and their interannual variations is critical for understanding the feedbacks between the biosphere and the atmosphere. Ecosystem carbon models, inversion models, and inventories have been used for assessing global land primary production, generating total annual global estimates of GPP and NPP converging around 120 and 60 Pg·C·yr−1, respectively. Meanwhile, Net Biome Productivity (NBP), the net carbon accumulation by ecosystems, was estimated at just 2% of GPP for the 1990s. Therefore, estimating the interannual variations of GPP and NPP is as important as estimating their total magnitudes for understanding the NBP response to CO2 emissions and changes in climate. To elucidate the mechanisms that cause the interannual variation in GPP, we need to rely on bottom-up modeling approaches. However, in contrast to the total magnitude of GPP, there is no consensus on the interannual variation in global GPP or NPP, even for the last few decades with satellite observations (for example, [6,7]). One reason the models fail to reach agreement on the interannual variations of GPP is the oversimplification of the simulated responses of vegetation to climate variability. By tuning model parameters to match their output to data from validation sites, even simple models can provide a reasonable estimate of total GPP [8,9]. Indeed, as more validation data become available, the annual magnitudes of global GPP and NPP estimated by different models have been converging [1,2]. However, it is another issue whether those simple models, tuned to an acceptable annual GPP range, can produce realistic interannual variations in estimated carbon fluxes. In addition, not enough long-term data are available to validate model results globally on interannual time scales.
The recent availability of a 30-year satellite record of Global Inventory Modeling and Mapping Studies (GIMMS) 3g data from NOAA/AVHRR, the focus of this special issue, provides an unprecedented opportunity to examine the interpretation of long-term GPP simulations by simple models. In this study, we focus on the effect of drought stress on the interannual variation in GPP, and assess the structural uncertainty in model-simulated trends of global GPP. Reductions in GPP caused by drought stress can be modeled through increases in Vapor Pressure Deficit (VPD) and/or reductions in precipitation via soil moisture. Because time series of VPD and precipitation are generally highly correlated, some models use only VPD sub-models or only soil moisture sub-models to simulate the impact of drought stress on GPP. Short-term comparisons have shown that VPD-only models can produce variations in GPP that are similar to those obtained from models with both VPD and soil moisture sub-models. These similarities are not surprising when precipitation and VPD trends are coherent, but this is not necessarily always the case. For example, it has been reported that, while global warming-induced increases in VPD were observed, global total precipitation did not show a significant trend over the last three decades [12,13]. In this case, we would expect the VPD-only models to produce incorrect time series of GPP estimates. Furthermore, according to the Coupled Model Intercomparison Project Phase 5 (CMIP5), the reduction in relative humidity with global warming is expected to continue over the 21st century, while globally averaged precipitation is projected to increase, with high uncertainty around regional estimates. Therefore, it is crucial to clarify how the model structure of drought stress affects the interannual variations in GPP.
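The paper's central point, that two drought drivers can be highly correlated at interannual scale yet carry different long-term trends, can be illustrated with synthetic series. All numbers below are invented for illustration and are not the paper's data.

```python
# Synthetic illustration: a "VPD-like" index with a warming trend and a
# "precipitation-like" index without one share the same ENSO-like interannual
# signal, so they correlate strongly year-to-year while their trends differ.
import math

n = 28  # 1982-2009, as in the study period

# Shared interannual signal with a roughly 4-year period.
shared = [math.sin(2 * math.pi * i / 4) for i in range(n)]

vpd_index = [0.3 * s + 0.02 * i for i, s in enumerate(shared)]  # upward trend
wet_index = [0.3 * s for s in shared]                           # no trend

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    sx = (sum((a - mx) ** 2 for a in x) / n) ** 0.5
    sy = (sum((b - my) ** 2 for b in y) / n) ** 0.5
    return cov / (sx * sy)

def ols_slope(y):
    """Least-squares trend slope against time index 0..n-1."""
    n = len(y)
    xs = list(range(n))
    mx, my = sum(xs) / n, sum(y) / n
    return sum((i - mx) * (v - my) for i, v in zip(xs, y)) / sum((i - mx) ** 2 for i in xs)

print(pearson(vpd_index, wet_index))               # strong interannual correlation
print(ols_slope(vpd_index), ols_slope(wet_index))  # but only the VPD index trends upward
```

A model that reads drought only from one of two such correlated drivers reproduces the short-term variability of the other, yet diverges from it over decades, which is the structural uncertainty examined below.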
To address this question, we used the Terrestrial Observation and Prediction System model (TOPS) to produce global GPP estimates from 1982 to 2009 using GIMMS 3g data, and analyzed how VPD and soil moisture influence the interannual variation in global GPP.

2. Data and Methods

2.1. The Terrestrial Observation and Prediction System Model (TOPS)

TOPS is a diagnostic ecosystem process model that simulates the fluxes of energy, carbon, and water through vegetation in response to climate and weather variability. TOPS employs a Light Use Efficiency (LUE) model to calculate GPP. Soil moisture is simulated using a one-layer bucket model with a predefined wilting point and field capacity. Precipitation and evapotranspiration dynamics largely control soil moisture. Evapotranspiration is simulated with a two-layer model that consists of soil evaporation and canopy evapotranspiration. The canopy evapotranspiration is simulated using the Penman-Monteith equation with a Jarvis-type stomatal conductance sub-model. Water cycle components in TOPS, very similar to those in Biome-BGC, have been validated over the past 25 years, for example against stream flow, snow cover, and water stress. Less-than-average rainfall (hydrological drought) often results in higher VPD, inducing both meteorological and physiological drought conditions. Increased VPD triggers the closure of stomata, resulting in a decrease in GPP. The stomatal responses to drought and their impact on canopy processes are well documented in flux tower observations [22,23]. Because TOPS was developed from Biome-BGC, the GPP calculation in TOPS is similar to that of the MODIS 17 algorithm. The main difference between the TOPS and MODIS 17 algorithms is that TOPS has a soil moisture routine and a soil moisture control on GPP, while the MODIS 17 algorithm is a VPD-only model.
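As a concrete illustration of that difference, a minimal MOD17-style LUE sketch with both down-regulation scalars and a one-layer bucket might look like the following. All thresholds, the maximum light use efficiency, and the bucket bounds are illustrative assumptions, not TOPS's actual parameters.

```python
# Minimal LUE-style GPP sketch with BOTH a VPD and a soil moisture
# down-regulation scalar, as in TOPS. Parameter values are illustrative.

def ramp(x, lo, hi):
    """Linear down-regulation scalar: 1 when x <= lo, 0 when x >= hi."""
    if x <= lo:
        return 1.0
    if x >= hi:
        return 0.0
    return (hi - x) / (hi - lo)

def gpp(par, fpar, vpd, soil_m, eps_max=1.8, wilt=50.0, fc=200.0):
    """GPP (nominal units) = eps_max * fPAR * PAR * psi_vpd * psi_sm."""
    psi_vpd = ramp(vpd, 0.65, 4.6)  # kPa thresholds, illustrative only
    # Soil moisture scalar: 0 at wilting point, 1 at field capacity (mm).
    psi_sm = max(0.0, min(1.0, (soil_m - wilt) / (fc - wilt)))
    return eps_max * fpar * par * psi_vpd * psi_sm

def bucket_step(soil_m, precip, et, fc=200.0):
    """One-layer bucket: soil moisture bounded by 0 and field capacity (mm)."""
    return max(0.0, min(fc, soil_m + precip - et))

# Wet vs. dry soil at identical VPD: a VPD-only model (psi_sm forced to 1)
# would assign both cases the same GPP.
wet = gpp(par=8.0, fpar=0.7, vpd=1.0, soil_m=180.0)
dry = gpp(par=8.0, fpar=0.7, vpd=1.0, soil_m=60.0)
```

The wet/dry comparison at the end is exactly the situation the paper exploits: where soil dries while VPD is unchanged (or vice versa), a VPD-only formulation and a two-scalar formulation predict different GPP.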
One of the reasons why the MODIS 17 algorithm does not have a soil moisture control is that it was developed for near real-time monitoring on a global scale: there are no global satellite observations of soil moisture, and adding a soil moisture sub-model is computationally expensive for an operational algorithm.

2.2. LAI and fPAR

TOPS requires estimates of Leaf Area Index (LAI) and fPAR to define the amount of vegetation and its photosynthetic capacity. For this study, LAI and fPAR were derived from the GIMMS 3g dataset using a neural network algorithm and MODIS land cover.

2.3. Climate Data

TOPS ingests daily climate data for temperature, precipitation, VPD, and shortwave radiation; these inputs are obtained from the CRU-NCEP dataset version 4. The CRU-NCEP dataset provides climate variables for the period 1901–2010 and was made from the CRU TS3.1 dataset and the NCEP-NCAR Reanalysis data (hereafter referred to as CRU and Reanalysis, respectively). CRU is a 0.5-degree monthly climate dataset based on ground data, while the Reanalysis is a ca. 2.5-degree 6-hourly modeled dataset. To compensate for the downsides of each dataset, the Reanalysis was interpolated to 0.5 degrees, and the 6-hourly variations of the interpolated Reanalysis for each month were added to the CRU monthly data to make the CRU-NCEP dataset. In this study, we used CRU-NCEP data for maximum temperature, minimum temperature, precipitation, specific humidity, and shortwave radiation for the period 1982 to 2009. Because the monthly time series of the CRU-NCEP dataset is provided by the CRU dataset, the uncertainty of the CRU-NCEP dataset is inherited from the CRU dataset. The uncertainty of the CRU datasets tends to be larger in the earlier portion of the record and over developing countries. Because VPD data are not available from the CRU-NCEP dataset, VPD was calculated from maximum temperature, minimum temperature, and specific humidity within TOPS.
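The exact humidity conversion TOPS uses (after Abbott and Tabony, cited in the references) is not reproduced here; a common approximation, Tetens saturation vapor pressure averaged over Tmax and Tmin minus the actual vapor pressure derived from specific humidity and air pressure, would look like the following sketch. The sea-level pressure default and the daily-averaging choice are assumptions for illustration.

```python
# VPD from daily Tmax/Tmin and specific humidity, a common approximation
# (Tetens formula); not necessarily the exact scheme used in TOPS.
import math

def svp_kpa(t_c):
    """Tetens saturation vapor pressure (kPa) at air temperature t_c (deg C)."""
    return 0.6108 * math.exp(17.27 * t_c / (t_c + 237.3))

def vpd_kpa(tmax_c, tmin_c, q_kg_kg, p_kpa=101.325):
    """Daily VPD (kPa) from Tmax, Tmin and specific humidity q (kg/kg).

    Saturation pressure is averaged over Tmax and Tmin (a common daily
    approximation); actual vapor pressure follows from q and air pressure.
    """
    es = 0.5 * (svp_kpa(tmax_c) + svp_kpa(tmin_c))
    ea = q_kg_kg * p_kpa / (0.622 + 0.378 * q_kg_kg)
    return max(0.0, es - ea)
```

Because the same temperatures enter both the saturation term and (via the Reanalysis) the humidity fields, warming raises `es` directly, which is the mechanism behind the VPD trend discussed in Section 3.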
2.4. TOPS Simulations

TOPS was run from 1982 to 2009 at 0.5-degree resolution globally. We analyzed the vegetation response to each of the individual climate components and to their combined effect, using the approach adopted by Ichii et al. For each simulation, we used the CRU-NCEP time series of only one climate variable at a time, while holding the other climate components at their 1982 to 2009 climatologies. In addition, to analyze the effects of the down-regulation functions Ψvpd and Ψsm in Equation (2), we performed TOPS simulations keeping one of them equal to 1 (i.e., no control), while allowing the other one to vary. These simulations are summarized in Table 1. Hereafter, we refer to each simulation with the naming convention reported in Table 1. To initialize soil moisture, we performed a 10-year spin-up using the first 10 years (1982–1991) of climate data; the mean soil moisture difference over all pixels between the spin-up 1991 run and the S_control 1991 run was 0.72 mm.

3.1. How Did Each Climate Component Control Simulated Trends in Global GPP?

The effect of each climate component on the interannual variations of global GPP is shown in Figure 1. Under S_control, GPP kept increasing until around 2000 and then declined modestly until 2007. This trend is consistent with the results of shorter-term studies using the MODIS 17 algorithm [6,32]. Among the single climate variable simulations, only S_vpd showed a consistent decreasing trend, while the other simulations all produced increasing trends in global GPP (Figure 1). These results suggest that land models relying solely on VPD may overestimate the reduction in GPP caused by water stress in the 2000s. The cross-correlation coefficient matrix among the GPP time series produced by the different simulations is shown in Table 2. The GPP derived from the four climate variable simulations (S_temp, S_vpd, S_precip, and S_srad) did not correlate well with each other.
The highest correlation was found between S_temp and S_precip, but the Pearson coefficient is still low (r = 0.43). Thus, the high correlation between S_clim and S_precip can be explained simply by precipitation having the strongest influence on climate-driven GPP. In spite of the low correlation coefficients among the four climate variable simulations, Figure 1 shows a clear correspondence over the short term, i.e., periods shorter than a decade. S_temp and S_vpd are anti-correlated, with increases in the GPP driven by temperature accompanying decreases in the GPP driven by VPD. The symmetric patterns arise because VPD variation is largely driven by temperature: elevated temperatures promote higher GPP at high latitudes, while high VPD lowers GPP by inducing drought stress. The comparison between S_vpd and S_precip in Figure 1 showed different correlation patterns between the short and long term. Over the short term, as in the case of ENSO, both S_vpd and S_precip decreased. Over the long term, however, S_vpd showed the opposite trend to S_precip: the trend of increasing temperatures caused S_vpd to have an overall decreasing trend, while S_precip increased over the same period. The controlling effect of temperature on VPD also resulted in S_vpd having no correlation with the MEI (r = 0.04), while S_precip was well correlated with the MEI (r = −0.79) (Table 2). The same analysis presented in Table 2 was performed using the residual carbon flux, which was calculated from fossil fuel and cement emissions, land-use change emissions, atmospheric growth, and the ocean carbon flux. Assuming that the residual carbon is equivalent to the land sink, this analysis directly assesses the climate influence on carbon sequestration by land vegetation. The correlation coefficient of S_precip improved from 0.01 to 0.31, but the correlation was still insignificant. The coefficients of the other simulations (S_clim, S_temp, S_vpd, and S_srad) did not improve.
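The cross-correlation analysis behind Table 2 reduces to pairwise Pearson coefficients between annual GPP series. A sketch of the matrix construction follows; the series values are made up for illustration, whereas the paper's Table 2 uses the TOPS output and the MEI.

```python
# Pairwise Pearson correlation matrix over named annual series, as in Table 2.
# The six-year series below are invented; they only mimic the qualitative
# behavior described in the text (S_temp rising, S_vpd falling).

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    dx = sum((a - mx) ** 2 for a in x) ** 0.5
    dy = sum((b - my) ** 2 for b in y) ** 0.5
    return num / (dx * dy)

series = {
    "S_temp":   [0.1, 0.3, 0.2, 0.5, 0.4, 0.6],
    "S_precip": [0.2, 0.1, 0.3, 0.4, 0.5, 0.4],
    "S_vpd":    [0.5, 0.4, 0.4, 0.2, 0.3, 0.1],
}

names = sorted(series)
matrix = {a: {b: pearson(series[a], series[b]) for b in names} for a in names}
```

With real annual GPP anomalies in `series`, the same construction yields the coefficients quoted in the text (e.g., r = 0.43 for S_temp vs. S_precip).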
3.2. Can Simulated Global GPP Explain Interannual Variations in the Atmospheric CO2 Growth Rate?

Among the four climate component simulations, S_vpd had the highest correlation with the growth rate of CO2 (r = −0.69) (Table 2). At first glance, this high correlation might seem to validate the hypothesis that VPD controls global GPP and the CO2 growth rate. However, this hypothesis must be rejected on the grounds that the CO2 growth rate should strongly correlate with Net Ecosystem Production (NEP), not GPP alone. Moreover, S_vpd is strongly correlated with the GISS tropical (24°N–24°S) land temperature (r = −0.85), and the GISS tropical land temperature is in turn highly correlated with the CO2 growth rate (r = 0.74). It is therefore reasonable to assume that S_vpd shows a spurious, not a causal, relationship with the CO2 growth rate through temperature, which controls both S_vpd and respiration. In Figure 2, we compared the time series of S_clim, S_wo_sm, and S_wo_vpd with the CO2 growth rate. S_wo_sm and S_wo_vpd showed opposite long-term trends, more pronounced from the year 2000 onwards (Figure 2(a)), similar to what was observed for S_vpd and S_precip in Figure 1. Overall, in the short term, the interannual variations in GPP of the three simulations are anti-correlated with the CO2 growth rate. Similar to the relationship between S_precip and S_vpd, S_wo_sm showed the higher correlation with the CO2 growth rate (Figure 2(b)): the long-term correlation coefficients of S_wo_sm and S_wo_vpd with the CO2 growth rate were −0.67 and 0.12, respectively. However, in the Pinatubo eruption era (1991–1994), all three simulations deviated from the CO2 growth rate. This confirms that one cannot explain the CO2 growth rate variability through GPP variability alone, and that changes in respiration are required to simulate the observed CO2 growth rate.
Therefore, although we still cannot exclude the possibility that TOPS failed to model the VPD drought effect on GPP, the high correlation between GPP and the CO2 growth rate was most likely spurious. An increase in the diffuse radiation ratio during the Pinatubo eruption era can mitigate the reduction in global GPP, but the effect was not strong enough to make global GPP increase.

3.3. Which Control (VPD or Soil Moisture) Can Explain the Long-Term Trend in GIMMS NDVI?

To evaluate whether the TOPS simulations are consistent with satellite observations, we calculated the differences in mean annual GPP between 2000–2009 and 1982–1999 for three simulations (S_control, S_veg, and S_clim) (Figure 3). Because S_veg was derived from GIMMS 3g under a fixed climate, we can take S_veg to correspond to the anomaly of satellite-observed GPP. S_control and S_clim were very consistent with each other, while S_veg showed spatial patterns of higher GPP in China, Brazil, India, and the USA, indicating that TOPS underestimated GPP in these regions during the 2000s. Next, we calculated the difference in mean annual GPP between 2000–2009 and 1982–1999 for S_wo_sm and S_wo_vpd (Figure 4(c,d)). Inconsistencies between S_wo_sm and S_wo_vpd occurred in Brazil, Africa, and Europe, with S_wo_sm showing more negative anomalies than S_wo_vpd in most of these regions. In these regions, precipitation increased in the 2000s (Figure 4(b)), while VPD also increased (Figure 4(a)). Estimates from S_wo_vpd, compared to S_wo_sm, are more consistent with S_veg (Figure 3(b)), especially in western Brazil and Europe. Overall, the effects of drought stress were more marked in the S_wo_sm simulations than in S_wo_vpd. Figure 5 shows the histograms of the differences between 2000–2009 GPP and 1982–1999 GPP for the three simulations (S_veg, S_wo_sm, and S_wo_vpd), derived from Figures 3(b) and 4. Both S_veg and S_wo_vpd were positively skewed (skewness of 0.597 and 1.865, respectively), while S_wo_sm was negatively skewed (skewness of −1.460).
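The skewness statistics quoted above can be computed from the per-pixel GPP differences (2000s mean minus 1982–1999 mean) with the moment coefficient of skewness. The sample values below are illustrative, not the paper's pixel data.

```python
# Fisher-Pearson moment coefficient of skewness over per-pixel GPP
# differences, as used to characterize the Figure 5 histograms.
# The sample arrays are made up for illustration.

def skewness(vals):
    """Population form: third central moment over variance^(3/2)."""
    n = len(vals)
    m = sum(vals) / n
    m2 = sum((v - m) ** 2 for v in vals) / n
    m3 = sum((v - m) ** 3 for v in vals) / n
    return m3 / m2 ** 1.5

# A few strongly greening pixels drag the right tail out (positive skew,
# like S_veg and S_wo_vpd); mirroring the values flips the sign (like S_wo_sm).
right_tail = [0.0, 0.1, 0.1, 0.2, 0.2, 1.5]
left_tail = [-v for v in right_tail]

print(skewness(right_tail), skewness(left_tail))
```

A positive coefficient thus indicates a distribution whose mass sits left of a long positive tail, which is why the sign of the skewness discriminates between the soil-moisture-free and VPD-free runs.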
The mean of S_veg (0.011 kg·C·yr−1) lay between the means of S_wo_sm (−0.008 kg·C·yr−1) and S_wo_vpd (0.023 kg·C·yr−1). As a result, the distribution of S_veg falls between those of S_wo_vpd and S_wo_sm, but closer to S_wo_vpd than to S_wo_sm. These results suggest that the TOPS model overestimated drought stress due to an overestimated VPD effect in the 2000s, and that both precipitation and VPD down-regulation functions are required to simulate the long-term GPP trend. According to van der Molen et al., there are two direct dependencies of GPP on drought: structural changes in the vegetation, and physiological responses of the vegetation. In this study we consider only the latter. Depending on the physiological responses of stomata to soil moisture, plant species can be roughly divided into two types, isohydric and anisohydric. Isohydric species close their stomata when soil moisture decreases or VPD increases, while anisohydric species are insensitive to soil moisture and close their stomata only in response to high VPD. Therefore, modeling the drought response using VPD or soil moisture is similar to assuming that vegetation is composed of either anisohydric or isohydric species, respectively. The TOPS model structure assumes an isohydric behavior of vegetation, whereas models in which drought is simulated through VPD controls alone, such as the MODIS 17 algorithm, assume an anisohydric behavior. Although plants cannot be cleanly divided into isohydric or anisohydric by species, forest trees are predominantly isohydric [41–43]. Thus, ecosystem models should have both VPD and soil moisture sub-models to properly represent the drought effect on GPP. Although both VPD and precipitation are required for modeling physiological processes, an exception can be made for short-term analyses, when VPD and precipitation tend to be closely related. Our simulation in Figure 1 showed a trend similar to the MODIS 17 analysis in the 2000s.
Caution should be exercised, however, in extending the interpretation of short-term drought effects on GPP to long-term trends. Our 30-year simulation clearly showed different trends between the soil moisture-driven and VPD-driven simulations. Dynamic global vegetation models (DGVMs) have also shown model-dependent sensitivities to the VPD increase accompanying rising temperature in the Amazon during the 21st century. Although this study focused on global-scale variability and trends in GPP, more studies dealing with these differential controls at the regional scale are needed. For example, Mu et al. reported that the decoupling between precipitation and VPD caused a failure in GPP simulation by the MODIS 17 algorithm in monsoon-controlled China. It is also known that variations in VPD sometimes fail to capture severe droughts at the watershed scale. Therefore, assessing the long-term trend in GPP at the regional scale is even more difficult with a VPD-only model. In addition to climate variability, other factors not accounted for here, such as CO2 fertilization, nitrogen deposition, and diffuse radiation, affect the interannual variation in GPP. These effects are difficult to quantify and complicate the bottom-line GPP trend through combined effects. In this study, by focusing on the difference between the post-2000 and pre-2000 periods, we ignored these effects on the interannual variation in GPP. CO2 concentration and nitrogen deposition have smaller interannual variability than the climate variables [5,48], and the effect of diffuse radiation is marginal over the three decades studied here. Differences in GPP after 2000 simulated by different models were also found in time series of estimated evapotranspiration. Jung et al. showed that most ecosystem models displayed an increasing trend in modeled evapotranspiration from 1982 to 1998, but that the trends diverged among models thereafter. Jung et al.
concluded that the decreasing trend in evapotranspiration found in some models after 1998 was due to limited soil moisture supply. However, as with the divergent GPP trends simulated for the 2000s, the divergence after 1998 can be explained by the relative sensitivity of the model structure to VPD compared to other climate components. The observed global warming trend over the past few decades causes VPD to increase. It is therefore crucial to assess how different land models handle drought stress, so as not to equate an increasing trend in VPD with a decline in GPP or ET. This study focused on long-term trends over about three decades, and thus does not provide any conclusive judgment on the topic of the short-term drought-induced NPP decline after 2000 [6,50,51]. Furthermore, discussing the NPP trend is harder than discussing the GPP trend because of the need to include autotrophic respiration, which is complex in itself. Our results suggest that a proper assessment of water limitation is one of the key issues to be clarified before assessing trends in global GPP or NPP.

In this study, we performed a series of experiments using the TOPS model and Global Inventory Modeling and Mapping Studies (GIMMS) 3g data to evaluate the impacts of drought on the interannual variation of Gross Primary Production (GPP), simulated either in terms of VPD or soil moisture effects. Although Vapor Pressure Deficit (VPD) alone can simulate the effects of drought stress on GPP for short periods, we find that both VPD and soil moisture are required to simulate the long-term trend in global GPP. Terrestrial Observation and Prediction System (TOPS) simulations with a VPD control only underestimated GPP during the period 2000–2009 because of over-sensitivity to VPD drought effects. We also find that the strong correlation of the interannual variations of VPD with the CO2 growth rate observed in recent studies can be spurious, because it is induced by the warming temperature trend.
We recommend that assessments similar to the ones carried out for this study be performed for all ecosystem models aiming at analyzing the long-term trend in GPP or evapotranspiration. These sensitivity analyses are needed to correctly project the effects of climate change on the global carbon cycle.

Acknowledgments

We wish to thank Compton J. Tucker, Jorge E. Pinzon, and Molly E. Brown for providing the GIMMS 3g datasets. This study was funded by NASA's Earth Sciences Program. This research was performed using the NASA Earth Exchange (NEX). NEX combines state-of-the-art supercomputing, Earth system modeling, remote sensing data from NASA and other agencies, and a scientific social networking platform to deliver a complete work environment in which users can explore and analyze large Earth science data sets, run modeling codes, collaborate on new or existing projects, and share results within and/or among communities.

References and Notes

- Beer, C.; Reichstein, M.; Tomelleri, E.; Ciais, P.; Jung, M.; Carvalhais, N.; Rödenbeck, C.; Arain, M.A.; Baldocchi, D.; Bonan, G.B.; et al. Terrestrial gross carbon dioxide uptake: Global distribution and covariation with climate. Science 2010, 329, 834–838. [Google Scholar]
- Ito, A. A historical meta-analysis of global terrestrial net primary productivity: Are estimates converging? Glob. Change Biol 2011, 17, 3161–3175. [Google Scholar]
- Randerson, J.T.; Chapin, F.S.; Harden, J.W.; Neff, J.C.; Harmon, M.E. Net ecosystem production: A comprehensive measure of net carbon accumulation by ecosystems. Ecol. Appl 2002, 12, 937–947. [Google Scholar]
- Denman, K.L.; Brasseur, G.; Chidthaisong, A.; Ciais, P.; Cox, P.M.; Dickinson, R.E.; Hauglustaine, D.; Heinze, C.; Holland, E.; Jacob, D.; et al. Couplings Between Changes in the Climate System and Biogeochemistry. In Climate Change 2007: The Physical Science Basis.
Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change; Solomon, S., Qin, D., Manning, M., Chen, Z., Marquis, M., Averyt, K.B., Tignor, M., Miller, H.L., Eds.; Cambridge University Press: Cambridge, UK/New York, NY, USA, 2007. [Google Scholar]
- Le Quéré, C.; Raupach, M.R.; Canadell, J.G.; Marland, G.; Bopp, L.; Ciais, P.; Conway, T.J.; Doney, S.C.; Feely, R.A.; Foster, P.; et al. Trends in the sources and sinks of carbon dioxide. Nat. Geosci 2009, 2, 831–836. [Google Scholar]
- Zhao, M.; Running, S.W. Drought-induced reduction in global terrestrial net primary production from 2000 through 2009. Science 2010, 329, 940–943. [Google Scholar]
- Potter, C.; Klooster, S.; Genovese, V. Net primary production of terrestrial ecosystems from 2000 to 2009. Clim. Change 2012, 113, 1–13. [Google Scholar]
- Ruimy, A.; Kergoat, L.; Bondeau, A. Participants of the Potsdam NPP Model Intercomparison. Comparing global models of terrestrial net primary productivity (NPP): Analysis of differences in light absorption and light-use efficiency. Glob. Change Biol 1999, 5, 56–64. [Google Scholar]
- Wang, W.; Dungan, J.; Hashimoto, H.; Michaelis, A.R.; Milesi, C.; Ichii, K.; Nemani, R.R. Diagnosing and assessing uncertainties of terrestrial ecosystem models in a multimodel ensemble experiment: 2. Carbon balance. Glob. Change Biol 2011, 17, 1367–1378. [Google Scholar]
- Churkina, G.; Running, S.W.; Schloss, A.L. The participants of the Potsdam intercomparison comparing global models of terrestrial net primary productivity (NPP): The importance of water availability. Glob. Change Biol 1999, 5, 46–55. [Google Scholar]
- Mu, Q.; Zhao, M.; Heinsch, F.A.; Liu, M.; Tian, H.; Running, S.W. Evaluating water stress controls on primary production in biogeochemical and remote sensing based models. J. Geophys. Res 2007, 112, G01012. [Google Scholar]
- Smith, T.M.; Yin, X.; Gruber, A.
Variations in annual global precipitation (1979–2004), based on the Global Precipitation Climatology Project 2.5° analysis. Geophys. Res. Lett 2006, 33, L06705. [Google Scholar]
- Zhou, Y.P.; Xu, K.-M.; Sud, Y.C.; Betts, A.K. Recent trends of the tropical hydrological cycle inferred from Global Precipitation Climatology Project and International Satellite Cloud Climatology Project data. J. Geophys. Res 2011, 116, D09101. [Google Scholar]
- Fischer, E.M.; Knutti, R. Robust projections of combined humidity and temperature extremes. Nat. Clim. Chang 2012, 3, 126–130. [Google Scholar]
- Meehl, G.A.; Stocker, T.F.; Collins, W.D.; Friedlingstein, P.; Gaye, A.T.; Gregory, J.M.; Kitoh, A.; Knutti, R.; Murphy, J.M.; Noda, A.; et al. Global Climate Projections. In Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change; Solomon, S., Qin, D., Manning, M., Chen, Z., Marquis, M., Averyt, K.B., Tignor, M., Miller, H.L., Eds.; Cambridge University Press: Cambridge, UK/New York, NY, USA, 2007. [Google Scholar]
- Nemani, R.; Hashimoto, H.; Votava, P.; Melton, F.; Wang, W.; Michaelis, A.; Mutch, L.; Milesi, C.; Hiatt, S.; White, M. Monitoring and forecasting ecosystem dynamics using the Terrestrial Observation and Prediction System (TOPS). Remote Sens. Environ 2009, 113, 1497–1509. [Google Scholar]
- Monteith, J. Solar radiation and productivity in tropical ecosystems. J. Appl. Ecol 1972, 9, 747–766. [Google Scholar]
- Jarvis, P.G. The interpretation of the variations in leaf water potential and stomatal conductance found in canopies in the field. Phil. Trans. R. Soc. Lond. B Bio. Sci 1976, 273, 593–610. [Google Scholar]
- Running, S.W.; Hunt, E.R., Jr. Generalization of a Forest Ecosystem Process Model for Other Biomes, Biome-BGC, and an Application for Global-Scale Models.
In Scaling Physiological Processes: Leaf to Globe; Ehleringer, J.R., Field, C.B., Eds.; Academic Press: San Diego, CA, USA, 1993; pp. 141–158. [Google Scholar]
- Ichii, K.; White, M.A.; Votava, P.; Michaelis, A.; Nemani, R.R. Evaluation of snow models in terrestrial biosphere models using ground observation and satellite data: Impact on terrestrial ecosystem processes. Hydrol. Process 2008, 22, 347–355. [Google Scholar]
- Nemani, R.; White, M.; Pierce, L.; Votava, P.; Coughlan, J.; Running, S. Biospheric monitoring and ecological forecasting. Earth Obs. Mag. 2003, 3/4, 6–8. [Google Scholar]
- Oren, R.; Sperry, J.S.; Katul, G.G.; Pataki, D.E.; Ewers, B.E.; Phillips, N.; Schäfer, K.V.R. Survey and synthesis of intra- and interspecific variation in stomatal sensitivity to vapour pressure deficit. Plant Cell Environ 1999, 22, 1515–1526. [Google Scholar]
- Reichstein, M. Inverse modeling of seasonal drought effects on canopy CO2/H2O exchange in three Mediterranean ecosystems. J. Geophys. Res 2003, 108, 4726. [Google Scholar]
- Zhao, M.; Heinsch, F.A.; Nemani, R.R.; Running, S.W. Improvements of the MODIS terrestrial gross and net primary production global data set. Remote Sens. Environ 2005, 95, 164–176. [Google Scholar]
- Zhu, Z.C.; Bi, J.; Pan, Y.; Ganguly, S.; Anav, A.; Xu, L.; Samanta, A.; Piao, S.; Nemani, R.R.; Myneni, R.B. Global data sets of vegetation LAI3g and FPAR3g derived from GIMMS NDVI3g for the period 1981 to 2011. Remote Sens 2013, 5, 927–948. [Google Scholar]
- Friedl, M.A.; Sulla-Menashe, D.; Tan, B.; Schneider, A.; Ramankutty, N.; Sibley, A.; Huang, X. MODIS collection 5 global land cover: Algorithm refinements and characterization of new datasets. Remote Sens. Environ 2010, 114, 168–182. [Google Scholar]
- CRUNCEP Data Set. Available online: http://dods.extra.cea.fr/data/p529viov/cruncep/readme.htm (accessed on 13 June 2012).
- University of East Anglia Climatic Research Unit (CRU), CRU Time Series (TS) High Resolution Gridded Datasets.
Climatic Research Unit (CRU) Time-Series Datasets of Variations in Climate with Variations in Other Phenomena. Available online: http://badc.nerc.ac.uk/view/badc.nerc.ac.uk__ATOM__dataent_1256223773328276 (accessed on 13 June 2012).
- Kalnay, E.; Kanamitsu, M.; Kistler, R.; Collins, W.; Deaven, D.; Gandin, L.; Iredell, M.; Saha, S.; White, G.; Woollen, J.; et al. The NCEP/NCAR 40-year reanalysis project. Bull. Am. Meteorol. Soc 1996, 77, 437–471. [Google Scholar]
- Abbott, P.F.; Tabony, R.C. The estimation of humidity parameters. The Meteorol. Mag 1985, 114, 49–56. [Google Scholar]
- Ichii, K.; Hashimoto, H.; Nemani, R.; White, M. Modeling the interannual variability and trends in gross and net primary productivity of tropical forests from 1982 to 1999. Glob. Planet. Change 2005, 48, 274–286. [Google Scholar]
- Nemani, R.R.; Keeling, C.D.; Hashimoto, H.; Jolly, W.M.; Piper, S.C.; Tucker, C.J.; Myneni, R.B.; Running, S.W. Climate-driven increases in global terrestrial net primary production from 1982 to 1999. Science 2003, 300, 1560–1563. [Google Scholar]
- Conway, T.; Tans, P. Trends in Atmospheric Carbon Dioxide. Available online: www.esrl.noaa.gov/gmd/ccgg/trends/ (accessed on 13 June 2012).
- Wolter, K.; Timlin, M.S. El Niño/Southern Oscillation behaviour since 1871 as diagnosed in an extended multivariate ENSO index (MEI.ext). Int. J. Climatol 2011, 31, 1074–1087. [Google Scholar]
- Hansen, J.; Ruedy, R.; Sato, M.; Imhoff, M.; Lawrence, W.; Easterling, D.; Peterson, T.; Karl, T. A closer look at United States and global surface temperature change. J. Geophys. Res 2001, 106, 23947–23963. [Google Scholar]
- Le Quéré, C.; Andres, R.J.; Boden, T.; Conway, T.; Houghton, R.A.; House, J.I.; Marland, G.; Peters, G.P.; van der Werf, G.; Ahlström, A.; et al. The global carbon budget 1959–2011. Earth Syst. Sci. Data Discuss 2012, 5, 1107–1157. [Google Scholar]
- Mercado, L.M.; Bellouin, N.; Sitch, S.; Boucher, O.; Huntingford, C.; Wild, M.; Cox, P.M.
Impact of changes in diffuse radiation on the global land carbon sink. Nature 2009, 458, 1014–1017. [Google Scholar]
- Van der Molen, M.K.; Dolman, A.J.; Ciais, P.; Eglin, T.; Gobron, N.; Law, B.E.; Meir, P.; Peters, W.; Phillips, O.L.; Reichstein, M.; et al. Drought and ecosystem carbon cycling. Agr. Forest Meteorol 2011, 151, 765–773. [Google Scholar]
- Tardieu, F.; Simonneau, T. Variability among species of stomatal control under fluctuating soil water status and evaporative demand: Modelling isohydric and anisohydric behaviours. J. Exp. Bot 1998, 49, 419–432. [Google Scholar]
- Franks, P.J.; Drake, P.L.; Froend, R.H. Anisohydric but isohydrodynamic: Seasonally constant plant water potential gradient explained by a stomatal control mechanism incorporating variable plant hydraulic conductance. Plant Cell Environ 2007, 30, 19–30. [Google Scholar]
- Bucci, S.J.; Goldstein, G.; Meinzer, F.C.; Franco, A.C.; Campanello, P.; Scholz, F.G. Mechanisms contributing to seasonal homeostasis of minimum leaf water potential and predawn disequilibrium between soil and plant water potential in Neotropical savanna trees. Trees 2005, 19, 296–304. [Google Scholar]
- Fisher, R.A.; Williams, M.; Do Vale, R.L.; Da Costa, A.L.; Meir, P. Evidence from Amazonian forests is consistent with isohydric control of leaf water potential. Plant Cell Environ 2006, 29, 151–165. [Google Scholar]
- Bond, B.J.; Meinzer, F.C.; Brooks, J.R. Chapter 2. How Trees Influence the Hydrological Cycle in Forest Ecosystems. In Hydroecology and Ecohydrology: Past, Present and Future; Wood, P.J., Hannah, D.M., Sadler, J.P., Eds.; John Wiley & Sons Ltd: West Sussex, UK, 2008; pp. 7–35. [Google Scholar]
- Galbraith, D.; Levy, P.E.; Sitch, S.; Huntingford, C.; Cox, P.; Williams, M.; Meir, P. Multiple mechanisms of Amazonian forest biomass losses in three dynamic global vegetation models under climate change. New Phytol 2010, 187, 647–665. [Google Scholar]
- Hwang, T.; Kang, S.; Kim, J.; Kim, Y.; Lee, D.; Band, L.
Evaluating drought effect on MODIS Gross Primary Production (GPP) with an eco-hydrological model in the mountainous forest, East Asia. Glob. Change Biol 2008, 14, 1037–1056. [Google Scholar] - Reich, P.B.; Hobbie, S.E.; Lee, T.; Ellsworth, D.S.; West, J.B.; Tilman, D.; Knops, J.M.H.; Naeem, S.; Trost, J. Nitrogen limitation constrains sustainability of ecosystem response to CO2. Nature 2006, 440, 922–925. [Google Scholar] - McMurtrie, R.E.; Norby, R.J.; Medlyn, B.E.; Dewar, R.C.; Pepper, D.A.; Reich, P.B.; Barton, C.V.M. Why is plant-growth response to elevated CO 2 amplified when water is limiting, but reduced when nitrogen is limiting? A growth-optimisation hypothesis. Funct. Plant Biol 2008, 35, 521. [Google Scholar] - Reay, D.S.; Dentener, F.; Smith, P.; Grace, J.; Feely, R.A. Global nitrogen deposition and carbon sinks. Nat. Geosci 2008, 1, 430–437. [Google Scholar] - Jung, M.; Reichstein, M.; Ciais, P.; Seneviratne, S.I.; Sheffield, J.; Goulden, M.L.; Bonan, G.; Cescatti, A.; Chen, J.; De Jeu, R.; et al. Recent decline in the global land evapotranspiration trend due to limited moisture supply. Nature 2010, 467, 951–954. [Google Scholar] - Samanta, A.; Costa, M.H.; Nunes, E.L.; Vieira, S.A.; Xu, L.; Myneni, R.B. Comment on “Drought-induced reduction in global terrestrial net primary production from 2000 through 2009”. Science 2011, 333, 1093, author reply 1093. [Google Scholar] - Medlyn, B.E. Comment on “Drought-induced reduction in global terrestrial net primary production from 2000 through 2009”. Science 2011, 333, 1093, author reply 1093. [Google Scholar] |S_wo_vpd||x||x||x||x||ΨVPD = 1| |S_wo_sm||x||x||x||x||ΨSM = 1|
<urn:uuid:4b007c4a-9a6f-4760-afed-83933c22dfa4>
CC-MAIN-2016-26
http://www.mdpi.com/2072-4292/5/3/1258/htm
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399106.96/warc/CC-MAIN-20160624154959-00148-ip-10-164-35-72.ec2.internal.warc.gz
en
0.836712
9,235
2.53125
3
Results of an early clinical trial suggest that a breast cancer vaccine developed at Washington University School of Medicine in St. Louis is safe in patients with metastatic breast cancer. The results, which were published in Clinical Cancer Research and announced in a news release, also suggest that the vaccine helped slow the cancer’s progression. Researchers developed the vaccine to target mammaglobin-A, a protein found almost exclusively in breast tissue, according to the news release. The vaccine works by priming a type of white blood cell in the body’s adaptive immune system to seek out and destroy cells bearing mammaglobin-A. “Being able to target mammaglobin is exciting because it is expressed broadly in up to 80 percent of breast cancers, but not at meaningful levels in other tissues,” William E. Gillanders, senior study author and breast cancer surgeon, said in the release. “In theory, this means we could treat a large number of breast cancer patients with potentially fewer side effects.” In patients with tumors that do not produce mammaglobin-A, the vaccine would not be effective, according to the news release. In the study, 14 patients with metastatic breast cancer that expressed mammaglobin-A were given the vaccine. During the Phase 1 trial, which tested the vaccine’s safety, patients experienced few side effects. The study authors considered these effects mild or moderate. Researchers also recorded preliminary evidence that indicated the vaccine slowed the cancer’s progression, even in patients who had less potent immune systems due to the disease and exposure to chemotherapy. “Despite the weakened immune systems in these patients, we did observe a biologic response to the vaccine while analyzing immune cells in their blood samples,” Gillanders said. “That’s very encouraging.
We also saw preliminary evidence of improved outcome, with modestly longer progression-free survival.” About half of the volunteers given the vaccine showed no progression of their cancer a year later. The researchers are planning a larger clinical trial to test the vaccine in newly diagnosed breast cancer patients. “If we give the vaccine to patients at the beginning of treatment, the immune systems should not be compromised like in patients with metastatic disease,” Gillanders said. “We also will be able to do more informative immune monitoring than we did in this preliminary trial. Now that we have good evidence that the vaccine is safe, we think testing it in newly diagnosed patients will give us a better idea of the effectiveness of the therapy,” he said.
<urn:uuid:79b534bb-4be6-4627-81f8-1d0655e13da6>
CC-MAIN-2016-26
http://www.foxnews.com/health/2014/12/01/new-breast-cancer-vaccine-proves-safe-in-early-clinical-trial.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395166.84/warc/CC-MAIN-20160624154955-00028-ip-10-164-35-72.ec2.internal.warc.gz
en
0.957505
532
2.8125
3
We use scanning tunneling microscopy and spectroscopy (STM/STS) to determine the surface atomic and electronic structure of materials of interest. We have the unique ability to perform these measurements in reactive gases (oxygen, hydrogen, water vapor) and at elevated temperatures. We use modified Omicron VT-STM/AFM equipment, which can also perform atomic force microscopy (AFM) in non-contact mode. X-ray photoelectron spectroscopy (XPS), in an angle-resolved geometry, is used to analyze the top layers of a material's surface to deduce chemical information: both to detect which elements are present on the surface and their chemical binding environment, and to probe the laterally averaged electronic structure. We perform these measurements as a function of temperature, and with surfaces annealed in reactive gases in situ in our analysis chamber. We use our computer cluster, as well as national supercomputing infrastructures supported by NSF and DOE, to perform simulations at the electronic and atomic level, such as density functional theory (DFT), molecular dynamics (MD), and kinetic Monte Carlo (kMC) calculations.
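The page names kinetic Monte Carlo (kMC) among its simulation methods without showing how such a simulation proceeds. As a generic textbook illustration only (not the group's actual code, and with hypothetical event rates), a single rejection-free kMC step picks an event with probability proportional to its rate and advances the clock by an exponentially distributed waiting time:

```python
import math
import random

def kmc_step(rates, rng=random.random):
    """One rejection-free kinetic Monte Carlo (Gillespie) step.

    rates: non-negative rates for the possible events (hops, adsorptions, ...).
    Returns (index_of_chosen_event, time_increment).
    """
    total = sum(rates)
    if total <= 0:
        raise ValueError("at least one rate must be positive")
    # Choose an event with probability proportional to its rate.
    target = rng() * total
    cumulative = 0.0
    chosen = len(rates) - 1          # fallback for floating-point edge cases
    for i, r in enumerate(rates):
        cumulative += r
        if target < cumulative:
            chosen = i
            break
    # Advance the simulation clock by an exponential waiting time.
    dt = -math.log(rng()) / total
    return chosen, dt

# Three hypothetical surface events with rates in arbitrary units.
random.seed(0)                        # fixed seed for a reproducible demo
event, dt = kmc_step([1.0, 5.0, 0.1])
```

A full surface simulation would repeat this step many times, updating the list of available events and their rates after each move.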
<urn:uuid:6dc5f9eb-0059-4305-9c56-9146f7f3c391>
CC-MAIN-2016-26
http://web.mit.edu/yildizgroup/LEI/facilities.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398873.39/warc/CC-MAIN-20160624154958-00018-ip-10-164-35-72.ec2.internal.warc.gz
en
0.915453
238
2.5625
3
Stem cell research is the next great leap in medicine. In the future, new tissue grown in a laboratory could replace a failing heart, or new cells take the place of damaged cells in the brain. Rather than using stem cells from embryonic sources, which raises difficult ethical and complicated scientific issues, scientists have been looking to adult human stem cells, culled from a person's own body. Adult stem cells are now being cultivated from various tissues in the body ― from skin, bones and even wisdom teeth. At the forefront in this research is a team of scientists from Tel Aviv University and Scripps Research Institute in California. They recently reported a breakthrough on a new classification system for identifying pluripotent stem cells in human tissue. News about this system recently appeared in the prestigious scientific journal Nature. Pluripotent stem cells have the potential to differentiate into every distinct cell type in the developed human body. They hold great promise for use in drug development and the treatment of many devastating disorders.

Avoiding Cultural and Religious Controversy

"There is a huge interest in scientists taking skin cells or other body cells of a person, and then turning them into stem cells for creating new neurons in the brain," says Igor Ulitsky, a Ph.D. student at Prof. Ron Shamir's lab in the Blavatnik School for Computer Science, Tel Aviv University, who pioneered some of the research techniques. "Using a person's own stem cells is both ethically acceptable, and in some cases even better for regenerating tissue than embryonic stem cells." Tel Aviv University research played a central role, creating new bioinformatics algorithms to analyze the data and put together the pieces of the puzzle. The result is, in effect, an encyclopaedia describing different stem cell types and their characteristics.
Before this breakthrough, made possible by international collaboration, scientists were baffled by how to distinguish different stem cell types. "Our lab helped devise a method to classify stem cells according to their machinery," Ulitsky explained. "Stem cells have small but significant differences between them, and knowing the potential properties of each kind is valuable for advancing this promising field of research."

An Ethical and Scientific Test

With rapid advances in the field of stem cells, including methods to induce pluripotence in various cells such as those that comprise human skin, the question of how to define pluripotence has become increasingly critical. This is especially the case for human cell lines, which for both ethical and scientific reasons cannot be treated as those from other species. "There has been no ethically acceptable equivalent test that could prove pluripotency in human cell preparations," said Franz-Josef Mueller, M.D., an investigator at Scripps. "Many have been purported to be multi- or pluripotent, but there has been no practical way to define pluripotency in human cells." Using a collection of about 150 human stem cell samples, the researchers created a database of global gene expression profiles and discovered that all of the pluripotent stem cell lines showed a remarkable similarity in the analysis, while other cell types were more diverse. The analysis by Shamir's lab revealed a protein-protein network common to pluripotent cells, pointing to what may be one of the key building blocks of the machinery that enables these transformative cells to differentiate into multiple cell types. Next, the researchers plan to investigate the regulation of this protein network and how it might be used to advance the development of human gene therapies.

Contact: George Hunka, American Friends of Tel Aviv University
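The comparison of global gene expression profiles described above can be sketched with a toy example. This is not the team's actual bioinformatics pipeline; it is only an illustration of the underlying idea, using Pearson correlation on invented expression vectors (all sample names and numbers are hypothetical):

```python
import math

def pearson(x, y):
    """Pearson correlation between two equal-length expression vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical log-expression profiles (one vector of genes per sample).
profiles = {
    "pluripotent_A": [9.1, 2.0, 7.5, 0.4],
    "pluripotent_B": [8.8, 2.2, 7.1, 0.6],
    "fibroblast":    [1.2, 8.5, 0.9, 6.7],
}

# Pluripotent lines cluster tightly together; other cell types stand apart.
r_similar = pearson(profiles["pluripotent_A"], profiles["pluripotent_B"])
r_different = pearson(profiles["pluripotent_A"], profiles["fibroblast"])
```

In this toy setting, the two pluripotent profiles correlate strongly while the fibroblast profile does not, mirroring the "remarkable similarity" the study reports among pluripotent lines.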
<urn:uuid:eea407dc-3c00-4acd-832a-ef321496ad0b>
CC-MAIN-2016-26
http://www.bio-medicine.org/biology-news-1/Tel-Aviv-University-researchers-create-new-stem-cell-screening-tool-4720-1/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783408840.13/warc/CC-MAIN-20160624155008-00025-ip-10-164-35-72.ec2.internal.warc.gz
en
0.93339
731
3.421875
3
In 1882-4, Frances Groome's Ordnance Gazetteer of Scotland described Gallatown like this: Gallatown, a suburban village in Dysart parish, Fife, 5 furlongs NNW of Dysart station, commencing at the N end of Sinclairtown, and extending ½ mile northward along the road from Kirkcaldy to Cupar. It is included in the parliamentary burgh of Dysart, but (since 1876) in the royal burgh of Kirkcaldy. Originally called Gallowstown, it took that name either from the frequent execution at it of criminals in feudal times, or from the special execution of a noted robber about three centuries ago; and it long was famous for the making of nails. It now participates generally in the industry, resources, and institutions of Sinclairtown; and it has a Free church and a public school. A Vision of Britain through Time includes a large library of local statistics for administrative units. For the best overall sense of how the area containing Gallatown has changed, please see our redistricted information for the modern district of Fife. More detailed statistical data are available under Units and statistics, which includes both administrative units covering Gallatown and units named after it. GB Historical GIS / University of Portsmouth, History of Gallatown in Fife | Map and description, A Vision of Britain through Time. Date accessed: 27th June 2016
<urn:uuid:b95a9f3b-1ba6-412e-8f7e-961102a5a479>
CC-MAIN-2016-26
http://www.visionofbritain.org.uk/place/place_page.jsp?p_id=22366
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396100.16/warc/CC-MAIN-20160624154956-00174-ip-10-164-35-72.ec2.internal.warc.gz
en
0.949178
319
2.65625
3
Get a head start on grammar with this sentence building worksheet. Kids unscramble crazy cake sentences and put the words in an order that makes sense. Everyone has ideas about what they'd do if they could go back in time. See what your kid's ideas are and give her some creative writing practice in the process! Do fish drive around from place to place? Have a fun storytelling adventure with your child and create an imaginative story to go with this picture. Imagine what would happen if a crow wanted to be a peacock! Give your little storyteller a fun way to practice her creative writing with this prompt. What would you get Hedgehog for his birthday? Enjoy a fun story starter with your budding author, a great way for her to exercise her creativity. What would happen if two snails entered a race? Start your own story with these slow-and-steady snails! What if you had a T-Rex as a pet? Dino lovers can put their imaginations to the test with this adorable story starter. Let's get to know you! Have your beginning writer complete this activity all about her, a great way to build confidence in herself and in her writing. Inspire your little storyteller to exercise her imagination with an adorable story starter! How did this rhino become a ballerina? What happens when a crocodile has a toothache? Get together with your little storyteller for a fun creative writing activity. What kinds of games would you play in the jungle? Give your little monkey a fun way to practice her storytelling skills with this story starter prompt. Have you ever heard of a giraffe that loves to roller skate? This fun-filled story starter is sure to put a smile on your little writer's face.
<urn:uuid:38b2ae49-7e4a-41f4-96ea-13f1f036891b>
CC-MAIN-2016-26
http://www.education.com/collection/pxiong101/writing/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395560.69/warc/CC-MAIN-20160624154955-00142-ip-10-164-35-72.ec2.internal.warc.gz
en
0.958262
367
3.390625
3
Q: My eight-year-old son is asking me about how babies are made. I gave him a short-version answer, and now he has a lot more questions. I'm realizing that my older daughter (now 12) probably had a lot of questions she didn't ask out loud when I gave her the simple answers a few years ago. What books do you have for both of them?

A: We have plenty of books on this topic for different ages. You will find it much easier to answer your children's questions with the help of some well-chosen books! Whether you read a book aloud to a younger child, give one to an older child to read herself, or simply read one yourself to get ideas of the best ways to respond to their questions, having some published information will help you teach your children about human development and reproduction. You may not agree with every statement in all of these books – each family has their own set of values and perspectives. However, these books represent ideas that exist in the world, so responding to them either with agreement or disagreement will clarify for your children what you believe and what your expectations are for them, while at the same time sharing essential information they need to know. As you can see, they range from those books that simply explain how babies are made to those that explain what changes a body goes through that enable people to have babies! Those questions come up more as a child grows. Children of every age have questions, and it's never too early to present an age-appropriate answer to any question they have. (See the blog post from last month for books about how babies are made that are best for preschool-age kids.) If your kids see you as a neutral and reliable source of information, you will have the basis for continuing communication as they become teenagers.

We'd love to answer your question next! We assure confidentiality. Oakland Public Library Children's Librarians answer your questions on the 1st and 3rd Thursday of the month.
<urn:uuid:b5c762af-8a73-401d-ac1b-f23736fc6d6a>
CC-MAIN-2016-26
http://oaklandlibrary.org/category/tags/read-aloud
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397213.30/warc/CC-MAIN-20160624154957-00170-ip-10-164-35-72.ec2.internal.warc.gz
en
0.977206
409
3.3125
3
Simple Definition of aisle
: a passage between sections of seats in a church, theater, airplane, etc.
: a passage where people walk through a store, market, etc.

Full Definition of aisle
1 : the side of a church nave separated by piers from the nave proper
2 a (1) : a passage (as in a theater or railroad passenger car) separating sections of seats (2) : such a passage regarded as separating opposing parties in a legislature <supported by members on both sides of the aisle>
b : a passage (as in a store or warehouse) for inside traffic

Examples of aisle in a sentence
The bride walked down the aisle to the altar.
By the end of the concert, the people in the theater were dancing in the aisles.

Origin and Etymology of aisle
Middle English ile, alteration of ele, from Anglo-French, literally, wing, from Latin ala; akin to Old English eaxl shoulder, Latin axis axletree — more at axis
First Known Use: 15th century

AISLE Defined for Kids
Definition of aisle for Students
1 : a passage between sections of seats (as in a church or theater)
2 : a passage between shelves (as in a supermarket)
<urn:uuid:9977b0ac-e1d6-437a-9e17-8d5565e89192>
CC-MAIN-2016-26
http://www.merriam-webster.com/dictionary/aisle
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396959.83/warc/CC-MAIN-20160624154956-00079-ip-10-164-35-72.ec2.internal.warc.gz
en
0.919513
297
3.359375
3
About Urdu Poetry

You may not know what Urdu poetry is if you are not familiar with Urdu and the traditions that come with it. There are some questions that you may be asking yourself. What is Urdu poetry? How to find it? Why read it?

What Is Urdu Poetry?

This is a poetry that has been around for many years and there are many famous poets that have been doing it for many years. It's really not that much different from American poetry except that you will need to know how to read Urdu or have a program that is able to read the language. It's a very pretty way of looking at most things in the world and helps you to understand the things that are around you. If you have ever taken the time to read it, you will see what many people see in it when they do read it and finally understand what it is and why they should read it in the long run.

How Do You Find It?

When you start looking for Urdu poetry, you may notice that you will have a harder time finding it in most places. The Internet is going to be the best place to look for this kind of poetry. You will need to take the time to look for the good sites as well as finding the proper ways to read it. When you start looking for it you may also be able to find Urdu poetry in the library for check out and reading. You should check with your local library and find out if they have this kind of book or poetry. Your local library will most likely have a collection of works that you can read and understand with a little help and a little time to learn the language.

Why Should You Read It?

You will want to read this kind of poetry so that you can be a better-rounded person. Knowing this kind of poetry will help you be happier and will help you to understand what you are looking for in poetry. Urdu poetry can be some of the most beautiful kinds to read and look at. Many people take for granted the need to be multicultural and understand more than one language.
You can learn this great language at least enough to understand what you are reading and what you may want to see in it. There are many things that you should think about when you are looking to read Urdu poetry because of the high cultural content that you can get from it. Many people look at this kind of poetry and then move on. They don't want to take the time to understand what they can get out of it and what they could do with the knowledge that these poems can bring. You should take the time to learn what they are saying and what wonderful and new things you can learn from the poems themselves.

Author: Basit Habib
The Author writes articles on mushaira and new urdu poets which can be found on the web.
<urn:uuid:41ddb537-0806-4c67-8adf-884041909fe6>
CC-MAIN-2016-26
http://www.poetrysoup.com/urdu_poetry/about_urdu_poetry.aspx
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395160.19/warc/CC-MAIN-20160624154955-00143-ip-10-164-35-72.ec2.internal.warc.gz
en
0.982552
580
2.59375
3
Stacked Population Index: Mumbai Stacked City (Research)

Mumbai is well known for its high population density. There are 3.34 million people in Mumbai's Island City, resulting in a massive density of 36,200 people per square kilometer. This density is relatively constant across this central area of Mumbai, where the architecture consists of a contiguous patchwork of medium-height, mixed-use building structures. Equally omnipresent are the informal settlements woven throughout the urban fabric, which comprise 20 percent, or 74 square kilometers, of the city’s geography—yet accommodate as much as 65 to 75 percent of the population. These settlements are largely limited to low-rise structures that on average are not taller than two stories, meaning those areas have a high density of tightly packed inhabitants. To make this information more accessible, the Lab has developed a new index of population density based on Floor Space Index (FSI is the term used in India, while the term Floor Area Ratio (FAR) is more common in other countries). Using Google Earth to create a detailed map of the current population density of Mumbai, Lab Team member Neville Mars spearheaded this methodology, which examines the average density in gridded map cells of 500 square meters. These numbers form the basis of a new index, dubbed “Stacked Population Index” (SPI). The model derived from this data provides an accurate depiction of Mumbai’s density, revealing precisely how it varies and reflects living conditions throughout the city.

This work was created by Neville Mars in connection with the BMW Guggenheim Lab project, a project of the Solomon R. Guggenheim Foundation, and is licensed under a Creative Commons Attribution 3.0 Unported License. Photo: Neville Mars
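The per-cell arithmetic behind an FSI-based density map can be sketched briefly. This is not the Lab's actual methodology: it assumes grid cells 500 m on a side (0.25 km² each) and uses hypothetical cell populations and plot areas purely for illustration:

```python
CELL_SIDE_M = 500                          # assumed: 500 m x 500 m grid cells
CELL_AREA_KM2 = (CELL_SIDE_M / 1000) ** 2  # = 0.25 km^2 per cell

def density_per_km2(cell_population, cell_area_km2=CELL_AREA_KM2):
    """People per square kilometre implied by one grid cell."""
    return cell_population / cell_area_km2

def floor_space_index(built_floor_area_m2, plot_area_m2):
    """FSI (called FAR outside India): built floor area over plot area."""
    return built_floor_area_m2 / plot_area_m2

# A hypothetical cell holding 9,050 residents reproduces the Island City
# average of about 36,200 people per square kilometre quoted above.
cell_density = density_per_km2(9050)

# A hypothetical 10,000 m^2 plot carrying 25,000 m^2 of floor space.
fsi = floor_space_index(25_000, 10_000)
```

Repeating this calculation over every cell of a city-wide grid yields the kind of spatial density map the project describes.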
<urn:uuid:4e06a191-66a6-4898-8c10-476b34f26f4a>
CC-MAIN-2016-26
http://www.bmwguggenheimlab.org/whats-happening-b/mumbai-lab-city-projects/stacked-population-index
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393332.57/warc/CC-MAIN-20160624154953-00045-ip-10-164-35-72.ec2.internal.warc.gz
en
0.931383
358
2.859375
3
For Christians, today is Ash Wednesday, the beginning of the forty-day (not counting Sundays) season of Lent. Lent comes from the Anglo-Saxon word lencten, which means “spring.” The season is a preparation for celebrating the resurrection of Christ on Easter Sunday. Historically, Lent began as a period of fasting and preparation for baptism by converts and then became a time for penance by all Christians. Most churches that observe the season of Lent will mark their worship space with somber colors such as purple or ash gray and rough-textured cloth as most appropriate symbols. Ash Wednesday provides us with the opportunity to confront our own mortality and to confess our sin before God within the community of faith. The form and content of the Ash Wednesday Service focus on the themes of sin and death, but do so within the context of God’s redeeming love in Jesus Christ. The use of ashes as a sign of mortality and repentance has a long history in Jewish and Christian worship, and the Imposition of Ashes can be a powerful and tangible way of participating in the call to repentance and reconciliation. May we all be drawn closer to God and made more like Christ by God's grace during this Lenten journey to the cross and resurrection.
<urn:uuid:1bb4b10d-1e11-4301-bd15-ec23af43398e>
CC-MAIN-2016-26
http://wesleyananglican.blogspot.com/2009_02_01_archive.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783400031.51/warc/CC-MAIN-20160624155000-00138-ip-10-164-35-72.ec2.internal.warc.gz
en
0.950532
258
3.140625
3
Grille starts his talk at TEDX Pittwater. Robin Grille is a psychologist, author, educator and advocate for children who is not alone in his dream for a better world. For those interested, you will find that what he has to share is one of the most crucial keys to creating the future we aspire towards. How do we unlock the peace code in the human brain and help it to find its full expression? I had the pleasure of collaborating with Robin many years ago in promoting The Children’s Well-Being Manifesto, and his work continues to inspire great hope. For those in the UPLIFT community, the notion of creating a new story of healing is deeply entrenched and also backed by science, as seen in the research of Bruce Lipton, PhD. We literally have the ability to change the world we live in by addressing our core belief systems. This logic can be applied to our deeply held belief that human beings are wired for violence, which the science of epigenetics refutes completely. Human behavior is much more a product of our environment and conditioning than it is dictated by genes. This points directly to child-rearing practices and the ways that they affect the developing brain. Harsh, punitive, and cold environments along with chronic stress cause the brain to release a neurotoxin known as cortisol. Cortisol literally destroys brain cells in the area of the brain connected to emotional regulation and impulse control, causing the prefrontal lobes to atrophy. Whereas loving, supportive connection in a safe environment causes the brain to secrete oxytocin, which develops these centers and cultivates the capacity for empathy, which is the neurological foundation for peace. The conclusion is that Violence is a Preventable Brain Disorder.
In his talk (below) Robin Grille also explores the fascinating historical and cultural roots of our story of violence, along with a 7-step plan to re-write the code and create a peaceful planet where we are less violent to each other and towards our environment. In a recent UPLIFT blog post titled How to Stop the 6th Mass Extinction, Bruce Lipton states: …the realization that we can change the whole story right now. We don’t need to try to fight the old story. We simply need to walk outside the old story and build a new story. People will leave the old story when they see a new story working. Every individual who changes their own story, is changing the vibrational environment within which we live. We can have the spontaneous remission of the planet’s ills and we can change the environment by just changing who we are.
<urn:uuid:4506aae9-35d3-4e7e-9db2-9b5d84950fa5>
CC-MAIN-2016-26
http://upliftconnect.com/violence-preventable-brain-disorder/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395620.9/warc/CC-MAIN-20160624154955-00060-ip-10-164-35-72.ec2.internal.warc.gz
en
0.956849
620
2.65625
3
“Democrats & Republicans: The Philosophy of the State”

I was asked to contribute a piece on political philosophy to Florida A&M University’s Living Well series and here is what I wrote:

Election Day is almost here and every U.S. citizen must reflect on the similarities and differences between each presidential candidate before casting a vote. This requires more than a quick scan of party platforms. Instead, a deeper focus on the philosophy behind Democratic and Republican party rhetoric will help the average citizen make a more informed decision on Nov. 6. While many believe philosophy has little impact outside of academics, the campaign trail has shown the importance of really knowing the core philosophical values that guide each candidate and will ultimately determine America’s fate for the next four years. Lost in the heat of partisan politics is the truth that most of us share the same values as a people. In fact, both parties share core philosophical views traceable to the European and American thinkers of the Enlightenment Period (about 1600-1800) such as Thomas Hobbes, Adam Smith, John Locke, Thomas Jefferson, Alexander Hamilton, and Jean-Jacques Rousseau. The English philosopher John Locke is probably the philosopher who most influenced American politics in his case for the right to life, liberty and property. In addition to these cherished ideals, Locke also believed legitimate government relies on the consent of the citizenry through majority rule. Interestingly, Democrats and Republicans are united in the once radical view that government exists for the good of the people.

Self-Interest vs. the Common Good

Many of their differences, however, stem from deciding what role the government should play in serving this good. Republicans tend to take a more conservative approach and accept the philosophy of Adam Smith, the author of The Wealth of Nations.
Smith’s view, commonly known as laissez-faire capitalism, encourages individuals to act on the basis of self-interest in a free and competitive market that best serves the good of all. Furthermore, Republican views on the role of government are best captured by the American philosopher Henry David Thoreau: “that government is best which governs least.” Rather than have the state direct the market, Smith spoke of the market’s “invisible hand” as a metaphor to describe the self-regulating behavior of the marketplace. While Democrats also embrace capitalism, the philosophy of the current Democratic Party was shaped by the Great Depression and the New Deal. From their standpoint, this economic disaster was caused by allowing the invisible hand of the market to act with little restraint or regulation. As such, Democrats tend to favor having the state play a prominent role in regulating the economy. This allows the state to serve the good of the people by checking the excesses of self-interest in favor of the common good. Republicans contend that such checking of excesses can harm the public good by choking the economy. Thus, Republicans generally favor less state influence over the economy. The Democrats, as exemplified in the New Deal, generally take the view that the state has a positive, active and significant role to play in securing the good of the people. Specifically, the Democratic Party believes the state should be altruistic in its support of programs like federal student aid, welfare, and healthcare. While the Republican Party also holds to the idea of the state having an active role in the public good and in caring for citizens during times of need, they generally embrace the idea that the role of the state should be more limited and it is preferable for people to rely on personal success rather than private charity. A very strong version of this view is put forth by the Tea Party. Interestingly, they explicitly acknowledge the influence of philosopher Ayn Rand.
In her collection of essays titled The Virtue of Selfishness, Rand argued that we are morally obligated to achieve happiness. As she saw it, an ethics based on altruism (the moral view that we should act for the benefit of others) would prevent people from achieving happiness. This is because altruists would be wasting their resources on other people rather than using them to achieve their own happiness. Her solution was that people should embrace what philosophers call ethical egoism—the moral view that people should act exclusively in their own self-interest. While this might sound harsh, the justification is that this creates a better society in which people can succeed by their own efforts without being dragged down by supporting others and without being trapped in dependence. Thus, some of the key philosophical distinctions between the Democrats and the Republicans involve their views of what role the state should play in securing the general good. The Democrats advocate a more extensive role for the state in securing this good while the Republicans claim the general good is better served by a more limited state. This disagreement is often dramatically exaggerated in political rhetoric, which makes it all the more important to remember that far more unites us as Americans than divides us as Democrats or Republicans (or independents).
Metadata is data about data. For example, if you save an audio file to a database, what might you want to know about that file? Possibly its duration, format, creator, owner, and subject, among other information.

Many approaches to creating instruction with repurposeable learning objects suggest the use of Extensible Markup Language (XML) or a similar language that facilitates the ready use of metadata and retrieval based on that metadata. Although the creation of XML objects is beyond the scope of this module, it might still serve the developer of HTML objects to include useful metadata. Metadata tags are invisible to the page viewer, but are used by search engines and database routines. To add or edit metadata, the HTML code can be modified directly, or Web page creation software can be used.

Example 1. This Page's Header

The HTML code behind this Web page, which you might be looking at as one panel in a frames page, includes a head and a body section. The metadata is listed in the head as follows, with extra spaces added for ease of reading:

HTML Metadata Code:

<meta name="Author" content="Jim Flowers">
<meta name="GENERATOR" content="Microsoft FrontPage 5.0">
<meta name="Description" content="HTML pages objects can be enriched by adding metadata.">
<meta name="Keywords" content="metadata, HTML, learning object, rlo, ...">
<title>Adding Metadata to HTML Pages</title>

Using Web Page Creation Software to Enter Metadata

Web page creation software typically has a means to enter metadata, as FrontPage does through its Page Properties dialog box, under the Custom tab. In addition to an "Author" tag, you may wish to create other meta tags to identify the date of creation or revision, or other specifics about creation. The "Description" tag allows a description to appear under selected search engines. The "Keywords" content identifies important keywords used for searches, and the use of this is recommended if you would like the objects you create to be found using these searches.
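Metadata entered this way can also be read back programmatically. The following Python sketch is not part of the original module; it is an illustration, using only the standard library, of how the meta name/content pairs and the title might be pulled out of a page head. The sample head is modeled on Example 1.

```python
from html.parser import HTMLParser

class MetaExtractor(HTMLParser):
    """Collects <meta name="..." content="..."> pairs plus the <title> text."""
    def __init__(self):
        super().__init__()
        self.meta = {}          # e.g. {"Author": "Jim Flowers"}
        self.title = ""
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and "name" in attrs:
            self.meta[attrs["name"]] = attrs.get("content", "")
        elif tag == "title":
            self._in_title = True

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title += data

# Sample head modeled on Example 1.
head = """<head>
<meta name="Author" content="Jim Flowers">
<meta name="Description" content="HTML pages objects can be enriched by adding metadata.">
<meta name="Keywords" content="metadata, HTML, learning object, rlo">
<title>Adding Metadata to HTML Pages</title>
</head>"""

parser = MetaExtractor()
parser.feed(head)
print(parser.meta["Author"])  # Jim Flowers
print(parser.title)           # Adding Metadata to HTML Pages
```

A database routine could store the resulting dictionary as searchable fields, which is exactly the kind of retrieval the keywords tag is meant to support.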
Too often, Web page authors even leave the page title blank. The title typically appears at the top of the browser, as the name of the page found by a search, and often in printouts of the page, so it is important to name each page. When one page is used as a template and copied to create other pages, the titles are often overlooked, especially since they do not appear on the previewed page. The final base target tag is not metadata, but an instruction for this particular page that, unless otherwise specified, hyperlinks are to open in a new (blank) browser window.

Benefiting from Metadata

One benefit of using metadata is the association of each object with data in a series of fields that are relevant to your situation. You can keep track of creation date, revision date, the name of the person revising the object, and other data of relevance. Another benefit is that metadata facilitates search and retrieval. The keywords you specify may help those using some Web search engines to find your objects.

With the advent of XHTML, FrontPage 2003 and its support for XML, and greater acceptance of the SCORM requirements, serious developers may find it best to leave behind HTML for the extensible languages that offer more support for database retrieval and greater reliance on metadata. But for now, if you create HTML learning objects, try adding some metadata to improve the ability of others to find your objects.
Volume 20, Number 3, 1995

Currently, in most veterinary institutions, surgical instruction involves the use of animals, either alive or as cadavers. Many veterinary faculty and students question the use of live animals for teaching surgery, and condemnation of this practice by animal welfare groups has been the focus of considerable public attention. This concern has also made it more difficult and expensive to procure animals for teaching purposes. To date, institutions teaching surgery have had three options: ignore public concerns; use cadavers; or use client-owned animals which are presented for recovery surgery.

Fighting public opinion can lead to "bad press" for the institution, and a number of legal suits have been filed against schools by students who feel that being forced to use live animals is a violation of their rights. Cadavers often have to be stockpiled for varying periods of time prior to their use. This requires freezer space, and the thawed tissues are quite abnormal and aesthetically unpleasant. The integration of students into clinical surgery may be the ideal method of teaching but requires considerably more student contact time and larger faculty numbers than are available at most schools. The advanced type of referral case which is seen at most teaching hospitals is also generally considered to be inappropriate for training the novice surgeon.

Although there are some models currently on the market (plastic bones for teaching orthopedics and artificial skin for practicing suturing), there has been no model in general use for teaching abdominal surgery. We therefore developed our own dog abdominal surrogate for instructional exercises (DASIE) and evaluated its acceptance by the undergraduate students in the surgery training programs at the Ontario Veterinary College and the Atlantic Veterinary College.
Materials and Methods

Each DASIE consists of a hollow cylinder of laminated foam rubber and fabric with the ends plugged with a rectangular reinforcement block. The multiple layers of the outer shell are designed to be cut and sutured individually, much like the tissues of the canine abdomen. Colored threads are incorporated between the layers of the DASIE wall to simulate blood vessels that are transected by the skin and subcutaneous incision. These mock vessels can be grasped with hemostats and ligated. Within the cavity of the DASIE is a length of knitted tube that can be handled surgically like small intestine.

The DASIE was integrated into the undergraduate surgery laboratories at the Ontario and Atlantic Veterinary Colleges to teach abdominal draping, aseptic technique, the use of surgical instruments and the rudiments of tissue handling. Prior to the use of the DASIE, students were given the opportunity to practice these skills on a solid block of foam rubber. Following the DASIE laboratory, the students performed abdominal surgery on live animals (dogs, sheep or goats). A second DASIE laboratory was performed by the students after their live animal experience. Following completion of the second DASIE laboratory, a questionnaire was used to evaluate student acceptance of the teaching models. Students were asked to compare the foam block, DASIE and animal, and rate how useful each was for:

1. Learning sterile technique and general operating team roles;
2. Learning sterile draping, including the use of skin edge drapes;
3. Learning to cut tissue in layers to open a body cavity;
4. Learning the use and "feel" of surgical instruments;
5. Learning the control of hemorrhage by ligation of blood vessels;
6. Learning to handle and suture intestinal tissue;
7. Learning to close a body cavity by suturing tissue layers;
8. Learning suture patterns and to tie knots in suture material;
9. An overall learning experience.
Each learning objective was scored for each model out of a possible five points, 1 being the lowest or poorest learning experience and 5 being the highest or best learning experience. Students were also asked if they agreed with the use of live animals for surgical instruction and were invited to comment, positively or negatively, on the use of the DASIE for teaching the surgery laboratory. The mean values for the responses to each question were calculated and statistical significance was determined by Chi square analysis.

A total of 116 students completed the questionnaire. Ninety-six percent of these students agreed with the use of animals for teaching surgery provided the animals were treated humanely. The results of the questionnaire are shown in Table 1. All mean values within each question were significantly different from each other (P<0.05). Question five (ligation of blood vessels) had fewer responses since only students from the Atlantic Veterinary College used DASIEs containing the artificial blood vessels. Most comments made by the students were positive and related to what they perceived as reduced stress in the laboratory and their support of decreased animal utilization. Several suggestions were made regarding potential improvements in the design or manufacture of the DASIE. No negative comments were received.

Table 1. Summarized results of questions 1-9. Respondents for all questions were from both universities except "ligation of blood vessels," which included only students from the Atlantic Veterinary College.

    Learning Objective                 Mean Learning Value*
                                       Foam Block   DASIE   Animal
    1. Sterile Technique                  2.08       3.88    4.72
    2. Use of Surgical Drapes             1.91       3.84    4.89
    3. Opening a Body Cavity              1.14       2.88    4.98
    4. Use of Surgical Instruments        2.64       3.97    4.73
    5. Ligation of Vessels                1.03       1.75    5.00
    6. Suturing Intestine                 1.05       2.55    4.98
    7. Suturing Tissue Layers             1.18       3.04    4.96
    8. Suture Patterns                    3.30       4.29    4.70
    9. Overall Learning Experience        2.43       3.81    4.99

    * Respondents rated the learning value of each learning objective for each model on a scale of 1 to 5; 1 was low and 5 was high.

The results of the questionnaire confirm the impression of the faculty and staff who participated in the training sessions that the DASIE functioned well as an intermediate step between the single-layer foam block and the animal. The multiple layers of laminated fabric and foam rubber respond to surgical instruments much like the tissues of the canine abdomen. The layered outer wall and internal tube of "bowel" permit students to practice the various suture patterns used clinically for abdominal, gastrointestinal and urogenital procedures. The "blood vessels" between the layers give the students practice at grasping and ligating specific points of tissue. Because the welfare and survival of the patient is not an issue, the use of the surrogate helps reduce the apprehension that most students feel when first attempting surgery on live animals.

Clean-up following the training exercise is minimized with the DASIE. Students have more time to learn proper surgical technique rather than spending a large portion of their laboratory period washing instruments. Students also have the option of borrowing or purchasing a DASIE to practice their surgical techniques outside the formal laboratory session. The initial purchase price of a DASIE is approximately one tenth that of a conditioned research dog at our facilities. Because of its cylindrical shape, each surrogate can be rotated to allow six to eight incisions without affecting its teaching value. This multiple-use capability increases the potential cost savings.

As a surrogate, the DASIE was well received by students. We consider it to be an effective, low-stress method of preparing for live animal surgery. Its use has reduced the need for animals in teaching abdominal surgery.
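As a rough consistency check (not part of the original study), the per-model averages of the published Table 1 means can be computed in a few lines of Python:

```python
# Per-objective mean scores copied from Table 1 (learning objectives 1-9).
scores = {
    "Foam Block": [2.08, 1.91, 1.14, 2.64, 1.03, 1.05, 1.18, 3.30, 2.43],
    "DASIE":      [3.88, 3.84, 2.88, 3.97, 1.75, 2.55, 3.04, 4.29, 3.81],
    "Animal":     [4.72, 4.89, 4.98, 4.73, 5.00, 4.98, 4.96, 4.70, 4.99],
}

# Average each model's ratings across the nine learning objectives.
averages = {model: round(sum(vals) / len(vals), 2) for model, vals in scores.items()}
print(averages)  # {'Foam Block': 1.86, 'DASIE': 3.33, 'Animal': 4.88}
```

The DASIE's overall average falls roughly midway between the foam block and the live animal, which is consistent with its intended role as an intermediate training step.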
This follows the philosophical trend of today's society in its demands for non-living teaching models. We suggest the use of an abdominal surrogate as an aesthetically acceptable alternative to live animal or cadaver surgery for some introductory surgical laboratories.

References and Endnotes

1. Rollin BE: Changing social ethics on animals and veterinary medical education. J Vet Med Educ 17:117-84, 1990.
2. Bauer MS, Glickman N, Glickman L, Toombs JP, Bill P: Evaluation of the effectiveness of a cadaver laboratory during a 4th-year veterinary surgery rotation. J Vet Med Educ 19:77-84, 1992.
3. Jennings PB: Alternative to the use of living animals in the student surgery laboratory. J Vet Med Educ 13:14-16, 1986.
4. DeYoung DJ, Richardson DC: Teaching the principles of internal fixation of fractures with plastic bone models. J Vet Med Educ 14:30-31, 1987.
5. Johnson AL, Harai J, Lincoln J, Farmer JA, Korvick D: Bone models of pathological conditions used for teaching veterinary orthopedic surgery. J Vet Med Educ 17:13-15, 1990.
6. Bauer MS, Seim HB: Alternative methods to teach veterinary surgery. Humane Innovations and Alternatives 6:401-404, 1992.
7. Johnson AL, Farmer JA: Evaluation of traditional and alternative models in psychomotor laboratories for veterinary surgery. J Vet Med Educ 16:11-14, 1989.
Translation of inflict in Spanish:

to inflict something on somebody/something

the suffering which he inflicted on his family
el sufrimiento que le causó a su familia
el sufrimiento que le ocasionó a su familia
el sufrimiento que le infirió a su familia [formal]

he would never forgive the indignities inflicted on him
nunca perdonaría las vejaciones de las que había sido objeto or que le habían infligido

we didn't expect to inflict such an overwhelming defeat on them
no esperábamos infligirles una derrota tan aplastante

heavy penalties were inflicted on them
se les aplicaron or se les impusieron penas severas

she inflicted her company on us
nos impuso su presencia
se nos pegó [colloquial]

Example sentences

- Its whip-like tail can drive a tail spine into an intruder and inflict a painful wound.
- It inflicts a painful sting that is sometimes deadly to humans, as well as to young, unprotected livestock and wildlife.
- Both the Greater Weever and the Lesser Weever are capable of inflicting a sharp and painful sting from the spiny rays of the first dorsal fin.
- We've tried everything to help him deal with his issues, to get him to talk and to make him realize that the way he inflicts his rage on those around him is totally unacceptable.
- But globalisation inflicts insecurities on many whose cultures are put on the defensive and whose civilisations, after ages of little change, are compelled to adapt to outside influences.
- At one level, this is certainly the case: the loss of a top operative inevitably inflicts some damage on the operational capabilities of an organisation.
PROVIDENCE METHODIST CHURCH

In 1865 the Maryland Conference of the Methodist Protestant Church established a "mission" or charge circuit in Sussex County. At the time local members of that faith were meeting nearby in Rogers School. Services were held in the schoolhouse until 1886, when the present church was built on land that had been provided by Harrison Rogers. The church lot was formally transferred to the trustees by Charles H. Elliott in 1924. Following the closure of Rogers School in 1932, the congregation acquired the vacant building for use as a Community Hall. Constructed in 1904 to replace the earlier structure where the congregation had once met, the building was moved to its present location in 1933.

The Delaware Public Archives operates a historical markers program as part of its mandate. Markers are placed at historically significant locations and sites across the state. For more information on this program, please contact Kevin Barni at (302) 744-5015.

LOCATION: Near Georgetown. East side Rd. 431, approximately 2.5 miles south of the intersection of Rd. 431 and US 113.
Editor’s Note: Dr. Daju Resosudarmo of the Center for International Forestry Research will speak on “Forests in Sustainable Development” in the UN General Assembly on 4 February starting at 9 a.m. EST (4 p.m. GMT); her presentation will be shown live online at webtv.un.org. The Open Working Group’s website will post supporting materials after the meeting; Dr. Resosudarmo’s presentation can also be viewed here. BOGOR, Indonesia — The final meeting of the 30-member UN Open Working Group on Sustainable Development Goals (SDGs) is convening to address the last of 32 themes to be considered as part of the formulation of the SDGs. The role of forests is one of those themes. The SDGs — a proposed framework to replace the Millennium Development Goals (MDGs), which expire in 2015 — are intended to guide global action on health, poverty, hunger, climate and other development challenges. Forests cut across all of them. “Forests are important throughout the development agenda,” said Peter Holmgren, Director General of the Center for International Forestry Research (CIFOR). “They’re important for food security, for protecting the environment, for climate change, for the green economy — so we can’t really place forests in one box. We need to figure out how forests can contribute across the range of SDGs.” Forests maintain water supplies, help mitigate climate change, and provide billions of the world’s poorest people with income, food and medicine. Global policy makers know well the value of forests, but development interventions have failed to leverage their contributions to ecosystems and livelihoods. Furthermore, global development frameworks in recent years — the MDGs among them — have tended to be sector-based, addressing narrow issues through narrow means. 
For example, though the MDGs made tangible gains in slashing poverty and hunger while increasing access to primary education and maternal care, among their documented shortcomings was a failure to address the environment in an integrated and cross-sectoral way. “The MDGs were quite specific, and often sector-bound,” Holmgren said in an interview. But even in the SDGs, forests alone won’t save the day: “We can’t really find the solutions if we only look at the forests,” he said. Instead, Holmgren recommends a landscapes approach — one that considers multiple sectors such as forests and agriculture jointly in a given area — to provide the basis to discuss common solutions. To that end, an SDG on “sustainable landscapes” will be proposed in New York. CIFOR Scientist Daju Resosudarmo, invited to speak to the UN Open Working Group, will call upon the members to consider an SDG that addresses several landscape functions, such as livelihood provision, ecosystem services and food production. Forests are a major component in these functions, and she will argue that an SDG targeting them is potentially beneficial not just for forests themselves but for the success of the overall development goals. “We are not only asking how forests can be harnessed for poverty eradication and sustainable development,” Resosudarmo said in prepared remarks. “We can also turn the question on its head and examine how a new development framework can support forests and forestry in fulfilling their multiple functions.” An SDG on sustainable landscapes could count on political support behind it. At a UN Forum on Forests meeting in April 2013, representatives from Ghana and China urged for a cross-cutting approach to forests and development, with a focus on poverty eradication. Indonesian representatives did likewise, calling for an SDG “that includes poverty eradication, sustainable growth and equity, and forests,” rather than a stand-alone goal specific to forests. 
Holmgren is hopeful that policy makers will incorporate this approach and emphasized that the SDGs are a useful chance to find joint solutions to the world’s most pressing challenges. “The SDGs represent an opportunity,” he said. “It is a new way of formulating where we want to take the planet and where we want to take human society.” The SDGs will be finalized in September 2015.
Tumor Exosome Protein Signatures Predict Future Organ Sites of Cancer Spread

Findings By Weill Cornell Medicine Support Paget's "Seed and Soil" Theory of Metastasis

NEW YORK (October 28, 2015) — It's been a longstanding mystery — why certain types of cancers spread to particular organs in the body. Now, investigators from Weill Cornell Medicine have discovered precisely how this happens, supporting a century-old hypothesis known as the seed and soil theory of metastasis.

[Image: This image shows exosomes (green) that have infiltrated the whole lung. Image credit: Ayuko Hoshino, David Lyden]

The culprit? Protein signatures on the membranes of small, tumor-secreted packages containing the blueprint that drives cancers to distant organs. These signatures could offer doctors a powerful new way to detect whether a patient's tumor will metastasize and where, providing critical insights into the estimated 1.6 million new cancer cases diagnosed every year. Ninety percent of all cancer-related deaths are related to metastasis.

In their study, published Oct. 28 in Nature, the scientists investigated the role of exosomes, comprised of tumor-derived proteins, in preparing a microenvironment fertile for cancer metastasis. Working with exosomes derived from multiple cancers, they discovered that the proteins exosomes carry act as "ZIP codes" that direct exosomes to distinct organs, where they lay the molecular groundwork for metastases to form.

"Our research offers a new approach to identifying patients who are likely to develop metastatic disease," says senior author Dr. David Lyden, the Stavros S. Niarchos Professor in Pediatric Cardiology and a professor of pediatrics and of cell and developmental biology at Weill Cornell Medicine. "Instead of waiting for late-stage metastasis, we can now initiate preventative strategies at an earlier point of disease progression with the hope of preventing its spread. This really changes the treatment paradigm."
Most cancers have a preferred site of metastasis. While it might seem logical for a melanoma of the eye, for example, to metastasize to the brain because of its proximity, it in fact travels to the liver. Pancreatic cancers frequently metastasize to the liver, and pediatric bone cancers to the lungs. In 1889, a London physician named Stephen Paget was the first to propose "the seed and soil" hypothesis for the preferential spread of cancer to specific organ sites, known as metastatic organotropism. Paget proposed that distant secondary sites are somehow more receptive to tumor growth, just like soil awaiting seeds to sprout. To illuminate the role of exosomes in cancer metastasis, the investigators used patient-derived cell lines of breast cancer that spreads to the lungs and of pancreatic cancer that spreads to the liver. They coupled a fluorescent dye to the fatty membranes of exosomes isolated from each cancer and injected them into healthy mice. There, they observed the exosomes traveling to and fusing with cells in the lungs and in the liver, respectively. The investigators then used the same approach for 28 other cancers, including colorectal, lung, melanoma and pediatric cancers that metastasize to specific organs, such as lung, liver and brain. Exosomes from each cancer reached the organs associated with metastasis. Once there, the exosomes reprogrammed the organ sites that would otherwise be incapable of colonizing cancer to support tumor cell growth. "The exosomes reach these organs long before the tumors do, suggesting that the exosomes themselves prepare the "soil" — the distant organs — for metastasis," says first author Dr. Ayuko Hoshino, a research associate in pediatrics at Weill Cornell Medicine. Next, the researchers sought to determine which specific cell types within each metastatic organ are incorporating tumor-derived exosomes and are responsible for the formation of niches that support cancer metastasis. 
"Exosomes not only present affinity for specific organs, but also for specific cell types inside these organs," says co-first author Dr. Bruno Costa-Silva, an instructor of cell and developmental biology in pediatrics at Weill Cornell Medicine.

To find out how this preferential targeting occurs, the researchers in the Lyden Lab, funded by the Children's Cancer and Blood Foundation, analyzed the proteins in exosomes isolated from the cancer cell models using a technique called mass spectrometry. They identified a family of binding proteins called integrins that were present at high levels on the surface of exosomes.

"There was a pattern that became quite apparent," says Dr. Lyden, who also has appointments in the Sandra and Edward Meyer Cancer Center and the Gale and Ira Drukier Institute for Children's Health at Weill Cornell Medicine. "We found a particular integrin for each organ-specific site."

Among members of the integrin family, exosomes bearing alpha-6beta-4 and alpha-6beta-1 integrin promoted lung metastasis, while those bearing alpha-vbeta-5 integrin promoted liver metastasis. Brain metastasis was associated with alpha-vbeta-3 integrin-expressing exosomes. The binding of exosomes carrying these integrins to specific organs promoted inflammation at future metastatic sites, creating an environment that would support tumor growth in these organs.

[Photo: Dr. David Lyden]

"The integrin-specific signature that we identified may have significant value clinically, serving as a prognostic indicator for metastasis to specific organ sites," Dr. Lyden says. In addition, detecting the presence of organ-specific integrins on bloodborne, tumor-derived exosomes could direct physicians to monitor patients more closely and tailor therapies accordingly. "This will greatly assist clinicians in initiating preventive therapies for patients susceptible to developing organ-specific metastases," Dr. Lyden says.
Exosome integrin profiles also offer promising new avenues of therapeutic development, he adds.

These findings extend the Lyden Laboratory's research into exosomes and demonstrate that they are the primary vehicle by which cancers preferentially metastasize to distant organs. Investigators in the lab recently found that exosomes from different types of tumors, such as melanoma and pancreatic cancer, could kick-start molecular changes that establish future metastatic niches.

These discoveries were made possible by an international collaboration between researchers at Weill Cornell Medicine, National Taiwan University, Memorial Sloan Kettering Cancer Center, the Spanish National Cancer Research Center, University of Porto, the University Medical Center Hamburg-Eppendorf, University of Nebraska Medical Center, University of Pennsylvania, Princeton University, Fred Hutchinson Cancer Center, Oslo University Hospital, and University of Tokyo.

Weill Cornell Medicine

Weill Cornell Medicine is committed to excellence in patient care, scientific discovery and the education of future physicians in New York City and around the world. The doctors and scientists of Weill Cornell Medicine — faculty from Weill Cornell Medical College, Weill Cornell Graduate School of Medical Sciences, and Weill Cornell Physician Organization — are engaged in world-class clinical care and cutting-edge research that connect patients to the latest treatment innovations and prevention strategies. Located in the heart of the Upper East Side's scientific corridor, Weill Cornell Medicine's powerful network of collaborators extends to its parent university Cornell University; to Qatar, where an international campus offers a U.S. medical degree; and to programs in Tanzania, Haiti, Brazil, Austria and Turkey.
Weill Cornell Medicine faculty provide comprehensive patient care at NewYork-Presbyterian Hospital/Weill Cornell Medical Center, NewYork-Presbyterian/Lower Manhattan Hospital and NewYork-Presbyterian/Queens. Weill Cornell Medicine is also affiliated with Houston Methodist. For more information, visit weill.cornell.edu.

Posted October 28, 2015, 2:00 PM
Like most members of Phylum Cnidaria, the tentacles of Phyllorhiza are equipped with stinging cells called cnidocytes. Within these cells are stinging organelles called nematocysts. When discharged, nematocysts can immobilize small prey items that are subsequently ingested. Nematocysts are also used as a defense mechanism. The planktonic egg and larval stages of several fish species (including commercially important species such as red snapper in the Gulf of Mexico) are probably important as prey items.

Additionally, throughout its native range and much of its introduced range, P. punctata also harbors endosymbiotic zooxanthellae within its bell. In a relationship analogous to that of reef-building tropical corals and their resident zooxanthellae, primary production of the photosynthetic zooxanthellae likely fulfills a large proportion of the nutritional needs of the host jellyfish (Garcia and Durbin 2003).

The invasive Gulf of Mexico P. punctata populations were/are unusual in that they lacked endosymbiotic zooxanthellae (Graham et al. 2003). Zooplanktivory was/is the sole trophic mode of these populations, which nonetheless attained high population numbers.
BATES, JAMES CAMPBELL BATES, JAMES CAMPBELL (1837–1891). James Campbell Bates, doctor and Confederate army officer, was born on May 14, 1837, in Overton County, Tennessee, the son of Nancy (McDonald) Bates. His father died when he was three years old, prompting Bates, his mother, and an older sister to move to Henderson, Rusk County, Texas, to be closer to his mother’s family and siblings. In 1856 they moved to Paris, Texas, where Bates attended local schools. Determined to continue his education, Bates returned to Tennessee to attend Bethel College and then later graduated from the University of Virginia. In 1860 Bates resided in Paris with his mother, his sister Adela and her husband William Bramlette, their children, three slaves, and two boarders. Bates’s intelligent bearing and responsible demeanor encouraged the United States government to name him a census marshal for Lamar County. As Bates recorded the census for 1860, the secessionist movement in Texas gained momentum, and by February 1861, secessionists successfully convinced Texans to leave the United States and join the Confederacy. When the war started, volunteer companies of soldiers organized and scores of men joined. In North Texas, the Ninth Texas Cavalry was officially mustered into service on October 14, 1861. The Ninth’s organizer and first commander was Clarksville merchant William B. Sims, and some of the first action they saw was in Indian Territory. When James Bates joined the Ninth Cavalry, he was elected third lieutenant of Company H. He earned the respect of his fellow cavalrymen and quickly rose in the ranks. He continued to earn high accolades from the other soldiers, especially after he rescued a wounded man while the company was still under fire at the battle of Chusto-Talasah in December 1861. In spite of the setback at the battle of Pea Ridge in Arkansas, Bates was elected first lieutenant on March 26, 1862. On May 10, 1862, he accepted the rank of captain. 
By 1864 Bates and his company had made their way from Texas to Georgia. On May 21, the Ninth Cavalry was protecting bridge crossings north of Allatoona, Georgia, on the Etowah River when they came under heavy fire. Although the Ninth repelled the attack, Bates was severely wounded and his men feared for his life. A minié ball hit him in his mouth, knocking out some teeth, splitting his tongue, and breaking his jaw. His recuperation took months, but he was determined to return to the fight, and on April 2, 1865, he returned to his camp with his men. He was now a lieutenant colonel, having been elected during his recuperation on September 2, 1864. The war officially ended seven days later.

Bates went home to Paris in late 1865. He farmed, and then in 1866 decided to go back to school to become a doctor. He graduated from the University of Virginia and then studied in New York’s Bellevue Hospital. In 1867 he married longtime sweetheart Thirmuthis “Mootie” Johnson, and they had seven children.

The war took its toll on Bates’s health. He was never a well man after his return home, and even after moving to Palo Pinto County for a change of climate, his health continued to deteriorate. He and his family moved back to Paris in 1887, and Bates died on August 11, 1891. He is buried in Evergreen Cemetery in Paris.

Richard Lowe, ed., A Texas Cavalry Officer’s Civil War: The Diary and Letters of James C. Bates (Baton Rouge: Louisiana State University Press, 1999). James A. Mundie, Jr., with Bruce S. Allardice, Dean E. Letzring, and John H. Luckey, Texas Burial Sites of Civil War Notables: A Biographical and Pictorial Field Guide (Hillsboro, Texas: Hill College Press, 2002). Ralph A. Wooster, Lone Star Regiments in Gray (Austin: Eakin Press, 2002).

The following, adapted from the Chicago Manual of Style, 15th edition, is the preferred citation for this article: Handbook of Texas Online, Stephanie Piefer Niemeyer, "Bates, James Campbell," accessed June 24, 2016, http://www.tshaonline.org/handbook/online/articles/fbafb. Uploaded on February 23, 2011. Modified on June 10, 2011. Published by the Texas State Historical Association.
Presidential Proclamation -- National Equal Pay Day

Throughout our Nation's history, extraordinary women have broken barriers to achieve their dreams and blazed trails so their daughters would not face similar obstacles. Despite decades of progress, pay inequity still hinders women and their families across our country. National Equal Pay Day symbolizes the day when an average American woman's earnings finally match what an average American man earned in the past year. Today, we renew our commitment to end wage discrimination and celebrate the strength and vibrancy women add to our economy.

Our Nation's workforce includes more women than ever before. In households across the country, many women are the sole breadwinner, or share this role equally with their partner. However, wage discrimination still exists. Nearly half of all working Americans are women, yet they earn only about 80 cents for every dollar men earn. This gap increases among minority women and those with disabilities. Pay inequity is not just an issue for women; American families, communities, and our entire economy suffer as a result of this disparity.

We are still recovering from our economic crisis, and many hardworking Americans are still feeling its effects. Too many families are struggling to pay their bills or put food on the table, and this challenge should not be exacerbated by discrimination. I was proud that the first bill I signed into law, the Lilly Ledbetter Fair Pay Restoration Act, helps women achieve wage fairness. This law brings us closer to ending pay disparities based on gender, age, race, ethnicity, religion, or disability by allowing more individuals to challenge inequality. To further highlight the challenges women face and to provide a coordinated Federal response, I established the White House Council on Women and Girls.

My Administration also created a National Equal Pay Enforcement Task Force to bolster enforcement of pay discrimination laws, making sure women get equal pay for an equal day's work. And, because the importance of empowering women extends beyond our borders, my Administration created the first Office for Global Women's Issues at the Department of State.

We are all responsible for ensuring every American is treated equally. From reshaping attitudes to developing more comprehensive community-wide efforts, we are taking steps to eliminate the barriers women face in the workforce. Today, let us reaffirm our pledge to erase this injustice, bring our Nation closer to the liberty promised by our founding documents, and give our daughters and granddaughters the gift of true equality.

NOW, THEREFORE, I, BARACK OBAMA, President of the United States of America, by virtue of the authority vested in me by the Constitution and the laws of the United States, do hereby proclaim April 20, 2010, as National Equal Pay Day. I call upon all Americans to acknowledge the injustice of wage discrimination and join my Administration's efforts to achieve equal pay for equal work.

IN WITNESS WHEREOF, I have hereunto set my hand this twentieth day of April, in the year of our Lord two thousand ten, and of the Independence of the United States of America the two hundred and thirty-fourth.
Winter is not my favorite time of year for several reasons, including the fact that a lot of wildlife have gone south for the winter or are hibernating. Most plants are dormant, making the landscape bleaker. When wildlife are out and about, it’s often in groups. I started mulling over this winter communal tendency after three events recently: a flock of vultures had briefly chosen to roost in the town of Washington, five male cardinals were sitting quietly together in one bare sapling in the forest that borders my backyard and my landlord found a half-dozen five-lined skinks hibernating under some debris near his garage.

While spring brings individuals in some species together briefly to mate (such as frogs), individuals in other species, especially birds, disperse at that time to stake out and defend territory in which to mate and raise their young. Once the breeding season is over, some species come together for a variety of reasons: protection, foraging, keeping warm or other reasons that may not be as apparent and could include social components. Winter groups may consist of just a few individuals, or, as in the case of some bird species, huge flocks – or both.

Birds are especially likely to come together in winter. Some form loose flocks, coming together to feed and roost; others may stay in flocks throughout the day. During this winter communal period, these groups have to sort out dominance, with males at the top of the hierarchy in most species.

Advantages to grouping in terms of security are pretty obvious – such as an enhanced ability to drive off or distract intruders – but wouldn’t individuals fare better when it comes to foraging? In winter, food tends to be scarcer and more concentrated in fewer areas. Insects are not active, and a multitude of plants are not producing fruit. What fruit is available is usually on late-fruiting plants, from grapes to tough-skinned winterberry.

In winter, some wildlife switch their diets to take advantage of these cold-season offerings or supplement them with fruits. Wildlife searching for these geographically concentrated foods may find sharing a better, or at least safer, strategy than doing it on their own. Winter groups are certainly more mobile than breeding pairs and can go where the food is.

Vultures are among the species that can form large groups to roost in the winter. This massing can be daunting to humans, considering the dark symbolism, unpleasant eating habits and copious amount of white poop these large birds bring with them.

Crows can also be especially annoying to humans. Like other corvids, a bird family that also includes ravens and jays, crows can be extremely vocal and loud, especially when massed in large flocks. I had a crow roost in pine trees in back of another house I rented in the county, and their convergence in the evening was indeed quite loud, but I loved having these fascinating birds around.

Dark-eyed juncos, American goldfinches, bald eagles, starlings (a nonnative species) and cardinals are among other bird species that flock up in winter. The five male cardinals in the sapling in my backyard looked like a classic Virginia Christmas card in which a male’s cheery red plumage stands out in contrast to the white and gray of the winter landscape.

It was easy to figure out why these particular male cardinals were buddying up in the backyard on this occasion. I had swept a patch of the yard clear of snow and spread bird feed there. Although males dominate such winter groups, the fact that the cardinals in the sapling were all males was likely coincidental. At least as many females were nearby, and I often see them together.

The skinks found sheltering together under debris near the garage likely also were together coincidentally. Reptiles are inactive in winter to conserve energy, since they can’t generate it themselves as mammals can. Instead they seek shelter and brumate, a state similar to hibernation, on or under logs, rocks, leaves or other natural debris or in underground dens. The half-dozen skinks just ended up in the pile of leaves and dirt because it was one of few options around the garage. Since my landlord was cleaning up around the garage, he transported the lizards to a safer location, piling up leaves over them to protect them.
The "Turing test" has been making headlines thanks to a field you might not expect: video games. In the middle of the last century, British mathematician Alan Turing proposed the following test for determining whether a computer could think...on its own!

Your hard drive is getting full, so you delete a few photos, some word documents, and then empty the recycle bin. They're gone now, right?

Computer scientists are investigating mimicking nature in an effort to make it harder to hack into computers and networks. Learn how on this Moment of Science.
Barbour County is located in the southeastern section of the state, bounded on the east by the Chattahoochee River and the State of Georgia. The county seat was established in Louisville in 1833, and moved to Clayton in 1834. Today Barbour County contains two courthouses - one in Clayton and one in Eufaula.

Source: Owen, Thomas McAdory. History of Alabama and Dictionary of Alabama Biography. Chicago: S. J. Clarke Publishing Co.

THE EUFAULA TREE THAT OWNED ITSELF...

Creek Indians likely met under its cool summer branches. When Eufaula was known as Irwinton, it was an outpost of sorts and early settlers passed by it as they traveled north and over the wooden bridge that crossed Chewalla Creek. Later, it stood sentinel before the home of Confederate Capt. John A. Walker, and little girls made play houses under its canopy while little boys played marbles. When Capt. Walker's house burned, the tree survived. During the cyclone of 1919, the roots held firm. (...)

An iron fence was donated by Dr. J. L. Houston, along with a bronze plaque that bore the inscription: "The Tree That Owns Itself, deeded by the city of Eufaula to the Post Oak Tree, April 8, 1936, christened the Walker Oak May 1, 1936, 'Only God Can Make a Tree'." The fence itself was historic and according to The Tribune, it had "adorned flower gardens of long ago."

In 1961, the tree was still standing guard at the intersection of Highland Avenue, Cotton Avenue and Eufaula Avenue. Tourists often stopped to take its picture and read its inscription. But on April 9, 1961, the long-standing tree met its match when a tornado-like wind swept through Eufaula.
Health Sleuths Assess Homocysteine as Culprit
By JANE E. BRODY
Published: June 13, 2000

What's "normal"? That's the question dozens of researchers and some physicians are now asking themselves about homocysteine, a substance in blood that may rival cholesterol as a major actor in the nation's leading killer, heart disease, and play an important role in other common health problems as well.

Recent evidence has implicated elevated blood levels of homocysteine, an unhappy byproduct of normal metabolism, in conditions ranging from miscarriages and birth defects to strokes, Alzheimer's disease and other disorders that afflict older people, including osteoporosis and presbyopia, the eye changes that force the middle-aged to acquire reading glasses. And since hostility and stress raise blood levels of homocysteine, it may explain why these emotional states are linked to heart attacks.

Though strong hints of homocysteine's ability to damage arteries date back more than 30 years, until the 1990's its importance in cardiovascular disease was completely overshadowed by cholesterol. While concerns about elevated cholesterol levels fingered dietary fat as the culprit, high levels of homocysteine are associated with a diet rich in animal protein, the source of homocysteine's parent compound, the amino acid methionine.

Furthermore, as Dr. Meir Stampfer of the Harvard School of Public Health pointed out, "there was no commercial interest in studying homocysteine," since the way to reduce it -- eating less meat and taking supplements of B vitamins -- is inexpensive and not patentable. For cholesterol, on the other hand, pharmaceutical companies seeking to sell cholesterol-lowering drugs paid for many studies.

In addition, homocysteine's potential role in common disorders may have been overlooked because the levels associated with an increased risk of health problems are still listed as normal by medical laboratories -- 8 to 20 micromoles per liter of blood plasma. However, recent studies have linked levels as low as 15 micromoles to an increased risk of heart attack, stroke, peripheral vascular disease and venous thromboembolism, potentially life-threatening blood clots in the veins.

Among the 15,000 doctors participating in the Physicians' Health Study, those with a homocysteine level of 15 micromoles or higher had a heart attack rate three times as high as those with lower levels over a period of just five years, Dr. Stampfer and his Harvard colleagues found. Even a level of 12 micromoles can double coronary risk.

Dr. William Castelli, former director of the Framingham Heart Study, who now heads the Framingham Cardiovascular Institute in Massachusetts, considers levels higher than 9 micromoles to be elevated, placing people at increased risk of a heart attack or stroke. In the Framingham study, he said, about 40 percent of the people had homocysteine levels greater than 9.

"Homocysteine is an important new risk factor for cardiovascular disease," Dr. Castelli said in an interview. "There are about 17 studies now under way to determine the benefits of lowering homocysteine levels. We're still missing crucial data. We don't yet know if lowering homocysteine will result in a lower rate of heart attacks or strokes." The studies are being financed by the National Institutes of Health.

But Dr. Castelli and others pointed out that since the way to bring down homocysteine -- eating less meat and taking supplements of the B vitamins folate, B6 and B12 that are required by the enzymes that process homocysteine -- is harmless and inexpensive, people should not have to wait five or more years for the research results before trying to lower their own homocysteine levels.

There are several ways in which homocysteine can damage blood vessels. It injures the cells that line arteries and stimulates the growth of smooth muscle cells; both effects can result in lesions that narrow the channels through which blood flows. Homocysteine can also disrupt normal blood clotting mechanisms, increasing the risk of clots that can bring on a heart attack or stroke.

Damage to the small blood vessels in the brain may explain the relationship that has been found between elevated homocysteine levels and the loss of cognitive function and Alzheimer's disease, say Dr. Jacob Selhub and colleagues at the United States Department of Agriculture Human Nutrition Research Center on Aging at Tufts University in Boston. Starting in 1983, they noted, several studies have linked an inadequate supply of B vitamins to a decline in cognitive function in the elderly, and some studies have shown that taking supplements of B vitamins improves cognitive performance.

For example, this year in The American Journal of Clinical Nutrition, Dr. David A. Snowdon and colleagues at the University of Kentucky College of Medicine in Lexington described a study of 30 nuns who had lived in the same convent and eaten from the same kitchen until their deaths at ages 78 to 101. Blood samples taken years earlier revealed that those who had low blood levels of folate were far more likely to have suffered atrophy of the cerebral cortex. The lower the folate levels, the more severe the atrophy of the nuns' brains.
Suppose you're the network administrator at a large UNIX shop, and your MIS department standardizes all your network's client workstations on Windows NT Workstation 4.0. Naturally, your new NT users want to access their UNIX-based files from their NT machines. What are your choices for a low-cost, workable solution to this problem?

Unfortunately, the options are fairly limited. UNIX and NT originated from distinct roots, and because their backgrounds are different, each operating system's (OS's) mechanism for storing and sharing files is unique. There is good news, however. With the growing popularity of NT in large enterprise environments, several methods can help you facilitate file sharing between NT and UNIX. You can enact noninteractive access by means of Microsoft programs such as File Transfer Protocol (FTP) or HyperTerminal, or interactive access by using tools that employ either the Common Internet File System (CIFS) standard or the NFS communications protocol. In this article, we'll describe how these access methods work, and what their strengths and weaknesses are. Then, we'll discuss the security problems that arise when you share files across platforms, and what you can do to address those problems. Along the way, we'll describe connectivity tools that can help make cross-system file sharing as painless and transparent as possible.

Microsoft's TCP/IP suite is limited in file-transfer options. One option is to use NT's FTP client to transfer files between UNIX and NT hosts. Or, you can use the Telnet program to transfer files. Unfortunately, these solutions are slow and don't work in environments where multiple-user access to a file is necessary. In addition, Telnet can transfer only ASCII files--not binary files. TCP/IP solutions are suited primarily to environments where you need to transfer personal files to and from a storage facility on a UNIX host.
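The ASCII-only restriction matters more than it might appear: ASCII-mode transfers rewrite line endings between platforms, which silently corrupts binary files. Here is a minimal, tool-agnostic sketch (plain Python, not tied to any particular FTP or Telnet implementation) of what that newline translation does:

```python
# Sketch: why ASCII-mode transfers must never be used for binary files.
# Moving a file from UNIX to NT in ASCII mode rewrites every LF (0x0A)
# as CRLF (0x0D 0x0A) -- harmless for text, destructive for binaries.

def ascii_mode_transfer(data: bytes) -> bytes:
    """Simulate the newline translation an ASCII-mode transfer performs."""
    return data.replace(b"\n", b"\r\n")

text = b"line one\nline two\n"
# A binary payload in which the 0x0A bytes are data, not line breaks:
binary = bytes([0x7F, 0x45, 0x4C, 0x46, 0x0A, 0x00, 0x0A, 0x01])

print(len(ascii_mode_transfer(text)) - len(text))      # extra bytes: fine for text
print(len(ascii_mode_transfer(binary)) - len(binary))  # extra bytes: the binary is now corrupt
```

A binary-capable transfer mode (FTP's binary, or "image," mode) skips this translation entirely, which is why FTP is the safer of the two native options for anything that is not plain text.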
The points in the TCP/IP methods' favor are that all UNIX OSs include FTP and Telnet servers, and NT includes FTP and Telnet clients.

Another native Microsoft solution is to use HyperTerminal (packaged with NT 4.0 and Windows 95) to transfer files to and from a UNIX system. HyperTerminal supports four file-transfer protocols: XMODEM, YMODEM, ZMODEM, and Kermit. When you use HyperTerminal, you must have a program on your UNIX system that supports one of the HyperTerminal transfer protocols.

If you use a third-party vendor's TCP/IP suite on your NT machines, you might have additional options for performing NT-UNIX file transfers. On most UNIX systems, users have access to the remote copy (rcp) command, which copies files from one OS to another. Another group of programs, collectively referred to as the UNIX-to-UNIX Copy (UUCP) program, lets you transfer files interactively or in a batch mode. Vendors are now making these once UNIX-specific programs available on NT for easier cross-system communication.

The CIFS and NFS options are interactive--either protocol installed on one platform can access files on the other platform as if the files were local. However, to use CIFS or NFS, you must install additional software on either your UNIX or NT hosts. CIFS, originally known as Server Message Block (SMB), is the default network file-sharing mechanism that NT machines use. You equip your UNIX hosts with CIFS software to let UNIX users participate in your NT file-sharing network environment. Alternatively, you can install NFS-enabling software on your NT machines to let your NT users participate in UNIX file sharing. Using NFS requires you to install an additional software package on all your NT machines, a potential administrative headache. Fortunately, a growing number of NFS products offer gateway connectivity between desktop computers and NFS resources, eliminating the need to install software on every NT machine. Let's look more closely at the CIFS and NFS options.
CIFS on UNIX. Implementing a CIFS solution on the UNIX side is often the cleanest cross-system file-sharing solution, because it doesn't require you to install special drivers on your NT host. In addition, growing numbers of UNIX vendors include some form of CIFS software with their products. Even if your UNIX vendor does not include a CIFS solution with its products, you can still choose from several good freeware and third-party products.

At the inexpensive end of the equation is the freeware product Samba. Available in source-code form over the Internet, Samba is perhaps the best CIFS-enabling software product available. You can configure Samba to act as a Primary Domain Controller (PDC) for your NT domain. When a UNIX user connects to the domain, Samba automatically executes an NT logon script. Alternatively, Samba lets you share UNIX directories and printers as shares, as any NT host would. (For more information about Samba, see Mark Joseph Edwards, "Samba," March 1997.)

If freeware doesn't excite you, you can opt for a commercial product. Perhaps the predominant CIFS-enabling UNIX product on the market today is SCO VisionFS. SCO VisionFS offers full CIFS capabilities, including file and printer sharing. Unfortunately, SCO VisionFS doesn't offer any of the advanced domain capabilities Samba offers; however, SCO VisionFS lets you verify user security against an NT domain controller. A version of SCO VisionFS exists for virtually every major UNIX system, including AIX, HP-UX, and SunOS. The downside to SCO VisionFS is its cost: You need to purchase a client access license for each user who will use the product to share files.

Operating either Samba or SCO VisionFS on your UNIX host requires NetBIOS enabled over TCP/IP. Because most UNIX OSs don't have a NetBIOS over TCP/IP driver, SCO VisionFS contains a self-contained NetBIOS driver that provides this capability. (Samba includes a NetBIOS name daemon, nmbd, that enables NetBIOS over TCP/IP.)
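For a sense of scale, sharing a UNIX directory with NT clients through Samba takes only a few lines of smb.conf. The workgroup, share name, path, and group below are hypothetical placeholders, not values from this article; consult the Samba documentation for the full option list:

```ini
; Minimal smb.conf sketch: expose a UNIX directory as an NT-visible share.
[global]
   workgroup = NTDOMAIN        ; hypothetical NT domain/workgroup name
   security = user             ; authenticate per user, not per share

[projects]                     ; hypothetical share name
   path = /export/projects     ; hypothetical UNIX directory to export
   read only = no
   valid users = @staff        ; members of the UNIX group 'staff'
```

NT users then see \\servername\projects in Network Neighborhood like any other NT share.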
Setting up and administering both SCO VisionFS and Samba is easy, although both products require a thorough knowledge of the UNIX OS you install them on. The most difficult part of administering SCO VisionFS or Samba might be creating user accounts on NT and UNIX systems that have access to files on both systems. For example, on an NT domain Mike's logon might be mdeignan, whereas on a UNIX machine it might be mpd. If Mike tries to access resources on the UNIX machine from an NT domain, no mdeignan logon exists to let him do so. CIFS software needs to know how to translate NT logon names to UNIX account names. In most instances, cross-system file-sharing software packages have a manual translation table, but you need to configure the software to tell it how to perform the translation. In general, using the same username on both platforms is easiest--even if you experience some short-term pain in converting all your usernames to a new standard.

NFS on NT. Sun Microsystems developed NFS to facilitate file sharing between UNIX systems in large decentralized computing environments. You can implement NFS capabilities in two ways: by using either an NFS client or an NFS server. The NFS client lets you mount UNIX file systems the UNIX server exports (equivalent to an NT share) on NT host machines. The NFS server lets you export file systems from your NT hosts to UNIX machines. UNIX machines can access these file systems as if they were file systems on other UNIX machines running NFS.

No freeware NFS server or client is currently available for NT systems--your choices are limited to commercial products. Fortunately, many vendors offer NFS connectivity, including Sun Microsystems, NetManage, WRQ, Intergraph, and Hummingbird Communications. Sun Microsystems' NFS client software is Solstice NFS Client. You can download a 30-day evaluation version from Sun's Web site.
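The manual translation table described earlier (mdeignan on the NT domain versus mpd on UNIX) amounts to a lookup with a sensible fallback. A Python sketch of the idea -- the table uses the mdeignan/mpd example from the text plus a hypothetical entry, and is not any real product's file format:

```python
# Sketch of an NT-logon -> UNIX-account translation table, the kind of
# manual mapping CIFS packages such as Samba or SCO VisionFS maintain.
NT_TO_UNIX = {
    "mdeignan": "mpd",  # Mike's two account names, from the article
    "jsmith": "jsm",    # hypothetical additional entry
}

def unix_account(nt_logon):
    """Translate an NT logon to its UNIX account name.

    Falls back to the NT name itself -- the 'same username on both
    platforms' policy the article recommends.
    """
    return NT_TO_UNIX.get(nt_logon, nt_logon)

print(unix_account("mdeignan"))  # mpd
print(unix_account("alice"))     # alice (no mapping needed)
```

The fallback is the interesting design choice: once usernames are standardized across platforms, the table shrinks to nothing and can eventually be removed.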
NetManage offers an NFS server and client software package as part of its UNIX Link Internet applications suite. WRQ's Reflection NFS Gateway lets NT clients on your network access NFS file systems through your NT server without requiring you to install additional networking software on each workstation. Intergraph offers AccessNFS, a series of NFS-enabling products. DiskAccess provides full NFS client capabilities for NT or Win95 systems. DiskShare is an NFS server product. AccessNFS Gateway lets you mount NFS file systems on your NT server and share them with other clients on your network. Microsoft licensed AccessNFS for inclusion in NT 5.0. Hummingbird Communications' NFS Maestro provides both client and server NFS capabilities for your NT host. NFS Maestro lets NT hosts either export file systems to UNIX machines or mount file systems from UNIX hosts as local file systems.

File Sharing and Security

Two major problems have slowed the acceptance of NFS as a file-sharing mechanism in the NT world. First, NFS usually requires a great deal of administrative overhead, in terms of both administration and software maintenance. If yours is a smaller networking environment and you use a networking protocol other than TCP/IP, you must install TCP/IP so NFS can function. If you install NFS, you can no longer use NetBEUI. The second and perhaps larger problem is NFS's lack of comprehensive file- and directory-level security. UNIX implementations, and consequently NFS implementations, lack the comprehensive file- and directory-level security capabilities that NT offers. CIFS-based solutions on UNIX systems suffer from similar security problems, and depending on the CIFS-enabling technology you use, you might have the same security problems with CIFS that you have with NFS.
But if you understand the implications of file and directory security in a CIFS or NFS world, you can implement an effective security solution that lets your users share files without compromising the integrity of your system's data. Let's take a moment to explore NT's and UNIX's security structures. When you understand how they compare, you will be better able to formulate an effective cross-system security strategy.

NT: The Janitor's Key Ring

When Jim was in grade school, his school's janitor had a huge key ring attached to his belt. The thing must have had a thousand keys on it, and it didn't jingle but thunked as the janitor walked down the halls. Security on an NT system reminds us of that key ring. You can have thousands of keys on your system, and each door can have multiple locks. Which locks you can open on which doors depends on your keys.

Permissions in NT give users special keys. One key gives you read access to a particular resource, whereas another key gives you write access to a different resource. Because users can be members of various groups, they have keys for those groups, as well.

One important NT functionality is the ability to create groups within groups. Group membership in NT is transitive, so a user in Group A is automatically a member of all the groups that contain Group A. For example, the local Administrators group contains the global Domain Admins group. Because of this transitivity, members of the Domain Admins group are automatically members of the local Administrators group. Transitivity often makes things easier for NT-only administrators, but it presents problems in a mixed network.

Another important aspect of NT security is the trust relationship. Administrators can configure two domains to trust each other, which lets administrators in one domain give access permissions to users in the other domain. This trust relationship lets two domains share resources without the need for a common user database.
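Because NT group membership is transitive, working out a user's effective memberships means chasing the group-within-group containment until nothing new appears. A Python sketch of that closure (group names beyond the Administrators/Domain Admins example are hypothetical):

```python
# Sketch: transitive NT group membership.
# CONTAINS maps each group to the groups nested directly inside it.
CONTAINS = {
    "Administrators": {"Domain Admins"},  # the example from the text
    "Domain Admins": {"IT Leads"},        # hypothetical further nesting
    "IT Leads": set(),
}

def effective_groups(direct_memberships):
    """Every group a user belongs to, directly or through nesting."""
    result = set(direct_memberships)
    frontier = list(direct_memberships)
    while frontier:
        group = frontier.pop()
        # A member of group G is also a member of any group containing G.
        for outer, nested in CONTAINS.items():
            if group in nested and outer not in result:
                result.add(outer)
                frontier.append(outer)
    return result

# Members of Domain Admins are automatically local Administrators:
print(sorted(effective_groups({"Domain Admins"})))  # ['Administrators', 'Domain Admins']
```

This expansion is exactly what makes the mixed-network case tricky: a flat UNIX group file cannot represent the nesting, so the closure must be computed before mapping groups across systems.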
From a security perspective, NTFS's most significant feature is that you define access to a resource by using an access control list (ACL). As its name implies, an ACL is a list associated with a file of the users and groups who have permission to access or modify the file. However, if you use FAT on your NT system, you can't use the security features NTFS provides.

UNIX: Fewer Keys to Fewer Locks

Gaining access to a UNIX system is essentially the same as accessing NT. Users have a user account and password to identify themselves to the system. When users log on to the UNIX system, their identity determines their access to resources.

Like NT, UNIX designates users and groups to determine resource access. Unlike NT, however, UNIX stores lists of users and groups in text files (/etc/passwd for users and /etc/group for groups). These text files contain all the information about the user's initial environment, including home directory, startup shell, user ID (UID), group ID (GID), and on some systems the encrypted user password. Most recent versions of UNIX can store encrypted user passwords in a separate file (usually the file /etc/shadow), so it is not accessible to users of those versions.

Like NT, current versions of UNIX let users belong to multiple groups. However, many UNIX systems don't allow groups within groups. You can add this functionality to your UNIX system by using Network Information Service (NIS), the most recent version of which is NIS+. NIS is a database containing user and group information that is similar to the user database in NT. NIS manages this information with a single server (the master server) in a domain, which replicates the information to the computers in the domain. (An NIS domain is not necessarily the same as either an Internet domain or an NT domain.)

As in NT, in UNIX a user's UID and GID determine access to resources. However, UNIX allows only three kinds of permission: read, write, and execute.
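The /etc/passwd layout described above is a fixed, colon-separated record, which makes it trivial to inspect programmatically. A small sketch (the sample entry is invented for illustration):

```python
# Sketch: splitting a classic seven-field /etc/passwd entry.
# Fields: name, password ('x' when moved to /etc/shadow), UID, GID,
# GECOS/comment, home directory, login shell.

def parse_passwd_line(line):
    name, password, uid, gid, gecos, home, shell = line.strip().split(":")
    return {
        "name": name,
        "uid": int(uid),   # numeric user ID
        "gid": int(gid),   # primary group ID
        "gecos": gecos,
        "home": home,
        "shell": shell,
    }

entry = parse_passwd_line("mpd:x:1004:100:Mike Deignan:/home/mpd:/bin/sh")
print(entry["uid"], entry["home"])  # 1004 /home/mpd
```

The simplicity is the point the article is making: the whole user database is a flat text file, with none of the structure behind NT's account database.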
Special permissions, such as take ownership or change permissions, do not exist in UNIX. UNIX systems do not use ACLs, defining access through three designations: owner, group, and world/other. (The world/other designation is similar to NT's Everyone group.) When you use the ls -l command on a UNIX system (the syntax varies slightly depending on the specific UNIX system) to list files, a list of access permissions for several files appears. This list is similar to Table 1. Each line of the listing consists of several columns: the file's access permissions, a link count, the file's owner, the file's group, and then the file's size, modification date and time, and name. Let's look more closely at the first column--the file's access permissions. In the sample file listing

-rw-r----- 1 jimmo admin 12709 15 Sep 98 12:04 security.txt

the access permissions consist of 10 characters. The first character is usually either a d, which identifies a directory, or a hyphen (-), identifying a file. The following nine characters break out into three sets of three characters each. An r signifies read permission, a w signifies write permission, and an x signifies execute permission. A hyphen (-) stands for the absence of a specific permission. The first set of three characters represents the file owner's permissions, the second set represents the file's group permissions, and the third set represents the file's world/other permissions. UNIX associates files with an owner and a group. If you are not the file's owner and not in the file's group, UNIX determines access by the world/other permissions. UNIX has no permission to explicitly deny a user or group access to a file. Therefore, if a UNIX user is a member of a group with access permission to a file or directory, the user has access to that file or directory.
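The ten-character permission string decodes mechanically into the three permission sets; a small sketch using the sample listing above:

```python
# Decode a symbolic mode string such as "-rw-r-----" into per-class
# permission sets. The sample string is the one from the listing above.

def decode_mode(mode):
    """Return read/write/execute sets for owner, group, and world/other."""
    assert len(mode) == 10, "expected a 10-character mode string"
    classes = {}
    for name, chunk in zip(("owner", "group", "other"),
                           (mode[1:4], mode[4:7], mode[7:10])):
        perms = set()
        for ch, label in zip(chunk, ("read", "write", "execute")):
            if ch != "-":
                perms.add(label)
        classes[name] = perms
    return classes

perms = decode_mode("-rw-r-----")
print(sorted(perms["owner"]))  # ['read', 'write']
print(sorted(perms["other"]))  # []
```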
In the sample file access permissions printed above, the file's owner has read and write permission but not execute permission to the file. The file's group has only read permission to the file, and the world/other designation gives no access to the file. As we mentioned, UNIX systems use NFS to share files. The NFS server keeps a list of the directories it exports, and UNIX administrators can define access to those directories for specific users and also for specific machines. So, although a user might have permission to access a particular file, that user can't open the file on a machine that doesn't have access permission. A key difference between NT and UNIX is that NFS access in UNIX is usually defined on a per-machine--not per-user--basis. As in NT, you run into problems in UNIX when you want to share resources between machines. Because UNIX users can exist on the NFS server but not on the client machine, UNIX administrators face the question of how to determine appropriate machine access. By default, UNIX treats the UID on a client the same way it treats the UID on the server. This default behavior requires a user to have matching UIDs on client and server. Although maintaining matching client and server UIDs is a straightforward task in UNIX, the possibility still exists that a user's client and server UIDs won't match. UNIX systems that employ NIS have a common user database where all UIDs are consistent. However, such a one-to-one mapping between client and server is not always desirable: for example, in the case of root users. A number of UNIX versions, such as DEC UNIX, enable the mapping of root users by default; that is, the root on one machine is the root on another. (In the NT world, default mapping of root users would be as if a Local Administrator had rights on every NT machine in the domain.) In addition to employing NFS mapping, UNIX administrators can map users on different machines through user equivalence.
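Why matching UIDs matter can be seen in a toy model of the server-side lookup; everything here (accounts, UIDs) is hypothetical:

```python
# Toy model of NFS-style identity: the server trusts the numeric UID
# presented by the client and resolves it against its own user list.
# All accounts and UIDs here are hypothetical.

client_passwd = {1001: "alice", 1002: "bob"}   # client's idea of who is who
server_passwd = {1001: "carol", 1002: "bob"}   # server's idea of who is who

def effective_user(uid):
    # Unknown UIDs fall back to an unprivileged placeholder account.
    return server_passwd.get(uid, "nobody")

print(effective_user(1002))  # bob -- UIDs match, mapping is correct
print(effective_user(1001))  # carol -- alice's files appear to belong to carol
```

A common user database such as NIS removes this hazard by guaranteeing that the two dictionaries above are always identical.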
In user equivalence, you define two machines as equivalent (i.e., users with a particular ID can access both machines), or you can define equivalence on a per-user basis. When you define equivalence on a per-user basis, you can define equivalence between multiple users. For example, you can define multiple users on a remote machine as equivalent to a single user on the local machine. User equivalence functions similarly to a trust relationship in NT, in that users don't need to authenticate themselves on all the machines they connect to.

When the Keys Don't Fit the Locks

Although security problems can occur when like systems connect (e.g., NT to NT or UNIX to UNIX), the problems increase dramatically when different systems connect. The keys no longer fit the locks; that is, no standard mechanism is available for sharing user information between the two systems. The two obvious problems that result are difficult to overcome. The first problem is the difference in the way each system defines access to particular resources: No simple one-to-one relationship exists between these definitions. The second problem is that each system stores and manages user information differently. Thus, sharing user information between systems is extremely difficult. NT 5.0 addresses the second problem to some extent with the Kerberos authentication protocol, which provides authentication services from one database. Kerberos clients never send passwords across the network in cleartext, which increases security. Administration is easier because Kerberos provides a central user database for NT and UNIX. Many applications are already Kerberos-aware. If you provide UNIX resources to NT clients with NFS, your NT machines need an NFS client and must convert from an NT username to a UNIX username. Some connectivity products provide a separate logon program for connecting NT clients to the NFS server.
Often, these tools save the separate password on the local machine, providing an invisible connection from the NT computer to the NFS server. Doing so provides a single logon and one-to-one mapping between an NT user and a UNIX user. In addition, some connectivity products support NIS. With NIS, when users log on to an NT computer, the NIS server can automatically authenticate them. Many connectivity products have mapping tables, so multiple users who log on to one NT machine map to the correct UNIX user. With these products, if the UNIX server is running the pcnfsd daemon, NT users might need only to type in their NT username and password. The UNIX server can then log them in, supplying a UID and GID. If the UNIX server is not running the daemon, NT users must manually input a UID and GID. Although the problem of sharing user information between systems is the same when a UNIX machine provides connectivity via CIFS, the situation can be different. With NFS, a UNIX server almost exclusively provides connectivity service. However, when a UNIX server uses CIFS to provide connectivity, an NT server in the network might provide authentication. Different connectivity products take different approaches to this situation. For example, SCO OpenServer's Advanced File and Print Server (AFPS) can take over the functions of an NT primary domain controller (PDC) or backup domain controller (BDC). AFPS maintains a complete user database, as you would find on an NT server, and doesn't map users, because it considers NT and UNIX users identical. When users log on to the domain from an NT workstation, the AFPS server authenticates them, even if the AFPS server functions as a BDC. AFPS has two added advantages. First, AFPS provides server tools that are identical to the tools on an NT machine (e.g., User Manager, Server Manager) and that you can install on clients. AFPS User Manager can create users, just as NT's User Manager can do.
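A mapping table of the kind these connectivity products maintain is conceptually a simple lookup, including many-to-one entries; this is a schematic sketch, not any product's actual format or code:

```python
# Schematic username mapping table in the spirit of the connectivity
# products described above: several NT users can resolve to one UNIX
# user. This is an illustration, not any product's real syntax or code.

username_map = {
    "Administrator": "root",   # the NT administrator acts as UNIX root
    "payroll1": "payroll",     # a group of NT users share
    "payroll2": "payroll",     # a single UNIX account
}

def map_user(nt_user):
    # Unmapped NT users fall through under their own name.
    return username_map.get(nt_user, nt_user)

print(map_user("Administrator"))  # root
print(map_user("payroll2"))       # payroll
print(map_user("jimmo"))          # jimmo (no mapping entry)
```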
Second, when you create users with the OpenServer interface, they automatically become AFPS users. AFPS gives users access to UNIX resources via NFS and to NT resources via CIFS. Samba can take over many NT PDC functions. For example, Samba can provide authentication services and logon scripts. When you use Samba, you must create users on the local UNIX machine, and Samba does not support the ACL on NT files. However, with Samba, you can designate another server (e.g., NT or AFPS) to provide authentication. Samba also accomplishes one-to-one mappings between NT and UNIX users and can map to multiple users and groups of users. For example, from one machine you can map the NT administrator to the UNIX root user. Doing so gives you root access to all the files on the server. You can also create a group of NT users that need access to one directory and map them to one UNIX user.

Open the Door

Supporting your users sometimes means giving them access to resources on a different OS, whether NT to UNIX or UNIX to NT. If you've been taking the sledgehammer approach and making copies of all your data for both OSs, you need no longer do so. Now that you know which keys fit your NT and UNIX locks, you can open the door to easier and more effective NT-UNIX administration.
NT-UNIX Connectivity Products

- (NFS and NFS gateway connectivity) Intergraph Corporation, 800-345-4856
- Chameleon UNIX Link, NetManage, 408-973-7171
- FacetCorp, 800-235-9901
- FTP Software, 978-685-4000
- Network Computing Devices, 800-800-9599
- (NFS and NFS gateway connectivity) Hummingbird Communications, 416-496-2200
- Omni-NFS Gateway for Windows NT (NFS gateway connectivity), XLink Technology, 408-263-8201
- Samba (CIFS connectivity)
- SCO Advanced File and Print Server (CIFS connectivity) and SCO VisionFS (CIFS connectivity), The Santa Cruz Operation, 831-425-7222
- Solstice NFS (NFS connectivity), Sun Microsystems, 800-786-3463
- Reflection NFS Gateway (NFS gateway connectivity), WRQ, 800-872-2829
- WinNFS (NFS connectivity), Network Instruments, 800-526-7919
<urn:uuid:4a6e5629-5a1f-44c8-8489-23e2625a63dc>
CC-MAIN-2016-26
http://windowsitpro.com/print/windows/sharing-and-securing-information-mixed-nt-unix-environments
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783402699.36/warc/CC-MAIN-20160624155002-00173-ip-10-164-35-72.ec2.internal.warc.gz
en
0.901378
5,099
2.6875
3
The Economist has an article this week celebrating the 25th anniversary of IBM's introduction of the personal computer onto the US market: In many ways, the PC triumphed due to the very un-IBM way in which it was developed. When IBM's previous attempts at a PC failed to sell, being too expensive, a “skunk works” team of engineers was convened in Boca Raton, Florida. The team did not report through IBM's stifling bureaucracy, but directly to the top of the company. It was given a year to devise a low-cost machine. “The people doing that work weren't talking about it, there weren't any business cases done, there wasn't any annual budget review,” explains Lewis Branscomb, IBM's chief scientist from 1972 to 1986. “IBM did a lot of radical things—and that proved to be very successful.” To meet its ambitious goals, the team bucked two IBM traditions. First, instead of using only IBM parts, the team chose off-the-shelf components. Second, rather than keep the design a secret, the team made the specifications open, so that independent software developers could flourish. When the PC finally launched, IBM expected to sell 250,000 units in five years. In the event, it had sold nearly 1m by 1985. Yet the very factors that led to the PC's success inadvertently prevented IBM from reaping all the benefits itself. The PC used a microprocessor made by Intel and an operating system made by Microsoft (led by a 25-year-old called Bill Gates). Neither was exclusive to IBM, and within a year other companies had worked out how to make much cheaper “clones” of its PC. Microsoft and Intel, not IBM, turned out to be holding personal computing's crown jewels. "This IBM project was a super-exciting, fun project," Mr Gates told PC magazine in 1982. Asked what the future would bring, Mr Gates was as blunt as he was prescient: "Hardware, in effect, will become a lot less interesting. The total job will be in the software." He was right. 
Today, society both benefits and suffers from the PC's flexibility and openness. The magic of the PC is that it is a general-purpose machine to which new functions can be added simply by installing a new piece of software. "The PC is a very fertile device," says Dan Bricklin, the inventor of VisiCalc, the first spreadsheet program. But this versatility comes at a price, since it makes the PC more complex, less secure and less reliable than a dedicated, single-purpose device. There is a local legend here in Boca that Bill Gates used to have an occasional beer in the local Hooters [or the waterhole that preceded the Hooters spot which in turn has now morphed into a Starbucks!] and tell his fellow tipplers that he was going to be a gazillionaire from this new PC phenomenon. Probably apocryphal, as Gates seems too single-minded to sit around bars and yak about his personal business plan. Yes, the very versatility and fecundity of the PC platform contained the seeds of its eventual superannuation. As a result of these shortcomings, many technologies incubated on the PC are moving off it. Functions such as e-mail and voice-over-internet calling that were first rendered in software, just as Mr Gates predicted, are now mature enough to be rendered in hardware. As a result, the PC is no longer centre of the technological universe; today it is more likely to be just one of many devices orbiting the user. You can now do e-mail on a BlackBerry, plug your digital camera directly into your printer, and download music directly to your phone—all things that used to require a PC. At the same time, the PC is under threat as the primary platform for which software is written, as software starts instead to be delivered over the internet. You can call up Google or eBay on any device with a web browser—not just a PC. People have been saying it for years, but this could finally allow much cheaper web terminals, or “network computers”, to displace PCs, at least in some situations. 
These shifts are affecting the big firms that grew up around the PC. Microsoft has moved into games consoles and set-top boxes, chiefly in case these other devices emerge as challengers to the PC as “hubs” for digital content. This week it confirmed that it will launch a digital music-player, called Zune, in response to Apple's successful march into non-PC markets with the iPod. As for PC-makers themselves, the falling prices and commoditisation that have so benefited consumers have turned them into low-margin box-shifters. IBM got out of the business in 2004, selling its PC division to Lenovo, a Chinese firm. Apple under Steve Jobs has proved more supple in the next generation of spin-offs, even though its original dedicated platform was too proprietary to grab market share as Microsoft did in the early years of the PC. [Also, Bill Gates had a leg up on the market by having participated in the original skunk-works project here in Boca]. And the Pentagon's DARPA spawned the internet, which became the world-wide web which in turn generated all sorts of business opportunities and, of course, blogging! The Economist notes this, but says the PC will remain the spawning ground of new tech. This does not mean the PC is dead. PC sales, at 200m a year, are at an all-time high. The PC's versatility means it will still be the platform on which new technologies tend to appear first. But with the rise of a plethora of other devices and the emergence of the web as a software platform, the PC now faces a struggle against its own technological offspring. Sort of the Sorcerer's Apprentice, but we all have gained from IBM's original experiment with diversity.
Severe Weather Awareness Week begins today with a look at the role of Skywarn storm spotters. Severe weather can strike during any month of the year at any time of the day or night. When severe thunderstorms threaten, the National Weather Service calls Skywarn volunteers into action. Skywarn volunteers are the eyes and ears of the National Weather Service, providing instant reports of severe weather, including hail, high winds, and dangerous cloud formations. Skywarn spotters keep a close eye on the sky anytime severe thunderstorms approach. Many communities deploy spotters around the edge of the city and use their early reports of impending hazardous weather to warn the community. Some spotters relay reports from their home or business while other more adventurous volunteers brave the elements and try to get as close to the storm as possible. Who are these Skywarn volunteers? A large number of Skywarn storm spotters in the Mid South are Amateur Radio Operators, commonly known as HAMs. These public service minded individuals make ideal storm spotters since they have the ability to communicate their reports. They are willing to be trained and they have a real interest in helping the National Weather Service and their local communities prepare for severe weather. Amateur Radio operators are on call 24 hours a day, 365 days a year, even though they receive no compensation of any kind for their hard work. Many other groups participate in the Skywarn program, including law enforcement agencies, fire departments, utility companies, rescue squads, and the news media. In some areas, individual citizens are trained as spotters, and are asked to relay their reports to the National Weather Service. Spotters are a vital link in the warning process, and it is important to have as many trained spotters in each county as possible. The National Weather Service can help your community set up a Skywarn spotter network.
Mobility assistive devices (MAD) such as canes can improve mobility and allow independence in the performance of mobility-related tasks. The use of MAD is often prescribed for stroke survivors. Despite their acknowledged qualities, MAD in real life conditions are typically underutilized, misused and abandoned. Ecologically sound, evidence based outcome measures need to be developed so as to capture the inherent complexities behind real life use of MAD and identify markers and mitigators of a successful integration of MAD into the daily activities of stroke survivors. In this study, we used accelerometers, gyroscopes, and a load cell to identify the task a patient was performing and examine the use of the cane in the context of the task.
SynLube™ Lube−4−Life® 1−800−SYN−LUBE

The Mars Pathfinder mission was widely proclaimed as "flawless" in the early days after its July 4th, 1997 landing on the Martian surface. Successes included its unconventional "landing" -- bouncing onto the Martian surface surrounded by airbags, deploying the Sojourner Mars Rover, and gathering and transmitting voluminous data back to Earth, including the panoramic pictures that were such a hit on the Web.

JPL Mars Rover "Sojourner" on Mars. Each of the six wheels contains one miniature electric motor. All of the motor bearings are lubricated with 1/2 drop of SynLube™. A total of 12 drops of SynLube™ are on Mars!

Mars Rover "Sojourner" at JPL.

Mars Rover "Curiosity" at JPL. This photograph of the NASA Mars Science Laboratory rover, Curiosity, was taken during mobility testing on June 3, 2011. The location is inside the Spacecraft Assembly Facility at NASA's Jet Propulsion Laboratory, Pasadena, California.

Curiosity is about twice as long and more than five times as heavy as any previous Mars rover. Its 10 science instruments include two for ingesting and analyzing samples of powdered rock delivered by the rover's robotic arm. During a prime mission lasting one Martian year (nearly two Earth years), researchers will use the tools on the rover to study whether the landing region has had environmental conditions favorable for supporting microbial life and favorable for preserving clues about whether life existed.

Curiosity was shipped to NASA's Kennedy Space Center in Florida on June 23, 2011. Launch: Nov. 26, 2011. Mars landing: Aug. 6, 2012 (EDT). The rover landed at the foot of a layered mountain inside the planet's Gale Crater. JPL, a division of the California Institute of Technology in Pasadena, manages the Mars Science Laboratory mission for the NASA Science Mission Directorate, Washington. Image Credit: NASA/JPL-Caltech
The lubricant is also used in 7 other locations on the Rover. A total of 260 cc of SynLube™ is used in this Mars Rover! Send E-mail to firstname.lastname@example.org with questions or comments about this web site. Copyright © 1996-2013 SynLube Incorporated. Last modified: 2013-04-26. Lube−4−Life® is a Registered Trademark of SynLube Incorporated.
The Ajanta caves are located 99 km from Aurangabad district in the state of Maharashtra. The Ajanta caves were carved out from the 2nd century BC to the 6th century AD, and are ranked high as a World Heritage Site. They were hidden in the midst of a lonely glen with a streamlet flowing down below. They were scooped out into the heart of the rock so that the pious Buddhist monks could dwell and pray. During this time, images of Buddha depicting different stories from his life, along with several types of human and animal figures, were carved out of rock in situ. All sections of people of the contemporary society, from kings to slaves, women, men and children, are seen in the Ajanta murals, interwoven with flowers, plants, fruits, birds and beasts. There are also the figures of 'Yakshas', 'Kinneras' (half human and half bird), 'Gandharvas' (divine musicians), and 'Apsaras' (heavenly dancers), which were of concern to the people of that time. The Ajanta caves are dedicated solely to Buddhism. The caves, including unfinished ones, are thirty in number, of which five (9, 10, 19, 26 and 29) are "Chaitya-Grihas" and the rest are "Sangharamas" or Viharas (monasteries). Caves 1, 2, 16 and 17 can be ranked high among the greatest artistic works of the contemporary world. The 30 Chaityas and Viharas have paintings which illustrate the life and incarnations of Buddha. The artist has lent his creativity to each work with an overwhelming sense of vitality. These paintings have survived time, and till date the numerous paintings glowing on the walls make the atmosphere very vibrant and alive. In Cave 1, Prince Buddha is depicted delicately holding the fragile blue lotus, his head bent sideways as if the weight of his ornate jewelled crown is too heavy for his head. His half-closed eyes give an air of meditation, almost of shyness.
Cave number 2, which is one of the better-preserved monasteries with a shrine, shows how sculpture, paintings and architectural elements were used together to enhance the atmosphere of piety and sanctity. The ceiling and wall paintings illustrate events associated with Buddha's birth. A sculptured frieze of the miracle of "Sravasti", when Buddha multiplied himself a thousand times, can be seen in cave 7. In cave 17 one can find the paintings that depict stories from the Jatakas, or tales of the previous incarnations of Buddha, and also Buddha with his right hand raised, with the palm facing the viewer, which is a symbol of "Abhaya" -- reassurance and protection. The best surviving example of a rock-cut Chaitya Griha can be seen in cave 19 at Ajanta. The elegant porch is topped by the distinctive 'horseshoe' shaped window, flanked by 'Yakshas' or guardians, standing Buddha figures and elaborate decorative motifs. The interior of the cave is profusely carved with pillars, a monolithic carved symbolic Stupa and images of Buddha, which heralded the introduction of the Mahayana phase. In cave 26, Buddha is seen seated under a Bodhi tree at Bodhgaya, meditating, when Mara and his voluptuous daughters attempted to tempt him. Buddha touched the earth with his left hand to witness his enlightenment. The "Parinirvana" (ultimate enlightenment or liberation) came when Buddha left the world, as depicted in the 7m (23ft) image of the reclining Buddha in cave number 26.
In terms of its breathtaking scale, India is a democracy unmatched. From April 7, the nation of 1.2 billion people will begin trudging to polling booths across the vast expanses of the world's largest democracy. Whichever way Indian voters lean, and however that may impact the region, simply holding a national election in India is a huge logistical accomplishment. This year about 100 million Indians – roughly three times the population of Canada – will head to the polls for the first time. That pushes the number of registered voters in India to about 814 million people, more than the entire population of Europe. Every time India holds an election it is the largest democratic exercise in history. But it doesn't happen on its own. Behind the final tally lies an enormous and sophisticated effort involving millions of election officials, police officers, soldiers, bureaucrats and ground-level volunteers – as well as an unyielding faith in the idea of a democratic India.

WHAT'S AT STAKE?

There are 543 seats in India's lower house, the Lok Sabha. Precisely 272 seats are required for a straight, numerical majority, but no single party has won that way since 1989. In the 2009 election, the two main parties (Congress and BJP) accounted for just 47 per cent of the vote, with the remainder going to local and regional parties. That means India is now ruled by coalitions, with cabinet appointments doled out to key supporters. For the past decade, India has been ruled by the Congress-led United Progressive Alliance coalition, with the BJP-led National Democratic Alliance in opposition. The 2014 election is being shaped by two key issues: the economy and corruption. Under the present coalition, GDP growth has gone from 10.5 per cent in 2010 to just 3.2 per cent in 2012, and is climbing only slowly, leaving millions of Indians trapped in dire poverty as economic policy-making became paralyzed by corruption and inaction.
At the same time, the present government has seen enormous scandals in the telecommunications and coal industries, which has led to a broadly supported anti-corruption movement, as well as a new political party.

WHEN DO THEY VOTE?

Indian elections are not one-day affairs. Voting days are rolled out state-by-state, beginning on April 7 and concluding five weeks later on May 12. The staggered schedule exists almost entirely because of the monumental logistics such a process requires, as well as the need to make sure security services aren't spread too thinly across the country. Because dozens of prominent politicians and hundreds of candidates are criss-crossing the vast landscape at any given point in time during the election and holding huge rallies, staggered votes allow the central government and each state to deploy proper security precautions. Despite India's generally peaceful, pluralistic society, there are occasional, horrific sparks of communal violence between Hindu and Muslim communities. India's vibrant democracy is also occasionally scarred by terrorist attacks – some organized from neighbouring Pakistan – so political appearances by prominent figures have tight security. Narendra Modi, for example, is surrounded by special commandos in black uniform, as well as a crowd of local police officers, and he rides in the middle of a heavily armed procession led by a jeep outfitted with signal-jamming equipment. But even the smallest neighbourhood rally by the Aam Aadmi (Common Man) Party is likely to attract a half-dozen police. In total, around 11 million personnel, including the police and the military, will be deployed throughout the election process. Because all the states vote separately, campaigning parallels the staggered voting – and the whole process turns into a democratic marathon, with major politicians flying by plane and helicopter to rallies in support of local candidates in key constituencies over the course of five gruelling weeks.
Once all the states have voted, the results are counted and announced on May 16.

HOW DO THEY VOTE?

The logistical difficulties of organizing and executing a successful Indian election are almost unimaginable. The fact that it happens at all is a remarkable accomplishment. India's hundreds of millions of voters do not slip a ballot into a physical box as voters do in democracies such as Canada and the United States. For the third time, India's election is entirely electronic – enabled by a vast array of bulky, electronic voting machines that are dispatched under armed escort to the most remote reaches of the subcontinent, sometimes days in advance. The machines are stored in secure rooms with double lock systems and are guarded by police and monitored by closed-circuit cameras 24 hours a day. Voters simply push the blue button next to their candidate.
Plastic is convenient, lightweight, unbreakable and inexpensive. But controversy rages over its potential health risks. BPA (bisphenol A) has become a target for criticism. It is used in everything from water bottles and football helmets to baby bottles and eyeglasses. The FDA recently revised its formerly nonchalant attitude to the chemical, a potential hormone mimic. The agency now admits there may be some concern over BPA’s effects on brain development in fetuses, babies and young children. Since BPA acts like estrogen, it might also influence breast and prostate development. The agency has called for additional research to be conducted by the National Toxicology Program. In the meantime, the FDA suggests that consumers take steps to protect themselves and their children by not heating foods or liquids in hard plastic containers in the microwave, and by not putting hot liquids into sippy cups or bottles that contain BPA. The chemical is also found in the lining of metal cans. An article in Consumer Reports (December 2009) revealed that a surprising amount of BPA had leached into some canned goods. New data from the Environmental Working Group show just how thoroughly BPA has made its way into our tissues. Scientists for the nonprofit advocacy group found BPA in nine of ten samples of umbilical cord blood they tested, suggesting that exposure begins in the womb. If consumers carefully avoid food from cans and hard, clear containers, they might minimize the amount of BPA they take in. But what about those soft, bendable containers at the take-out counter? Are they a safer alternative? Unfortunately, they might not be. Many soft plastics contain different types of plasticizers, called phthalates, to keep products flexible. And there are growing concerns about phthalates as well. Like BPA, phthalate compounds may sometimes act like hormones. Some researchers consider them endocrine disruptors, although the American Chemistry Council disagrees. 
Parents have been warned not to allow babies to chew on phthalate-containing soft plastic toys and to choose phthalate-free baby powder and lotions.

Another hidden source of phthalates can be pill coatings. Both over-the-counter and prescription drugs may be covered with phthalate-containing plastic. Every time you swallow such a pill, your exposure increases dramatically. Researchers have found that phthalate levels can rise as much as 100-fold after a few months of taking such a medication (Environmental Health Perspectives, Feb. 2009).

Both BPA and phthalates can migrate from plastic containers into the food inside. No one knows for sure whether this poses a significant risk for adults, but it seems prudent to minimize the exposure of infants and pregnant women. Here are some guidelines that will help:

* Never use plastic containers (hard or soft) to heat food in the microwave.
* Look for canned food or beverages that do not have BPA in the lining.
* Do not use BPA-containing baby bottles or pacifiers that contain phthalates.
* Avoid pills that have plastic coatings containing phthalates. Ask your pharmacist to check with the manufacturer.
<urn:uuid:68480fb8-c043-474c-a407-3ab97d1d3404>
CC-MAIN-2016-26
http://www.peoplespharmacy.com/2010/02/08/are-your-pills-poisoning-you-with-plastic/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393146.70/warc/CC-MAIN-20160624154953-00132-ip-10-164-35-72.ec2.internal.warc.gz
en
0.942588
646
3.328125
3
Cellulitis is an acute infection of the skin and the tissues beneath the skin. Most often, cellulitis develops on the face, arms, or legs.

Signs & Symptoms

A bacterial infection causes cellulitis. Bacteria usually enter the skin layers through a break in the skin, such as a cut or other wound.

Antibiotics are given by mouth or through an IV, depending on how serious the infection is. Medicine may be needed to relieve pain.

Questions to Ask

Are any of these signs present with an existing skin wound?
Are any of these signs of an infection present?

Self-Care / Prevention

To Prevent Cellulitis

To Treat Cellulitis
<urn:uuid:96e1f38b-fe3d-4d3e-9898-b8f193a4c773>
CC-MAIN-2016-26
http://www.healthy.net/Health/Article/Cellulitis/8155
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396949.33/warc/CC-MAIN-20160624154956-00083-ip-10-164-35-72.ec2.internal.warc.gz
en
0.868265
143
3.25
3
William Faulkner is widely considered to be one of the great American authors of the twentieth century. Although his greatest works are identified with a particular region and time (Mississippi in the late nineteenth and early twentieth centuries), the themes he explores are universal. He was also an extremely accomplished writer in a technical sense. Novels such as The Sound and the Fury and Absalom, Absalom! feature bold experimentation with shifts in time and narrative. Several of his short stories are favorites of anthologists, including ‘‘A Rose for Emily.’’ This strange story of love, obsession, and death is a favorite among both readers and critics.

The narrator, speaking for the town of Jefferson in Faulkner's fictional Yoknapatawpha County, Mississippi, tells a series of stories about the town's reclusive spinster, Miss Emily Grierson. The stories build up to a gruesome revelation after Miss Emily's funeral. She apparently poisoned her lover Homer Barron, and kept his corpse in an attic bedroom for over forty years.

It is a common critical cliché to say that a story ‘‘exists on many levels,’’ but in the case of ‘‘A Rose for Emily,’’ this is the truth. Critic Frank A. Littler, in an essay published in Notes on Mississippi Writers regarding the chronology of the story, writes that ‘‘A Rose for Emily’’ has been read variously as ‘‘… a Gothic horror tale, a study in abnormal psychology, an allegory of the relations between North and South, a meditation on the nature of time, and a tragedy with Emily as a sort of tragic heroine.’’ These various interpretations serve as a good starting point for discussion of the story.

The Gothic horror tale is a literary form dating back to 1764 with the first novel identified with the genre, Horace Walpole's The Castle of Otranto. Gothicism features an atmosphere of terror and dread: gloomy castles or mansions, sinister characters, and unexplained phenomena.
Gothic novels and stories also often include unnatural combinations of sex and death. In a lecture to students documented by Frederick L. Gwynn and Joseph L. Blotner in Faulkner in the University: Class Conferences at the University of Virginia 1957-1958, Faulkner himself claimed that ‘‘A Rose for Emily’’ is a ‘‘ghost story.’’

In fact, Faulkner is considered by many to be the progenitor of a sub-genre, the Southern gothic. The Southern gothic style combines the elements of classic Gothicism with particular Southern archetypes (the reclusive spinster, for example) and puts them in a Southern milieu. Faulkner's novels and stories about the South include dark, taboo subjects such as murder, suicide, and incest.

James M. Mellard, in The Faulkner Journal, argues that ‘‘A Rose for Emily’’ is a ‘‘retrospective Gothic;’’ that is, the reader is unaware that the story is Gothic until the end when Homer Barron's corpse is discovered. He points out that the narrator's tone is almost whimsical. He also notes that because the narrator's flashbacks are not presented in an ordinary sequential order, readers who are truly unfamiliar with the story don't put all the pieces together until the end.

However, a truly careful first reading should begin to reveal the Gothic elements early in the story. Emily is quickly established as a strange character when the aldermen enter her decrepit parlor in a futile attempt to collect her taxes. She is described as looking ‘‘… bloated, like a body long submerged in motionless water, and of that pallid hue.’’ She insists that the aldermen discuss the tax situation with a man who has been dead for a decade. If she is not yet a sinister character, she is certainly weird.
In section two of the story, the unexplained smell coming from her house, the odd relationship she has with her father, and the suggestion that madness may run in her family by the reference to her ‘‘crazy’’ great-aunt, old lady Wyatt, are elements that, at the very least, hint at the Gothic nature of the story. Emily's purchase of arsenic should leave no doubt at that point that the story is leading to a Gothic conclusion.

It is Emily's awful deed that continues to captivate readers. Why would she do something so ghastly? How could she kill a man and bed his corpse? This line of questioning leads to a psychological examination of Emily's character....

In a recent article, Hal Blythe discusses the central role played by the narrator in William Faulkner's gothic masterpiece ‘‘A Rose for Emily.’’ Focusing on Miss Emily's bizarre affair and how it affronts the chivalric notions of the Old South, the narrator, according to Blythe, attempts to assuage the grief produced by Miss Emily's rejection of him by relating her story; telling her tale allows him to exact a measure of revenge. Faulkner's speaker, without doubt, serves as a pivotal player in this tale of grotesque love.

Although Blythe grasps the significance of the narrator's place in the story, he bases his argument on a point that the story itself never makes completely clear. Blythe assumes that Faulkner's narrator is male. The possibility exists, however, that Faulkner intended his readers to view the tale-teller as being female. Hints in the text suggest that Faulkner's speaker might be a woman. The narrative voice (the ‘‘we’’ in the story), a spokesperson for the town, appears very concerned with every detail of Emily's life.
Faulkner provides us with an important clue concerning the gender of this narrator when he describes the townspeople's reaction to Emily's attachment to Homer Barron: ‘‘The men did not want to interfere, but at last the ladies forced the Baptist minister … to call upon her.’’ Jefferson's male population seems apathetic regarding Emily's tryst; the men are not the least bit scandalized. The females in town (the ‘‘we’’ in the tale) are so concerned with Emily's eccentricities that they force their men to act; one very interested female in particular, the narrator, sees to it that Emily's story is not forgotten.

This coterie of Jefferson's ‘‘finer’’ ladies (represented by the narrator) seems highly offended by Emily's actions. This resentment might stem from two primary causes. First, the ladies (the phrase ‘‘the ladies’’ appears throughout the tale and might refer to the ‘‘proper’’ Southern belles living in town) find Miss Emily's pre-marital relationship immoral. Second, they resent Emily's seeing a Yankee man. In the eyes of these flowers of Southern femininity, Emily Grierson becomes a stain on the white gown of Southern womanhood.

Despite their bitterness toward Emily, the ladies of Jefferson feel some degree of sympathy for her. After her father's death, the ladies reminisce: ‘‘We remembered all the young men her father had driven away.…’’ Later, Homer Barron disappears, prompting this response: ‘‘Then we knew that this was to be expected...

Nearly everyone familiar with the writings of William Faulkner is aware of the fracturings of time so common in his work. Many of his major characters spend much of their fictional lives trying to piece together their experiences and lives, to put them in some kind of chronological or existential order. Few of them succeed; and when they do, as is perhaps the case with Quentin Compson (The Sound and the Fury and Absalom, Absalom!)
they most often find that to make sense of their lives is to create the necessity for self-destruction. But, most often, Faulkner's characters are like Charles Bon of Absalom, Absalom! who, when he leaves for college, is only on the periphery of an area of knowledge about himself and his world. Bon is described as ‘‘almost touching the answer lurking, just beyond his reach, inextricable, jumbled, and unrecognizable yet on the point of falling into a pattern which would reveal to him at once, like a flash of light, the meaning of his whole life.’’

But if Faulkner's characters are often at a loss with respect to the movements of their existences through time, his critics cannot be. Indeed, such detailings of temporal chronology, together with structural elaborations, provide some of the most lucid and meaningful understandings of Faulkner's fiction. Almost all of Faulkner's stories and novels can be better appreciated and more accurately understood and interpreted through a detailing of the interrelationships of time and structure. In Faulkner's world theme exists as the hyphen in the compound temporal-structure. Not the least of such cases is ‘‘A Rose for Emily.’’

‘‘A Rose for Emily’’ is divided into five sections, the first and last section having to do with the present, the now of the narration, with the three middle sections detailing the past. The story begins and ends with the death of Miss Emily Grierson; the three middle sections move through Miss Emily's life from a time soon after her father's death and shortly after her beau Homer Barron, ‘‘had deserted her,’’ to the time of her death.

Late in the fourth section of the story, Faulkner writes of Miss Emily, ‘‘Thus she passed from generation to generation—dear, inescapable, impervious, tranquil, and perverse.’’ On first reading, this series of adjectives appears to be only another catalogue so familiar in Faulkner.
Often it seems that Faulkner simply lists such a series of adjectives as if to say, ‘‘Take your choice of these, I don't care.’’ Not so in this instance. Rather, it would seem that Faulkner uses these five adjectives to describe Miss Emily with some care and for a specific purpose. It could be argued that they are intended to refer to the successive sections of the story, each becoming as it were a sort of metaphorical characterization of the differing states through which the townspeople of Jefferson (and the readers) pass in their evaluation of Miss Emily.

Correlating the two present sections with the adjectives that fall to them, we see Miss Emily as the paradox she has become in death, ‘‘dear’’ and ‘‘perverse,’’ while before her death she was ‘‘inescapable, impervious, tranquil.’’ Thus, during her life, the enigma of Miss Emily's personality, which kept her seemingly immortal, impenetrable, and almost inevitably inescapable, has been clarified and crystalized by her death. A woman who, alive, ‘‘had been a tradition, a duty, and a care,’’ and thus ‘‘dear’’ in several senses of that word, is revealed, in death, to have been what for...

The first clues to meaning in a short story usually arise from a detection of the principal contrasts which an author sets up. The most common, perhaps, are contrasts of character, but when characters are contrasted there is usually also a resultant contrast in terms of action. Since action reflects a moral or ethical state, contrasting action points to a contrast in ideological perspectives and hence toward the theme. The principal contrast in William Faulkner's short story ‘‘A Rose for Emily’’ is between past time and present time: the past as represented in Emily herself, in Colonel Sartoris, in the old Negro servant, and in the Board of Aldermen who accepted the Colonel's attitude toward Emily and rescinded...
<urn:uuid:3e1e5f47-4085-495f-90ff-f953237f7dcd>
CC-MAIN-2016-26
http://www.enotes.com/topics/rose-emily/critical-essays/essays-criticism
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399522.99/warc/CC-MAIN-20160624154959-00069-ip-10-164-35-72.ec2.internal.warc.gz
en
0.963659
2,616
2.96875
3
Raised panel doors have long been a hallmark of fine cabinetry. Unfortunately, many hobbyists and weekend woodworkers think that making cabinet doors requires years of woodworking experience. In fact, that's not the case at all. Below, we'll take a look at some of the tools and techniques that make building perfect frame-and-panel doors a process that anyone with a few basic woodworking skills can enjoy. You'll see just how easy the process can be as we walk you through making a classic arched-top raised panel door. When you're ready to get started making your own frame and panel doors, you'll find all of the best-quality door making tools, equipment and supplies at Rockler.

How Raised Panel Doors "Work"

Raised panel doors are an example of frame and panel construction, a method developed hundreds of years ago to combat the effects of moisture on solid wood used in cabinetry and furniture making. In a frame and panel construction, a large panel is fitted into a groove in the interior edge of a more dimensionally stable frame made of narrow strips of wood. The panel is sized slightly smaller than the actual dimension that the grooved frame will accommodate, and simply rests in the groove without being physically attached to the frame. Since the panel is given a little "room to move" and isn't physically attached to the frame, it is free to expand and contract with seasonal changes in humidity without affecting the stable shape and size of the frame.

Tools for Fast, Accurate Frames

For any frame and panel construction project, the first and most important task is to mill the parts of a sturdy, flat and square frame. There are a number of ways to accomplish this, and a variety of joinery methods that can be used for the all-important joints of the frame. For frame and panel cabinet doors, where joint stresses are usually light to moderate, the most popular choice is the fast and easy-to-master "cope and stick" method.
The method works out especially well for larger projects, like making a set of kitchen cabinet doors. In cope and stick joinery, the frame is held together by a joint between the edge of the "stiles" (the vertical members of the frame) and the ends of the "rails" (the horizontal members of the frame). The "sticking" - the panel groove and the decorative profile on the interior edge of the frame - is matched by a special cut in the end of the rail called a "cope." To complete the joint, the two matching profiles are simply glued and clamped together. The strength of the joint relies on a near-perfect match between the cope and the sticking, which is achieved by using router bits designed especially for the purpose.

Stile and Rail Router Bits

Stile and rail router bits are available in a variety of designs and configurations. The "matched set" of stile and rail bits is among the most popular and easiest to use. The sets comprise two router bits that are "matched" to produce an exact fit between the sticking profile and the cope. Matched sets of stile and rail router bits are available in a variety of profiles, including ogee, bead edge, round edge and traditional. The most important consideration, however, is to look for a bit set that's manufactured to precision standards with cutters machined from quality carbide.

Stile and rail router bits are designed to be used strictly on a router table. The performance of the bits, in fact, depends to a large degree on the quality of the router table and on the availability of a few key accessories. To produce the perfectly square and level router cuts required in cope and stick joinery, the table needs, at minimum, to be flat, well supported, have a straight and reliable fence, and an accurate miter gauge.
Beyond that, a few related pieces of equipment can go a long way in making the process smooth and successful:

The Rockler Rail Coping Jig

Getting a cope cut that's square and consistent in height over the length of the cut is extremely important. Using a miter gauge to cope rails is an option, but care should be taken. A miter gauge set at an angle that's even slightly off 90 degrees will cause incorrectly cut rail ends and make square, close-fitting joints virtually impossible. And if the rails aren't kept perfectly flat on the surface of the router table, the result is a cope that's out of level or at the wrong height. The result is a joint that's twisted or isn't flush.

The Rockler Rail Coping Jig really helps out, both in producing a square cut and in keeping the rail flat during the cut. The jig is pre-set at a 90 degree angle - you'd really have to try to make a cope that isn't square. The jig clamps the rail securely in place (with the flip of a lever) so there's no chance that the stock will wander backwards out of the cut during the operation. With the rail clamped flat against the surface of the jig, getting a cut that's perfectly level and at exactly the right height is just a matter of keeping the jig flat against the surface of the table during the cut, an operation that the jig's ergonomic, "hand-plane" design makes easy and natural. The replaceable hardwood backer completes the package by virtually eliminating the problem of tear-out when the coping bit exits the cut.

Router Bit Set-Up Jigs

Setting the height of the stile and rail bits is a crucial step. A good set of stile and rail bits makes perfectly matching cuts, and there's no opportunity to "fudge" the joint in one direction or another once the cuts are made. If the height of the sticking profile bit and the coping bit aren't set correctly, the surfaces of the stiles and rails won't be flush when the joint is assembled.
Rockler Router Bit Set-Up Jigs make setting the height of stile and rail bits almost impossible to get wrong. Each set-up jig is cut at the optimum height with exactly matching sticking and cope profiles. You just adjust the height of the bit until it matches the profile of the jig and you're ready to start cutting.

Cathedral and Arched Door Templates

Cathedral and arched-top doors are the "top of the line" when it comes to raised panel doors. But many intermediate woodworkers consider them out of their league. The truth is, making doors with curved top rails and panels is no more difficult than making any other type of door - provided you have the right equipment. Rockler's Cathedral Door Templates and Arched Door Templates make cutting perfectly shaped arched rails and cathedral style rails and panels quick and easy. Each set comes with matched rail and panel templates that cover a range of common cabinet door widths.

Feather Boards

Keeping the stock flat against the surface of the router table and up tight against the router table fence while cutting the sticking profile is of primary importance. "Feather boards" apply gentle, even pressure on the stock while you are making sticking profile cuts, leaving you free to concentrate on moving the stiles and rails through the cut at the slow, even rate necessary to produce a clean edge. Feather boards are nothing new - they've been around for a very long time. But certain improvements over the years have made them easier to use and set up. The feather boards available as an accessory for Rockler Router Table Packages, for example, are designed to attach and adjust to the perfect position in a few seconds.

In many ways making the panel is the least challenging part of raised panel door construction. The panel is really just a passenger in the door frame, and doesn't really contribute to the structural strength of the door.
The main challenge in the panel-making process is to create a smooth edge profile that's exactly the right thickness to fit snugly in the panel groove. To do that, you need a quality raised panel router bit. Here, you have a few options:

Vertical Raised Panel Bits

Vertical raised panel bits are a good option for smaller router / router table set-ups. Because the panel is run vertically along the router table fence, the bit has a small cut radius compared to a horizontal bit. The small cut diameter of the bit makes it a safer tool for routers under 1-1/2 HP, and routers with no speed adjustment feature. These bits will cut a perfectly smooth profile and a panel that fits the panel groove perfectly, although they may take a little more practice to set up than some other types of panel-raising bits.

Horizontal Raised Panel Bits

Horizontal raised panel bits have the added feature of a pilot bearing to guide the router cut along the edge of the panel. This is a necessary feature for arched and cathedral style door-making. Horizontal bits require a more powerful router and slower operating speeds because of their large cut diameter.

Horizontal Raised Panel Bits with Back Cutters

This is as good as raised panel router bits get. The back cutter on these router bits rabbets the back edge of the panel, which makes for a perfect panel-to-groove fit every time. The back cutter also allows you to use stock for the panel that is the same thickness as the frame stock, while still placing the panel on the same plane as the surface of the frame.

Space Balls Stop Panel Rattle

"Panel rattle" happens when changes in humidity cause a door panel to shrink down to a loose fit in the panel groove. Space Balls are 1/4" diameter rubber balls designed to be installed in the panel groove before glue-up. The compressed rubber expands and contracts along with seasonal changes in humidity, keeping door panels centered and rattle-free year round.
Assembly Tips for Flat Doors

You have your perfectly machined stiles and rails ready to go. The panel profile is perfectly smooth and fits in the groove just right. You're home free, right? Not exactly. Assembly can be the make-or-break point of the entire process. For a door to end up flat, it has to be glued up flat - it's that simple. For the glue-up procedure, a perfectly flat surface is essential. But even if your workbench is dead-on flat, it won't matter unless your clamps follow through. Jet Parallel Clamps and Rockler Sure-Foot Clamps are both designed to stay upright and to maintain the consistent work-surface-to-clamp-surface distance that keeps your doors flat during assembly.

Putting Theory into Practice

Now that you have an idea of the tools and techniques that go into successful raised panel door construction, the next step is to see them in practice. That's what we'll do on the next page, where we'll go through the steps in making an arched-top raised panel door.

Raised Panel Door Tools and Techniques - Part II

An Arched-Top Raised Panel Door in Ten Steps

Below, we'll go through the steps involved in making an arched-top raised panel door. As you'll see, with the right tools and equipment, the process is actually fairly simple and straightforward. Woodworkers with intermediate skills and some experience using a router table will have no trouble mastering the techniques in a short time.

Step 1 - Selecting and Preparing the Stock

The first step in making a frame and panel door is to select and prepare the stock. This step is important. The squareness and flatness of the finished door will depend in large part on the quality of the stock selected. In selecting the stock for our door, we sorted through several pieces of cherry hardwood before selecting a few pieces that appeared flat, free of twists and relatively straight-grained. It's a good idea to prepare and mill more stock than you are actually planning to use.
You'll usually want to do a few test cuts along the way, and there is always the possibility of making a mistake. Weighed against the time it would take to back up and repeat a step after making a mistake, cutting a few extra pieces of stock is worth the added effort.

After selecting the stock, cut the stiles and bottom rail to width (2-1/4" is fairly common). Next, cut the blank for the arched top rail. You'll want to cut the blank 1/16" wider than its finished width to leave yourself a little room for final trimming. Use the rail template from the Arched Door Template Set as a guide when determining how wide the blank should be. At this point, you can cut the stiles to a length of 1 or 2 inches longer than their finished size - you'll trim them to their exact length later on.

Prepare the stock for the panel at this point also. You may also need to edge glue two or more pieces of stock together to come up with a piece that's both wide enough for the panel and as flat as possible. Don't like math? The Woodshop Calculator will do it for you. This affordable software system does all of the calculations necessary for frame and panel door construction.

Note: If you are planning to trim the edges of the finished door, or finish them with a Custom Door Edge Router Bit, you'll want to add 1/16" or so to the width of the stiles and rails. Remember also that you will be trimming this off when you finish assembling the door. You need to leave the extra width out of your calculations for the length of the rails.

Step 2 - Calculating the Length of the Rails

Once you have the bottom rail and the blank for the top rail cut to width, you can begin preparing for the cope cut by calculating the length of the rails. This can be a little tricky.
To determine the length of the rails, you need to subtract the combined width of the two stiles from the overall width of the door, and then add back in the depth of the panel groove (to account for the overlap between the panel groove and the coped end of the rail). Here is a formula for calculating the length of the rails:

Rail Length = Door Width - (Stile Width x 2) + (Panel Groove Depth x 2)

Most stile and rail bits (including the Rockler Stile and Rail Bit Set used here) cut a 3/8" deep panel groove. Using the Rockler bit set, for a door to be 18" wide when it's finished, with a 2-1/4" finished stile width, the rail length should be:

Rail Length = 18" - (2-1/4" x 2) + (3/8" x 2) = 14-1/4"

Step 3 - Setting the Coping Bit Height

With the bottom rail and the blank for the arched top rail cut to length, you're ready to set up to cope the ends of the rails. Setting the coping bit to the correct height is important. The match between the height settings of the stile and rail bits will determine whether the stiles and rails meet flush with one another. The Rockler Router Bit Set-Up Jig makes setting the bit height easy. The jig has the sticking profile cut at optimum height into one of its sides, and the matching cope cut into the other. You just raise the bit until it fits into the jig's cope profile and the bit is set at the optimum height. When it comes time to cut the sticking profile, simply repeat the procedure using the sticking profile on the other side of the jig.

Step 4 - Cutting the Copes

Getting a cope cut that's square and as close to perfectly level as possible is crucial. The cut has to be square if the parts of the frame are to fit together correctly, and it needs to be level (consistent in height over the length of the cut) to ensure that the door will lay flat and that the stiles and rails will meet flush with one another. To make coping the rails easier, and to ensure our coping accuracy, we used the Rockler Rail Coping Jig. The jig's backer is set to make a perfectly square cut.
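The rail-length arithmetic in Step 2 is simple enough to sketch in code. The snippet below is only an illustration of the formula given above, worked in decimal inches; the function name and the default 3/8" groove depth (from the Rockler bit set example) are our own framing, not part of any published tool.

```python
# Rail-length formula from Step 2, in decimal inches.
# Default groove depth assumes the 3/8" panel groove cut by the bit set
# in the example above.

def rail_length(door_width, stile_width, groove_depth=0.375):
    """Door width minus both stiles, plus the groove overlap on each end."""
    return door_width - 2 * stile_width + 2 * groove_depth

# The article's example: an 18" finished door with 2-1/4" stiles.
print(rail_length(18.0, 2.25))  # -> 14.25, i.e. 14-1/4"
```

The same function covers the note in Step 1: if you add 1/16" of trimming allowance to the stile width, the computed rail length shortens automatically, so the extra width stays out of the rail calculation.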
The jig is also designed to keep the end of the rail level with the coping bit during the cut. Just butt both the edge of the jig and the end of the rail up against the fence and clamp the rail in against the hardwood backer. The backer (included with the jig) prevents "tearout" that would otherwise happen when the rail exits the cut. When we have one end coped, we just flip the rail around and do the other side.

Note: When making the second cope cut on each rail, be sure to turn the rail around end for end - don't flip it over. It is surprisingly easy to forget that both cope cuts have to be made with the same side of the rail facing up.

Step 5 - Cutting the Sticking Profile

Next, set the height of the sticking profile bit - again, we recommend using the Rockler Router Bit Set-Up Jig. For the straight-line profile cuts on the stiles and bottom rail, position the fence so that the profile cut will end up on the very edge of the stock. Use a straightedge to line the stile bit's pilot bearing up with the surface of the fence. Since the sticking profile and groove are cut along the length of the grain, tearout at the end of the cut is not a concern. Just feed the stock at a moderate rate, and make sure that it's kept in firm contact with the fence and the surface of the router table over the entire length of the cut. Once you have the fence set up, run all of the straight edges - the two stiles and the bottom rail. While you're at it, put a profile on a couple of extra pieces. You'll use them in the next step.

Step 6 - Cutting and Milling the Arched Top Rail

Now you're ready to cut the arch in the top rail. Here's a trick that will make the process safer and much easier: Fit "temporary stiles" (short pieces of stock milled with the sticking profile) into the ends of the blank and clamp the three pieces together with a lightweight bar clamp. The purpose of the temporary stiles is to provide a place to start the profile cut.
Skipping this part means that you would have to start the cut on the very tip of the arch and run the risk of the bit catching hold of the corner of the rail, which would at the very least damage the corner or coped end of the rail. Apart from being potentially dangerous, having this happen is an unpleasant surprise that turns a perfectly good arched rail into a piece of scrap. The temporary stiles also protect the fragile end of the arch from breaking off when you exit the sticking profile cut.

With the three pieces clamped together, mark off the arc, using the rail template that comes with the Arched Door Template Set (be sure to choose the rail template that has the correct width range for the door you are making) and rough-cut the shape 1/16" oversized with a band saw. To make the sticking cut in the arched top rail, attach the pattern to the blank with double-sided tape. The template will guide the router bit's pilot bearing, so you want to make sure that it's secure and won't slip during the cut.

Note: With the temporary stiles clamped securely in place, and the pattern attached securely to the top of the rail, cutting the top rail sticking profile is usually a smooth procedure, but it also requires extra care. Starting a router cut in the middle of an edge can cause the bit to climb (pull the stock in the opposite direction from the intended cut direction). Be sure to ease into the cut slowly, hold on to the rail/temporary stile assembly firmly, and keep your fingers well away from the path of the bit. If you have limited experience using a router and router table, making a few test cuts on a piece of scrap to get the feel of the procedure would be a good idea.

Step 7 - Cutting the Panel to Width

The first question to ask yourself when figuring the width of a solid wood panel is "how's the weather?" In other words, you'll need to factor in how the humidity level in your shop while you are making the panel is affecting the stock.
If you are working in a northern climate in the dead of winter in an un-humidified shop, the wood you are about to make into a panel is probably about as dry as it's likely to get. In that case, you'll need to reduce the width of the panel by 1/8" for every 12" of width to allow for expansion when the humidity is high. The width of a fully moisture-expanded panel should still be 1/16" less than the length of the door's rails to provide a little assembly clearance. The formula for the width of a door panel is:

Panel Width = Rail Length (from tip to tip) - 1/16" Assembly Clearance

Note: If you're planning to use Space Balls to keep the panel centered and to forestall any possibility of "panel rattle," increase the assembly clearance for both the width and height of the panel from 1/16" to 1/4" total (1/8" for each side). It was a sultry summer day when we cut the parts for our door, so we assumed that our stock was about as loaded up with moisture as it was likely to get. We just subtracted 1/16" from the total length of our rails to find the width of the panel.

Step 8 - Cutting the Panel to Height

Since wood expands far more across the grain than lengthwise along the grain, the height of the panel can be made just 1/16" smaller (for assembly clearance) than the actual dimension of the available space inside the panel groove. If you are making an arched top door, the width of the top rail, of course, varies. Using the width of the rail at its narrowest point (midpoint between the two ends) will allow you to determine the height of the panel at its tallest point:

Panel Height = Stile Length - Top Rail Width (at the center) - Bottom Rail Width - 1/16" Assembly Clearance + (Panel Groove Depth x 2)

When you've calculated the height of the panel at its tallest point, and the panel blank is cut to its finished width, you are ready to mark and cut the curved top of the panel. Use the panel template to mark off the shape of the arched top of the panel.
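The sizing rules in Steps 7 and 8 are simple arithmetic, so they can be sketched as a small calculator. Everything below is an illustration, not shop gospel: the function names and example dimensions are ours, the dry-stock allowance applies the text's 1/8"-per-12"-of-width rule, and the height formula assumes its leading term is the stile (door) length, which the formula as printed omits.

```python
# Panel sizing sketch based on the rules in Steps 7 and 8.
# All dimensions are in inches.

def panel_width(rail_length, clearance=1/16, dry_stock=False):
    """Panel width = rail length (tip to tip) minus assembly clearance.

    If the stock was milled very dry, subtract roughly an extra 1/8"
    per 12" of width to leave room for moisture expansion.
    """
    width = rail_length - clearance
    if dry_stock:
        width -= (width / 12) * (1/8)
    return width

def panel_height(stile_length, top_rail_width, bottom_rail_width,
                 groove_depth, clearance=1/16):
    """Panel height at its tallest point (arched top measured at center)."""
    return (stile_length
            - top_rail_width       # measured at its narrowest (center) point
            - bottom_rail_width
            - clearance
            + 2 * groove_depth)    # panel seats in the groove at both ends

# Hypothetical door: 23" rails, 42" stiles, 5" top rail at center,
# 5-1/2" bottom rail, 3/8" deep panel groove.
print(panel_width(23))                   # 22.9375 (22-15/16")
print(panel_height(42, 5, 5.5, 3/8))     # 32.1875 (32-3/16")
```

If you are using Space Balls, pass `clearance=1/4` to both functions instead, per the note above.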
Be sure that the template is square with the edge of the panel, and that it's placed so that you'll end up - after rough-cutting the arch - with a piece that's 1/4" or so longer than the finished panel height you've calculated.

Note: Be sure to use the panel template from the Arched Door Template Set that corresponds to the rail template you used in Step 6! The rail templates are clearly marked to help make this easy. When rough-cutting the panel, keep the band saw's blade well on the scrap side of the line - you'll use the line again when you attach the template to the panel in order to trim it smooth with a flush trim router bit. When you have the arch trimmed smooth, simply measure down the panel along the center line from top to bottom to mark off the panel height you've calculated. Crosscut off the excess to finish sizing the panel.

Step 9 - Cutting the Panel Profile

With the panel cut to size, you're ready to profile the panel with a raised panel router bit. Having decided on a cove profile for the edge of the panel, we cut our panel profile with a cove profile horizontal panel raising bit. The bit we used was equipped with a "back cutter," meaning that it makes a cut in the back of the panel as well as the front. There are two advantages to using this type of bit: Simply raising and lowering the height of the bit allows you to quickly and easily place the raised field of the panel anywhere you'd like in relation to the front surface of the frame. The back cutter also ensures that the panel edge will be exactly the right thickness to fit snugly into the panel groove. We decided to have the surface of our panel protrude up past the surface of the frame by as much as possible, but we set the bit height so that the back cutter would take off 1/32" of material to ensure that the edge of the panel would be exactly the right thickness for the panel groove.
To cut the panel profile, set the raised panel bit at the desired height and use the bit's pilot bearing to carefully guide the curved top of the panel through the cut. Be sure to grip the panel firmly and keep it pressed tight against the surface of the router table at all times. Then, for a little added support while cutting the straight sides of the panel, set the fence in line with the panel bit's pilot bearing as you did with the sticking profile bit in Step 5. Use the fence as a guide while you cut the straight sides of the panel, beginning with the bottom edge.

Step 10 - Assembly

Assembling the door is a fairly straightforward process, but there are a few things to watch out for. Most importantly, the surface you're working on when gluing up the door should be as close to perfectly flat as possible. Here, we used the Jet Parallel Clamps, along with Rockler Parallel Clamp Blocks, which, when used together, keep parts flat and square during glue-ups. We recommend cutting the stiles to their exact finished length (plus any extra width you've added to the rails for trimming) before assembly. That allows you to take advantage of the clamp block system. The blocks make it possible to arrange clamps for four-sided clamping, and allowed us to use the length-wise clamps to align the top and bottom ends of the stiles with the top and bottom edges of the rails. It's a good idea to "dry-fit" the parts before applying the glue, just to make sure that everything fits together correctly, and that you have the procedure down. When you're satisfied with the fit of all of the door's parts, apply a judicious amount of glue to the ends of the rails and assemble the door. Gently clamp the parts together (you don't need a lot of clamping pressure - just look for a tight fit between the stiles and rails). It is extremely important to press the door down flat against the surface of the clamps while tightening them down.
When the clamps are tightened, check the door for flatness with a straightedge or level. If the door is out of flat, loosen the clamps, press the door down flat against them and re-tighten them. Finally, wipe off any excess glue and check the squareness of the door by measuring diagonally cross-corners in both directions. If the rails have been cut and coped square, the door should clamp up square almost "automatically." But it's a good idea to check anyway. If the cross-corner measurements don't match, the door is out of square. Loosen the clamps slightly and apply a clamp across the two corners that have the longest corner-to-corner measurement. Slowly tighten the diagonal clamp while monitoring the opposite corner-to-corner measurement. When the difference between the two measurements is equalized, re-tighten the clamps and check again for flatness. Follow the glue manufacturer's instructions for clamping time. After unclamping the door, trim the edges of the door as needed, or finish them with a Custom Door Edge Router Bit. In most cases, you'll just need to give the door a light sanding before finishing with a quality wood finish.
Ernst Mach (February 18, 1838 – February 19, 1916) was an Austrian physicist and philosopher and is the namesake for the "Mach number" (also known as Mach speed) and the optical illusion known as Mach bands. Ernst Mach was born in Chrlice (now part of Brno), Czech Republic. Up to the age of 15 he was educated at home by his parents. He then joined the local Gymnasium and in 1855 the University of Vienna. There he studied mathematics, physics and philosophy, and received a doctorate in physics in 1860. His early work was focused on the Doppler effect in optics and acoustics. In 1864 he took a job as Professor of Mathematics in Graz; in 1866 he was also appointed as Professor of Physics. During that period Mach became interested also in the physiology of sensory perception. In 1867 he took the chair of Professor of Experimental Physics at Charles University, Prague, where he stayed for 28 years. In 1897 he suffered a stroke and in 1901 retired from the University and was appointed to the upper chamber of the Austrian parliament. On leaving that post in 1913 he moved to his son's home in Vaterstetten, near Munich, where he continued writing books until his death. Most of his studies in the field of experimental physics were devoted to interference, diffraction, polarization and refraction of light in different media under external influences. These studies were soon followed by his important explorations in the field of supersonic velocity. Mach's paper on this subject was published in 1877 and correctly describes the sound effects observed during the supersonic motion of a projectile. Mach deduced and experimentally confirmed the existence of a shock wave which has the form of a cone with the projectile at the apex. The ratio of the speed of the projectile to the speed of sound vp/vs is now called the Mach number.
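The Mach number defined above (projectile speed over sound speed) and the shock cone it describes can be put in a short numerical sketch. The cone's half-angle formula sin(mu) = 1/M is a standard aerodynamics result stated here for illustration; it is not derived in the article, and the speeds used are hypothetical.

```python
import math

def mach_number(v_projectile, v_sound=343.0):
    """Mach number M = vp / vs (vs defaults to ~343 m/s, air at 20 C)."""
    return v_projectile / v_sound

def mach_cone_half_angle(mach):
    """Half-angle of the shock cone in degrees, from sin(mu) = 1/M.

    A cone only forms for supersonic motion (M > 1)."""
    if mach <= 1:
        raise ValueError("A Mach cone forms only for supersonic motion (M > 1)")
    return math.degrees(math.asin(1.0 / mach))

m = mach_number(686.0)            # a projectile at 686 m/s -> M = 2.0
print(m)                          # 2.0
print(mach_cone_half_angle(m))    # 30.0 (degrees)
```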
Sensory perception

Philosophy of science

Mach developed a philosophy of science which was influential in the 19th and 20th centuries. Mach held that scientific laws are summaries of experimental events, constructed for the purpose of human comprehension of complex data. Thus scientific laws have more to do with the mind than with reality as it exists apart from the mind. Some quotations from Mach's writings will illustrate his philosophy. These selections are taken from his essay The Economical Nature of Physical Inquiry, excerpted by Kockelmans (citation below).
- The goal which it [physical science] has set itself is the simplest and most economical abstract expression of facts.
- When the human mind, with its limited powers, attempts to mirror in itself the rich life of the world, of which it itself is only a small part, and which it can never hope to exhaust, it has every reason for proceeding economically.
- In reality, the law always contains less than the fact itself, because it does not reproduce the fact as a whole but only in that aspect of it which is important for us, the rest being intentionally or from necessity omitted.
- In mentally separating a body from the changeable environment in which it moves, what we really do is to extricate a group of sensations on which our thoughts are fastened and which is of relatively greater stability than the others, from the stream of all our sensations.
- Suppose we were to attribute to nature the property of producing like effects in like circumstances; just these like circumstances we should not know how to find. Nature exists once only. Our schematic mental imitation alone produces like events.
In accordance with this philosophy, Mach opposed Ludwig Boltzmann and others who proposed an atomic theory of physics. Since atoms are too small to observe directly, and no atomic model at the time was consistent, the atomic hypothesis seemed to Mach to be unwarranted, and perhaps not sufficiently "economical".
Mach had a direct influence on the Vienna Circle philosophers and the school of logical positivism in general. Albert Einstein called him the "forerunner of [the] Theory of relativity", though Mach would later, to Einstein's disappointment, reject Einstein's theory. Mach's positivism was also influential on many Russian Marxists, such as Alexander Bogdanov. In 1908, Lenin wrote a philosophical work Materialism and Empirio-Criticism in which he criticized the views of "Russian Machists". The question is, did Mach deny the relevance of the a priori apropos of some of its possible significances or did he deny the a priori altogether? If we are able to conclude the former (rather than the positivist position: the latter), then we will be in a good position to suggest that a great mind such as Mach's would never have denied the significance of Kantian doctrine for an understanding of the scientific endeavor (including the significance of the a priori). Mach's central text, as far as this debate is concerned, is his quasi-textbook entitled, "The Science of Mechanics." Thankfully, we can cut to the chase here as there are only a couple times that a priori or a posteriori are mentioned and, to complicate things further, it is always in the same vein; Mach denies the use of any a priori in arriving at the concepts that make up law-based propositions. In one of his only explicit mentions of the a priori in the Mechanics, Mach writes, "All this[, the derivation of the valid application of certain laws within certain circumstances,] has often led men to attribute knowledge of this kind to an entirely different source, namely, to view it as existing a priori in us (previous to all experience). That this opinion is untenable was fully explained in our discussion of the achievements of Stevinus.
(p.83)" The specific 'achievements of Stevinus,' though they are indispensable so far as Mach's narrative of mechanics is concerned, are dispensable as far as his treatment of the a priori is concerned, and so the reader's time will not be wasted with such explanation. With our goal in mind, it is only necessary to understand here that Stevinus arrived at a special (static) application of a special law (a law of equilibrium, retrospectively derivative from the law of the composition of forces) that he could only have arrived at through experiment in the physical world. He could not pull a 'Bernoullian' trick, as Mach would have it; that is, he could not prove the applicability of the law of the parallelogram of forces to the real world just because such a law is mathematically demonstrable prior to experience in the world; i.e. just because the law is a priori. A HA! exclaim the positivists, so Mach does throw the a priori to the wolves. Well, no, unfortunately for the positivists, that is not true. A more careful reading or, might one even go so far as to say, any fair reading of Mach's text at all, would reveal that Mach only denies the a priori in a specific sense: in knowing how to validly or, in his terminology, 'economically' apply laws to the physical world. Only experience can tell us that, Mach insists, so don't go believing you can fly because you can dream about it--Mach warns the Bernoullians of the world. This, however, seems to be the advice of every great mind of the period, from Poincare to Einstein. Not-a-one of them would advocate for such a pre-experiential application of the concepts of the a priori and, surprise-surprise, neither would Kant. Some, such as Hans Reichenbach, suggest, however, that there is a more subtle strain in Mach's conception of the way that laws arise and are validly applied.
Such academics insist that Mach's conception of the inception of physical law is based upon empirical circumstance, is empirically given, and therefore contingent only upon circumstance. This tendency towards the given, which is so central to logical atomism, is nowhere to be found in Mach! Mach is all about the 'imagination': the Kantian constructor of concept. Therefore it is no surprise that Mach acclaims Newton and his universal law of gravity. This law is, for Mach, forever true and applicable (universal). Yet this law is of a synthetic a priori nature: the very consideration that led Kant to embark upon the Critique of Pure Reason. Furthermore, Mach's text is rich with suggestions as to the timelessness of laws, be it those of Galileo or those of Newton (which, as he later points out, are to be conflated/subsumed one in the other). What is contingent for Mach is not what Newton called a 'law', but what he called a 'proposition'; namely that the law of gravity take on the form of the inverse square. Yet experience has proved it to be so, and therefore, though it may be a synthetic statement whose application to the world would have been arbitrary and audacious otherwise, because experience has validated it, it has taken on a synthetic a priori nature (which it could not have done without experience). This is the place that Mach reserves for the a priori for, at the same time that it is implied by his text, he never suggests or says otherwise. Indeed, it is worth considering Mach's one explicit mention of Kant within his entire text. Mach writes, "It is scarcely necessary to remark that in the reflections here presented Newton has again acted contrary to his expressed intention only to investigate actual facts. (p.229)" He is here speaking of Newton's conceptions of Absolute Space and Absolute Time.
Newton's example of the rotating bucket is invoked to delineate between the centrifugal forces acting on the water and the forces of the rotating bucket as not indicating anything about Absolute Space, as he imagines Newton thought it did, but merely the relative action of complex forces upon the water. Newton's point, however, could be read to mean that, while we are repeatedly confronted by the action of complex forces, we as scientists are made to consider the potential of relative motions in an infinite regress (the bucket acting upon the water at the same time as the relatively stationary earth acts upon it, etc.---why stop at the earth?), and so we are faced with a humbling fact: as far as our finite minds might consider relative motions, there seems to be the infinite possibility of further considerations of relativity. "But if we take our stand on the basis of facts," Mach admits, "we shall find we have knowledge only of relative spaces and motions. (p.232)" How, then, is the scientist to decipher the outline of a law-like proposition, such as that of the inverse square, in the face of this daunting relativity? Newton thus concludes that we must assume that the relativity stops somewhere, that there is such a thing as an absolute space and an absolute time, for by such a conclusion the scientist is once again liberated to act as if the propositions that he constructs as he observes the relative motions of physical phenomena have absolute/universal significance: the universal gravitation that Mach so enthusiastically applauds. "No one is competent to predicate things about absolute space and absolute motion; they are pure things of thought, pure mental constructs, that cannot be produced in experience. (p.229)" Precisely, Mr. Mach, that is precisely what they are, however, he writes of the Absolutes as, "idle metaphysical conception[s].
(p.224)" Mach understands the necessity of relinquishing man's grip on attempting to nail down relativity from its infinite regress, as he writes, "It is certainly fortunate for us, that we can, from time to time, turn aside our eyes from the overpowering unity of the All, and allow them to rest on individual details, (p.235)" and so one has to wonder exactly why Mach took such an aversion to such utterances of the absolutes: poetic utterances, metaphysical concepts and practical ones at that. The answer stares us in the face: Mach, for whatever reason(s), was hellbent on keeping metaphysics separate from science. Is this symptomatic of a poor understanding of the relation between metaphysics and science in the long-time established Kantian doctrine, or is this a symptom of a time period in which ghosts like the ether and the atom haunted physics? I am inclined, at this point, to suggest the latter, for (while Mach seems to have supported the ether) we have Mach's rabid anti-atomism as at least one salient symptom of what may fairly be portrayed as a desire to keep physics pure of the kinds of dialectical illusions that Kant illustrates in his antinomies (atoms versus plenum). Mach concludes, "No one is warranted in extending these principles beyond the boundaries of experience. In fact, such an extension is meaningless, as no one possesses the requisite knowledge to make use of it. (p.229)" Admittedly, this is an alarmingly dismissive tone when one considers the subtlety of the Newtonian conception... Mach explains, "No one is competent to say how the experiment would turn out if the sides of the vessel increased in thickness and mass till they were ultimately several leagues thick. The one experiment only lies before us, and our business [as physicists] is, to bring it into accord with the other facts known to us, and not with the arbitrary fictions of our imagination. (p.232)" But what of our business as metaphysicians? 
While Mach is thankful for our ability to act as if we observe absolute motions (though we know epistemologically that our knowledge reflects only relative motions), he warns, "But we should not omit, ultimately to complete and correct our views by a thorough consideration of the things which for the time being we left out of account" (in our practical discursions into the realm of the absolutes). In an appended remark, Mach reveals, "I regard... Newton's distinction as an illusion. (p.543)" This remark is revealing because, while he shows that he understands the necessity of acting as if motions that we observe occur in absolute space, he is here demonstrating that he does not understand Newton to be alluding to this practical necessity. Perhaps his admiration for Newton as a 'first rate philosopher' falls short of affording Newton the subtle reading that his words demand (for such an interpretation of his words is possible). Or perhaps Mach has resigned such a subtle reading to the persistent co-opting of Newton by his contemporaries in the name of a cruder reading (Streintz's criticism of Mach seems to suggest this latter position). In any case, Mach does portray Newton as having been sloppy in form as he writes, "Like the commander of an army, a great discoverer cannot stop to institute petty inquiries regarding the right by which he holds each post of vantage he has won... Newton might well have expected of the two centuries to follow that they should further examine and confirm the foundations of his work. (p.245)"--i.e. clean it up, not in substance but in presentation, for, as Mach writes, "[Newton] was, as it is possible to prove, not perfectly clear himself (in his writings) concerning the import and especially concerning the source of his principles. (p.244)" Mach says what Newton ought to have said (and, of course, can be read as having said).
However, there is one last point in treating of the philosophy underlying Mach's scientific endeavor, and that is whether he was actually aware that his well grounded treatment of the a priori (negative) kept the a priori within the bounds that Kant intended it to be: for that he certainly did. That is, was Mach aware of the positive place of the a priori within his own exposition or was he unwittingly re-inventing the wheel? Signs point to no, that he was not aware of his inherited philosophical foundation, as he writes, "Newton might well have expected... that, when times of greater scientific tranquillity should come (Mach is here implying the future after even his own time), the principles of the subject might acquire an even higher philosophical interest than all that is deducible from them. (p.245)" What is there left to imagine in terms of deducing from principles, if all phenomenal deductions are excluded, than a 'transcendental' deduction of the Kantian variety? The evidence seems to suggest that Mach didn't so much deny the significance of the Kantian doctrine, but merely that he wasn't aware of the full implications of that doctrine.
- J. Kockelmans. Philosophy of science: the historical background. New York: The Free Press, 1968.
- B. Bliumis. "Mach and the A Priori". Bard College: Home Computer, 2007.
This page uses Creative Commons Licensed content from Wikipedia.
Definitions for mackerel (ˈmæk ər əl, ˈmæk rəl)

- flesh of very important usually small (to 18 in) fatty Atlantic fish
- any of various fishes of the family Scombridae
- An edible fish of the family Scombridae, often speckled. Origin: From maquerel, from a source.
- a pimp; also, a bawd
- any species of the genus Scomber, and of several related genera. They are finely formed and very active oceanic fishes. Most of them are highly prized for food. Origin: [OF. maquerel, F. maquereau, fr. D. makelaar mediator, agent, fr. makelen to act as agent.]

Mackerel is a common name applied to a number of different species of pelagic fish, mostly, but not exclusively, from the family Scombridae. They are found in both temperate and tropical seas, mostly living along the coast or offshore in the oceanic environment. Mackerel typically have vertical stripes on their backs and deeply forked tails. Many species are restricted in their distribution ranges, and live in separate populations or fish stocks based on geography. Some stocks migrate in large schools along the coast to suitable spawning grounds, where they spawn in fairly shallow waters. After spawning they return the way they came, in smaller schools, to suitable feeding grounds, often near an area of upwelling. From there they may move offshore into deeper waters and spend the winter in relative inactivity. Other stocks migrate across oceans. Smaller mackerel are forage fish for larger predators, including larger mackerel. Flocks of seabirds, as well as whales, dolphins, sharks and schools of larger fish such as tuna and marlin follow mackerel schools and attack them in sophisticated and cooperative ways. Mackerel is high in omega-3 oils and is intensively harvested by humans. In 2009, over five million tonnes were landed by commercial fishermen. Sport fishermen value the fighting abilities of the king mackerel.
The numerical value of mackerel in Chaldean Numerology is: 7
The numerical value of mackerel in Pythagorean Numerology is: 5

Sample Sentences & Example Usage

- We also add kusaya, a sun-dried salted horse mackerel that gives off the smell of dog dung.
- It was like, Holy mackerel! I get to sing that? I'm in! It's the best score I've heard since 'West Side Story.'
- The Mediterranean has the color of mackerel, changeable I mean. You don't always know if it is green or violet, you can't even say it's blue, because the next moment the changing reflection has taken on a tint of rose or gray.
- We still recommend that women avoid the fish that are highest in mercury like catfish, shark, swordfish and giant mackerel, typically the larger fish that have longer lifespans and they tend to concentrate more mercury in their tissue.

Translations for mackerel, from our Multilingual Translation Dictionary

- verat, cavalla (Catalan, Valencian)
- rionnach (Scottish Gaelic)
- פלמידה, מקרל (Hebrew)
- サバ, 鯖 (Japanese)
- Makréil (Luxembourgish, Letzeburgesch)
- skumbrija, makrele (Latvian)
- tawatawa, tewetewe (Māori)
- макрель, скумбрия (Russian)
- skuša, скуша (Serbo-Croatian)
- cá thu (Vietnamese)
1. Environmental Energy Technologies Division, Lawrence Berkeley National Laboratory, Berkeley, USA
2. Civil and Environmental Engineering Department, University of California, Berkeley, USA
3. Civil and Environmental Engineering Department, Cal Poly, San Luis Obispo, USA

Received: 28 Jan 2012 – Published in Atmos. Chem. Phys. Discuss.: 23 Feb 2012

Abstract. A spectroscopic analysis of 115 wintertime particulate matter samples collected in rural California shows that wood smoke absorbs solar radiation with a strong spectral selectivity. This is consistent with prior work that has demonstrated that organic carbon (OC), in addition to black carbon (BC), appreciably absorbs solar radiation in the visible and ultraviolet spectral regions. We apportion light absorption to OC and BC and find that the absorption Ångström exponent of the light-absorbing OC in these samples ranges from 3.0 to 7.4 and averages 5.0. Further, we calculate that OC would account for 14% and BC would account for 86% of solar radiation absorbed by the wood smoke in the atmosphere (integrated over the solar spectrum from 300 to 2500 nm). OC would contribute 49% of the wood smoke particulate matter absorption of ultraviolet solar radiation at wavelengths below 400 nm and, therefore, may affect tropospheric photochemistry. These results illustrate that BC is the dominant light-absorbing particulate matter species in atmospheres burdened with residential wood smoke and OC absorption is secondary but not insignificant. Further, these results add to the growing body of evidence that light-absorbing OC is ubiquitous in atmospheres influenced by biomass burning and may be important to include when considering particulate matter effects on climate.

Revised: 24 May 2012 – Accepted: 22 Jun 2012 – Published: 16 Jul 2012

Kirchstetter, T. W. and Thatcher, T. L.: Contribution of organic carbon to wood smoke particulate matter absorption of solar radiation, Atmos. Chem.
Phys., 12, 6067-6072, doi:10.5194/acp-12-6067-2012, 2012.
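The absorption Ångström exponent (AAE) used in this abstract to characterize spectral selectivity follows the standard power-law model b_abs(λ) ∝ λ^(−AAE). Below is a minimal sketch of how an AAE is computed from absorption measured at two wavelengths; the wavelengths and absorption values are illustrative assumptions, not the paper's data, and the AAE ≈ 1 behavior for BC-like aerosol is a commonly cited approximation.

```python
import math

def angstrom_exponent(b1, lam1, b2, lam2):
    """AAE from the power law b_abs = K * lambda**(-AAE):

    AAE = -ln(b1 / b2) / ln(lam1 / lam2)
    """
    return -math.log(b1 / b2) / math.log(lam1 / lam2)

# Illustrative values only (Mm^-1): BC-like absorption falls off roughly
# as 1/lambda (AAE near 1), while the light-absorbing OC in these samples
# averaged AAE ~ 5, i.e. much stronger absorption toward the UV.
b_370, b_880 = 40.0, 16.8
print(angstrom_exponent(b_370, 370, b_880, 880))  # close to 1 (BC-like)
```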
You can find them on a map. Barely. Little towns that used to be rather important hubs dot the Virginia countryside, dating from the days when agriculture ruled along with the horse and buggy or mule and wagon. These central spots, often near rail stations, rivers, or better roads, were communities in their own right and many have faded away as the interstate system grew. The Lost Communities of Virginia, by Terri Fisher and Kirsten Sparenborg, takes a look at these fading places, several of them near our area, including Mineral, Woodford, and Milford.

Fans of Fried Green Tomatoes at the Whistle Stop Café can relate to little Milford, situated in Caroline County and still located on a railroad line. Originally the popular area here was Doguetown, named for the Dogue Indians who used the Mattaponi River for transportation. Milford, named for a nearby plantation in 1792, also used the river as a point for shipping—and inspecting—tobacco. The Mattaponi River was connected to both the York River and the Chesapeake Bay. By the early 1840s, the Richmond, Fredericksburg, and Potomac Railroad ran from Richmond to Aquia Creek with a stop in Milford. Milford's North-South railroad connections made it a target in the Civil War.

This account has been compiled from the Free Lance newspaper of Fredericksburg, Virginia, October 16, 1894 through September 27, 1895, by Robert A. Hodge.

Alum Spring Park is a 34-acre woodland retreat off Greenbriar Drive with a playground and hiking trails. Its sandstone cliff, also known as the Alum Spring Rock, is 400 feet long and 40 feet high. In 1852, Fredericksburg businessmen were concerned with the failure of the Rappahannock Canal (see Fredericksburg Times, Jan., 1978), the impassability of the turnpike, the incomplete state of the plank road and the loss of county trade to the Alexandria markets via the railroad.

When Dr.
Edward Alvey, Jr., died at the age of 97 on July 11, 1999, generations of Mary Washington College students remembered him as their beloved Dean. They -- and generations of Fredericksburgers -- also remembered him as a writer and historian who illuminated the life and times of our area.
<urn:uuid:2c2029ba-5c82-455d-bed1-ed06969a32d3>
CC-MAIN-2016-26
http://www.librarypoint.org/taxonomy/term/258
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395621.98/warc/CC-MAIN-20160624154955-00174-ip-10-164-35-72.ec2.internal.warc.gz
en
0.968994
487
2.71875
3
Curcumin is not to be confused with cumin, which is a completely different spice with a similar name. Curcumin is the principal curcuminoid of the Indian curry spice turmeric, the other two curcuminoids being demethoxycurcumin and bisdemethoxycurcumin. The curcuminoids are polyphenols and are responsible for the yellow color of turmeric. Curcumin can exist in at least two tautomeric forms, keto and enol; the enol form is more energetically stable in the solid phase and in solution. It is also hepatoprotective. Curcumin can be used for boron quantification in the so-called curcumin method: it reacts with boric acid to form a red-colored compound known as rosocyanine. Since curcumin is brightly colored, it may be used as a food coloring; as a food additive, its E number is E100.

Curcumin: potential for hepatic fibrosis therapy?
O'Connell MA, Rushworth SA. School of Chemical Sciences and Pharmacy, University of East Anglia, Norwich, UK. m.***@****

The beneficial antioxidative, anti-inflammatory and antitumorigenic effects of curcumin have been well documented in relation to cancer and other chronic diseases. Recent evidence suggests that it may be of therapeutic interest in chronic liver disease. Hepatic fibrosis (scarring) occurs in advanced liver disease, where normal hepatic tissue is replaced with collagen-rich extracellular matrix and, if left untreated, results in cirrhosis. Curcumin inhibits liver cirrhosis in a rodent model and exerts multiple biological effects in hepatic stellate cells (HSCs), which play a central role in the pathogenesis of hepatic fibrosis. In response to liver injury, these cells proliferate, producing pro-inflammatory mediators and extracellular matrix. Curcumin induces apoptosis and suppresses proliferation in HSCs. In addition, it inhibits extracellular matrix formation by enhancing HSC matrix metalloproteinase expression via PPARgamma and suppressing connective tissue growth factor (CTGF) expression.
In this issue, Chen and co-workers propose that curcumin suppresses CTGF expression in HSC by inhibiting ERK and NF-kappaB activation. These studies suggest that curcumin modulates several intracellular signalling pathways in HSC and may be of future interest in hepatic fibrosis therapy. PMID: 18037917 [PubMed - indexed for MEDLINE]

Curcumin protects the rat liver from CCl4-caused injury and fibrogenesis by attenuating oxidative stress and suppressing inflammation.
Fu Y, Zheng S, Lin J, Ryerse J, Chen A. Department of Pathology, School of Medicine, Saint Louis University, 1402 S. Grand Blvd., St. Louis, MO 63104, USA.

We previously demonstrated that curcumin, a polyphenolic antioxidant purified from turmeric, up-regulated peroxisome proliferator-activated receptor (PPAR)-gamma gene expression and stimulated its signaling, leading to the inhibition of activation of hepatic stellate cells (HSC) in vitro. The current study evaluates the in vivo role of curcumin in protecting the liver against injury and fibrogenesis caused by carbon tetrachloride (CCl(4)) in rats and further explores the underlying mechanisms. We hypothesize that curcumin might protect the liver from CCl(4)-caused injury and fibrogenesis by attenuating oxidative stress, suppressing inflammation, and inhibiting activation of HSC. This report demonstrates that curcumin significantly protects the liver from injury by reducing the activities of serum aspartate aminotransferase, alanine aminotransferase, and alkaline phosphatase, and by improving the histological architecture of the liver. In addition, curcumin attenuates oxidative stress by increasing the content of hepatic glutathione, leading to the reduction in the level of lipid hydroperoxide. Curcumin dramatically suppresses inflammation by reducing levels of inflammatory cytokines, including interferon-gamma, tumor necrosis factor-alpha, and interleukin-6.
Furthermore, curcumin inhibits HSC activation by elevating the level of PPARgamma and reducing the abundance of platelet-derived growth factor, transforming growth factor-beta, their receptors, and type I collagen. This study demonstrates that curcumin protects the rat liver from CCl(4)-caused injury and fibrogenesis by suppressing hepatic inflammation, attenuating hepatic oxidative stress and inhibiting HSC activation. These results confirm and extend our prior in vitro observations and provide novel insights into the mechanisms of curcumin in the protection of the liver. Our results suggest that curcumin might be a therapeutic antifibrotic agent for the treatment of hepatic fibrosis.
<urn:uuid:833f0992-39e7-4f3c-9e99-6a7b77332392>
CC-MAIN-2016-26
http://www.medhelp.org/posts/Hepatitis-C/orleans/show/558284
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783402746.23/warc/CC-MAIN-20160624155002-00085-ip-10-164-35-72.ec2.internal.warc.gz
en
0.892582
1,299
2.609375
3
Now that you've found some web resources, you have to decide if the information on them is trustworthy and is really what you need. To decide if a site is credible, ask yourself these questions:

Who is the author?
If the website is run by a well-known organization or is part of a scholastic organization (like a university or a research institute), the information is more likely to be true. But be careful! Websites that the public can edit, such as Yahoo! Answers or Wikipedia, have little control over what their users post. You can also look at what kind of domain the site is. The domain is the last part of the URL: .gov sites are government sites, .edu sites are educational sites, like a school or college, and .org sites are usually non-profit organizations, like churches and museums. These are more likely to be reliable than .com sites, which are usually commercial sites.

Where did they get their information?
Look for sites that include lists of references and source materials, which will explain where the author found the information. If they list their resources, it shows they did their homework!

Is it True?
Sometimes even reliable websites can unintentionally contain false information. News sites can have false reports if the journalist didn't properly verify a story or they were tricked by a hoax. Sites like Wikipedia allow anyone on the Web to contribute content, and the information can be incomplete, biased, or just plain wrong. Always double-check your Web research against as many sources as possible, especially information from sites that contain disclaimers about their content.

How old is it?
A resource is a lot more likely to be useful if it is up to date. Find out when the website was created and when it was last updated. Sometimes old information can be incorrect, especially if it is related to current events. You can also check to see if all the links on the website still work. Broken links can mean that a site has not been updated in a long time.
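The advice to check whether a site's links still work can be partly automated: first gather the page's links, then test each one. Below is a minimal sketch using only Python's standard library; the `LinkCollector` class and `extract_links` function are illustrative names invented here, not part of any real site-checking tool.

```python
from html.parser import HTMLParser


class LinkCollector(HTMLParser):
    """Collects the href of every <a> tag so each link can be checked later."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the tag's attributes
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def extract_links(html):
    """Return all link targets found in an HTML string, in document order."""
    parser = LinkCollector()
    parser.feed(html)
    return parser.links
```

Each collected URL could then be requested (for example with `urllib.request`) to see whether it still resolves; a page full of dead links is a hint that it has not been maintained in a long time.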
Does it look professional?
Reliable sites are usually built and maintained by a team of designers and content experts and look very professional. If a site is full of errors such as typos and broken links, and inconsistencies such as images and advertisements that don't relate to the content, it is much less likely to be a reliable site. To be on the safe side, no matter what the site looks like, you should always double-check the information on it against a trusted source.

What do others think of the site?
Reliable websites are frequently recommended by other websites. You can check to see how many backlinks there are to a site by typing "link:" followed by the full site URL into a search engine. Sites with fewer links to them also tend to be listed further down in search engine results. You may also want to search for other people's opinions about the site and its author and see what they have to say about them. Try to be objective when reading other people's opinions and reviews, though. Some people will write nasty reviews just to be mean.

Is it Fact or Opinion?
Facts are known to be true and can be used to prove a point. Opinions are what someone thinks about a topic. Instead of proving a point, they can only agree or disagree with it. A site's information could be opinion if:
- It only presents one side of the story, or key information about the topic is left out
- It is paid for by an organization that has a specific position on the topic, or the author has something to gain by only presenting one side of the story
- It does not cite other resources for the information presented
- It presents an extreme view of the topic
- It does not state the reasons for why it is presenting information on the topic

Reports usually focus on facts, but sometimes it's necessary to research opinions on a topic as well. If you're unsure whether you should be researching facts or opinions about your topic, check with your teacher.
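The guide's domain rule of thumb (.gov, .edu, and .org versus .com) is easy to express in code. A small sketch using Python's standard `urllib.parse`; the `domain_hint` function and its category labels are hypothetical names made up for this example, and a real credibility check needs far more than the URL suffix.

```python
from urllib.parse import urlparse

# Rough mapping from top-level domain to the guide's categories.
# This is only a first hint, never a verdict on trustworthiness.
DOMAIN_HINTS = {
    "gov": "government site",
    "edu": "educational site",
    "org": "non-profit organization",
    "com": "commercial site",
}


def domain_hint(url):
    """Return the guide's rough category for a URL's top-level domain."""
    host = urlparse(url).hostname or ""
    suffix = host.rsplit(".", 1)[-1].lower()
    return DOMAIN_HINTS.get(suffix, "unknown")
```

Note that country-code domains such as .ca or .uk fall back to "unknown" here, which is one more reason the suffix alone can never settle whether a site is reliable.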
<urn:uuid:3f69416e-84b8-453e-af02-08627499c39c>
CC-MAIN-2016-26
http://www.carnegiecyberacademy.com/libraryGuides/evaluating.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396029.85/warc/CC-MAIN-20160624154956-00111-ip-10-164-35-72.ec2.internal.warc.gz
en
0.96617
829
3.390625
3
When natural disasters or unforeseen events occur, you know that being physically prepared for an emergency with backups and reserves of food, water, power and shelter is usually the difference between security and peace of mind, or uncertainty and possible tragedy.

What role can a solar cooker play during times of emergency or disaster? You would be surprised at how much a solar cooker can replace and do when routine items and services are not available. A simple solar oven and cooker can play a vital part in assuring that you have all possible scenarios covered in your emergency preparedness plans... after all, what good are your emergency food preparedness efforts if you have no means to cook the food?

As we are a site focused on the use of solar power and alternative energy resources, namely for the purposes of cooking, we want to focus on the ways that a solar cooker can benefit people from all walks of life, especially in the event of a disaster or crisis.

We realize, of course, that conditions within any locale at any given time may not be favorable for using a solar cooker. Stormy, cloudy and violent weather will most surely make it difficult, even impossible, to use a solar oven. But, as everyone knows, these conditions will not always be present or remain that way; in fact, they are usually very short in duration. The resultant effects of events such as power outages, tornadoes and storms usually leave many hundreds and thousands without the basic necessities that are so common and vital to our customary standard of living; these usually include power, clean water, shelter, food and medical care.

A solar-powered oven can address several of these emergency needs in varying ways and with great results. A solar oven needs no physical source of fuel, unlike conventional cooking instruments, which use either fossil fuel or biomass fuel. As long as the sun is shining you can cook any of your food stores, with very little exception.
Having a solar oven will add to your level of preparedness no matter the extent or degree of situations and conditions you may encounter. See more tips and ideas on using a solar oven more effectively.

The year 2011 started out in a volatile manner, with much change across the globe and much political unrest. Food and fuel costs surged, and governments around the world are taking measures to assure a stable food supply. Food costs are at record highs, and the costs of global food staples are rising. Despite the extraordinary measures to prepare for difficulties, there is no guarantee any government will be able to help all of those in need, thus the need to prepare one's own stores, emergency preparedness supplies, etc.

As shown in the above images, the Sun Oven can be carried like a suitcase and can carry food items inside its empty space, to be used later when sheltering or on the move in times of emergency.

Surviving an "Electromagnetic Pulse" (EMP): how to prepare and what to look for.

Help us to provide access to the best emergency preparedness sites on the Internet. Submit your links or references to sites that you consider the best on the subject of being prepared for times of disaster or emergency. Please submit only relevant sites and information. We reserve the right to accept or reject that which we feel does or does not meet our criteria.

Click below to see contributions from other visitors to this page...

Hurricane, Utah Community Preparedness Fair (showing our solar cookers) (Not rated yet)
Why are people not more open to emergency preparedness? Saturday, September 28th, 2013, we cooked up quite a bit of food for the City of Hurricane, Utah …

My Preparedness Blog (Not rated yet)
Emergency Preparedness, Food Storage, First Aid, Frugal Living, Budgeting, Health, Wellness, and Fun Activities for Families

www.bridensolutions.ca/blog (Not rated yet)
This blog covers the 8 areas of Emergency Preparedness in a very straightforward way that allows even those who are new to Emergency Preparedness …
<urn:uuid:260bcc1d-4831-4b66-b596-0096cfdf11c9>
CC-MAIN-2016-26
http://www.solarcooker-at-cantinawest.com/emergency_food_preparedness.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783394414.43/warc/CC-MAIN-20160624154954-00196-ip-10-164-35-72.ec2.internal.warc.gz
en
0.927601
842
2.5625
3