The folks at The Wolfram Blog sent us a link to this story about using Mathematica to design unconventionally shaped, but (hopefully) structurally sound, brick walls that robotic masons might build. The author of the piece, Chris Carlson, Wolfram’s Chief Interactive Graphics Developer, writes:
A few groups have begun to experiment with the idea of robotically laid brick construction, most notably the Swiss firm Gramazio & Kohler (Facade Gantenbein Winery, Structural Oscillations), and recently, students at the Harvard University Graduate School of Design (On the Bri(n)ck). Inspired by these efforts, I set out to investigate the possibilities of robotic brick-wall construction with Mathematica….
There are lots of possible effects to investigate: displacing bricks, rotating them, leaving gaps between them, creating ledges of various depths for shadow effects, combining bricks of various colors, and so on.
Not really sure how feasible or sound these walls would be to build in the real world, but it certainly shows you some cool possibilities for design and special F/X in brickwork.
On Aug. 28, on the 50th Anniversary of the March on Washington and Dr. Martin Luther King’s famous “Dream” oration, President Barack Obama spoke from the steps of the Lincoln Memorial, joined by former presidents William Clinton and Jimmy Carter.
Much of the press speculated on whether the president could reach the “King standard.” Could he deliver an address with the poetry and the vision that made Dr. King’s speech timeless?
But, I suggest to you, that is the wrong standard by which to measure the president. Barack Obama isn’t the leader of a March on Washington for Jobs and Freedom. He is the leader of the government. The March on Washington 50 years ago was a call by an oppressed people seeking justice. As the call to the march detailed, we marched to “help resolve an American crisis,” a crisis “born of the twin evils of racism and economic deprivation.”
The marchers carried 10 demands to the nation’s capital, calling for comprehensive civil rights legislation, including the end to segregation and the right to vote, for immediate desegregation of the schools, for a “massive federal program to train and place all unemployed workers – Negro and white – in meaningful and dignified jobs at decent wages,” for an increase of the minimum wage, and for federal action against discrimination in employment, housing and federal programs. Dr. King’s speech called on the nation’s elected leaders to act.
The president’s task is to respond to this call. That was true 50 years ago. It is equally true today. Last Saturday, tens of thousands gathered once more on the Mall, calling for action. Once more, we gathered to “help resolve an American crisis.” Once more we carried an agenda – jobs, an increase in the minimum wage, defense of the right to vote, an end to discriminatory stop and frisk and stand your ground policies, an end to discriminatory sentencing, comprehensive immigration reform. For President Obama, the question is the response – legislation, executive action, enforcement, and appropriations.
The president need not and cannot meet the King standard. He might best be measured against the Johnson standard. In response to the 1963 March, President Kennedy sought to move civil rights legislation. And when he was struck down, Lyndon Johnson took up the cause, expanded it and made things happen.
In 1965, President Johnson delivered a commencement address at Howard University titled “To Fulfill These Rights.” There he laid out his response. He paid tribute to the protests that provided “the call to action.” He reported on the progress made: passage of the 1964 civil rights legislation and, soon, passage of the Voting Rights Act, guaranteeing the right to vote. The barriers to freedom, he reported, “are tumbling down.”
But President Johnson acknowledged, “freedom is not enough. You do not take a person who, for years, has been hobbled by chains and liberate him, bring him to the starting line of a race and then say, ‘you are free to compete with all the others.’”
So Johnson argued that next and the “more profound stage of the battle for civil rights” is not just “equality as a right and a theory, but equality as a fact and equality as a result.” Johnson then detailed the structural inequalities still facing African Americans – unequal unemployment, incomes, rates of poverty, infant mortality, and more. And he laid out the strategy of his war on poverty to address this crisis. He announced his intention to call a White House Conference to address the theme of “To Fulfill These Rights.”
Johnson understood how difficult this was. He launched his war on poverty in Appalachia, choosing to “whiten” the face of poverty, to reflect the reality that more poor people were white than black. He drove hard to push legislation and appropriations and executive action. The minimum wage reached levels not seen in comparable dollars since. Infant mortality and poverty declined. Real progress was made.
But as Dr. King warned, the war on poverty was lost to the war in Vietnam that robbed resources, attention and political energy. When he was assassinated, Dr. King was planning another march on Washington – a Poor People’s Campaign, to bring the impoverished from across the races and the regions to camp in the nation’s capital and to call on our elected leaders once more to act.
So the question for President Obama isn’t whether he can match the poetry of Dr. King’s call. It is whether he can match the energy of President Johnson’s response. Will he revive the U.S. Commission on Civil Rights? Will he announce steps to guard the right to vote, now under assault from North Carolina to Texas? Will he call on Congress for appropriations to ensure every child has access to a high-quality public education? Will he move more aggressively to curb discriminatory sentences? Will he drive an increase in the minimum wage, a strengthening of our laws protecting workers and their right to organize, the move for comprehensive immigration reform? We will listen to what he says. But as president, he will be measured by the hard prose of his actions, not the poetry of his words. We will be looking for what he does, not simply what he says.
Jesse L. Jackson, Sr. is founder and president of the Chicago-based Rainbow PUSH Coalition. You can keep up with his work at www.rainbowpush.org.
Earth First article
Article written by Pete Forsyth. Published in Earth First Journal, summer 2006. Ran alongside an article about the Basel Action Network's Exporting Harm report from last fall. (Note: The #SIDEBAR did not run in the published version, and there may be a couple other changes as well.)
Oregon non-profit attacks e-waste at its source
Far from the villages of Africa, and the third-world dumping grounds described in the Basel Action Network’s reports, an Oregon-based non-profit has pioneered a novel approach to e-waste management.
At the Free Geek Community Technology Center in Portland, discarded technology (including over 35,000 PCs over five years) is processed locally, not shipped to a distant continent and an unknown fate. In the process, Free Geek’s dedicated volunteers generate fully functional refurbished computers (over 6,500 to date) to give away or sell for cheap in the local community. They put equipment many would consider “outdated” to good use, and they dismantle the rest, disposing of it safely.
In 2005, Free Geek collected 500 tons of unwanted computer equipment from local individuals and companies upgrading to newer technology. This e-waste is similar to what BAN reported is shipped to places like Lagos: piles of worthless (and potentially dangerous) junk, but also a few treasures, and a reasonable amount of junk that can be turned into treasure with a moderate amount of work.
In its five year history, Free Geek has developed and honed methods to process both treasure and junk responsibly and efficiently. The goal is to get the best possible use out of every kind of donated item. The process is under constant refinement as the incoming equipment changes, and as creative minds come up with new uses for it.
Most of the e-waste Free Geek receives has value and can be sold, either as scrap or after being refurbished into working systems. But some is hazardous, has no value for reuse, and costs money to dispose of properly.
Ideally, revenue from selling the “good stuff” (like gold, power cords, aluminum, and refurbished PCs) would cover the processing costs for the “bad stuff” (like lead, mercury, cadmium.)
But the high cost of properly disposing of one increasingly common item—the cathode ray tube monitor (CRT)—makes that ideal unattainable.
If you’ve ever picked up a CRT monitor (that’s the kind that looks like an old TV), it should come as no surprise that it contains a great deal of lead. All that heavy metal shields you from the X-rays the tube generates as its electron beam paints an image on the screen. But when you “throw out” the monitor, that lead is no longer your friend: if handled improperly, it will seep out of a landfill into the water supply, causing brain damage in children.
Free Geek is flooded with old CRT monitors as many people abandon them in favor of newer LCD-based “flat panel” monitors.
Processing these donated monitors represents one of Free Geek’s highest expenses, and so it is the one area where donors are required to pay a fee. Dropping off a monitor at Free Geek—whether it’s working or not—costs $10. Sometimes donors are put off, especially when dropping off a working monitor. But they generally come around once they learn about the dangers of lead and the challenge of disposing of it safely.
But explaining these issues to each donor is time consuming and inefficient—and if there’s one thing geeks hate, it’s inefficiency.
Free Geek founder Oso Martin advocates a different approach. Ideally, makers of computers and electronics would provide funding for responsible disposal at the end of their products’ life cycle. Sponsorship of local grassroots organizations like Free Geek would solve the e-waste problem in the communities where such efforts took hold. Unfortunately, companies have little incentive to take this approach, and have failed to take the lead in developing this kind of program.
Currently there is a big effort by local and state governments to require a disposal fee to be paid at the time of purchase - kind of like a bottle deposit. This is the approach taken in many European countries, and in the state of Maine. Free Geek is supporting legislative efforts along those lines in Oregon and nearby Washington. If passed, such legislation would direct money to Free Geek for processing monitors and other items. Individual donors would no longer have to pay a fee when dropping off monitors, so Free Geek’s staff and volunteers could do less explaining and more recycling.
If this kind of legislation sounds like an uphill battle, you’re right. But an uphill battle has never stopped a geek. Five years ago, Free Geek’s unpaid founding members struggled to come up with rent; today, Free Geek has 12 paid employees, operates in a 15,000 square foot facility, and has hundreds of volunteers who process over 40 tons of e-waste per month. New Free Geeks have followed Portland’s model in Ohio, Indiana, Chicago, and Washington, and the Portland organization is exploring an expansion into the suburb of Gresham.
Free Geek’s volunteers maintain the web site, at http://freegeek.org. If you think your community might be ready for a Free Geek of its own, check the “startups” web page at http://wiki.freegeek.org/index.php/Free_Geek_Startups, and join the mailing list at http://www.freegeek.org/about/intergalactic/startup-list-signup/ . Free Geek volunteers will answer your questions, and attempt to put you in touch with others in your community who share your interest.
The Geeks are looking forward to hearing from you. But if it takes them a day or two to respond, don’t be surprised…breaking apart and rebuilding computers can be surprisingly addictive!
Every day, Free Geek provides a great service to third world nations by providing…well, a whole lot of nothing. The less Portland garbage that ends up in places like Lagos and Uganda, the better.
But recently, Free Geek has been able to supplement that "nothing" with a little something extra.
Free Geek currently lacks the resources or the knowledge to plan and carry out delivery of working PCs overseas. But when Gulu University and the Beaverton Rotary Club approached Free Geek about donating computers to the Uganda university, they were glad to help.
Martin Chase, the coordinator of Free Geek’s Build Program, is currently in Uganda (in East Africa), hooking up over 100 PCs and other equipment that Free Geek donated, and training local system administrators to use free software.
Martin has been keeping a web log of his journey, which makes fascinating reading. Below are his thoughts about 23 days into his journey. He paints a grim picture, but it is Free Geek’s hope that building ties will open unknown opportunities down the road.
- people waste computers here.
- they don't know how to use them,
- but they want them desperately.
- they'll buy the best computers they can,
- following the advertisements call,
- but once in possession of an advanced piece of technology,
- they sit there in boxes,
- or get used for a tenth of their potential.
- the best funded organizations are the worst,
- as they try to become more like their developed idols.
- this country does not know how to deal with trash.
- true that they are loath to actually throw away one of these expensive
- boxes after they have, through disuse and time spent in a harsh
- environment, let it fall beyond repair,
- but the end game for these things is to be thrown in a pile and burned.
- that's what i see.
- i'll have better organized and developed thoughts afterwards,
- i think.
- - martin
Clearly, as much cultural work needs to be done as technological. As countries like Uganda struggle to avoid being left behind, their resources are stretched thinner and thinner.
If organizations like Free Geek can slow the flow of unsorted garbage, Ugandans will be more able to focus their attention on productive things, learning to use tech to its fullest potential. Assisting with education, where possible, is important as well.
Ideal education for 21st century doctors involves simulation center
In the 10,000 square feet of the Mayo Clinic Multidisciplinary Simulation Center, life hangs in the balance every day.
And the physicians, surgeons and medical students honing their skills at the center are grateful that it does — because the outcomes are always excellent and powerfully instructive in this newest form of experiential medical education. No lives are lost, and vital lessons are learned from computer simulations, robotic mannequins that can be programmed to respond as a patient might, and trained human actors cast as "standardized" patients who react in scripted ways. Just as aviation pilots master complexity through cockpit simulators, at Mayo Clinic's Multidisciplinary Simulation Center, health care professionals across disciplines improve performance and reduce errors through comprehensive medical care simulation training.
Case study: Anesthesia complication
Consider this real-life anesthesia story about a rare and usually fatal complication of regional anesthesia:
Mayo Clinic anesthesia residents undergo a number of simulation-based exercises on crises that may happen to patients who are undergoing regional anesthesia. The exercises are both teaching sessions and opportunities to "rehearse." During a real situation in the summer of 2007, a patient undergoing a regional anesthetic suffered a severe toxic reaction, and their condition began to deteriorate rapidly. But because the Mayo Clinic Multidisciplinary Simulation Center had prepared physicians to know what to expect and how to handle it, they were able to reverse the situation quickly and safely.
Explains Mayo Clinic anesthesiologist Laurence Torsher, M.D., "Two of the people participating in the patient's care had undergone simulation-based training approximately eight weeks earlier in which they had experienced a simulated decompensation (heart failure) very similar to what they saw occurring in front of their eyes. The resuscitation was executed seamlessly and, in under 10 minutes, the patient was stable again without any permanent sequelae (an aftereffect of disease or injury)."
In a debriefing that occurred after the event, those two residents identified specific points from their simulation session that directly contributed to the successful and rapid resuscitation of the patient. "This experience clearly demonstrates the value of rehearsal in managing medical emergencies. Through living the experience in simulation, the team knew what to do and had the confidence to act swiftly, even though the correct treatment in this case was very unorthodox," says Mayo Clinic physician and simulation committee chair Roger Harms, M.D.
Distinguished with accreditation
In 2007 the American College of Surgeons (ACS) accredited the Mayo Clinic Multidisciplinary Simulation Center as one of 11 outstanding simulation centers in the United States.
ACS accreditation is testament to the Mayo center's embodiment of the highest standards of excellence. Opened in fall 2005, in its first 12 months the center hosted 4,053 learners in classes each typically a half-day long. "That high level of programming and usage is unprecedented in any simulation center to my knowledge," explains the center's medical director, William Dunn, M.D., who also is president-elect of the international Society for Simulation in Healthcare (www.ssih.org). "We are deeply honored by this recognition because it signifies that we have achieved a benchmark of excellence in adult learning through experiential education in simulation." Adds David Farley, M.D., Mayo Clinic surgeon and vice chair for education in Mayo's Department of Surgery in Rochester, "Health care needs simulation centers of excellence such as ours to support educational vision. We are preparing teams to care for patients not only now — but in the future as well."
Caring for patients of the future
With today's pace of technological advances in health care, physicians are increasingly faced with the daunting tasks of mastering new bodies of knowledge, technologies and procedures. How will physicians and surgeons now in practice learn new best-practice techniques while maintaining daily workloads taking care of their patients? Dr. Dunn's answer is that they can attend a Mayo ACS-accredited simulation center for intensive course work, seminars and team training.
"That's exactly what we are doing now with our own physicians, surgeons and medical students, and we are having excellent results," Dr. Dunn notes. "We're committed to it because the rationale for simulation-center-based medical education is persuasive: administering medical care is a high-risk proposition, and the more experience, exposure and practice team members receive, the likelier we are to improve the care we give to real patients, at minimized risk." Says Dr. Farley, "Surgeons need centers of excellence such as ours to serve as resources that have the vision to prepare them for caring for patients of the future. For example, when new technology comes along for robotic eye surgery, there needs to be a way to prepare surgeons to take advantage of it and get it to patients as safely and quickly as possible. That's where we come in. We're planning for that day now — not waiting for 2015."
Inside Mayo's Simulation Center
The term simulation center can mean many things. Some medical centers call the closet where a mannequin is kept for demonstrating cardiopulmonary resuscitation the simulation center. The Mayo Clinic Multidisciplinary Simulation Center is a state-of-the-art complex. Four of its larger rooms can be configured as exact replicas of the surgical suites that Mayo Clinic surgeons operate in, or as emergency rooms, rooms within intensive care units, or a cardiac catheterization laboratory. Like independent stages in a theater, all rooms are multipurpose. Additional smaller rooms (up to 10) provide realism for training within spaces configurable as inpatient hospital rooms, an outpatient clinic, or other areas. The equipment and its placement in the rooms are identical to those used in real patient care at Mayo Clinic.
Control booths, observation rooms, and/or high-tech cameras attached to every room allow medical educators to teach new techniques and simulate medical emergencies requiring hands-on, real-time problem solving. Physician instructors watch the scene unfold from behind the control booth glass. The simulated situation is often recorded, and then reviewed and critiqued by the entire health care team and instructors. Participants then can repeat the situation and practice proper responses.
Simulator as educational tool for experiential learning
While the Mayo Clinic simulation center's facilities are impressive, its most distinguishing feature is the educational expertise behind it. "Our simulator is set up to be an educational tool to achieve goals that foster deep learning, such as cognitive imprinting," Dr. Dunn explains.
He relates a common learning experience doctors undergo. "If you talk to the average doctor about his or her most potent learning experience," he says, "it would go something like this: 'I saw this difficult patient and I didn't know enough about the condition, even though I remembered studying it. So I reread the literature, and because I had the experience of taking care of a patient with the condition, what I read stuck like it never stuck before.' Our simulator offers the same capacities to utilize this same learning mechanism of active learning."
Health care simulation centers are a relatively recent phenomenon, evolving over the past decade. They originated in critical care and anesthesiology. Because of medicine's traditional rigid boundaries between disciplines, simulation centers have tended to remain confined to narrow areas of expertise.
Mayo is an exception. Because Mayo Clinic has been a multidisciplinary group practice for more than a century, movement across specialties is fluid, and encouraged. As a result, the concept of a simulation center at Mayo is rooted in its ability to serve all fields of practice.
"By using simulation centers in medical education, we are at the forefront of an avalanche of change," Dr. Dunn says.
The future of simulation in medicine
Within medical training in the United States, the medical licensing exam process requires health care professionals to pass test components based in simulated settings, using actors as "standardized patients." This occurs at only one point within the many years of training required for the average physician. In the future, simulation techniques may be utilized much more within all fields of specialty training, and well beyond the medical student years, including continuing medical education, and recertification for licensure from specialty boards.
To assure that the highest quality of health care professionals are in practice, Drs. Dunn and Farley agree that future medical licensing, training programs, and board-level specialist credentialing processes should involve outstanding performance on select simulation-based exams. Says Dr. Dunn, "We need to demand uniform excellence in training and assessment of health care professionals, in fair, standardized, respectful manners — raising the bar and modeling for others. And comprehensive, accredited advanced simulations centers such as ours will play a pivotal role in achieving this."
The Marshall Islands Campaign
For the FLINTLOCK, or Marshall Islands, campaign the ships involved were those of the amphibious force with the attack, support, and garrison groups; those of the fast carrier striking groups; and a few assigned to the defense forces, a total of some 359 vessels of all types for combat work, except submarines. The principal part of the forces involved was based at Pearl. About half the amphibious-force vessels came from San Diego and were replenished in the Hawaiian Islands en route to the Marshalls. The large transports were at Lahaina Roads, Maui, and the tractor groups (landing ships (tank) and landing craft (infantry), etc.) at Nawiliwili, Kauai. For the smaller craft (submarine chasers, mine sweepers, landing craft (tank), and mine layers) a 12-hour period was allowed for taking fuel from landing ships (tank) at sea while en route. At Lahaina Roads, fuel was supplied by the fleet oilers Tallulah, Millicoma, Caliente, Chikaskia, Kaskaskia, and Neosho, some of which had sailed from San Diego with the Northern Attack Force. At Nawiliwili, fueling of small craft was done from the landing ships (tank), which had such an enormous fuel supply that it involved them in no shortage. Again while en route, the transports and others needing it were fueled between the Hawaiian Islands and the Marshalls, the transports and larger ships taking fuel from the accompanying fleet oilers and the smaller Diesel-engine craft from the landing ships (tank). Food, ammunition, and stores, with such repairs as were necessary, were attended to at the last point of departure.
The Truk Strike
The battleships, the large carrier Bunker Hill, and the smaller carrier Monterey, Cruiser Division Five, and a few smaller vessels were at Funafuti. The rest of the carrier force was based at Pearl, whence it sortied with service completed in all departments.
Pearl was 2,500 miles from Kwajalein Atoll, the main point of attack.
The resistance expected might delay capture for a longer period than was anticipated, and there was also no telling but that the Japanese main fleet might give battle. Therefore, "fill with everything," was the order; and, on top of that, replenishing of fuel en route, adequate supply of fuel, ammunition, and provisions in the area for further replenishment subsequent to D-day. These services were stated in Spruance's logistic annex. Seventeen fleet oilers were used, before and after D-day. Of that number, three task groups of three each--Caliente, Pecos, and Tallulah; Ashtabula, Lackawanna, and Saugatuck; Cimarron, Kaskaskia, and Platte--were at sea in designated areas to care for the oiling after D-day, with eight extra tankers shuttling back and forth between Funafuti and these areas. The eight were the Millicoma, Neosho, Suamico, Neshanic, Chikaskia, Neches, Tappahannock, and Sabine. In addition there was a Liberty tanker at each of the two objectives, each with 50,000 barrels of fuel; and at Tarawa one slow tanker and a supply of Diesel oil in gasoline barges. At Funafuti 300,000 barrels in commercial tankers was available for reloading fleet oilers on 26 January, 200,000 in commercial tankers at the same place on 2 February, and 300,000 on 5 February. Each of the fleet oilers carried approximately 15,000 barrels of Diesel oil and 200,000 gallons of aviation gasoline. On the basis of estimated consumption it was planned to have from two to three loaded commercial tankers available until the operation was concluded.
What the operation might produce in fuel requirements was unknown, and the amounts scheduled were at best only estimates. The fuel paragraph of the operations plan begins with the words "Conserve fuel. The success of Flintlock requires large fuel supplies. The availability of fleet oilers is limited. In establishing the speed to maintain the required advance, and in prescribing the engineering condition to be employed, the conservation of fuel as well as the military situation will be considered." Only with fuel was there real concern at this stage of the war. It was finally clear that our fuel consumption was and would continue to be in excess of all earlier ideas, and that we were not yet quite fully geared to handle it under too continuous full-power steaming. Therefore the word of caution.
The other logistics concerned food. This would be distributed from one provision supply ship, fleet-issue loaded, available at Funafuti on 10 February, and a commercial solid load of refrigerated and frozen items in the United Fruit Company's Antigua on 25 February. These two were ordered to Majuro when it was decided early in February to use that atoll for the fleet anchorage.
Ammunition was available in barges at Tarawa and Funafuti for 5-inch and smaller guns. The ammunition ships Rainier, Mauna Loa, and Lassen had the supply for all other types of guns, large and small, and generally in sufficient quantities. The Sangay carried aircraft ammunition and bombs. All these ships were scheduled to be in Tarawa 1 February, and were diverted or ordered to Majuro.
The replacement of pilots and planes for the combat carriers would be from CVE's. There were also about 45 fighter planes available in the Ellice Islands.
Emergency repair facilities in Funafuti consisted of some of Captain Scull's squadron: two destroyer tenders, two repair ships, one battle-damage-repair ship, one internal-combustion-engine repair ship, one floating drydock of 3,000 tons capacity, and one repair barge. These were 1,200 miles from Kwajalein, near which the damage was most likely to be inflicted. Pearl was 2,500 miles away, so the repair picture was not very bright. The answer was quickly found by the task-force commander himself: use Majuro for the main fleet, with Service Squadron Ten to furnish service there, and Kwajalein with Service Squadron Four at that point to service cargo vessels, escorts, and small groups operating in that area. When the time came, the orders were issued accordingly.
With the securing of Kwajalein and Majuro, Admiral Spruance took the Fast Carrier Force into the latter place on 4 February 1944, after giving Kwajalein a trial of a few days. Service Squadron Ten was ordered there with instructions to service the fleet immediately. Fortunately the squadron commander was in Majuro with the garrison group of transports he had temporarily commanded during the illness of the assigned commander, so he was able to get into immediate personal touch with Admiral Spruance and get preliminary and makeshift operations under way pending arrival of the squadron staff and the supporting vessels.
Service Squadron Ten at Majuro
The battleship Washington, damaged in a night collision with the Indiana, was used as a temporary administrative center for the squadron while the protruding metal of her bow was being removed and bulkheads shored preparatory to her return to a navy yard. A number of officers were temporarily assigned to help with communications and operations, and the servicing of the fleet started. It was pretty ragged and hectic. There were not boats enough, nor tugs enough. When boats
or tugs were available, there was often delay because of lack of knowledge of the anchorage and berths. Even when the position of a ship was given, as in such and such a berth, there was often no chart available by which the servicing craft could locate it. At night it was even worse.
All the boats of the transports present were commandeered, and these formed the beginning of Squadron Ten's fleet boat pool. This at its beginning included 50 boats--always a few were broken down--with an organization of 3 officers and 150 men. Commander Service Force at Pearl was urged to send boats by every possible vessel. This was done throughout the war by using tankers, cargo vessels, and any other craft which could carry them. Boats were available in the rear areas, but the problem was to find transportation means to get them to the squadron in sufficient numbers to make up for losses, and for the growing requirements of the constantly increasing fleet.
In this first servicing the ammunition was replenished by the ammunition carriers previously mentioned as diverted from Tarawa. There was shortage in a few items, some of which was made up by cannibalizing ships returning to Pearl for repair. The senior captain of the ammunition ships present was made temporary head of the "ammo" department, and the job was done, though not without confusion. Moving such ships about a crowded anchorage, especially in wartime, can be hazardous. Nevertheless, it was done. About the time the captain in charge got the hang of things and had some definite ideas of the berthing, his ship would sail, and the next senior captain would take over and have to start from scratch learning what had to be done, what was needed, and which came next.
The repairs made, except those by ship's company, were very meager as only the repair ship Vestal and the battle-damage-repair ship Phaon were available. They were fully occupied getting the Washington and Indiana ready to leave.
Food and fueling went better. While there was not enough fresh and frozen food available to meet the demand, and the cargo of the provision supply ship Bridge was quickly exhausted, no one went hungry. More fresh and frozen foods were due on 10 and 25 February. There was sufficient fuel in a sufficient number of oilers so it could be handled in the time available, though oiler crews got very little of their badly needed rest.
Meanwhile the Washington sailed, and the administration of Squadron Ten had to move to a temporary set-up on a tanker, with a landing ship (tank) alongside to furnish the quarters. This was for 4 days only.
Harbor communication facilities on these were poor, and this was a setback for a few days. However, most of the heaviest servicing had been accomplished, and with the arrival of Ten's flagship, the destroyer tender Prairie, on 13 February with the members of the staff from Pearl, a real start was made. Spruance was off for the first Truk strike, and Commander Service Squadron Ten had promised him that when he returned he would get logistic services with more system, order, and greater dispatch. The promise was fulfilled.
On 12 February Spruance sortied for Truk, which was a part of Operation CATCHPOLE, the capture of Eniwetok, taking with him Admiral Mitscher's entire carrier force, consisting of 6 battleships, 5 large and 4 small carriers, 5 heavy cruisers, 4 cruisers, and 28 destroyers. To fuel this force a task unit of 5 fleet oilers, the Cimarron, Kaskaskia, Guadalupe, Platte, and Sabine, escorted by 2 cruisers, 1 destroyer and 2 destroyer escorts, was sent from Majuro on 11 February. The first fueling, for the run-in, took place 14 February approximately 640 miles northeast of Truk. After this the oilers put into Kwajalein and refilled from commercial tankers there. After the raid the next fueling rendezvous was about 500 miles northeast of Truk on 19 February. Then the whole oiler group left for Majuro.
After the fueling on 19 February Admiral Mitscher with reorganized task groups made the raid and photographic reconnaissance of 21-22 February on the Marianas, topping off his destroyers from heavy ships before the run-in, at a point about 430 miles north of the previous fueling from the fleet oilers on the 19th. After the raid, retiring eastward he again fueled his destroyers from heavy ships on 24 February and proceeded to Majuro.
The only battle damage received in these raids was to the carrier Intrepid at Truk, caused by an aerial torpedo. She was able to proceed under her own power, steering by propellers only, to Kwajalein, and thence to a navy yard. Truk, as naval men knew, was the pivotal base for the Japanese mandated islands, and the enemy's principal Central Pacific base for operations as well as a key supply point and staging base for units bound to the South and Central Pacific. It was generally thought to be a Gibraltar, though Admiral F.C. Sherman, in his book "Combat Command," considered it overrated. When the news was broadcast that our task force was striking it, sinking ships and shooting down planes, not only the Navy Department and others at home were thrilled, but also Service Squadron Ten, waiting at Majuro. Thrilled and relieved was the squadron commander who alone knew where the strike was to be
and had thought of many disagreeable things which could happen during the attack.
Meanwhile the Service Squadron Ten flagship Prairie, Captain O.A. Kneeland, had reached Majuro with the staff. It consisted of only 16 officers at that time and of those the supply officer was in San Diego fitting out and loading the first six of the 3,000-ton capacity concrete barges which later proved so useful. The starting organization of Ten immediately had the duties of port director thrust upon its operations department. The survey ship Bowditch, Captain J.H. Seyfried, made a complete survey of the anchorage, producing charts with numbered berths and establishing better navigational aids. A splendid job was done very rapidly and charts were turned out by the hundred so that all ships, tugs, barges, and boats could have them.
As fast as he could, Vice Admiral Calhoun, Commander Service Force, sent forward the service craft to Squadron Ten. The floating drydock ARD-13, Lieutenant Commander W.L. Travis, the small floating drydock AFD-16, the repair ship Ajax, Captain J.L. Brown, 2 tugs, 2 yard oilers, a YP refrigerator, and six 500-ton ammunition barges were the early arrivals. Then came the destroyer tender Markab, Captain L.B. Farrel, repair ship Hector, Captain J.W. Long, and the Argonne, Captain H.A. Houser, with others following later.
The staff was called together, the work pointed out, the methods just used to replenish the fleet, with their good and bad features, and the tasks to come discussed in detail. Finally came the adoption of a motto by the squadron: "If we've got it, you can have it." This was meant to be literally true. It did not mean "if we have it to spare." More than once the squadron gave of its own in living up to its motto. Several guns were dismounted from Squadron Ten ships to be remounted as replacements of battle-damaged pieces on the combatant vessels of the striking groups. During the Marianas campaign every pair of socks in the storerooms of Ten's ships was sent to the fighting units. For some 3 weeks or more the messes of Ten, including the squadron commander's own, ate some sort of "colored putty" for butter. All the real butter had gone to the fast carrier groups. The staff was instructed that if something unheard of was requested, the answer was to be "We'll get it for you as soon as possible." With the full realization that its work was just beginning, and would grow in degree and broaden in scope to points beyond anything visualized at the moment, the staff began preparing for Spruance's return from Truk.
The cargo ship Vega arrived with a load of pontoons and fittings so
stowed that, as they were unloaded, pontoon barges could be constructed by the ship, with her special detail of Seabees trained for this purpose. Twenty barges were completed and put into service by Squadron Ten in 21 consecutive days--before the shore-based barge-construction unit had completed a single one. Most of the barges were propelled by large outboard engines. These twelve 100-ton cargo, six 50-ton cargo, and two 10-ton crane barges were all put to very hard service. Not only did they carry ammunition and stores of all kinds, but they were used as drydocks for boats, as camels (buffers) between ships, to ferry planes and liberty parties, and one even as a light-ship. The crews of these barges built cabins of dunnage lumber and pieces of tarpaulin or scraps of canvas on the sterns and practically lived in them, scrounging their meals wherever they could during those early days when everyone was overworked, underfed, and underslept, and often miles away from the regular berthing place when there was any time for a shore relaxation.
While the fleet was on the Truk strike, the staff of Squadron Ten prepared an information bulletin giving a schedule of fueling, provisioning, and ammunitioning. It gave destroyer assignments alongside tenders, anchorage berths, and special berths for ships firing antiaircraft target practice at sleeves or drones. It told where and how to make contact with any of the departments of Ten when it was necessary to deal with something not mentioned in the bulletin; and it named the recreation beaches and the forbidden islands. Thereafter on entering the anchorage, ships were met by patrol vessels and supplied with bulletins and anchorage charts, the latter continuously revised and kept up to date.
A floating fleet post office was established on LST-119 until one could be established by the Island Commander, Captain Vernon Grant. Two coastal transports were used for distribution of mail and for ferrying of personnel among the ships.
Arrivals of ARD-13, the first floating drydock to be sent into the Central Pacific drive, and the smaller AFD-16 were events of considerable importance at Majuro. The ARD had an 85-percent green crew which had never operated the dock and had never been to sea, so a period of intensified training in phraseology, station duties, and some seamanship was carried out. Eight days after her arrival the first vessel, a destroyer, was efficiently docked. AFD-16, which had lost its commanding officer by illness, was put under the same command as ARD-13 (Lieutenant Commander Travis) for operation and was located beside ARD-13. This proved fortunate, for by operating them as a team the efficiency of both docks increased. The record of ARD-13 from this time to the end of the war was splendid and illustrated one of the many phases of winning.

Small floating drydock
High-speed mine sweepers, for towing, and sea-sled targets were procured from Pearl, and target practice arrangements were made for the ships of the fleet. Planes for towing sleeves were obtained and three firing positions established for that practice.
A fleet motion-picture exchange was established on board the Prairie. While this does not sound very important compared to the serious matters of sinking ships, killing, destroying enemy installations, and the vexing problems of fuel, food, ammunition, etc., that had to be solved, it was nevertheless a vital factor in keeping up morale. The men were spending long periods aboard ship, with very infrequent mail and very limited opportunities for diversion and recreation. Though the situation did not always permit of showing movies, even an infrequent display contributed materially.
With the return of the fleet from the Truk-Marianas strikes, Squadron Ten went to work servicing it. It was far from perfection, but there was some system and a general knowledge, on the part of those both giving and receiving the services, of the when and how of it. Admiral Spruance was pleased, and while he saw the work was imperfect he realized it would improve as more experience, study, and equipment were applied. He was so well satisfied that he said he saw no reason for the Fast Carriers Force going to Pearl any more. It never again returned there during the war. Individual vessels were sent back for repairs from time to time, but the force as such remained in the advanced areas and received its servicing from Squadron Ten as it repeatedly struck and advanced, to the consternation and confounding of the enemy.
The first 3 weeks of March were spent in consolidating gains. This gave the fleet opportunity for considerable overhaul and target practice, and time to harass Squadron Ten for things wanted but not yet available. In many ways this was advantageous because it revealed shortcomings at a period when there was time to start something remedial. Several vessels were added to the squadron about this time, including old merchant-marine tankers. The Gargoyle was commissioned by the squadron commander as the Arethusa, the Osmond as the Quiros, the Standard Arrow as the Signal, and the Polonaise as the Manileno. Several others came later.
Late in February the food situation did not seem quite so good as it should have been. The squadron commander indicated this in a letter
to Commander Service Force dated 28 February 1944, giving his estimate of minimum space requirements for 10 days' provisions for 150,000 men as:
Type of provisions    Ratio of issue        Pounds    Long tons    Cubic feet
Dry                        62%            5,360,850      2,393       155,555
Chilled                    24 1/2%        2,118,400        946        84,140
Frozen                     13 1/2%        1,167,300        521        44,285
At the time, the Prairie was the only storage at Majuro. Her capacity was 67,934 cubic feet, or only about one-fourth of the total. However, some refrigerator barges, steel and concrete, had been promised. These would make up the total required, and it was mainly to hasten their arrival that the letter was sent. The figure of 150,000 men used as a basis for the estimate was exceeded in a very short time by the rapid growth of the fleet in the advanced areas. More space was, of course, necessary, and was forthcoming.
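As a sanity check, the columns of the table above hang together arithmetically (a quick sketch; the only assumption beyond the text is the standard 2,240-pound long ton):

```python
# Check the 10-day provisioning estimate for 150,000 men against itself.
LB_PER_LONG_TON = 2240  # a long ton is 2,240 pounds

# type: (pounds, long tons) as given in the table
provisions = {
    "Dry":     (5_360_850, 2_393),
    "Chilled": (2_118_400,   946),
    "Frozen":  (1_167_300,   521),
}

total_lb = sum(lb for lb, _ in provisions.values())

for name, (lb, tons) in provisions.items():
    # pounds divided by 2,240 reproduces the long-ton column
    assert round(lb / LB_PER_LONG_TON) == tons
    print(f"{name:8s} {lb / total_lb:6.1%} of total weight")  # matches the ratio column

# Overall scale of the demand on the squadron
lb_per_man_day = total_lb / (150_000 * 10)
print(f"about {lb_per_man_day:.1f} lb of provisions per man per day")
```

The weight shares come out at 62%, 24.5%, and 13.5%, confirming that the "ratio of issue" column was computed by weight.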
Late spring of 1944 saw the first of the "crockery" ships come into Majuro. They were the Trefoil and the Quartz, large concrete barges with power plants for refrigeration, lighting, and windlass, but not for motive power. They had a capacity of 3,000 tons of general naval stores, including food, clothing, canteen, tools, material (not including heavy metal), and boatswain's stores. Later barges included ordnance items, electronics parts, and Diesel-engine spares. These barges were extremely useful, since they came at a time when there was a shortage of hulls, but they were so fragile that a bump by a good-sized boat would crack a side. One was lost on a reef in a storm. A steel hull would have been salvaged.
The concrete stores barge Quartz, one of the many of this type construction.

Service Squadron Four, Funafuti to Kwajalein

On 23-24 February 1944, after the capture of Kwajalein and Majuro Atolls, tows were dispatched from Funafuti to Kwajalein using the Diesel-engine repair ship Luzon, two fleet tugs, a rescue tug, three ocean tugs of old type, two commercial tugs, and the Navy oiler Sepulga. These vessels hauled an assortment of 500-ton barges, yard oil craft, pontoon cranes, pontoon barges, and small harbor-type tugs. In the excitement and fascination of strikes and other actual combat operations the importance of such an uninspiring movement as this might easily be overlooked. These were not merely barges as such. These were some of the storehouses, yard cranes, workshops, and facilities which rendered the services that enabled the combat ships to make the strikes. The distance to Kwajalein was more than 1,200 miles and the speed of advance was slow--about 4 knots--but this vital equipment had to get through to be used against the enemy. It did.
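Those figures put the patience the tow crews needed in perspective. A back-of-the-envelope transit time (a rough sketch; it ignores fueling stops and weather):

```python
# Transit time for the Funafuti-Kwajalein tows at a 4-knot speed of advance.
distance_nm = 1200  # "more than 1,200 miles"
speed_kn = 4        # knots, i.e. nautical miles per hour

hours_under_way = distance_nm / speed_kn
days_under_way = hours_under_way / 24
print(f"{distance_nm} nm at {speed_kn} knots: {hours_under_way:.0f} hours, "
      f"roughly {days_under_way:.1f} days under tow")
```

Nearly two weeks at sea, one way, for every barge and crane that reached the forward area.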
After the tows reached Kwajalein, Squadron Four was short-lived. On 17 March 1944 it was absorbed into the new Squadron Ten. Captain Scull became Chief of Staff to Rear Admiral Hoover, Commander Forward Area, Central Pacific, and Captain S.B. Ogden in the Cascade became representative "A" of Commander Service Squadron Ten in command of the Kwajalein and Roi detachment. Squadron Four had been very much worth while. Commander Service Force Pacific stated: "Compared to the size and accomplishments of Squadron Ten and its various detachments as the war progressed to the Western Pacific, the scope of Squadron Four's operations was small, and its assigned equipment seemed limited indeed, but many capable officers received practical experience while serving therein and went on to responsible duties in Squadron Ten and other commands."
Though only a few large vessels and not many destroyers, smaller ships, and aircraft were serviced at Funafuti, that location was the scene of logistic support of naval forces from floating equipment only. No shoreside facilities such as cranes, workshops, and storehouses, generally associated with navy yards or bases, were present there. Scull relied solely upon his mobile units, and later this type of servicing was rendered to all classes of naval vessels, with more appropriate supporting equipment and in locations as yet not visualized.
The Constitution specifically indicates that Congress is the legislative body of the government and, as such, is the only body that can create laws. The Courts, however, have the power of ensuring that Congress acts within the law and that its laws are constitutional. The Courts therefore have the power and duty to correct Congressional actions when they violate the law. Congress, through the express and implied powers provided in the Constitution, has the ability to delegate specific authority for the efficient management of laws and programs. This is why it delegates power to the federal administrative agencies. Further, the leaders of these agencies have the specific knowledge necessary to write the regulations that enforce the broader laws Congress writes. If Congress did not delegate, it would need to hire aviation experts (to do the FAA's job), scientists (to do the FDA's job), and so on. This would produce a massive Congressional bureaucracy that would be less efficient to operate. In the current system each agency focuses on a single area and body of law, making its work more direct, better informed, and more effective.

The courts do not defer too much to Congress in this area because, under the Constitution, it is Congress's judgment alone what to delegate and how to delegate it. For the courts to deny Congress this power would be, as many claim, for the courts to "write law," an unconstitutional action on the courts' part.

Alternatives to these agencies are difficult to imagine. The only real alternative would be a massive number of Congressional offices, with each senator and representative responsible for a specific area of law, similar to the committees they have now, which would write the laws, write the regulations needed to enforce them, and investigate and prosecute violations. This, for example, is the way Congress handles issues of treason: directly.
This process would stop the regular work of Congress; if Congress had to do this for every federal law it writes, it would soon be unable to do anything but enforce, investigate, and prosecute violations of its own laws. The Kenneth Starr investigation, for example, took nearly a year to complete.
For Jane -
Can you help me with another problem? I notice that you have already answered it for someone else but the file is unavailable at mediafire. I am willing to pay extra for it.
The question is as follows:
Rebecca Businesswoman had been searching for a job since graduation. She interviewed with a large firm, and they made her an offer for employment starting at an annual salary of $50,000. Taking the job meant that Rebecca would have to move a considerable distance, leaving her friends and family behind. Before accepting, Rebecca asked for reassurances that the firm would keep her for at least two years, making it worth her while to move. During the phone call, the firm did tell her that she was highly qualified and they would love her in their employ for at least two years if not more! Rebecca, as diligent as she is, wrote the following letter:
Dear Big Firm:
Thank you for your time and interest in my professional pursuits. Although it will be hard leaving my friends and family and moving to a new city, I am looking forward to working with you. I have decided to accept your offer for $50,000.00 per year commencing on May 1, 2007. I understand per our conversation that my employment will at least be for a two-year period.
Although things went well for eight months at the new firm, Rebecca was terminated on January 1, 2008. Rebecca sued for wrongful termination and breach of employment contract. Big Firm has asserted the Statute of Frauds as a defense. Does Rebecca have a viable suit for breach of contract? Does the Statute of Frauds apply to any purported contract? Was there a contract created between Rebecca and Big Firm?
Prepare an interoffice memorandum to your supervising attorney that answers the following questions:
Be sure to support your analysis with cites to applicable case law and other scholarly legal authority.
I am sorry, but I cannot provide that answer on this question. | <urn:uuid:6e5c993e-ed14-4713-81ad-6bf88c034e77> | CC-MAIN-2016-26 | http://www.justanswer.com/homework/4kfxd-does-congress-delegate-power-federal-administrative.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393332.57/warc/CC-MAIN-20160624154953-00155-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.975819 | 845 | 2.765625 | 3 |
Cervical cancer is the fourth most common cancer in women worldwide, and it has the fourth highest mortality rate among cancers in women. Most cases of cervical cancer are preventable by routine screening and by treatment of precancerous lesions. As a result, most of the cervical cancer cases are diagnosed in women who live in regions with inadequate screening protocols.
Incidence and Mortality
Estimated new cases and deaths from cervical (uterine cervix) cancer in the United States in 2014:
New cases: 12,360.
The uterine cervix is contiguous with the uterine body, and it acts as the opening to the body of the uterus. The uterine cervix is a cylindrical, fibrous organ that is an average of 3 to 4 cm in length. The portio of the cervix is the part of the cervix that is visible on vaginal inspection. The opening of the cervix is termed the external os. The os is the beginning of the endocervical canal, which forms the inner aspect of the cervix. At the upper aspect of the endocervical canal is the internal os, a narrowing of the endocervical canal. The narrowing marks the transition from the cervix to the uterine body. The endocervical canal beyond the internal os is termed the endometrial canal.
The cervix is lined by two types of epithelial cells: squamous cells at the outer aspect, and columnar, glandular cells along the inner canal. The transition between squamous cells and columnar cells is an area termed the squamo-columnar junction. Most precancerous and cancerous changes arise in this zone.
Cervical carcinoma has its origins at the squamo-columnar junction; it can involve the outer squamous cells, the inner glandular cells, or both. The precursor lesion is dysplasia: cervical intraepithelial neoplasia (CIN) or adenocarcinoma in situ, which can subsequently become invasive cancer. This process can be quite slow. Longitudinal studies have shown that in patients with untreated in situ cervical cancer, 30% to 70% will develop invasive carcinoma over a period of 10 to 12 years. However, in about 10% of patients, lesions can progress from in situ to invasive in a period of less than 1 year. As it becomes invasive, the tumor breaks through the basement membrane and invades the cervical stroma. Extension of the tumor in the cervix may ultimately manifest as ulceration, exophytic tumor, or extensive infiltration of underlying tissue, including the bladder or rectum.
What is the present participle?
The present participle is the participle that ends in -ing. Used with the auxiliary verb 'to be', it forms the continuous tenses. It always takes the '-ing' form of the verb; even irregular verbs have an '-ing' form. In fact, most English words that end in '-ing' are present participles or gerunds.
I am learning English. (Learning is part of the continuous verb phrase 'am learning')
We were running through the woods. (Running is part of the continuous verb phrase 'were running' ).
It can also be used as an adjective.
As an adjective: I am a working woman. (Working is used here as an adjective.)
The '-ing' form can also be used as a noun denoting the action of a verb; this is called a gerund. But remember: the present participle is used as a verb or an adjective, whilst the gerund is used as a noun.
The self-guided professional development by the science faculty at Concord High School in New Hampshire has never been formally evaluated. But Thomas Crumrine’s students have benefited from techniques he’s learned during his 7 a.m. meeting with colleagues every other Friday for the past five years. While using interactive clickers during a unit on the conservation of matter, Crumrine found that 86 percent of his students incorrectly thought that the mass of a pile of iron nails in an open container would remain the same as the nails rusted, failing to take into account the additional oxygen. “In the past, if that question had been asked on a test, I would have been saddened but probably would have moved on to the next unit,” he says. Instead, he stopped the lesson, inserted a discussion about rusting and oxidation, and then continued.
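The chemistry behind the clicker question is a simple mass balance: rust is iron combined with oxygen drawn from the air, so an open pile of rusting nails must gain weight. A quick sketch of the arithmetic (atomic masses are standard values; the reaction 4 Fe + 3 O2 -> 2 Fe2O3 is one common idealization of rusting):

```python
# Mass balance for rusting: 4 Fe + 3 O2 -> 2 Fe2O3
FE = 55.845  # atomic mass of iron, g/mol
O = 15.999   # atomic mass of oxygen, g/mol

mass_fe = 4 * FE               # the iron nails in the open container
mass_o2 = 3 * 2 * O            # oxygen pulled in from the air
mass_rust = 2 * (2 * FE + 3 * O)

# Conservation of matter: the rust weighs as much as the iron and oxygen combined...
assert abs(mass_rust - (mass_fe + mass_o2)) < 1e-9
# ...which means the pile gets heavier; it does not "stay the same".
print(f"{mass_fe:.1f} g Fe + {mass_o2:.1f} g O2 -> {mass_rust:.1f} g Fe2O3")
print(f"mass gain relative to the iron alone: {mass_rust / mass_fe - 1:.0%}")
```

Fully rusted iron weighs roughly 43 percent more than the bare metal, which is exactly the point the 86 percent of students missed.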
School officials in Richardson, Texas, wanted a math program that could lift up low-performing middle schools and close a yawning achievement gap across racial and socioeconomic lines when they asked for help from the city’s largest employer, Texas Instruments (TI), in 2004. After considering several models, TI developed its own program. Tapping national experts in math education, the company provided professional development for teachers. They also supplemented the existing curriculum with lessons that incorporated technology—much like the interactive clicker system that Wieman and others use with undergraduates—and trained teachers to use it. For its part, the district doubled the amount of time spent on math and gave teachers shared planning time to prepare additional lessons.
The new program, called Math Forward, draws upon the work of Deborah Ball, dean of the School of Education at the University of Michigan, who believes that effective math teachers have an understanding of their subject that goes beyond what they have learned in course work and what they are required to teach in the classroom. This mathematical knowledge for teaching, as she calls it, allows them to resolve, for example, student misconceptions that aren’t addressed by the textbook. But training teachers in the concept isn’t enough, says Ball: “Interventions have to affect what happens in the classroom. Otherwise, they don’t do any good.”
Richardson officials say they have such tangible results. A program at one Richardson middle school in 2005 and 2006 helped one-third of the students who had failed the state math assessment the previous year pass the test the next spring. Last year the program was expanded to five middle schools and an algebra 1 component was added, and this fall its monitors will follow the original cohort into high school. Meanwhile, TI plans to go national. “We’ll offer it to any school district willing to make the necessary commitment to implement it with integrity,” says TI’s Lisa Brady Gill.
Scaling up a successful classroom intervention is tricky. Just ask Sharon Lynch, a professor of education at George Washington University, who’s been studying the use of three middle-school science units by the Montgomery County Public Schools in suburban Maryland. The federally funded project began in 5 schools and hopes to reach 35 of the 38 middle schools in the district.
Lynch found that only two of the three units actually “worked” in the sense of producing modest but statistically significant gains in student understanding. The third unit has since been dropped, and Lynch is unsure whether the remaining two lessons will be implemented consistently and whether the district can support the units properly after the grant ends.
The $5.2 million cost of Lynch’s federally funded project, which included extensive use of classroom observers, may push it beyond the reach of most efforts to monitor school reform. And the decentralized nature of U.S. education pretty much ensures that interventions will remain local, not national. “Frankly,” she says, “I think the idea of scaling up anything in the United States is a ludicrous notion.”
Bev Marcum, a biology professor at California State University in Chico, is more optimistic about prospects for improvement. Marcum directs the Hands-On Science Lab, a campus facility for elementary school children that features experimental stations staffed by undergraduates. The lab is a tool to train future teachers, a site of professional development for teachers, and a fun place to learn science.
In fact, the teachers at one school in this hardscrabble farm community have revised their entire science curriculum to make use of the concept. Last year Citrus Avenue Elementary School began offering Science Fridays, during which the school’s fourth, fifth, and sixth graders spend 90 minutes rotating among a half-dozen stations, just as they would at the university lab. “Our biggest problem is finding time to do lab-based science,” says Richard Aguilera, a former principal who four years ago decided to return to the classroom, “and our large ethnic population [the Hmong of Southeast Asia] poses a special challenge. So the hands-on lab approach is just great.”
What is the effect on student learning? The only research on the lab has shown that it improves teacher confidence and increases their knowledge. A rigorous study documenting the lab’s impact on student achievement awaits another day. “I don’t have enough resources to do [anything] credible,” Marcum admits. “And without it, I don’t want to make any elaborate claims.”
Coming up with that evidence is the challenge facing Marcum, Wieman, and other reformers. They agree it’s the only way to achieve the system of science education that the nation needs. | <urn:uuid:6c7beb75-8132-490d-84fd-05d687d727ec> | CC-MAIN-2016-26 | http://discovermagazine.com/2007/oct/making2029the2028grade | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397567.28/warc/CC-MAIN-20160624154957-00197-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.962675 | 1,157 | 3.140625 | 3 |
How the Internet Is Changing
What We (Think We) Know
A speech for "the locals"--Upper Arlington Public Library, January 23, 2008. This is a more general discussion; the Citizendium is not
Information is easy, knowledge is difficult
There is a mind-boggling amount of information online. And this is a wonderful thing. I’m serious about that. A good search engine is like an oracle: you can ask it any question you like and be sure to get an answer. The answer might be exactly what you’re looking for, or it might be, well, oracular—difficult to interpret and possibly incorrect. I draw the usual distinction between knowledge and information. You can find information online very easily. Knowledge is another matter altogether.
Now, this is not something new about the Internet. It’s a basic feature of human life that while information is easy, knowledge is difficult. There has never been a shortage of mere data and opinion in human life. It’s a very old observation that the most ignorant people are usually full of opinions, while many of the most knowledgeable people are full of doubt. Other people are certainly sources of knowledge, but they are also sources of half-truths, confusion, misinformation, and lies. If we simply want information from others, it is easy to get; if we want knowledge in any strong sense of the word, it is very difficult. Besides that, long before the Internet, there was far more to read, far more television shows and movies to watch, than anyone could ever absorb in many lifetimes. Before the Internet, we were already awash in information. Wading through all that information in search of some hard knowledge was very difficult indeed.
The Internet is making this old and difficult problem even worse. If we had an abundance of information in, say, the 1970s, the Internet has created a superabundance of information today. Out of curiosity, I looked up some numbers. According to one estimate, there are now over 1.2 billion people online; Netcraft estimated that there are over 100 million websites, and about half of those are active. And those estimates come from over a year ago.
With that many people, and that many active websites, clearly there is, as I say, a superabundance of information. Nielsen ratings of Internet search showed that there were some six billion searches performed in December, 2007, in one month—that’s about 72 billion in a year! Google, by the way, was responsible for two thirds of those searches. Now, you might have heard these numbers before; I don’t mean to be telling you news. But I want to worry out loud about a consequence of this situation.
My worry is that the superabundance of information is devaluing knowledge. The more that information piles up on Internet servers around the world, and the easier it is for that information to be found, the less distinctive and attractive that knowledge will appear by comparison. I fear that the Internet has already greatly weakened our sense of what is distinctive about knowledge, and why it is worth seeking. I know this might seem rather abstract, and not something worth getting worked up about. Why, really, should you care?
It used to be that in order to learn some specific fact, like the population of France, you had to crack open a big thick paper encyclopedia or other reference book. One of the great things about the Internet is that that sort of searching—for very specific, commonly-sought-after facts—has become dead simple. Even more, there are many facts one can now find online that, in the past, would have taken a trip to the local library to find. The point is that the superabundance of information has actually made it remarkably easy to get information. Today, it’s easy not just to get some information about something or other, it’s easy to get boatloads of information about very specific questions and topics we’re interested in.
For all that, knowledge is, I’m afraid, not getting much easier. To be quite sure of an answer still requires comparing multiple sources, critical thinking, sometimes a knowledge of statistics and mathematics, and a careful attention to detail when it comes to understanding texts. In short, knowledge still requires hard thought. Sure, technology is a great time-saver in various ways; it has certainly made research easier, and it will become only more so. But the actual mental work that results in knowledge of a topic cannot be made much easier, simply because no one else can do your thinking for you. So while information becomes nearly instantaneous and dead simple, knowledge is looking like a doddering old uncle.
What do I mean by that? Well, you can find tons of opinions online, ready-made, but there is an interesting feature of a lot of the information and opinion you find online: not only is it easy to find, it is easy to digest. Just think of the different types of pages that a typical Web search turns up: news articles, which summarize events for the average person; blogs, which are usually very brief; Web forums, which only rarely go into depth; and encyclopedia articles and other mere summaries of topics. Of course, there are also very good websites, as well as the “Deep Web,” which contains things like books and journal articles and white papers; but most people do not use those other resources. The point is that most of the stuff that you typically find on the Internet is pretty lightweight. It’s Info Lite.
“Right,” you say, “what’s wrong with that? Great taste, less filling!” Sure, I like easy, entertaining information as much as the next guy. But what’s wrong with it is that it makes the hard work of knowledge much less appealing by comparison. For example, if you are coming to grips with what we should do about global warming, or illegal immigration, or some other very complex issue, you must escape the allure of all the dramatic and entertaining news articles and blog posts on these subjects. Instead, you must be motivated to wade through a lot of far drier material. The sources that are more likely to help you in your quest for knowledge look very boring by comparison. My point here is that the superabundance of information devalues knowledge, because the means of solid knowledge are decidedly more difficult and less sexy than the Info Lite that it is so easy to find online.
There is another way that the superabundance of information makes knowledge more difficult. It is that, for all the terabytes upon terabytes of information on the Internet, society does not employ many more (and possibly fewer) editors than it had before the advent of the Internet. When you go to post something on a blog or a Web forum, there isn’t someone called an editor who decides to “publish” your comment. The Internet is less a publishing operation than a giant conversation. But most of us still take in most of what we read fairly passively. Now, there’s no doubt that what has been called the “read-write Web” encourages active engagement with others online, and helps us overcome our passivity. This is one of the decidedly positive things about the Internet, I think: it gets people to understand that they can actively engage with what they read. We understand now more than ever that we can and should read critically. The problem, however, is that, without the services of editors, we need our critical faculties to be engaged and very fine-tuned. So, while the Internet conversation has instilled in us a tendency to read critically, still, without the services of editors, there is far more garbage out there than our critical faculties can handle. We do end up absorbing a lot of nonsense passively: we can’t help it.
In short, we are reading reams of content written by amateurs, without the benefit of editors, which means we must as it were be our own editors. But many of us, I’m afraid, do not seem to be prepared for the job. In my own long experience interacting with Internet users, I find heaps of skepticism and little respect for what others write, regardless of whether it is edited or not. Now, skepticism is all well and good. But at the same time, I find hardly anything in the way of real critical thinking. The very opinionated people I encounter online rarely demonstrate that they have thought things through as they should, given their strength of convictions. I have even encountered college professors who cite easy-to-find news articles in the commission of the most elementary of logical fallacies. So it isn’t necessarily just a lack of education that accounts for the problem I’m describing. Having “information at our fingertips,” clearly, sometimes makes us skip the hard thinking that knowledge requires. Even those of us who ought to know better are too often content to be impressed by the sheer quantity and instant availability of information, and let it substitute for their own difficult thought.
The nature and value of knowledge
Easy information devalues hard knowledge, I say. But so far I have merely been appealing to your understanding of the nature and value of knowledge. Someone might ask me: well, what do you mean by knowledge, anyway, that it is so different from mere information? And why does it matter?
Philosophers since Plato have been saying that knowledge is actually a special kind of belief. It must be true, first of all, and it must also be justified, or have good reasons or evidence to support it. For example, let’s suppose I read something for the first time on some random blog, such as that Heath Ledger died. Suppose I just uncritically believe this. Well, even if it’s true, I don’t know that it is true, because random blogs make up stuff all the time. A blog saying something really isn’t a good enough reason to believe it. But if I then read the news in a few other, more credible sources, then my belief becomes much better justified, and then I can be said to know.
Now, I don’t want to go into a lot of unnecessary details and qualifications, which I could, at this point. So let me get right to my point. I say knowledge is, roughly speaking, justified, true belief. Well then, I want to add that knowledge is difficult not because getting truth is difficult, but because justifying our beliefs is. In other words, it’s really easy to get truth. Google is a veritable oracle of truth. The problem is recognizing truth, and distinguishing it from falsehood. The ocean of information online contains a huge amount of truth. The difficulty comes in knowing when you’ve got it.
Well, that’s what justification is for. We use reasons, or evidence, to determine that, indeed, if we accept a piece of information, we will have knowledge, not error. But producing a good justification for our beliefs is extremely difficult. It requires, as I said before, good sources, critical thinking, sometimes a knowledge of statistics and mathematics, and a careful attention to detail when it comes to understanding texts. This all takes time and energy, and while others can help, it is something that one must do for oneself.
Here you might wonder: if justification, and therefore knowledge, is really so difficult, then why go to all the trouble? Besides, justification is not an all-or-nothing matter. How much evidence is needed before we can be said to know something? After all, if a blogger says that Heath Ledger is dead, that is at least some weak evidence that Heath Ledger is in fact dead. Do I really need stronger evidence? Why?
These are very difficult questions. The best brief answer is, “It depends.” Sometimes, if someone is just telling an entertaining story, it doesn’t matter at all whether it’s true or not. So it doesn’t matter that you know the details of the story; if the story entertains, it has done its job. I am sure that celebrity trivia is similar: it doesn’t matter whether the latest gossip in the Weekly World News about Britney Spears is true, it’s just entertaining to read. But there are many other subjects that matter a lot more. Here are two: global warming and immigration reform. Well, I certainly can’t presume to tell you how much evidence you need for your positions on these issues, before you can claim to have knowledge. Being a skeptic, I would actually say that we can’t have knowledge about such complex issues, or at least, not very certain knowledge. But I would say that it is still important to get as much knowledge as possible about these issues. Why? Quite simply because a lot is riding on our getting the correct answers, and the more that we study issues, and justify our beliefs, the more likely our beliefs are to be correct.
To passively absorb information from the Internet, without caring about whether we have good reasons for what we believe, is really to roll the dice.
Like all gambling, this is pleasant and self-indulgent. But if the luck doesn’t go your way, it can come back to bite you.
Knowledge matters, and as wonderful a tool for knowledge as the Internet can be, it can also devalue knowledge. It does so, I’ve said, by making passive absorption of information seem more pleasant than the hard work of justifying beliefs, and also by presenting us with so much unedited, low-quality information that we cannot absorb it as carefully as we would like. But there is another way that the Internet devalues knowledge: by encouraging anonymity. So here’s a bit about that.
Knowledge and anonymity
We get much of our knowledge from other people. Of course, we pick some things up directly from conversation, or speeches like this one. We also read books, newspapers, and magazines; we watch informational television programs; and we watch films. In short, we get knowledge either directly from other people, or indirectly, through various media.
Now, the Internet is a different sort of knowledge source. The Internet is very different, importantly different, from both face-to-face conversation and from the traditional media. Let’s talk about that.
The Internet has been called, again, a giant conversation. But it’s a very unusual conversation, if so. For one thing, it’s not a face-to-face conversation. We virtually never have the sort of “video telephone” conversations that the old science fiction stories described. In fact, on many online knowledge websites, we often have no names, pictures, or any information at all, about the people that we converse or work with online. Like the dog in the famous New Yorker cartoon said, “On the Internet, nobody knows you’re a dog.”
In the three-dimensional online virtual world, Second Life, there is an elaborate system in which you can choose the precise physical characteristics for the person you are online—your “avatar.” Not surprisingly, in Second Life, there are a lot more beautiful and striking-looking people than there are in “First Life”—real life. This practice of make-believe is very self-conscious, and many academic papers have been written about how “identity” is “constructed” online in general.
When I went to make an avatar for myself for Second Life a few years ago, I was pretty uncomfortable representing myself as anything other than what I am. So I actually made an avatar that looks like me. (I didn’t really get it right.) I’ve always been personally uncomfortable representing myself online in any other way than how I really am. But I realize that I am unusual in this regard. Obviously, privacy matters.
Now, think of this. People who care very much about getting their facts right generally consult authoritative sources; they don’t usually get their knowledge from casual conversation with friends and relatives. But at least, when we do get knowledge from a friend or relative, we have some idea of how reliable they are. Maybe you have an eccentric acquaintance, for instance, who is a conspiracy theorist, and he doesn’t spend a lot of time considering the merits of his sources, or the plausibility of their claims. Let’s say you also know that he barely got through high school and basically doesn’t care what the mainstream media or college professors say. Your acquaintance may have many fascinating factoids and interesting stories, but probably, you aren’t going to take what he says very seriously.
But imagine if you were chatting online about politics or UFOs, or other weird stuff, with someone you didn’t know was actually your acquaintance. You might actually take him more seriously in that case. You might take his bizarre claims somewhat more seriously. I don’t mean that you would simply believe them—of course you wouldn’t—but you would not have any specific reasons to discount them, as you would if you knew you were talking to your acquaintance. Your only positive reason to discount the claims would be: I don’t know this person, this person is anonymous. But you know that there can be brilliant and reliable people anonymous online, as well as thoroughly unreliable people.
Well, I think many of us would actually trust an anonymous person more than we would trust our more eccentric acquaintances. Now don’t get me wrong, I don’t mean to accuse anyone of being a dupe. Of course, we are able to spot really daft stuff no matter who it comes from. But without knowing who a person is, we are operating without a basic bit of information that we are used to having, in evaluating what people tell us face-to-face. If we lack any information at all about how reliable a source is, we will not simply conclude that the source is completely unreliable; we will often give the person the benefit of the doubt. And that is sometimes more respect than we would give the person if we knew a few basic facts about him or her.
More generally, there is a common attitude online that it is not supposed to matter, in fact, who you are. We are all perfectly equal in many online communities, except for what we say or do in those communities. Who we are offline is not supposed to matter. But it does matter, when it comes to evaluating what people say about offline topics, like science and politics. The more time we spend in the Internet’s egalitarian communities, the more contempt we might ultimately have for information about a person’s real-world credibility. The very notion of personal credibility, or reliability, is ultimately under attack, I think. On a certain utopian view, no one should be held up as an expert, and no one should be dismissed as a crackpot. All views, from all people, about all subjects, should be considered with equal respect.
Danger, Will Robinson! Personal credibility is a universal notion; it can be found in all societies and throughout recorded history. There is a good reason that it is universal, as well: knowledge of a person’s credibility, or lack thereof, is a great time-saver. If you know that someone knows a lot about a subject, then that person is, in fact, more likely to be correct than some random person. Now, the expert’s opinion cannot take the place of thought on your part; usually, you probably should not simply adopt the expert’s opinion. It is rarely that simple. But that doesn’t mean the information about personal credibility is irrelevant or useless.
Two ideas for a solution
So far, I have mainly been criticizing the Internet, which you might find it odd for me to do. After all, I work online.
I don’t think that the Internet is an unmitigated bad influence. I won’t bore you by listing all the great things there are about the Internet, like being able to get detailed information about every episode of Star Trek, without leaving home, at 3 AM. Besides, I have only focused on a small number of problems, and I don’t think they are necessarily Earth-shatteringly huge problems, either. But they are problems, and I think we can do a little bit to help solve them, or at least mitigate them.
First, we can make a role for experts in Internet communities. Of course, make the role so that it does not conflict with what makes the community work. Don’t simply put all the reins of authority in the hands of your experts; doing that would ensure that the project remains a project by and for experts, and of relatively little broader impact. But give them the authority to approve content, for example, or to post reviews, or other modest but useful tasks.
My hope is that, when the general public works under the “bottom up” guidance of experts, this will have some good effects. I think the content such a community might produce would be more reliable than the run of the mill on the Internet. I would also hope that the content itself will be more conducive to seeking knowledge instead of mere information, simply by modelling good reasoning and research.
I do worry, though, that if expert-reviewed information online were to become the norm, then people might be more likely to turn off their critical faculties.
Second, we can create new communities, in which real names and identities are expected, and we can reward people in old communities for using their real names and identities. This is something that Amazon.com has done, for example, with its “real name” feature on product reviews. If contributors are identified, we could use the same sort of methods to evaluate what they say online, that we would use if we were to run into them on the street.
I began by laying out a general problem: superabundance of information online is devaluing knowledge. I don’t know if we can really solve this problem, but the two suggestions I just made might go a little way to making it a little better. If we include a modest role for experts in more of our Internet communities, we’ll have better information to begin with, and better role models. Moreover, if we identify the sources of our information, we will be in a better position to evaluate it. | <urn:uuid:1f22c765-09a0-48f2-a391-c5f887805eff> | CC-MAIN-2016-26 | http://www.larrysanger.org/hownetchangesknowledge.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395679.92/warc/CC-MAIN-20160624154955-00116-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.963126 | 4,708 | 2.8125 | 3 |
Friday, August 3, 2012
A persistent problem with some of my law students is that they do not read carefully. It troubles me that this problem seems to cut across class years and class ranks and appears to be getting more widespread.
So much of our lives as attorneys revolves around tasks that call for precision. If our students do not learn to be more precise during law school, how are they going to excel in their work?
I am not talking about common first-year mistakes in understanding cases. I am talking about students who simply never learned to read with care. Here are some examples:
- Students regularly ask professors questions that were covered in detail in class syllabi.
- Students do not follow instructions on an assignment or exam even when clearly provided.
- Students are asked to read a document carefully, but come to class with only a gist of that document.
- Students do not read a complete e-mail or the attachments provided - even when they know that deadlines and task completions are required.
- Students fail to read law school announcements, Orientation packets, registration instructions, Student Handbooks, and other items that they are told are important.
When I have talked to colleagues about this problem, the following thoughts have been shared:
- The Internet, e-mail, and text messages have turned students into grazers who never read for depth.
- The parents of this generation of students kept track of everything for them so they are unaccustomed to being responsible for reading carefully and retaining information.
- Students these days do not know how to read because they do not read books in their leisure time; they watch video clips on YouTube, watch TV, play video games, but rarely sit down to read books that are not assigned.
- Undergraduate professors told them exactly what they needed for the exams so they did not have to read carefully - in some cases, they did not read at all for most classes.
- They think they can look everything up on the Internet later, so why worry about boring text.
- They got As and Bs without having to work very hard because of grade inflation in lower education, so they do not know that precision might be important for graduate-level academics (and life).
So, what can we do to get our students ready for the careful reading, thinking, and writing that they will have to accomplish successfully in law practice? Below are a few things that I have become more conscious about doing with my students. I am sure that my colleagues can provide other thoughts and techniques.
- Discuss professionalism in one's work as a law student and how that becomes professionalism in one's work product as an attorney.
- Go over a case or fact pattern in great detail so they begin to see the information that they missed with only a cursory reading.
- Parse a complex statute so that they see why every word and punctuation mark matters.
- Ask questions that go to the legal nuances of the material they have read so that they begin to see the depth of understanding needed.
- Resist telling them the answer. At times I have to bite my tongue and reply along the lines of "turn to page 3 in your syllabus and read point 8 on the format for your presentation" or "read the facts paragraph again and tell me what the court said about the defendant's acts."
- Encourage them to review exams with C+ or lower grades with their professors to see how they could have improved the grade (more careful reading of the fact pattern, more care in the organization, more precise rule statements, more depth in analysis).
- Give examples of where a lack of careful reading or precision would cause problems for an attorney in practice - real-life examples are best.
- Allow them to experience consequences for missed assignment deadlines, incorrect format, lack of proof-reading, or misread instructions. Consequences learned in the law school environment will usually be less dire than consequences learned later in practice. (Of course, there are times when a student's circumstances warrant an exception to this suggestion.)
Part of being a professional is being conscious of one's responsibility for a high quality work product. By mastering care in their everyday reading and class work, our students will learn to turn out work products that are professional. (Amy Jarmon)
Thursday, August 2, 2012
LESSON 2: YOU DON’T HAVE TO BE PERFECT.
Optimal yard care involves proper performance of several critical duties: (1) cutting the lawn, (2) edging at the borders of the lawn to create clean lines, (3) pulling weeds from the garden, (4) pruning plants, bushes, and trees, (5) watering plants, and (6) other related tasks like fertilizing, overseeding, aerating, raking, stump removal, and replanting. (Just writing that last sentence stressed me out! I’m going to take a break. Be back shortly.) . . . . . So, when I am sitting in my house on the weekend, I know (really, I feel) that “mowing the lawn” is waaaay more than just mowing the lawn. I find it really difficult to commit to what could be several hours of lawn care.
It’s not that I don’t have time to mow the lawn, but I don’t have time to do all of that other stuff. I start thinking about the fact that the last time I was edging, the weed-whacker ran out of line, so I’m going to have to replace that. I make a mental list of all of the things that I’m going to have to do in addition to the mowing. No way I can accomplish all of that. It’s too much. I resolve to do nothing for now. Save it until later, when I can do it all.
Eventually, I say to myself, “The lawn isn’t going to mow itself.” If I let the grass continue to grow, I’m going to need a pith helmet and a machete instead of a lawn mower. The homeowners association is going to send me a notice. My wife will start talking to me about the dangers of deer ticks. So, I get up and mow the lawn. It doesn’t take forever. And, if I really don’t have time to do all of the other things, I realize that they can wait. But I did what I really needed to do. Maybe the lines aren’t as perfect as they need to be, but I have prevented my lawn from being a nightmare. Even though I did not trim, edge, prune, plant, weed, cultivate, water, fertilize, or any of the other things I could have done, I finished the important part. The rest can wait, and my lawn looks good. Frankly, the other things I need to do seem far less daunting, now that the big part is finished.
What this taught me about law school:
Perfectionists come in two extremes. The first type works her fingers to the bone, getting every aspect of a task right, laboring until that moment that the task is thoroughly complete, shiny, and, yes, perfect. Type-1 perfectionists sometimes can lose sleep, go hungry, or otherwise sacrifice self-care in an effort to achieve perfection in their work. The second type of perfectionist looks at all that needs to be done to achieve perfection and starts to feel a little overwhelmed. This feeling of being overwhelmed may paralyze the Type-2 perfectionist, who may choose to do nothing, fearing that any attempt to do something just won’t be good enough. I don’t think either type of extreme perfectionist is healthy, but I’m going to focus on Type-2 for now.
In law school, it is easy to feel overwhelmed. 4 courses. 2 classes per week per course. 50 pages of reading per class. 1 legal writing paper. 5 job applications due. 2 student group meetings. 5 gym workouts. Saturday night party. If you are a Type-2 perfectionist, all of this work makes you want to crawl into a hole. Rather than attacking this invincible mountain of work, you do nothing.
A better way to approach this might be to do what you can now. You can accomplish a lot in the time you have, even if there is not enough time to make it perfect. Do the reading, even if you don’t think there’s enough time to finish a brief. Get the first draft of your paper done. Do what you can, and don't try to do everything at once. Good may not be perfect, but it's a darn sight better than nothing at all.
(Writer’s Note: I am going to resist the perfectionist urge in me to hold onto this blog post any longer and edit it for a few more weeks. It may not be perfect, but I hope it is good enough to make a good point!)
Monday, July 30, 2012
My feet are wet. In fact, my jeans are wet all the way up above my knees. I have been standing in the surf of the Atlantic Ocean watching lightning off in the distance.
I called my wife while I stood there. She isn't here, but she should've been. I had to come to a conference to speak, and we thought we should not spend the money it would take for her to come down with me.
You see, we just spent a couple of weeks in the Colorado Rockies on vacation. We figured that we should be a little more careful with our money after that trip, so we thought it better that she not join me this time, given how expensive flights to Florida from Kansas are.
It sounded wise and responsible at the time. She was originally going to come with me because our 35th anniversary takes place while I am in Florida. We had thought it would be romantic to spend it together on the beach, even if I had to take some time out to attend sessions and present a talk.
But money considerations won out, and she stayed home. We decided to celebrate our anniversary when I return.
Sometimes wisdom is not all that wise. Looking out over the ocean as it crashed against my feet, I realized that my wife should have been standing next to me, whether we could afford it or not. I called her from the surf and asked her to get on a plane tomorrow and fly down here, whether we could afford it or not.
Flights and other arrangements may not work out on such short notice. I wish I had gotten my feet wet three weeks ago and arranged for her to come with me.
I don't tell you this story to say that you should waste money. You know the saying by now, no doubt, "Live like a lawyer while you are in law school, and you will live like a law student when you get out."
On the other hand, when you look back at your life, you will realize that some things just mattered more than good money management. Or maybe, good money management includes making stupid decisions for wise reasons sometimes.
I don't really know. But after 35 years of raising kids, dealing with life, and falling asleep in each other's arms, we should not have worried about the cost of a plane ticket on the eve of our anniversary.
Sometimes, you ought to get your feet wet when the opportunity arises, rather than stay dry and miss something important. (Dan Weddle)
Sunday, July 29, 2012
I am surprised every summer when August 1st comes around. The summer looks so long and full of possibilities right after graduation. However, it always ends too quickly for everything I would like to accomplish in my grandest dreams.
There are some things, however, that I try to complete each summer to prepare for the next semester as well as recharge my batteries.
Here are some of the things that I find help me most to "get my house in order" and approach the upcoming academic year with enthusiasm:
- I critique the handouts and PowerPoint slides that I use for student workshops to see what changes need to be made. Often during the academic year, I have thought of new examples to use, new ways of explaining information, or gained insights from my students. By revamping my materials regularly, I am able to offer better information and get excited about the new techniques that I can pass on to students in the coming months.
- I revamp my four-week course for our Summer Entry Program. It is easy to get lulled into doing things exactly the same each year because the program works so well with our current format. However, by challenging myself to find better ways of teaching the material and by incorporating suggestions from last summer, I keep myself and the material fresh. The changes may be small tweaks in many places and major rethinks in a few spots, but they all focus on giving 100% to the students.
- I review publishers' catalogs and order library books for our study aids/academic success library to get the newest editions or series within my budget allotment. It is always exciting to see what new volumes my ASP colleagues have published!
- I critique administrative tasks to find ways to be more efficient and effective. For tasks where I interface with other offices, I brainstorm better ways that we can communicate. For my own tasks, I review my calendar for the last year to make notes about when I should schedule certain tasks during the coming year and changes that I need to make.
- I sort through my e-mail archives and delete e-mails that are not needed any longer. If I have time, I also sort through my Word files to delete outdated or unwanted items that have been overlooked.
- I catch up on some professional reading. During the summer, I try to read at least one book related to legal education, academic success, or education theory. I also work my way through a stack of articles that I have collected throughout the year but never had time to read.
- I pull out my folder of thank you notes and e-mails from students and read through them. This task allows me to remember why I do what I do and encourages me to continue to impact student lives for the better. It reminds me to focus on being a blessing to my students in small as well as large ways.
By the time Orientation begins, I am ready for a new crop of 1Ls and our returning students. My housekeeping for ASP is done, and I am ready to start the cycle all over again. (Amy Jarmon) | <urn:uuid:144668a3-cda4-4289-8a77-3c498444e029> | CC-MAIN-2016-26 | http://lawprofessors.typepad.com/academic_support/2012/week31/index.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396887.54/warc/CC-MAIN-20160624154956-00056-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.967082 | 3,035 | 2.625 | 3 |
Pernicious Anemia Learning Center
Pernicious anemia is a decrease in red blood cells that occurs when the body cannot properly absorb vitamin B12 from the gastrointestinal tract. Vitamin B12 is necessary for the proper development of red blood cells. Pernicious anemia is a type of...
Pernicious anemia occurs when your body can't absorb enough vitamin B-12 to function properly. Learn about the signs and symptoms of...
Yellow Mongoose [Cynictis penicillata]
The Yellow Mongoose has a total length of 400-600 mm, with a tail length of 180-250 mm. Mass is about 450-900 g. It has a triangular-shaped face with a pointed snout. When running, the tail is held horizontal to the ground. Throughout the southern and central range the coat is a reddish-yellow to a tawny-yellow colour, and the fairly brushy tail has a very distinctive white tip. In the northern parts the tail is not as white-tipped and the body colour is greyish. The paler chin, throat and upper chest prevail throughout the range.
Feeds mainly on insects and other invertebrates, but it will also readily hunt amphibians, rodents and reptiles.
Litters of two to five are born from October to January in the south portion of its range. In the northern areas birthing extends through to March. Births have exceptionally been recorded each month of the year, even though birthing peaks clearly exist.
Some nocturnal movement has been recorded during full moon, but this species is fundamentally diurnal.
Where they are found:
Endemic to the central and western regions of Southern Africa. Particularly common in agricultural areas of the Western Cape and Free State, where it probably benefits from insect pests in wheat- and maize fields. Numbers are estimated to be 100,000. It is absent from the Namib desert coastal strip, and it does not occur in the high-rainfall southern coastal areas of the Eastern Cape, most of KwaZulu-Natal, Mpumalanga and the Northern Province's lowveld area.
The Yellow Mongoose is also known as the Red Meerkat in places due to its reddish tinge. The body colouring varies from yellow to reddish brown - presumably depending on diet and minerals.
It is believed that the Yellow Mongoose is an asymptomatic carrier of rabies, i.e. it carries rabies but is not affected by the virus. This is cause for concern in many areas where they occur and there is a danger that they will be severely persecuted because of this. | <urn:uuid:cda94b91-5342-4cc1-b120-7c7e0c74ba47> | CC-MAIN-2016-26 | http://www.krugerpark.co.za/africa_yellow_mongoose.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395346.72/warc/CC-MAIN-20160624154955-00006-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.960593 | 447 | 3.71875 | 4 |
Yes, this is the final season of the MythBusters. The greatest part of the show is the fact that Adam Savage and Jamie Hyneman aren’t scientists, but they still do some awesome science. They show us that everyone can do science—the MythBusters simply have better building skills than most of us.
As a physics educator, I’ve been impressed by the excellent physics demos the MythBusters make. Some of these are classic examples straight from your physics textbook and some are surprising results that no one expected.
Let’s take a look at a few of my favorite examples from previous seasons.
Shoot a Bullet, Drop a Bullet
When you take introductory physics, just about every book says something like this:
When we are looking at projectile motion, the vertical and horizontal motions are independent. In fact, if you shot a bullet horizontally and dropped a bullet from the same height, the two bullets would hit the ground at exactly the same time.
The instructor may then proceed to show you, using some small balls at low speed instead of real bullets. But would it work with a pistol? As you can see above, the answer is “mostly yes”. In the high speed video, the dropped bullet hits just slightly ahead of the fired bullet—a time difference of just 39.6 milliseconds. Honestly, this is close enough for me.
But what if you include air resistance on both bullets? Do they still hit the ground at the same time? The answer is no. The dropped bullet will hit the ground first. Here is a more detailed explanation of why that happens. The short version is that the fired bullet has much more air resistance and this produces more vertical air drag than the dropped bullet.
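The drag effect is easy to see in a toy simulation. This is not the MythBusters' experiment; every parameter below (bullet mass, drag coefficient, muzzle velocity, release height) is an assumed round number for a small pistol bullet. The point is only that quadratic drag couples the horizontal and vertical motions:

```python
import math

# Assumed illustrative parameters; none of these come from the episode.
rho = 1.2        # air density, kg/m^3
Cd = 0.3         # drag coefficient (assumed)
A = 6.4e-5       # frontal area, m^2 (~9 mm diameter)
m = 0.008        # bullet mass, kg
g = 9.81         # gravitational acceleration, m/s^2
h = 1.0          # release height, m

def fall_time(vx0):
    """Time to fall height h under gravity plus quadratic drag,
    starting with horizontal speed vx0 and zero vertical speed."""
    vx, vz, z, t, dt = vx0, 0.0, h, 0.0, 1e-5
    while z > 0:
        v = math.hypot(vx, vz)            # total speed
        k = 0.5 * rho * Cd * A * v / m    # drag acceleration per unit velocity
        vx += -k * vx * dt                # drag slows the horizontal motion
        vz += (-g - k * vz) * dt          # drag also opposes the downward vz
        z += vz * dt
        t += dt
    return t

t_dropped = fall_time(0.0)
t_fired = fall_time(360.0)   # ~360 m/s muzzle velocity (assumed)
print(t_dropped, t_fired)    # the fired bullet stays up slightly longer
```

With drag switched off (set Cd = 0) the two times come out identical, which is the textbook result; the small gap here comes entirely from the fired bullet's larger total speed inflating the vertical component of its drag.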
A Real Lead Balloon
The band Led Zeppelin was possibly first named Lead Balloon because everyone knows a balloon made of lead would fall out of the sky. Yet it is possible to make a lead balloon float. How does it work?
Whenever something displaces a liquid or a gas, there is a buoyancy force on that object pushing it up. This buoyancy force depends on the density of the displaced fluid and the volume of the displacement. For many objects, like a car or a human in air, this buoyancy force is too insignificant to matter. If you take a simple party balloon, it has a significant volume and very little weight, so it can float.
The trick to making a lead balloon float is size. Think of a balloon as consisting of two parts. First, there is the filling gas—almost always this is helium since it has a density lower than air. Second, there is the “skin” to contain that gas. For a party balloon, this skin is rubber or something, but the MythBusters use lead. Why does size matter? If you double the radius of your balloon, you increase the volume by a factor of 8 (radius cubed) but the area of the skin only increases by a factor of 4. So making a bigger balloon increases the lift (from the volume) faster than it increases the weight of the skin.
The MythBusters achieved this floating lead balloon by making it a cube with sides of 10 feet. Of course, you could probably even make it smaller and still float—here are my detailed calculations of the smallest possible floating lead balloon.
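The volume-versus-skin scaling argument above can be turned into arithmetic. The densities below are standard textbook values, but the foil thickness is an assumption picked for illustration, not the gauge the MythBusters actually used, so the resulting size is a sketch rather than their answer:

```python
# Smallest helium-filled lead-foil cube that floats, under assumed parameters.
rho_air = 1.20       # kg/m^3
rho_helium = 0.166   # kg/m^3
rho_lead = 11340.0   # kg/m^3
thickness = 3e-5     # lead foil thickness in m (0.03 mm, assumed)

net_lift_per_m3 = rho_air - rho_helium      # kg supported per m^3 of helium
skin_mass_per_m2 = rho_lead * thickness     # kg of foil per m^2 of skin

# A cube of side s floats when lift (which grows as s**3) matches the
# skin weight (which grows as s**2):
#   net_lift_per_m3 * s**3 = 6 * skin_mass_per_m2 * s**2
s_min = 6 * skin_mass_per_m2 / net_lift_per_m3
print(f"minimum cube side: {s_min:.2f} m")
```

Because lift scales with the cube of the side and skin weight only with the square, any thickness of foil floats if the balloon is made big enough; thicker foil just pushes s_min up proportionally.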
Car Crash Into a Wall vs. A Head to Head Crash
Another classic physics textbook problem goes something like this:
A car is traveling at 50 mph. Would it do more damage to crash into a brick (and unmoving) wall or a head on collision with a similar car also traveling at 50 mph?
Most students would say that a head-on collision is worse than crashing into a wall. The thinking is that there are two cars in the head-on collision so it should be twice as bad. If it is indeed twice as bad then it should be the same as one car traveling at 100 mph crashing into a wall. But it’s not.
Consider one car traveling at 50 mph crashing into a wall. The car goes from 50 mph to 0 mph over some time interval. This means the car has a change in momentum due to some external force (from the wall). Now switch to two cars crashing head on at 50 mph. Both cars still go from 50 mph to 0 mph so they each have the same change in momentum (but in opposite directions) requiring the same stopping force.
What about a car at 100 mph? Yes, this car would have double the momentum of a single 50 mph car. However, it would have 4 times the kinetic energy of a car at half the speed. As you can see from the MythBusters episode, crashing one car at 100 mph is way worse than a head-on collision between two 50 mph cars.
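The momentum and energy comparison above is easy to check numerically. The car mass here is an assumed 1,500 kg; it cancels out of both ratios, so only the speeds matter:

```python
MPH_TO_MS = 0.44704   # exact conversion factor, mph to m/s

def kinetic_energy(m, v):
    """Kinetic energy in joules for mass m (kg) at speed v (m/s)."""
    return 0.5 * m * v**2

m = 1500.0            # car mass in kg (assumed; cancels in the ratios)
v50 = 50 * MPH_TO_MS
v100 = 100 * MPH_TO_MS

ke_wall_50 = kinetic_energy(m, v50)           # one car into a rigid wall at 50 mph
ke_per_car_head_on = kinetic_energy(m, v50)   # each car in a 50+50 head-on
                                              # dissipates its own kinetic energy
ke_wall_100 = kinetic_energy(m, v100)         # one car into a wall at 100 mph

print(ke_per_car_head_on / ke_wall_50)   # 1.0: head-on at 50 is like a wall at 50
print(ke_wall_100 / ke_wall_50)          # ~4.0: double the speed, four times the energy
```

The first ratio is the episode's surprising result, and the second is why the 100 mph wall crash looked so much worse: kinetic energy grows with the square of speed.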
Yes, Humans Actually Landed on the Moon.
It is hard to believe that some people think it would be easier for hundreds of employees to keep their mouths shut about a fake moon landing than it would be to actually land on the moon—but there are those who think this. In this episode of MythBusters, Adam and Jamie tested theories about the moon landing being an elaborate hoax. The first focuses on a picture, shown above, of Buzz Aldrin in the shadow of the lunar lander. Conspiracy theorists argue that because Aldrin is in the shadow, he should be all but dark because the sun is the only light source. This is of course wrong because light also reflects off the surface of the moon and onto the astronaut.
What makes this episode so awesome is the attention to detail in the recreation of the Aldrin image. The MythBusters made a lunar lander and a replica moon surface to capture a model image using a single light source. The two images are fairly close to being identical.
I also like this myth because of its demonstration of the nature of science. Does this photo prove that humans landed on the moon? No. Science never proves anything to be true. However, it does say something significant. Suppose you have an idea (or I would say model) that this image could only be created with multiple light sources. The MythBusters experiment proves this idea is wrong—since they did in fact make a similar image with just one source. So, science can prove things wrong, but not right.
The Ultimate in Relative Velocity
What would happen if you shot a ball with a speed of 60 mph from the back of a car traveling at 60 mph? Would the ball remain stationary? Yup—that’s exactly what the MythBusters did. That animation is so awesome to watch.
This is all about relative velocity. The speed of the ball with respect to the car is 60 mph to the right (negative) and the speed of the car with respect to the ground is 60 mph to the left (positive). This means that the speed of the ball with respect to the ground is 60 mph + (-60 mph) = 0 mph. The math doesn’t seem so complicated, but getting the relative speeds just right isn’t so simple.
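The velocity addition above, written out as a two-line calculation (the signs follow the article's convention, with the car's direction of travel taken as positive):

```python
# Galilean velocity addition: v(ball, ground) = v(ball, car) + v(car, ground)
v_car_wrt_ground = 60.0    # mph, car's direction of travel taken as positive
v_ball_wrt_car = -60.0     # ball fired backward relative to the car

v_ball_wrt_ground = v_ball_wrt_car + v_car_wrt_ground
print(v_ball_wrt_ground)   # 0.0 -> to a ground observer, the ball just drops
```

The math is one addition; as the article says, the hard part of the stunt is matching the two speeds precisely, since any mismatch shows up directly as leftover horizontal motion of the ball.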
Even More MythBusters Science
These are just five examples of MythBusters science; there are countless more. Perhaps I will have to share another five science explanations soon. Oh, and what about the final season? Will there be more great examples of science? I am certain there will be. It’s going to be great.
Corrie Cox
DTC 618 Minipaper #3
7/20/2010

Functional Foods & Heart Health

According to the American Dietetic Association (ADA), functional foods are considered "conventional foods, modified food, medical foods, and foods used for special dietary use that may provide additional health benefits, reduce the risk of disease, and promote optimal health (1)." Within the past few decades, researchers have conducted studies on food products, with a specific focus on functional foods. They have found significant nutrition-health connections, and as a result food product companies petitioned to change food labeling laws. In 1997, an American cereal company, along with the National Cancer Institute, was awarded the first food-specific health claim for their oat product. This led to changes in food labeling laws, including FDA health claims regulations. Dietary supplements also began to promote nutrition-health connections, which increased profit margins (1). This paper focuses on functional foods, their dietary sources, and how they affect heart disease. Common functional foods that contain dietary sources associated with heart health include oats, soy, flaxseed, tea, wine, grapes, fish, and dairy products (2).

Statistically significant results have shown that the consumption of oat products containing the cholesterol-lowering soluble fiber b-glucan can reduce total and low-density lipoprotein (LDL) cholesterol, thus reducing the risk of coronary heart disease (CHD) (2). Between 1980 and 1995, the Quaker Oats Company conducted 37 human clinical intervention trials, and the majority of these studies showed reductions in total and LDL cholesterol in subjects with high cholesterol. It was determined that 3 g of b-glucan is required to achieve a 5% reduction in serum cholesterol; therefore, a food bearing the health claim must contain 13 g of oat bran or 20 g of oatmeal and at least 1.0 g of b-glucan per serving.
Soy, which has a cholesterol-lowering effect, is a high-quality protein now thought to play a preventative and therapeutic role in cardiovascular disease (CVD). In 1995, a meta-analysis involving 38 studies found that the consumption of soy protein significantly reduced total cholesterol, LDL cholesterol, and triglycerides; in addition, there was a significant increase in HDL cholesterol. The exact dietary source responsible for lowering cholesterol has not been specified, although isoflavones have been associated. However, at 25 g of soy intake, the effects on blood lipids became significant. On August 12, 1998, the FDA awarded Protein Technologies International (PTI) the health claim that soy-protein-containing products can reduce the risk of CHD.

Fish is a functional food that is high in the dietary source omega-3 (n-3) fatty acids. The current Western diet is suggested to be deficient in n-3 fatty acids when compared to the diet on which humans evolved. As a result, researchers examined the role of n-3 fatty acids in diseases, particularly CVD. Mixed results have been reported: the heart-healthy effect of fish consumption has been observed in some studies (2), but not in others (3). This could be because n-3 fatty acids have been shown to lower triglycerides by 25-30%, but they do not lower LDL cholesterol. A review of 72 placebo-controlled human trials showed that n-3 fatty acids increased LDL cholesterol (2). On the other hand, the Chicago Western Electric Study reported that consumption of 35 g or more of fish daily significantly reduced the risk of death from nonsudden myocardial infarction (4). In addition, a study conducted by Albert and colleagues reported that one serving of fish per week significantly reduced the risk of total cardiovascular mortality after 11 years in more than 20,000 U.S. male physicians (5).

Red wine has been linked to reducing the risk of CVD, and recent evidence supports this health claim (2).
Researchers dating back to 1979 reported a strong negative correlation between wine intake and death from ischemic heart disease in both men and women from 18 countries. Red wine has a high phenolic content, and its consumption has been shown to increase HDL levels. Phenolic substances have the ability to prevent the oxidation of LDL cholesterol; high levels of LDL cholesterol are associated with heart disease. Recent research is focusing on the consumption of non-alcoholic components of wine such as flavonoids.

Another beverage commonly associated with reducing the risk of heart disease is tea. Current research shows some evidence that tea consumption may also reduce the risk of CVD; however, further research needs to be conducted. A study conducted by Hertog and colleagues found that tea consumption in elderly men in the Netherlands provided the major source of flavonoids in their diet. The five major flavonoids included quercetin, kaempferol, myricetin, apigenin, and luteolin. These five flavonoids, derived from tea consumption, were associated with a reduced mortality rate from CHD in the elderly male population.

Strong evidence supports the idea that functional foods contain dietary sources that may enhance heart health, but more research must be conducted (1). Functional foods should not be a substitute for a well-balanced diet, which is the foundation for good nutrition; rather, they should be added to a heart-healthy diet (6). Even though strong evidence supports functional foods and heart health, consumers must understand that adding a particular food to one's diet does not guarantee the desired effects. Therefore, consumers must consult a Registered Dietitian and their physician to see what will work best for them. The ADA states that all foods, on some level, are classified as functional foods (1). Food provides necessary nutrients and substances for optimal growth, energy, maintenance, and repair (1).
References:
1. ADA. Position of the American Dietetic Association: Phytochemicals and functional foods. J. Am. Diet. Assoc. 1995; 95: 493-496.
2. Hasler C. Functional Foods: Their Role in Disease Prevention and Health Promotion. Food Technology. 1998; 52(2): 57-62.
3. Ascherio A., Rimm E., Stampfer M., Giovannucci E., and Willett W. Dietary intake of marine n-3 fatty acids, fish intake, and the risk of coronary disease among men. New Eng. J. Med. 1995; 332: 977-982.
4. Daviglus M., Stammer J., Oretic A., Dyer A., Il K., Greenland P., Walsh M., Morris D., and Shekelle R. Fish consumption and the 30-year risk of fatal myocardial infarction. New Eng. J. Med. 1997; 336: 1046-1053.
5. Albert C., Hennekens C., O'Donnell C., Ajani U., Carey V., Willett W., Ruskin J., and Manson J. Fish consumption and risk of sudden cardiac death. J. Am. Med. Assoc. 1998; 279: 23-28.
6. American Council on Science and Health [Internet]. [updated 2004 June 30; cited 2010 July 19]. Available from: http://www.acsh.org/factsfears/newsID.396/news_detail.asp.
Prof. Emeritus James Cooper Cited in Column on 1840 Culvert
May 20, 2008
May 20, 2008, Greencastle, Ind. - "The road that passes over this masterpiece was probably constructed in 1887, according to bridge expert and DePauw University history professor Dr. James Cooper," notes Indiana's Logansport Pharos-Tribune in a column on the Burnett's Creek Culvert. Mark A. Smith provides history of the culvert, which was constructed circa 1840, and the property on which it sits.
Smith writes, "To explain what a culvert is, we all know that a bridge is constructed to carry both passengers and freight over bodies of water, both large and small, at right angles (roughly) to the stream. Well, a canal culvert goes one step further, in that it usually carries the canal over small streams, also at right angles to the stream. So, in the case of the Burnett's Creek Culvert, you have a tidy piece of construction which carries the canal over Burnett's Creek -- an engineering miracle in days before the internal combustion engine."
Read the full text at the newspaper's Web site.
Jim Cooper, professor emeritus of history at DePauw, also served as vice president of academic affairs. His books include Iron Monuments to Distant Posterity: Indiana's Metal Bridges, 1870-1930 and Artistry and Ingenuity in Artificial Stone: Indiana's Concrete Bridges, 1900-1942. Read more about him in this previous story.
The Galesburg Land Company
by Terry Hogan
Many of us know at least a bit about Bishop Hill, the Swedish religious community that was established north of Galesburg. It was settled in the mid-1800’s and populated by Swedes who wanted to worship in a manner inconsistent and unacceptable to the Lutheran State Church of Sweden. But I’m betting not many of us, including me, know much about the Galesburg Land Company and its efforts to establish Swedish Lutheran settlements in Kansas.
It’s hard to know where to start this story as it has no clear starting point, so I’ll start it a little early with the Kansas-Nebraska Act of 1854. This act had more than a little to do with the North-South disagreement on the expansion of slavery, and the Civil War. It also acted to accelerate the settlement of the area by both pro- and anti-slavery advocates. Despite this pre-Civil War turmoil, Kansas was also a new location of cheap farm land. It was being touted by railroads which owned large parcels of land that they wanted to sell. The railroad wanted to sell the land for at least two reasons - (1.) immediate cash flow, and (2.) the creation of future railroad customers for shipping agricultural items out and finished goods in to Kansas.
In this environment, European immigrants, including many Swedes, in the 1850’s and 1860’s were coming to America. It was the country of promise. America was the source of cheap land, no class society, and the opportunity to improve oneself by hard work and good sense. The Swedes knew this by “American letters” sent back to Sweden by those who had already arrived in America. Swedes also knew it because of aggressive recruitment efforts by private interests, including American railroad companies, seeking sources of cheap labor and new customers.
According to a published Kansas history, the settlement of Swedes in Kansas dates to 1855 when John A. Johnson arrived from Galesburg, Illinois and settled in Kansas (Blackmore, 1912). But this hardly started a trend.
An early supporter of Swedish immigration to Kansas in the mid-1850s was Rev. T. N. Hasselquist. Hasselquist was an important and influential Swedish Lutheran clergyman in Galesburg and the surrounding communities. He became the publisher of the first Swedish-language newspaper in America. It was called Hemlandet and was founded in 1855. It was initially published in Galesburg. Hemlandet served Swedes mostly in Illinois and nearby states, but its issues also found their way to Sweden. Hasselquist used the paper to express his views and to guide Swedes. For example, in the July 14, 1857 issue there was a special feature, titled “Some Words to Recently Arrived Immigrants and Others Who Are Seeking Their Luck in America”. The article reported that the eastern states offered few opportunities and that land in Illinois and Iowa was already too costly for immigrants of limited resources. He advocated that such Swedish immigrants would be wise to go to some new territory like Kansas or Nebraska.
Dr. C. H. Gran, from Andover, Illinois, was also a big supporter of Swedes settling in Kansas. Dr. Gran had visited Kansas and had a dream, which never materialized, of establishing a Scandinavian colony in Kansas. However, he also used the Hemlandet to warmly endorse the merits of Kansas. In the December 3, 1857 issue, Dr. Gran writes, “…when one gets up on a bluff and looks about these fruitful plains and woods, and sees these wonders of God’s creation, the soul is filled with a stirring that words cannot describe.” Dr. Gran visited the Big Blue, Smoky Hill and Republican River valleys, which were later to become significant Swedish settlement areas. (Lindquist, 1963)
Perhaps not to be outdone, Henry L. Kissel, who had been in America for about 8 years, also put his endorsement of Kansas in the Hemlandet on December 15, 1857:
“Countrymen in New York and in other eastern states! You who work hard every day for your small daily wage, now is the chance for you to get your own home where you can live independent of Americans, and you will escape working so hard and cease to be dependent upon your daily wages…. Hurry to Kansas.” (Lindquist, 1963)
However, as time passed, and the killing of slaves and abolitionists in Kansas occurred with greater frequency, the warm endorsements of Kansas were replaced by a much more somber note. An article in the July 6, 1858 issue of Hemlandet, written by A. Thorson provides the revised view:
“Kansas is the battle ground and the scene of conflict between two great political parties and the end of the struggle is far off. For this reason at present Kansas can only with difficulty be settled and occupied by peaceable people, who must earn their bread by the sweat of their brows.”
Hemlandet reported in the March 9, 1859 issue that a letter “earnestly urging Swedes not to come to Kansas” had been read at a large meeting in Galesburg on February 28. Finally, in 1860, Kansas suffered a bad drought that created severe hardship for new settlers. This received significant coverage in the American press and further discouraged immigrants from going to Kansas.
However, after the American Civil War ended in 1865, Kansas received renewed attention as a place to settle. This was promoted not only by railroads, but also by Swedes already living in America. There were two notable Swedish-developed companies that promoted Swedish immigration to Kansas. One was “The First Swedish Agricultural Company” that was organized in Chicago. The second was “The Galesburg Land Company”, formed in Galesburg in the fall of 1868.
The Galesburg Land Company purchased 14,080 acres (22 sections) of land in Saline and McPherson counties in Kansas from the Kansas Pacific Railroad. The centers of activities in the parcel were Freemount and Salemsborg. In 1868, the Scandinavian Agricultural Society of Chicago was reported to have also purchased 12 sections of land along the Republican River.
The motive force behind the Galesburg Land Company in 1868 was Rev. Anders W. Dahlsten, pastor of the Galesburg Lutheran Church. A meeting was held at the church, which was attended by over 300 people. It is recorded that most of the members of this group were Swedes from Smaland and that the organization was fostered through the efforts of both the Galesburg and Andover Swedish Lutheran churches. At the meeting, the group decided to send a committee to Kansas to investigate the area for potential settlement. The investigative committee was headed by Anders W. Dahlsten. The committee visited the Smoky Hill of Central Kansas and found it to have significant potential for a Swedish settlement. Thus, the Galesburg Land Company bought a large block of land in that area. First, there was settlement by a few Swedes from Illinois. However, this was soon followed by much larger numbers of Swedish immigrants. In turn, many Swedes moved to the Smoky Hill area of Kansas.
One of the early Illinois settler families was John Peter and Matilda Stromquist. They were originally from Smaland, Sweden, but had settled in Galesburg. In 1868, they settled in Fremont, Kansas. John Peter Stromquist was secretary of the Galesburg Land Company (Blackmore, 1912).
Lindsborg became the central Swedish town for the Swedish settlements promoted by the Galesburg Land Company and several others in the region. Salina, Fremont, Salemsburg, Assaria, Falun, Marquette and Smolan are towns all located with the territory controlled by the Galesburg Land Company. These immigrants via the Galesburg Land Company founded both the Freemount and Salemsborg Lutheran Churches in 1869.
It wasn’t long before Swedish was heard as commonly as English in the Smoky Hill region. The Swedish culture slowly started to become Americanized, but the Swedish language stayed the language of the Swedish Lutheran Church long after English was spoken in Swedish homes. By 1890, there were 17,096 Swedish-born settlers in Kansas, representing 11.6% of the foreign-born population (Lindsborg C of C, undated).
At the time of this writing, you can log onto the Internet and find the history of Salemsborg Lutheran Church in Smolan, Kansas (Anon. undated). It reads, in part,
“In the summer of 1868, Anders W. Dahlsten, pastor of 1st Lutheran, Galesburg, lead a party representing the Galesburg Land Company to Kansas to search for a place where a large group of Swedes could settle, find economic opportunity, retain their Swedish culture and most importantly worship God’s environment.”
Perhaps one of the unanticipated effects of the influx of Swedes to Kansas was the effect on politics. Swedes were, by their very nature, more comfortable with the Republican Party in the mid and late 1800s. One Kansas Swede was quoted as summing it up this way: "I am a Lutheran, I am a Republican, and I drive a McCormick self-binder." (Lindquist, 1963)
With the help of the Galesburg Land Company and other Swedish immigrant companies, Swedes found fertile American soil to place their Swedish roots. Dr. Carl Swensson wrote in 1887 (Lindquist, 1963):
"The sod-house yields to a comfortable wood house, the shade trees are large, the fruit trees have already started to bear fruit, the fields are extensive and well-cultivated, the farmer is the happy proprietor in the largest and best country in the world. So, it goes from year to year. Children are born, the family grows up, there are churches and school, and with respect for and love of God's word they will become a large and happy people."
Even today, a trip to Lindsborg (“Little Sweden”), Kansas will confront the traveler with bits and pieces of old and new Sweden. If you arrive at Swedish mid-summer, you may find Swedish folk dancing, storytelling, and song and theater (Anon. 2000). Or as one author put it,
“As you visit this lovely valley, pause occasionally and listen for a violin, a voice or a piano. A child will be diligently mastering a classical phrase. Pass a studio and notice the deft fingers of a local artist bringing beauty to an empty canvas or a bit of wood. Visit a classroom and observe the children learning about their heritage. Better yet, visit a secluded spot overlooking the valley, gaze quietly across the bounteous fields, and perhaps you too will hear, just audibly above the wind, the whispered strains of a hymn, sung in Swedish, and reminding us of the blessings we share in the Smoky Valley of Kansas.” (Holmquist, 2007).
Such is the story of how the Galesburg Land Company helped plant Swedish roots in central Kansas soil over 125 years ago.
Anon. 2000. Lindsborg. www.nordicway.com/search/Places/places_lindsborg.htm
Anon. undated. Salemsborg Lutheran Church, Smolan, Kansas. www.luthernsonline.com
Blackmore, Frank. 1912. Kansas: a cyclopedia of state history, embracing events, institutions, industries, counties, cities, towns, prominent persons, etc. Standard Publishing Company. Chicago.
Holmquist, Tom. 2007. The Smoky Valley of Kansas. Svensk Hyllningsfest 2007.
Lindquist, Emory. 1963. The Swedish Immigrant and Life in Kansas. Kansas Collection: Kansas Historical Quarterlies
Lindsborg, Kansas Chamber of Commerce. Undated. Lindsborg. www.lindsborg.org/community_info.html | <urn:uuid:414909a4-1e60-4e73-8310-badb430bea1f> | CC-MAIN-2016-26 | http://www.thezephyr.com/backtrack/gbglandco.htm | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397749.89/warc/CC-MAIN-20160624154957-00026-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.970943 | 2,524 | 3.03125 | 3 |
It was the largest and most important investigation of treatments for high blood pressure ever conducted, with a monumental price tag to match. U.S. doctors enrolled 42,418 patients from 623 offices and clinics, treated participants with one of four commonly prescribed drugs, and followed them for at least five years to see how well the medications controlled their blood pressure and reduced the risk of heart attack, stroke and other cardiovascular problems. It met the highest standards of medical research: neither physicians nor their patients knew who was placed in which treatment group, and patients had an equal chance of being assigned to any of the groups. Such randomized controlled trials have long been unmatched as a way to determine the safety and efficacy of drugs and other treatments. This one, dubbed ALLHAT (Antihypertensive and Lipid-Lowering Treatment to Prevent Heart Attack Trial), cost an estimated $120 million and took eight years to complete. | <urn:uuid:46e75de0-771f-4fd0-b862-f3b4cc113858> | CC-MAIN-2016-26 | https://www.scientificamerican.com/article/the-best-medicine-july-11/ | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397565.80/warc/CC-MAIN-20160624154957-00199-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.968693 | 184 | 2.75 | 3 |
Trouvelot's Magnificent Astronomical Drawings
E.L. Trouvelot (1827-1895) was born in Aisne, France and emigrated to Massachusetts in his late 20s. An interest in astronomy helped him secure a position on the staff of the Harvard College Observatory in 1872. Working regularly with Harvard's legendary 15-inch refractor, Trouvelot prepared hundreds of sketches for the series Astronomical Engravings from the Observatory of Harvard College (1876), which boasted 35 plates, many in color.
This early work prompted Trouvelot to create a detailed series of large pastel drawings of Mars, Jupiter, Saturn, Earth's moon, and a number of celestial phenomena, such as sunspots and the zodiacal light. His aim was to represent "the celestial phenomena as they appear to the trained eye and to an experienced draughtsman through the great modern telescopes." The drawings were displayed at the United States Centennial Exhibition held in Philadelphia in 1876, as part of the Massachusetts exhibit in the Department of Education and Science. Charles Scribner's Sons reproduced fifteen of the drawings in a limited edition using extremely high-quality chromolithography, an illustration process that was at the zenith of its development in the 1880s. Production was time-consuming and costly, but so exacting was the work of the Riverside Press that to this day the color plates in The New York Public Library retain their original intensity and clarity. NYPL's copy is the only complete edition of the Astronomical Drawings held by a public library.
An interesting sidelight on Trouvelot: originally his interest lay in raising silk worms. His efforts to hybridize the native Polyphemus silkworm with the gypsy moth resulted in an accidental escape, and hence the annual devastation from these pests that continues to plague the agriculture industry.
E.L. (Étienne Léopold) Trouvelot. The Trouvelot Astronomical Drawings. New York: Charles Scribner's Sons [Reproduced from the original drawings by Armstrong & Company, Riverside Press, Cambridge, Massachusetts], 1882.
The New York Public Library, Science, Industry and Business Library, from the Astor Library.
Distributed February 12, 1998
A presentation to the AAAS
Study of arts, music may enhance young pupils' math and reading skills
Martin F. Gardiner of Brown University's Center for the Study of Human Development will discuss his work investigating how youngsters who studied the arts and music in their classrooms showed improved math and reading skills. He will speak Tuesday, Feb. 17, 1998, at a 10:30 a.m. news briefing and during a 2:15-4:15 p.m. session titled "Alternate Mechanisms for Motor and Visual Spatial Cognition."
PROVIDENCE, R.I. -- Training children in arts and music may enhance their acquisition of reading and mathematics skills, according to Martin F. Gardiner, a visiting research fellow at Brown University's Center for the Study of Human Development.
Data reported previously in Nature (May 23, 1996) by Gardiner and his colleagues showed that first-grade students who received visual and musical arts training as a regular part of classroom studies showed improved reading skills and were significantly ahead in math skills compared to control groups in other first-grade classrooms. By second grade, the group of students who received the arts training again were significantly ahead of the control group on math skills, but not on reading skills.
At a news briefing on Tuesday, Feb. 17, at 10:30 a.m., and during a 2:15-4:15 p.m. session that day titled "Alternate Mechanisms for Motor and Visual Spatial Cognition," Gardiner will discuss new data as well as his earlier findings and implications for issues of implicit versus explicit learning.
Gardiner's research points to a cross-fertilization in learning. The children receiving the test arts program were given one hour of music and one hour of visual art each week. (Students in the standard arts program received 40 minutes of music and one hour of visual arts in alternate weeks.) The test program's music lessons used the Kodály method, which requires sequential learning in the same sense as reading or mathematics: the brain must learn to handle such elements as pitch, rhythm and melody. In the data published previously, there appear to be similarities in the processing of certain components of music, mathematics and, quite possibly, reading, and the brain seems able to apply what is learned in one domain to the others, Gardiner said.
The extent to which a student is aware of the cross-fertilization is unclear, Gardiner said. Neither teachers nor students involved in the studies he will review were told explicitly to make connections. The question remains as to whether making the connection explicit would be advantageous. "The possibility should be left open that implicit learning best takes place when it is discovered by the learner himself," Gardiner said.
One implication of the research, Gardiner said, is that the arts should be considered as important and essential as math, reading and writing. "Learning may be a much more rich experience than we currently understand," Gardiner said. If art and music are cut from a curriculum, "you may be losing more than the piece you're leaving out."
The data emphasize skill-building in multiple areas. "The arts are wonderful, essential, in themselves," Gardiner said. "That they can also give broader benefit - that you can have your cake and eat it, too - seems lovely."
While in Philadelphia, Gardiner can be contacted through the Brown News Bureau at (401) 863-2476.
Today, automobiles that use fossil fuels like petrol, diesel or gasoline invariably have an exhaust system built into them. This system consists of hardware such as flanges, clamps, connecting pipes, extension pipes, tail pipes, resonators, mufflers, hangers, resonance chambers, perforated pipes, exhaust pipes, air-injection tubes, oxygen sensors, and a catalytic converter. These components collectively form the exhaust system and are used to let the exhaust gases escape from the automobile's engine. Basically, the system is a set of pipes or ducts that steer exhaust gases away from the automobile's combustion unit as well as from its occupants. If left within the combustion chamber, the gases would create additional pressure and bring down the engine's fuel efficiency. Exhaust gases often contain toxic compounds formed during fossil fuel combustion.
Such exhaust systems are also present in other motorized vehicles, such as motorbikes, and the systems of different automobiles and machines may have varying components. In some systems, exhaust gases are passed through a catalytic converter, which reduces pollution by converting some of the toxic byproducts in the exhaust into non-polluting substances. Catalytic converters are generally found on internal combustion engines but may also be used in other exhaust systems. Some exhaust systems also send the gases through what is known as a silencer, or muffler, a step that reduces noise pollution. Different countries and continents have different sets of rules and regulations relating to noise and air pollution. The simple function of the traditional exhaust system has therefore been extended to include new ones: reducing the noise as well as the harmful gases released into the atmosphere. The effectiveness of an exhaust system in pollution control is now one of the important factors to consider when purchasing a vehicle.
Exhaust gases from different parts of the engine are at times collected by different pipes, and all such gases may eventually be gathered and passed through single pipes, exhaust manifolds or cylinder heads. In some other automobile engine designs, exhaust gases may also be used in turbochargers to increase the engine's power.
John Nance Garner IV, known among his contemporaries as "Cactus Jack" (November 22, 1868 – November 7, 1967), was an American Democratic politician and lawyer from Texas. He was a state representative from 1898 to 1902, and U.S. Representative from 1903 to 1933. He was the 44th Speaker of the House in 1931–1933. In 1932, he was elected the 32nd Vice President of the United States, serving from 1933 to 1941. A conservative Southerner, Garner opposed the sit-down strikes of the labor unions and the New Deal's deficit spending. He broke with President Franklin D. Roosevelt in early 1937 over the issue of enlarging the Supreme Court, and helped defeat it on the grounds that it centralized too much power in the President's hands. | <urn:uuid:42f542bd-2744-43a0-ac98-8ab946bdf1bd> | CC-MAIN-2016-26 | http://www.werelate.org/wiki/Person:John_Garner_(26) | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397842.93/warc/CC-MAIN-20160624154957-00001-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.981019 | 173 | 3.015625 | 3 |
Will farm livestock become endangered species? Social, economic and environmental drivers are converging to not only look at producing food more efficiently and sustainably, but are also stimulating new ways to produce meat or remove the need for it altogether. Such changes, if successful, could have substantial effects on New Zealand’s agricultural and economic landscapes.
Lab-grown meat has been worked on for a while, and convergence with other technologies is starting. Modern Meadow is aiming to print meat. In vitro production of meat still has a long way to go, technically, economically and socially. There is scepticism that it will become economically viable and sufficiently scalable, or even appeal to consumers. But would it really be that different from currently available mechanically extracted meat products, insects or some of the delights whipped up by molecular gastronomists?
Will our culinary future need meat? Bloomberg BusinessWeek notes the emergence of start-ups looking at plant proteins to replace meat and egg products. This goes way beyond that culinary favourite of yore – textured vegetable protein. What is particularly noteworthy about these developments is that some of the key backers are VCs who have also focused on clean technologies. Some of them are hoping that the food companies will give quicker returns on investments.
Like the “food pills” of yesteryear these new technologies may not come to pass, but they warrant careful consideration because of their potential to disrupt traditional practices. If a good cheap substitute for milk powder is created what effect will that have on our milk producers? A premium price for real milk or a quaint cottage industry?
Alongside vertical farming and technologies to enhance food safety these developments signal a higher tech approach to food production, and increasing emphasis on production practices, that New Zealand may not be sufficiently prepared for. | <urn:uuid:0688d558-c731-46c8-8247-b56ea33f039f> | CC-MAIN-2016-26 | http://sciblogs.co.nz/ariadne/2013/02/26/future-foods/ | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391766.5/warc/CC-MAIN-20160624154951-00096-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.959373 | 360 | 2.84375 | 3 |
The holiday season is a time of peace, celebration and reflection for many. Learn more about the customs and traditions behind this magical time of year.
Christmas, coming from humble beginnings, has evolved into arguably the largest celebration in the world.
Roman Catholics and Protestants celebrate the birth of Jesus on Dec. 25.
The familiar Nativity scene seen around town every year refers to the biblical story. According to accounts in the gospels of Matthew and Luke, Jesus was born to the Virgin Mary in the city of Bethlehem, in what is now known as the West Bank.
Christians believe Jesus is the son of God, sent to Earth to wipe clean the sins of mankind, and that his birth fulfilled prophecies made hundreds of years earlier.
Over time, Christmas celebrations adopted many of the traditions still celebrated today, such as the Christmas tree, Santa Claus and giving gifts.
In Hebrew, the word "Hanukkah" means "dedication." It is also known as the Festival of Lights, and is an eight-day Jewish holiday.
It begins every year on the 25th day of the month of Kislev, which falls between late November and late December.
The holiday commemorates the rededication of the holy temple in Jerusalem after the Jews' victory over the Hellenist Syrians in 165 B.C.E.
This festival of Hanukkah is observed in Jewish homes by lighting candles on the menorah each night of the holiday: one on the first night, two on the second night, and so on, each lit from the ninth, larger candle, called the "shamash," or "servant."
The report, produced by the United Nations Development Programme (UNDP) and entitled "Fighting Climate Change: Human solidarity in a divided world", was launched on December 11 in Kingston. It warns that the world should focus on the developmental impact of climate change, which could lead to unprecedented reversals in poverty reduction, nutrition, health and education.
Resident Representative of the UNDP, Minh Pham said that the report has shown that the tourism industries of Jamaica and the Caribbean at large, would be severely affected as climate change would also lead to serious beach erosion and a disruption and possible extinction of marine life.
"Temperature rising will affect also marine life leading to coral reef bleaching, leading to extinction of marine animals and the combined consequence of that will affect the tourism sector, loss of employment and revenues, loss of livelihoods and employment within the fisheries sector as well," he outlined.
"In fact, the report pointed out that an increase in 50 centimetres in sea level will wipe out roughly one third of the Caribbean beaches," he added.
Pham noted that a rise in temperature would also lead to a rise in sea level, thereby creating more intensive and frequent hurricanes. "As temperature rises the sea level will rise as well, creating an increased energy of the ocean and the water mass which will lead to tremendous tropical cyclones, more intensive, more frequent hurricanes," he explained.
He said that since water is a fundamental source of life and livelihood, water scarcity would impact food production, agriculture, nutrition and Jamaica's export earnings from agricultural products such as coffee and bananas. He therefore urged Jamaicans to see climate change as a "real, manmade" issue and also as a matter of national and regional importance.
He pointed out that the health issues that we are currently experiencing with dengue and malaria result from the change in climate.
Highlighting the link between climate change and human development, Pham said that a rise in temperature would severely affect the climate of the island. "As temperature rises and we see increased evaporation, we will see more droughts, but also we will see severe flooding and we will see a higher concentration of natural disasters in developing countries because of the global pattern," he reasoned.
Looking towards the future, Pham issued a word of caution, arguing that future generations would "judge us harshly, if we look at the evidence today and do nothing about it."
"In essence, we could condemn future generations to severe poverty, to diminishing opportunities for education, health, employment, for a meaningful and productive life, and we are condemning future societies as well to an ecological disaster," he said. | <urn:uuid:b8006442-2f11-47ba-a775-0cad7148352a> | CC-MAIN-2016-26 | http://indigenouscaribbeancaucus.blogspot.com/2007_12_01_archive.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783404405.88/warc/CC-MAIN-20160624155004-00182-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.963477 | 543 | 2.984375 | 3 |
There are ways of predicting violence. In the wake of Trayvon’s death, why aren’t we implementing them?
Trayvon Martin’s death, as well as the lack of charges brought against his killer, was horrific for many, but was not surprising.
Women often tell me of their rape aversion plans. They usually include: carrying their keys between their fingers, never going anywhere alone, watching their drinks, not wearing revealing clothes, not going out at night, etc. “Sometimes,” they wearily declare, “I just wish I was a man.” What we don’t usually talk about is the dangers men face, especially men of color.
Men typically are not told to restrict their behavior in these limiting and largely unproductive ways. They are not warned that they’re being reckless or worse—provoking any violence they might encounter—if they don’t adhere to these rules.
Still, the concept of men as free from the effects of violence in our society is a deeply flawed one. When I teach about violence prevention in schools, the teen boys often meet me with resistance until they realize I’m not just there to teach the girls. Then they express confusion and resentment as they describe learning to evade physical assault by aggressive men and boys, while simultaneously having to demonstrate they are not a threat to others.
A recent story with Donna Britt, mother and author of Brothers (& Me), on NPR discussed how she raised her boys. She and others are doing a great job of making public what they call “the talk” that African American parents give their boys when they make that transition from cute child to possible threat in the public eye.
This is an everyday reality for boys and men of color, and I’m glad that this is getting some media attention and acknowledgment. Yet, in a culture where we say there’s no way to predict violence, profiling is inevitable.
Soon the evening news will interview another neighbor who testifies the murderer next door “seemed so normal” and everyone will nod that there was no way anyone could have known. Then, everyone, including the police, will grasp at whatever they can—the latest mug shots, the characters from the last movie – to be able to predict the next act of violence. Because we don’t want to be caught off guard.
Safety is not worth this cost to our communities. There are ways of predicting violence—which means there are ways of preventing violence—that don’t include profiling random men of color and blaming victims. Why are we not exploring these?
In an entertainment culture, it is not popular to explore predictability. When there is a mystery, you can be sure that everyone will stay glued to his/her screens, anxious to keep track of the newest development. Meanwhile, this also ensures that sponsors’ advertisements will be watched while viewers eagerly await the next installment of the story.
Despite Gavin de Becker’s book, The Gift of Fear, being on the bestseller list when it first came out over a decade ago, reporters still choose to interview the clueless rather than interview his team that studies how intuition works and the precursors to violence. As a culture, we treat those who know ahead of time as mystical outliers, never pausing to study how it is that the average person could foresee such a thing.
Those who accurately predict violence use behavior as their indicators, not hoodies or race. Concepts like forced teaming (pretending there is an alliance between you that doesn’t exist), using charm and niceness, giving too many details, typecasting, loan sharking, giving an unsolicited promise and ignoring the word “no” are all accurate ways of predicting when someone is acting manipulatively and possibly dangerous. Of course, in order to give credence to these ways of predicting violence and danger, we would have to admit that most violence happens by people we know and/or includes an “interview” process, rather than being a random shooting or a man behind the bushes who says nothing before grabbing his victim.
To say that violence is predictable is not equal to blaming those of us who have experienced it in the past. It is empowering everyone to have more tools for the future so that we may live fuller lives and feel less fear. It is challenging our society to distinguish between paranoia, prejudice and real intuition.
Photo Håkan Dahlström/Flickr | <urn:uuid:9049cf7e-bbeb-4d77-811b-eddca8bd7598> | CC-MAIN-2016-26 | http://goodmenproject.com/on-race/the-effects-of-fear-violence-mourning-trayvon/ | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396945.81/warc/CC-MAIN-20160624154956-00107-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.966318 | 926 | 2.65625 | 3 |
Electoral systems tend to reflect the political culture of a country, though there is always the question of the chicken or the egg. Indeed, it is sometimes difficult to decide whether a political system has been shaped by its electoral system or by the cultural values of the people. While it is not the intention of this long post to describe the world's electoral systems, it is still worth visiting a few electoral principles to better understand what chavismo is trying to do in Venezuela.
The basic debate in any electoral system through history has been to decide if it was more important to ensure a stable governmental majority or a genuine representation of all the trends of a society. The two extremes are thus the "winner take all" of Anglo-Saxon countries or the absolute proportional representation of which Israel is the prime example today.
In the Anglo-Saxon tradition of single-member, winner-take-all circumscriptions, the objective was always to create a clear parliamentary majority. When the system works well, the representation of minorities is not necessarily a major issue, as many mechanisms exist to temper the iron rule of winner-take-all, in addition to the chance that the winner does not command 50% of the total expressed vote. The US, for example, has got around this by creating a strict separation of powers, in which the Supreme Court makes sure minorities get some form of protection from the abuse of the majority, and by introducing a wholesale primary system, which allows the expression of the many political currents within the main parties. As such, the Democrats and the GOP can be seen as mere expressions of political coalitions.
Another way to create diversity while still ensuring a clear parliamentary majority is the two-round balloting system, of which the French one is an example. In these systems the first-round vote allows diverse tendencies to express themselves, and these then reach agreements among themselves to create a majority in the second round. In theory you do not get proportional representation, and you may even get distortions as bad as those of the Anglo-Saxon system, but thanks to the alliances necessary for the second-round vote, every political party is guaranteed at least a spokesperson in parliament.
On the other side we have the belief that all the political tendencies of a country must appear in its parliament. Today Israel offers an example of such a system: people vote for a list, and the list gets a number of representatives in direct correlation with the percentage of the vote it receives. The basic weakness of these systems is that they invariably result in the dispersion of votes, and as the years pass it becomes increasingly difficult to create a stable governmental majority. Unless such systems evolve toward a presidential form of rule, they quickly risk becoming paralyzed. That happened in post-war Italy and, more spectacularly, in Weimar Germany, allowing Hitler to reach power without ever winning a majority of the vote.
Post-war Germany did not want to abandon the principle of proportional representation, but neither did it want to return to a paralyzed assembly, so it invented what is in my opinion one of the best electoral systems existing today. Half the chamber is elected on a winner-take-all basis and half on a proportional system. Yet that proportional vote is tallied at the federal level and, in addition, requires political parties to reach 5% of the vote. Thus small but significant parties are assured of solid representation, while the winners are assured of reaching a stable government, provided they strike a political agreement to form a coalition. Historically, two parties have always been enough to reach a stable majority in Germany since the war. The beauty of the system is that it not only allows significant proportional representation but also maintains the major advantage of the Anglo-Saxon tradition: the direct contact between representatives and the voters of their circumscription, something missing in resolutely proportional systems where people vote for lists, not for names.
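The threshold-plus-proportional allocation described above can be sketched with a highest-averages method. The snippet below uses the d'Hondt rule and invented vote counts purely for illustration; Germany has used several apportionment formulas over the years, so both the numbers and the choice of d'Hondt are assumptions here:

```python
def dhondt(votes, seats, threshold=0.05):
    """Allocate `seats` proportionally by the d'Hondt highest-averages rule,
    excluding parties below `threshold` of the total vote."""
    total = sum(votes.values())
    eligible = {p: v for p, v in votes.items() if v / total >= threshold}
    alloc = {p: 0 for p in eligible}
    for _ in range(seats):
        # the next seat goes to the party with the highest quotient v / (s + 1)
        winner = max(eligible, key=lambda p: eligible[p] / (alloc[p] + 1))
        alloc[winner] += 1
    return alloc

# hypothetical vote counts; party E, at 4%, falls under the 5% bar
votes = {"A": 41000, "B": 29000, "C": 17000, "D": 9000, "E": 4000}
print(dhondt(votes, seats=10))  # {'A': 4, 'B': 3, 'C': 2, 'D': 1}
```

Party E's 4,000 votes yield no seat at all, which is exactly the filtering effect the 5% clause is meant to produce, while the remaining parties split the chamber roughly in proportion to their votes.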
The Venezuelan original choice
In 1958 Venezuela retained its strong presidential system, for better or for worse, and thus chose proportional representation for its Congress, which was thought to be fairer and more in tune with the contemporary Zeitgeist. Even the Senate, which by definition is an unfair system (consider that a California senator has the same worth as a Delaware one), was elected in such a way that some senators were chosen not by state but according to how many votes their party got. That is, if a party got zero senators but 10% of the vote, it would get at least one senator. The House was elected on a strict proportional system, somewhat tempered by the fact that the states were the basic circumscriptions. In practice, since a majority of states had fewer than a dozen representatives, you needed at least 5% of the vote share to ensure that you would get a seat or two. Yet at the national level the House, like the Senate, had some representatives elected according to each party's national share of the vote. That way all would get at least a spokesperson.
The system served well in a country without a democratic tradition, where the political class was asked to learn to debate according to rules, and not according to how many guns a given caudillo had. But by the late 70s the system was already wearing down, as it suffered from two factors: 1) increasing presidentialization, and 2) a lack of contact between electors and their representatives, who hid behind the parties that decided, after all, who would run. Eventually an important electoral reform was enacted, trying to create something a little more representative, along the lines of the German model. It was decided that within each state about half the representatives would be elected in individual districts within the circumscription.
To explain this reform I have imagined the following circumscription, which could represent a medium-sized state like Bolivar or Anzoategui.
For clarity I have decided that this state has only 257 electors and that this population has not changed from the mid-'90s until today. The state elects 9 representatives: 5 are elected in three individual districts and 4 on an "at large" list that is supposed to be proportional. The quotient of previous votes was eliminated, and as such it is quite clear that one objective pursued by the reform was to eliminate very small parties. In this imaginary circumscription, any party that gets less than 20% of the state vote runs a high risk of not getting a single seat, depending on how those votes are spread.
Now, the major flaw of this system was actually the district configuration. In a country where streets do not have names, much less sequential numbering, urban areas are very difficult to split into individual districts. Thus district 2 is an urban district in our example and elects three (3) representatives. If each elector voted only once in district 2, it would not matter: people would just vote for one name on the list, the lists would be allocated seats according to their votes, and those with the most votes within a list would be the elected folks. But that option was not selected: amazingly, people are allowed to vote in a district as many times as there are representatives in that district. In district 2, when you vote, you vote three times. Since in Venezuela there is a tendency to vote along party lines in overwhelmingly most circumstances, the winning party carries ALL. That is, with 40% of the vote in district 2 you can carry 100% of its seats.
Supposedly this was solved by taking into account the list vote, that is, the representatives at large. The seat distribution in that "list vote" depends on how many votes, on average, it took to elect each of a party's representatives. That is, if you carry the 3 seats of district 2 you are much less likely to gain representatives from the at-large district, since your average votes per representative drop, to the benefit of those who did not get a single district seat. Below is an example to illustrate this complexity (right-click on the image to enlarge it in a different window).
To simplify, I have decided that there are only 6 political parties in this state: three pro-Chavez and three NON-Chavez (not ANTI, just NON). That there are three parties for Chavez does not mean that they all engage easily in coalitions, as is the case in the imaginary 2000 vote of the figure above.
Districts 1 and 3 are easily decided: the main pro-Chavez party carries both seats. Note that district 3 is the most pro-Chavez district; you will see the importance of this later.
District 2 gave a majority to non-Chavez party 1, which gets the three seats. Note that this urban district has a non-Chavez majority anyway.
Now when we look at the list vote, that is, the representatives at large: even though the main pro-Chavez party should get 2 seats, it only gets one, because its average drops (note: there are different ways to calculate that "average," which can affect the final repartition, but I am trying to keep it simple here). As for non-Chavez 1, it gets zero list seats because it already got 3 seats in district 2; its votes-per-representative average is just not high enough. This system allows the 3 minor parties to get the seats they were not allowed to get in the "winner take all" district vote.
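That "average" is typically computed with a highest-averages rule such as D'Hondt, where a party's votes are divided by one more than the seats it already holds. The post does not specify which formula Venezuela used, and the party names and vote totals below are illustrative, not the figures from the tables; this is only a sketch of the mechanism:

```python
def dhondt(votes, seats, already_won=None):
    """Allocate `seats` by highest averages (D'Hondt).

    `already_won` lets district seats count against a party's
    average, as in the mixed system described above. Returns the
    total seats per party, including any passed in `already_won`.
    """
    won = dict(already_won or {})
    for party in votes:
        won.setdefault(party, 0)
    for _ in range(seats):
        # Each party's current average: votes / (seats won + 1).
        best = max(votes, key=lambda p: votes[p] / (won[p] + 1))
        won[best] += 1
    return won

# Illustrative numbers only. A party that already carried 3 district
# seats sees its average divided by 4, so the list seats go elsewhere.
votes = {"A": 900, "B": 700, "C": 400}
print(dhondt(votes, seats=4))                       # → {'A': 2, 'B': 1, 'C': 1}
print(dhondt(votes, seats=4, already_won={"A": 3})) # → {'A': 3, 'B': 3, 'C': 1}
```

In the second call, party A's three district seats depress its average enough that it wins none of the four list seats, which is exactly the compensation the paragraph above describes.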
Now look at the columns on the right and you will see that the system is still reasonably fair. Even if, as expected, the list vote is slightly different from the district vote (personal character there affects the outcome), the overall percentage of the vote is generally reflected in the seat percentage (last column on the right).
We can thus say that the 1998 and 2000 voting system still allowed for minority representation, though it squeezed out the very minor parties, parties that until the early '90s were guaranteed at least one spokesperson in Congress (for those old enough, remember Borregales?).
The first electoral cheating
In the 2000 vote the governor of Yaracuy found the first flaw of this system and used it to obtain 100% of Yaracuy's representation even though he got only slightly more than 50% of the vote. It was very simple: run the districts under one party name and the list vote under another, the famous "morochas" (twins). That way the list vote is not punished by the success of the district vote, since apparently we are dealing with two "different" parties.
In the example below I use the same numbers as above, except that I assume chavismo used the morochas system. Pro-Chavez 2 and 3 form a coalition, with 2 running only in the districts and 3 on the list alone. For effect, I assume that pro-Chavez 1 does not participate in that coalition.
The non-Chavez parties obtain the same number of votes as above, but there is some difference in the pro-Chavez vote, because we can assume that some disgruntled voters did not follow the electoral instructions and voted for pro-Chavez 1 in protest. Yet the result leaves no doubt: with the exact same overall number of votes, the pro-Chavez folks get 7 representatives, 78% of them, while their list share was ONLY 44%. The injustice is quite clear! A party that got 30% of the vote now gets only 10% of the representatives. In fact, depending on how you calculate the "average," non-Chavez 3's seat could end up with pro-Chavez 3!
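The arithmetic of the trick can be sketched directly. Under the mixed rule a party's district seats depress its list average, but if a "twin" party carries the districts instead, the list party looks as if it had won nothing yet. The allocation rule below is a generic highest-averages scheme and the numbers are made up, not the actual Venezuelan formula or the post's tables:

```python
def list_seats(list_votes, seats, district_seats):
    """Highest-averages list allocation where district seats
    already won count against a party's average."""
    won = dict(district_seats)
    for p in list_votes:
        won.setdefault(p, 0)
    for _ in range(seats):
        best = max(list_votes, key=lambda p: list_votes[p] / (won[p] + 1))
        won[best] += 1
    # Report only the list seats just awarded to the list parties.
    return {p: won[p] - district_seats.get(p, 0) for p in list_votes}

votes = {"Big": 900, "Small1": 500, "Small2": 400}

# Honest run: Big's 3 district seats count against its list average.
print(list_seats(votes, 3, {"Big": 3}))      # → {'Big': 0, 'Small1': 2, 'Small2': 1}

# Morocha: the districts were won by a twin party, so "Big" looks clean
# on the list and sweeps it too.
print(list_seats(votes, 3, {"BigTwin": 3}))  # → {'Big': 2, 'Small1': 1, 'Small2': 0}
```

With identical votes, the honest coalition ends at 3 of 6 seats while the morocha coalition (twin's 3 districts plus 2 list seats) ends at 5 of 6, which is the distortion the paragraph above describes.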
This morochas system was applied fully by chavismo in 2005, though the withdrawal of the opposition made the point moot. Had the opposition run anyway in 2005, it would have gotten at least 40% of the vote but probably no more than 20% of the seats, even with its own morochas.
The new ways to cheat
The new electoral law voted a few months ago was designed to create a built-in advantage for chavismo, which has realized that once the "morochas" system is applied by all parties in Venezuela, it becomes in all practical respects an Anglo-Saxon, winner-take-all system. The threat here is that the opposition could win an overwhelming legislative victory with, say, barely 55% of the popular vote. That is also true for chavismo, but we already saw what happened when it got 100% of the Nazional Assembly.
Thus the new system made the "morocha" system legal but in addition allowed the electoral board, CNE, to gerrymander the districts as it pleases. Until now, districts had to follow historical lines around the political divisions of counties and parishes. In the new system the CNE is allowed to break down parishes and counties to create electoral districts at will, and AS LARGE as it wants, instead of following the tendency to evolve toward single-representative districts. In other words, Chavez wants large districts because, as was the case before, the elected representatives depend more on the party (PSUV/Chavez) than on any direct contact with the inhabitants of the district. Not to mention, of course, that multiple voting favors the winning side, which chavismo hopes to be no matter what.
In the example below I speculate on the following changes, based on the new electoral law:
1) less voting at large, and thus less room for minority representation: only 3 list representatives instead of 4
2) district 1, weakly chavista, loses some anti-Chavez voting centers, which are added to a new district 2 that includes the most anti-Chavez areas.
3) district 2 thus includes the areas most likely to vote against Chavez and becomes the district with the most voters per representative.
4) district 3 now has three representatives with a weak but solid pro-Chavez tendency.
In this new system, even though the two political groups get about the same vote shares as above, chavismo still retains 6 out of 9 seats, thus annulling the possible advantage the opposition now gets from using the "morochas" system. All courtesy of old-fashioned gerrymandering, now legal in Venezuela.
For my last graph I use the same table as above, but this time I change the total votes to favor the non-Chavez option. That is, I assume a slow decrease in popular support for the chavista parties due to ten years in office and unfulfilled promises.
Now the opposition wins clearly, and yet, in spite of its unity, in spite of beating chavismo by 4 points, chavismo STILL gets 5 out of 9 seats!!!!!
Note: the morochas "coalition" allows for a repartition of seats among all its groups, which is why, for example, non-Chavez 2 and 3 each get one seat even though non-Chavez 3 does not figure on the ballot. On the other hand, because chavismo lost the overall vote, pro-Chavez 1 loses the seat it was getting when chavismo was still winning the vote.
By tweaking the vote totals a little more, one could reach 55 to 45 in favor of the opposition and yet still get a chavista victory, 5 seats to 4! It all depends on how effective the gerrymandering was and how political pressure targeted at certain districts worked. For example, in Zulia chavismo could give up the Maracaibo districts altogether and focus strictly on the oil areas, blackmail and all, carrying them barely and thus obtaining more seats than it deserved in Zulia.
First, do not be surprised by such a Manichean electoral system of multiple-seat districts: in the US South such systems were routinely used to prevent African Americans from electing representatives. That is, districts were carved where more than one representative was elected, just as in Venezuela; thus 60% of whites got 100% of the seats.
Second, for all its imperfections, the original system I described first allowed Chavez to be elected and to gain a 2/3 majority in the National Assembly in 2000 with not even 60% of the vote. Now it is not clear whether, with 55% of the vote, a united opposition would get even a simple majority in the assembly.
Third, let's not forget the other electoral frauds already in use by chavismo, such as the shameless use of state property for the PSUV campaign. The system is pretty much sealed for chavismo, all the more so as freedom of expression and information are under serious threat.
In other words, from what I have described here it should be crystal clear that the only way the opposition can win is:
- a single candidacy in EVERY district
- coordinated and focused campaigns
- massive participation by civil society to compensate for the abject abuse of the state
- presence at EVERY polling station, as gerrymandering makes a few dozen "added" votes critical enough to decide a single seat
All this without forgetting that we probably need 55% of the vote to get a bare majority in the new parliament.
Now let's wait for the next posts to discuss how we could reach at least a united front and how we can try to get more citizen involvement.
If electoral systems are a representation of the democratic nature of a country's people, we can say that the new Venezuelan electoral system no longer reflects the democratic and representative ideals of 1958, when diversity was sought and as much consensus as possible was the goal. In the parliaments of the preceding decades, even when two political parties on occasion represented up to 85% of the seats, there was always room for small parties to have their voices heard without having to subject themselves to undesirable alliances. The system, albeit not a perfect one, allowed outsiders to be a voting surprise (see CCN in 1963, MEP in 1968, Causa R in 1993 or Chavez in 1998).
The main flaw of the old system, namely the lack of direct contact between voters and their representatives, a contact established only through political parties, was beginning to be addressed as specific constituencies began to be drawn.
But today the modified electoral law allows for the pernicious return of political party rule, since only through the strength of a strong party, or its leader as in the case of the PSUV, will someone be allowed to run for office and eventually be elected. As such, the current electoral law is not even a return to the past: it is a less democratic system than the original 1958 election rules. Minority parties cannot elect representatives on their own anymore, and to be elected you need to be anointed by a large group.
We are now in a plebiscitary regime where the only thing that matters is the link between the leader and the masses, and where it is becoming increasingly difficult for small groups to emerge as viable alternatives: they must lose some of their originality, a loss that even the use of primaries as a nomination system will not be enough to avoid.
From now on, elections will be between two groups, in much harsher terms than in the Anglo-Saxon systems, since the link between voters and their representatives has all but been erased. "Winner take all" with a vengeance!
The new system forces all parties to align themselves into two camps with two single messages, camps where internal dissent must be muted and reserved for a post-electoral moment. The winner of such an unfair and undemocratic system will in addition get an unfair share of the seats. Venezuelans still "vote," but their votes are becoming increasingly meaningless.
How can the Venezuelan opposition, in its diversity, manage to create a single front?
- a nice point of conduct, ceremony, etc.
- observance of petty formalities
Origin of punctilio: altered (influenced by Latin) from Spanish puntillo or Italian puntiglio, diminutive of punto, from Classical Latin punctum, a point
- A fine point of etiquette.
- Precise observance of formalities.
Origin of punctilio: Obsolete Italian punctiglio, from Spanish puntillo, diminutive of punto, point, from Latin pūnctum, from neuter past participle of pungere, to prick; see peuk- in Indo-European roots.
- A fine point in exactness of conduct, ceremony or procedure. Strictness in observance of formalities.
Part or all of this entry has been imported from the 1913 edition of Webster's Dictionary, which is now free of copyright and hence in the public domain. The imported definitions may be significantly out of date, and any more recent senses may be completely missing. | <urn:uuid:55e1d6fd-cc14-4df1-a4d0-9a7996739e4c> | CC-MAIN-2016-26 | http://www.yourdictionary.com/punctilio | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397428.37/warc/CC-MAIN-20160624154957-00186-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.776314 | 216 | 2.59375 | 3 |
Rats that are socially isolated during a critical period of adolescence are more vulnerable to addiction to amphetamine and alcohol, found researchers at The University of Texas at Austin. Amphetamine addiction is also harder to extinguish in the socially isolated rats.
These effects, which are described this week in the journal Neuron, persist even after the rats are reintroduced into the community of other rats.
"Basically the animals become more manipulatable," said Hitoshi Morikawa, associate professor of neurobiology in the College of Natural Sciences. "They're more sensitive to reward, and once conditioned the conditioning takes longer to extinguish. We've been able to observe this at both the behavioral and neuronal level."
Morikawa said the negative effects of social isolation during adolescence have been well documented when it comes to traits such as anxiety, aggression, cognitive rigidity and spatial learning. What wasn't clear until now is how social isolation affects the specific kind of behavior and brain activity that has to do with addiction.
"Isolated animals have a more aggressive profile," said Leslie Whitaker, a former doctoral student in Morikawa's lab and now a researcher at the National Institute on Drug Abuse. "They are more anxious. Put them in an open field and they freeze more. We also know that those areas of the brain that are more involved in conscious memory are impaired. But the kind of memory involved in addiction isn't conscious memory. It's an unconscious preference for the place in which you got the reward. You keep coming back to it without even knowing why. That kind of memory is enhanced by the isolation."
The rats in the study were isolated from their peers for about a month from 21 days of age. That period is comparable with early-to-middle adolescence in humans. They were then tested to see how they responded to different levels of exposure to amphetamine and alcohol.
The results were striking, said Mickaël Degoulet, a postdoctoral researcher in Morikawa's lab. The isolated rats were much quicker to form a preference for the small, distinctive box in which they received amphetamine or alcohol than were the never-isolated control group. Nearly all the isolated rats showed a preference after just one exposure to either drug. The control rats only became conditioned after repeated exposures.
Morikawa said that this kind of preference for the environmental context in which the reward was received provides researchers with a more useful way of understanding addiction than seeing it as a desire for more of the addictive substance.
"When you drink or take addictive drugs, that triggers the release of dopamine," he said. "People commonly think of dopamine as a happy transmitter or a pleasure transmitter, which may or may not be true, but it is becoming increasingly clear that it is also a learning transmitter. It strengthens those synapses that are active when dopamine is released. It tells our brain that what we're doing at that moment is rewarding and thus worth repeating."
In an important sense, says Morikawa, you don't become addicted to the experience of pleasure or relief but to the constellation of environmental, behavioral and physiological cues that are reinforced when the substance triggers the release of dopamine in the brain.
Morikawa and Whitaker have also been able to document these changes at the neuronal level. Social isolation primes dopamine neurons in the rats' brain to quickly learn to generate spikes in response to inputs from other brain areas. So dopamine neurons will learn to respond to the context more quickly.
If the control (group-housed) rats are given enough repeated exposure to amphetamine, they eventually reach the same degree of addiction as the socially isolated rats. Even from this point of comparable addiction, however, there are differences: it takes longer for the socially isolated rats to kick the amphetamine addiction when they're exposed to the same extinction protocols. (They spend time in the same environments, but amphetamine is no longer available.)
"So the social isolation leads to addiction more quickly, and it's harder to extinguish," said Whitaker.
Whitaker said that the implications of these findings for addiction in humans are obvious. There is a rich literature that documents the negative effects of social isolation in humans, as well as a great deal of evidence that addiction in rats and humans is functionally similar at the neurological level.
"It's not a one-to-one correlation, but there are socially impoverished human environments," she said. "There are children who are neglected, who have less social input. It's reasonable to make guesses about what the impact of that is going to be."
Morikawa points out that their findings may also have implications for how social isolation during adolescence affects conditionability when it comes to other kinds of rewards.
"We think that maybe what's happening is that the brain reacts to the impoverished environment, to a lack of opportunities to be reinforced by rewarding stimuli, by increasing its sensitivity to reward-based conditioning," said Morikawa. "The deprived brain may be overinterpreting any reward it encounters. And if that's the case, it's likely that you are more conditionable not only to drugs but to any sort of reward, including food reward. One interesting possibility is that it might also make adolescents more prone to food 'addiction,' and then to obesity."
In this project I wanted to find out how lubricants affected the mechanical advantage of a pulley. To find this out I first gathered all of the needed materials. Then I rigged a five-pulley system with a twelve-pound weight on the bottom. I used a spring scale to pull up the weight on the five-pulley system, read the scale, and recorded in my notebook the ounces of force required to lift the 12-lb weight off the ground. Then I divided the actual weight by that number to get the mechanical advantage of the pulleys. Finally I washed off the pulleys, added a lubricant to the axles, and repeated the process.
The data shows a very small difference between the lubricants, but that difference could mean the world if you were using a hundred-pulley crane. It shows that lubricants increase mechanical advantage because they reduce friction. It also shows that WD-40 was the best lubricant I tested: it did the best job of reducing friction and came the closest to the calculated mechanical advantage. By doing this it proved my hypothesis correct. It also decided the next lubricant I am going to use on my bike axle and my roller blades.
This project was a good success! It proved that WD-40 was the best lubricant I tested and that it came the closest to the calculated mechanical advantage. I chose this project because I have always been interested in pulleys and lubricants. When I started I didn't know it would get this complex, but I rose to the challenge. It proved to be a good and important project.
Which lubricant has the most effect on the mechanical advantage of a pulley? Which one comes closest to the calculated mechanical advantage? Does friction have an effect on the mechanical advantage?
I predict that in my tests the actual mechanical advantage of the pulleys will not be the same as the calculated mechanical advantage, because of friction on the pulleys. I know the calculated mechanical advantage will be eight, but I predict the actual one will be lower. I also predict that the best lubricant will be WD-40 because it is the slipperiest, and that WD-40 will come the closest to the calculated mechanical advantage.
I learned about friction in my background reading. Friction is the heat and resistance caused when one object rubs against another, and it slows down movement. I also learned about lubricants: a lubricant is a friction reducer. The very best lubricant is the cartilage in the human body, which reduces friction almost completely. How it does this is a total mystery, because cartilage only works when the body is pumping blood. We have found other friction reducers, things that we use in our everyday life, like WD-40, wax, motor oil, and soap.
In my reading I found that a pulley has a wheel with a rope or belt passing over it. Pulleys are useful because they make pulling things easier or change the direction of the pull. A pulley's advantage is called mechanical advantage. A pulley can be rigged in different ways, which affects its mechanical advantage. If a pulley is hung vertically and has more than one wheel, it is called a block and tackle. If there is only one pulley and you are pulling down, it is called a fixed pulley; a fixed pulley does not create mechanical advantage, it only changes the pulling direction. Another rig is a single pulley attached to the load; this rig has only one pulley and you pull up.
Mechanical advantage is the sacrificing of distance for effort. The pulley makes you pull farther but divides the force equally throughout the longer distance: you end up pulling farther, but less force is needed. For example, if you have two pulleys and you lift a weight one foot off the ground, you will have to pull back two feet. You therefore use half the effort, and the mechanical advantage is two.
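The distance-for-effort tradeoff above can be stated as simple arithmetic: the ideal mechanical advantage is the length of rope pulled divided by the height the load rises. A quick sketch, ignoring friction (the function names are mine, not from the report):

```python
def ideal_mechanical_advantage(rope_pulled_ft, height_lifted_ft):
    # Work in equals work out when there is no friction:
    # F_pull * rope_pulled = load * height_lifted,
    # so the effort is divided by rope_pulled / height_lifted.
    return rope_pulled_ft / height_lifted_ft

def effort_needed(load_lb, rope_pulled_ft, height_lifted_ft):
    return load_lb / ideal_mechanical_advantage(rope_pulled_ft, height_lifted_ft)

# The example above: pull 2 ft of rope to lift the load 1 ft.
print(ideal_mechanical_advantage(2, 1))  # → 2.0 (half the effort)
print(effort_needed(12, 2, 1))           # → 6.0 lb to lift a 12 lb load
```

With friction present, the measured effort is higher than this ideal value, which is exactly what the experiment goes on to test.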
One wooden frame,
Two metal hooks
Four three wheeled pulleys that have the same design
Four two wheeled pulleys that have the same design
30 feet of non-stretch rope
12 pound weight
Acme vegetable oil
Dawn dish soap
Spring scale with a maximum weight of 15 pounds measuring in one-ounce increments
- Wash pulleys to remove any dirt or lubricants.
- Take the pulleys and mount them on a wood frame about two feet tall and one foot wide. Screw the hooks into the top of the wooden frame.
- Hang the pulley that has three wheels on the hook, putting the ring on the hook.
- Attach the weight to the bottom pulley that has two wheels.
- String rope between the hanging pulleys and the bottom pulleys starting at the bottom.
- Attach spring scale to the loose end of the rope and pull until the weight is off the ground.
- Read the number on the spring scale and record the data.
- Repeat steps six and seven five times.
- Repeat step 1 then lubricate pulleys with one of the lubricants.
- Repeat steps 3-7 for each lubricant.
The friction data table shows that WD-40 was the best lubricant, with an average of 23.2 oz of effort needed to lift the weight. It did the best job of reducing friction, causing less drag on the pulley wheels and making the rope easiest to pull. The second-best lubricant was the soap, with an average of 26.8 oz of effort. This shows that soap can make a good lubricant but might not be the best thing to use on important equipment like mass-production machines. The third-best lubricant I tested was the vegetable oil, with an average of 28.6 oz of effort. Last, the control group averaged 29 oz of effort. These numbers show the effect of friction on surfaces and what lubricants can do to reduce it. They also indicate that WD-40 helps the pulleys come the closest to the calculated mechanical advantage, if not surpass it.
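Those averages can be turned into mechanical-advantage figures directly by dividing the 12 lb load (192 oz) by each average pulling force. A short check using the numbers reported above:

```python
LOAD_OZ = 12 * 16  # the 12 lb weight, converted to ounces

avg_effort_oz = {       # average pulling force per lubricant, from the data
    "control":    29.0,
    "veggie oil": 28.6,
    "soap":       26.8,
    "WD-40":      23.2,
}

for lubricant, effort in avg_effort_oz.items():
    ma = LOAD_OZ / effort  # actual mechanical advantage = load / effort
    print(f"{lubricant:10s} actual MA = {ma:.1f}  (calculated MA = 8)")
```

Run this and WD-40 comes out at about 8.3 while the unlubricated control sits near 6.6, matching the conclusion that WD-40 comes closest to (and even slightly exceeds) the calculated value of 8.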
The mechanical advantage chart shows the same outcome the friction chart does: WD-40 comes closest to the calculated advantage. This is a good result considering that no lubricant completely eliminates friction. It may not seem important because the difference is so small, but if you were using a thousand pulleys, as on a crane, it would mean the world to you, because that small gain in mechanical advantage would build up to be big.
Overall my hypothesis was correct: it stated that WD-40 would lubricate the pulleys best, causing them to perform closest to the calculated mechanical advantage, and WD-40 was indeed the best friction reducer. This data shows that my project was a success and that my hypothesis was proven correct.
In this project I wanted to find out how lubricants affected the mechanical advantage of a set of pulleys. I set up a five-pulley system whose mechanical advantage I calculated to be eight; I got the calculated mechanical advantage by pretending I pulled back ten feet and dividing that by how far the weight rose off the ground. Then I actually ran the tests with four groups: control, WD-40, veggie oil, and soap. The data showed that friction affected the mechanical advantage of the pulleys. WD-40 was the best lubricant because it took an average of only 23.2 oz of pulling force. The hardest group to pull was the control group, with no lubricant at all, at an average of 29 oz of pulling force.
After I had collected all of the data, I divided the 12-pound weight (192 oz) by the ounces of pulling force it took me to pull it up. This calculated the mechanical advantage for each lubricant. Then I compared the calculated to the actual mechanical advantage. I found that my hypothesis was correct: WD-40 did the best, beating the calculated value by three tenths, probably because my spring scale was only accurate to whole ounces. WD-40 had the most effect on the mechanical advantage. Friction affected the mechanical advantage too: as my graph shows, the control group had the very worst mechanical advantage because its friction wasn't reduced in any way, so it was the most affected by friction. This goes to show that friction has a large effect on mechanical advantage.
I chose this project because I have always been interested in pulleys and have wondered how they work. The information I gathered would be important to many people, like a bridge engineer who has to keep his suspension cables taut with 500 pounds of pulling force. If he has only one man to pull the cable, the man can't create enough pull alone, so the engineer has to rig up a pulley system to multiply the man's pulling force. Many trades need to know this information to be successful in their line of work.
Overall I think my project was a good success and should be looked into further. Next time I do this kind of project I should get more accurate equipment, and I would run more tests with more lubricants. From this and the data I conclude that my hypothesis was correct.
Dart, Gareth (2007) Provision for Learners with Special Educational Needs in Botswana: a Situational Analysis. International Journal of Special Education, 27 (2). ISSN 0827-3383
This paper considers the support of children with special educational needs in Botswana. A variety of sources, including policy documents, literature, statistical data, interviews with key personnel and observation, are used to analyse the context and delivery of provision. Botswana is a middle-income country that has seen rapid economic expansion in a short period of time. Revenue has been used to expand the social sector, including education. In the last decade HIV and AIDS have become a huge socio-economic challenge. Attitudes towards people with disability appear to be becoming more inclusive, but there is still evidence that many remain on the fringes of society. There are strong policy statements on the provision of special education. Policy has moved to an inclusive model, but practice lags in the field. Most provision for children with special educational needs is in units for particular categories of disability attached to ordinary schools, the majority of them at primary schools. There is a drive to build support for pupils in the ordinary school and classroom. This is slow in developing but might speed up in the near future, as all newly trained teachers now have elements of special needs education as part of their initial training. Non-governmental organisations play a key role in pre-school and vocational training for students with special needs, and also in provision for hearing impairment; however, they struggle with funding and staff training. Issues of poor coordination between key stakeholders, a lack of curriculum development and a shortage of specialist staff all conspire to limit the effectiveness of provision. A number of studies have highlighted these issues and suggested improvements that could be made.
This article has been made available online by the International Journal of Special Education.
Uncontrolled Keywords: Special Education, Inclusion, Botswana, Africa, Situational Analysis
Subjects: L Education > LB Theory and practice of education; L Education > L Education (General)
Divisions: Academic Departments > Institute of Education
Depositing User: Gareth Dart
Date Deposited: 14 Jul 2008 08:22
Last Modified: 31 Mar 2010 05:00
Heat & Temperature
When the temperature is near absolute zero, i.e. near -273 Celsius, every element's box in the periodic table displays an S except helium's, which displays an L. This shows that at absolute zero (and at 1 atmosphere of pressure) every element is a solid except helium, which is still a liquid. As the temperature increases, first one element, then another, turns from solid to liquid. Then the helium box displays a G, indicating that it has reached the temperature at which it becomes a gas. As the temperature increases further, the elements turn from solids to liquids and then to gases. The pattern of their change is intriguing to watch. At any point the visitor can reduce the temperature and watch the elements condense and solidify as the temperature decreases.
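In code, each box's display amounts to a small phase lookup from the element's melting and boiling points. A sketch (the function name and sample temperatures are illustrative; helium's melting point is given as None because it never solidifies at 1 atmosphere):

```python
def phase(temp_c, melt_c, boil_c):
    """Return 'S', 'L', or 'G' for one element box at 1 atmosphere.

    melt_c is None for helium, which has no freezing point at 1 atm:
    it stays liquid all the way down toward absolute zero.
    """
    if boil_c is not None and temp_c >= boil_c:
        return "G"                       # above the boiling point: gas
    if melt_c is None or temp_c >= melt_c:
        return "L"                       # above melting (or helium): liquid
    return "S"                           # otherwise: solid

# Illustrative values in degrees Celsius (helium boils near -269).
print(phase(-273, melt_c=None, boil_c=-269))  # helium near absolute zero -> L
print(phase(-268, melt_c=None, boil_c=-269))  # slightly warmer helium -> G
print(phase(25, melt_c=0, boil_c=100))        # a water-like substance -> L
```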
This exhibit shows people the relationship between the pattern of the periodic table and the pattern of freezing and melting temperatures of the elements.
Salt and polyelectrolyte affect on food colloid function: polygalacturonic acid and egg albumen case studies
The primary purpose of this study was to (1) provide information on the chemical nature, including turbidity formation, of polygalacturonic acid in different concentrations of NaCl solution; (2) use turbidity formation characteristics to develop an assay for determining the activities of polygalacturonic acid depolymerases by using a polyelectrolyte; (3) provide information on the chemical nature, including protein aggregation, of egg albumen at different incubation temperatures; and (4) develop a series of egg experiments to demonstrate the chemical nature of eggs. The initial phase of the study tested the solubility characteristics of polygalacturonic acid in different NaCl concentrations and at different incubation temperatures, in order to find the optimum incubation temperature and NaCl concentration that form a turbidity suitable for determining the activity of polygalacturonic acid depolymerases. The results demonstrate that the commercial polygalacturonic acid product tested forms a stable turbidity when the incubation temperature is 60°C or higher, and has a critical turbidity concentration of 0.25M NaCl, at which more time is needed to form a stable turbidity. This observation has the potential to support a new turbidity assay for determining the activity of polygalacturonic acid depolymerase. To develop another turbidity assay for polygalacturonic acid depolymerase, poly(diallyldimethylammonium chloride) (PDADMAC) is used as the polycation to react with the polygalacturonic acid. The flocculation forms a quantitatively determinable turbidity that is proportional to the polygalacturonic acid concentration. The assay developed has been successfully used to compare the polygalacturonic acid depolymerase activities of fresh and molded tomatoes, and can be used by quality control laboratories to monitor polygalacturonic acid depolymerase activities.
Egg white protein behavior in different NaCl concentrations and at different incubation temperatures, as studied here, is important for different egg products. The results revealed that high NaCl concentrations (2M and higher) stabilize ovalbumin, the major protein of egg white. This provides additional basic protein information to the egg industry.
School: Oregon State University
School Location: USA - Oregon
Source Type: Master's Thesis
Keywords: polygalacturonase, albumins, pectin
Date of Publication:
Note: The OMPTL is now included in the
CVMLCPP library, but
separate releases of the OMPTL alone will still be made available here.
Parallelization of C++ applications
The University of Geneva has developed two libraries to parallelize
C++ applications in a way that is very, very easy for developers. The two
libraries, OMPTL and MPTL,
both serve the same purpose and operate by similar principles, but use
different underlying technology. The MPTL has been developed in the
'Scientific and Parallel Computing' group, and the OMPTL has been
developed in the 'Stochastic Image Processing' group.
The exchange of insights, key to the success of both libraries, is a
hallmark proof of the importance of coffee-breaks during work.
In a nutshell...
With 'Dual-Core' and other multi-core
processors on many desktops, and more to come, current software must be
parallelized to take advantage of the available hardware. Parallelizing
programs is a non-trivial task. The technology developed here at the
University of Geneva greatly eases the task of changing existing
software to take advantage of parallel processing, even if that software
was not originally designed to run in parallel.
In more technical words...
The OMPTL and MPTL re-implement the 'algorithm' and 'numeric'
parts of the C++ Standard Template Library. The range is partitioned,
then the computation is executed in parallel. The MPTL uses POSIX threads;
the OMPTL uses OpenMP.
Imagine the following piece of code, which is serial:
This example is the parallel code with OMPTL:
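The listing itself is not reproduced here; a minimal sketch of the OMPTL form, assuming the library's headers are installed under `omptl/` (the container and data are illustrative):

```cpp
#include <omptl/omptl_algorithm>  // OMPTL counterpart of <algorithm>
#include <vector>

int main() {
    std::vector<int> v = {5, 3, 1, 4, 2};

    // Same iterator-based signature as std::sort, but in the omptl
    // namespace: the range is partitioned and sorted by OpenMP threads.
    omptl::sort(v.begin(), v.end());
    return 0;
}
```

Note that the only change from the serial version is the header and the namespace; as explained further down this page, without OpenMP support the call simply forwards to its std:: counterpart.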
This example is the parallel code with MPTL:
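As a sketch only — the header path below is a hypothetical placeholder, not taken from the MPTL documentation — the MPTL form would mirror the STL interface in the same way:

```cpp
#include <mptl/mptl_algorithm>  // hypothetical header name; check the MPTL docs
#include <vector>

int main() {
    std::vector<int> v = {5, 3, 1, 4, 2};

    // Assumed to mirror std::sort's signature in an mptl namespace,
    // with the work distributed over POSIX threads.
    mptl::sort(v.begin(), v.end());
    return 0;
}
```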
The MPTL requires that you link your program with pthreads.
The OMPTL requires that you compile your programs with an OpenMP-capable compiler,
for example the Intel(C) compiler or GCC 4.2.
To compile OMPTL code with a compiler that does not support OpenMP, or
if you wish to disable OpenMP for another reason, simply compile without OpenMP.
All calls to algorithms will be redirected to their std:: counterparts.
Contrary to what one might expect, the OMPTL is not at all eager to
execute tasks in parallel. The truth of the matter is that
parallelization tends to introduce overhead and a loss of efficiency. In
many cases, using a serial version of an algorithm is simply the better
choice, a testimony to the excellent quality of the Standard Template
Library. Secondly, even if parts are executed in parallel, the application will
only undergo a significant speedup if the parallelized work represents a
significant part of the computation required by your application.
Thirdly, each call to an algorithm must be on a sufficiently large
range, not many successive calls on small ranges. The fourth restriction
is that only calls to the STL's 'algorithm' and 'numeric' components are
parallelized, so if your code does not use these, it will not benefit.
And the last bad news: not all algorithms are parallelized yet, and
some never will be.
Having said all these bad things, there is no penalty for using the
OMPTL, and changing your code to use the OMPTL is extremely easy, so you
really have only to gain from using it. If your application uses
time-consuming operations on large data, such as in Image Processing, you
will definitely be interested.
The OMPTL and MPTL are free, open source software, available under the
LGPL, the 'Lesser GNU Public License'. The license itself is the
authoritative text; here follows a non-authoritative summary:
If you find a bug in the OMPTL, please send a clear report, and if
possible a patch, to Fokko dot Beekhof at
unige dot ch. For bugs in the MPTL, see its official
web page, or send an email to this address. We do not have much time to work
on this, so patches are preferred.
1. (Of land) having been formed into a number of level areas resembling a series of steps.
- In Bhutan, cows and yaks graze, rice and wheat grow, and people live up and down terraced land that seems to kiss endless blue sky above and melt into pristine waters below.
- Already, Glover has started clearing one of his hillsides and developing contoured fields similar to the terraced strips of land the Hmong worked in the mountains.
- Where land is life, highlanders turn mountainsides into terraced fields.
2. chiefly British (Of a house) in the style of a row house.
- Six of the houses would be affordable low-cost homes, with the remainder a mix of two and four-bedroom houses and semi-detached and terraced housing.
- Firefighters had to don breathing apparatus when they went into the terraced house, where they finally found a fridge was leaking ammonia, a colourless gas with a pungent smell, which is also toxic, corrosive and explosive in air.
- One of the main problems is that many of these urban terraced houses only have access to the garden area through the house.
The Ten Parenting Commandments
Part of the Parenting For Dummies Cheat Sheet
Once you become a parent, your job comes with important duties (whether you want them or not): take good care of your children, keep them safe, and develop a good relationship with them. A few basic rules apply to every parent:
Thou shalt not make anything more important than thy children.
Thou shalt work to have great patience during times of stress.
Thou shalt feed thy children healthy, nutritious food.
Thou shalt nurture and love thy children.
Thou shalt work to protect and keep thy children safe.
Thou shalt communicate openly and honestly with thy children.
Thou shalt make of thyself a good example at all times.
Thou shalt treat thy children with respect.
Thou shalt not shake nor physically try to hurt thy children.
Thou shalt be a good friend and supporter of thy co-parent.
Vizooalization: 5 things I have learned about visualization at the zoo
On Saturday we went to the Columbus Zoo; apart from enjoying the lions, penguins, and birds, I also learned 5 simple yet effective lessons in visualization.
1. Design your charts for kids – assume your audience knows the bare minimum and is curious
Shown above is a typical zoo-mate info dashboard that you can see next to any animal in the zoo. All the information that a kid needs to know to enjoy the reptile is shown there: a photo, whether the snake is venomous, what it eats, how endangered it is on a gradient scale, and where it is usually found.
2. Highlight information that is crucial, leave the rest
This is “How to read a Chameleon” next to the chameleon cage; it quickly tells you how to understand what you are seeing. But if you need to know more about the colors and the research behind them, there is a bigger section beneath this.
3. Use images, examples that your audience can relate to
This is the “measure your ecological footprint” exhibit. They have actually shown shoes of various sizes to compare the ecological footprint of various nations – India, China, world average, US (not shown) – so that kids can easily understand the point.
4. Remember, your audience is there to look at you and listen to your story; the chart is just an enabler
This is outside the area where tigers are kept. Obviously very few people stopped at this infographic; people are all curious to see the tigers. But this chart does make a very good point about declining tiger habitat across the world. They have used just two colors to convey the point – no gradients or anything, just two colors, one for where tigers thrived earlier and one for where they are today – and the point is obvious even to a kid.
5. Finally, have fun
Outside the building where reptiles were kept, I found this. Having fun in what you do is as important as telling your story and selling your ideas.
Happy charting 🙂
PS: excuse the poor quality of images, it was dark in most of the places.
Domesticated ducks are derived from wild mallards (Anas platyrhynchos-type) and Muscovy ducks (Cairina-type). Some breeds are specifically designed for egg production, others for meat production, and still others are ornamental.
Duck Care Tips
They range widely in size, shape, and color. Some lay white eggs, others brown, and still others rainbow colors. However, regardless of the breed(s) of ducks you may own, their basic needs are the same: a good quality diet, protection from the elements and predators, and a clean environment.
Good nutrition is crucial to keeping your ducks healthy. All ducks should be fed a pellet or crumble diet formulated to meet their specific nutritional needs. Follow the guidelines below to find the right food for your duck(s). It is preferable to feed a diet created specifically for waterfowl, such as Mazuri feeds (made by Purina).
If your local feed store doesn't carry this brand, ask them to order it for you. Nutrena Waterfowl is an alternate brand.
Ducklings and adolescent ducks: Young ducklings grow very quickly and need food that is high in protein and calories. For the first 3 weeks of life, feed a starter diet that is 18-20% protein. Make sure the pellets are small enough for ducklings to eat (about 1/8"), or use crumble. From the fourth through the 13th week, switch to a maintenance diet with 14% protein.
After the 14th week, females should be switched to a layer pellet or crumble diet.
Drakes and non-laying ducks: Drakes and ducks that are not producing eggs should be fed a maintenance diet that is 14% protein. While a laying ration is critical to the health of laying ducks, this diet is too high in calcium and protein for non-laying ducks. Switch your ducks to a maintenance diet when they stop laying eggs (usually in the fall).
Laying ducks: Due to the large number of eggs they can produce (many more than wild birds do), laying ducks have very high requirements for calcium and protein, and must be fed a layer or breeder diet. Laying diets for ducks are typically 16-17 per cent protein and higher in calcium than are maintenance diets.
Treats: Treats should be given in small amounts. Although many ducks love foods like corn, carrots, and greens, these foods do not meet all of your ducks' nutritional needs, and should comprise no more than 5-10 per cent of the diet. Allowing your duck to graze in your yard is fine, provided that your yard is pesticide-free.
Grit: Ducks that have access to dirt will pick up small stones on their own and don't need supplemental grit offered. If your ducks are kept in enclosures without access to dirt, sprinkle a small amount of grit on their food once a week.
Oyster Shell: Oyster shell is not usually necessary if your laying ducks are on a good quality-laying ration. If despite a good diet your ducks are laying pitted or thin-shelled eggs, you may choose to supplement the laying diet with crushed oyster shell.
What not to feed: Do not offer medicated feeds designed for other species, as these can be toxic to ducks. Do not feed oyster shell to non-laying ducks or to drakes, as this can lead to kidney disease. Do not feed moldy or insect-ridden foods.
BUYING AND STORING FEED
To ensure your duck receives the best diet possible, buy small quantities of a fresh, high quality feed on a regular basis. Unless you have a large flock, avoid buying very large (50 lb.) bags of food; vitamins degrade with storage, and you are better off buying smaller quantities more frequently.
Note the expiration date on the food you purchase, and do not feed after that date. Be wary of unknown brands; to ensure that you are purchasing a quality product, stick with a reputable manufacturer such as Purina, Mazuri, or Gunter.
It is important to store your duck's food in a tightly-sealed plastic can that is kept in a cool, dry location.
Avoid storing food in metal containers, as metal "sweats" and can lead to the growth of toxic molds. Protect the food from light and heat, as these can cause rapid breakdown of vitamins that are crucial to your duck's health.
Insects and rodents carry bacteria and parasites that can make your birds sick, and should not be allowed to come in to contact with your duck's food. Discard any food that is moldy, moist, or old.
If there is any question about the freshness of your duck's food, remember this basic rule of thumb: when in doubt, throw it out. Ducks should get plenty of fresh feed daily.
Extra feed should be disposed of daily, especially in wet or humid weather. Many ducks like to dip their pellets in water while eating, and should have a bowl of fresh water nearby.
Access to clean, fresh water is very important for all ducks. Ducks need a constant supply of fresh water for drinking, as well as access to water for swimming and bathing. Ducks are very susceptible to botulism, and for this reason it is important that their water supply is kept clean and well aerated.
In artificial ponds or pools, this means regularly removing dead leaves and other plant matter, as well as periodic draining and cleaning.
In a natural pond, this means making sure the water is not polluted by household or industrial effluents, and removing any dead animals from the water.
Plastic or stainless steel food and water dishes are safest for your birds. Zinc-coated, or galvanized, metal dishes can leach heavy metals into the water and food supply, causing toxicity. Wash containers with soap and water daily. Containers are best placed in a location where the birds cannot defecate in them. Drip watering systems are difficult to clean, and encourage buildup of bacteria, algae, and parasites. We do not recommend their use.
Ducks need a well-built shelter that will protect them from cold, wind, and rain. It must also be strong enough to protect them from predators. Don't underestimate the strength of a raccoon or bobcat when it has a duck dinner in mind. Digging predators can easily burrow into an insecure pen from below.
Predators can also come from above, so make sure you provide protection from raptors. Since ducks are messy, make sure the shelter is easy to clean.
A concrete floor works well, as it is relatively easy to wash and disinfect. Concrete alone, however, is too rough for duck feet, and the floor must be covered with a thick layer of sand, pea gravel, peat moss or straw. It is important that ducks be able to sit with their abdomens on a dry surface, and litter must be changed regularly, or fresh litter strewn over it, to remove fecal material.
If you need more information on building a safe enclosure for your ducks, there are numbers of books available that can provide you with step-by-step guidelines.
It is important to closely observe your duck's health every day. Birds are especially adept at hiding illness and injury. Frequent observation will allow you to get to know what is "normal" for your ducks, and allow you to determine if something is wrong.
Questions to keep in mind when evaluating the health of your duck(s) include:
-- Has there been a change in behavior or appetite?
-- Do they feel abnormally thin?
-- Do you see any parasites on their skin?
-- Do their droppings look normal?
-- Are they limping, or reluctant to stand?
-- Are they suffering from a loss of balance?
-- Is their appetite for favorite foods suddenly diminished?
-- Are they ruffled (fluffed up) and apathetic?
Any of these changes are signs that your bird is sick, and you should consult a certified avian veterinarian for advice.
ILLNESS & DISEASES
Also called slipped wing, the cause of angel wing is not conclusively known. There are two basic theories, both concerning overfeeding and proximity to humans. The first involves too much protein, the second too many sugars. To prevent angel wing, waterfowl should not be fed bread, popcorn or other human foodstuffs, especially in the first six weeks of life.
As birds grow, the (wrist) joints can lag in their development relative to the rest of the wing, possibly due to the weight of incoming flight feathers. If only one wing is affected, it is usually the left one. The result is a wrist which is twisted outwards and unable to perform its usual function, with feathers that are out of alignment, resulting in wings at odd angles. If caught early, before young birds reach full growth, wrapping the wing and binding it against the bird's flank, together with feeding the bird a more natural diet, can mostly reverse the damage.
Ducks and geese are attracted to shiny things like pennies, screws, nails, staples, coins of any type and other metal objects. They will swallow them. This is extremely dangerous and most often fatal. As it is digested, the metal (and coating on various metals) poisons the bird, harming the kidneys and liver.
It can take days and even weeks for any symptoms to become apparent, at which point it is often too late. Early symptoms of hardware disease include a bird that keeps to itself, away from others, and eats less. Fecal matter can be fluorescent green (though this can also be a sign of other infection). Eventually the bird cannot walk, or walks and falls down. If these symptoms occur, the bird must go to a vet immediately. It must have an X-ray to see if any foreign objects have been eaten. There is medication that is given to try to rid the bird of the poisons, but most often surgery is needed to remove the objects. If you even suspect your pet has swallowed metal, an immediate X-ray may be your only chance of saving him or her.
This is a fungal infection that most often affects the lungs. Ducks and geese eat moldy feed or sleep on damp bedding and inhale Aspergillus fungal spores. Birds with aspergillosis can die very quickly, and it is an extremely difficult illness to treat. The best way to avoid the disease is to ensure birds have a clean, dry living area and to throw out any food you suspect may have become moldy.
Bumblefoot is caused when a bird's feet get scuffed or cut and infection sets in. The most common cause is too hard a living surface (such as concrete). It is very common in indoor birds with poorly sized perches, and in caged rats, guinea pigs and hamsters whose cages have hard surfaces. The foot pads swell and redden, and may feel warm to the touch.
In all cases, antibiotics are usually recommended. In severe cases, the "bumbles" need to be opened up and cleaned, then dressed and kept dry.
FIND AN AVIAN VETERINARIAN
Finding a veterinarian who will see ducks and geese is not an easy task. Certified avian veterinarians undergo extra training to meet the special needs of waterfowl and other pet birds. Also, avian veterinarians who have extensive experience with indoor pet birds like parrots or parakeets may not often treat ducks or geese.
It is important to find a veterinarian who has the experience necessary to treat your ducks. To find an avian veterinarian, try the search fields on one or both of these websites:
Association of Avian Veterinarians
In 1963, NRL astronomers made the first positive identification of discrete sources of stellar X rays. A new NRL-developed X-ray detector system was flown on an Aerobee rocket, and the result was the discovery of two X-ray sources - Scorpius X-1 and the Crab Nebula. These findings suggested the possibility that the source of the X rays was a neutron star, a densely packed body of neutrons formed from the collapse of a star. NRL scientists wanted to prove this hypothesis, and in 1964 NRL conducted an experiment on an Aerobee flight during the occultation of the Crab Nebula by the moon. NRL's data did not confirm the neutron star theory, which in turn spurred more intensive investigations. As a result, between 1964 and 1973, 125 discrete sources were discovered, including supernova remnants, pulsars, radio galaxies, and quasars. Specific NRL contributions included:
- the first X-ray detection of a pulsar in the Crab Nebula in 1969;
- the detection of X-ray galaxies during Aerobee flights in 1967 and 1968;
- the compilation of the first comprehensive galactic X-ray sources map;
- the discovery of a distinctive difference in time behavior between soft and hard X rays in 1971; and
- the discovery of the variability of Cygnus X-1, a possible black hole in the Cygnus constellation.
The rapid development of X-ray astronomy, combined with developments in infrared, ultraviolet, and cosmic-ray investigations, led in the 1970s to the utilization of satellites for high-energy astronomy research. In 1972, NASA initiated the High Energy Astronomy Observatory (HEAO) program to study cosmic-ray, X-ray, and gamma-ray sources in deep space. NRL was selected to develop one of the four instrument packages to be flown on the HEAO I, which was launched in August 1977. The NRL package, the Large Area X-Ray Survey Array, was the largest space instrument ever to be flown on any satellite. Consisting of seven modules of large-area proportional counters, the instrument mapped the entire sky for high-energy sources, which included radio pulsars, binary pulsars, black holes, quasars, and extragalactic X-ray sources, resulting in a new map of nearly 1000 discrete X-ray sources.
In June, President Obama's landmark speech at Georgetown University delivered a powerful message on the dangers the world faces from global warming – extreme weather, rising seas, increased wildfires, epic droughts. "We will be judged," he said, "as a people, as a society, and as a country, on where we go from here." So where do we go? In his first term, Obama enacted tougher vehicle mileage standards and doubled electricity from renewables. But he's still got a long way to go. Here are a few things the president could achieve in his second term.
Trap natural gas.
Fracking is a nasty business. Among other problems, methane leaks from natural gas wells and pipelines. One recent study found leakage rates of up to 17 percent of production. For the climate, that's a big problem: As a greenhouse gas, methane is 25 times more potent than carbon dioxide. Solution: an EPA crackdown on fugitive methane emissions. Obama is a big natural-gas booster and a quiet supporter of fracking, so this is likely to happen, but he may avoid harder problems like drinking-water pollution or wastewater disposal.
Posted on Tuesday, October 28, 2003 - 17:19:
How does Chinese medicine regard milk in terms of energy and nutrition?
Posted on Wednesday, October 29, 2003 - 12:59:
Milk which is fresh from the goat or cow and which is not industrialized is considered a tonifying and nutritious food. It contains plenty of nutritious Qi and is very suitable for infants, or as a rehabilitating food after prolonged illness, childbearing, etc. Yet very nutritious foods such as milk, honey, eggs, etc., must not be consumed in excess. If they are, they usually achieve the opposite goal and place a heavy burden on our system. Plenty of milk, or milk products, tends to cause excessive dampness or phlegm, thereby inducing allergies, stomach problems, infections, skin problems and more. Those who have a weak or malfunctioning digestive system should eliminate milk products from their diet. Those who suffer from allergies, weak kidneys, breathing system disorders and the like also need to refrain from milk.
There is another thing that must be considered. Industrialized milk products usually contain substantial traces of steroids and antibiotics that are given on a routine basis to the milk-producing animals in modern countries. These elements may severely harm allergic or otherwise weak and sensitive people. They adversely affect the Qi quality of the milk product as much as sterilization, pasteurization, or refrigeration...
As for the calcium content of milk, this can easily be substituted by the more readily digested soy products, tehina, sardines/tuna, broccoli and more.
Epigenetics Shapes the Future of Health
By Matt Windsor and Emily Delzell | Illustrations by Ron Gamble
Trygve Tollefsbol believes you can change your destiny—with broccoli. The UAB biologist, a pioneer in the booming field of epigenetics, has the data to make his case. In a widely publicized review paper published this spring in the journal Clinical Epigenetics, Tollefsbol and colleagues at UAB explained how a diet rich in broccoli, green tea, grapes, and other key ingredients can fight off cancer and other aging-related diseases.
UAB scientists are hardly the first experts to tout the health benefits of “superfoods” like leafy vegetables and wine. But epigeneticists like Tollefsbol explain how they help on a genetic level. Their investigations offer new insights on ways to slow the aging process, reduce cancer risk, and more.
Story continues after video.
Tollefsbol, who holds doctorates in molecular biology and osteopathic medicine, has published eight books on epigenetics, with more on the way. He is a leader in a discipline that contains a heartening message of biochemical empowerment. Epigenetics is the study of factors that affect your genes without changing the underlying DNA code. To put it another way, epigeneticists try to understand how the genetic instructions contained in our DNA are carried out in the real world.
“The most important concept of epigenetics is that you can take control of your genes,” Tollefsbol says. “What you do affects your genes. In other words, you're not predestined to a certain life because of your genetics, as we once thought. The genes you get from your mother and father aren't going to necessarily limit you for the rest of your life.”
In the grand nature-versus-nurture debate, epigenetics offers a surprising middle ground. Genes are profoundly important, epigeneticists say, but so are environmental factors. The food we eat, the viruses we catch, and the cigarette smoke we breathe all have the power to shape our bodies' underlying structures, even when they don't directly alter our genetic code.
Chemistry and Consequences
Scientists now know that DNA doesn't have to change in order to create serious problems. The outside of the DNA helix is actually covered with a series of chemical markers that act as crucial crib notes for the body's cellular machinery. One common marker is DNA methylation, which occurs when the enzyme DNA methyltransferase adds a methyl group to the DNA base cytosine. Although the cytosine remains cytosine, the methyl add-ons make it more likely that the affected gene will be suppressed.
These epigenetic markers tell the body to ramp up or slow down gene expression, or the production of key proteins. DNA is also wrapped around large proteins called histones, whose properties also have a significant impact on gene expression. (See “Epigenetics in 60 Seconds.”)
These factors are referred to as the epigenome, since they are above and beyond, but directly linked with, the genome. The field of epigenetics has exploded in the last 10 years, as researchers at UAB and elsewhere have discovered that the epigenome plays a vital role in our lives from conception to death—and even reaches beyond the grave to influence our children and grandchildren.
Epigenetics and Cancer
“Epigenetics does not get much hotter than it is right now," Tollefsbol says. “And probably the biggest area is in cancer research.” Scientists once thought that mutations were the most important factor causing cancer, Tollefsbol explains. “Now the prevailing opinion is that epigenetic modifications probably cause cancer more often than mutations do.”
The body has a host of tumor suppressor genes, “which put the brakes on cells and keep them from proliferating,” Tollefsbol says. “Research has shown that these genes can undergo key epigenetic changes so that we become more predisposed to developing cancer.” The gene p53, for example, plays a role in “at least 50 percent of cancers,” Tollefsbol notes. “Essentially, what epigenetics does is turn genes on or off, and a lot of cancer—and aging, for that matter—has to do with the turning on or off of particular genes.”
Enter the Telomere
Another crucial target for epigenetics researchers are tumor promoters, especially the telomeres. One of the most intriguing structures in all of biology, the telomere resides at the endpoints of the chromosomes. These repetitive sections of DNA (which repeat the genetic “code” TTAGGG) solve one of the great problems of mammalian cell division. Every time cells divide—and many divide daily—they lose a little bit of the ends of each DNA strand. If these losses contained important genetic information, life would quickly peter out. That’s why DNA strands are capped with telomeres, which provide a cushion of expendable DNA that can be lost with each rotation of the cell cycle.
The trouble is that “the gene for the enzyme that maintains the telomeres, referred to as telomerase, is actually inactivated before we are born,” says Tollefsbol, “which causes telomeres to get shorter and shorter with each cell division.” Telomeres are a kind of slow-burning fuse—when they run out, cells often self-destruct or become dormant. Scientists theorize that the loss of telomeres could be a major factor driving the aging process.
So what if we found a way to build telomeres back up? Many researchers, including Tollefsbol, think this could be a promising way to make people live longer. “There was a paper in Nature in November 2010 where researchers were able to continually express telomerase in a mouse model,” Tollefsbol says. “The mice lived longer and had fewer diseases. So there is hope that telomerase may be able to extend our lifespan.”
Unfortunately, there is a piranha lurking in this fountain of youth: cancer. “Cancer cells are addicted to telomerase,” says Tollefsbol. “They need it in order to maintain telomeres and keep proliferating.” And these cells have already figured out a way to epigenetically turn on the gene that encodes telomerase, which regenerates telomeres and allows cancer cells to become effectively immortal.
“Just like in Star Wars, telomeres have a good side and a dark side,” Tollefsbol says. “There is a fine balance here, because if we turn on telomerase in people and lengthen their telomeres, we could find out that they’re getting a lot more cancers.” Telomere research is a booming area of epigenetics, and Tollefsbol’s UAB lab, which has studied telomerase and telomeres for more than a decade, is a major player in this work.
Where Nature Meets Nurture
A telomere-based approach to disease would involve miraculous feats in the laboratory, but Tollefsbol is even more excited by the simple-but-powerful epigenetic changes people can accomplish in their own homes.
“We think that environmental and lifestyle factors are probably the most interesting thing about epigenetics,” Tollefsbol says. “Life affects epigenetics: What you eat, whether or not you smoke, your exposure to the sun—all of these bring changes in the epigenome.”
The importance of environmental factors was highlighted in a seminal 2005 study in the Proceedings of the National Academy of Sciences. The paper explained why identical twins, who share the same genes, aren’t carbon copies of each other: Their epigenetic markers are different from birth, and they continue to diverge as twins age.
“The reason for that is the interaction between epigenetics and the environment,” Tollefsbol says. “Maybe one of the twins stayed out in the sun at the beach more than the other and got more wrinkled skin or developed cancer. Studies like this showed that the epigenetic expression of many genes changes each year.”
Scientists once thought that epigenetic changes were temporary, but it is now clear that they can be passed down to the next generation—and beyond. “We are becoming very interested in ‘transgenerational epigenetics,’ or how epigenetic changes can be transferred from the mother to the fetus,” Tollefsbol says. “We do have studies that indicate that what a mother eats while she is pregnant can affect the genome of the fetus that she is carrying. In fact, it can have a major influence on that individual throughout his or her life.”
In one famous epigenetics paper, Swedish scientists reported that the grandchildren of men who had survived famines in a rural county in Sweden in the 19th century were more likely to live longer—six years on average—than the grandchildren of men who had been born during times of relative abundance. “This is still fairly new, but there are some studies that say what your grandfather ate may influence how your life is going,” Tollefsbol says. “I think more proof is needed, but it’s exciting.”
Superfoods to the Rescue?
Dietary factors, which are a key focus of Tollefsbol’s lab, seem to be particularly important, he says. One major takeaway from that research, Tollefsbol says, is that reducing “empty calories” from sugar is beneficial for anyone. “Our work has shown that sugar can predispose a person to cancer,” he notes. “We took precancerous cells—these are cells destined to become cancer cells—and we found that when we reduced the amount of sugar in the culture where the cells were growing, it killed those cells.”
But the “real exciting” development, Tollefsbol says, was his lab’s work on a 2009 study published in the Journal of the Federation of American Societies for Experimental Biology(FASEB Journal). “We found that we could make normal cells live longer by reducing their glucose. That was the first time that was ever seen. A major part of our aging is due to epigenetic processes, and both the quantity and quality of the calories we eat can affect both aging and cancer.”
What if we could reset epigenetic markers with a drug? “There are several so-called ‘epigenetic drugs’ that have been approved by the Federal Drug Administration,” Tollefsbol says—including decitabine. “These are compounds that, for example, inhibit the DNA methyltransferases, which allows tumor suppressors to reactivate in cancer patients.” These drugs, while exciting, have so far been most successful in blood-borne cancers, which are more accessible to drug treatments than solid tumors (cancer of the breast or prostate, for example).
But as Tollefsbol and his team demonstrated in their Clinical Epigenetics paper in 2011, it is fairly easy to eat your way to better health, one cup of green tea or one broccoli spear at a time. They even coined a term for their vegetable prescription: the epigenetics diet.
The Epigenetics Diet
“Many of these dietary compounds work on the particular enzymes that cause epigenetic changes,” Tollefsbol says, “in order to moderate them and keep them in check so that they will not allow cancers to form.” The vitamin folate, found in high quantities in dark green leafy vegetables such as spinach, for example, is a methyl donor. “It keeps genes methylated, which counters the decreased methylation that comes with increasing age.”
“The epigenetics diet can be adopted easily, because the concentrations of the compounds needed for a positive effect are readily achievable,” adds lead author Syed Meeran, Ph.D., a research assistant professor in Tollefsbol’s lab.
Drinking the equivalent of three cups of green tea per day has been shown to suppress breast cancer growth in animal models, Meeran says. The equivalent of a cup of broccoli sprouts each day has been shown to reduce the risk of developing several different cancers in animals.
The beneficial effects of healthy foods aren’t restricted to cancer, either. “Compounds in the epigenetics diet foods can, at the very least, help us to lead healthier lives and help prevent potentially debilitating diseases” such as Alzheimer’s, Tollefsbol says. He is now working on a book called Epigenetics in Human Disease, which will detail the epigenetic components of Alzheimer’s and other conditions, including schizophrenia, dementia, and diabetes.
Tollefsbol says he eats a “90 percent plant diet” and has “cut down on sugar a lot” over the course of his epigenetics research, but he is quick to point out that the epigenetics diet is not a magic cure-all. “We’re not saying that all you have to do is drink a few cups of green tea and you’ll be cancer-free guaranteed,” he says. But the evidence indicates that avoiding excess sun exposure, eating a healthy diet, and adopting other behaviors that are conducive to avoiding negative epigenetic changes can pay off in the long run.
“We are learning more and more that your behavior, in terms of the choices you make during your life, has a huge effect on your body,” Tollefsbol says. “Studies indicate that the earlier you start, the better, but these behaviors are beneficial to everybody.” | <urn:uuid:5e335f12-58c3-4867-9e3c-67ab6d6929b6> | CC-MAIN-2016-26 | http://www.uab.edu/uabmagazine/epigenetics | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397749.89/warc/CC-MAIN-20160624154957-00150-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.955163 | 2,921 | 2.78125 | 3 |
FISHERIES DEPLETION - GOING,
Through 1989, when the annual global fish catch peaked at 86.1 million metric tons - a nearly fivefold increase over the recorded haul in 1950 - the notion of unlimited bounty prevailed. Since then, we've witnessed a precipitous decline, especially in the Atlantic, Pacific and Mediterranean. Canada's Grand Banks and New England's Georges Banks - once among the most plentiful fishing grounds anywhere - have undergone complete collapse. With the virtual disappearance of haddock, cod and yellowtail flounder, an emergency federal closure of more than 6000 square miles off the Massachusetts coast was ordered in late 1994, shutting down a $200-million-a-year industry.
And that's only part of the problem. The population of the majestic Atlantic bluefin tuna stands at less than 20 percent of its 1970 abundance. Large swordfish are so depleted that many restaurant steaks are likely to have been filleted from a juvenile that hasn't yet had a chance to spawn. Several salmon species are on the brink of commercial extinction. Pollock in Russia, redfish in the Caribbean, red snapper in the Gulf of Mexico - the list of troubled waters goes on and on.
At the top of the food chain, sharks play a pivotal role, affecting every marine creature below them. When overfishing depleted sharks in Tasmania some years back, their main prey - octopus - boomed and devoured so many spiny lobsters that it caused the fishery to crash. A recent marked increase in the numbers of stingrays and jellyfish along the Florida Panhandle has been attributed to a dearth of sharks, which between 1985 and 1990 became the state's fourth largest commercial fishing industry.
Industries have capitalised on lucrative new markets for shark meat and for fins exported to the Orient for shark-fin soup. And shark cartilage is being put into pill form and touted in health-food stores as a possible "cancer preventative." The cartilage trade has taken a heavy toll, notes Dr Robert Hueter of Florida's Mote Marine Laboratory. "In Costa Rica, they are harvesting more sharks in one area in a single month than are allowed under the quota for the US East Coast fishery in a year - literally hundreds of thousands. It's a real slash-and-burn mentality of the worst kind."
The result of all this? Large coastal sharks like the sandbar that migrate along the Atlantic coast are down to 15 to 20 percent of their numbers of just 15 years ago; pelagic species aren't faring much better. Things are so bad that Dr Jack Musick of the Virginia Institute of Marine Science believes that many large shark varieties ought to be classified as "threatened" or "endangered" species. But the warnings of Musick and other scientists are still being given short shrift by the federal managers.
While fish can become scarce, species have seldom disappeared entirely. The exact populations of many fish, from minnows to sharks, remain somewhat mysterious, and therefore out of the glare of publicity given to the fading tiger, rhinoceros and whale. But if we continue to vacuum the seas with nets that scoop up everything in their path, the oceans will soon be as empty as a college campus in August.
At sea 200 miles southwest of Iceland, the crew of a supertrawler big enough to contain a dozen Boeing 747 jumbo jets unloaded a staggering 50 tons of oceanic redfish into flash-freezers down below, as the Icelandic ship's captain began maneuvering against nearby Russian and Japanese vessels for the next set. Emotions were running high, as there was a lot at stake. Each ship was trawling nets with opening circumferences of almost two miles; that's the equivalent of 10 New York City blocks wide by two Empire State Buildings high. Soon the Russian boat steamed over the Icelander's net, and the Japanese trawler ripped loose the Russian's lines.
Such conflicts are now commonplace on the high seas, says Dan Middlemiss, a professor of military and strategic studies at Nova Scotia's Dalhousie University. "An important food source is being decimated [and] fish have become something seen as worth fighting for," he says. The global industrial fishing fleet has doubled in size since 1970, now comprising about one million large-scale vessels. Fisheries scientists consider this number to be twice the capacity that can maintain future fish populations.
"About 70 percent of the world's marine fish stocks are heavily exploited, overexploited, depleted or slowly recovering," according to a 1995 report by the United Nations' Food and Agriculture Organisation (FAO). "This situation is globally non-sustainable and major ecological and economic damage is already visible." Indeed, nine of the world's 17 major fishing grounds are in serious decline, and four have been commercially "fished out". If this trend continues, the FAO foresees a shortfall of some 30 million tons of fish for human diets by the year 2000 - at a time when the planet's population is rising by about 100 million people annually.
Yet the out-of-control, high-tech slaughter occurring in every ocean and sea has only recently begun receiving widespread attention from politicians, the media and environmental organisations. Last December, a United Nations treaty calling for fisheries to be managed under an enforceable framework of international law was finally signed by the United States and 27 other countries and territories. In a reversal of the Republican-controlled Congress' efforts to undermine nearly every other environmental law, the House of Representatives has voted overwhelmingly to strengthen existing US fisheries regulations and the Senate is expected to follow suit. Somewhat belatedly, groups such as the World Wildlife Fund (WWF), Greenpeace, the National Audubon Society and the Natural Resources Defence Council (NRDC) have put the devastation of the oceans high on their agendas - and were heavily involved in lobbying for both mandates.
With strict controls on fishing pressure, a research study published last year in the prestigious journal Science indicates that nearly all of the depleted commercial species could, in fact, bounce back. The scientists' goal was to ascertain whether fish populations that have been reduced to very low levels become significantly less successful at reproduction. Of 128 fish stocks evaluated, only three were determined to have been overfished to permanent commercial extinction. There are numerous examples of recoveries once strong management was put in place, including the striped bass along America's eastern seaboard, Atlantic herring in Iceland and Norwegian cod.
The question is whether the will exists to turn around a looming catastrophe, for solving the fishing crisis is going to require unprecedented cooperation among fishers, and within and between nations. It will also demand far more attention to the escalating loss of vital fish habitat (wetlands, mangrove forests, sea grasses and coral reefs) resulting from coastal development and pollution. When you add in the unknown factors, such as the potential effects of global climate change on fish migration and breeding, clearly the oceans are in a state of emergency.
The first big clue that something might be amiss was the collapse of the world's largest anchovy fishery in Peru during the early 1970s. At that time, a competitive fishing free-for-all between countries was at full throttle. In 1974, for example, New England fishermen were harvesting only 12 percent of the fish caught in their waters - the rest were taken by boats from the Soviet Union, Poland and elsewhere. In the face of this, many countries (including the US) began imposing what are known as Exclusive Economic Zones extending up to 200 miles from their territorial limits. This kept the foreigners out, but was accompanied here and in Europe by large government subsidies to encourage development of home-grown fleets.
"Few controls, and unrestricted access in most fisheries, ultimately led to overcapitalisation," recalls Gerry Studds, former Chairman of the House Merchant Marine and Fisheries Committee. For example, the Arctic Alaska Fisheries Corporation (since purchased by the world's largest chicken producer, Tyson Foods) received about $100 million in federal loan guarantees. The European Union increased its fisheries subsidies from $80 million in 1983 to $580 million by 1990, one-fifth of that money going to build new boats or improve old ones.
Simultaneously, fishing proficiency was booming beyond anyone's wildest expectations. Not only in vessel size, but with automatic trawl nets that electronically detect the approaches of fish schools; navigation aids including satellite positioning systems, and the use of "spotter" planes as fish-finders. Floating fish factories became commonplace, with 80 miles of submerged longlines containing thousands of baited hooks or 40-mile-long driftnets corralling everything in their path. In 1995, the Russians even announced the creation of an "Acoustic Fish Concentrator," a small torpedo-like object that snares fish in a trawler's net by using technology first developed for anti-submarine warfare.
Despite increased awareness of the situation, practices such as "pulse fishing" (fishing area species until they dry up, then moving on to target a different species) persist. Between 1986 and 1992, distant water fleets fishing in international waters off the Grand Banks removed 16 times the quotas of cod, flounder and redfish permitted by the Northwest Atlantic Fisheries Organisation. Little wonder that the Canadians, in a celebrated high-seas incident in 1995, were outraged enough to seize a Spanish ship - one whose illegal small-mesh nets had captured 350 metric tons of juvenile halibut before the fish had reached reproductive age, and which maintained two sets of logbooks (one true and one false).
"The Spanish," says WWF's fisheries expert Michael Sutton, "are well known as an outlaw fishing nation and one of the most overcapitalised fleets in the world." One Spanish multinational corporation now owns a global network of some 30 companies in 18 nations of Africa, Asia and Latin America. And cash-starved underdeveloped countries, which long depended on local small-scale fisheries, are now selling permits to foreign boats to fish their waters, or cutting deals with outside investors to expand their own fleets.
Indonesia revealed its intention in 1994 to procure over 81,000 new vessels within the next five years, with most of the $4 billion investment coming from foreign sources. Spain and the US are sharing a $200 million order to deliver 50 longline boats to Indonesia in kit form for deep-freezing tuna and swordfish. While the European Union officially says that it is planning to decommission 40 percent of its fishing capacity, at the same time it is providing "exit grants" to companies for relocating boats away from European waters. According to the WWF's Sutton, "Japanese money also goes to a lot of underdeveloped countries for developing fisheries, partly to buy their votes for the International Whaling Commission." (Japan is one of a few nations that still pursues whaling.)
The deep-water regions, those beyond the 200-mile sovereign national limits, are dominated by six countries - Japan, Russia, South Korea, Spain, Taiwan and Poland - which account for 90 percent of the world's high-seas catch. It is their practices that the new United Nations treaty seeks to address, in a landmark agreement that NRDC scientist Lisa Speer hopes "marks the end of untrammeled plundering of ocean fisheries." The treaty's most crucial provision is its "precautionary, risk-averse" approach, meaning basically that nations must err on the side of the resource if marine scientists are unsure whether fishing pressure is damaging a particular stock's sustainability. The accord also calls for improved enforcement, monitoring and scientific assessments, as well as protection of marine biodiversity by minimising pollution and the needless destruction of non-target fish, also known as "bycatch".
In US waters, where the National Marine Fisheries Service has classified over 82 percent of the commercial stocks as being overfished, the amended version of the Magnuson Fisheries Conservation and Management Act will require managers to reduce fishing volumes and meet specific timelines. Under the Act, first passed by Congress in 1976, eight regional councils acquired the authority to set annual catch limits within the US 200-mile jurisdiction. But, explains Bill Mott of the Marine Fish Conservation Network (an alliance of 100 sportsmen's and environmental groups whose intensive lobbying to improve the Act prodded Congress), the domination of these councils by representatives of the fishing industry "is like letting the fox guard the henhouse."
The Gulf of Mexico Council, for instance, permitted red snapper to be fished down to five million pounds per year, where they'd once been so abundant as to yield 30-million-pound annual catches. The New England Council refused to put a lid on the groundfishery at Georges Banks, even after the Commerce Department declared several species at or near commercial extinction. At long last, though, the Council's attitude appears to be changing. A newly-amended New England groundfishing plan, approved in late January, aims to reduce fishing levels by 80 percent through severely limiting the number of days a boat can be at sea. The plan also specifies several closures in the Gulf of Maine to protect juvenile fish. While many fishermen complain that these measures will put them out of business, council chairman and commercial fisherman Joseph Branceleone says simply, "Without fish, there will be no fishermen."
The latest groundfish protection effort, however, does not address the effects of towed gear being dragged across the ocean bottom - which scientists are increasingly viewing as gravely damaging to the fragile habitat where juvenile fish feed on smaller organisms. Mike Leach, head of a Cape Cod commercial fishing group, believes that, in order to rebuild the stocks, "Dragging should be banned and draggermen should be given assistance to switch to a more appropriate gear type." Funds to achieve this, however, are scarce. The federal government has already committed $25 million to a program for buying out a relatively small number of fishing boats and an additional $62 million in loans, grants and matching funds to the beleaguered New England fishing industry.
Certainly, better enforcement is one key to improving the situation and, in April, federal regulators sent a strong message in seeking a record $5.8 million fine against two Massachusetts brothers and their 12 employees. Their five boats were charged with illegally taking millions of scallops, cod and other groundfish in 1995, breaking the law 300 times and filing false reports to cover up their violations.
But other recent "solutions" to the crisis - such as the National Marine Fisheries Service's (NMFS) encouragement to fishermen to begin focussing on so-called "under-utilised species" - can have unanticipated consequences for marine ecosystems. Take the little squid, for example, which as a food source is crucial to the survival of tuna, billfish and sharks, as well as marine mammals and many smaller fish. From a "trash fish" of scant interest to the American consumer a decade ago, squid have become popular as pan-fried calamari. With advances in refrigeration technology, their value (and their harvesting) has skyrocketed. Back in 1964, less than 1,000 metric tons of the Atlantic long-finned and short-finned squid were being caught. By 1994, that figure had soared to more than 40,000 tons.
At the same time, marine experts have noted an alarming trend. With far fewer schools of squid as bait in coastal waters, according to Robert Pride of the Atlantic Coast Conservation Association of Virginia, "many species of game fish are caught, even in late summer and in the fall, with empty bellies and a gaunt appearance." Bob Schoelkopf, founding director of the Marine Mammal Stranding Centre in New Jersey, has similarly observed increasing numbers of emaciated harbour porpoises and seals ending up beached or entangled in near-shore nets. "We are seeing another canary in the mine, which could be the starvation of many marine species," Schoelkopf says.
An idea favoured by many in government, with the ostensible aim of reducing the number of fisheries participants and thereby curbing overfishing, is known as Individual Transferable Quotas (ITQs). Under this scheme, quota "shares" are allotted based on catch records, and fishermen then buy, sell or lease these shares on the open market. That way, the thinking goes, you weed out the inefficient fishermen and replace them with professionals.
But turning fish resources into "private property" has come under fire from Greenpeace, which has forged alliances with many smaller-scale commercial fishermen. The fundamental problem with ITQs, as Russell Cleary of The Massachusetts' Commercial Anglers Association puts it, is that they "presage a corporate takeover" and threaten the very existence of fisheries-based coastal communities. The concern has merit. Since New Zealand introduced ITQs in 1986, its three largest fishing corporations have snapped up half the awarded quotas. Two of the largest holders in America's ITQ-based Atlantic surf clam/ocean quahog fishery are now the National Westminster Bank of New Jersey and a US subsidiary of the world's biggest accounting firm, Holland-based KPMG. Another big ITQ purchaser is the Caterpillar Corporation.
None of this bodes well for the environment, since ITQs encourage the overexploitation of "higher yield" fishing grounds. Enough pressure has been generated - Greenpeace activists seized a factory trawler in a Washington port last summer to dramatise the situation - that the Senate Commerce Committee voted in March for a five-year moratorium on such privatisation while the effects on coastal communities and small-boat fleets are studied.
What then are the best approaches to the fisheries crisis? Last February, Greenpeace released a preliminary series of "Principles for Ecologically Responsible Fisheries", which the organisation is urging fish buyers to use as benchmarks for seafood purchases. These include a shift from large-scale intensive fisheries to smaller, community-based ones with sound practices. Fishing gear and methods damaging to fish populations or habitats should be phased out. No fishery ought to open or expand until "a verifiable, scientifically-based, dynamic management procedure has been established."
At a recent conference, fisheries economist Francis Christy urged a "limited entry" policy on managers, pointing out that restrictions on the number of licenses issued to fish have proven effective in revitalising the Maryland blue crab industry.
Fishermen themselves can and should play a greater role in ensuring their own survival. A 1995 survey by a British magazine, The Ecologist, cites numerous examples of coastal communities around the world evolving often-unwritten rules to regulate their fisheries. The Cocamilla people in the Peruvian Amazon, observing that their lake was being overfished by commercial fishermen from other regions, ruled that only subsistence fishermen be allowed to fish there. In Newfoundland and Japan, some communities hold annual lotteries for the best fishing areas. Among the Cree people of St. James Bay, Canada, and in Donegal, Ireland, fishermen competing for particularly good spots agree to fish in turns. The Boston-based Conservation Law Foundation is currently working with fishermen in "developing economic structures for them to take on greater responsibility as ecosystem managers," says its program director Peter Shelley.
All this, of course, supports a small-scale emphasis but does not address the industrialised fleet problem. But what about a tonnage fee imposed by governments on the massive hauls of the big boat operations? The more fish you bring in, the more you pay. And the funds could be earmarked not only for fisheries research and management, but for job retraining. A federal pilot program for the failing Pacific Northwest salmon fishery already funds jobs in the restoration of river habitats. The limited dollars available in the government's Fishing Industry Grants programme are currently going to fishermen with inventive ideas for reducing waste by modifying fishing gear, and to commercial vessels helping conduct surveys of Atlantic herring spawning stocks.
There is no reason why fishing captains and crews couldn't stay on the water and plant shellfish beds, help the Coast Guard with harbour oil spill cleanups, conduct fish counts, aid in public education, take water samples and serve on enforcement teams. But this will take a commitment where commercial lobbying interests like the National Fisheries Institute worry less about allocation and more about Congress' current plans to gut the NMFS budget and to remove from protection almost 70 percent of the remaining American wetlands.
When concerned citizens wake up to the ramifications of a dying ocean ecosystem, miracles can happen. It was the outcry of sports fishermen that forced managers to impose drastic sanctions on the commercial striped bass harvest a decade ago - and the fish that enabled the Pilgrims to survive has made an unparalleled comeback. In both Louisiana and Florida, successful ballot referenda spurred by sportsmen's groups have recently brought an end to the indiscriminate use of inshore entanglement gillnets. And in India, protesting fish workers have brought a halt to the registry of any new fishing boats in Indian waters.
There is little time to lose. Without greater mobilisation against the rapaciousness and greed that are devastating the world's oceans, we are looking at a future where the wonders and sustenance of the sea are, if not gone altogether, confined to fish-farming pens. And that would be an unthinkable tragedy.
Contacts: American Oceans Campaign, 201 Massachusetts Avenue NE, Washington DC 20002/(202)544-3526; American Sportfishing Association, 1033 North Fairfax Street, Alexandria, VA 22314/(703)519-9691; Centre for Marine Conservation, 1725 DeSalles Street NW, Washington DC 20036/(202)429-5609; Marine Fish Conservation Network, 408 C Street NE, Washington DC 20002/(202)548-0707; National Coalition for Marine Conservation, 3 West Market Street, Leesburg, VA 22075/(703)777-0037; Ocean Wildlife Campaign, 666 Pennsylvania Avenue SE, Washington DC 20003/(202)547-9009.
Dick Russell is a freelance writer living in Boston.
It's hard to believe, but the United Nations estimates that about 27 million tons of fish each year - a third the volume of the regular commercial catch - are caught and then tossed back (usually dead) because they are the wrong species, too small, damaged in capture or exceed a particular quota. And some estimates peg the real amount at closer to 40 million tons. In the industry, it's known as unwanted "bycatch".
Shrimp and prawn trawlers are the worst culprits; in some fisheries, 15 tons of fish are dumped for every ton of shrimp landed. The US National Marine Fisheries Service reports that 84 percent of the shrimpers' hauls in the Gulf of Mexico are bycatch. This includes some 35 million juvenile red snapper killed annually, in a commercial and recreational fishery that's already on the brink of collapse from overfishing.
Longline vessels setting as many as 80 miles of hooks in pursuit of tuna and swordfish are the biggest source of mortality for billfish such as the marlin, a sports species whose sale is outlawed in the US. Between 1989 and 1992, American longliners reported about 30,000 billfish taken in the Atlantic, half of which were dead when returned to the water.
Nor does the longliners' impact stop with fish. A recent study conducted by Charles Wurster of the Marine Sciences Research Centre at the State University of New York indicated that bycatch mortality on diving birds caused by tuna longlines may be "even more severe than from driftnets." (The UN General Assembly has voted unanimously for a moratorium on large-scale driftnets on the high seas - "walls of death" which, in 1990, entangled 42 million non-targeted animals.)
An estimated 100 million longline hooks are set (mostly by the Japanese) in the Southern oceans each year, resulting in the death of about 180,000 albatross and petrels that hit the buoyed lines as the baited hooks feed into the water. That total cannot account for the offspring that subsequently starve; and it doesn't do much for the fishermen either, since about half the birds that dive for the bait get away with it.
Marine mammals are equally vulnerable. Despite the UN-imposed global ban on high seas driftnets, some 600 Italian vessels continue to utilise them in the Mediterranean Sea, where sperm whales wrapped in these netted curtains have washed up dead along beaches. After the Humane Society of the US filed suit, a federal judge last February ordered the State Department to either work out a compromise or impose an embargo against Italy.
Dolphins often swim with schools of tuna and, in the Eastern tropical Pacific, an estimated seven million dolphins have been killed in the US tuna fleet's encircling purse nets since 1959. A public outcry, spearheaded by the Earth Island Institute, forced precautionary dolphin methods to be implemented in 1990. That same year, major canneries including Bumble Bee, Starkist and Chicken of the Sea refused to sell tuna caught by foreign purse-seiners that kept entrapping dolphins. By last year, the wanton slaughter of dolphins dropped to below 5,000 - down from nearly 500,000 annually before measures were taken. Still, Congress is now debating lifting a ban on tuna imports from Mexican ships that persist in setting where dolphins swim, claiming this creates a problem with the North American Free Trade Agreement (NAFTA), which encourages tariff-free trading between the US, Canada and Mexico.
US shrimpers are now obliged to install Turtle Excluder Devices (TEDs), which shunt the turtles out a trapdoor of their trawls, and will go a long way toward saving the estimated 55,000 adult turtles that were dying in them previously. Other technologies are also being developed by fishermen themselves - including acoustic net alarms to warn dolphins away from swordfish driftnets, and lobster pots with biodegradable vents.
According to the United Nations Food and Agriculture Organisation, a 60 percent reduction in bycatch could be achieved quickly by improving the selectivity of fishing gear, developing greater cooperative research between scientists, industry and managers, and applying new technologies. Unfortunately, the final version of the landmark United Nations high-seas treaty last December weakened a provision that would have forced nations to use selective gear to cut bycatch. The US, under pressure from its own fishing industry, succeeded in amending the treaty language to require countries to do so only "to the extent practicable."
Twillingate, Newfoundland - Kill seals, save cod. That's the rule followed by the Canadian government, which this year authorised the killing of a quarter-million harp seals, whose burgeoning population could be having an effect on depleted stocks of Atlantic cod, their favourite food. By early May, the hunters were mopping up, shooting the seals bobbing along Newfoundland's northern shore.
Fishermen in Canada's Maritime Provinces cringe at the mention of seals, which are multiplying while the cod boats are idled at the pier. Last year, the Canadian government released counts showing the harp seal population in Atlantic Canada had reached 4.8 million, twice its size 15 years ago, and is increasing by 287,000 seals a year.
Scientists have found that seals feed primarily on small first- and second-year fish, which increases their impact on the species' recovery. But scientists like W. Donald Bowen of the Bedford Institute of Oceanography in Halifax, Nova Scotia say the idea that reducing seal populations would cause cod to rebound is "unfounded." He adds, "We don't know what all the causes of the decline are; it's very difficult to generalise and blame it on the seals." Seals have been harvested for clothing, food and income in Canada for five centuries; they have few other predators. Most seals today are killed by rifles. There is no sport hunting, and killing young harp or hooded seals is prohibited. Killing an animal for "parts" - the penis is an aphrodisiac in Asia - is also illegal.
There are some surprising critics of the hunt, including Garry Troake, a professional sealer and fisherman since 1977. Troake is worried that too many seal carcasses will create a glut on the market and generate massive bad publicity. "I say, let's develop this industry like anything else, timber or mining," Troake says. "It shouldn't have anything to do with cod."
Source: E: Environmental Magazine, July/August 1996. | <urn:uuid:560907bd-3cd0-4091-a2c6-fbfb12b6e720> | CC-MAIN-2016-26 | http://www.converge.org.nz/pirm/frames/fish!b.htm | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398209.20/warc/CC-MAIN-20160624154958-00114-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.951831 | 5,879 | 2.828125 | 3 |
The TCP three-way handshake in Transmission Control Protocol (also called the TCP handshake, the three-message handshake and/or SYN-SYN-ACK) is the method TCP uses to set up a TCP/IP connection over an Internet Protocol based network. TCP's three-way handshaking technique is often referred to as "SYN-SYN-ACK" (or more accurately SYN, SYN-ACK, ACK) because three messages are transmitted by TCP to negotiate and start a TCP session between two computers. The TCP handshaking mechanism is designed so that two computers attempting to communicate can negotiate the parameters of the network TCP socket connection before transmitting data such as SSH and HTTP web browser requests.
This 3-way handshake process is also designed so that both ends can initiate and negotiate separate TCP socket connections at the same time. Being able to negotiate multiple TCP socket connections in both directions at the same time allows a single physical network interface, such as ethernet, to be multiplexed to transfer multiple streams of TCP data simultaneously.
TCP 3-Way Handshake Diagram
Below is a (very) simplified breakdown of the TCP 3-way handshake process, as a sequence of events.
Host A sends a TCP SYNchronize packet to Host B
Host B receives A's SYN
Host B sends a SYNchronize-ACKnowledgement
Host A receives B's SYN-ACK
Host A sends ACKnowledge
Host B receives ACK.
TCP Three Way Handshake
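To see these steps in action without a packet-capture tool, here is a sketch (not part of the original tutorial) using Python's standard socket library on the loopback interface. The operating system performs the actual SYN, SYN-ACK, ACK exchange inside the `connect()` and `accept()` calls:

```python
import socket
import threading

completed = []  # records the peer address once a handshake has finished

# The listening socket: the OS kernel answers an incoming SYN with a
# SYN-ACK and finishes the handshake on our behalf.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))   # port 0 asks the OS for any free port
server.listen(1)
host, port = server.getsockname()

def accept_one():
    # accept() returns only after the SYN / SYN-ACK / ACK exchange
    # with the client has completed.
    conn, addr = server.accept()
    completed.append(addr)
    conn.close()

t = threading.Thread(target=accept_one)
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# connect() sends the SYN, waits for the SYN-ACK and sends the final
# ACK -- the entire 3-way handshake happens inside this one call.
client.connect((host, port))

t.join()
client.close()
server.close()
print("handshakes completed:", len(completed))
```

Because `accept()` does not return until the handshake is done, the `completed` list is guaranteed to hold the client's address after `join()`.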
SYNchronize and ACKnowledge messages are indicated by either the SYN bit or the ACK bit inside the TCP header, and the SYN-ACK message has both the SYN and the ACK bits turned on (set to 1) in the TCP header.
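As a small added illustration (not from the original page), the control bits live in one byte of the TCP header, and a SYN-ACK is simply a segment with both bits set. The numeric values below are the standard ones from RFC 793:

```python
# TCP header control bits as defined in RFC 793.
FIN = 0x01
SYN = 0x02
RST = 0x04
PSH = 0x08
ACK = 0x10
URG = 0x20

syn = SYN            # first message of the handshake
syn_ack = SYN | ACK  # second message: both bits turned on
ack = ACK            # third message

print(f"SYN=0x{syn:02x}, SYN-ACK=0x{syn_ack:02x}, ACK=0x{ack:02x}")
```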
When the communication between two computers ends, another 3-way communication is performed to tear down the TCP socket connection. This setup and teardown of a TCP socket connection is part of what qualifies TCP as a reliable protocol. TCP also acknowledges that data is successfully received and guarantees the data is reassembled in the correct order.
Note that UDP is connectionless. That means UDP doesn't establish connections as TCP does, so UDP does not perform this 3-way handshake, and for this reason it is referred to as an unreliable protocol. That doesn't mean UDP can't transfer data; it just doesn't negotiate how the connection will work. UDP just transmits and hopes for the best.
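The difference is easy to see in code. In this hypothetical Python sketch, the UDP sender transmits a datagram with no prior handshake at all; on the loopback interface the datagram normally arrives, but nothing in the protocol itself confirms delivery:

```python
import socket

# UDP by contrast is connectionless: no SYN / SYN-ACK / ACK exchange
# and no connection state.  sendto() just fires a datagram at an address.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))   # port 0 = pick a free port
receiver.settimeout(5)            # don't hang forever if the datagram is lost
addr = receiver.getsockname()

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello", addr)     # no connect(), no handshake, no ACK

data, src = receiver.recvfrom(1024)  # loopback delivery is normally reliable
print(data)

sender.close()
receiver.close()
```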
Protocols Encapsulated in TCP
Note that FTP, Telnet, HTTP, HTTPS, SMTP, POP3, IMAP, SSH and any other protocol that rides over TCP also has a three-way handshake performed as the connection is opened. HTTP web requests, SMTP emails and FTP file transfers each manage the messages they send; TCP handles the transmission of those messages.
TCP 'rides' on top of Internet Protocol (IP) in the protocol stack, which is why the combined pair of Internet protocols is called TCP/IP (TCP over IP). TCP segments are passed inside the payload section of the IP packets. IP handles IP addressing and routing and gets the packets from one place to another, but TCP manages the actual communication sockets between endpoints (computers at either end of the network or internet connection).
- Automobiles, trucks and buses are considered major contributors to particulate air pollution.
- New York State consumers purchase 750,000 new cars each year.
- There are over 5,000 transit buses operating in New York City.
- Motor vehicle particle emissions and the particles formed by the transformation of gaseous emissions tend to be in the fine particle range.
- Fine particles (less than 2.5 micrometers) are of health concern primarily because they travel deepest into the lungs.
View of automobile exhaust. Samples generally consist of carbon particles of 1 micron or less, resulting from the incomplete combustion of gasoline and additives.
View of diesel particulate collected on a teflon filter.
Automobile exhaust on a Nuclepore filter. Gas vehicles produce a somewhat smaller particle than diesel vehicles. Scientists are still trying to determine what chemical differences there are between the two.
Diesel particulate on a Nuclepore filter. Diesel particulate matter (PM) emissions are 3.5-10 times higher than those from gasoline vehicles.
A recent study of particulate matter in NY City revealed that the predominant particulate collected was from diesel sources (52%). Automobiles contributed 6% of the total. | <urn:uuid:12377113-f313-406b-9959-6d3b0933e971> | CC-MAIN-2016-26 | http://www.dec.ny.gov/chemical/28817.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783404826.94/warc/CC-MAIN-20160624155004-00016-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.876281 | 255 | 3.046875 | 3 |
by Evan Moore
Between 1854 and 1929, approximately 200,000 youngsters rode the Orphan Trains from the East to the growing frontier. Some were adopted, some indentured and some simply taken in. Some came to call themselves 'the white slaves' of America.
A photo of unknown origin taken around 1900 shows orphans on the Atchison, Topeka & Santa Fe Railway.
Author: Jacob Riis (May 3, 1849 - May 26, 1914)
Jacob A. Riis photographed New York orphans in works such as Street Arabs, Barelegged, Mulberry Street, 1889.
Just about everyone recalls feelings of dread as a child when they were chosen to read aloud in class. But for some students, these moments of dread can develop into a serious and growing problem known as aliteracy: acquiring basic reading skills, but disliking, or even refusing, to read. Because aliteracy usually can be traced to feelings of fear, shyness and embarrassment, the best way to combat the problem is to find a way for children to read in a fun and empowering environment.
The Marin Humane Society’s SHARE A Book program works closely with local teachers, reading specialists and librarians to help children improve literacy skills while offering them an opportunity to have fun reading—to dogs! SHARE A Book’s human-dog volunteer teams visit schools, libraries, book fairs, after school programs, reading labs and community outreach events.
Our event calendar has listings of SHARE a Book programs at your local library.
Read a profile on SHARE A Book volunteer Kathie Meier
Dogs are ideal reading companions because they offer unconditional love and acceptance. As opposed to an audience of their peers, children feel comfortable reading to dogs because their fears and limitations dissolve in the process. Additionally, a child who previously lacked self-esteem as a remedial reader is presented with an opportunity to become a “teacher” by explaining what they are reading to their canine partner.
There are also physiological benefits of being with dogs, including lower blood pressure and increased relaxation. In this calm environment, children tend to forget about limitations. It doesn't take long before we see marked improvements in reading. The most evident improvement is in the child's confidence and desire to read. Several studies have shown very positive results; read about them here.
Before participating in the program, dogs and their humans complete a six- week training class to become a SHARE animal assistance therapy team. After becoming a SHARE team, there is an additional four-week training class to learn ways their dogs can be an integral part of the reading sessions. To become literacy mentors in the SHARE a Book program, the dogs pass the classes and have a health certificate completed.
For more information on bringing the SHARE A Book program to your school or library, to have your child participate in a reading session, or to become a volunteer team with your dog, please contact the SHARE department. | <urn:uuid:ba5c44cd-9429-45a5-b5c5-fa45e079f9b0> | CC-MAIN-2016-26 | http://www.marinhumanesociety.org/site/c.aiIOI3NLKgKYF/b.7727647/ | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395620.56/warc/CC-MAIN-20160624154955-00131-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.958824 | 488 | 3.234375 | 3 |
Not Medical Advice: Small, round, blue-colored pill with ABG 15 imprint has been identified as Morphine Sulphate extended-release 15 mg.
Morphine belongs to the group of drugs called narcotic analgesics. It is an opioid pain medication used in the treatment of moderate to severe pain. This drug has a high potential for abuse that may lead to severe physical or psychological dependence.
Tip! Learn 5 ways to handle chronic pain at CNN.
In medical news, the opium poppy may soon meet its match. Researchers in the United States and Canada report today that they are closing in on a long-standing goal of engineering a complex suite of genes into yeast that would allow the microbes to synthesize morphine, codeine, and other medicines that have been harvested from poppies since before written history began. Find out more at the American Association for the Advancement of Science. | <urn:uuid:87371757-9b4b-4bf5-bccb-76136cdac6a5> | CC-MAIN-2016-26 | http://www.kgbanswers.com/what-is-this-little-blue-pill-with-15-on-one-side-and-abg-on-other-side-its-round/18441651 | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397565.80/warc/CC-MAIN-20160624154957-00111-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.946607 | 182 | 2.546875 | 3 |
In real estate, every property is different – and that makes determinations of value (outside of an actual sale) potentially difficult. Appraisers and other professionals who are charged with this task generally estimate property values utilizing three approaches: market data, cost, and income.
Property valuation is an art, not a science; there are so many factors involved in estimating value that appraisers always make clear that they are only giving an opinion of a property’s value. The underlying economic bases of value include: at what cost a substitute property can be obtained; estimating future income of the property; how change (nearby development and other trends) affects value; how competition will affect profits; whether contemplated improvements make a net contribution to value; whether the property’s use is in conformity with other activities in the area; supply and demand; and what the highest and best use of the land might be.
Because of these complexities, an appraiser will normally follow not a single path toward a value conclusion but will use three approaches:
- Market Data (comparable sales) – what are similarly situated properties worth?
- Cost – what would be the replacement value of the building?
- Income – what future cash flows is the property expected to generate?
Market data, or comparable sales, is similar to the comparison shopping done every day when purchasing a new car or buying products in the local supermarket. It works best when very similar properties are bought and sold on a relatively frequent basis. No property is exactly the same, of course, so differences must be identified and compensating adjustments made when making the comparisons.
These differences are usually based on property characteristics; location, size of the lot, square footage of the usable building space and the type and quality of construction are perhaps the most important factors. Age, design, land terrain and interior layout can also play a role as well. Other dissimilarities might include whether unusual financing terms are involved (e.g., low interest rate loans will encourage better prices), or if a property sale was not an “arm’s-length” sale, or involved free post-sale rent or repairs. An estimate as to the value of the distinguishing features must be made, and the adjusted sales prices can then be correlated to give an indication of value to the subject property.
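As a toy illustration of the adjustment process (all figures below are invented, not drawn from this article), each comparable's sale price is adjusted up for features it lacks and down for features it has relative to the subject property, and the adjusted prices are then correlated:

```python
# Hypothetical comparable-sales adjustment grid.  Positive adjustments
# compensate for features the comp lacks; negative ones for extras it has.
comps = [
    {"price": 310_000, "adjustments": {"extra garage": -15_000, "smaller lot": +10_000}},
    {"price": 295_000, "adjustments": {"dated kitchen": +12_000}},
    {"price": 330_000, "adjustments": {"corner lot premium": -8_000, "newer roof": -5_000}},
]

adjusted = [c["price"] + sum(c["adjustments"].values()) for c in comps]

# A simple average of the adjusted prices; in practice an appraiser
# weights the comps rather than averaging mechanically.
indicated_value = sum(adjusted) / len(adjusted)
print("adjusted comps:", adjusted)
print(f"indicated value: ${indicated_value:,.0f}")
```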
Cost analysis can start with a simple estimate using current local building costs; local builders can probably provide a generalized per-square-foot cost of new construction. A more rigorous approach would utilize one of the available published construction cost indexes, which break down costs by material types and region. A buyer seriously considering building a new structure may well retain a building contractor to make a complete building cost estimate.
The building’s current replacement cost then needs to be adjusted to reflect the actual state (and value) of the subject property. This involves reductions to reflect the property’s physical deterioration, functional obsolescence, and economic depreciation. The physical deterioration can involve some curable “deferred maintenance” that can be repaired, but it also involves the very real effects of age and the fact that some repairs are no longer economically justifiable. Functional obsolescence refers to the impact on income of changes in building technology and consumer tastes and preferences on the building’s value (an older building’s low ceilings may not be able to accommodate modern central air conditioning systems, for example). Economic depreciation refers to the external influences that could affect income compared to other locations; these could include pollution, noise, or zoning changes that permit industrial uses in a formerly residential neighborhood.
Income capitalization places a dollar valuation on the future stream of cash flows. This approach requires determining the amount, certainty, and length of time of future income from the property, and then applying an appropriate capitalization rate (or “cap rate”) to convert the future income into a present value. This approach is generally used by determining the net operating income of a property and then choosing a capitalization rate suited to the property’s type, location, age, and quality of tenants.
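The core arithmetic of direct capitalization can be sketched in one line: value equals net operating income divided by the capitalization rate. The numbers below are hypothetical:

```python
# Direct capitalization: value = NOI / cap rate (figures are assumed).
noi = 120_000    # annual net operating income, in dollars
cap_rate = 0.08  # 8% cap rate suited to this property type and location

value = noi / cap_rate
print(f"indicated value: ${value:,.0f}")  # prints "indicated value: $1,500,000"
```

A lower cap rate (reflecting safer, more desirable property) raises the indicated value for the same income stream, which is why cap-rate selection matters as much as the income estimate itself.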
These three approaches will often provide three different values for a property, so an appraiser must reconcile the varying results. Depending on the property and the appraisal’s objective, one of the approaches may be more suitable than the others, so a simple averaging of the three results is usually not appropriate. An appraiser will need to meld the results of the three approaches in a sensible, reasoned way.
The three primary approaches to valuation each have positive and negative aspects. Comparable market data are always beneficial in a valuation analysis, and can indicate what cap rates investors are currently paying for comparable properties. Still, recently sold properties are often not truly comparable to the property being evaluated, and might not adequately reflect the property’s actual projected income. Where newer buildings are involved, the cost approach is often more reliable; it may be the only choice where market data is scarce. In all cases, an appraisal of value should be viewed as a complement, not a substitute, for sound underwriting or investment analysis. | <urn:uuid:b6d1bdf8-430d-499e-a25b-52abd866f548> | CC-MAIN-2016-26 | http://www.nuwireinvestor.com/articles/real-estate-valuation-methods-61603.aspx | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397748.48/warc/CC-MAIN-20160624154957-00113-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.920057 | 1,038 | 2.90625 | 3 |
A child or young person has special educational needs (SEN) if he or she has learning difficulties or disabilities that make it harder for him or her to learn than most other children and young people of about the same age.
Many children and young people will have special educational needs of some kind during their education. Early years settings, schools, colleges and other organisations can help most children and young people overcome the barriers their difficulties present quickly and easily. A few children and young people will need extra help for some or all of their time in an early years setting, school or college.
So special educational needs could mean that a child or young person has:
Children and young people make progress at different rates and have different ways in which they learn best. Teachers take account of this in the way they organise their lessons and teach. Children and young people making slower progress or having particular difficulties in one area may be given extra help or different lessons to help them succeed.
You should not assume, just because your child is making slower progress than you expected or the teachers are providing different support, help or activities in class, that your child has special educational needs. | <urn:uuid:b38f4c7a-01ef-4e92-9fc3-16487faf7a07> | CC-MAIN-2016-26 | http://www3.hants.gov.uk/mobile/childrens-services/specialneeds/sen-home.htm | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397213.30/warc/CC-MAIN-20160624154957-00087-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.974257 | 232 | 4.09375 | 4 |
The relationship between language and other aspects of conceptual development is one of the central issues in child language acquisition. One view holds that language is a special capacity, separate from other areas of cognition and learning. Another maintains that language is part of a larger, more general cognitive system, and is crucially dependent on other cognitive domains.
Recent research has turned to blind children and their acquisition of language as a way of evaluating whether and how language development relies on the non-linguistic context. Vision and the Emergence of Meaning addresses this complex problem through a detailed empirical analysis of early language development in a group of blind, partially sighted and fully sighted children who took part in a pioneering longitudinal investigation at the University of Southern California. By exploring the strategies which blind children bring to selected aspects of the language learning task, Anne Dunlea not only identifies some important differences between blind and sighted children, but also offers new insights on semantic and pragmatic development in general. Further, the study demonstrates the role of conceptual information in language learning and, at a more fundamental level, reveals a convergence of early language and conceptual development.
Preface; Acknowledgements; List of symbols; List of codes; 1. Introduction; 2. Blindness and childhood; 3. Methodology and introduction of subjects; 4. First words; 5. First multi-word utterances; 6. Developments in the use of illocutionary force; 7. Conclusion; Appendices; References; Indexes. | <urn:uuid:b71aac2d-4de3-4f60-a2a1-aee166fb71e6> | CC-MAIN-2016-26 | http://linguistlist.org/issues/17/17-3274.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783394987.40/warc/CC-MAIN-20160624154954-00195-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.911664 | 302 | 3.078125 | 3 |
Democracy explains the rise of this form of government and how women and minorities struggled for and won democratic rights for themselves. In clear prose, author James Laxer relates the story of the replacement of Communist regimes in the Soviet Union and Eastern Europe by ostensibly democratic political systems. In some of these countries, he argues, democracy has flourished, while in others authoritarianism is on the rise. Showcasing examples from all over the world, the book examines the current status of democracy in both developed and developing nations. Throughout, Laxer demonstrates that democracy is about much more than the right to vote, warning readers that globalization and the widening gap between rich and poor threatens to weaken democracy and the vigor of democratic regimes -- even in countries where it has been long established. Only sustainable environmental policies and basic economic fairness, says Laxer, offer hope for democracy’s survival.
Published by Groundwood Books.
Assessment Advice & Forms
Assessment can be a tricky task. Our collection of ideas and forms will make it easier for you to evaluate and grade your students' work, no matter their grade level. There are as many variations of assessment as there are students. Look below to find ideas on assessment strategies, modifications, and enhancing your existing methods. These assessment forms and techniques will work across the curriculum, so use them for math, science, reading, language arts, social studies, and your other subjects.
Advice for Teachers
- Rubrics Library for Teachers
Our extensive library of rubrics will help you assess your students' work in all subjects, from reading & language arts, to science, to social studies.
- State Departments of Education
A complete list of links to all of the individual U.S. departments of education, state standards, job opportunities, and more.
- Awards and Certificates
Use printable certificates and awards to recognize great effort, achievement, or progress by your students. Express gratitude, encourage good behavior, and reward a job well done with these resources.
- Student Progress Report
A ready-to-use form for assessment of students' class work, homework, and participation is a quick and easy way to keep students' evaluations organized. Send home this form every six weeks to keep parents informed of students' progress. New teachers will find this resource particularly valuable when preparing for parent-teacher conferences.
How do Robins Find Worms?
One of the very first birds any child can identify is the robin. They have bright red breasts and sit in the middle of your lawn, quite easy to see. Which leads kids to wonder - just how do they find those worms?
First, robins are usually seen on the lawn doing their worm-hunting first thing in the morning. The phrase "The early bird gets the worm" comes from this habit. The worms are closest to the surface in the morning, because the ground is nice and cool and damp with dew. Later during the day, the ground gets quite hot and the poor worms would roast if they were near the surface.
So now you have a red-breasted robin hopping around on the cool ground, looking for his worm. A study done in the 1990s isolated each of the robin's various senses to see which the robin used most. It turns out that hearing is the most important sense - that the robin listens for the small noises a worm makes while burrowing along in the ground.
The robin DOES use its other senses too - watching for movement, feeling for rumbling with its feet. But the main sense that helps out the most is the robin's hearing. One sense a robin does NOT use is its sense of smell. The robin has a really poor sense of smell, and it would not help out a robin at all.
LENEUF DE LA VALLIÈRE DE BEAUBASSIN, ALEXANDRE, naval captain, knight of the order of Saint-Louis; b. 22 June 1666 and baptized 2 Feb. 1667 at Trois-Rivières, eldest son of Michel Leneuf de La Vallière de Beaubassin, governor of Acadia, and of Marie, daughter of Nicolas Denys*; d. 1712.
Like his father, he was a good sailor and became a daring sea-captain. In May 1685, in his father’s name, he arrested Bergier Des Ormeaux on Île Royale (Cape Breton Island) and seized his pelts; the latter was the son of Clerbaud Bergier*, the director of the Compagnie des Pêches sédentaire de l’Acadie.
Leneuf became a sub-lieutenant in 1690. In May 1691, on Buade* de Frontenac’s orders, he went by boat from Quebec to Beaubassin to take gifts to the Indians. This mission seems to have been criticized by the king in a memorandum of April 1693 addressed jointly to the governor and the intendant; the king accused the person responsible for distributing the gifts to the Indians in this region of having used for his personal advantage the powder which was intended for them; according to some authors, this was one of the causes of the defection of a certain number of Acadian Indians.
Leneuf was promoted lieutenant on 1 March 1693. In a letter to the minister on 25 Oct. 1694, concerning a petition by La Vallière, the governor supported him as follows: “Everything he says therein is true; he is a very fine lad, liked and esteemed by everybody here, and has distinguished himself in several encounters.”
During the winter of 1698–99 he carried on fur-trading by boat in Baie Verte. The year 1703 gave him an opportunity to take vengeance for the destruction of Beaubassin by Church in 1696. Rigaud de Vaudreuil sent him in the direction of Boston with a detachment of Abenakis and French. On 21 August (10 August, o.s.) Leneuf and his men made a surprise attack on the little town of Wells and took possession of it. According to François de Beauharnois* de La Chaussaye and Vaudreuil, they laid waste more than 15 leagues of enemy country and killed or captured more than 300 persons. Casco was saved thanks to the resistance put up by Captain Southack* and John March.
On 12 Oct. 1705 Ramezay recommended La Vallière to the minister for promotion. In 1707 he became associated with Guillaume Gaillard, and as commander of the Nostre-Dame-de-Victoire, he set out from Quebec in July for Newfoundland, with Thomas Moore, Henri-Louis Deschamps de Boishébert, and a crew of 100 Canadians, in search of Englishmen. He arrived there on 20 August. This privateering venture had little success, since no prizes were taken during it.
Alexandre was raised to the rank of naval captain on 5 May 1710. He was captured on the Neptune and taken as a prisoner to England in August 1711, when he was returning from France to bring aid to Acadia. In October he found himself penniless in Paris. The king recognized his services by granting him the cross of the order of Saint-Louis in June 1712. He died at sea on the Héros, in September of the same year.
Alexandre’s naval career continued the tradition of the La Vallière family, which up to the period of the conquest furnished an unbroken line of distinguished soldiers and sailors.
Astronomy Picture of the Day -- The Milky Trail
Posted on 06/01/2013 9:05:25 AM PDT by SunkenCiv
Explanation: Have you ever hiked the Queen's Garden trail in Bryce Canyon, Utah, USA, planet Earth? Walking along that path in this dark night skyscape, you can almost imagine your journey continues along the pale, luminous Milky Way. Of course, the name for our galaxy, the Milky Way (in Latin, Via Lactea), does refer to its appearance as a milky band or path in the sky. In fact, the word galaxy itself derives from the Greek for milk. Visible on moonless nights from dark sky areas, though not so bright or quite so colorful as in this image, the glowing celestial band is due to the collective light of myriad stars along the plane of our galaxy, too faint to be distinguished individually. The diffuse starlight is cut by dark swaths of obscuring galactic dust clouds. Four hundred years ago, Galileo turned his telescope on the Milky Way and announced it to be "...a congeries of innumerable stars ..."
(Excerpt) Read more at 220.127.116.11 ...
I tried to take a photo of the Milky Way, like this one from Bryce Canyon, UT, but couldn't get it because I have butter fingers.
SunkenCiv, you have done it again!
This is a great picture! Thanks.
Gorgeous picture, thanks!
I hiked that trail several years back, but in the daytime. It was in May, but it was still very hot. Neat trail, and a whole different perspective down in the canyon than looking down from the top. The hoodoo shapes were really neat.
Beautiful and yet haunting. Makes you feel like you would be alone in the world. Love it. Thanks, Sunky!
We used to see the Milky Way, every summer, when we went on vacation with hubby’s family. And this was in Perdido Key FL, on the end just next to Johnson Beach, where there was not a lot of ambient lighting from condo buildings, restaurants and stores. It was just beautiful!
My pleasure, but I’m just the messenger of course. :’) Thanks for the kind remarks!
Thank you for posting and pinging SunkenCiv. That is awesome!
A downside of living in a city is not seeing magnificent sights like this.
"What are antidepressants and how do they work?
Antidepressants are a class of drugs that reduce symptoms of depressive disorders by correcting chemical imbalances of neurotransmitters in the brain. Chemical imbalances may be respo"...
Worldwide exposure to fluoxetine hydrochloride is estimated to be over 38 million patients (circa 1999). Of the 1578 cases of overdose involving fluoxetine hydrochloride, alone or with other drugs, reported from this population, there were 195 deaths.
Among 633 adult patients who overdosed on fluoxetine hydrochloride alone, 34 resulted in a fatal outcome, 378 completely recovered, and 15 patients experienced sequelae after overdosage, including abnormal accommodation, abnormal gait, confusion, unresponsiveness, nervousness, pulmonary dysfunction, vertigo, tremor, elevated blood pressure, impotence, movement disorder, and hypomania. The remaining 206 patients had an unknown outcome. The most common signs and symptoms associated with non-fatal overdosage were seizures, somnolence, nausea, tachycardia, and vomiting. The largest known ingestion of fluoxetine hydrochloride in adult patients was 8 grams in a patient who took fluoxetine alone and who subsequently recovered. However, in an adult patient who took fluoxetine alone, an ingestion as low as 520 mg has been associated with lethal outcome, but causality has not been established.
Among pediatric patients (ages 3 months to 17 years), there were 156 cases of overdose involving fluoxetine alone or in combination with other drugs. Six patients died, 127 patients completely recovered, 1 patient experienced renal failure, and 22 patients had an unknown outcome. One of the six fatalities was a 9 year-old boy who had a history of OCD, Tourette's syndrome with tics, attention deficit disorder, and fetal alcohol syndrome. He had been receiving 100 mg of fluoxetine daily for 6 months in addition to clonidine, methylphenidate, and promethazine. Mixed-drug ingestion or other methods of suicide complicated all 6 overdoses in children that resulted in fatalities. The largest ingestion in pediatric patients was 3 grams which was nonlethal.
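The outcome figures quoted above can be sanity-checked with a few lines of arithmetic (a quick consistency check only; the category labels below are informal, not the report's terminology):

```python
# Informal consistency check of the overdose outcome counts quoted above:
# each set of outcomes should sum to the stated number of cases.
adult = {"fatal": 34, "recovered": 378, "sequelae": 15, "unknown": 206}
pediatric = {"fatal": 6, "recovered": 127, "renal_failure": 1, "unknown": 22}

assert sum(adult.values()) == 633      # adult patients, fluoxetine alone
assert sum(pediatric.values()) == 156  # pediatric cases, alone or mixed
print("outcome tallies match the reported totals")
```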
Other important adverse reactions reported with fluoxetine overdose (single or multiple drugs) include coma, delirium, ECG abnormalities (such as QT interval prolongation and ventricular tachycardia, including torsades de pointes-type arrhythmias), hypotension, mania, neuroleptic malignant syndrome-like reactions, pyrexia, stupor, and syncope.
Studies in animals do not provide precise or necessarily valid information about the treatment of human overdose. However, animal experiments can provide useful insights into possible treatment strategies.
The oral median lethal dose in rats and mice was found to be 452 and 248 mg/kg, respectively. Acute high oral doses produced hyperirritability and convulsions in several animal species.
Among 6 dogs purposely overdosed with oral fluoxetine, 5 experienced grand mal seizures. Seizures stopped immediately upon the bolus intravenous administration of a standard veterinary dose of diazepam. In this short-term study, the lowest plasma concentration at which a seizure occurred was only twice the maximum plasma concentration seen in humans taking 80 mg/day, chronically.
In a separate single-dose study, the ECG of dogs given high doses did not reveal prolongation of the PR, QRS, or QT intervals. Tachycardia and an increase in blood pressure were observed.
Consequently, the value of the ECG in predicting cardiac toxicity is unknown. Nonetheless, the ECG should ordinarily be monitored in cases of human overdose.
Management Of Overdose
Treatment should consist of those general measures employed in the management of overdosage with any SSRI.
Ensure an adequate airway, oxygenation, and ventilation. Monitor cardiac rhythm and vital signs. General supportive and symptomatic measures are also recommended. Induction of emesis is not recommended. Gastric lavage with a large-bore orogastric tube with appropriate airway protection, if needed, may be indicated if performed soon after ingestion, or in symptomatic patients.
Activated charcoal should be administered. Due to the large volume of distribution of this drug, forced diuresis, dialysis, hemoperfusion, and exchange transfusion are unlikely to be of benefit. No specific antidotes for fluoxetine are known.
A specific caution involves patients who are taking or have recently taken fluoxetine and might ingest excessive quantities of a TCA. In such a case, accumulation of the parent tricyclic and/or an active metabolite may increase the possibility of clinically significant sequelae and extend the time needed for close medical observation [see DRUG INTERACTIONS].
Based on experience in animals, which may not be relevant to humans, fluoxetine-induced seizures that fail to remit spontaneously may respond to diazepam.
In managing overdosage, consider the possibility of multiple drug involvement. The physician should consider contacting a poison control center for additional information on the treatment of any overdose. Telephone numbers for certified poison control centers are listed in the Physicians' Desk Reference (PDR).
Monoamine Oxidase Inhibitors
The use of MAOIs intended to treat psychiatric disorders with SARAFEM or within 5 weeks of stopping treatment with SARAFEM is contraindicated because of an increased risk of serotonin syndrome. The use of SARAFEM within 14 days of stopping an MAOI intended to treat psychiatric disorders is also contraindicated [see DOSAGE AND ADMINISTRATION and WARNINGS AND PRECAUTIONS].
Starting SARAFEM in a patient who is being treated with MAOIs such as linezolid or intravenous methylene blue is also contraindicated because of an increased risk of serotonin syndrome [see DOSAGE AND ADMINISTRATION and WARNINGS AND PRECAUTIONS].
The use of SARAFEM is contraindicated with the following:
Last reviewed on RxList: 12/29/2014
PARIS — Were it not for the Jews, France’s trademark pain au chocolat wouldn’t exist.
Fleeing the Inquisition, Portuguese Jews settled in nearby Bayonne in southwestern France in the early 16th century and established there the country’s first chocolate factories. The region’s residents quickly learnt the trade, and by the 17th century the Jews would be evicted again from what was by then France’s chocolate capital.
Next week, however, as part of its annual Chocolate Days festival celebrating the city’s 500-year chocolate tradition, Bayonne will pay homage to these intrepid Sephardi Jewish chocolate pioneers, starting May 10.
“Since we are the inheritors of the Jews’ savoir faire, it was our duty to thank them, but also to restore a historical truth: after they introduced chocolate in France, Bayonne Jewry was gradually evicted from the chocolate industry in the 17th century by the very people who had learned everything from them,” says Jean-Michel Barate, head of the Chocolate Academy and CEO of the Bayonne-based chocolate brand Daranatz.
For two days, the streets of Bayonne will be brimming with thousands of chocolate aficionados making the pilgrimage, and visiting local brands and factories – such as Andrieu, Cazenave, Daranatz, Pariès, and Pascal.
Michèle Kahn, author of the 2003 novel “Cacao” and Jewish history expert, will be one of the event’s ambassadors and will co-host a conference on the history of Sephardi Jews and chocolate in Bayonne’s 176-year-old national heritage site synagogue on May 10.
Her book tells the struggle of a Jewish family in 17th century Bayonne to defend its rights against the anti-Semitic chocolatiers guild which sought to evict the Jewish community from the chocolate trade.
Kahn told the Times of Israel the idea for the novel germinated from a visit to the Biarritz Chocolate Museum ten years ago. There she saw a memorial plaque for the area’s marranos – Jews who originally lived in the Iberian Peninsula and converted or were forced to convert to Christianity.
“When I saw the plaque, I became completely hooked on this part of Jewish history,” she says. “Because French people have ignored that it was Jews who brought chocolate to France, and because I come from this region myself, I felt compelled to write this book and tell this story.”
Before the French succumbed to chocolate, cocoa was first brought to Spain by Christopher Columbus, when he returned from his fourth voyage to America in 1502. There he had found cocoa beans used as currency by the local population.
However, unimpressed by the bitter-flavored chocolate beverage imbibed by the Aztecs (nicknamed “the drink of gods”), King Ferdinand and Queen Isabella’s court didn’t much partake.
Years later, another Spanish explorer, Hernán Cortés, presented King Charles V with cocoa beans and the secret of palatable preparation: Cocoa mixed with other colonial imports – vanilla, cane sugar, and cinnamon – formed a sweet, tasty concoction.
Eventually, cocoa became the fashionable drink of the Spanish aristocracy.
“We don’t know much about how Jews first got into the chocolate trade in the New World, but my opinion is that some Jews must have sailed across the Atlantic with Cortés, and then capitalized on their contacts with European Jewish communities,” says Kahn.
After the 1492 edict of expulsion of Spanish Jews and the onslaught of the Portuguese Inquisition in 1536, some Jews sought asylum in France, and settled near the Spanish border in Saint-Esprit, on the outskirts of Bayonne, across the Ardour River.
Jews were granted rights of residence as conversos – “New Christians” – by local authorities in 1550, but were subjected to many restrictions on land ownership, retail trade, and travel.
Importing the tools and knowledge of cocoa, along with their contacts in the New World, Bayonne Jewry taught local workers the secrets of processing chocolate, but were eventually prohibited from working in this industry by the chocolatiers guild. A Bordeaux court annulled the decree in 1767.
By 1854, Bayonne was home to at least 34 chocolate companies and became known as the first chocolatier city in the country. Today there are some 200 families in the area.
“Chocolate as we know it today probably wouldn’t have existed, or entered Europe at this time of history, had it not been for the participation of Spanish and Portuguese Jews in the international chocolate trade,” Kahn concludes. | <urn:uuid:5e99d87e-69ac-48a3-914e-5cac8112b04e> | CC-MAIN-2016-26 | http://www.timesofisrael.com/france-thanks-sephardi-jews-for-chocolate-500-years-too-late/ | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398075.47/warc/CC-MAIN-20160624154958-00076-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.963167 | 1,009 | 3.140625 | 3 |
For Release: April 12, 2001
Media Relations Office
Researchers at the NASA Glenn Research Center, Cleveland, OH, have received a second patent for their method of growing atomically flat surfaces, without a single step even one atom high, on commercial semiconductor wafers. The step-free surfaces hold particular promise for improving the performance and reliability of a new class of microelectronic devices that could be used in everything from DVDs to jet engines.
Commercial semiconductor wafers are covered with steps, typically one to eight atoms high, despite careful polishing by their manufacturers. The Glenn research team makes step-free surfaces by first etching device-sized arrays of mesas into the wafers. Next, by controlling conditions, the researchers limit crystal growth to the riser, or side, of each atomic step. The crystal at each step grows sideways until the step reaches the edge of the mesa, leaving behind an atomically flat surface.
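The growth mechanism described here amounts to lateral "step-flow": atoms attach only at step risers, so each step edge migrates sideways until it runs off the mesa. The toy model below (a hypothetical one-dimensional cartoon, not the Glenn team's actual process model) captures that idea: at each time-step the upper terrace at every riser extends one site, and the iteration continues until the surface is a single flat terrace.

```python
# Cartoon of 1-D step-flow growth on an etched mesa (illustrative
# assumption, not the actual process physics). Growth occurs only at
# step risers, so each step edge advances sideways one site per
# time-step until it flows off the mesa edge, leaving no steps behind.
def step_flow(heights):
    steps = 0
    while len(set(heights)) > 1:        # mesa not yet atomically flat
        new = heights[:]
        for i in range(1, len(heights)):
            # a riser exists where the left neighbor is higher;
            # lateral growth extends the upper terrace to the right
            if heights[i - 1] > heights[i]:
                new[i] = heights[i - 1]
        heights = new
        steps += 1
    return heights, steps

mesa = [3, 3, 2, 2, 2, 1, 1, 0]         # atomic-layer heights across a mesa
flat, n = step_flow(mesa)
print(flat, n)  # → [3, 3, 3, 3, 3, 3, 3, 3] 6
```

The surface flattens in this cartoon only because the mesa is finite and contains no defects, which is consistent with the article's observation that mesas containing screw dislocations cannot be flattened this way.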
"We've flattened silicon carbide mesas as large as 0.4 by 0.4 millimeter and, depending on the mesa size, over half of the mesas on a wafer," said Tony Powell, senior research physicist at Glenn. "What's so attractive about our method is that, with just one extra patterning step in the fabrication, manufacturers can make these step-free surfaces," he said.
Those mesas that were not made flat contained screw dislocation defects -- so called because of the warped spiral stacking of the crystalline planes -- which are not amenable to this flattening method. An added benefit of the method is that it isolates the screw defects into mesas that can be identified and avoided.
Studies by other scientists have linked surface steps in the wafer to defects in semiconductor films that are different from the wafer material. The defects cause poor performance and reliability and have been troublesome to the development of new electronics for aerospace applications.
"We believe that step-free surfaces will enable remarkable improvements in devices based on silicon carbide and gallium nitride . These are the materials of choice for making high power solid-state switches as well as electronics for hostile environments, such as pollution and noise control devices inside aircraft engines," Powell said.
Industry researchers are using these materials for blue light emitting diodes (LEDs) for lighting, blue lasers for higher capacity DVDs and high efficiency transistors for more reliable electrical power switching and improved wireless communications.
Glenn is one of the Nation's leading aeropropulsion research laboratories and conducts instrumentation and controls research on sensors, electronics, photonics and microelectromechanical systems (MEMS) for aeronautics and space applications.
# # #
A print-quality image supporting this news release is available at http://www.grc.nasa.gov/WWW/PAO/pressrel/images/Stepfree.jpg
- end -
Is dyslexia really just a meaningless label as Professor Elliot suggests? Is there a difference between a child labelled ‘dyslexic’ and a child labelled ‘a poor reader’? In the UK we have around 375,000 diagnosed as dyslexic, but what does this actually mean? Of the hundreds of dyslexic students I’ve taught, many have languished helplessly in the doldrums of illiteracy while some seem suddenly to make rapid and remarkable progress.
This year, two students who were presented to me as dyslexic have experienced very different trajectories.
One, let’s call him Ben, had spent Years 7 and 8 being taught English in very small groups of students identified as having ‘specific learning difficulties’. In Year 9 such students are put back into mainstream classes with the expectation that the work they’ve done in the previous two years will have equipped them to cope.
Ben arrived in my class very worried about whether he was going to ‘look thick’ and with a very low estimation of his ability. He’s a quiet, hard-working chap, however, and wants to do well. I spent a fair bit of time working with Ben at the beginning of the year and, frankly, failed to see what the problem was: his reading was a little hesitant and his writing was inaccurate but full of good ideas and definitely showed signs of conscious crafting.
One lesson, I was talking to him about his work and suggested some ways he could improve his spelling. The despondency of his response was heartbreaking; ”But I can’t spell, sir. I’m dyslexic.”
“Ben,” I told him. “That’s nonsense! Of course you can.”
We spent some time going over doubling consonants, ‘i before e’ and a few other easy-to-implement gems, and before we knew it, his spelling had improved! We also did some work on various reading strategies like skimming and scanning and, guess what? His reading comprehension showed similar improvements. His confidence has grown massively and he’s now consistently producing C grade work. We’re now talking about what he needs to do to get an A in Year 11. If he carries on the way he has this year, he’s a shoo-in.
Then there’s 'Carrie'. She has terrible attendance, her behaviour is awful and she produces little or no work. When I met her parents at parents’ evening, they told me that none of this was Carrie’s fault; she was dyslexic you see. I didn’t see. I pointed out that even though she might find English difficult that was no excuse for not trying. At that point we reached a bit of an impasse.
Things have got a little better because, frankly, I’m not prepared to accept the bare minimum of work that Carrie feels is acceptable to produce. Although she doesn’t work anywhere near as hard as Ben, her reading and writing have improved and she’s making what we might describe as ‘steady’ progress. But her attendance is slipping, she’s regularly excluded and there’s been talk of her having a ‘fresh start’. Through it all, her parents maintain that her dyslexia isn’t being catered for. I worry that she may not make it.
It’s understandable that parents want to get this label because they believe the label will lead to an intervention, but arguably the interventions are ineffective, expensive and time-consuming. Resources are diverted away from helping all children with reading problems. If resources are thrown at a particular group of students suffering from a particular syndrome, what happens to students whose families haven’t paid the £300 or so needed to receive this label?
As ever, educated, middle-class parents will have both the cultural capital and the cash to ensure their child receives a dyslexia diagnosis, and children who come from a more deprived background will not. How can it be fair that our system further disadvantages the disadvantaged?
I’m not dismissing the anxiety caused by struggling to read. There’s no doubt that life tends to be harder if you find reading difficult. You are more likely to have other problems (clumsiness, hyperactivity and poor short-term memory, for example) and having one such problem increases the likelihood you’ll have another. Yet, there is no evidence that these problems actually cause reading difficulties. Maybe we’re putting the cart before the horse. Maybe the reading difficulties cause the problems?
Maybe we should agree that either every child with poor reading ability is dyslexic, or that none of them are.
Read more from David Didau at his blog www.learningspy.co.uk | <urn:uuid:91870b92-28b0-4851-8afb-61f64c628e00> | CC-MAIN-2016-26 | http://www.independent.co.uk/voices/comment/dyslexic-students-get-extra-support-but-what-about-those-without-the-label-who-struggle-with-reading-9156970.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397567.28/warc/CC-MAIN-20160624154957-00054-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.979536 | 1,024 | 2.859375 | 3 |
Broad, MIT researchers reveal structure of key CRISPR complex
Work uncovers secret of how genome-editing tool works as a “guided missile” to correct errors in the genetic code
Researchers from the Broad Institute and MIT have teamed up with colleagues from the University of Tokyo to form the first high definition picture of the Cas9 complex – a key part of the CRISPR-Cas system used by scientists as a genome-editing tool to silence genes and probe the biology of cells. Their findings, which are reported this week in Cell, are expected to help researchers refine and further engineer the tool to accelerate genomic research and bring the technology closer to use in the treatment of human genetic disease.
First discovered in bacteria in 1987, CRISPRs (Clustered Regularly Interspaced Short Palindromic Repeats) have recently been harnessed as so-called genome editing tools. These tools allow researchers to home in on “typos” within the three-billion-letter sequence of the human genome, and cut out and even alter the problematic sequence. The Cas9 complex, which includes the CRISPR “cleaving” enzyme Cas9 and an RNA “guide” that leads the enzyme to its DNA target, is key to this process.
“We’ve come to view the Cas9 complex as the ultimate guided missile that we can use to target precise sites in the genome,” said co-senior author Feng Zhang, a core member of the Broad Institute, an investigator at the McGovern Institute for Brain Research, and an assistant professor at MIT. “This study provides a schematic of the entire system – it shows the missile (the Cas9 protein), the programming instructions (the guide RNA) that send it to the right location, and the target DNA. It also reveals the secret of how these pieces function together to make the whole system work.”
To deconstruct this system, Zhang approached the paper’s co-senior author Osamu Nureki at the University of Tokyo. Together, they assembled a team to work out the complicated structure.
“Cas9-based genome editing technologies are proving to be revolutionary in a wide range of life sciences, enabling many new experimental techniques, so my colleagues and I were excited to work with Feng’s lab on this important research,” said first author Hiroshi Nishimasu, an assistant professor of biophysics and biochemistry who works in Nureki’s lab at the University of Tokyo.
The two teams worked closely to reveal the structural details of the Cas9 complex and to test their functional significance. Their efforts revealed a division of labor within the Cas9 complex. The researchers determined that the Cas9 protein consists of two lobes: one lobe is involved in the recognition of the RNA and DNA elements, while the other lobe is responsible for cleaving the target DNA, causing what is known as a “double strand break” that disables the targeted gene. The team also found that key structures on Cas9 interface with the guide RNA, allowing Cas9 to organize itself around the RNA and the target DNA as it prepares to cut the strands.
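The division of labor described above — an RNA guide that recognizes the target and an enzyme lobe that cuts it — can be sketched as a toy search script. This is only an illustration of the targeting rule for S. pyogenes Cas9 (a 20-nucleotide match followed by an "NGG" PAM, with cleavage about 3 bp upstream of the PAM); the sequences are made up and this is not a laboratory tool:

```python
# Toy illustration of Cas9 targeting (example sequences are made up).
# SpCas9 binds where the DNA matches the 20-nt guide ("protospacer")
# and is immediately followed by an NGG PAM; the double strand break
# falls between the 17th and 18th protospacer bases, ~3 bp from the PAM.
def find_cas9_sites(dna, guide):
    assert len(guide) == 20, "SpCas9 guides are typically 20 nt"
    sites = []
    for i in range(len(dna) - 22):
        protospacer, pam = dna[i:i + 20], dna[i + 20:i + 23]
        if protospacer == guide and pam[1:] == "GG":   # PAM = 'NGG'
            sites.append((i + 17, dna[i:i + 23]))      # cut index, full site
    return sites

dna = "TTACGGCATTGACTGACTGGTACCGGTTCCAGGAATTC"
guide = "CGGCATTGACTGACTGGTAC"
print(find_cas9_sites(dna, guide))  # → [(20, 'CGGCATTGACTGACTGGTACCGG')]
```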
Identifying the key features of the Cas9 complex should enable researchers to improve the genome-editing tool to better suit their needs.
“Up until now, it has been very difficult to rationally engineer Cas9. Now that we have this structural information, we can take a principled approach to engineering the protein to make it more effective,” said Zhang, who is also a co-founder of Editas Medicine, a company that was started last year to develop Cas9 and other genome editing technologies into a novel class of human therapeutics.
Currently, Cas9 is used in experiments to silence genes in mammalian cells – sometimes at multiple sites across the genome – and large libraries of RNA sequences have been created to guide Cas9 to genes of interest. However, the system can only target specific types of sites. Some studies have also shown that the RNA could lead Cas9 “off-target,” potentially causing unexpected problems within the cellular machinery.
The researchers plan to use this new, detailed picture of the Cas9 complex to address these concerns.
“Understanding this structure may help us engineer around the current limitations of the Cas9 complex,” said study author F. Ann Ran, a graduate student in Zhang’s lab. “In the future, it could allow us to design versions of these editing tools that are more specific to our research needs. We may even be able to alter the type of nucleic acid sequences that Cas9 can target.”
Such technological improvements will be needed if the CRISPR-Cas system is to evolve into a therapeutic tool for the treatment of genetic disease.
The study was supported by the National Institute of Mental Health (NIMH); an NIH Director’s Pioneer Award; the Japan Science and Technology Agency; the Japan Society for the Promotion of Science; the Keck, McKnight, Poitras, Merkin, Vallee, Damon Runyon, Searle Scholars, Klingenstein, and Simons Foundations; as well as Bob Metcalfe and Jane Pauley.
Other researchers who worked on the study include Patrick D. Hsu, Silvana Konermann, Soraya Shehata, Naoshi Dohmae, and Ryuichiro Ishitani.
Nishimasu H et al. “Crystal structure of Cas9 in complex with guide RNA and target DNA.” Cell DOI: 10.1016/j.cell.2014.02.001
About the Broad Institute of Harvard and MIT
The Eli and Edythe L. Broad Institute of Harvard and MIT was launched in 2004 to empower this generation of creative scientists to transform medicine. The Broad Institute seeks to describe all the molecular components of life and their connections; discover the molecular basis of major human diseases; develop effective new approaches to diagnostics and therapeutics; and disseminate discoveries, tools, methods and data openly to the entire scientific community.
Founded by MIT, Harvard and its affiliated hospitals, and the visionary Los Angeles philanthropists Eli and Edythe L. Broad, the Broad Institute includes faculty, professional staff and students from throughout the MIT and Harvard biomedical research communities and beyond, with collaborations spanning over a hundred private and public institutions in more than 40 countries worldwide. For further information about the Broad Institute, go to http://www.broadinstitute.org.
About the McGovern Institute for Brain Research at MIT
The McGovern Institute for Brain Research at MIT is led by a team of world-renowned neuroscientists committed to meeting two great challenges of modern science: understanding how the brain works and discovering new ways to prevent or treat brain disorders. The McGovern Institute was established in 2000 by Patrick J. McGovern and Lore Harp McGovern, who are committed to improving human welfare, communication and understanding through their support for neuroscience research. The director is Robert Desimone, formerly the head of intramural research at the National Institute of Mental Health. | <urn:uuid:3f84a59b-dfe7-450f-bd04-4205a6715e5e> | CC-MAIN-2016-26 | https://www.broadinstitute.org/news/5515 | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783403508.34/warc/CC-MAIN-20160624155003-00127-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.935513 | 1,471 | 2.796875 | 3 |
The Yellow Wallpaper
Author Charlotte Perkins Gilman (1860-1935)
First Published 1892
Locale Northeastern Atlantic Coast
Time of Plot A summer during the 1890s
Type of Plot Psychological realism
JANE, the narrator
JOHN, her husband, a physician
JENNIE, John's sister and Jane's nurse
The story unfolds slowly over many weeks, beginning with the arrival of the narrator (whose name, Jane, is not revealed until the end of the story) at an estate in the country. Jane has gone into a gradual decline, losing interest in her family and her surroundings, since the birth of her baby. Her husband, John, and her brother believe that a long rest is what she needs to feel more like herself. Because both men are respected physicians, Jane believes that they know what is best for her and tries to put on a good face, despite her increasing suspicions that her rest cure may do her more harm than good.
At first, the colonial estate where she is the only guest appears harmless and quaint, with large gardens and spacious rooms. Jane later reveals that her windows have bars and her bed is bolted to the floor. The only people whom she sees are her husband, who comes from the city to check on her, and her nurse, John's sister, Jennie. Jane never has contact with her recently delivered child or with friends. Her summer home takes on a more sinister tone as her mental condition deteriorates, with the very wallpaper in her room coming to grotesque life.
Jane's husband blames her thinking for all of her problems and forbids her to do anything that will employ her mind productively. Jane rebels at first and keeps a secret journal, but as she weakens, even that endeavor becomes too tiring. She withdraws into her thoughts, which form the running interior monologue of her mental collapse. Apparently accepting the separation from her infant, Jane slowly loses control of her imagination and her motivation to seek human contact. After she collapses and is forced to keep to her room, she becomes fascinated with the patterns on the yellow wallpaper, seeing in the paper's swirls faces and patterns that first amuse and then terrify her.
From her barred window, Jane begins seeing women creeping about the gardens on their hands and knees. Soon she discovers that another woman is trapped behind the wallpaper in her room, something that only she can see. At night, this woman pushes and struggles behind the paper in an effort to escape, rattling and ripping it as she fights to get free.
Jane says that the woman creeps along the walls, and she tries to help free her by gradually peeling back her wallpaper prison. Jane begins to notice signs of deterioration in her room: smears on the wall and bite marks on the bedstead. Gradually she no longer wants to leave her room; when John comes to take her home, she refuses to go and locks herself in with the creeping woman who is now free in the room.
Jane's husband and sister-in-law gain entry and find only Jane creeping around and around the room, surrounded by shreds of wallpaper. The story concludes as she creeps over the form of her husband, who has fainted from the shock of seeing her in her madness.
Themes and Meanings
"The Yellow Wallpaper" is partly autobiographical. Charlotte Perkins Gilman wrote it after she fled from her husband with her infant daughter to California. More important than the story's similarities to Gilman's own experience is the larger issue of a woman's right to be creative and autonomous. The story can be seen as advocating a woman's right to act and speak for herself; the alternative clearly leads to madness, as it does for Jane.
At the time of the story, most people believed that women were delicate and prone to madness if overstressed. A common treatment for their presumed mental illnesses combined isolation, rest, and inactivity, the very things that cause Jane's breakdown. From her own account, readers know that Jane enjoys writing and reading, yet John considers these to be dangerous activities to be avoided at all costs. At that time, it was common to remove a depressed woman from all sources of stress or sensory stimulation; women such as Jane were separated from their children, kept in bed, hand-fed, bathed, and massaged. It is precisely this type of treatment that drives Jane to begin hallucinating. The silent madness into which Jane withdraws is not only her reaction to the cure that men prescribe for her, but her only available form of rebellion against these tyrannies.
As Jane becomes more distanced from the world and from any source of sensory stimulation, she begins to hallucinate. Her visions of the creeping women and the woman enshrouded behind her bedroom's wallpaper symbolize her own binding and oppression. It is the rest treatment prescribed by physicians such as her husband and brother that metaphorically cause the women whom Jane sees to creep like infants rather than walk as independent adults. Jane's rest cure becomes her own wallpaper prison, one that simultaneously drives her insane and pushes her to assert her own rebellious selfhood. By freeing the woman from behind the wallpaper, Jane succeeds in freeing herself. Sadly, however, her mental state has deteriorated so badly that she has become truly insane and will remain utterly dependent on her husband.
At the story's conclusion, the narrator locks herself in her room and ties a rope around her waist so that she cannot be removed. Jane, the woman from behind the yellow wallpaper, creeps about the edges of her prison, a room that she will now use as a fortress. It is significant that Jane waits to reveal her name to readers until after her husband faints in horror at seeing her reduced to a crawling madwoman.
Style and Technique
The most prominent technical and stylistic feature of "The Yellow Wallpaper" is Gilman's combining of the first-person narrator and present-tense narration. By allowing readers to see only what Jane sees as she sees it, Gilman duplicates as closely as possible the feelings of entrapment, isolation, and unreality that Jane experiences. Jane's decline into true madness is so gradual and her narrative voice seems so level-headed, even when she describes events that one knows are impossible (such as the creeping women in the garden or the woman struggling to free herself from behind her room's wallpaper), that one might misread this tale as a ghost story rather than as an account of Jane's mental deterioration. By making the descriptions of the women, the room, and the malevolent shapes and faces in the wallpaper so immediate and realistic, Gilman tricks the reader into seeing Jane as simultaneously mad and in the grips of some haunting supernatural specters. This ambiguity increases the shock that readers experience when they realize that Jane has been talking in metaphors throughout her narrative, that she has been recounting her own sense of intellectual and emotional oppression, rather than seeing actual women crawling about on the ground in the gardens or moving behind her room's wallpaper.
Some readers may be content to let their interpretation of "The Yellow Wallpaper" rest with the supernatural; if left here, however, readers will miss the more important point of Gilman's tale. Gilman forces readers to reconsider Jane's entire narrative by means of the story's conclusion, when Jane finally speaks her own name for the first time as she creeps over her husband's inert body. Little of the story will then make sense unless reexamined. Gilman plants numerous clues throughout the story that express Jane's interior struggle to be herself and to reclaim her independence: her need to be creative by keeping a journal, or the existence of the woman for whom Jane demolishes the yellow wallpaper to effect her escape. Similarly, the information that Jane offhandedly supplies readers in the story's early stages (such as descriptions of the bars on her window, the bite marks on the bed that is bolted to the floor, and her increasing lassitude) now can be reinterpreted as describing the true nature of where Jane has been staying: at an asylum. On second reading, "The Yellow Wallpaper" becomes the story of a woman who, while she may have been depressed, was not insane when she began her cure.
Melissa E. Barth
Yellow Wallpaper, The (Women's Literature)
The Yellow Wallpaper
Author Charlotte Perkins Gilman (1860-1935)
First Published 1892
Locale New England
Time of Plot The late nineteenth century
Type of Plot Social criticism
Type of Work Novella
THE NARRATOR, an imaginative, creative woman apparently suffering from postpartum depression
JOHN, the narrator's husband, a physician
JENNIE, John's sister, who serves as housekeeper
WEIR MITCHELL, the real-life doctor who popularized the "rest cure" prescribed to the narrator (and the author as well)
Form and Content
The structure of The Yellow Wallpaper creates a sense of immediacy and intimacy. The story is written in a journal-style, first-person narrative which includes nine short entries, each entry indicated by a small space between it and the last. The journal entries span three months during which John attempts to cure his wife's "nervous condition" through the rest cure of Weir Mitchell, which assumes that intellectual stimulation damages a woman physically and psychologically. In the beginning of the story, the narrator appears sane and believable, but as the story continues, the reader realizes that she is unreliable because she withholds and confuses information. By the end, the structure (short paragraphs, fragmented and disjointed thought patterns) reflects the narrator's mental disorder. Through the revelations contained in the journal, the reader is allowed an intimate view of the narrator's gradual mental breakdown.
The journal begins when John and the narrator move into a temporary home John has procured to provide the narrator the break from routine that he believes necessary for her rest and recovery. She, on the other hand, doubts the necessity of such a move and wonders if the mysterious house is haunted. John reveals his superior attitude toward his wife by laughing at her "fancies," a response which the narrator finds quite natural because, as she explains, one must expect such treatment in marriage. She even suggests that his indifference to her opinions on the house and her illness keeps her from getting well faster. Her suggestion turns out to be a fateful prediction.
Against her wishes, John decides that he and his wife will sleep in the attic room of the house, which at one point may have been a nursery. Actually, the room seems to be more of a prison than a place for children to play. The windows have bars on them, and the bed is nailed to the floor. There is even a gate at the top of the stairs. Even more disturbing to the narrator, however, is the yellow wallpaper, peeling or pulled off the walls in strips. In the beginning, the paper's pattern jolts and annoys the narrator's sensibilities, but later her attitude has a bizarre change.
The narrator's morbid fascination with the yellow wallpaper is the first clue of her degenerating sanity. She begins to attribute lifelike characteristics to the paper, saying that it knows how it affects her and that its eyes stare at her. She even begins to believe that the paper has two levels, a front pattern and a shadowy figure trapped behind its bars. The narrator betrays the progression of her illness when she begins to believe that the figure behind the wallpaper is a woman, trapped like herself.
The woman behind the wallpaper becomes an obsession. The narrator begins to crawl, like the woman behind the paper, around the edge of the room, making a groove or "smooch" on the wall. The narrator begins to catch glimpses of the woman out the windows, creeping around the garden on her hands and knees. She also starts peeling off the wallpaper in an effort to completely free the woman (or women, as she soon believes) trapped in that second layer. John and his sister, Jennie, begin to suspect that something is terribly wrong, and yet they are pleased with her apparent progress. She appears more normal to them at times because she is saving her energy for nighttime, when the woman behind the paper is most active. Her apparent normality is merely a façade.
The story's climactic scene occurs as their stay in the rented house is coming to a close. On their last night, John is once again in town attending to a patient, and the narrator asks Jennie not to disturb her. Left alone, the narrator locks herself in the nursery to allow uninterrupted time for peeling wallpaper and thus freeing the shadowy woman. As the narrator works, she identifies more closely and intensely with the trapped woman until, ultimately, she loses her sense of individual identity and merges with the woman behind the wallpaper. John breaks down the door to find his wife crawling amid the torn paper, proclaiming that she is free at last, and no one can put her back behind the wallpaper. John faints, and his wife continues her creeping over his fallen body.
Charlotte Perkins Gilman used her personal bout with postpartum depression to create a powerful fictional narrative which has broad implications for women. When the narrator recognizes that there is more than one trapped, creeping woman, Gilman indicates that the meaning of her story extends beyond an isolated, individual situation. Gilman's main purpose in writing The Yellow Wallpaper is to condemn not only a specific medical treatment but also the misogynistic principles and resulting sexual politics that make such a treatment possible.
The unequal relationship between the narrator and John is a microcosm of the larger gender inequity in society. Gilman makes it clear that much of John's condescending and paternal behavior toward his wife has little to do with her illness. He dismisses her well-thought-out opinions and her "flights of fancy" with equal disdain, while he belittles her creative impulses. He speaks of her as he would a child, calling her his "little girl" and saying of her, "Bless her little heart." He overrides her judgments on the best course of treatment for herself as he would on any issue, making her live in a house she does not like, in a room she detests, and in an isolated environment which makes her unhappy and lonely. John's solicitous "care" shows that he believes the prevailing scientific theories which claim that women's innate inferiority leaves them, childlike, in a state of infantile dependence.
Gilman makes John the window through which readers can view the negative images of women in her society. In Gilman's lifetime, women's right to become full citizens and to vote became one of the primary issues debated in the home, the media, and the political arena. As women's reform movements gained the strength that would eventually win the vote in 1920, the backlash became more vicious and dangerous. Noted psychologists detailed theories that "proved" women's developmental immaturity, low cognitive skills, and emotional instability. Physicians, who actually had little knowledge of the inner workings of the female body, presented complex theories arguing that the womb created hysteria and madness, that it was the source of women's inferiority. Ministers urged women to fulfill their duty to God and their husbands with equal submission and piety. In indicting John's patronizing treatment of his wife, Gilman indicts the system as a whole, in which many women were trapped behind damaging social definitions of the female.
One can see the negative effects of John's (and society's) treatment of the narrator in her response to the rest cure. At first, she tries to fight against the growing lethargy that controls her. She even challenges John's treatment of her. Yet, while one part of her may believe John wrong, another part that has internalized the negative definitions of womanhood believes that since he is the man, the doctor, and therefore the authority, then he may be right. Because they hold unequal power positions in the relationship and in society, she lacks the courage and self-esteem to assert her will over his even though she knows that his "treatment" is harming her. Deprived of any meaningful activity, purpose, and self-definition, the narrator's mind becomes confused and, predictably, childlike in its fascination with the shadows in the wallpaper.
In the end, the narrator triumphs over John (she literally crawls over him) but escapes from him only into madness. As a leading feminist lecturer and writer, Gilman found other options than madness to end her confinement in traditional definitions of womanhood. Eventually, Gilman divorced her husband, who married her best friend, and her husband and her best friend reared her child. The public, friends, and family so sharply censured Gilman for her actions that she knew many women would stay in unhealthy situations rather than risk such condemnation. By having the story end with the narrator's descent into insanity, Gilman laments the reality that few viable options exist for creative, intellectual women to escape the damaging social definitions of womanhood represented by John. In her horrifying depiction of a housewife gone mad, Gilman attempts to warn her readership that denying women full humanity is dangerous to women, family, and society as a whole.
The publication of The Yellow Wallpaper had both immediate and long-term effects on womens issues. Gilman writes in her essay "Why I Wrote The Yellow Wallpaper" that the story was meant to save women from further suffering under the rest cure, and that her plan was successful. She says that after her former physician, Weir Mitchell, read a copy of the story that she had sent to him, he altered his treatment of women with nervous disorders. Therefore, the novella served an immediate purpose in the real, everyday lives of late nineteenth and early twentieth century women.
Originally viewed as a gothic horror story in the tradition of Edgar Allan Poe, The Yellow Wallpaper also helped to establish Gilman as an important woman writer in this genre. While few other critics gave it much attention, William Dean Howells praised the novella for its ability to "freeze the blood" and included it in his 1920 collection of The Great Modern American Stories. The novella became well known among such later horror writers as H. P. Lovecraft, who included it in Supernatural Horror in Literature (1945).
It was not until the 1970s and the advent of feminist scholarship, however, that critics began to explore the social, political, and cultural implications of The Yellow Wallpaper. Since then, feminist scholars have identified the novella as an indictment of a social structure which deters women's intellectual, psychological, and creative growth in an effort to keep women childlike and submissive. The work is now often included in American literature anthologies and feminist resources as a fine early example of fiction that criticizes social restrictions placed on women.
Feminist scholars have also found that the destructive impact of social definitions of womanhood on women of the late nineteenth and early twentieth centuries illustrated in this novella appears in other women's fiction of the time. For example, the central protagonist of Kate Chopin's The Awakening (1899) faces similar damaging social definitions of womanhood and, not finding a place for herself among them, commits suicide (not madness, but a similar escape). In another example, Mary E. Wilkins Freeman writes of a woman, "Old Woman Magoun," who allows her beloved granddaughter to die rather than be traded in a card deal; she then goes mad. Gilman was not alone in showing how misogynistic attitudes destroy women.
Sources for Further Study
Gilbert, Sandra M., and Susan Gubar. The Madwoman in the Attic: The Woman Writer and the Nineteenth-Century Literary Imagination. New Haven, Conn.: Yale University Press, 1979. One of the premier critical works on nineteenth century women writers. Includes a discussion of The Yellow Wallpaper linking the pattern in the wallpaper to patriarchal text patterns that women writers had to escape.
Gilman, Charlotte Perkins. "Why I Wrote The Yellow Wallpaper." Forerunner 4 (1913): 271. A one-page article in which Gilman explains that her main reason for writing The Yellow Wallpaper was to save other women from fates similar to her own under the rest cure.
Golden, Catherine. The Captive Imagination: A Casebook on "The Yellow Wallpaper." New York: Feminist Press, 1992. This indispensable compilation includes the text of The Yellow Wallpaper with the original illustrations, useful biographical and background information, well-selected critical essays, and a solid introduction.
Kolodny, Annette. "A Map for Rereading: Or, Gender and the Interpretation of Literary Texts." New Literary History 11, no. 3 (1980): 451-467. In this article, Kolodny argues that Gilman's contemporaries did not understand the implications of The Yellow Wallpaper because they did not have the context to understand her point.
Meyering, Sheryl L., ed. Charlotte Perkins Gilman: The Woman and Her Work. Ann Arbor, Mich.: UMI Research Press, 1989. An important collection of critical essays on Gilman and her works, including one by Linda Wagner-Martin focusing on The Yellow Wallpaper.
Amy E. Hudock
Yellow Wallpaper, The (Short Story) | <urn:uuid:f8bc3b40-668a-4588-b4e5-5c025ead1b8a> | CC-MAIN-2016-26 | http://studies.tripod.com/ENGL2328/yellow_wallpaper.htm | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397636.15/warc/CC-MAIN-20160624154957-00162-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.966061 | 4,363 | 3.03125 | 3 |
Structure of Prion The normal prion structure is characterized by four helices and is denoted PrPC. The abnormal form, denoted PrPSc, consists of 2 helices and 4 beta-sheets, which derive from two of the helices of PrPC.
Figure 1: Proposed three-dimensional structure of (a) PrPC and (b) PrPSc
Chain-reaction Conversion of PrPC to PrPSc
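The chain-reaction conversion can be illustrated with a toy model: each misfolded PrPSc templates the conversion of a normal PrPC, so the misfolded pool roughly doubles until the normal pool is exhausted. This is only a conceptual sketch; the pool sizes, one-conversion-per-step rule, and function name are illustrative assumptions, not biological measurements.

```python
def chain_reaction(normal, misfolded, steps):
    """Toy model of templated prion conversion.

    Each step, every PrPSc molecule converts one PrPC molecule (if any
    remain), so the misfolded count approximately doubles per step.
    Returns the (normal, misfolded) counts after each step.
    """
    history = [(normal, misfolded)]
    for _ in range(steps):
        converted = min(normal, misfolded)  # each PrPSc converts one PrPC
        normal -= converted
        misfolded += converted
        history.append((normal, misfolded))
    return history

# Starting from a single misfolded molecule among 1000 normal ones,
# exponential doubling consumes the entire normal pool within ~10 steps.
history = chain_reaction(normal=1000, misfolded=1, steps=12)
print(history[-1])  # (0, 1001)
```

The exponential growth is why a tiny initial seed of PrPSc can, given enough time, convert essentially the whole pool, matching the long incubation followed by rapid decline described in the slides.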
How Does PrPSc Attack the Brain? The harmful PrPSc form is very resistant to high temperatures, UV irradiation and strong degradative enzymes. The brain consists of a mass of nerve tissue (hundreds of billions of neurons) and neuroglia, the supporting neural tissue. When enough PrPSc proteins have been made, they form long filamentous aggregates that gradually damage neuronal tissue. When the neurons in the brain are dead, the brain takes on a sponge-like appearance, and this eventually leads to death.
Different prions affect different regions of the brain. Cerebral cortex: symptoms include loss of memory and mental acuity, as well as visual impairment (CJD). Thalamus: Fatal Familial Insomnia (FFI). Cerebellum: loss of control of body movements and difficulty walking (kuru, GSS). Brain stem: in mad cow disease (BSE), the brain stem is affected.
Prions infect animals and humans. What are the prion-related diseases? Are all known prion diseases fatal? Yes, they are. And where do prion-related diseases occur?
TSEs T: Transmissible (communicable) S: Spongiform (looks like sponge) E: Encephalopathy (brain disease) Prion diseases are often called transmissible spongiform encephalopathies (TSEs), because after infection the brain takes on a sponge-like appearance.
Six forms of TSEs:
1) CJD - Creutzfeldt-Jakob Disease
2) BSE - Bovine Spongiform Encephalopathy
3) Alper's Syndrome
4) GSS - Gerstmann-Sträussler-Scheinker syndrome
5) FFI - Fatal Familial Insomnia
6) Kuru
CJD - Creutzfeldt-Jakob Disease The disease causes fatal degradation of brain tissue and produces a dementia that affects men and women, often between the ages of 50 and 65. 90% of cases progress to death within a year. There is no record of anyone recovering from the disease and there is no known treatment.
Causes
CJD is associated with prions, mutated forms of a normal protein produced in nerve cells, white blood cells, muscle cells, and the cells of many other tissues.
Just how prions cause the disease symptoms remains unclear.
There are three types of the disease: sporadic, genetic, and iatrogenic.
The Types of CJD
Sporadic CJD: spontaneous mutation; 85% of cases.
Genetic CJD: a mutant prion gene is passed genetically from one generation to another; 10-15% of cases.
Iatrogenic CJD: contamination introduced directly to the brain (e.g., surgery).
New variant CJD: a form of Creutzfeldt-Jakob disease that has a much shorter incubation period than previously recognized types but is clinically identical. It first appeared in the late 1980s.
BSE: Bovine Spongiform Encephalopathy. How are cows infected?
BSE Way of Infection The cow eats offal from infected sheep. Prions are taken up from the gut and transported along nerve fibers to the brain stem. Prions accumulate and convert normal prion proteins to the disease-causing form, PrPSc. Years later, BSE results when a sufficient number of nerve cells have become damaged, affecting the behaviour of the cow, and eventually the cow dies.
Symptoms
- Bored with food, unsociable.
- No ability to memorize anything.
- Loss of control of muscle movement.
- Cannot speak or swallow food.
How It Works:
There is evidence that sports similar to bowling were played in ancient Egypt. Homer's epic Greek poem The Iliad also contains hints of bowling. During the Middle Ages, a form of bowling became popular among monks. It was in the 1800s that ninepin bowling took hold in the United States, where it was attractive to gamblers. To curb gambling, an 1841 law made ninepin bowling alleys illegal. To skirt the law, gamblers changed the rules of the game simply by adding an extra pin. Thus tenpin bowling was born. It is now the standard for the game.
Bowling has never officially been part of the Olympic program, but an exhibition of the sport was included at Seoul 1988. Bowling has been part of the Pan American Games since 1991.
Bowling is contested on wooden or synthetic lanes that are 60 feet long and 41.5 inches wide. Games are divided into 10 frames, with each bowler given two attempts per frame to topple the pins. Each pin is worth one point. A strike (recorded when all the pins are knocked down with the first ball) counts for 10 points plus bonus points for pins knocked over on the next two throws. A spare (recorded when all the pins are knocked down with two balls) counts for 10 points plus bonus points for the pins knocked over on the next throw. Strikes on the first two balls of the final frame allow the bowler to take a third throw, therefore a perfect game is a score of 300 points, equal to 12 consecutive strikes. | <urn:uuid:9ab31fbf-aa4b-4e19-8fa5-f31f5b7a2681> | CC-MAIN-2016-26 | http://olympic.ca/sports/bowling/ | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395548.53/warc/CC-MAIN-20160624154955-00086-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.97388 | 315 | 3.03125 | 3 |
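The strike and spare bonus rules described above amount to a small scoring algorithm. As a sketch (the function name and roll-list representation are illustrative, not part of the article):

```python
def bowling_score(rolls):
    """Score a complete tenpin game from a list of pins knocked down per roll."""
    score = 0
    i = 0  # index of the first roll of the current frame
    for frame in range(10):
        if rolls[i] == 10:  # strike: 10 + pins from the next two throws
            score += 10 + rolls[i + 1] + rolls[i + 2]
            i += 1
        elif rolls[i] + rolls[i + 1] == 10:  # spare: 10 + pins from the next throw
            score += 10 + rolls[i + 2]
            i += 2
        else:  # open frame: just the pins knocked down
            score += rolls[i] + rolls[i + 1]
            i += 2
    return score

# Twelve consecutive strikes: ten frames of 30 points each.
print(bowling_score([10] * 12))  # 300
```

Note how the tenth-frame rule falls out naturally: the bonus throws for a final strike or spare are simply the extra rolls at the end of the list, which is why a perfect game needs 12 strikes, not 10.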
From South Africa to Kanesatake, indigenous peoples have, at times, turned to arms in their struggles against colonialism. The battle against British imperialism in Pakistan, India, and Bangladesh has often been pointed to by Western observers as the ultimate example of non-violent resistance. Such observers have attempted to dictate this false paradigm of resistance as the only legitimate form of anti-colonial action. The reality is, however, that armed resistance played a crucial role in ridding the Subcontinent of the British yoke. Keeping this in mind, a re-contextualization of the discussion around the current events in Gaza is warranted.
Make no mistake, Palestine is a colonized land and its Arab majority is its indigenous population. And yet, so much of the mass media coverage and popular discussion around the siege of Gaza treats the current incarnation of the conflict as a self-contained spat, initiated by ‘bloodthirsty and militant’ Arabs. Newspaper headlines that make uncomplicated reference to Israel’s ‘right to self defense’ neglect entirely the anti-colonial nature of armed resistance in Gaza.
Such headlines further disregard the sequence of events that brought the conflict to its current stage. From the beginnings of ethnic cleansing in 1947 to the start of the current blockade and siege in 2007, Palestinians have been struggling to survive in the face of Zionism’s racist oppressions. Some Palestinians have made the choice to take up arms in their resistance. When the Israeli government targets the elected representatives of the Gazan people, they are not defending themselves; they are further entrenching their occupation.
When the common question, ‘What is Israel supposed to do in the face of rocket fire?’ is asked, McGill Students for SPHR respond with: “End the siege! Free Palestine!”
—McGill Students for Solidarity for Palestinian Human Rights
Students can learn to recognize sentence fragments using these lesson plans.
By Jo Ann Zimmerman
What's the difference between a sentence and a fragment?
A sentence has a main clause; that is, it contains a subject, a verb, and it expresses a complete thought.
Whereas a fragment doesn't.
If you copy and paste the line above into a word processing program, your grammar checker will tell you it is a fragment. By middle school, most students get the subject-verb requirement, but many continue to struggle with the concept of a main clause. Suppose we write the two clauses in the last sentence this way:
Students often have a lot of difficulty avoiding fragments in their own writing, usually because they struggle with the concept of a main clause.
Again, a grammar checking program will identify the second group of words as a fragment because it does not express a complete thought. We could also say it is a dependent (subordinate) clause, not a main clause. Many students miss this kind of sentence error because they mentally connect the two clauses despite the period between them. Often, even students who can recognize this kind of fragment in a grammar exercise fail to avoid it in their own writing.
Few conventions frustrate English teachers more than sentence errors, and the problem is not likely to go away anytime soon. In my own experience as a middle school language arts teacher, I have sat through many meetings with high school faculty who question what in heaven's name we are all doing down there in eighth grade, sending them writers who can't use complete sentences. Grrr....
OF COURSE we are all teaching our students to write in complete sentences. Yet it is also undeniable that many high school students still use fragments. After many years of frustration over this issue, I have found a surprising degree of success with one simple trick.
In addition to teaching (again!) the rules for avoiding fragments, I now also teach students to apply the "Guess what?" rule. I simply have them ask "Guess what?", then answer that query with the word group in question. Almost always, fragments fail this test. For example:
Guess what? Because I could not stop for death. FAIL
Guess what? He kindly stopped for me. PASS
Guess what? Because I could not stop for death, he kindly stopped for me. PASS
This example not only illustrates the effectiveness of the "Guess what?" test, it also offers a clue to fixing fragments. Which we'll explore in the next article. Oops...
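For teachers who like to tinker, the intuition behind the "Guess what?" test can be roughed out in a few lines of code. This is only a toy heuristic under loudly stated assumptions: a hand-picked list of subordinating conjunctions and a naive comma check stand in for the real parsing a grammar checker would need.

```python
# Toy fragment heuristic: a word group that opens with a subordinating
# conjunction and never "closes" with a main clause (approximated here
# as the absence of a comma-separated second clause) is flagged.
SUBORDINATORS = {"because", "although", "when", "while", "if", "since",
                 "unless", "whereas", "after", "before"}

def looks_like_fragment(text):
    words = text.lower().rstrip(".!?").split()
    if not words:
        return True
    return words[0] in SUBORDINATORS and "," not in text

print(looks_like_fragment("Because I could not stop for death."))  # True
print(looks_like_fragment("He kindly stopped for me."))            # False
print(looks_like_fragment(
    "Because I could not stop for death, he kindly stopped for me."))  # False
```

The three test sentences mirror the "Guess what?" examples above; a dependent clause on its own fails, while the joined sentence passes.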
In the meantime, here are some lessons to address fragments in the classroom:
Sentence Fragment Lesson Plans:
The Indiana Standards Resources Organization provides this brief introduction to fragments and how to fix them. This lesson is described as one for second grade, but would probably be useful for third through fourth graders as well.
From the Louisiana Department of Education, here's a lesson for upper elementary grades that introduces the concept of a complete sentence vs. fragments and run-ons.
This comprehensive lesson from Core Knowledge teaches middle school students not only how to identify sentence fragments, but covers sentence types and run-ons, as well.
For the many high school students who continue to struggle with sentence fragments, this lesson reinforces the definitions of main and subordinate clauses.
1996 to 2006, Life expectancy increases for men and women.
Education and training:
1997 to 2007, More Australians obtain a non-school qualification.
1997 to 2007, The unemployment rate decreases.
1996-97 to 2006-07, Australia experiences significant real income growth.
1994-95 to 2005-06, The real income of low income Australians increases.
1997 to 2007, Australia’s real net worth per person rises.
2005-06, Most Australians are not experiencing overcrowding.
1996-97 to 2006-07, Australia experiences productivity improvement.
The natural landscape:
2000 to 2007, The number of threatened birds and mammals assessed as extinct, endangered or vulnerable rises.
1995 to 2005, The rate of land clearing increased.
The air and atmosphere:
1997 to 2006, Air quality is generally good, even though bushfires have obscured this trend.
1990 to 2006, Net greenhouse gas emissions have risen.
Oceans and estuaries:
1996 to 2006, The number of fish species classified as overfished increases for Commonwealth fisheries.
Family, community and social cohesion:
2000 to 2006, More Australians are participating in voluntary work.
1998 to 2005, Rates of personal crime increase slightly, and household crime rates decrease.
Democracy, governance and citizenship:
2004 to 2007, The vast majority of eligible Australians are enrolled to vote.
Bring on the tough stuff - there’s not just one right answer.
What's the point of being so deliberately insulting about the filling station at the beginning of the poem? You know, the "it's so gross and dirty" tone that our speaker starts off with? Does that set anything up later in the poem? Or is it just straight up rude?
Why devote an entire poem to a filling station? Since when is a gas station poem material?
Okay, serious question here: what is this poem about? Family? Fuel? God?
What do you think the speaker's relationship to the filling station and the people who live there is? What evidence in the poem supports your opinion?
In a poem about something so specific (the filling station), why end on such a vague, and seemingly unrelated note? In your opinion, what does the last line mean in relationship with the rest of the poem?
The poem is very carefully detailed. Bishop goes to great lengths to describe how nasty the place is, and she also details the more charming touches of the place.
What effect does all of this detail have on your overall impression of the filling station?
What's the speaker's final impression of the filling station and how has it changed over the course of the poem? How can you tell?
Glossary of Assessment Terms
Assessment: A systematic, ongoing process aimed at understanding and improving student learning.

Curriculum Map: A curriculum map is a visual representation of what you do in your program to foster desired knowledge, skills, and values. It shows the alignment between a program's curriculum and the learning outcomes of the program.

Direct Measures: Direct measures evaluate student work products in light of learning outcomes for the program. Examples of direct measures include exams and rubrics for capstone projects, portfolios, papers, and performances.

Formative Assessment: Formative assessment is aimed at understanding and improving learning along the progression of students' studies. It involves gathering and interpreting evidence of student learning from at least one point prior to the end of the program.

Indirect Measures: Indirect measures evaluate student perceptions of their learning and the educational environment that supports learning. Examples of indirect measures include surveys, focus groups, and interviews.

Institution Level Assessment: Institution level assessment is aimed at understanding and improving student learning across the institution.

Learning Goals: Goals are general statements about knowledge, skills, attitudes and values expected in graduates of the program. Goals are written to align with the holistic vision of the mission. Typically, multiple goals are drawn from the mission statement.

Learning Outcomes: Learning outcomes are clear, concise statements that describe how students can demonstrate their mastery of program goals. There are usually multiple learning outcomes for each goal.

Mission Statement: A mission statement explains why your organization exists and what it hopes to achieve in the future. It articulates the organization's essential nature, its values and its work.

Pedagogical Inventory: A pedagogical inventory is a catalog of the specific educational practices in your program that address a single learning outcome. It can help you determine how and where the student learning outcome is being addressed in the curriculum.

Program Level Assessment: Program level assessment is aimed at understanding and improving student learning within a program.

Reliability: Reliability describes how well a particular assessment method provides consistent results, regardless of who uses the method or when it is used.

Rubric: A rubric is a guide for evaluating student work along certain dimensions. Within the context of program assessment the dimensions can be specific skills or aspects of a learning outcome. For each dimension there are concrete descriptors for different levels of performance.

Summative Assessment: Summative assessment is aimed at understanding and improving learning at the completion of students' studies. It involves gathering and interpreting evidence of student learning at the end of a program.

Validity: Validity describes how well a particular assessment method actually measures the learning outcome it is intended to measure.
- Leskes, A. (2002). Beyond confusion: An assessment glossary. Peer Review. Retrieved January 7, 2009, from http://www.aacu.org/peerreview/pr-sp02/pr-sp02reality.cfm
- Maki, P. L. (2004). Assessing for learning: Building a sustainable commitment across the institution. Sterling, VA: Stylus Publishing.
- Palomba, C. A. & Banta, T. W. (1999). Assessment essentials: Planning, implementing, and improving assessment in higher education. San Francisco: Jossey-Bass Publishers.
- Walvoord, B. E. (2004). Assessment clear and simple: A practical guide for institutions, departments, and general education. San Francisco: Jossey-Bass Publishers.
Assignment 1, Summarize the hydrologic cycle (200-300 words minimum). How important is this cycle to Earth?
Review pp. 402-410 on Global Climate Change. Respond to the following:
Summarize the climate changes that have occurred from the past to the present.
Predict what you think the Earth will look like into the 21st century. Consider the weather and changes in geological features.
Should be a minimum of 700 words.
How do I put the pages 402-410 on here without having to copy and paste? The first assignment is due on Thursday and the second is due on Friday.... Thank you!
The x coordinate on an (x, y) graph. The input of a function against which the output is plotted. y is the ordinate.
The sign convention is that measurements to the right from the axis of ordinates are positive, measurements to the left negative.
Abscissa = x value, Ordinate = y value
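In code terms (a trivial sketch; the variable names are just for illustration), the abscissa and ordinate of a point are simply its first and second coordinates:

```python
point = (3, -2)              # a point on an (x, y) graph
abscissa, ordinate = point   # unpack x and y
print(abscissa)  # 3  (x value; positive, so measured to the right of the axis)
print(ordinate)  # -2 (y value)
```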
See also: Ordinate.
Please note: This information was current at the time of publication. But medical information is always changing, and some information given here may be out of date. For regularly updated information on a variety of health topics, please visit familydoctor.org, the AAFP patient education Web site.
Information from Your Family Doctor
Urinary Tract Infections During Pregnancy
Am Fam Physician. 2004 Jan 1;69(1):157.
What is a urinary tract infection?
Urinary tract infections (UTIs) are caused by bacteria (germs). The most common kind of UTI is a bladder infection. Other kinds of UTIs are kidney infections and infections of the urethra. The urethra is the small tube that goes from the bladder to the outside of your body.
How do I know I have a UTI?
UTIs may cause different symptoms in different people. You may feel a burning sensation when you urinate. You may need to urinate more often, sometimes every 30 to 60 minutes. Or, you may feel like you need to go again right after you have just urinated. You may notice blood in your urine or a strong smell.
Sometimes germs grow in the urinary tract but you do not have any of these symptoms. This is called asymptomatic bacteriuria (say this: “a-simp-toe-mat-ik bac-tear-ee-you-ree-ah”). Your doctor can test to find out if you have this. Asymptomatic bacteriuria should be treated in pregnant women but does not have to be treated in most other women.
How will a UTI affect my baby?
If you have a UTI and it is not treated, it may lead to a kidney infection. Kidney infections may cause early labor. Fortunately, asymptomatic bacteriuria and bladder infections usually can be found and treated before the kidneys get infected. If your doctor treats a UTI early and properly, the UTI will not hurt your baby.
How are UTIs treated?
Your doctor will prescribe a medicine that is safe for you and the baby. You can help by drinking a lot of water to help flush the germs out of your urine.
How do I know if the medicine is not working?
If you have a fever (higher than 100.5°F), chills, pain in your lower stomach, nausea, vomiting, or pain in your sides, you should call your doctor. You should call your doctor if you have any contractions, or if, after taking medicine for three days, you still have a burning feeling when you urinate.
Can I keep this from happening again?
You can help prevent UTIs in several ways. You should take any medicines just as your doctor says to. Also, drink plenty of fluids every day (water is the best), and urinate often. Do not hold your urine for a long time.
This handout is provided to you by your family doctor and the American Academy of Family Physicians. Other health-related information is available from the AAFP online at http://familydoctor.org.
This information provides a general overview and may not apply to everyone. Talk to your family doctor to find out if this information applies to you and to get more information on this subject.
Copyright © 2004 by the American Academy of Family Physicians.
Early Remembrance Day poppies were made of cloth, often by disabled Great War veterans. © Parks Canada
Home Town Heroes - Why Wear a Poppy?
“In Flanders fields the poppies blow
Between the crosses, row on row.”
—Lieutenant John McCrae
Why do you wear a poppy? Maybe your parents, grandparents or teachers told you to wear one. “Put it on for two weeks before Remembrance Day,” they might have instructed. “Pin it over your heart.”
Prepare to feel a deeper connection to this tradition. From November 1 to November 11, 2015, Fort Rodd Hill will share photos and stories of ordinary home town heroes. Follow on the website, Facebook or Twitter.
Why do you really wear a poppy? To remember people who died in the World Wars and other conflicts. To remember Fred the farm boy from Lizard Lake, Lewis the twenty year-old boilermaker from Winnipeg and Charles, the 43 year-old single accountant from Victoria.
This poppy with a golden maple leaf was worn to Vimy Ridge in France during the 1936 pilgrimage. Surviving veterans and next-of-kin of those who were killed were invited to the official opening of the Vimy monument.
Reason 11 of 11
Hugh Hampton © Parks Canada
Hugh Hampton, a 16 year-old painter’s apprentice who lived on Hillside Avenue in Victoria, lied about his age when he volunteered for overseas service, saying he was 18.
Trained as a machine-gunner, he was cooking for members of his platoon behind the front line near Arras, France, when he was badly wounded by an enemy artillery shell. He died of his wounds the same day, August 9, 1918, aged 19.
Reason 10 of 11
Fred Kenney © Parks Canada
Fred Kenney, of Chicago, Illinois, came north to Toronto to join the Canadian Army before the U.S. entered the war, as did thousands of other Americans. A stonecutter by trade, he was killed during an attack on Rosiers, France, by his unit on August 9, 1918. The diary of the 4th Battalion notes that "Many gallant acts were performed by all ranks in the face of extremely severe machine-gun fire."
His sister in Victoria, B.C. was sent this official photograph of his grave, shortly after the war. The original wooden crosses were replaced with stone markers in the 1920s.
Reason 9 of 11
Don Bowes © Parks Canada
Don Bowes (at left) of Winnipeg loved to sneak out of camp to go dancing. His squadron of Royal Canadian Air Force Lancaster bombers was ordered to attack a target near Hamburg, Germany on March 31, 1945—just five weeks before the Second World War ended.
It was a daylight raid, and Bowes’ aircraft of 419 Squadron was late over the target; his fighter escorts had to leave due to low fuel. A small group of German jet fighters attacked, and Bowes’ aircraft—along with eight others—was shot down. Three of the seven aircrew parachuted, but Don Bowes, age 21, did not.
Robert Selby Graham © Parks Canada
Reason 8 of 11
21 year-old Robert Selby Graham was a store clerk in Winnipeg before joining the Army. He wound up serving in the 16th Canadian Scottish, of Victoria. Wounded during the battle for Vimy Ridge, he was evacuated to a hospital in England, where he died of his wounds.
His mother, as only next-of-kin, received this aluminum identity disc that her son was wearing on the day he was wounded.
Canadian war graves in the shadow of Vimy Ridge, about six months after the battle, 1917.
Major Conn Smythe © Parks Canada
Reason 7 of 11
Many well-known hockey players joined the military in the Second World War, some serving in a “Sportsmans’ Battalion” commanded by Major Conn Smythe.
Wounded in Normandy after D-Day, Smythe had also seen action in the Great War: as an artillery officer and as an air observer in the Royal Flying Corps.
He was awarded a Military Cross for gallantry at Vimy Ridge. After his aircraft was shot down, he became a prisoner of war—and wound up in solitary confinement after two
Charles Adams © Parks Canada
Reason 6 of 11
At 43, Charles Adams would have been like a father to many of the younger men in his company of the 2nd Canadian Mounted Rifles. An accountant, and a single man, he lived with his mother on Gorge Road in Victoria before the war.
He sent this hand-embroidered silk postcard home from France to a lady friend after carrying it into the trenches only a few weeks before he was killed in the big attack on Vimy Ridge, April 9th, 1917. He is buried in La Chaudiere Cemetery, in the shadow of the Ridge.
Adams postcard text: “France, Feb 2/17. Dear Mrs Hewlings,
I have carried this card for you for some time and have carried it up to this front in the trenches, where Fritz startles us once in a while, the desperate old rat that he is. What a grim effort he is going to make which will be the last.
With best wishes to you all for 1917, yours sincerely, C.E. Adams 227750 B Coy. 2nd C.M.R.”
["Coy" is Company, and "C.M.R." is Canadian Mounted Rifles.]
Winnipeg Grenadiers © Parks Canada
Reason 5 of 11
Happy to be alive, these soldiers of the Winnipeg Grenadiers smile on board a hospital ship in 1945 after being liberated from nearly four years in Japanese prison camps.
Sent in 1941 to defend Hong Kong, they were captured when that city fell on Christmas Day. During their captivity, they faced starvation, disease, torture and being used as slave labour in shipyards and mines.
Lewis Shipman © Parks Canada
Reason 4 of 11
Lewis Shipman was a 20 year-old boilermaker in Winnipeg when he joined the Army and became a medic, proudly wearing the red cross. He was serving on board the Canadian Hospital Ship Llandovery Castle, returning from Halifax to England when it was struck by a torpedo from a German submarine. 234 lives were lost, including 14 Canadian Nursing Sisters.
The attack on the unarmed, well-lit ship was considered one of the worst atrocities of the Great War; the captain and several officers were brought to trial in 1919.
The victims of the attack have no known graves, and are commemorated on the Halifax Memorial.
Frieda Hazenfratz © Parks Canada
Reason 3 of 11
Frieda Hazenfratz, a young Victoria woman, sewed three cigarettes into this letter to her friend, Frank Dunn, serving in France. He never received them. Dunn, aged 27, was killed on September 23, 1917, near Ypres, Belgium. The letter was returned to her with the stark note “Killed in Action” pencilled on the envelope.
Dunn lies buried alongside many of his men, just outside Wijtschate, in Belgium (known to the troops as “Whitesheet”).
Young Cliff Logan of Victoria © Parks Canada
Reason 2 of 11
Young Cliff Logan of Victoria joined the Royal Canadian Navy when the Second World War began. It must have been a great adventure, being posted to the destroyer HMCS Fraser in Esquimalt and sailing across to Europe.
During the confused and hurried evacuation of British troops from France in June of 1940, HMCS Fraser was accidentally rammed by British cruiser HMS Calcutta. Cliff and 44 crewmates were lost in the cold waters of the English Channel. He was 21 years old, and has no known grave.
Fred Giles© Parks Canada
Reason 1 of 11
Fred Giles was a farm boy from Lizzard Lake, Saskatchewan. He joined up when he was 17, and found himself in Belgium in October, 1917 for the Third Battle of Ypres—better known as Passchendaele.
During the attack across a sea of mud, Fred disappeared. He has no known grave, and 18 year-old Fred's name appears on the Menin Gate Memorial for the missing in Ypres.
John Dewey was a famous American philosopher, psychologist, and educational reformer. He was the founder of functional psychology and one of the earliest developers of the philosophy of pragmatism, and his ideas made a significant impact on social and educational reform. Beyond his primary academic publications, he wrote on many topics including experience, nature, art, logic, inquiry, democracy, and ethics. He served as a major inspiration for various allied movements that shaped the thought of the 20th century, including empiricism, humanism, naturalism, and contextualism. He ranks among the foremost thinkers of his age on the subjects of pedagogy, philosophy of mind, epistemology, logic, philosophy of science, and social and political theory. As one of the leading psychological and philosophical figures of his time, he was elected president of the American Psychological Association in 1899 and president of the American Philosophical Association in 1905. Dewey published more than 700 articles in 140 journals and approximately 40 books in his lifetime.
John Dewey Childhood & Early Life
John Dewey was born on October 20, 1859 in Burlington, Vermont, the third of four sons of Archibald Sprague Dewey and Lucina Artemesia Rich; the eldest sibling died in infancy. John and his two surviving brothers attended public school and then enrolled at the University of Vermont in Burlington. There, Dewey was exposed to evolutionary theory through the teaching of G.H. Perkins, and he studied Lessons in Elementary Physiology by the famous English evolutionist T.H. Huxley. His thinking was strongly influenced by the theory of natural selection, which encouraged him to focus on the interaction between man and his environment when considering questions of psychology. His philosophy teacher, H.A.P. Torrey, remained a close acquaintance and made a critical contribution to his philosophical development.
After graduating in 1879, Dewey taught high school for two years. During this time he was struck by the idea of pursuing a career in philosophy, and he sent a philosophical essay to W.T. Harris, editor of the Journal of Speculative Philosophy. Harris's acceptance of the essay gave him the encouragement he needed, and he traveled to Baltimore to enroll at Johns Hopkins University. At Johns Hopkins, Dewey was influenced by two powerful intellects: George Sylvester Morris and G. Stanley Hall. Morris, a German-trained Hegelian philosopher, introduced him to the organic model of nature characteristic of German idealism. Hall, an American experimental psychologist, impressed upon him the power of scientific methodology as applied to the human sciences. Dewey received his doctorate in 1884 and accepted a teaching post at the University of Michigan, where he worked for the next ten years.
While teaching in Michigan, Dewey published his first two books, "Psychology" (1887) and "Leibniz's New Essays Concerning the Human Understanding" (1888). Both books bore the imprint of his early commitment to Hegelian idealism. At Michigan, he met his future philosophical collaborator and good friend, James Hayden Tufts. Dewey joined the newly founded University of Chicago in 1894. During this period, he transformed his early idealism into a new, empirically based theory of knowledge by associating himself with the newly emerging pragmatic philosophy. He set out this change in four essays collectively entitled "Thought and its Subject-Matter", which he published along with other essays by colleagues at Chicago in the book "Studies in Logical Theory". He also founded the University of Chicago Laboratory Schools, where he had the opportunity to apply his developing ideas to pedagogic method. His experiences there provided the material for his first major work on education, "The School and Social Progress" (1899). The same year, he was elected president of the American Psychological Association. Following disagreements with the administration, he resigned from the University of Chicago, but his philosophical reputation quickly won him a place in the Department of Philosophy at Columbia University. From 1904 until his retirement in 1930, he remained a professor at Columbia, and he became president of the American Philosophical Association in 1905.
During his first ten years at Columbia, Dewey wrote numerous articles and essays on his theory of knowledge and metaphysics, later published in the books "The Influence of Darwin on Philosophy and Other Essays in Contemporary Thought" (1910) and "Essays in Experimental Logic" (1916). His interest in educational theory deepened during his years at Teachers College, Columbia, leading to the publication of "How We Think" (1910) and the very important "Democracy and Education" (1916). Besides being a leading philosopher, he became famous for his comments on contemporary issues, frequently published in esteemed magazines like The New Republic and The Nation. Among his important writings during this period were "Reconstruction in Philosophy" (1920), "Human Nature and Conduct" (1922), "Experience and Nature" (1925), "The Public and its Problems" (1927), and "The Quest for Certainty" (1929). Dewey retired from active teaching in 1930 but did not curtail his activities as a philosopher and public figure. He continued to write vigorously into his last years, producing several more important philosophical books, including "Logic: The Theory of Inquiry" (1938), "Art as Experience" (1934), "A Common Faith" (1934), "Freedom and Culture" (1939), and "Theory of Valuation" (1939).
John Dewey was married twice in his lifetime. With his first wife, Alice Chipman he had six children. He married again to Roberta Lowitz Grant.
John Dewey died on June 1, 1952 in New York City at the age of 92 years. He was buried at Dewey Memorial, Burlington. | <urn:uuid:0c62d127-eceb-4f0b-89f3-d057102db58a> | CC-MAIN-2016-26 | http://www.thefamouspeople.com/profiles/john-dewey-144.php | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397567.28/warc/CC-MAIN-20160624154957-00018-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.97947 | 1,331 | 3.328125 | 3 |
Aristotle Biography 9: Corpus
The life and time of the Greek Philosopher
The corpus, Aristotle's texts
Although Aristotle wrote numerous texts, as Diogenes Laertius and others list them, a limited part of them remains - and from only one source, the edition made by Andronicus of Rhodes, 11th president of the Peripatetic school, about 70 BC, published sometime 43-20 BC. It consists only of the writing Aristotle called esoteric, interior texts for use at the school - either made by him, as notes for lectures to have, or made by pupils after listening to them. None of his exoteric writing, for the public outside of the school, remains - other than minimal fragments. In the Hellenistic period, though, the overwhelming majority of references to Aristotle were to his dialogues, the exoteric works.
Regarding the adventure of those books that ended up in the hands of Andonicus, some 250 years after the death of Aristotle, had gone through an adventure, the earliest source is that of Strabo (c.63 BC-c.24 AD) in his Geography, written early in the 1st century AD. Here is the Strabo text about the matter, in its entirety:
From Scepsis came the Socratic philosophers Erastus and Coriscus and Neleus the son of Coriscus, this last a man who not only was a pupil of Aristotle and Theophrastus, but also inherited the library of Theophrastus, which included that of Aristotle. At any rate, Aristotle bequeathed his own library to Theophrastus, to whom he also left his school; and he is the first man, so far as I know, to have collected books and to have taught the kings in Egypt how to arrange a library. Theophrastus bequeathed it to Neleus; and Neleus took it to Scepsis and bequeathed it to his heirs, ordinary people, who kept the books locked up and not even carefully stored. But when they heard bow zealously the Attalic kings to whom the city was subject were searching for books to build up the library in Pergamum, they hid their books underground in a kind of trench. But much later, when the books had been damaged by moisture and moths, their descendants sold them to Apellicon of Teos for a large sum of money, both the books of Aristotle and those of Theophrastus. But Apellicon was a bibliophile rather than a philosopher; and therefore, seeking a restoration of the parts that had been eaten through, he made new copies of the text, filling up the gaps incorrectly, and published the books full of errors. The result was that the earlier school of Peripatetics who came after Theophrastus had no books at all, with the exception of only a few, mostly exoteric works, and were therefore able to philosophize about nothing in a practical way, but only to talk bombast about commonplace propositions, whereas the later school, from the time the books in question appeared, though better able to philosophise and Aristotelise, were forced to call most of their statements probabilities, because of the large number of errors [in the available texts of Aristotle]. 
Rome also contributed much to this; for, immediately after the death of Apellicon, Sulla, who had captured Athens, carried off Apellicon's library to Rome, where Tyrannion the grammarian, who was fond of Aristotle, got it in his hands by paying court to the librarian, as did also certain booksellers who used bad copyists and would not collate the texts - a thing that also takes place in the case of the other books that are copied for selling, both here [Rome] and at Alexandria. However, this is enough about these men.
An approximate dating of the events, would make the books leave the Lyceum together with Neleus in the mid-3rd century BC, hid in the trench in the earlier part of the 2nd century, to stay there for upto almost a hundred years. Plutarch does in Parallell Lives, written in 99-107 AD, give additional information about what happened to the books from the time of Sylla:
Having set out from Ephesus with the whole navy, he [Sylla] came the third day to anchor in the Piraeus. Here he was initiated in the mysteries, and seized for his use the library of Apellicon the Teian, in which were most of the works of Theophrastus and Aristotle, then not in general circulation. When the whole was afterwards conveyed to Rome, there, it is said, the greater part of the collection passed through the hands of Tyrannion the grammarian, and that Andronicus the Rhodian, having through his means the command of numerous copies, made the treatises public, and drew up the catalogues that are now current. The elder Peripatetics appear themselves, indeed, to have been accomplished and learned men, but of the writings of Aristotle and Theophrastus they had no large or exact knowledge, because Theophrastus bequeathing his books to the heir of Neleus of Scepsis, they came into careless and illiterate hands.
The Andronican edition has got the following order by subjects, much repeated through time, as if the order in which the books were written, but probably they were not:
Physics and natural science
Psychology and biology (also Metaphysics)
Aesthetics (Rhetoric and Poetics)
The oldest list of Aristotle's works is thought to have originated by Ariston of Ceos, who headed the school, around 200 BC. Diogenes Laertius of the 3rd century AD gives a substantial list covering several pages, as does the Vita Menagiana by Hesychius Milesius (6th century AD), and Life of Ptolemy. None of them match precisely.
The Traditional order of Aristotle's remaining works, according to the edition of Immanuel Bekker, Berlin 1831. This is not at all necessarily the order in which the texts were written.
Organon (6 logical treatises)
On Coming-to-be and Passing-away
On the Cosmos
On the Heavens I-IV
On the Soul I-III
Historia Animalium I-X
Parts of Animals I-IV
On Movement of Animals
Progression of Animals
Generation of Animals I-V
Minor Works: On Colours, On Things Heard, Physiognomics, On Plants I-II, On Marvellous Things Heard, Mechanical Problems, On Invisible Lines, The Situations and Names of Winds, On Melissus, Xenophanes and Gorgias
Rhetoric to Alexander
Magna Moralia I-II
Nicomachean Ethics I-X
Eudemian Ethics I-VIII
On Virtyues and Vices
"Art" of Rhetoric
His texts are also often sorted according to categories, as in the following way:
poems (three fragments survive)
letters (fragments, some not genuine)
exoteric writings (Eudemus, Protrepticus, On Philosophy, On the Good, On the Ideas, only remaining in some extracts or summaries)
Memoranda and collections of material
(Over 200 titles, whereof 158 constitutions of Greek states, of which Constitution of the Athenians was rediscovered in 1890, also a lost record of dramtic festivals called Didascaliae, and surviving Problems, Historia Animalium)
Scientific and philosophical treatises (the surviving corpus, sorted into:)
© Stefan Stenudd 2003, 2006.
Logical works (The Organon)
Works on natural history
Philosophical works (containing Poetics)
The Greek Philosophers
Cosmos of the Ancients - the Book
The material on this website about the Greek philosophers and what they thought about cosmology, myth, and the gods, is now a book. It can be ordered at the Internet bookstores - printed or as a Kindle ebook. Both contain the footnotes with additional explanations as well as literary sources. Click the image to see the book at Amazon.
All about the peaceful Japanese martial art.
A system of jo (staff) and ken (sword) exercises for aikido students.
My paintings, drawings, photos and video art.
My writing - fiction as well as non-fiction.
What the Greek philosophers thought about the cosmos, the gods and the myths.
Myth, its psychology and archetypes.
Essays on various intriguing topics.
My reviews of film, TV, and music.
My Other Websites
Myths in general and myths of creation in particular.
Qi (also spelled chi or ki) explained, with exercises to increase it.
An encyclopedia of life energy concepts around the world.
The wisdom of Taoism and the Tao Te Ching, its primary source.
The ancient Chinese system of divination and free online reading.
Tarot card meanings in divination and a free online spread.
The complete horoscope and how to read it.
Zodiac sign astrology, explaining each sign and its meaning.
My Facebook page.
My Twitter account.
My Amazon Author page.
My YouTube account with aikido videos.
My YouTube account Aravadia, with art videos and poetry.
My Swedish website.
Click the header to get to the webpage where I present all my books in English. Click an image below to go to that book's page on Amazon.
I'm a Swedish writer
of fiction and non-fiction books in both Swedish and English. I'm also an artist
, an historian of ideas
and a 7 dan Aikikai Shihan aikido instructor
. Click the header to read my full bio. | <urn:uuid:ecf61221-bfb1-493d-93b5-2cf45a07732c> | CC-MAIN-2016-26 | http://www.stenudd.com/aristotle/aristotle-life-09-corpus.htm | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395548.53/warc/CC-MAIN-20160624154955-00147-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.955048 | 2,082 | 2.921875 | 3 |
Perhaps the best-loved gems of all time, pearls—both natural and modern cultured pearls—occur in a wide variety of colors. The most familiar colors are white and cream (a light yellowish brown). Black, gray, and silver are also fairly common, but the palette of pearl colors extends to every hue. The main color, or bodycolor, is often modified by additional colors called overtones, which are typically pink (sometimes called rosé), green, purple, or blue. Some pearls also show the iridescent phenomenon known as orient.
This newly opened akoya pearl oyster reveals the cultured pearl that grew inside its gonad. This oyster’s scientific name is pinctada fucata (martensii).
Cultured pearls are popular for bead necklaces and bracelets, or mounted in solitaires, pairs, or clusters for use in earrings, rings, and pendants. Larger pearls with unusual shapes are popular with creative jewelry designers.
This multicolored necklace combines the beauty of Tahitian and South Sea cultured pearls. The scientific names of the oysters that produced them are, respectively, Pinctada margaritifera and Pinctada maxima. - Courtesy Frank Mastoloni & Sons, Inc.
Pearl—natural or cultured—is a US birthstone for June, together with alexandrite and moonstone.
Natural pearls form in the bodies, or mantle tissue, of certain mollusks, usually around a microscopic irritant, and always without human help of any kind.
The growth of cultured pearls requires human intervention and care. Today, most of the mollusks used in the culturing process are raised specifically for that purpose, although some wild mollusks are still collected and used.
This group of cultured pearls displays some of the exotic colors pearls can exhibit. - Blaire Beavers, courtesy Takayas Mizuno
To begin the process, a skilled technician takes mantle tissue from a sacrificed mollusk of the same species and inserts a shell bead along with a small piece of mantle tissue into a host mollusk’s gonad, or several pieces of mantle tissue without beads into a host mollusk’s mantle. If a bead is used, the mantle tissue grows and forms a sac around it and secretes nacre inward and onto the bead to eventually form a cultured pearl. If no bead is used, nacre forms around the individual implanted mantle tissue pieces. Workers tend the mollusks until the cultured pearls are harvested.
There are four major types of cultured whole pearls:
Akoya—This type is most familiar to many jewelry customers. Japan and China both produce saltwater akoya cultured pearls.
South Sea—Australia, Indonesia, and the Philippines are leading sources of these saltwater cultured pearls.
Tahitian—Cultivated primarily around the islands of French Polynesia (the most familiar of these is Tahiti), these saltwater cultured pearls usually range from white to black.
Freshwater—These are usually cultured in freshwater lakes and ponds. They’re produced in a wide range of sizes, shapes, and colors. China and the US are the leading sources.
Cultured pearls from Australia, Indonesia, the Philippines, and Myanmar, are grown in the Pinctada Maxima mollusk. This shell one is called gold-lipped because of the color of the outer rim of its mother-of-pearl layer.
The black-lipped mollusk can produce a variety of cultured pearl colors. The color of the mother-of-pearl layer is often related to the color of the resulting cultured pearl’s nacre. - Courtesy A & Z Pearls and Tasaki Shinju Co
The images in this chart represent typical ranges of size, shape, color, luster, surface, and nacre quality of akoya cultured pearls.
The images in this chart represent typical ranges of size, shape, color, luster, surface, and nacre quality of South Sea cultured pearls.
The images in this chart represent typical ranges of size, shape, color, luster, surface, and nacre quality of Tahitian cultured pearls.
The images in this chart represent typical ranges of size, shape, color, luster, surface, and nacre quality of Chinese freshwater cultured pearls cultured pearls. | <urn:uuid:89686eea-f07a-4d44-b162-18fff8e49ea2> | CC-MAIN-2016-26 | http://www.gia.edu/pearl-description | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397842.93/warc/CC-MAIN-20160624154957-00103-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.920625 | 932 | 3.25 | 3 |
Two ways local governments can engage in green planning are:
- Create a county or city-wide conservation plan.
- Include a “habitat conservation” section in existing planning documents.
Creating a conservation plan involves six basic steps:
- Establish a conservation vision and goals for your community.
- Identify and describe the status of important habitats.
- Develop conservation strategies.
- Identify mechanisms to implement these conservation strategies.
- Write the plan.
- Implement the plan and monitor progress.
Most communities already have plans that guide land use, such as comprehensive plans, watershed management plans, park and open space plans, transportation plans, and growth management plans. Writing a “habitat conservation” section for these existing plans may be easier than creating a new plan.
Local Conservation Plans
The Chatham County, North Carolina, Comprehensive Conservation Plan - was funded by external grants andwas created by a partnership of scientific experts, community members, the planning department and elected officials.
The Jasper County, South Carolina, Natural Resources Conservation Plan - is an example of an effective conservation plan adopted by a local government.
Village of Schaumburg, Illinois Biodiversity Recovery Plan - Part of the Village of Schaumburg’s Comprehensive Plan, the Biodiversity Recovery Plan guides the community’s efforts to preserve, restore, and maintain biodiversity within the community.
Land Use and Comprehensive Plans
City of Raleigh Comprehensive Plan - Section C.6, 'Wildlife Habitat Protection and Preservation', in the Environmental Protection section of Raleigh’s Comprehensive Plan - outlines policies and actions to guide the City’s future efforts in conserving and maintaining “priority” wildlife habitat and species identified in the North Carolina Wildlife Action Plan.
Orange County 2030 Comprehensive Plan - Chapter 6 of the Orange County Comprehensive Plan provides examples of goals and objectives that address conservation of priority natural areas and wildlife habitat.
Town of Navassa, North Carolina, CAMA Land Use Plan - provides an example for a rural community near a major city.
Randolph County, N.C., Growth Management Plan - emphasizes a vision and practical goals to conserve natural heritage through cluster development. It also lays out techniques to allow for higher density development, once public water and sewer are available on previously developed sites.
The Westmoreland County, Pennsylvania, “New Horizons: A County-wide Greenways and Blueways Network,” plan uses habitat conservation data to identify a network of large habitat hubs and corridors. Strategies, land use methods and funding mechanisms to conserve the network are discussed.
McHenry County, Illinois, Long Range Transportation Plan - is based on a Green Infrastructure plan and maps that include important wildlife habitat areas.
Arizona Department of Transportation Wildlife Linkages Assessment - was created in consultation with wildlife professionals to identify key areas for wildlife underpasses and to minimize road construction.
See the Green Growth Toolbox Handbook, Green Planning Section, for more example plans and links. | <urn:uuid:ad1fe877-f1eb-4988-ad68-a068055c6c46> | CC-MAIN-2016-26 | http://www.ncwildlife.org/Conserving/Programs/GreenGrowthToolbox/GreenPlanning.aspx | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397744.64/warc/CC-MAIN-20160624154957-00131-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.832271 | 614 | 3.609375 | 4 |
The first session ever at Shodor of Medicine and Biosciences began with a good first day. Bob2 introduced what SUCCEED meant and then overviewed what the students would do throughout the course of the week. Next, there was a get-to-know-you session where the students introduced themselves.
Then, the students went outside and were shown Bob2's ¨stupid rope trick, ¨ that actually did have a purpose, aside from giving Bob a chance to show off. The students were asked how to get better observations of what was actually happening in the trick. After the students came in from the heat, Bob2 showed the students how to tie a shoelace, and then he showed them how to represent this action mathematically. The students then discussed what different computers and computer platforms can do differently and what scientists can use each for. Bob then held a short lesson in learning STELLA® to get the students used to the modeling software they would be using throughout the week. To test the studentsâ skills, the class, together, modeled a basic population graph. Then the students were required to model a predator-prey relation ship using Lynx as the predator and Hares as the prey.
In the afternoon Bob explained in detail ODEs, or Ordinary Differential Equations. The class also learned how they could model these equations in STELLA®. Next, the class was shown a model of how a flu epidemic will spread in a population by Erin Bell a graduate student in epidemiology at UNC-CH. Then for the rest of the day the students were given the task to model how an Ebola outbreak would affect a population. It was enough to keep the students busy. | <urn:uuid:abca413f-c8db-44de-9332-77a756a63fcc> | CC-MAIN-2016-26 | http://www.shodor.org/succeed/workshops/archive1999/summer/medbio/overviews/ | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398216.41/warc/CC-MAIN-20160624154958-00093-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.978636 | 356 | 2.671875 | 3 |
Geography field trips have been anticipated events for 75 summers at North Texas. According to the Campus Chat, geography instructor Mary Jo Cowling arranged the first trip "ever to be organized and conducted by any department" in the history of the university in June 1932. She led students to Carlsbad Caverns in New Mexico for a three-day outing in their private automobiles. The trip proved so popular that Cowling scheduled another during July in a chartered bus.
Trips to Carlsbad Caverns continued for decades and the itinerary was expanded to include many other destinations. According to a history of geography at UNT (pdf) by Donald Lyons, associate professor of geography, and Bill Forbes ('02 M.A., '02 M.S., '04 Ph.D.), Cowling listed 28 separate field trips for her students by 1936.
After Cowling's death in 1943, professors Robert Miller and Lee Knox expanded upon her legacy. They created a six-credit field trip course in the late 1940s, alternating each year between the eastern and western parts of the United States. Their month-long or longer field trips included destinations such as the Grand Canyon, the California coast, Yellowstone National Park, Washington, D.C., New York and even Canada.
The trips were a bargain by today's standards. Students on the 1956 tour, who visited 22 states in 24 days, paid $220 for transportation, tolls, taxes, lodging and tips. Meals, figured at $2 per day, were extra.
Geography students toured many Texas destinations, too. On summer trips with faculty member Walter Hansen in the '50s and '60s, they witnessed farm conservation in Pilot Point and the effects of erosion in Wise County. A popular destination was the Lone Star Steel Co. mill in Lone Star, referred to in several Chat articles as "one of the great shows in East Texas." The highlight was observing "Flossie Belle," the blast furnace, in action.
More recently the geography department has offered field courses to the British Isles, Ghana and Spain and archaeological trips to the southwest United States. Students taking the course "Human Impacts on the Environment" this May visited Mexico, covering such topics as the environmental effects of decorative flower-growing, water treatment and urban planning. And breaking from the
summer tradition, a field course to Nepal
is planned for this winter. | <urn:uuid:835020c7-a175-4148-bbca-3df05d225ef5> | CC-MAIN-2016-26 | http://www.unt.edu/northtexan/archives/s07/timetracks.htm | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393533.44/warc/CC-MAIN-20160624154953-00037-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.967919 | 488 | 3.140625 | 3 |
In which Scrabble dictionary does LACTOSE exist?
Definitions of LACTOSE in dictionaries:
- noun -
a sugar comprising one glucose molecule linked to a galactose molecule
A disaccharide, C 12H22O11, found in milk, that may be hydrolyzed to yield glucose and galactose.
A white crystalline substance obtained from whey and used in infant foods, bakery products, confections, and pharmaceuticals as a diluent and excipient.
There are 7 letters in LACTOSE:
A C E L O S T
Scrabble words that can be created with an extra letter added to LACTOSE
All anagrams that could be made from letters of word LACTOSE plus a
Scrabble words that can be created with letters from word LACTOSE
7 letter words
6 letter words
5 letter words
4 letter words
3 letter words
2 letter words
Images for LACTOSELoading...
SCRABBLE is the registered trademark of Hasbro and J.W. Spear & Sons Limited. Our scrabble word finder and scrabble cheat word builder is not associated with the Scrabble brand - we merely provide help for players of the official Scrabble game. All intellectual property rights to the game are owned by respective owners in the U.S.A and Canada and the rest of the world. Anagrammer.com is not affiliated with Scrabble. This site is an educational tool and resource for Scrabble & Words With Friends players. | <urn:uuid:7ebe1a73-2f5b-4ea7-8a2f-119e3609f108> | CC-MAIN-2016-26 | http://www.anagrammer.com/scrabble/lactose | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396029.85/warc/CC-MAIN-20160624154956-00041-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.894541 | 331 | 3.015625 | 3 |
This is a quick follow-up to my other post on fat cells, which as it happens, isn’t the only obesity-related story out today. Another paper found a common genetic variant that increases the risk of obesity in its carriers.
A huge team of researchers scoured the genomes of almost 17,000 European people for genetic variations that are linked to obesity. Until now, only one has been found and it sits within a gene called FTO. This new study confirmed that FTO variants have the strongest association with obesity, but in the runner-up position is another variant near a gene called melanocortin-4 receptor or MC4R.
Given that this new variant isn’t actually within the gene itself, it most likely affects the risk of obesity by changing how MC4R is controlled. This isn’t the first time that MC4R has been linked to obesity. It’s involved in controlling appetite and metabolism, and people with rarer mutations in the gene itself become severely obese at an early age. The new variant has much less dramatic effects, but it’s also much more common. In fact, 81% of people carry at least one risky variant at either FTO or MC4R.
Despite their widespread nature, the effects of these risky variants are very modest and their location can only be revealed by very large collaborative studies like this one. For example, each copy of the new MC4R-associated variant typically increases a person’s BMI by a mere 0.2 units, although the researchers found that each copy also increases the risk of being severely obese by 31%. It’s almost certain that a whole slew of variants like these influence the large variations in bodyweight in modern populations.
Whenever an obesity-related gene or variant is identified, the reactions from both the press and the public are often more interesting than the results themselves. I’ve written about this in a previous post on FTO, and since many of the same lessons are applicable to this new study, I’m going to partially repost what I wrote at the time.
To many people, the discovery of an “obesity gene” is yet another way for obese people to brush off the burden of responsibility from their shoulders. After all, they say, obesity is just a matter of eating healthily and exercising regularly. To go beyond this is to over-complicate a simple lack of willpower. And where exactly was this gene in post-war Britain when obesity levels were much lower?
This attitude couldn’t be more wrong or more unhelpful, and many of these complaints misunderstand the nature of obesity-related genes. It is clear that obesity has some genetic basis, but no researcher worth their salt would imagine that a single gene dictates whether a person becomes obese or not. Obesity-related genes are likely to work through much more delicate ways, by affecting our metabolism or the way we lay down fat.
But subtlest of all are genes that affect our very behaviour. These inherited influences could make individuals more responsive to the smells or sights of food. They could make the brain less responsive to signals from the gut that say, “I’m full.” They could give people a strong innate preference for the chemicals that give fatty foods their taste.
These subtle effects only become apparent when food is plentiful. When supplies are more modest, say post-war Britain, variations in such genes across a population wouldn’t have much effect. But in the 21st century, a combo of widely available, cheap junk food and sedentary jobs or pastimes make it easy to eat lots and do little. In this environment, small genetic differences that affect how people react to food or activity can have massive effects.
This is a case of nature via nurture; of genes and environment interacting with each other to affect our behaviour. Obese people are likely to carry around a large range of genetic variants that alter their behaviour in ways that leave them very vulnerable in a world where obesity is just around the corner.
Obesity then, is a complex disease with many underlying causes, some of which are genetic. And because of this, throwing words like ‘self-discipline’ and ‘laziness’ about with cavalier abandon does nothing for the obesity debate or for obese people themselves.
Is the combination of genes and environment an excuse for obesity? Hardly. But it does go some way toward explaining it, which is more than an accusation of faulty willpower will do. Scientists hope that understanding the genetics of obesity may herald new treatments, but this is a hope for the future. For the present, very little actually changes for obese and overweight people.
To overcome the effects of genes and environment, they must still make lifestyle choices. The old maxim of “eat well, be active” still applies. The real benefit of these discoveries would be a change in the attitudes of everyone else, from damning to supportive and from accusatory to understanding.
Now enough lecturing, here’s a little game. I’m writing this post before the story’s embargo breaks and at this minute, other journalists are probably doing the same thing. The story may not get covered at all but if it is, there’s a very good chance that the medley of headlines will betray a fundamental misunderstanding of the nature of genetics. So, let’s play a headline bingo. If the story does make it into tomorrow’s papers, see if you can spot any of the following phrases:
- Obesity being described as “in the genes”
- “The gene that makes you fat” or anything equally deterministic
- “Scientists discover fat gene” or anything suggesting this is the first time this has happened
- Any use of “It’s official!” or “Proof” or anything that suggests that replication is for wusses
Reference: Loos, R.J., et al. (2008). Common variants near MC4R are associated with fat mass, weight and risk of obesity. Nature Genetics DOI: 10.1038/ng.140 | <urn:uuid:fdaa05c7-ec08-4889-b022-6bd06f8dfc9a> | CC-MAIN-2016-26 | http://blogs.discovermagazine.com/notrocketscience/2008/05/04/making-sense-of-obesity-genes/ | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783408840.13/warc/CC-MAIN-20160624155008-00097-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.961144 | 1,276 | 2.84375 | 3 |
The Ten Sephiroth on the Kabbalistic Tree of Life
The use of gematria was widespread in esoteric Judaism at the time the New Testament was written. Although the Kabbalah did not exist during the first century, the esoteric influences that would later coalesce into this practice were already fully developed and widely disseminated in Judaic thought. We can find the use of gematria in various other literary works dating back to well before the first century. Esoteric Judaism believed that numbers held a mystical symbolic value, and because each letter of the Hebrew alphabet was also used as a numeral, each word in the Hebrew language had a numerical value. Jewish mystics believed that words with equal numerical values were purposely associated by God at the creation of the world, and that spiritual insight into the properties of names and words could be found by exploring these connections.
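The letter-for-number principle is simple arithmetic. As a minimal illustrative sketch (the letter values below follow the standard assignments; the five final letter forms are omitted for brevity), a word's gematria value can be computed by summing the values of its letters:

```python
# Standard gematria values of the 22 Hebrew letters
# (final letter forms omitted for brevity).
GEMATRIA = {
    'א': 1, 'ב': 2, 'ג': 3, 'ד': 4, 'ה': 5, 'ו': 6, 'ז': 7, 'ח': 8,
    'ט': 9, 'י': 10, 'כ': 20, 'ל': 30, 'מ': 40, 'נ': 50, 'ס': 60,
    'ע': 70, 'פ': 80, 'צ': 90, 'ק': 100, 'ר': 200, 'ש': 300, 'ת': 400,
}

def gematria_value(word: str) -> int:
    """Sum the standard numerical values of the Hebrew letters in a word."""
    return sum(GEMATRIA.get(letter, 0) for letter in word)

# The name David (דוד) - dalet (4) + waw (6) + dalet (4) - totals 14.
print(gematria_value('דוד'))  # 14

# A threefold waw (ו, value 6) yields the sequence 6-6-6.
print([GEMATRIA['ו']] * 3)    # [6, 6, 6]
```

The same summing procedure underlies every gematria equivalence discussed in this article.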
As can be seen from the table above, the Hebrew letter waw is transliterated into English as the letter w, and has the numerical value of 6 - so www in Hebrew has the numerical equivalent of three sixes - 6-6-6.
The most famous use of gematria is of course the number of the Beast in the book of Revelation. The fact that the number of the Beast, 666, was also the number of a man's name (most likely the Roman Emperor Nero in the preterist layer of interpretation) makes it explicitly clear that gematria should be used to decipher this riddle. The threefold repetition of sixes in this number was of primary symbolic value for the author of the Apocalypse. Falling short of the perfect divine number seven (which is to be found repeated throughout Revelation, both in the central narrative itself and also hidden in the sub-text), the number six represented imperfection - the separation between the Creator and the creation itself. The fact that it is repeated three times in this number connotes the use of the Hebrew superlative. Hebrew does not form the superlative in the same way as most other languages: to emphasise that the superlative is intended, the word is repeated three times. So the fact that the word "holy" is repeated three times in Isaiah 6:3 indicates to the reader that God is the holiest. The number six being repeated three times therefore suggests that the Beast is the superlative of imperfection - the exact opposite of the Holy Trinity, which is represented by three sevens.
To the semitic mind of the author of the Apocalypse three sixes would be represented by a threefold repetition of the letter waw. And a threefold repetition of the Hebrew letter waw is transliterated into the modern Latin alphabet as www - an acronym which is used to represent the medium through which all electronic commerce is now conducted - the worldwide web (see the post Hebrew 666 for more details).
The New Testament contains some other examples of gematria other than the number of the Beast. Pope Benedict XVI for instance explains that Matthew's genelogy of Jesus is seperated into three groups of fourteen generations because 14 was the numerical value of the name of King David (Jesus of Nazareth p9). In another mysterious use of number symbolism which has been widely speculated upon number is that of the 153 fishes in the net described in John 21:
So Simon Peter went aboard and hauled the net ashore, full of large fish, 153 of them. And although there were so many, the net was not torn. (John 21:11)
John Michell asks why such a precise number would be included, and argues that it has a special signifcance for the author of the fourth Gospel. Michell notes that 153 is a triangular number - the sum of all numbers from 1-17, and that the numerial value of both the Greek words to diktuon ("the net") and ichthyes ("fish") in isopsephy (the Greek version of gematria) is 153 (How the World is Made pp97-98).
The use of gematria became common in the occult largely through the influences of the Kabbalah on Western esotericism. Although many Kabbalah adherents believe that the Zohar, the foundational work of Kabbalistic teaching, was written during the second century AD by Rabbi Shimon bar Yohai, it is more likely to have been written by Moses De Leon, who claimed to have discovered the book in the 13th century. After the publication of the Zohar, the Kabbalah became widely systemised, and began to attract thinkers from outside of Judaism. The renaissance philosopher and occultist Pico della Mirandola (1463-1494) is thought to be the first to introduce Kabbalistic teachings to Western Hermetic thought, leading to the development of the so-called "Christian Kabbalah" which was to profoundly influence secret societies such as the Rosicrucians and Freemasons.
Many of the rituals of Freemasonry are based upon the principles of sacred geometry and the dimensions of the Temple of Solomon - influences which are derived from the Kabbalah. The infamous 19th century mason Albert Pike called the Kabbalah the "key of the occult sciences" and stated that this key turns on the secret science of numbers - given rise to the masonic motto omnia in numeris sita sunt - "Everything lies veiled in numbers" (Morals and Dogma pp626-642).
See also the related articles:
The Mark of the Beast?
NFC Technology and the Mark of the Beast | <urn:uuid:0b7396d8-a6a7-41a2-94c0-63fbb5226d32> | CC-MAIN-2016-26 | http://unveilingtheapocalypse.blogspot.com/2011/11/gematria-and-number-of-beast.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397864.87/warc/CC-MAIN-20160624154957-00091-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.956852 | 1,154 | 2.96875 | 3 |
Telescopes show us where we are in the universe, and also reveal our relationship to the cosmos.
Water is not just a refreshing beverage, it has other useful applications too.
Our senses allow us to collect data about the world around us, but they are also limited.
The chemistry of carbon and how it contributes to the formation of life.
Neil deGrasse Tyson is the same age as NASA, and both have come a long way in 50 years.
What would aliens conclude about intelligent life on our planet if they picked up our broadcast signals?
Some of the most challenging problems in physics are rooted in the most beautiful-to-behold sights.
Neil Tyson describes the uproar in 2004 after NASA announced it had cut funding to repair the Hubble.
Many people fear information that might contain bad news beyond their control, but such fear forfeits the opportunity to solve the problem. Here's how scientists approach bad news and uncertainty.
According to Neil deGrasse Tyson,
dark matter and
dark energy could just as well be called
without a clue A and
without a clue B. | <urn:uuid:9536b1fe-89cd-4ebe-8300-24cb39782b9d> | CC-MAIN-2016-26 | http://www.haydenplanetarium.org/tyson/tags/media/cosmic-perspective | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396027.60/warc/CC-MAIN-20160624154956-00160-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.919731 | 227 | 3 | 3 |
Mars has given NASA a platform to demonstrate what it does right in the field of unmanned planetary exploration. Spurred by the success of the Mars rovers Spirit and Opportunity, the space agency has entered a golden age of planetary robotics.
But because of budgetary restrictions, NASA’s success may be short-lived. President Obama’s 2013 budget plan slashes robotic space exploration by 20 percent, leaving NASA with $1.2 billion to divide among fiercely competing missions.
Fortunately, the next Mars rover, Curiosity, won’t be affected. It’s already on its way to the Red Planet and should roll across its surface this summer.
Maven, a suborbital probe that will study Mars’ upper atmosphere, is scheduled for launch next year. Assuming they operate as designed, Maven and Curiosity will provide many years of data on Earth’s neighbor.
The immediate casualties of the budget cuts include a 2016 orbiter that would measure gases in the Martian atmosphere for microbial activity and a 2018 mission that would land a probe on the surface, scoop up rocks, and return to Earth with them.
NASA would receive $17.7 billion in the 2013 budget, only slightly less than this year. But it still means that important, knowledge-expanding programs will be delayed. The development of the heavy-lift rocket that will take humans into deep space has been pushed back to the 2020s. The James Webb Space Telescope, Hubble’s successor, has been put off for six years.
Sadly, the list of postponed missions has more to do with the absence of political will and financial commitment than scientific know-how.
In austere times, every segment of government must take a hit to get spending under control. But Americans shouldn’t let the pioneer spirit that built NASA wither.
NASA exists to explore space and extend the frontiers of what we can know about the universe. It’s incumbent on the President, Congress, and taxpayers to get the nation’s fiscal priorities in order so that NASA can fill our thirst for knowledge and expand our desire to go beyond. | <urn:uuid:88bf5cdb-5be3-4528-99b0-ad9c65d16917> | CC-MAIN-2016-26 | http://www.toledoblade.com/Editorials/2012/03/19/NASA-on-the-block-1.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397428.37/warc/CC-MAIN-20160624154957-00087-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.930462 | 435 | 3.265625 | 3 |
This winter's harsh weather and record-setting snow have many asking, what happened to global warming?
Winter officially started a few days ago, but freezing temperatures have paralyzed European travel for weeks.
"We've reduced the number of flights by European airlines by a third," John Holland-Kaye, BAA commercial director, said. "And that's the first time that's ever been done by any airport in the world."
Now, European Union leadership, which believes in global warming, wants to know why European airports weren't prepared for what some are calling a new "ice age."
"The vice president wanted to underline that he is extremely concerned about the level of disruption to travel across Europe caused by the severe snow," Helen Kearns, European Commission spokeswoman, said. "Stating very clearly that it's unacceptable and that it should not happen again."
Britain is going through it's worst December since official weather records began.
In the United States, the collapse of the Minneapolis Metrodome roof from too much snow has become a symbol of winter 2010.
In 2002, some climate experts predicted there wouldn't be enough snow in the future for the Winter Olympics, and that ski resorts would be snowless.
Yet, that's currently not a problem in California.
Still, the climate change establishment isn't ready to doubt its own predictions about global warming. Supporters say the record snowfall and cold whether are, in fact, proof of climate change. | <urn:uuid:fde49c6e-b1ae-42ff-a1ad-987bd24a2504> | CC-MAIN-2016-26 | http://www.cbn.com/cbnnews/us/2010/December/Bizarre-Weather-Adds-Doubt-to-Global-Warming/ | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399522.99/warc/CC-MAIN-20160624154959-00124-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.974267 | 299 | 3.015625 | 3 |
Chaco Canyon is also the site of a remarkable solstice marker, located atop Fajada Butte in the main canyon. The site was discovered in the late 1970's by an artist named Anna Sofaer, who was studying rock art in the region. It was an extremely fortuitous discovery, for a number of reasons. The marker resembles nothing more than a group of large slabs of rough stone leaning against the side of a cliff high up on the butte. On the wall of the cliff behind the stones two spiral designs had been carved. Nothing about the site suggests it was particularly important or noteworthy, yet Ms Sofaer discovered that at midday of the winter soltice, light shining through the stone slabs casts a wedge-shaped beam upon the center of the larger spiral, which slowly descends through the spiral and disappears. For this reason, the marker is known as "The Sun Dagger".
It was an incredibly exciting discovery. Later observations confirmed that the marker also marks the summer solstice and both equinoxes. Finally, experimentation has shown that the 19 year lunar cycle was also marked. This was an altogether remarkable achievement, but the story isn't finished there. In the early 90's the stone slabs shifted their position. Had Ms Sofaer not made her discovery when she had, chances are very good that the existence of the Sun Dagger would never have been known to us. Efforts to restore the marker are now underway. Because of its fragility, the site is not open to the public, but a video on the marker can be viewed at the visitor's center.
|Chetro Ketl - a view of the town from the Pueblo Alto trail on the north mesa.|
|Kin Kletso - another town viewed from the Pueblo Alto trail. The circular structures in this and the previous images are called kivas. The Hopi, Zuni, and Rio Grande Pueblo peoples use similar structures today as centers of ceremony, worship, and social interaction.|
|The Great Kiva - In Chaco Canyon, most of the towns are located along the north side of Chaco Wash, and their kivas are an integral part of their structure. On the south side of Chaco Wash stands Casa Rinconada, also called the Great Kiva. This kiva is more than twice the size of any other in the canyon. It stands among a small set of ruins that were crudely built as compared with the Great Kiva and the pueblos on the north bank. It is speculated that Casa Rinconada was a ceremonial center for all of the communities in the canyon.|
|The Supernova Pictograph - In AD 1054 a new star appeared in the constellation Taurus, so bright it was visible even by day. Today we know this object as the Crab Nebula, the remnants of a supernova of a massive star. The Chinese recorded this event at the time of its appearance, and many archaeologists believe that the Anasazi did so as well, with this set of pictographs located along the trail to Penasco Blanco ruin. The crescent shape accurately represents the phase of the moon at the time of the explosion, while the star shape is thought to represent the supernova itself. The handprint seems to be in the nature of a signature.|
Chaco Canyon is located in northwestern New Mexico, roughly 50 miles south of Farmington. It can be reached from state route 44 from the north, and from interstate route 40 from the south. Either way, the last 20 to 25 miles are driven over graded dirt roads. | <urn:uuid:896839c9-4c07-4e88-b4b7-adfa3097b1a7> | CC-MAIN-2016-26 | http://www.stmoroky.com/recreate/canyon/chaco.htm | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397696.49/warc/CC-MAIN-20160624154957-00020-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.974249 | 743 | 3.078125 | 3 |
I've heard it all before: Unix will control the world's desktop computers! No, NextStep will rule! Wait a minute, make that OS/2 — OS/2 will bury Microsoft Windows!
Not one of these predictions has come to pass. Instead, it's the McDonald's of software, Microsoft Windows, that's become the de facto standard for business computing. But now there are echoes of a new mantra in the land of computers: Linux, Linux, Linux.
Linux is yet another operating system, and tech heads are predicting that it will one day take over the computing world. They might be right. This time, the upstart is supported by thousands of volunteer programmers, hundreds of millions of dollars in venture capital, and stratospheric stock valuations.
And there's more. Unlike past contenders for the title of heavyweight operating system, Linux is free. Anyone can download Linux from the Internet, and it works on just about every type of computer. Moreover, Linux is stronger, tougher, faster, and more secure than Windows — at least, that's what Linux evangelists claim.
Linux's rise has been truly remarkable. But even until recently, Linux was mostly confined to back-office computers. Now, however, several hardware and software companies are trying to convince the rest of us that we should use Linux too.
Can we depend on a free operating system to run all of our business applications? To find out, I spent a few weeks installing Linux on different systems and sampling Linux-based software packages. After much poking and prodding, I'm primed to take on 10 of the most frequently asked questions about Linux. Here, then, are the FAQs.
1. What is Linux, and is it really free?
Linux is the underlying software code, or "kernel," that tells a computer what to do and how its software should behave. The prime mover behind Linux is Linus Torvalds, a software engineer who, in 1991 — while still a student at the University of Helsinki — began working with Minix software to create a Unix-like operating system from scratch. But he hasn't been going it alone. For the past nine years, thousands of programmers have worked on Linux (pronounced LINN-ucks), developing a free operating system that anyone can download from the Net.
Linux is free in more ways than one. In Linux's case, "free" also means "transparent." That is, anyone can look at the software's underlying components — which in most other operating systems are kept top secret — make changes, and thus write better programs or fix problems easily. That's because Linux hews to an open-source model, where there are no licensing fees for looking at, adapting, and fixing the code. And that's a very big deal, because anybody can use Linux software to create embedded applications, such as programs that are customized for specific work requirements. (This comes with a caveat: If you alter the source code — that is, the main components — of the Linux software, you must in turn make the source code of any software you write freely available to other programmers.)
2. Enough about code. What about Linux's "look and feel"?
Although there's just one stable kernel version of Linux, there are several different graphical interfaces from which to choose, each with its own special features. Most of these friendlier, "Linux-for-the-rest-of-us" packages are based on one of two popular graphical user interfaces, or GUIs.
KDE (K Desktop Environment) was an early graphical interface brought over from the commercial Unix world. At first, many Linux proponents resisted KDE, because some of the tools that programmers used to create programs and improve the software in KDE were not "open" — that is, a license was required to use them.
Consequently, a collection of volunteer programmers decided that a completely open and free GUI was necessary. So they created GNOME (meaning GNU Network Object Model Environment), a graphical front end that's comparable to KDE, but with a lot more flexibility. With GNOME (pronounced Guh-NOME), you can change and adjust nearly every aspect of your system. Since GNOME was released, KDE backers have also worked to distribute KDE tools for free. Consequently, retail versions of Linux are usually wrapped in either a KDE or a GNOME graphical interface.
To make things more confusing — but also more flexible — most GUIs work with several different "windows managers." These are the elements that you actually see on the screen, such as colored menu bars and cutesy icons. For example, GNOME works with several windows managers. One of these, called Enlightenment, looks very much like Microsoft Windows 98. You can also give GNOME a Macintosh-like interface, if that's what you're used to.
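How you pick a windows manager varies by distribution; on systems that start X with the `startx` command, the choice typically lives in a small `~/.xinitrc` script. The fragment below is an illustrative sketch of our own, not from any particular distro — the exact file name and session commands (`gnome-session`, `startkde`, `enlightenment`) may differ on your system.

```shell
# ~/.xinitrc -- a hypothetical example; exact file and command names
# vary by distribution. The exec line decides what starts with X.

exec gnome-session        # GNOME, with whichever windows manager it is configured to use

# To try something else, comment out the line above and uncomment one of these:
# exec enlightenment      # the Enlightenment windows manager on its own
# exec startkde           # the KDE desktop
```

Only the first uncommented `exec` line takes effect, so switching looks is a matter of moving the comment marks and restarting X.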
All of this is free: Linux, the GUI, and the windows manager. Most have been bundled together in a single package. Often dubbed "distros" (for "distributions"), these packages usually include an automated installation program, the Linux kernel, a GUI such as KDE, and several windows managers. Distros built for the average computer user include Red Hat Linux 6.1, Caldera's OpenLinux 2.3, and Debian GNU/Linux.
Coordinates: Linus Torvalds's home page, www.cs.helsinki.fi/~torvalds; Linux Online, www.linux.org; KDE, www.kde.org; GNOME, www.gnome.org
3. What do I get when I download Linux?
You can download a perfectly serviceable, free-of-charge version of Linux from the Web, along with the associated installation libraries. Even most of the commercial versions — those from Caldera and Red Hat, for example — are available free of charge on the Web. But unless you're a supergeek, I would not recommend taking the download path. First, the free Web version doesn't come with free technical support. Second, downloading and installing the software can be a serious challenge. (Even Torvalds reportedly uses a commercial version of Linux.)
A smarter, hassle-free approach is to shell out the money for a commercial distribution of Linux. I tried several versions — ranging from $20 to $50 — and they all worked fairly well. The commercial packages all have added an easy-to-use front end, which means you don't have to enter arcane code words to get the operating system up and running.
4. Which Linux package is right for me?
Computer users who want to run Windows on the same system will find that Caldera's OpenLinux 2.3, which costs $50, is the easiest of the Linux packages to install. OpenLinux uses the KDE graphical interface and includes all of the major applications you will need, including Sun Microsystems's StarOffice 5.1, Corel's WordPerfect 8, and Netscape Communicator 4.6.
Installation of Red Hat Linux 6.1 wasn't quite as easy as Caldera's, but it still managed to find nearly all of my computer's internal components. In other words, I didn't have to tell it what kind of hard drive I have. Better yet, for $80, the Red Hat Linux 6.1 Deluxe package gives you a choice of using either the KDE or GNOME interface. And it features a dial-up network connection that makes accessing the Internet as easy as it is from Windows. But what I most appreciated was Red Hat's free month of technical phone support. Caldera charges you for technical-support calls.
The much-ballyhooed release of Debian GNU/Linux — with its affordable $20 price tag — is supported by Silicon Graphics, O'Reilly & Associates, and VA Linux Systems Inc. But Debian GNU/Linux is a noncommercial version of the operating system — and in some ways, a step backward. Although it uses the GNOME interface, Debian's installation is much more awkward than Red Hat's or Caldera's. If you don't feel comfortable telling Debian what kind of hard drive you have, Debian isn't for you. And getting it to work alongside Windows on the same machine requires a lot of futzing around.
Coordinates: Caldera Systems Inc., www.caldera.com; Red Hat Inc., www.redhat.com; Debian, www.debian.org
5. We use several different systems in our office. Does Linux really work on any kind of computer?
Just about. According to the specs, the minimum PC requirements for Linux are an Intel 386 computer with about 4 MB of RAM. But to run the graphical interface and to have enough speed and space for your work, you should have at least a 486 (or a Pentium) machine with 64 MB of RAM and a 600-MB hard drive. Linux will also run on PCs with processors from Advanced Micro Devices Inc. and Cyrix Corp.
If you've got a variety of computers in your office, you'll appreciate Linux's ability to run on a variety of non-Windows machines. For now, Linux supports Compaq Computer's (formerly Digital Equipment) Alpha-based systems, Sun Microsystems's Sparc stations, and even some of Apple's Macintosh systems. But not all versions of Linux support all of these machines. A case in point: Red Hat's latest release only supports Intel, Alpha, and Sparc systems. So make sure you check with the software vendor before you make a purchasing decision.
6. Will Linux work with all of the peripherals running on my computer?
Linux will run on a variety of computers, but it may not support all of the features on a given computer. Sometimes, internal components, such as built-in sound chips, will fail to work. And many peripherals, such as video cameras, aren't supported at all. As of this writing, the most prominent of Linux's drawbacks is this: It does not support USB (universal serial bus) peripherals.
That said, I did find that Linux supports hundreds of popular printers and many common business tools. For example, there are several free software programs that will connect your PalmPilot to a Linux computer and synchronize your address books. Still, the easiest way to ensure that the operating system works on your computer is to buy one with Linux already installed.
7. Which computers come with Linux already preinstalled?
Among the gaggle of specialty-hardware vendors that are offering Linux workstations, the most prominent is VA Linux Systems. The company offers an array of computers, ranging from complete PCs for about $1,000 to full-blown workstations with multiple processors that come with a price tag of more than $10,000. Another respected Linux-only computer seller is Penguin Computing.
The mainstream PC makers are also jumping on the Linux bandwagon. Last fall, Dell Computer began preinstalling Linux on many of its machines. Prices for these systems are not appreciably lower than those of similar systems outfitted with Microsoft Windows, but that may change as the competition starts to heat up. Expect to see computer-maker stalwarts Hewlett-Packard and IBM selling Linux desktop computers soon. Database giant Oracle is also joining the free-software legion: It plans to establish a separate company to sell Linux-based networked computers, which are expected to cost about $200 each.
Coordinates: VA Linux Systems Inc., www.valinux.com; Penguin Computing Inc., www.penguincomputing.com; Dell Computer Corp., www.dell.com/linux; Oracle Corp., www.oracle.com
8. How long will it take me to get comfortable using Linux?
It won't take forever. People who have used a Macintosh or a Windows machine will easily find their way around a Linux system that's been gussied up with GNOME or KDE. Onscreen icons launch programs with a click, and a movable tool bar calls up menus listing everyday programs. You'll have to unlearn some old habits, but many things work in familiar ways. For example, the software menu pops up in the bottom left-hand corner of the screen, and you can select applications like a CD player or a word processor from that list.
Even small tasks, such as renaming a file, are usually accomplished in the same way that they are in Windows. Macintosh users can even make the Linux GUI look like a Mac screen by selecting a different windows manager, or by putting the menu bar at the top of the screen and changing the borders to look like those of the MacOS.
All of this seems pretty seamless, until you decide to do something different — like reset the screen resolution or install a new program. Then you'll find yourself flipping through the Linux manuals for the right commands. That's because Linux still makes you drop to terminal mode to complete many tasks — which means typing commands at a text prompt to make changes to the system. Getting a new modem to work or installing a Linux program can be a chore until you learn the commands for setting up drivers and manually running software-installation routines.
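To give a flavor of what terminal mode involves, here is a short, self-contained sketch of everyday shell commands — our own example, using only standard utilities and a scratch directory so nothing on the real system is touched. (The `rpm` line is shown as a comment because package names vary.)

```shell
#!/bin/sh
# A few of the terminal commands you'll meet sooner or later.
# Illustrative only; package names and device setup vary by distribution.

# Work in a scratch directory so nothing on the real system is touched.
dir=$(mktemp -d)
cd "$dir"

# Renaming a file -- the GUI does this with a click; the shell uses mv:
touch report.txt
mv report.txt report-final.txt

# Making a new directory and moving the file into it:
mkdir archive
mv report-final.txt archive/
ls archive                    # prints: report-final.txt

# Installing software on Red Hat-style systems uses rpm, for example:
#   rpm -ivh somepackage.rpm    (not run here)

# Clean up the scratch directory.
cd /
rm -rf "$dir"
```

The same handful of commands — `mv`, `mkdir`, `ls` — cover most of the file housekeeping that Windows users do with a mouse.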
9. Enough about the operating system — what about applications for Linux?
The main programs for Linux that run under KDE and GNOME can open and store many Microsoft Windows file formats. And there are enough solid software programs written for Linux to let you get your work done — with a couple of notable exceptions.
In the word-processing category, Corel's WordPerfect 8 ($70) for Linux is a winner. WordPerfect includes more than 5,000 clip-art images and supports more than 1,000 printers. If most of your work involves words, you won't have to make any sacrifices when you use Linux.
When it comes to all-in-one office suites, however, no big-name program is currently available for Linux. Corel says it plans to release its WordPerfect Office suite for Linux, which will reportedly include the popular Quattro Pro spreadsheet program.
Meanwhile, there's StarOffice 5.1, the perfectly serviceable office suite from Sun Microsystems. You can download it free of charge or pay $10 for the promotional CD-ROM. StarOffice includes a word processor, a spreadsheet, an email package, a calendar, and even a program for making presentations.
A Web browser is a must-have business tool, and Netscape continues to support Linux. The latest Linux version of Netscape Communicator 4.7 contains all of the essential elements of the Windows and the Mac versions, including email management and a Web-page composer. Most of the popular plug-ins, including RealNetworks's RealPlayer G2 media player, also work for people who are cybersurfing from a Linux machine.
What's missing? For one thing, no one has yet written a full-blown contact manager, such as Symantec's Act!, for Linux. Most versions of KDE and GNOME include an address book and calendar, but they lack advanced features that can track sales contacts and juggle multiple cross-references.
Coordinates: WordPerfect 8, www.corel.com; StarOffice 5.1, www.sun.com; Netscape Communicator, www.netscape.com; RealPlayer G2, www.realnetworks.com
10. So is Linux ready for prime-time business users?
In many respects, Linux passes the office test. After weeks of use, I managed to crash just one machine running GNOME. (Meanwhile, my Windows 98 machine continues to crash at least once a day.) And I was able to do most of my work on the Linux-equipped machines.
If you've got a dotcom business, Linux deserves a serious look. Linux is reliable and flexible — a plus for online businesses that must make weekly software changes to keep up with the Internet economy. Begin by having your developers use a few Linux boxes. Then start introducing the systems to other employees when the interface becomes more mature later this year.
But Linux gets a thumbs-down for small companies and home-based offices. Managing the operating system still requires some expertise. And Linux lacks most of the finishing touches that make it easier for users to install new software and solve small glitches on Windows PCs and Macs. What's more, until Corel and others release their applications for Linux-based machines, you'll be stuck with a limited number of business-software packages.
That said, take a good, hard look at Linux in four months or so. The speed at which applications are being written and improved on in the Linux world is absolutely astonishing, thanks largely to the Internet.
While older software companies such as Microsoft rely on employees ensconced in corporate campuses to churn out new bells and whistles for Windows, the legions of Linux volunteers use the speed and flexibility of the Internet to fine-tune and to improve the software. The result is that free Linux software upgrades get churned out online, 24 hours a day. The information superhighway has come full circle: Linux is changing the way the software that got us there in the first place is made.
Contributing Editor John R. Quain appears regularly on CBS and on MSNBC.
Action Item: Linux Links
Here are three of the most user-friendly sites for learning more about leveraging Linux.
Linux Online (www.linux.org) If you're looking for Linux-compatible hardware or the latest software, start here.

Linux Documentation Project (www.linuxdoc.org) Chances are that whatever you're trying to do with Linux, someone else has tried it before; those solutions are posted here.

Linuxberg (www.linuxberg.com) This is a neatly arranged center for locating new Linux programs, from networking fixes to newly designed themes that will add some pizzazz to your desktop.
Sidebar: GNOME Guru
From his office in Mexico City, master programmer Miguel de Icaza cajoled, pleaded, and cranked out code to get GNOME — the free graphical front end for Linux — up and running smoothly. Here, de Icaza gave us the goods on GNOME.
What attracted you to Linux?
"When I started programming years ago, proprietary tools were expensive, yet they were of poor quality. But the free tools were great — anyone could access them and improve upon them. So I started to work on Linux, and I urged others to help."
What are you working on now?
"We're developing more end-user applications, such as office suites, that will also be free. We're also working on presentation programs, and I'm starting a company to write GNOME applications for Linux and other free systems."
Is Linux ready for individual computer users?
"It depends on what you want a computer to do. Linux is not quite ready for everybody's desktop, though in four to six months it might be. But even now, there's a project in Mexico that's distributing computers equipped with Linux and GNOME to 1 million students."
Coordinates: Miguel de Icaza, miguel@GNU.org; GNOME, www.gnome.org
Sidebar: Leap to Linux
Wondering whether your company can make the leap to Linux? David O'Dell, VP of engineering at Powerize.com, a Web site offering business research, has already taken some first steps. Here are five of his top-of-mind tips for converting your company to Linux.
Linux can handle customized applications. "Most custom applications can be built especially for Linux."
But consider your resources. "You need IT support for Linux systems. If you don't have the technical-support people to work with it, don't even think about taking on Linux."
Check connections. "Even though the rest of my company uses Windows, our developers can still use Linux. They just use Lotus Notes to communicate and trade files with everyone else. Make sure that you can do the same before you invest in Linux."
Get it preinstalled. "Buying computers with Linux already installed will help your IT department avoid many headaches."
Don't force Linux on everyone. "Our technical writers and sales staff don't use it, because the applications they're comfortable with aren't running on Linux yet."
Coordinates: Powerize.com, www.powerize.com
Sidebar: Linux on a Laptop
Because many components of notebook computers are custom-built, such systems present special challenges for Linux. To see how Linux would perform on the road, I took Red Hat Linux 6.1 for a test-drive on a Gateway Solo 9300. I restarted the Solo with Linux.
Nearly one-third of the screen remained blank! But I found an invaluable Web site, Linux on Laptops, where I located advice for fixing the display problem.
Then the internal modem died. Most built-in notebook modems require Windows software and therefore don't work with Linux. Fortunately, a PC Card modem works fine, so I plugged in an old 28.8-Kbps model to get online.
But the USB and FireWire ports didn't work. This is a major hole in Linux support. The next stable Linux kernel, version 2.4, is expected to close this gap.
Bottom line: Avoid putting Linux on a new notebook computer. It takes time for small fixes to be implemented for new hardware. An older laptop will probably work better with Linux.
Coordinates: Linux on Laptops, www.cs.utexas.edu/users/kharker/linux-laptop
Sidebar: Quain's Top 10
Are you thinking of installing Linux on your computer? Before you do, consider these 10 tips — and don't be afraid to consult the user's guide.
1. Set aside a weekend. It's possible to install Linux in a couple of hours. But chances are, you'll need more time.
2. Stick with even numbers. Stable versions of Linux end in even numbers (such as Linux 2.2); odd-numbered ones are works in progress.
3. Check tech support's business hours. Don't try to install Linux in the middle of the night — the folks in technical support may not be there to help you.
4. Check system compatibility. Make sure that all of the peripherals on your computer are supported by Linux, especially your modem.
5. Read all of the instructions carefully. Installing Linux is not as easy as installing Windows, so make sure you're familiar with all of the commands.
6. Back up everything. Before you start partitioning hard drives, copy your work to another machine or storage device.
7. Write down all of your system's specs. Make a list of system components, including the type of hard drive you use. Tech support might need that information if you run into problems.
8. When in doubt, "dual-boot." If possible, set up your system to work with Windows and Linux. That way, you can always go back to your old software.
9. Make sure you have enough disk space. The minimum is 300 MB, but plan on having at least 600 MB.
10. Seek advice online. Millions have already put Linux on their PCs. If you need help, look for it online.
A version of this article appeared in the Jan/Feb 2000 issue of Fast Company magazine. | <urn:uuid:96b9d7ef-947a-4650-947d-69f84b128f19> | CC-MAIN-2016-26 | http://www.fastcompany.com/38780/are-you-ready-linux | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395346.72/warc/CC-MAIN-20160624154955-00161-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.93285 | 4,852 | 2.875 | 3 |
The official name of Hungary, which is called Magyarorszag in Hungarian, is the Republic of Hungary. When translated into Hungarian, this comes to Magyar Koztarsasag. Hungary is situated in the Carpathian Basin in the centre of Europe and is encircled by land on all sides. Hungary is surrounded by Croatia, Austria, Ukraine, Serbia, Slovenia, Slovakia, and Romania.
The city of Budapest is the capital of Hungary, which is a member of the European Union and NATO. It is also a Schengen state. The language which is spoken officially in Hungary is Hungarian, which is also called Magyar. In the European Union, there are only four official languages which are not of Indo-European origin, and Hungarian is one of them.
In the last few years, Hungary has gained the status of a major tourist destination, ranking among the top fifteen most popular places to visit. Budapest, its capital, is considered to be one of the most beautiful cities in the world. Even though the country is small in size, it contains a number of places declared World Heritage Sites by UNESCO, as well as Lake Heviz, the second-largest thermal lake in the world, UNESCO biosphere reserves, Hortobagy, the largest natural grassland in Europe, and Lake Balaton, the largest lake in Central Europe.
Hungary previously had a centrally planned economy, which was later changed into a market economy. Foreigners may own property and invest in Hungarian firms. A target has been set for Hungary's accession to the euro zone, and to reach this target Hungary will have to carry out economic reforms and reduce government spending.
The climate of Hungary is classified as being a Continental Climate. The summers are hot but the humidity level is low overall. The winters are frigidly cold with snow. | <urn:uuid:f6c43154-8036-4585-94bd-98d4083890ee> | CC-MAIN-2016-26 | http://www.agitproperties.com/europe/hungary/ | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398873.39/warc/CC-MAIN-20160624154958-00124-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.977763 | 394 | 3.09375 | 3 |
President Obama's 2014 budget request includes programs for research, development, and education in STEM (science, technology, engineering, and mathematics) fields. He has called for 100,000 new STEM teachers and one million new STEM graduates over the next 10 years, increased participation by groups historically underrepresented in these fields, and additional resources to support networks focused on STEM education. Mathematics is a major component of improving and expanding the STEM-literate work force.
But mathematicians, and the profession as a whole, are under scrutiny and attack. In 2012, the President's Council of Advisors on Science and Technology labeled mathematics the "bottleneck that is currently keeping many students from pursuing STEM majors" and called for teaching of college-level mathematics courses "by faculty from mathematics-intensive disciplines other than mathematics." E.O. Wilson recently claimed that "many of the most successful scientists in the world today are mathematically no more than semiliterate." Paul Krugman agreed that researchers do not need much math and writes that "higher math isn't usually essential; arithmetic is."
Not only are these statements misguided, they also reinforce popular negative stereotypes. When someone learns that I am a mathematician, the inevitable comment is, "I was never good at math," often accompanied by a dismissive chortle. Society accepts such comments but would never accept the analogous "I was never good at reading." We accept the use of a calculator to add small numbers but not the use of software to read basic English. We accept T-shirts advertising how hard math is, popular caricatures of math geeks, and scientists who claim they are mathematically ignorant, but we do not condone illiteracy, and we work hard to eradicate it.
Unfortunately, these discussions are a distraction from the main issue: We need to train more people to be scientifically literate, and mathematics is a core component of such training. The precise nature of mathematics provides a framework for scientific advances. Without proficiency in the language of logical reasoning and quantifiers, it is impossible to work in a STEM field. The study of mathematics is thousands of years old, yet it is still a hot field.
Mathematics provides a tool box for the sciences. Mathematical models are used to explain and predict events around us, and rigorous mathematical thinking organizes ideas. Mathematics is used to model the spread of infectious diseases and then as a tool to halt the spread. It is used to develop rigorous standards for testing in drug trials that lead to major improvements in treatment, and it is used to design buildings that can withstand earthquakes and other natural disasters.
But mathematics is much more than a tool box. Its logical reasoning underpins all scientific discoveries, and it has transformed the way we understand our world. Long before experimental evidence was available, Galileo used mathematics to predict that the earth revolves around the sun. Centuries later, Albert Einstein used mathematics to show that the universe is curved, not flat; his theories were only experimentally verified years later.
Mathematics plays a role in the design of satellites, whose applications include communications, weather prediction, Internet access, and military uses. Before public encryption codes, a theorem of Pierre de Fermat established a rigorous foundation for a commonly used cryptographic system. Without Alan Turing's fundamental work, the modern computer would not be possible. Numerical analysis, modeling, and statistics—all branches of mathematics—played a significant role in mapping the human genome. This is mathematical theory turned into applications, but applications that developed long after the theory.
Sometimes mathematical theory turns into practice much more quickly, as happened with the use of complex analysis to develop sophisticated coding techniques that protect the transmission of personal data.
Simpler mathematical concepts are implicitly used in numerous other professions: A plumber computes volumes and understands angles; a nurse calculates doses and drip rates; and a mechanic understands torque, ratios, and volumes.
This is not to say that every scientist needs a degree in mathematics. But every scientist needs the rigorous language and logic afforded by mathematics. Equating this knowledge with the ability to do calculus is as nonsensical as equating a biologist's ability to hunt with the ability to map a genome. Mathematics should not be used as a gatekeeper for the sciences, but one cannot excel in science without basic mathematical reasoning.
Increasingly, students arrive at colleges without sufficient background to take basic mathematics courses. Nonetheless, we are expected to teach them the higher-level concepts they need for classes in biology, statistics, physics, and chemistry. But mathematics builds on a previous foundation and cannot be taught starting at the end. It is like asking a student unable to read a newspaper to analyze Shakespeare.
From an early age, children are directed to books appropriate to their individual reading levels. Working within guidelines for a third grader, a good teacher or librarian directs a student to appropriate material, and schools are equipped with reading material at a wide range of levels. But elementary education in mathematics does not have specialists like librarians to present students with appropriate-level material. The result is that we bore the good students and lose the weaker ones, helping only some in the middle. Improving the STEM work force starts early—focusing on individual needs and teaching the language of mathematics.
With all our debating, we have lost sight of the main issues: We need to support mathematics research at all levels and train more people to be mathematically literate. Does every scientific discovery depend on mathematics? Of course not. Can any scientist function without mathematics? Absolutely not. | <urn:uuid:7f6e4590-5fc9-4e6f-8b71-9a69777893d8> | CC-MAIN-2016-26 | http://chronicle.com/article/Mathematics-1000-Years-Old/139943/ | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395346.6/warc/CC-MAIN-20160624154955-00063-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.943691 | 1,118 | 3.015625 | 3 |
Tackling climate change through livestock
An important emitter of greenhouse gases (GHG), the livestock sector also has a large potential to reduce its emissions.
This is the main conclusion drawn by the report “Tackling climate change through livestock”. This newly released report provides the most comprehensive global assessment made to-date of the livestock sector’s GHG emissions and its mitigation potential.
The report also presents a detailed assessment of the magnitude, the sources and pathways of emissions from different production systems and supply chains. Relying on life cycle assessment, statistical analysis and scenario building, it identifies concrete options to reduce emissions.
It comes at a time when the world needs to urgently reduce GHG emissions to avert catastrophic climate change. The livestock sector can make an important contribution to such international efforts by offsetting some of the sector’s emission increases, which are expected as demand for livestock products is projected to grow by 70 percent by 2050.
Important GHG emissions
- The livestock sector plays an important role in climate change. It is estimated to emit 7.1 gigatonnes of carbon dioxide equivalent (CO2-eq) per annum, representing 14.5 percent of all human-induced emissions.
- Beef and cattle milk production account for the majority of emissions, contributing 41 and 19 percent of the sector's emissions respectively, while pig meat and poultry meat and eggs contribute 9 percent and 8 percent respectively.
- The main sources of emissions are: feed production and processing (45 percent of the total – with 9 percent attributable to the expansion of pasture and feed crops into forests), enteric fermentation from ruminants (39 percent), and manure decomposition (10 percent). The remainder is attributable to the processing and transportation of animal products.
Sizeable reductions within reach
- Sector emissions could already be brought down significantly through the wider use of existing best practices and technologies: the technologies and practices that reduce emissions already exist, but are not yet widely used.
- A 30 percent reduction of GHG emissions would be possible if producers in a given system, region and climatic zone adopted the technologies and practices currently used by their least emission intensive (emissions per unit of animal product) peers.
- Substantial emission reductions can be achieved across all species, systems and regions.
Efficiency key to reducing emissions
- Possible interventions to reduce emissions are mainly based on technologies and practices that improve production efficiency at animal and herd levels. They include better feeding practices, animal husbandry and health management.
- Manure management practices that ensure the recovery and recycling of nutrients and energy contained in manure, and energy savings and recycling along supply chains, are further mitigation options.
Mitigation for development
- Most mitigation interventions can provide both environmental and economic co-benefits. Efficient practices and technologies can boost productivity and thus contribute to food security and poverty alleviation.
Urgent need for collective and global action
- Global action involving all sector stakeholders is urgently required to design and implement cost-effective and equitable mitigation strategies, and to set up the necessary supporting policies and institutional frameworks.
- The adoption of new practices and technologies will require a mix of supporting policies, incentives, research and on-the-ground extension work.
This report is complemented by two technical reports providing a more in-depth analysis of emissions by sub-sectors (pig and chicken supply chains and ruminant supply chains).
FAO’s commitment to practice improvements and sector sustainability
FAO is actively involved in the Agenda of Action in support of Sustainable Livestock Sector Development, a global multi-stakeholder partnership dedicated to improving practices for a more efficient use of natural resources. It focuses on three areas: practices efficiency, grassland management and manure management.
FAO is also actively involved in the Livestock Environment Assessment and Performance (LEAP) Partnership, a cross-sectoral effort to develop common metrics to define and measure environmental performance of livestock supply chains. | <urn:uuid:02fbaf53-7d37-40e6-87b9-49b6be7f01e1> | CC-MAIN-2016-26 | http://www.fao.org/AG/againfo/resources/en/publications/tackling_climate_change/index.htm | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396027.60/warc/CC-MAIN-20160624154956-00140-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.915245 | 820 | 3.65625 | 4 |
Users are reminded to consider the large size of WFC exposures when allocating disk space for storing and reprocessing ACS data. Raw images serve as input to the calibration pipeline and have the file sizes (in MB) given in Table 2.4. A WFC image typically contains two SCI arrays, while the HRC and SBC detectors each have a single SCI array. The raw image sizes presume that both the SCI and DQ arrays are populated with short integer values, but that the ERR arrays are NULL (all pixels have a value of zero).
During calibration processing, the SCI arrays are converted from integer to floating point values. The null ERR array is populated with floating point values, and the null DQ array is populated with integer values. As a result, the size of calibrated images (in MB) is much larger.
The image size in pixels is given in Table 2.5. Calibrated images taken with the WFC and HRC detectors are smaller in image dimensions than raw images because the overscan regions have been trimmed during processing.
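As a rough illustration of why calibrated files are so much larger than raw ones, the sizes can be estimated from the array dimensions and data types described above. The 4096 x 2048 chip dimensions below are assumed for a full-frame WFC exposure; the sketch ignores overscan columns and FITS header overhead, so the results only approximate the values tabulated in the handbook.

```python
# Approximate data sizes for a full-frame WFC exposure.
# Assumptions (not taken from the tables): two 4096 x 2048 chips,
# no overscan regions, no FITS header overhead.
NX, NY, N_CHIPS = 4096, 2048, 2
n_pixels = NX * NY * N_CHIPS

# Raw: SCI and DQ populated as 16-bit integers; ERR arrays are NULL.
raw_bytes = n_pixels * (2 + 2)

# Calibrated: SCI and ERR as 32-bit floats; DQ as 16-bit integers.
cal_bytes = n_pixels * (4 + 4 + 2)

MB = 1024 * 1024
print(f"raw        ~{raw_bytes / MB:.0f} MB")   # -> ~64 MB
print(f"calibrated ~{cal_bytes / MB:.0f} MB")   # -> ~160 MB
```

Under these assumptions the calibrated image is 2.5 times the raw size, which matches the qualitative point made in the text: populating the ERR array and promoting SCI to floating point dominates the growth.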
After calibration in the pipeline, AstroDrizzle combines images in an association to create a product with four extensions: science (SCI), weight (WHT), context (CTX), and header keywords (HDRTAB). Once the distortion is corrected, the size of a drizzled image will be larger (in pixel dimensions and file size) than a calibrated image because the drizzled image's projection on the sky is rhombus-shaped, rather than square. Also, the size of the image offsets, from sub-pixel dithers to image mosaics, will increase the image field of view and hence the image size. The specific dimensions of a drizzled image depend on the image orientation and on which distortion model is in use in the pipeline, and will vary slightly (about 1 pixel) due to the effects of velocity aberration.
In North America, back pain, or dorsopathy, is one of the most prevalent conditions experienced by the average person. Over thirty million Americans suffer from some sort of back discomfort, and it is thought that half of the general work force will experience some form of back problem during an average year. It is reported that half of the American population over the age of 18 has had some sort of back problem in the last five years, and that between 60 and 90 % of all Americans will have back pain at some time in their lives. Back pain is estimated to cost the country fifty billion dollars yearly in lost work time and incurred medical costs, since nearly 40 % of all employees' sick days are caused by back pain.
Since at least 90 % of all back pain has no specific cause, most of those suffering from back pain will find that the pain resolves on its own in no more than four to six weeks without any medical treatment. Treating back pain at home may consist of bed rest for 2 to 3 days with application of hot/cold compresses and over-the-counter analgesics to help relieve the pain and inflammation. After several days of rest, mild exercise has been found to shorten the recovery period.
Studies have shown that the most common causes of back pain include poor lifting technique, poor posture, improper exercising technique, poor posture during long periods of driving, lack of proper back support while sleeping, being out of physical shape and/or overweight, and awkward or sudden movement. Improper lifting, or lifting something that is too heavy, accounts for 35 % of all back pain. These studies also tell us that most lower back pain, or lumbago, is transient and only lasts a short time, with no serious damage occurring to the back. Even the more serious condition of a pinched nerve or slipped disc is very infrequent.
There are several medical causes of back pain, ranging from a herniated disc to deformed vertebral joints. Osteoarthritis is degenerative wear and tear on the cartilage between the back bones that leads to a painful rubbing of bone on bone. Sciatica is an extension of a herniated disc that causes compression or pinching of the sciatic nerve. Spinal stenosis causes a compression or pinching of the spinal cord or nerves as a result of a narrowing of the spinal canal. When one vertebra slides over an adjacent one, the compression or pinching of the nerves in the area is a condition called spondylolisthesis. When the joints between two vertebrae become deformed and pinch a nerve, usually in the neck area, a condition called spondylosis develops.
Back pain researchers continue to try to understand who gets back pain and what the best ways of preventing back pain are.
The major issues that are affecting us are delivered through the UN's Agenda 21 policies. The UN has enlisted its lieutenants (politicians and corporate leaders) in its efforts to depopulate the Earth and enforce slavery upon us. The tactics that they use to get us to pay for their programs (through taxes and donations), as well as how they garner support from well-intentioned people, will be exposed in this section.
California Drought - Caused by Drought
The most important issue to understand is how the UN is involved in creating food and water shortages; please read the articles in the California section to fully understand the mechanism that collectivists use.
10,000 years ago to the 19th century
The agricultural period is based on the domestication of food plants and animals, which progressively constituted the main source of human food.

This period corresponds to agricultural food products, transformed and prepared inside domestic consuming units.
During this period man gradually...
The 18th century was a time of major changes in Europe, when a new industrial and liberal society was set up on a traditional agrarian basis, and in which the agro-industrial period of food took shape and developed.
Nau mai haere mai ki te pae tukutuku o te Koti Taiao (Welcome to the website of the Environment Court)
The Environment Court is a single national court which sits in a number of courthouses in different parts of the country. Judges are permanently located in Wellington, Auckland and Christchurch. Environment Judges from the three main centres go on circuit to other locations as needed. The work of the Court is organised nationally by the Principal Environment Judge to ensure a match between judicial resources and work.
Most of the Court's work involves issues arising under the Resource Management Act 1991, largely dealing with appeals about the contents of regional and district plans and appeals arising out of applications for resource consents. The consents applied for may be for land use, for a subdivision, a coastal permit, a water permit or a discharge permit or a combination of these.
Search Environment Court decisions
You can search Environment Court decisions online by typing any word(s) related to what you are looking for in the search box below and clicking on search. The search box will take you to the New Zealand Legal Information Institute (NZLII) Database. This database contains decisions from September 1996 onwards.
If you are unable to locate a particular decision please email EnvironmentCourt@justice.govt.nz
What is NZLII?
The New Zealand Legal Information Institute (NZLII) is a joint project of the University of Otago Faculty of Law, University of Canterbury and the Australasian Legal Information Institute (AustLII) with the assistance of the Law School, Victoria University of Wellington. | <urn:uuid:ec0cb1fb-735a-4228-8bf9-84e93eff6490> | CC-MAIN-2016-26 | http://www.justice.govt.nz/courts/environment-court | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391634.7/warc/CC-MAIN-20160624154951-00078-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.916744 | 319 | 2.515625 | 3 |
Yongcheng was the capital city of the Qin
State from the Spring and Autumn Period (770-476BC) to the mid Warring States
Period (476-221BC). It remained the capital of the Qin from 677-383BC.
Covering an area of 11 square kilometers,
the city was orderly arranged with interlacing streets. To date, three
large-scale palace areas were discovered at the site. The No 1 temple
constructions comprise the main gate, middle court, ancestral temple, the Zhao
Temple, the Mu Temple and surrounding walls, covering an area of nearly 7,000
square meters. Its halls were all built as earth-and-timber structures, with
huge roofs. The site is the largest and best-preserved construction group of the
period prior to the Qin Dynasty (221-206BC). The No 3 construction site covers
an area of 21,800 square meters and comprises five palaces from south to north.
It contains the most complete sleeping quarters of the period.
Covering an area of 21 square kilometers in the southwest of
the city site is the mausoleum area of the Qin ruler where 43 large tombs and
sacrificial pits were unearthed. The area has a standard layout that can be
divided into 13 cemeteries. Excavations indicate that people of the Qin State
had already formed a set of architectural concepts on the overall arrangement
and design of the mausoleum. The tomb shaped like the Chinese character 中 (zhong) is the most superior, while other tombs are shaped like the character 甲 (jia) or like a knife handle. The largest
tomb is the No 1 Tomb of King Qin -- so far the largest wooden coffin chamber
discovered in China.
The No 1 Tomb has a coffin chamber 59.4
meters long from east to west and 38.5 meters wide from south to north,
including a 240.6-meter-long path that covers an area of 400,000 square meters.
Located in the middle of the coffin chamber is the main room, which is divided
into front and back rooms. The front room represents the palace where King Qin
discussed political affairs before his death; the back room represents the
palace where he dined and slept. Over 600 cultural relics made of various
materials, such as gold, jade, bronze, iron, bone, pottery, lacquerware and
wooden cultural relics were found in the coffin chamber. Some relics, such as
the golden woodpecker, white jade dagger and jade ornament resemble the jade
horse head found in the Terracotta Warriors and Horses Pits of Qin Emperor
Shihuang. They reflect the masterly crafts and well-developed techniques of the
early Qin State.
Also unearthed at the Yongcheng Site were 67
large construction components made of bronze that reveal the unparalleled,
luxurious style of Yongcheng Palace. | <urn:uuid:65510149-dbcb-4732-aac9-76470634f969> | CC-MAIN-2016-26 | http://www.chinaculture.org/gb/en_travel/2003-09/24/content_33584.htm | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393463.1/warc/CC-MAIN-20160624154953-00176-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.932395 | 611 | 3.34375 | 3 |
Print version ISSN 0042-9686
Bull World Health Organ vol. 79 n. 8 Geneva Jan. 2001
Evaluation of efficacy of school-based anthelmintic treatments against anaemia in children in the United Republic of Tanzania
OBJECTIVE: To determine the impact of deworming on anaemia as part of a large-scale school-based anthelmintic treatment programme in the Tanga Region of the United Republic of Tanzania.
METHODS: Both the reduction in the prevalence of anaemia and the cost per case prevented were taken into consideration. Cross-sectional studies involved parasitological examination and anaemia evaluation before and at 10 months and 15 months after schoolchildren were dewormed.
FINDINGS: Baseline studies indicated that the prevalence of anaemia (haemoglobin < 110 g/l) was high (54%) among schoolchildren, particularly those with high intensities of hookworm and schistosomiasis. Attributable fraction analysis suggested that hookworm and schistosomiasis were responsible for 6% and 15% of anaemia cases, respectively. Fifteen months after deworming with albendazole and praziquantel the prevalence of anaemia was reduced by a quarter and that of moderate-to-severe anaemia (haemoglobin < 90 g/l) was reduced by nearly a half. The delivery of these anthelmintics through the school system was achieved at the relatively low cost of US$ 1 per treated child. The cost per anaemia case prevented by deworming schoolchildren was in the range US$ 6-8, depending on the haemoglobin threshold used.
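The cost-effectiveness figure can be sanity-checked with a back-of-envelope calculation using only numbers quoted in this abstract (about US$ 1 per treated child, a 54% baseline anaemia prevalence, and a prevalence reduction of roughly a quarter). This sketch ignores the different haemoglobin thresholds and is an approximation, not the study's actual method.

```python
# Back-of-envelope check of the cost per anaemia case prevented,
# using only figures quoted in the abstract.
cost_per_child = 1.00           # US$ per treated child
baseline_prevalence = 0.54      # anaemia prevalence before treatment
relative_reduction = 0.25       # prevalence "reduced by a quarter"

# Expected anaemia cases prevented per child treated:
cases_prevented_per_child = baseline_prevalence * relative_reduction

cost_per_case = cost_per_child / cases_prevented_per_child
print(f"~US$ {cost_per_case:.1f} per case prevented")  # -> ~US$ 7.4
```

The result is of the same order as the figure reported in the findings, which suggests the quoted numbers are internally consistent.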
CONCLUSION: The results suggested that deworming programmes should be included in public health strategies for the control of anaemia in schoolchildren where there are high prevalences of hookworm and schistosomiasis.
Keywords Anthelmintics/therapeutic use; Albendazole/therapeutic use/administration and dosage; Praziquantel/therapeutic use/administration and dosage; Anemia/parasitology; Hookworm infections/drug therapy/complications; Schistosomiasis/drug therapy/complications; Child; School health services; Cross-sectional studies; Logistic models; Regression analysis; United Republic of Tanzania ( source: MeSH ).
Iron-deficiency anaemia can affect the mental and motor development of children (1, 2) with potential long-term consequences for productivity and wage-earning potential in adulthood (3). Parasitic worms contribute to iron-deficiency anaemia among children in sub-Saharan Africa, the predominant species being the hookworms Ancylostoma duodenale and Necator americanus (which inhabit the gut) and Schistosoma spp. (which inhabit the blood vessels surrounding the gut and bladder). The amount of blood loss resulting from iron-deficiency anaemia depends on the intensity of infection (4, 5), the dietary intake of iron (6, 7), and the presence of other parasitic diseases that can cause blood loss or haemolysis, such as malaria and Trichuris trichiura infection (8-10). School-age children are particularly vulnerable to iron-deficiency anaemia exacerbated by parasitic infection because they typically harbour the heaviest worm loads in communities (11). A recent analysis of the association between hookworm and anaemia in school-age children in Zanzibar suggested that 25% of all anaemia cases, 35% of iron-deficiency anaemia cases, and 73% of severe anaemia cases could be attributed to hookworm infection (9). Hookworm has been shown to contribute more than schistosomiasis or malaria to iron-deficiency anaemia in school-age children (5, 12).
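Attributable-fraction estimates of the kind cited above are commonly computed with Levin's population attributable fraction formula; the sketch below illustrates the calculation, but the prevalence and relative-risk inputs are made-up placeholders, not values from this study or the Zanzibar analysis.

```python
def population_attributable_fraction(p_exposed: float, relative_risk: float) -> float:
    """Levin's formula: PAF = p(RR - 1) / (1 + p(RR - 1)).

    p_exposed is the prevalence of the exposure (e.g. hookworm infection);
    relative_risk is the risk of anaemia in the exposed group relative to
    the unexposed group.
    """
    excess = p_exposed * (relative_risk - 1.0)
    return excess / (1.0 + excess)

# Placeholder values for illustration only (not from the study):
paf = population_attributable_fraction(p_exposed=0.30, relative_risk=2.0)
print(f"share of cases attributable to the exposure: {paf:.0%}")  # -> 23%
```

When the exposure carries no excess risk (RR = 1), the formula correctly yields a fraction of zero.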
Infections with intestinal worms and Schistosoma spp. are widespread and common among school-age children in the United Republic of Tanzania. The Tanzania Partnership for Child Development (Ushirikiano wa Kumwendeleza Mtoto Tanzania) was established to undertake operations research on the delivery of mass treatment with anthelmintics to children through the school system. WHO recommends the mass treatment of all children with anthelmintic drugs when the prevalence of infection with intestinal worms or schistosomiasis is greater than 50% (13). A school health package, comprising albendazole treatment against intestinal worms, praziquantel treatment against Schistosoma haematobium, and health education aimed at preventing reinfection, was delivered to children in primary schools in three districts of Tanga Region. Treatment with the anthelmintics, delivered through the education system at a cost of about US$ 1 per child (14) to schools where both were needed, led to significantly improved growth and haemoglobin concentrations. We report the associations between hookworm, S. haematobium, and anaemia in this school population and attempt to evaluate the impact and cost-effectiveness of the first round of anthelmintic treatments against anaemia.
Study population and survey design
The programme of the Tanzania Partnership for Child Development is being implemented in all 352 government primary schools in Tanga, Muheza, and Korogwe Districts of Tanga Region on the country's north-east coast. The process of interpreting and evaluating the programme is described below (see also Fig. 1).
1. A survey of intestinal parasitic infection and blood in urine was undertaken in a small sample of schools throughout the region in May 1995 using the Kato–Katz method of stool examination (15). The prevalence of infection with intestinal worms was high enough to warrant mass treatment with albendazole in all schools, and there were foci of urinary schistosomiasis.
2. A questionnaire survey of self-reported urinary schistosomiasis, called kichocho in Kiswahili, was undertaken in August 1995 in all districts of Tanga Region to identify schools in which the prevalence of infection was estimated to be greater than 50% and thus warranted mass treatment (16). Schistosoma haematobium was the predominant species of schistosome in Tanga Region. S. mansoni has only been found in small foci and at a low prevalence (17).
3. A baseline survey was undertaken in March and April 1996 in 20 randomly selected schools. Schools were eligible for selection if the prevalence of self-reported infection with S. haematobium was estimated to be more than 50%, if there were more than 150 enrolled pupils, and if there were likely to be enough children to be studied in two age groups. Samples of equal numbers of male and female children in the age groups 8–9 years and 12–14 years were randomly selected for study in each school. A fresh urine specimen was collected from each child between 10:00 and 14:00. S. haematobium eggs were collected by filtering 10 ml of each urine sample through a polycarbonate membrane, and their concentration was expressed as eggs per 10 ml of urine. A fresh faecal sample was examined by the Kato–Katz technique and the concentration of eggs per g of faeces was determined for each nematode species. Since the eggs of the two hookworm species cannot be distinguished, the term hookworm is used for both. A venous blood sample was collected from a 50% subsample of children and the haemoglobin concentration was estimated using a portable haemoglobinometer (HemoCue, Sheffield, England). Height was measured with a stadiometer to an accuracy of 0.1 cm and weight was measured to an accuracy of 0.1 kg by means of electronic scales. A total of 466 children were examined for both parasites and their haemoglobin level.
4. In April 1996 all children in all schools in the three intervention districts received a single dose of 400 mg albendazole against intestinal helminths. Furthermore, all the children in the schools where the questionnaire survey had indicated the prevalence of S. haematobium to be above 50% received a single dose of praziquantel at a target dose of 40 mg/kg, determined on the basis of height. With a view to helping teachers to inform children about parasitic worms and nutrition, all schools were provided with flip-charts designed by the Tanzania Partnership for Child Development.
5. About six weeks after treatment, stool and urine specimens were collected from 413 children in 10 schools originally surveyed to check that the prevalence of infection with intestinal worms and S. haematobium was significantly lower than during the baseline survey. The children surveyed were not necessarily the same as those in the baseline survey.
6. Ten months after treatment, stool and urine specimens and blood samples were obtained in 10 schools from 429 children who were originally included in the baseline survey.
7. In July 1997, some 15 months after the treatments had been given, a follow-up survey was undertaken in 20 randomly selected schools and in randomly selected children. Care was taken to verify that all children examined in the intervention schools had been treated with both albendazole and praziquantel. For each child, a urine specimen was examined quantitatively for S. haematobium eggs, a stool sample was examined by the Kato–Katz method, and a fingerprick blood sample was obtained for the determination of the haemoglobin concentration. A total of 1121 children were examined for both parasites and haemoglobin level. Whereas the use of capillary blood can lead to the misclassification of anaemia status in individuals, only very small biases result in respect of the prevalence of anaemia in a population or sample of individuals (18).
Anaemia is defined as a haemoglobin level in blood of <110 g/l; the thresholds for moderate-to-severe anaemia and severe anaemia are 90 g/l and 70 g/l, respectively (19). In the present study the recently published age-specific definitions of anaemia (20) were also applied: a haemoglobin concentration of <115 g/l for children aged 5–11 years and of <120 g/l for children aged 12–14 years. Height-for-age z-scores were calculated using the NCHS reference values. Children with z-scores >2 standard deviations below the NCHS median height-for-age were classified as stunted. Differences in the prevalence of infection or anaemia between groups subdivided by age or sex were assessed using χ² tests.
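These cut-offs map directly onto a small classification helper. The sketch below is illustrative only: the function names and the label for the 90–109 g/l band are ours, not the study's, while the age-specific thresholds are those of the guidelines cited as reference (20).

```python
def anaemia_category(hb_g_per_l):
    """Classify anaemia severity from a haemoglobin value in g/l:
    <70 severe, <90 moderate-to-severe, <110 anaemic, else not anaemic."""
    if hb_g_per_l < 70:
        return "severe"
    if hb_g_per_l < 90:
        return "moderate-to-severe"
    if hb_g_per_l < 110:
        return "anaemic"       # 90-109 g/l band (label is ours)
    return "not anaemic"

def anaemic_age_specific(hb_g_per_l, age_years):
    """Age-specific definition (20): <115 g/l for ages 5-11,
    <120 g/l for ages 12-14."""
    threshold = 115 if age_years <= 11 else 120
    return hb_g_per_l < threshold
```

Note that a child classified as non-anaemic under the <110 g/l definition (for example, 117 g/l at age 13) can still be anaemic under the age-specific definition.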
Logistic regression models were developed to assess the effects of the following explanatory variables on the odds of being anaemic: age; sex; being stunted or not; the presence or absence of infection with hookworm, Ascaris lumbricoides, T. trichiura or S. haematobium; and different classes of intensity of infection with hookworm and S. haematobium. The presence of interactions between the main effects and confounding effects was determined, and variables were removed in a stepwise manner. Adjusted odds ratios were calculated (21). The statistical analyses were performed using SPSS (Release 7.0, SPSS, Chicago, IL, 1989–1995).
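The core of such a model can be sketched in a few lines of NumPy. This is an illustrative re-implementation (Newton–Raphson scoring), not the SPSS procedure the authors used, and the toy 2×2 data below are hypothetical; with a single binary covariate the fitted odds ratio must equal the sample odds ratio, which gives a convenient check.

```python
import numpy as np

def fit_logistic(X, y, n_iter=25):
    """Fit a logistic regression by Newton-Raphson; X must include an
    intercept column. Returns the coefficient vector beta, where
    exp(beta) gives the adjusted odds ratios."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(X @ beta)))   # fitted probabilities
        W = p * (1.0 - p)                        # IRLS weights
        H = X.T @ (X * W[:, None])               # observed information
        beta += np.linalg.solve(H, X.T @ (y - p))
    return beta

# Toy data: one binary exposure (e.g. heavy hookworm infection; counts are
# hypothetical). 2x2 cells a=30, b=10, c=20, d=40 give a crude odds ratio
# of (30*40)/(10*20) = 6.0, which the single-covariate fit reproduces.
x = np.array([1] * 40 + [0] * 60, dtype=float)
y = np.array([1] * 30 + [0] * 10 + [1] * 20 + [0] * 40, dtype=float)
X = np.column_stack([np.ones_like(x), x])
beta = fit_logistic(X, y)
odds_ratio = float(np.exp(beta[1]))
```

In the study's multivariable setting, additional dummy columns (age group, stunting, intensity classes) would simply be appended to `X`, and each exponentiated coefficient is then an adjusted odds ratio.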
Attributable fraction analysis provides an estimate of the proportion of cases of anaemia that can be attributed to a given parasitic infection for subjects who are infected (the infected attributable fraction) and for all subjects whether infected or not (the population attributable fraction), both of which were calculated using previously described methods (22). The analysis was performed by comparing uninfected and infected individuals, for hookworm and S. haematobium separately, and by comparing uninfected or lightly infected children with heavily infected individuals, using different thresholds: >750 eggs/g faeces, >1250 eggs/g, and >2500 eggs/g for hookworm; and >250 eggs/10 ml of urine and >500 eggs/10 ml for S. haematobium. Variances and 95% confidence intervals (CI) for the infected attributable fraction and the population attributable fraction were calculated using a previously described method (23).
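From a 2×2 table of infection status by anaemia status, both fractions follow from the prevalence ratio. The sketch below is a minimal rendering of this standard calculation (the cell counts shown are hypothetical, not the study's data):

```python
def attributable_fractions(a, b, c, d):
    """a: infected & anaemic, b: infected & not anaemic,
    c: uninfected & anaemic, d: uninfected & not anaemic.
    Returns (prevalence ratio, infected AF, population AF)."""
    p_inf, p_uninf = a / (a + b), c / (c + d)
    pr = p_inf / p_uninf                  # prevalence (risk) ratio
    af_infected = (pr - 1.0) / pr         # fraction of cases among the
                                          # infected attributable to infection
    paf = af_infected * a / (a + c)       # scaled by the proportion of all
                                          # anaemia cases that are infected
    return pr, af_infected, paf

# Hypothetical counts: anaemia in 60/100 infected vs 40/100 uninfected.
pr, af_i, paf = attributable_fractions(60, 40, 40, 60)
```

Here the prevalence ratio is 1.5, so one-third of anaemia among the infected is attributable to infection, and, since 60% of all cases are infected, 20% of anaemia in the whole sample is attributable to it.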
Since the logistic regression showed that age was an important predictor of anaemia, estimates weighted by age group were also calculated. A weighted sum technique (24) was used to obtain estimates of the prevalence ratio, the infected attributable fraction, and the population attributable fraction. This approach provided age-specific estimates weighted in proportion to the number of anaemia cases in each stratum, in accordance with the formula:

AF_I (weighted) = Σ_j (AF_Ij × N_cj) / Σ_j N_cj

where AF_Ij is the infected attributable fraction in age stratum j and N_cj is the frequency of anaemia cases in that stratum.

The weighted population attributable fraction was provided by the analogous expression:

AF_P (weighted) = Σ_j (AF_Pj × N_cj) / Σ_j N_cj
A previously described method was used to calculate the 95% CI (25).
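The weighted-sum estimate is thus simply a cases-weighted average of the stratum-specific fractions. A one-function sketch, using the paper's age-specific hookworm attributable fractions (0.16 and 0.05) with hypothetical stratum case counts:

```python
def weighted_af(af_by_stratum, cases_by_stratum):
    """Attributable fraction weighted by the number of anaemia cases (Ncj)
    in each stratum, as in the weighted-sum technique of reference (24)."""
    total_cases = sum(cases_by_stratum)
    return sum(af * n for af, n in zip(af_by_stratum, cases_by_stratum)) / total_cases

# Hookworm AFs by age group from the text; the case counts are hypothetical.
w = weighted_af([0.16, 0.05], [120, 80])
# (0.16*120 + 0.05*80) / 200 = 0.116
```

Because younger children contributed more anaemia cases, weighting pulls the combined estimate towards the younger stratum's attributable fraction, which is the behaviour described in the Results.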
The cost per child treated was US$ 0.23 for albendazole and US$ 0.79 for praziquantel, i.e. US$ 1.02 per child treated with both drugs (14). This represents the price of the drugs and all delivery costs including those of distribution, training, and prior screening for S. haematobium. A detailed breakdown of the cost calculations has been given previously (14). The total cost per child of US$ 1.02 comprises US$ 0.58 for praziquantel (cost, insurance and freight, CIF), US$ 0.20 for albendazole (CIF), US$ 0.10 for training, US$ 0.06 for drug distribution, US$ 0.05 for the schistosomiasis questionnaire and US$ 0.03 for drug clearance, movement, and repackaging. Albendazole was given to children in all 352 schools in the three districts, whereas praziquantel was only given to children in the 153 schools where the prevalence of infection was estimated to be greater than 50%. A total of 39 372 children were treated with both praziquantel and albendazole at a cost of US$ 40 159.44.
The effectiveness of treatment with both drugs was assessed as the number of anaemia cases prevented over 15 months, the approximate period between treatment and the second post-treatment survey. This was calculated as the difference between the proportions of children with anaemia at the baseline survey and the second survey multiplied by the number of children treated. The cost per anaemia case prevented was calculated for a range of thresholds (haemoglobin <70 g/l to <120 g/l).
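This arithmetic can be sketched directly. The prevalence values below are illustrative (the baseline figure matches the survey, but the follow-up value is a hypothetical round number, not the study's exact result):

```python
def cost_per_case_prevented(total_cost, prev_before, prev_after, n_treated):
    """Cases prevented = fall in anaemia prevalence x number treated;
    cost-effectiveness = total programme cost / cases prevented."""
    cases_prevented = (prev_before - prev_after) * n_treated
    return total_cost / cases_prevented, cases_prevented

# Illustrative inputs: US$ 1.02 per child for 39 372 children, with anaemia
# (Hb < 110 g/l) falling from 54% to a hypothetical 40% at follow-up.
cost, cases = cost_per_case_prevented(40159.44, 0.54, 0.40, 39372)
```

Because the total cost scales with the number treated, the cost per case prevented reduces to (cost per child) / (absolute fall in prevalence), here 1.02/0.14 ≈ US$ 7.29; rarer outcomes such as severe anaemia show much smaller prevalence falls and hence much higher costs per case, which is the non-linearity shown in Fig. 3.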
In the cost-effectiveness analysis it was assumed that the reduction in anaemia was caused solely by the intervention. This was supported by data from the three neighbouring districts of Handeni, Pangani and Lushoto, where anthelmintics were not given. In this comparison area the arithmetic mean haemoglobin level and the prevalence of anaemia (haemoglobin <110 g/l) remained unchanged over the evaluation period: 107.8 g/l and 55% anaemic at baseline (n = 443) compared with 108.5 g/l and 51% anaemic (n = 1024) at follow-up 15 months later. Since the evaluations in the comparison and intervention districts were made within a few weeks of each other, external factors such as seasonal changes in food availability were unlikely to have had a confounding effect. The data from these areas were also comparable in terms of malaria transmission: the inclusion criterion of schistosomiasis prevalence exceeding 50% meant that all highland schools were excluded.
Baseline parasitology and anaemia
The baseline survey of 466 children in Muheza, Tanga, and Korogwe districts suggested that 87% of children aged 8–14 years were infected with at least one of the helminth species examined (intestinal nematodes and S. haematobium). The most common parasites were hookworms (61%) and S. haematobium (59%), and many children (37%) were infected with both; in only 17% of children were both of these parasites absent. In approximately 20% of the children there were more than 750 hookworm eggs/g faeces and more than 250 S. haematobium eggs/10 ml of urine. A. lumbricoides and T. trichiura were present at low levels in 21% and 14% of children, respectively.
Both the prevalence of S. haematobium infection and that of heavy infection (egg counts >250/10 ml) were significantly higher in children aged 11–14 years than in those aged 8–10 years (63% vs. 50% (P < 0.005) and 21% vs. 9% (P < 0.01), respectively). In contrast, the prevalence of hookworm infection or heavy infection did not differ significantly between the age groups.
A total of 54% of the children had haemoglobin values <110 g/l, and 10% had moderate-to-severe anaemia (<90 g/l). Severe anaemia (<70 g/l) was identified in fewer than 2% of the children. Over two-thirds of the children were stunted, and the prevalence of anaemia was slightly higher in this group (56% vs. 50%, P = 0.20). Anaemia was strongly associated with the intensity of infection with hookworm and S. haematobium (Table 1). Logistic regression analysis revealed that age group, stunting and the intensity of infection with hookworm and S. haematobium were the important predictive variables for anaemia (haemoglobin <110 g/l) (Table 2). Infection with A. lumbricoides or T. trichiura was not related to anaemia.
Table 3 summarizes the attributable fraction analysis for the association between helminth infection and anaemia. The unweighted analysis suggests that 6% of anaemia cases (haemoglobin <110 g/l) could be attributed to hookworm. The attributable fraction was 0.16 for children aged 8–10 years and 0.05 for those aged 11–14 years. The unweighted population attributable fraction for S. haematobium was more than twice that for hookworm, at 15%. The estimated risk of anaemia associated with S. haematobium was 0.9 among children aged 8–10 years and 0.12 among those aged 11–14 years. The differences in the attributable fractions of the two infections by age were caused by the differences in the prevalence of infection and anaemia between the age groups. In the case of S. haematobium, the prevalence was higher among older children, whereas for hookworm infection there was no difference between the age groups. The prevalence of anaemia, in contrast, was higher among younger children. As a result, the weighted attributable fraction estimates (both at 10%) were higher than the crude estimates for hookworm infection and lower for S. haematobium (Table 3). The proportions of moderate-to-severe anaemia cases (haemoglobin <90 g/l) attributable to hookworm and schistosomiasis were higher, at 14% and 25%, respectively.
Impact of treatment
Fig. 2 illustrates the changes in hookworm and S. haematobium infection at intervals of 6 weeks, 10 months, and 15 months after the baseline survey; treatment was administered just after this survey. At 6 weeks the prevalences of hookworm and S. haematobium had been reduced by 82% and 94%, respectively (Fig. 2). The prevalence of heavy infection was reduced by more than 97%, and the mean intensities of infection with hookworm and S. haematobium fell from 738 eggs/g to 18 eggs/g and from 194 eggs/10 ml to 2 eggs/10 ml, respectively. The surveys at 10 months and 15 months after treatment suggested a steady increase in the prevalence of infection, particularly that of hookworm. At 15 months the prevalence of hookworm (49%) had increased to almost the pretreatment value, whereas that of S. haematobium (24%) was less than half the pretreatment value. The prevalence of heavy infection for both species remained below 50% of the pretreatment level (Fig. 2).
The changes in anaemia after treatment are summarized in Table 4. An improvement in anaemic status, as defined by mean haemoglobin and a haemoglobin level <110 g/l, only became evident 15 months after treatment, when the administration of albendazole and praziquantel appeared to have reduced the prevalence of anaemia by a quarter and the prevalence of moderate-to-severe anaemia by almost 50%. The difference in the prevalence of anaemia between the baseline survey and follow-up 15 months later was used to estimate the proportion of anaemia cases prevented. Among the 39 372 children treated with both albendazole and praziquantel, 281 cases of severe anaemia (haemoglobin <70 g/l) were prevented; for moderate-to-severe anaemia (<90 g/l) and anaemia (<110 g/l) the numbers of cases prevented were 1445 and 5661, respectively. The cost of preventing these cases was US$ 40 159.44, the total cost of mass treatment with albendazole and praziquantel. The costs per anaemia case prevented are illustrated in Fig. 3 for a range of haemoglobin cut-offs (70–120 g/l at intervals of 5 g/l). The relationship was markedly non-linear: the cost per anaemia case prevented was high and variable at low haemoglobin thresholds but relatively stable (<US$ 10) at thresholds above 100 g/l. Using a haemoglobin threshold of <110 g/l for anaemia corresponds to a cost per case prevented of US$ 7.23, increasing to US$ 145.71 for a case of severe anaemia prevented (<70 g/l).
The present study indicates that infection with hookworm and schistosomiasis could be responsible for 6% and 15%, respectively, of anaemia cases in school-age children (10% for both if weighted by age) in an area where the prevalence of both infections was estimated to be greater than 50%. If anthelmintic drugs were to reduce infection by 100%, concurrent relative reductions in the prevalence of anaemia would be predicted. In fact, treatment with albendazole and praziquantel reduced the prevalence of anaemia by 26% up to 15 months after deworming. This was achieved without iron supplementation. Combining these data with those on the costs of delivery and treatment suggests that the cost per anaemia case prevented over 15 months could be US$ 7.43 if the school system is used to deliver anthelmintics. Furthermore, these effects are likely to last longer than 15 months, since the intensities of hookworm and schistosomiasis infection, important factors in the causation of anaemia, had still not recovered to pretreatment values.
In order to place these results in context it is necessary to make similar calculations on the cost per anaemia case prevented for other interventions that improve anaemia status. The cost per moderate-to-severe anaemia case (haemoglobin <90 g/l) prevented during one year of mebendazole treatment (three times a year) of 30 000 schoolchildren on Zanzibar was US$ 3.57; for severe anaemia (haemoglobin <70 g/l) the corresponding cost was US$ 16.30 (19). This is approximately an eighth of the cost estimated in the present analysis for a single treatment with praziquantel and albendazole. The difference arises because of the very low cost of mebendazole, estimated to be US$ 0.08 per child per year plus US$ 0.07 for delivery. The combined treatment with albendazole and praziquantel is more expensive but is of comparable effectiveness. Assuming an equivalent population of 30 000, a single treatment with albendazole and praziquantel would reduce by 1080 the number of moderate-to-severe anaemia cases in the present study in the United Republic of Tanzania over 15 months, compared with 1208 cases prevented in Zanzibar as a result of using mebendazole three times in 12 months. These analyses are not directly comparable since there are variations in the time frame for effectiveness (longer in the present study), the levels of infection and anaemia at baseline, and the methodologies employed to estimate cases prevented (in the Zanzibar study, extrapolation was performed from incidences in control and treatment groups over 6 months). However, the analysis provides some indication of the potential effectiveness of employing anthelmintics in reducing anaemia in schoolchildren.
Improvements in haemoglobin levels were not detected until at least 10 months after anthelmintic treatment. It had previously been concluded that, whereas iron supplementation can lead to rapid improvements in haemoglobin levels, the effects of deworming may appear up to 15–20 months after treatment (6). A more detailed follow-up survey including other indicators of anaemia, such as ferritin, may shed more light on this apparent delay in haemoglobin recovery.
The most traditional approach to improving iron balance is iron supplementation. However, we found no studies in the literature that presented the cost per anaemia case prevented. A recent economic analysis assessed the cost-effectiveness, expressed as cost per averted disability-adjusted life year, of oral iron supplementation in preventing severe anaemia (packed cell volume <25%) among infants in the United Republic of Tanzania (26). This outcome measure permits comparison across disease conditions, but requires many assumptions in translating cases into years of disability and death. Unfortunately, the costs in terms of cases prevented were not indicated, but it is possible to get some indication of them by referring back to the controlled trial on which the analysis was based (27). If, for example, 40% of infants have severe anaemia, and iron supplementation prevents 30% of severe cases in the first year of life, the cost per severe case prevented would be US$ 14.77 at 1996 prices, assuming an intervention cost of US$ 4121 for 2322 infants (26). In an operational setting, however, effectiveness may be markedly diminished because of the failure of patients to adhere to a therapy that involves multiple treatments. Furthermore, although the cost of iron supplements may be low, i.e. <US$ 0.10 per year for a school-age child, the cost of delivery may be high unless existing channels such as those of the health and school systems are utilized. The cost–benefits of iron supplementation in adults were assessed in terms of increased productivity for improvement in negative iron balances (28). For Indonesia the resulting cost–benefit ratio was around 6; in Mexico it was almost 10 because of substantially higher wage rates. This suggests that iron supplementation would represent an efficient use of resources.
Although increasing the iron status of schoolchildren would not directly translate into increased worker productivity, the improvement of the anaemia status of this age group is likely to have a positive effect on schooling. For example, cognitive skills might be increased (1), which would lead to better employment prospects and wage-earning capacity in adulthood (3). Stunting affects school performance adversely (29–31); adults of relatively short stature are less productive than taller adults (32, 33). Growth places demands on iron levels, and thus these two health outcomes are intrinsically related. With regard to the deworming of schoolchildren, the subsequent improvements in relation to both anaemia and stunting are particularly important (19, 34, 35). In addition, there are direct associations between intestinal nematode infection and cognitive achievement. Expressing effectiveness as anaemia cases prevented clearly underestimates the full potential benefits of deworming on the mental and physical development of schoolchildren.
For ethical reasons, different children were sampled in each survey of the present study. Consequently, the reported changes in anaemia are only trends. A cohort design would have been more precise. Furthermore, the costs did not include that of health education materials. Two flip-charts were provided for each school at a cost of US$ 17 per chart, increasing the cost per anaemia case prevented to US$ 8.01. As anecdotal evidence suggests that the teachers did not use the charts, however, it is unlikely that they helped to reduce the prevalence of anaemia.
We have shown that school-based deworming programmes can favourably influence the anaemia status of children at a cost of US$ 6–8 per anaemia case prevented over 15 months. The regular deworming of schoolchildren should therefore be given serious consideration as an approach to anaemia control and should be assessed in relation to iron supplementation and other traditional ways of improving iron status.
We gratefully acknowledge the support of the United Nations Development Programme, the Rockefeller Foundation, WHO, the Department for International Development of the United Kingdom, the United Nations Children's Fund, the World Bank, the Edna McConnell Clark Foundation, the James S. McDonnell Foundation, and the Wellcome Trust. The programme of the Tanzania Partnership for Child Development is supported by the Edna McConnell Clark Foundation. H.L.G. is in receipt of a Wellcome Trust Research Career Development Fellowship (055100), and S.B. a Wellcome Trust Prize Studentship. We thank Tom Smith for advice.
Conflicts of interest: none declared.
Evaluation of the efficacy of school-based anthelmintic treatments against anaemia in children in the United Republic of Tanzania

OBJECTIVE: To determine the impact of deworming on anaemia as part of a large-scale school-based anthelmintic treatment programme in Tanga Region, United Republic of Tanzania.

METHODS: Both the reduction in the prevalence of anaemia and the cost per case prevented were considered. Cross-sectional surveys included parasitological examination and assessment of anaemia among schoolchildren before deworming and 10 and 15 months afterwards.

FINDINGS: Baseline surveys indicated that the prevalence of anaemia (haemoglobin <110 g/l) was high (54%) among schoolchildren, particularly among those with heavy hookworm and schistosome infections. Attributable-fraction analysis suggested that hookworm and schistosomiasis were responsible for 6% and 15% of anaemia cases, respectively. Fifteen months after deworming with albendazole and praziquantel, the prevalence of anaemia had been reduced by a quarter and that of moderate-to-severe anaemia (haemoglobin <90 g/l) by almost half. Delivery of these anthelmintics through the school system was achieved at a relatively low cost of US$ 1 per child treated. The cost per anaemia case prevented by deworming schoolchildren was in the range US$ 6–8, depending on the haemoglobin threshold employed.

CONCLUSION: These results suggest that deworming programmes should be included in public health strategies against anaemia in school-age children where the prevalences of hookworm and schistosomiasis are high.
1. Nokes C, van den Bosch C, Bundy DAP. The effects of iron deficiency and anaemia on mental and motor performance, educational achievement, and behaviour in children. An annotated bibliography. Washington, DC, International Nutritional Anaemia Consultative Group, 1998.
2. Pollitt E. Effects of a diet deficient in iron on the growth and development of preschool and school-age children. Food and Nutrition Bulletin, 1991, 13: 110–118.
3. Guyatt HL. Do intestinal worms affect productivity in adulthood? Parasitology Today, 2000, 16: 153–158.
4. Stoltzfus RJ et al. Hemoquant determination of hookworm-related blood loss and its role in iron deficiency in African children. American Journal of Tropical Medicine and Hygiene, 1996, 55: 399–404.
5. Stephenson LS et al. Relationships of Schistosoma haematobium, hookworm and malaria infections and metrifonate treatment to haemoglobin levels in Kenyan school children. American Journal of Tropical Medicine and Hygiene, 1985, 34: 519–528.
6. Roche M, Layrisse M. The nature and causes of hookworm anaemia. American Journal of Tropical Medicine and Hygiene, 1966, 15: 1031–1110.
7. Tatala S, Svanberg U, Mduma B. Low dietary iron availability is a major cause of anaemia: a nutritional survey in Lindi District of Tanzania. American Journal of Clinical Nutrition, 1998, 68: 171–178.
8. Robertson LJ et al. Haemoglobin concentrations and concomitant infections of hookworm and Trichuris trichiura in Panamanian primary school children. Transactions of the Royal Society of Tropical Medicine and Hygiene, 1992, 86: 654–656.
9. Stoltzfus RJ et al. Epidemiology of iron deficiency in Zanzibari school children: the importance of hookworms. American Journal of Clinical Nutrition, 1997, 65: 153–159.
10. Olsen A et al. The contribution of hookworm and other parasitic infection to haemoglobin and iron status among children and adults in western Kenya. Transactions of the Royal Society of Tropical Medicine and Hygiene, 1998, 92: 643–649.
11. Bundy DAP. Population ecology of intestinal helminth infections in human communities. Philosophical Transactions of the Royal Society of London: B, 1988, 321: 405–420.
12. Stephenson LS. Impact of helminth infection in human nutrition. New York, Taylor & Francis, 1987.
13. Guidelines for the evaluation of soil-transmitted helminthiasis and schistosomiasis at the community level. Geneva, World Health Organization, 1998 (unpublished document WHO/CTD/SIP/98.1).
14. Partnership for Child Development. Cost of school-based drug treatment in Tanzania. Health Policy and Planning, 1998, 13: 384–396.
15. Basic laboratory methods in medical parasitology. Geneva, World Health Organization, 1991.
16. Health and education of the school-age child in Tanga Region. Dar es Salaam, UKUMTA, 1997 (Report Series No. 8).
17. Partnership for Child Development. The health of school-age children: experience from school health programmes in Ghana and Tanzania. Transactions of the Royal Society of Tropical Medicine and Hygiene, 1998, 92: 254–261.
18. Morris SS et al. Precision, accuracy, and reliability of haemoglobin assessment with use of capillary blood. American Journal of Clinical Nutrition, 1999, 69: 1243–1248.
19. Stoltzfus R et al. Effects of the Zanzibar school-based deworming program on iron status of children. American Journal of Clinical Nutrition, 1998, 68: 179–186.
20. Stoltzfus RJ, Dreyfuss ML. Guidelines for the use of iron supplements to prevent and treat iron deficiency anaemia. Washington, DC, ILSI Press, 1998.
21. Altman DG. Practical statistics for medical research. London, Chapman & Hall, 1992: 351–355.
22. Booth M. The application of attributable risk analysis in helminth epidemiology. Parasitology Today, 1998, 14: 497450.
23. Greenland S. Bias in methods for deriving standardised morbidity ratios and attributable fraction estimates. Statistics in Medicine, 1984, 3: 131–141.
24. Walter SD. The estimation and interpretation of attributable risk in health research. Biometrics, 1976, 32: 829–849.
25. Walter SD. Calculation of attributable risks from epidemiologic data. International Journal of Epidemiology, 1978, 7: 175–182.
26. Alonso Gonzalez M et al. Cost-effectiveness of iron supplementation and malaria chemoprophylaxis in the prevention of anaemia and malaria among Tanzanian infants. Bulletin of the World Health Organization, 2000, 78: 97–107.
27. Menendez C et al. Randomised placebo-controlled trial of iron supplementation and malaria chemoprophylaxis for prevention of severe anaemia and malaria in Tanzanian infants. Lancet, 1997, 350: 844–850.
28. Levin HM. A benefit-cost analysis of nutritional programs for anaemia reduction. Research Observer, 1986, 1: 219–245.
29. Partnership for Child Development. Short stature and delayed enrollment in primary school: studies from two African countries. Social Science and Medicine, 1999, 48: 675–682.
30. Jamison D. Child malnutrition and school performance in China. Journal of Development Economics, 1986, 20: 299–309.
31. Moock P, Leslie J. Childhood malnutrition and schooling in the Terai Region of Nepal. Journal of Development Economics, 1986, 20: 33–52.
32. Bundy DAP. Health and early child development. In: Young ME, ed. Early child development. Washington, DC, World Bank, 1997.
33. Spurr GB. Marginal malnutrition in childhood: implications for adult work capacity and productivity. In: Collins KJ, Roberts DF, eds. Capacity for work in the tropics. Cambridge, Cambridge University Press, 1988 (Symposium 26 of the Society for the Study of Human Biology).
34. Stephenson LS et al. Treatment with a single dose of albendazole improves growth of Kenyan schoolchildren with hookworm, Trichuris trichiura, and Ascaris lumbricoides infections. American Journal of Tropical Medicine and Hygiene, 1989, 41: 78–87.
35. Stoltzfus RJ et al. School-based deworming program yields small improvement in growth of Zanzibari school children after one year. Journal of Nutrition, 1997, 127: 21872193. [ Links ]
1Wellcome Trust Centre for the Epidemiology of Infectious Disease, Department of Zoology, University of Oxford, South Parks Road, Oxford, OX1 3FY, England. Correspondence should be addressed to Dr Guyatt (Hguyatt@wtnairobi.mimcom.net).
2Tanzania Partnership for Child Development, Ocean Road Hospital, PO Box 9383, Dar es Salaam, United Republic of Tanzania.
3Scientific Coordinating Centre of the Partnership for Child Development, University of Oxford, South Parks Road, Oxford OX1 3FY, England.
Ref. No. 00-0523 | <urn:uuid:3736e944-c17a-44e2-b027-3cba3e971608> | CC-MAIN-2016-26 | http://www.scielosp.org/scielo.php?script=sci_arttext&pid=S0042-96862001000800004&lng=en&nrm=iso&tlng=en | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783394987.40/warc/CC-MAIN-20160624154954-00026-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.865213 | 9,425 | 2.609375 | 3 |
Dresden, Hamburg. Researchers at the Medical Faculty of the Technische Universität Dresden (TUD) and the Heinrich Pette Institute (HPI), Leibniz Institute for Experimental Virology, have succeeded in developing a designer recombinase (Brec1) that is capable of specifically removing the provirus from cells infected with most primary HIV-1 isolates. The results have now been published in the renowned journal Nature Biotechnology.
With 37 million HIV-positive people and more than two million new infections annually, HIV remains a major global health challenge. Although enormous advances have been made in HIV treatment, a complete cure is still not possible: medication can keep replication of the virus in check, but the provirus remains present in the cells it has infected.
A team of researchers from the Department of Medical Systems Biology at the TUD and the research unit Antiviral Strategies at the HPI in Hamburg employed directed molecular evolution to generate a designer recombinase, Brec1, which can precisely remove the provirus of the majority (>90%) of clinical HIV-1 isolates found in humans.
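Directed molecular evolution is, at its core, an iterated mutate-and-select loop: variants are generated at random, the best performers are kept, and the cycle repeats until the desired activity emerges. The toy sketch below illustrates only that principle on a short string of DNA letters; the target site, scoring function, and all parameters are invented for illustration and are not the actual Brec1 screening protocol.

```python
import random

# Toy directed-evolution loop (hypothetical target and parameters).
TARGET = "GATCCTGA"   # invented "recognition site" to evolve towards
ALPHABET = "ACGT"

def fitness(seq: str) -> int:
    # Score a sequence by how many positions match the target site.
    return sum(a == b for a, b in zip(seq, TARGET))

def mutate(seq: str, rate: float = 0.2) -> str:
    # Randomly substitute letters at the given per-position rate.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in seq)

def evolve(rounds: int = 50, pop_size: int = 40) -> str:
    # Start from a random population, then alternate selection and mutation.
    random.seed(1)
    pop = ["".join(random.choice(ALPHABET) for _ in TARGET)
           for _ in range(pop_size)]
    for _ in range(rounds):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 4]            # selection step
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return max(pop, key=fitness)

best = evolve()
```

In the laboratory the same logic plays out over protein variants and cellular selection rounds rather than strings, but the enrichment principle is the same.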
The team has now demonstrated for the first time that the approach works on cells isolated directly from HIV-1 patients. Importantly, the antiviral effect was achieved without measurable cytotoxic or genotoxic side effects. Based on these findings, Brec1 represents a promising candidate for improved HIV therapies.
"The generation of molecular scalpels, such as the Brec1 recombinase, will change medical practice. Not only HIV patients will likely benefit from this development, but also many other patients with genetically caused diseases. We are about to witness the beginning of the genome surgery era", predicts the head of the Dresden group, Prof. Frank Buchholz.
The Brec1 recombinase was developed in a close collaboration between the Department of Medical Systems Biology (Prof. Frank Buchholz) at TU Dresden and the Heinrich Pette Institute, Leibniz Institute for Experimental Virology (Prof. Joachim Hauber).
The results were published in the journal Nature Biotechnology:
Janet Karpinski, Ilona Hauber, Jan Chemnitz, Carola Schäfer, Maciej Paszkowski-Rogacz, Deboyoti Chakraborty, Niklas Beschorner, Helga Hofmann-Sieber, Ulrike C. Lange, Adam Grundhoff, Karl Hackmann, Evelin Schrock, Josephine Abi-Ghanem, M. Teresa Pisabarro, Vineeth Surendranath, Axel Schambach, Christoph Lindner, Jan van Lunzen, Joachim Hauber & Frank Buchholz (2016). Directed evolution of a recombinase that excises the provirus of most HIV-1 primary isolates with high specificity. Nature Biotechnology 2016 Feb 22, advance online publication. doi: 10.1038/nbt.3467. | <urn:uuid:aa1fc22b-f57c-4290-9059-577db0b3a969> | CC-MAIN-2016-26 | http://www.eurekalert.org/pub_releases/2016-02/tud-rbt022216.php | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396887.54/warc/CC-MAIN-20160624154956-00106-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.865732 | 628 | 2.6875 | 3 |
TIFATA (τὰ Τιφατηνὰ ὄρη, Dio Cass.: Monte di Maddaloni), a mountain ridge on the borders of Campania and Samnium, only about a mile from the city of Capua. It is one of the last outlying masses of the Apennines, and is a long, narrow ridge of no great elevation, but above 12 miles in length from E. to W., presenting a bold and steep mountain front towards the Campanian plain, upon which it looks directly down.
The name was derived according to Festus from the woods of evergreen oak with which it was covered, "Tifata" being equivalent to "iliceta," though whether it was an Oscan or old Latin word we are not told. (Fest. s. v. Tifata.) It is first mentioned during the war between the Samnites and Campanians which immediately preceded the First Samnite War. On that occasion the Samnites in the first instance occupied the ridge itself with a strong force, and afterwards drew out their main army into the plain below, where they soon defeated the Campanians in a pitched battle. (Liv. 7.29.) Livy calls it on this occasion "Tifata, imminentes Capuae colles," and elsewhere "montem imminentem Capuae" (26.5), which well describes its character and situation.
It was this opportune position with regard to Capua and the surrounding plain that caused it to be selected by Hannibal as a post where he established his camp in B.C. 215, and from whence he long carried on his operations against the various cities of Campania. (Id. 23.36, 37, 39, 43, 26.5; Sil. Ital. 12.487.) At a later period it was in the plain at the foot of Tifata that Sulla defeated the Marian general Norbanus, B.C. 83; and in gratitude for this victory he consecrated a considerable tract of territory to Diana, the tutelary goddess of the mountain. (Vell. Pat. 2.25.) We hence learn that that divinity had a celebrated temple on Tifata, and the "Dianae Tifatinae fanum" is noticed also in inscriptions found at Capua. From one of these we learn that the consecrated territory was again assigned to the goddess by Vespasian. (Orell. Inscr.)
As the Tabula marks a station "Ad Dianae" near the W. extremity of the ridge, it is probable that the temple was situated in that neighbourhood. (Tab. Peut.) From the same authority we learn that Jupiter, who was worshipped on so many of the highest points of the Apennines, had a temple also on Tifata, to which it gives the name of Jovis Tifatinus. It is placed in the Tabula at the E. extremity of the ridge. (Tab. Peut.) Again in B.C. 48 the fastnesses of this mountain ridge afforded a shelter to Milo when driven from Capua. (D. C. 42.25.)
This is the last time its name is mentioned in history, and it is not noticed by any of the geographers: in the middle ages the name seems to have been wholly forgotten; and the mountain is now called from a neighbouring village the Monte di Maddaloni.
But the descriptions of Livy and Silius Italicus leave no doubt of the identification.
It is indeed, from its proximity to Capua and the abruptness with which it rises from the plain, one of the most striking natural features of this part of Campania. | <urn:uuid:e5de00a9-354f-4d7d-988d-d3f1c28a517b> | CC-MAIN-2016-26 | http://www.perseus.tufts.edu/hopper/text?doc=Perseus:text:1999.04.0064:alphabetic%20letter=T:entry%20group=10:entry=tifata-geo&toc=Perseus%3Atext%3A1999.04.0064%3Aalphabetic+letter%3DT%3Aentry+group%3D5 | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398873.39/warc/CC-MAIN-20160624154958-00009-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.962601 | 810 | 3.40625 | 3 |
I see the letters CGI on many web pages and in many URLs. What is CGI?
CGI is short for Common Gateway Interface. Basically, a CGI script is a program that runs on a web server. These scripts can do anything from counting hits on a web page to processing forms.
We use several types of these on our web site. In fact, when you signed up for this newsletter, you used a CGI script. | <urn:uuid:ec52cf9b-c6ba-4a63-9c2e-96e1b3cd7649> | CC-MAIN-2016-26 | http://www.worldstart.com/what-is-cgi/ | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783394937.4/warc/CC-MAIN-20160624154954-00044-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.930844 | 89 | 2.703125 | 3 |
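To give a feel for how simple the interface is, here is a minimal sketch in Python of the response a CGI script produces. A CGI program's entire job is to write an HTTP response to standard output — header lines, a blank line, then the document body — and the web server relays that output to the visitor's browser. (This assumes a server configured to execute scripts, e.g. from a cgi-bin directory; the page content here is invented.)

```python
# Build a minimal CGI-style HTTP response: a Content-Type header,
# a blank-line separator, then the HTML body.
def cgi_response(body: str, content_type: str = "text/html") -> str:
    return f"Content-Type: {content_type}\r\n\r\n{body}"

if __name__ == "__main__":
    # The web server captures this stdout and sends it to the browser.
    print(cgi_response("<html><body><h1>Hello from CGI!</h1></body></html>"))
```

A hit counter or form handler works the same way; it just does some extra work (reading form data, updating a file) before printing the response.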